on 12-07-2012 1:51 PM
Hi folks,
Suppose I have a query that returns a potentially large result set, such as
SELECT * FROM BUT000;
or
SELECT
OPBEL,
VKONT,
VTREF,
HVORG,
TVORG,
SUM ( BETRH ) AS BETRH
FROM DFKKOP
GROUP BY OPBEL, VKONT, VTREF, HVORG, TVORG;
I don't want to overwhelm the client (front-end, service consumer, ABAP application using a secondary database connection, whatever) with the entire result set, but instead retrieve and display it in chunks of 100. I want to allow forward and backward navigation in the result set: show me the first 100, then the second 100, then the third 100, then back to the second 100, and so on.
I've found two things that come close to what I want:
1) SELECT ... LIMIT ... OFFSET
SELECT * FROM BUT000 -- 1st 100
LIMIT 100;
SELECT * FROM BUT000 -- 2nd 100
LIMIT 100 OFFSET 100;
SELECT * FROM BUT000 -- 3rd 100
LIMIT 100 OFFSET 200;
SELECT * FROM BUT000 -- 2nd 100 again
LIMIT 100 OFFSET 100;
Disadvantage: Each time I want to retrieve a package, the query is executed again.
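To make that disadvantage concrete, the LIMIT/OFFSET pattern can be sketched with Python's built-in sqlite3 module (the table name and data are made up for illustration; HANA is not involved here). Every page, forward or backward, is a fresh execution of the query:

```python
import sqlite3

# In-memory stand-in for the database; BUT000 here is just an illustrative name.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE but000 (partner INTEGER PRIMARY KEY, name TEXT)")
con.executemany("INSERT INTO but000 VALUES (?, ?)",
                [(i, f"partner_{i}") for i in range(1, 351)])

def fetch_page(page_no, page_size=100):
    """Re-executes the query for every page - the disadvantage noted above."""
    cur = con.execute(
        "SELECT partner, name FROM but000 ORDER BY partner LIMIT ? OFFSET ?",
        (page_size, page_no * page_size))
    return cur.fetchall()

first  = fetch_page(0)   # rows 1..100
second = fetch_page(1)   # rows 101..200
again  = fetch_page(1)   # "backward" navigation = re-run with the old offset
```

Free positioning works (any offset, any direction), but each call costs a full query execution.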
2) ADBC package handling
DATA:
  lr_sql     TYPE REF TO cl_sql_statement,
  lr_result  TYPE REF TO cl_sql_result_set,
  lt_results TYPE STANDARD TABLE OF but000,  " target table was missing in my first draft
  lr_results TYPE REF TO data.

CREATE OBJECT lr_sql
  EXPORTING
    con_ref = cl_sql_connection=>get_connection( 'SECONDARY' ).

lr_result = lr_sql->execute_query( |SELECT * FROM SAPDAT.BUT000| ).

GET REFERENCE OF lt_results INTO lr_results.
lr_result->set_param_table( lr_results ).

DO 3 TIMES.
  CLEAR lt_results.
  lr_result->next_package( 100 ).
  " process the current package in lt_results here
ENDDO.

lr_result->close( ).
Disadvantage: can only navigate forward to the next package, no backward navigation or free positioning available.
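The forward-only behaviour is the same as fetching packages from any open cursor. A minimal sketch with Python's sqlite3 (illustrative stand-in, not ADBC itself): the query runs once, and `fetchmany` pulls successive packages from the open result set, analogous to `next_package( )` — and just as incapable of going back:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE but000 (partner INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO but000 VALUES (?)", [(i,) for i in range(1, 251)])

# One execution; packages are then consumed from the open result set.
cur = con.execute("SELECT partner FROM but000 ORDER BY partner")
pkg1 = cur.fetchmany(100)   # rows 1..100
pkg2 = cur.fetchmany(100)   # rows 101..200
pkg3 = cur.fetchmany(100)   # only 50 rows remain
cur.close()
```

Cheap per package (no re-execution), but once a package is consumed there is no way to reposition the cursor backwards.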
I'd be happy with solutions that solve the problem at the HANA/SQL Script level or at the ABAP level. Who can help?
Thanks,
Thorsten
Hi Thorsten,
If you really want to avoid re-executing the statement, I see two options:
1) you fill a temporary result table and run the pagination query just on that.
or
2) you hope that the result set cache works for your query (haven't tested this).
So all I have is the following example for option 1):
create global temporary table myresults as
(select mandt, versi, belnr, sum(VV020001)
from "LARS"."CE2IDEA_COPY12"
group by mandt, versi, belnr)
WITH NO DATA;
Now (at the latest) set AUTOCOMMIT to OFF!!
select * from myresults
Nothing in there (hopefully).
Let's fill the table:
insert into myresults (
select mandt, versi, belnr, sum(VV020001)
from "LARS"."CE2IDEA_COPY12"
group by mandt, versi, belnr)
Statement 'insert into myresults ( select mandt, versi, belnr, sum(VV020001) from "LARS"."CE2IDEA_COPY12" ...' successfully executed in 490 ms 87 µs Started: 2012-12-07 15:33:12 (server processing time: 448 ms 752 µs) - Rows Affected: 2845
Awesome, right? Thanks to the temporary table, there is no persistence involved here, no redo logging, no nothing.
Let's check the data (let's run the stuff in parallel, just for the fun of it):
select * from myresults limit 50 offset 0;
select * from myresults limit 50 offset 50;
select * from myresults limit 50 offset 100;
select * from myresults limit 50 offset 150;
Statement 'select * from myresults limit 50 offset 0' successfully executed in 23 ms 680 µs Started: 2012-12-07 15:33:50 (server processing time: 0 ms 196 µs)
Fetched 50 row(s) in 15 ms 709 µs (server processing time: 0 ms 11 µs)
Statement 'select * from myresults limit 50 offset 50' successfully executed in 28 ms 580 µs Started: 2012-12-07 15:33:50 (server processing time: 0 ms 297 µs)
Fetched 50 row(s) in 16 ms 21 µs (server processing time: 0 ms 11 µs)
Statement 'select * from myresults limit 50 offset 100' successfully executed in 23 ms 117 µs Started: 2012-12-07 15:33:50 (server processing time: 0 ms 293 µs)
Fetched 50 row(s) in 15 ms 491 µs (server processing time: 0 ms 10 µs)
Statement 'select * from myresults limit 50 offset 150' successfully executed in 23 ms 417 µs Started: 2012-12-07 15:33:51 (server processing time: 0 ms 338 µs)
Duration of 4 statements: 98 ms
Fetched 50 row(s) in 15 ms 938 µs (server processing time: 0 ms 11 µs)
Statement 'commit' successfully executed
Of course, if you want to save a bit more parsing time and effort and SQL plan cache memory, you go and use parameters for the LIMIT clause:
select * from myresults limit ? offset ?;
Works nicely.
After you're done with scrolling through the result set, you simply disconnect (brutally :-D) or you truncate the myresults table.
To me this is not too bad for a pagination over an aggregated result set...
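The whole workflow — aggregate once into a session-local temporary table, then paginate cheaply with parameters — can be sketched end-to-end with Python's sqlite3 (table and column names are made-up stand-ins for the CE2 example above; SQLite's TEMP tables play the role of HANA's global temporary tables here):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ce2 (mandt TEXT, versi TEXT, belnr INTEGER, vv REAL)")
# 1000 rows spread over 200 belnr groups -> each group sums to 5.0
con.executemany("INSERT INTO ce2 VALUES ('100', '000', ?, ?)",
                [(i % 200, 1.0) for i in range(1000)])

# 1) Materialise the aggregated result ONCE into a session-local temp table.
con.execute("""CREATE TEMP TABLE myresults AS
               SELECT mandt, versi, belnr, SUM(vv) AS betrh
               FROM ce2 GROUP BY mandt, versi, belnr""")

# 2) Paginate with parameters - no re-aggregation per page, any direction.
page = con.execute("SELECT * FROM myresults ORDER BY belnr LIMIT ? OFFSET ?",
                   (50, 50)).fetchall()

# 3) Clean up - or simply close the connection and let the temp table vanish.
con.execute("DROP TABLE myresults")
```

The expensive aggregation is paid once; each page afterwards is a trivial scan of the pre-computed result.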
Cheers, Lars
AFAIK there is no specific command to change the AUTOCOMMIT setting.
It's a connection setting, and when using the NetWeaver stack AUTOCOMMIT is usually deactivated anyhow, to avoid interference with the ABAP transaction handling.
In hdbsql you can script the change by using \a, but I'm not aware of any "ALTER SESSION ..." type of command for this.
For the sake of this example I was actually wrong - you could also leave the autocommit mode as it is. The global temporary table (GTT) and its content do survive the end of a transaction - but the content will be removed with the end of the session/connection.
cheers,
Lars
That's not what I wrote.
When running SQL from a NetWeaver work process you always work within the business transaction scheme.
If you need to commit in between due to your business logic, by all means: do so!
Sometimes you will even get an implicit commit, e.g. due to work process dispatching.
To close the circuit here: from ABAP you don't have to care about setting AUTOCOMMIT off, because you are always running in that mode.
AUTOCOMMIT ON is a setting that really only makes sense for interactive SQL user scenarios, where the wait/think time of the users could potentially be *very* long (e.g. a coffee or lunch break). And you don't want to lose work there or have data sets locked for an unnecessarily long time.
cheers, Lars
Hello Lars,
We have a similar requirement where we need to reduce the memory used by the application when the data volume in the HANA table is huge. I also had the idea of using LIMIT and OFFSET, but the downside is that we need to execute more queries against the DB, which might slow it down.
Then I saw your answer here, where you suggest the same thing but on temporary tables. Can you explain a little how this is beneficial compared to fetching from the actual table? The number of DB calls remains the same either way. I know that the data in the temporary table will be deleted at the end of the session, but does it help in reducing the memory footprint in any way? The link for temporary tables in your answer is not working anymore.
Thanks
The OFFSET keyword is not working for me in SELECT statements. It always fetches the same result set.
Can anyone tell me why this is happening? Does OFFSET work only with a specific version of HANA Studio?
I tested this on version 1.00.85.02.399857.