Application Development Discussions
Join the discussions or start your own on all things application development, including tools and APIs, programming models, and keeping your skills sharp.

Performance on inner join with respect to the data in DB

0 Kudos

Hi Experts,

We are performing an inner join on two tables (BKPF and BSAK). A few months back this query was functioning properly and retrieving records successfully.

However, the query now takes a very long time and we get the pop-up message "SAP system message; work process restarted; session terminated".

Eventually no data is retrieved.

I have two questions:

1. Can an increase in the number of records in the tables we are joining (BKPF and BSAK) affect performance?

2. Under what circumstances will a work process get restarted? Does it depend on any system setting? Can it vary from system to system?

Thanks & Regards,

Fahmeetha

1 ACCEPTED SOLUTION

Former Member
0 Kudos

Hi Fahmeetha, Hi Sandra,

1/ Yes: the more rows you have to read, the more blocks you fetch from the DB, the more of the result you hold in memory, and the more performance suffers (you need RAM, I/O, and therefore CPU to handle all of that).

2/ Under what circumstances will the work process get restarted?

Your process is restarting because it moves into HEAP memory.

To keep it short and simple:

An ABAP dialog work process uses shared memory, but once the shared memory is full, allocation continues in the heap.

(This is very simplified and not entirely exact, but it conveys the idea.)

Why does SHARED MEMORY differ from HEAP MEMORY? Because heap is PRIVATE memory: by design it cannot be shared between processes.

That is why you get the well-known SAP GUI advice to leave a transaction before taking a break.

And you should! Why?

Think of heap memory as a "dirty" area where dust gets swept under the carpet.

So when your ABAP dialog work process is running in heap, it is not clean.

Since it is dirty, to put it simply, the SAP work process restarts in order to clean up the heap.

By the way, that restart gives you a brand-new process to handle the next request.

Have a look at SM04 / Goto / Memory: heap memory is the right-hand column.

2.2) Does it depend upon any system setting?

Yes, but raising the limits is not a long-term solution.

What happens if you open the valve? What happens to your operating system, your hardware, your paging in/out?

abap/heap_area_dia    : max. heap memory (bytes) for dialog work processes

abap/heap_area_nondia : max. heap memory (bytes) for non-dialog work processes

abap/heap_area_total  : max. usable heap memory (bytes)

abap/heaplimit        : heap memory limit (bytes) above which the work process is restarted
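For illustration only, these parameters might appear in an instance profile like this. The values below are purely made up; check and size them for your own hardware via RZ10/RZ11:

```text
abap/heap_area_dia    = 2000000000
abap/heap_area_nondia = 2000000000
abap/heap_area_total  = 4000000000
abap/heaplimit        = 40000000
```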

2.3) Can it vary with respect to the system?

Yes.

3) Solution?

The clean solution is to avoid the big SELECT with joins. Use a function module instead: read BKPF via FI_DOCUMENT_READ, for example.

Don't try to put everything in memory. Do a loop, read BKPF, do what you need with it, and move on to the next document.
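A rough, untested sketch of that loop-based approach. The FI_DOCUMENT_READ parameter names here are from memory; verify the function module interface in SE37 before using it:

```abap
* Illustrative only: read one document at a time instead of joining
* everything into memory at once. Verify the FM interface in SE37.
DATA: ls_key  LIKE LINE OF p_t_data,
      lt_bkpf TYPE TABLE OF bkpf,
      lt_bseg TYPE TABLE OF bseg.

LOOP AT p_t_data INTO ls_key.
  CALL FUNCTION 'FI_DOCUMENT_READ'
    EXPORTING
      i_bukrs   = ls_key-bukrs
      i_belnr   = ls_key-belnr
      i_gjahr   = ls_key-gjahr
    TABLES
      t_bkpf    = lt_bkpf
      t_bseg    = lt_bseg
    EXCEPTIONS
      not_found = 1
      OTHERS    = 2.
  IF sy-subrc = 0.
    " process lt_bkpf / lt_bseg here, then discard them
  ENDIF.
  CLEAR: lt_bkpf, lt_bseg.
ENDLOOP.
```

This keeps only one document's data in memory at any moment, at the cost of many small database round trips.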

10 REPLIES

Sandra_Rossi
Active Contributor
0 Kudos

You should look at the developer trace file (transaction ST11, there will be a file named dev_w<number> corresponding to the failed work process <number>, first check SM21 to get the work process number, as I guess you'll have one general message there).

With the text in the developer trace file, search the SAP notes and the web to get more information.

0 Kudos

Hi Sandra,

I checked in ST11 for the respective work process id , it has the message as "User XYZ reached avail. heapsize = 5718 MB."

0 Kudos

Probably the amount of data returned by that SELECT is huge. But how are you sure it's this SELECT which fails? If you're sure, could you provide the code of that exact SELECT, with the contents of variables (by debug) right before it fails?

0 Kudos

The SELECT statement shown below was functioning properly until four months ago.


   SELECT h~bukrs h~belnr h~gjahr h~budat h~bldat h~xblnr h~bktxt
          h~usnam h~cpudt h~blart h~waers h~bvorg d~lifnr d~augbl
          d~augdt d~zlspr d~pswsl d~xragl d~xref1 d~xref2 d~xref3
          h~awkey h~ausbk
     FROM bkpf AS h
     INNER JOIN bsak AS d
       ON  h~bukrs = d~bukrs
       AND h~belnr = d~belnr
       AND h~gjahr = d~gjahr
     APPENDING TABLE t_ap_data
     FOR ALL ENTRIES IN p_t_data
     WHERE h~bukrs = p_t_data-bukrs AND
           h~bstat = ' '            AND
           h~belnr = p_t_data-belnr AND
           h~gjahr = p_t_data-gjahr.



Internal table "p_t_data" now has almost 27,000 records. Earlier, when the query was working fine, this table also had a similar number of records (almost 28,000).

So we need to know whether an increase in data volume in the underlying tables (BKPF and BSAK) can impact performance and lead to a memory issue.

0 Kudos

My advice: please forget that it has worked in the past, and analyze the current situation as if it were new.

4 GB of data is huge. Did you make sure the volume has not changed? I also see two "bad" things in your query: 1) the combination of a join and FOR ALL ENTRIES (it's better to do just the join), and 2) APPENDING TABLE, which adds rows to the target and suggests the SELECT is meant to be repeated in a loop!
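One way to bound memory use, assuming the full joined result does not need to sit in memory all at once, is to drive the SELECT in chunks. This is a made-up, untested sketch; the chunk size, `lt_chunk`, `lt_result`, and `process_chunk` are all illustrative names:

```abap
* Process p_t_data in chunks so the joined result never has to fit
* in memory all at once. The chunk size of 5000 is arbitrary.
DATA: ls_entry  LIKE LINE OF p_t_data,
      lt_chunk  LIKE p_t_data,
      lt_result LIKE t_ap_data.

LOOP AT p_t_data INTO ls_entry.
  APPEND ls_entry TO lt_chunk.
  IF lines( lt_chunk ) = 5000 OR sy-tabix = lines( p_t_data ).
    SELECT h~bukrs h~belnr h~gjahr d~lifnr d~augbl
      FROM bkpf AS h
      INNER JOIN bsak AS d
        ON  h~bukrs = d~bukrs
        AND h~belnr = d~belnr
        AND h~gjahr = d~gjahr
      INTO TABLE lt_result
      FOR ALL ENTRIES IN lt_chunk
      WHERE h~bukrs = lt_chunk-bukrs
        AND h~bstat = ' '
        AND h~belnr = lt_chunk-belnr
        AND h~gjahr = lt_chunk-gjahr.
    PERFORM process_chunk USING lt_result.  " made-up routine: handle, then discard
    CLEAR: lt_chunk, lt_result.
  ENDIF.
ENDLOOP.
```

The point is that each chunk's result is processed and then cleared, so peak memory stays proportional to the chunk size rather than the full data volume.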

Did you try changing the code to make it work? Did you check whether corrections exist from SAP and from your database vendor?

0 Kudos

Thanks Sandra.

We need to give the business the root cause of the issue. We need to know whether growth in the base tables (BKPF/BSAK) has caused it.

Is there any possibility for that?

0 Kudos

You can't get an easy answer to this kind of question; there are lots of possibilities to investigate. Maybe you have a little "bug" in your program which only woke up now. If you can reproduce the problem, it's easy to analyze in the debugger: watch the growth of the internal table, capture the input values, and count the number of queried rows. If you can't reproduce it (e.g. a night job), as a last resort you could modify your program to trace the information you need, to a file for example.

Jelena
Active Contributor
0 Kudos

Fahmeetha M wrote:

We need to answer business the root cause of the issue. We need to know whether the increase in the base tables (BKPF/BSAK) has caused the issue.

The root cause is likely that the ABAP code was never optimized. Naturally, the more data you have, the more resources are consumed. But optimized ABAP code together with adequate hardware should be able to handle the data volume. So the data is not really the root cause; it's just the trigger.


0 Kudos

Nicolas COCHEFERT wrote:

3) Solution ?

The clean solution is to avoid the big SELECT with joins. Use a function module instead: read BKPF via FI_DOCUMENT_READ, for example.

Don't try to put everything in memory. Do a loop, read BKPF, do what you need with it, and move on to the next document.

Can't say I agree with that; remember the golden rules for SQL performance:

  • Minimize amount of transferred data
  • Minimize amount of data transfers
  • Keep the result sets small
  • Optimal use of keys and indexes
  • (I'm deliberately leaving out "Keep load away from the database", as it no longer seems to apply)

So, as previous posts have basically suggested: make sure that you select only what you need, and do so in as few separate SQL statements as possible. Which implies that selecting a few lines from BKPF in a huge loop doesn't seem like the best of ideas.

In this specific case, you could consider splitting your single huge SELECT into several large SELECTs, for instance (that is, if all the selected data really needs to be in memory at once), or splitting your job/program into several (parallel) ones.

Oh, and the obvious comment: if you're using FOR ALL ENTRIES, make sure the internal table you're referring to isn't empty.
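That last point matters because FOR ALL ENTRIES with an empty driver table drops the conditions that refer to it, so the SELECT reads far more rows than intended. A minimal guard might look like this (lt_result is a made-up target table, shown for illustration):

```abap
* If p_t_data is empty, FOR ALL ENTRIES ignores the conditions that
* refer to it and the SELECT reads ALL matching rows, so guard it:
IF p_t_data IS NOT INITIAL.
  SELECT h~bukrs h~belnr h~gjahr
    FROM bkpf AS h
    INTO TABLE lt_result
    FOR ALL ENTRIES IN p_t_data
    WHERE h~bukrs = p_t_data-bukrs
      AND h~belnr = p_t_data-belnr
      AND h~gjahr = p_t_data-gjahr.
ENDIF.
```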