Application Development Discussions
Join the discussions or start your own on all things application development, including tools and APIs, programming models, and keeping your skills sharp.

Optimization Issue

Former Member
0 Kudos

Hello, I have an internal table which has only four fields in it:

Material

GL_Account

Amount

Customer.

I want to update a standard database table with the data that is in my internal table. The problem is that the standard database table's structure is different from my internal table's, so I created another internal table which is TYPE my database table, along with a work area.

Then I did a loop from the internal table into the work area.

INT_TAB has four fields above.

WA_INT is INT_TAB's work area

STD_TAB is the standard database table.

WA_STD_TAB is the work area for STD_TAB, which is TYPE STANDARD TABLE OF the standard database table.

This is my code:

LOOP AT int_tab INTO wa_int.
  wa_std_tab-zzmaterial   = wa_int-zzmaterial.
  wa_std_tab-zzgl_account = wa_int-zzgl_account.
  wa_std_tab-zzamount     = wa_int-zzamount.
  wa_std_tab-zzcustomer   = wa_int-zzcustomer.
  UPDATE std_tab FROM wa_std_tab.
ENDLOOP.

My code compiles okay, but it is very slow and times out after 40 minutes. I am only processing 1 million records. I am looking for a way to optimize this; it should finish in a few minutes.

Please suggest what I should do here.

Thanks.

<locked by moderator, spoonfeeding>

Message was edited by: Manish Kumar

1 ACCEPTED SOLUTION

Former Member
0 Kudos

Consider using UPDATE dbtab FROM TABLE itab syntax. As far as I understand, it groups updates into packages instead of sending them one by one, so this should reduce the overall number of database calls.

The code will be something like this:

DATA:

  it_std_tab TYPE TABLE OF <DB table type>,

  l_count    TYPE i VALUE 0.

LOOP AT int_tab INTO wa_int.

  wa_std_tab-zzmaterial = ... " Copy the fields

   ...

  APPEND wa_std_tab TO it_std_tab.

   l_count = l_count + 1.

  " Let's partition the source itab to prevent updates from being too huge

  IF l_count = 1000.

    " Chunk is full, write to DB

     UPDATE std_tab FROM TABLE it_std_tab.

     CLEAR it_std_tab.

   

    " Committing transaction with 1 mln changes may cause timeouts itself, so let's do it in chunks

    COMMIT WORK.

  ENDIF.

ENDLOOP.

Another option to try is the UPDATE ... SET syntax, thus eliminating the need for intermediate tables/structures:

LOOP AT int_tab INTO wa_int.

  UPDATE std_tab

   SET

    zzmaterial = wa_int-zzmaterial

     ...

   WHERE ...

ENDLOOP.

Important: in both cases you need to make sure that the UPDATE statements issued to the DB are supplied with the STD_TAB primary key (or other index) values. So you need to either include them in it_std_tab (1st option) or provide them explicitly in the WHERE clause (2nd option).
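For illustration, here is a minimal sketch of the 2nd option with the fields from your post filled in. It assumes ZZMATERIAL is the key field of STD_TAB; replace the WHERE condition with whatever the real key is:

LOOP AT int_tab INTO wa_int.
  " Identify the row by the key field and update only the non-key fields
  UPDATE std_tab
    SET zzgl_account = wa_int-zzgl_account
        zzamount     = wa_int-zzamount
        zzcustomer   = wa_int-zzcustomer
    WHERE zzmaterial = wa_int-zzmaterial.
ENDLOOP.

COMMIT WORK. " or commit in chunks, as in the 1st option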

11 REPLIES 11


0 Kudos

Mr. Alex, I was trying to update the internal table from the work area but did not know how to do that. I think this statement will let me do that, right?

APPEND wa_std_tab TO it_std_tab.



I will try this and let you know if it works.

Thanks.

0 Kudos

Mr. Alex, it worked but the performance is still slow. It is definitely better than before. Can I use a field symbol to make it even faster? Can you please tell me how to declare a field symbol? I tried searching online but I couldn't understand it.

Another question is: where do I clear the count?

Also, what if I get 1024 records? It will update my standard table for the first 1000 rows, but what about the remaining 24 records?

Thanks.

0 Kudos
Can you please tell me how to declare a field symbol?

Like this:


FIELD-SYMBOLS: <fs_row> TYPE int_tab_row_type.


Unfortunately, I really doubt this will help a lot since your main bottleneck is the UPDATE operation.

Another question is: where do I clear the count?

Oops, missed that one. Eventually, the LOOP should look like the code below. Please note that the problem with the remaining 24 rows is also solved.

LOOP AT int_tab INTO wa_int.

  wa_std_tab-zzmaterial = ... " Copy the fields

   ...

  APPEND wa_std_tab TO it_std_tab.

  l_count = l_count + 1.

  " Let's partition the source itab to prevent updates from being too huge

  IF l_count = 1000.

    " Chunk is full, write to DB

     UPDATE std_tab FROM TABLE it_std_tab.

     CLEAR: it_std_tab, l_count. " <-- clear l_count as well

  

    " Committing transaction with 1 mln changes may cause timeouts itself, so let's do it in chunks

    COMMIT WORK.

  ENDIF.

ENDLOOP.


" Update the remaining records if any (the 1024 rows issue)

IF it_std_tab IS NOT INITIAL.

  UPDATE std_tab FROM TABLE it_std_tab.

  CLEAR it_std_tab.

ENDIF.


the performance is still slow


Well, you are doing around 1,000,000 updates; this just takes time, no matter what you do. Check all the indexes created for std_tab: you may need to create or alter one to fit the WHERE condition of your updates. This should also help, but don't expect miracles to happen.

0 Kudos

Alex sir, thank you. I think this will work, but can you also do one thing for me...

Can you show me how the field symbols need to be declared for my scenario, please?


Also, Material is the only key that I have in my standard database table, so I don't know how to build the WHERE clause for ZZMATERIAL. I have not created any index on my standard database table; do you think I should create an index for ZZMATERIAL?

0 Kudos
Can you show me how the field symbols need to be declared for my scenario, please?

It will be something like this:

FIELD-SYMBOLS: <fs_int> LIKE LINE OF int_tab. " or use actual type of the int_tab work area

LOOP AT int_tab ASSIGNING <fs_int>.

   ... " Use <fs_int> instead of wa_int, the rest of the code is the same

ENDLOOP.

Also, Material is the only key that I have in my standard database table, so I don't know how to build the WHERE clause for ZZMATERIAL. I have not created any index on my standard database table; do you think I should create an index for ZZMATERIAL?

If ZZMATERIAL is part of the primary key, then it's already included in the primary index. I believe the SAP Open SQL layer should be wise enough to generate a proper WHERE clause that will make use of that index.

Just in case, you may check the SQL queries actually issued when updates are performed. To do this, just run the SQL tracing from the debugger as described here.

0 Kudos

Thank you, Sir Alex, I will try this and let you know how it works. Also, I have another internal table, called "int_tab_str", which has a work area as well. It has ZZMATERIAL and ZZSTORE as fields.

In that second internal table, my data looks like this:

ZZMATERIAL   ZZSTORE
101          CTDPS
102          CRGPD
103          SEHTY
104          CAUFG

I need to merge that table into int_tab so I can bring the ZZSTORE field into my int_tab. From there I can take all these fields, send them to it_std_tab, and finally write them to my standard database table.


I am thinking about creating a third, final table which will have the fields from Internal Table 1 plus ZZSTORE from Internal Table 2.

Then I will do:

SELECT * FROM INTERNAL TABLE 2

FOR ALL ENTRIES IN INTERNAL TABLE 1

WHERE ZZMATERIAL = INTERNAL TABLE 1-ZZMATERIAL.


LOOP AT INTERNAL TABLE 1

LOOP AT INTERNAL TABLE 2 WHERE ZZMATERIAL = ITAB1-ZZMATERIAL.


Here I think I will have to fill my third table, but I do not know how.


ENDLOOP.

ENDLOOP.



Once this code is good, I need to know where to place it in my scenario code. Please let me know.

Thanks.

0 Kudos

As far as I understand, the relation between store and material is 1:M. In this case I would just add a ZZSTORE field to the 1st table definition and then just fetch stores while looping over it. See the sample code below:

DATA l_wa_store LIKE LINE OF int_tab_2.


SORT int_tab_2 BY zzmaterial.


LOOP AT int_tab ASSIGNING <fs_int>.

  READ TABLE int_tab_2 INTO l_wa_store

    WITH KEY zzmaterial = <fs_int>-zzmaterial BINARY SEARCH.


  IF sy-subrc = 0.

    <fs_int>-zzstore = l_wa_store-zzstore.

  ENDIF.


  " The code we've discussed earlier goes below

  ...

ENDLOOP.
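For completeness, the extended row type of int_tab could then look roughly like this (ty_int is just an example name). It is only a sketch and assumes the ZZ* fields, including ZZSTORE, exist under these names in STD_TAB, so the row type can simply reuse the database table's own field types:

TYPES: BEGIN OF ty_int,
         zzmaterial   TYPE std_tab-zzmaterial,
         zzgl_account TYPE std_tab-zzgl_account,
         zzamount     TYPE std_tab-zzamount,
         zzcustomer   TYPE std_tab-zzcustomer,
         zzstore      TYPE std_tab-zzstore, " assumed to exist in STD_TAB
       END OF ty_int.

DATA int_tab TYPE STANDARD TABLE OF ty_int.

Keeping the field names identical to the database table means that MOVE-CORRESPONDING and the field-by-field copy keep working without changes.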

naveen_inuganti2
Active Contributor
0 Kudos

If you are not accessing the same data later in this program after updating your database table, then you can use an update function module to finish your task.

Create a function module and pass your local internal table to that FM via a TABLES or EXPORTING parameter. Then LOOP over your data inside the update function module and update the table. Call your update function module like below in the program:

CALL FUNCTION 'ZUPDATE_MY_TABLE' IN UPDATE TASK
  TABLES
    int_tab = int_tab.


This will be executed as an asynchronous task, and you can monitor the update process in transaction SM13.
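For reference, here is a rough sketch of what the update function module itself could contain. ZUPDATE_MY_TABLE is only an example name; you would create it in SE37 with processing type 'Update Module' and a TABLES parameter INT_TAB typed like your internal table (this signature is an assumption):

FUNCTION zupdate_my_table.
* Assumed signature: TABLES parameter INT_TAB, typed like the program's internal table
  DATA: wa_int     LIKE LINE OF int_tab,
        wa_std_tab TYPE std_tab.

  LOOP AT int_tab INTO wa_int.
    MOVE-CORRESPONDING wa_int TO wa_std_tab.
    UPDATE std_tab FROM wa_std_tab.
  ENDLOOP.
ENDFUNCTION.

Keep in mind that a call registered IN UPDATE TASK is only executed when the program issues COMMIT WORK, so make sure a COMMIT WORK follows the CALL FUNCTION; failed updates can then be analyzed in SM13.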


Regards,

Naveen


former_member191806
Active Participant
0 Kudos

You can optimize your code if you use a field symbol to loop through your internal table. Field symbols work like dereferenced pointers, i.e. they do not create an additional work area. Right now, your code copies from the internal table int_tab to the work area wa_int and then it copies from wa_int to wa_std_tab. You can bypass the first step by using field symbols.

  LOOP AT int_tab ASSIGNING FIELD-SYMBOL(<fs_int>).

    MOVE-CORRESPONDING <fs_int> TO wa_std_tab.

    UPDATE std_tab FROM wa_std_tab.

  ENDLOOP.

If you can afford to make a gigantic new internal table, you can eliminate the loop and just make a MOVE-CORRESPONDING from your old table to the new internal table, then use the db UPDATE dbtab FROM TABLE itab syntax.
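A minimal sketch of that variant, assuming the system is on ABAP 7.40 or higher (where MOVE-CORRESPONDING also works between internal tables) and that int_tab carries the key field(s) of std_tab:

DATA it_std_tab TYPE STANDARD TABLE OF std_tab.

MOVE-CORRESPONDING int_tab TO it_std_tab. " copies all identically named fields, row by row
UPDATE std_tab FROM TABLE it_std_tab.     " one mass UPDATE instead of one call per row
COMMIT WORK.

With a million rows you may still want to split it_std_tab into chunks and COMMIT WORK in between, as discussed earlier in the thread.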

BR,

Serban

0 Kudos

Serban, thank you. Can you please tell me how to declare field symbols? I think you are right that field symbols will make my loop go faster. Please help me declare them.

Thanks.