Application Development Discussions

Mass reading standard texts (STXH, STXL)

Former Member
0 Kudos

Hi folks,

At my customer we have a proxy interface through which the calling customer can request the text data for all of their open orders. Some of these texts are standard SD texts, some are custom standard texts. I have run performance traces for this interface, and after several performance improvement efforts we now see that the performance bottleneck / limiting factor is the call to READ_TEXT.

For a problem proxy call we sometimes see 18,000+ calls to the READ_TEXT function module. For these calls the proxy interface frequently times out, our customers get no response data, and they understandably get a bit upset.

Perhaps it could be argued that this is a design question and that we should not be reading so many texts in a proxy interface, but that is the business requirement we are trying to live with.

My questions: Does anyone know of a way to mass read texts, other than calling READ_TEXT many times? I know the underlying data is stored in cluster tables, and from my knowledge of cluster table access it pays to have the exact key of every record. So it would appear that the fastest access we could get is to write custom code that reads directly from these underlying tables, which at least cuts out the overhead of the CALL FUNCTION statement, which must in itself be a performance hit.

Does anyone have any killer tips here? Am I missing something basic / obvious? Is there a solution built into standard SAP for mass reading texts that I'm not aware of? Is there an efficient way to mass read data from cluster tables?

This must be a common issue in the SAP world; it is not the first time I have seen calls to READ_TEXT at the top of the performance hit list in a runtime trace.

Any advice / assistance you can provide is greatly appreciated.

Regards,

Julian Phillips

1 ACCEPTED SOLUTION

ThomasZloch
Active Contributor

I had a similar problem when extracting 100,000 FI document item long texts.

My approach in a nutshell:

- select the application data into internal table itab1, including all key fields for the text name

- concatenate the text name into a separate column for each entry in internal table itab1

- select the entries of STXL "for all entries of itab1" into another internal table itab2, comparing the text name; maybe use "package size" if required

- decompress the long text in itab2 using the statement "import ... from internal table ..." (see the ABAP help for this statement, and the sketch below)

A bit complex, but it saves about 50% to 75% of the runtime compared to single READ_TEXT calls, so it can be handy for time critical interfaces and such.
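
A minimal sketch of the decompression step (the type and table names are mine, and it assumes the whole text fits into a single STXL row):

types: begin of ty_raw,
         clustr type stxl-clustr,  " INT2 length field, must come first
         clustd type stxl-clustd,  " LRAW compressed cluster data
       end of ty_raw.
data: lt_raw   type standard table of ty_raw,
      lt_tline type standard table of tline.

* fill lt_raw with the CLUSTR/CLUSTD columns of the STXL row(s) of one
* text, then unpack the "tline" data cluster in one go
import tline = lt_tline from internal table lt_raw.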

Thomas

27 REPLIES


0 Kudos

Thanks Thomas,

I was not expecting to get such a good answer so soon. Your answer tells me that this is a common problem, that there is no standard SAP solution to it, and that your proposed solution should work (provided we can code it right) and give the needed performance benefits. If I could, I would award more than 10 points for this answer, but 10 points it is, and

THANKS VERY MUCH!

I plan to raise a demand at my customer to write a mass_read_text class, and in the interests of the broader community perhaps share the coded solution in a blog here at SDN once we have it fully coded and tested, as I can see this kind of code coming in useful again and again.

0 Kudos

Julian, no problem, I had plans to write a little blog (or wiki entry) as well, but did not get around to doing it.

I would be surprised if somebody told us a better solution in the SAP standard.

If you get stuck along the way, feel free to continue this thread here.

Thomas

0 Kudos

I need to develop something like this as well. Do you guys have the ABAP code for it? It would help me greatly and would be much appreciated! Thanks.

0 Kudos

Hi Joyce,

Unfortunately my role is split between many different activities, but we are still working on this issue. We plan a go-live in mid April. I will need to ask my customer whether it is OK to publish code snippets for this solution, if we can get it to work. If I get the OK from my customer, I hope to write a blog sometime in April.

Regards,

Julian

0 Kudos

Hi,

Thanks for the offer, but I need to start the development on this right now as well, so I don't think I'll be able to wait for your blog.

0 Kudos

Hello Thomas,

I am facing the same problem at the moment: I have to export material long texts for about 300,000 materials.

I tried to implement your approach, but have failed to get it working so far.

I select all materials into an internal table, then I select the entries of STXL into another internal table.

TYPES: BEGIN of item_list,
         matnr like mara-matnr,
         zeinr like mara-zeinr,
         netgew like mara-ntgew,
         prodh like mvke-prodh,
         stawn like marc-stawn,
         mtpos like mvke-mtpos,
         stxlkey like stxl-tdname,
         END OF item_list.

DATA: myitemlist type table of item_list.

DATA: mytextlist type TABLE OF stxl.

select ... from mara into corresponding fields of table myitemlist.

select * from stxl
  into CORRESPONDING FIELDS OF TABLE mytextlist
  FOR ALL ENTRIES IN myitemlist
  where
  tdname = myitemlist-stxlkey.

When I try to decompress the long text there is a "TCHK_TYPELINE_LOAD" error.

import tline to rt_lines
from INTERNAL TABLE mytextlist.

I think there is something wrong with my internal table "mytextlist", because the ABAP help says that the first column has to be type "s" and the second column type "x" in order to use IMPORT ... FROM INTERNAL TABLE.

What does your internal table for STXL look like? How do you "decompress"? What am I doing wrong?

I hope you can help me, and thank you very much in advance.

Best Regards Michael


0 Kudos

Hi Michael, here's some scratch code that I've been playing with in a sandbox just to read a single text. It executes fine and gives a result (the code is based on what READ_TEXT executes internally, but without the overhead):

* declarations for this snippet
TABLES stxh.
SELECT-OPTIONS: s_tdname FOR stxh-tdname,
                s_spras  FOR stxh-tdspras.

DATA: BEGIN OF wa_stxh,
        tdobject   TYPE stxh-tdobject,
        tdname     TYPE stxh-tdname,
        tdid       TYPE stxh-tdid,
        tdspras    TYPE stxh-tdspras,
        tdtexttype TYPE stxh-tdtexttype,
      END OF wa_stxh.

* key structure for IMPORT ... FROM DATABASE stxl(tx)
DATA: BEGIN OF wa_id,
        tdobject TYPE stxl-tdobject,
        tdname   TYPE stxl-tdname,
        tdid     TYPE stxl-tdid,
        tdspras  TYPE stxl-tdspras,
      END OF wa_id.

DATA: wa_stxl  TYPE stxl,
      rt_lines TYPE STANDARD TABLE OF tline,
      l_cp     TYPE tcp00-cpcodepage.

SELECT SINGLE tdobject tdname tdid tdspras tdtexttype
  INTO wa_stxh
  FROM stxh
  WHERE tdobject = 'TEXT'
    AND tdname   IN s_tdname
    AND tdid     = 'ST'
    AND tdspras  IN s_spras.
MOVE-CORRESPONDING wa_stxh TO wa_id.

IF sy-subrc = 0.
  WRITE: / wa_stxh.
  SELECT SINGLE * FROM stxl
    INTO wa_stxl
    WHERE relid    = 'TX'
      AND tdobject = 'TEXT'
      AND tdname   IN s_tdname
      AND tdid     = 'ST'
      AND tdspras  IN s_spras
      AND srtf2    = '0'.
  IF sy-subrc = 0.
    WRITE: / wa_stxl-tdobject, wa_stxl-clustr, wa_stxl-clustd.

    IF wa_stxh-tdtexttype IS INITIAL.          "SAPscript format
      IMPORT tline TO rt_lines
        FROM DATABASE stxl(tx)
             CLIENT   sy-mandt
             ID       wa_id
             ACCEPTING TRUNCATION              "important for Unicode->non-Unicode
             IGNORING CONVERSION ERRORS.
    ELSE.                                      "non-SAPscript text
      IMPORT tline TO rt_lines
        FROM DATABASE stxb(tx)
             CLIENT   sy-mandt
             ID       wa_id
             CODE PAGE INTO l_cp
             ACCEPTING TRUNCATION
             IGNORING CONVERSION ERRORS.
    ENDIF.
  ENDIF.
ENDIF.


Michael and all, here are the important bits of my code. Declarations:

* compressed text data with text name
types: begin of ty_stxl,
         tdname type stxl-tdname,
         clustr type stxl-clustr,
         clustd type stxl-clustd,
       end of ty_stxl.
data:  t_stxl type standard table of ty_stxl.
field-symbols: <stxl> type ty_stxl.

* compressed text data without text name
types: begin of ty_stxl_raw,
         clustr type stxl-clustr,
         clustd type stxl-clustd,
       end of ty_stxl_raw.
data:  t_stxl_raw type standard table of ty_stxl_raw.
data:  w_stxl_raw type ty_stxl_raw.

* decompressed text
data:  t_tline type standard table of tline.
field-symbols: <tline> type tline.

and then

* select compressed text lines in blocks of 3000 (adjustable)
select tdname clustr clustd
       into table t_stxl package size 3000
       from stxl
       for all entries in (itab with application data and TDNAME)
       where relid    = 'TX'          "standard text
         and tdobject = (your text object)
         and tdname   = itab-tdname
         and tdid     = (your text ID)
         and tdspras  = sy-langu.

  loop at t_stxl assigning <stxl>.

*   decompress text
    clear: t_stxl_raw[], t_tline[].
    w_stxl_raw-clustr = <stxl>-clustr.
    w_stxl_raw-clustd = <stxl>-clustd.
    append w_stxl_raw to t_stxl_raw.
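*   note: this assumes the whole text fits into a single STXL row;
*   longer texts span several rows (srtf2 = 0, 1, 2, ...), and all rows
*   of one tdname would have to be appended here in srtf2 order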
    import tline = t_tline from internal table t_stxl_raw.

*  access text lines for further processing
    loop at t_tline assigning <tline>.
      "whatever
    endloop.

  endloop.

  free t_stxl.

endselect.

Thomas

0 Kudos

Thanks again Thomas. I forwarded your snippet to the developer we have working on this. We now have the outline design for a class to mass read the texts, and we are prototyping a solution that we will trace alongside mass calls to READ_TEXT to see which is faster...

0 Kudos

OK, please publish your results here if possible; this would be really interesting.

Thomas

0 Kudos

Dear Thomas,

Just gone through this thread. Thanks for the sample code; it worked perfectly. I will try this one out for future requirements of a similar type.

0 Kudos

Hi Thomas,

Thank you so much for your help! Reading all the texts now takes 10 to 15 minutes, instead of hours when using READ_TEXT.

Best Regards Michael

0 Kudos

Let's not forget that READ_TEXT does a few more things, like reading archived texts or texts from text memory, code page / Unicode related handling, etc.

I still wonder though why SAP has no READ equivalent for the existing INTTAB_SAVE_TEXT and INTTAB_COMMIT_TEXT for storing new texts.

An "INTTAB_READ_TEXT" would have to accept the desired TDNAMES in a internal table, do the magic and return the LINES in an internal table with additional key field TDNAME. Or something like that.

I might even suggest that to SAP if I find an appropriate channel, probably getting rebuffed with some argument.

Thomas

0 Kudos

Hi Thomas, all,

We have some prototype code developed now in a sandbox system. We have done a side-by-side comparison of calling this code and calling READ_TEXT. In our test we read some 4,500 texts. In performance traces we found our new code is on average 4 times faster to execute, so the next step for our development is to add this technique to our problem proxy interface, and we are about to start this work (probably early next week).

As you mention, the standard function module does many things during its execution that our code does not. Our new code also does not work for all texts in the system: some texts seem to have binaries/bitmaps associated with them, and these dump with our new code (the IMPORT statement raises IMPORT_CONTAINER_MISSING). Still, that is not an issue for us, as we only need this code to read order-related texts, and they do not have this problem. There are likely several other cases that READ_TEXT is coded to handle that our new code does not take into account; what we are aiming for is performance with average standard texts rather than the ability to read every type of text in the system.

Thanks again for the assistance here. I'm still a little unclear on whether I have permission to publish our code, so I'll try to chase this up at our end.

0 Kudos

Thanks for the feedback. Of course you don't need to publish code if there are any constraints; just your performance measurements on actual production data would be helpful for evaluating the impact of this change.

Thomas

0 Kudos

Can you please let me know whether the IMPORT statement format you have used must match the corresponding EXPORT statement format? We are facing dumps that say 'IMPORT_CONTAINER_MISSING' with exception 'cx_sy_import_format_error', and the keyword help says: "The internal table must contain a data cluster which was generated using the INTERNAL TABLE addition of the EXPORT statement; otherwise, a runtime error occurs. Note that the internal table cannot be empty."
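
For reference, a minimal roundtrip that illustrates the symmetry the help describes (a standalone sketch, not the STXL case):

* the cluster table needs an INT2 column followed by a raw column;
* the STXL columns CLUSTR/CLUSTD have exactly these types
types: begin of ty_cluster,
         clustr type stxl-clustr,
         clustd type stxl-clustd,
       end of ty_cluster.
data: lt_out     type standard table of tline,
      lt_back    type standard table of tline,
      lt_cluster type standard table of ty_cluster,
      ls_line    type tline.

ls_line-tdformat = '*'.
ls_line-tdline   = 'Hello cluster'.
append ls_line to lt_out.

* EXPORT writes a data cluster into the internal table ...
export tline = lt_out to internal table lt_cluster.
* ... and IMPORT can read it back, because the cluster was created by a
* matching EXPORT ... TO INTERNAL TABLE under the same name "tline"
import tline = lt_back from internal table lt_cluster.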

0 Kudos

Hi Harika,

I am facing a similar issue, possibly because of the non-English characters in my text. Were you able to find a solution for it? Please let me know.

0 Kudos

Thanks so much, this exercise helped me a lot. I made my own version for searching the CFDI UUID of financial documents.

Thanks again.

0 Kudos

It was very helpful.

Thanks.

Sandra_Rossi
Active Contributor
0 Kudos

For reference, there is a blog from 2014 which copies the results of this thread and arranges them in a better form. I think it's better to continue any discussion on that blog, where I added a comment describing an issue with the code provided above.

0 Kudos

Hi Sandra, good to see you returning to SCN!

I cannot see your comment (yet?) under the blog you mentioned.


Thomas

0 Kudos

Hi Thomas, thanks. There it is. I replied in the wrong time order.

Sandra

D046098
Advisor
0 Kudos

Hi Julian,

A new note has been released that provides new function modules for mass reading texts.

2261311 - Function module for reading multiple SAPscript texts

Regards,

Yoshio

ThomasZloch
Active Contributor
0 Kudos

Thanks for the update, a long journey ends here.

I briefly looked at the correction instructions and stumbled upon the comment "STXL contains an LRAW field, so SELECT ... FOR ALL ENTRIES must not be used".

Why is that?


Thomas

0 Kudos

Good question, but what they actually say is `It seems that "select * from stxl ... for all entries in name_table" should not be used because of the LOB field`, so they are not sure themselves... I know that a SELECT with LRAW fields must be done in the right way (the INT2 length field must be positioned directly before the LRAW field in the target area), but if you do it that way, FOR ALL ENTRIES used to work.

They say "LOB", but it's not a LOB. So maybe the developer has wrongly interpreted the abap documentation which states "If the addition FOR ALL ENTRIES is used, no LOB handles can be created as reader streams or as locators in the SELECT target area"

0 Kudos

https://help.sap.com/abapdocu_740/en/abapwhere.htm

Well, you cannot use a RAW or LOB field in the WHERE clause. This is obvious, because of the requirement to match the types.

I see no reason why you should not be able to SELECT a raw field with non-raw fields in the WHERE conditions.

Volker
Volker