ABAP Development

I recently had a lot of trouble finding the requested characteristics on a specific material. For some reason the class assigned to the material didn't contain the characteristics that I needed. I'm not an SD consultant so I had no idea where to even begin to resolve this problem. After running to every SD and MM consultant that I knew, I finally found one who helped me really quickly.

 

First, he indicated that there is a configurable master material that contains the characteristic class. Unfortunately, that material wouldn't contain any data, as it's the master of many child materials. After some digging he helped me find the link from MARA that CLAF_CLASSIFICATION_OF_OBJECTS for some reason couldn't find. The function module below is the result of that.

 

For ZIBSYMBOL I just made a copy of the structure of table IBSYMBOL and added ATINN_S as type CHAR20.
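 

As a rough sketch, the structure could look like this as a local type (in the system, ZIBSYMBOL is a DDIC structure; only the fields used below are shown, the remaining IBSYMBOL fields are omitted):

TYPES: BEGIN OF ty_zibsymbol,
         symbol_id TYPE ibsymbol-symbol_id, " key of the value symbol
         atinn     TYPE ibsymbol-atinn,     " internal characteristic number
         atflv     TYPE ibsymbol-atflv,     " characteristic value as float
*        ... remaining IBSYMBOL fields ...
         atinn_s   TYPE c LENGTH 20,        " added: readable characteristic name
       END OF ty_zibsymbol.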

 

Also, keep in mind that the "value from" field (ATFLV in the code below) will be filled for some characteristics. It's a floating point value and needs to be converted to be readable.

 

Function - Get Master Characteristic Data for a Material
FUNCTION z_get_config_mat_char.
*"----------------------------------------------------------------------
*"*"Local Interface:
*"  IMPORTING
*"     REFERENCE(P_MATNR) TYPE  MATNR
*"  TABLES
*"      T_SYMBOLS TYPE  ZIBSYMBOL
*"----------------------------------------------------------------------


   DATA: lv_objmara TYPE mara-cuobf,
         lv_objibin TYPE ibin-instance,
         lv_recnr   TYPE ibin-in_recno,
         lv_sumid   TYPE ibinvalues-symbol_id,
         w_symbols  LIKE LINE OF t_symbols.

   DATA: it_symids TYPE TABLE OF ibinvalues,
         wa_symids LIKE LINE OF it_symids.

   DATA: it_selsym TYPE TABLE OF selopt,
         wa_selsym LIKE LINE OF it_selsym.

   SELECT SINGLE cuobf
     FROM mara
     INTO lv_objmara
     WHERE matnr = p_matnr.
   CHECK sy-subrc = 0.

   lv_objibin = lv_objmara.

   SELECT SINGLE in_recno
     FROM ibin
     INTO lv_recnr
     WHERE instance = lv_objibin.
   CHECK sy-subrc = 0.

   SELECT * FROM ibinvalues
     INTO CORRESPONDING FIELDS OF TABLE it_symids
     WHERE in_recno = lv_recnr.

   LOOP AT it_symids INTO wa_symids.
     wa_selsym-sign   = 'I'.
     wa_selsym-option = 'EQ'.
     wa_selsym-low    = wa_symids-symbol_id.
     APPEND wa_selsym TO it_selsym.
   ENDLOOP.

*  Guard against an empty range table: IN with an empty table selects all rows
   CHECK it_selsym IS NOT INITIAL.

   SELECT * FROM ibsymbol
     INTO CORRESPONDING FIELDS OF TABLE t_symbols
     WHERE symbol_id IN it_selsym.

   LOOP AT t_symbols INTO w_symbols.
     CALL FUNCTION 'CONVERSION_EXIT_ATINN_OUTPUT'
       EXPORTING
         input  = w_symbols-atinn
       IMPORTING
         output = w_symbols-atinn_s.

     MODIFY t_symbols FROM w_symbols.
   ENDLOOP.

ENDFUNCTION.
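 

Calling the function module might look like this (a sketch; lv_matnr is assumed to hold the configurable material number):

DATA: lv_matnr   TYPE matnr,
      lt_symbols TYPE STANDARD TABLE OF zibsymbol.

CALL FUNCTION 'Z_GET_CONFIG_MAT_CHAR'
  EXPORTING
    p_matnr   = lv_matnr
  TABLES
    t_symbols = lt_symbols.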

 

 

Conversion Routine

DATA: w_float      TYPE f,
      w_character  TYPE ausp-atwrt,
      w_number(10) TYPE p DECIMALS 0.

...

w_float = wa_symbols-atflv.
CALL FUNCTION 'CEVA_CONVERT_FLOAT_TO_CHAR'
  EXPORTING
    float_imp  = w_float      "Field TYPE F
    format_imp = w_number     "Field format
    round_imp  = ' '          "Round off
  IMPORTING
    char_exp   = w_character. "Field TYPE C
<field> = w_character.

Introduction


 

   The purpose of this article is to show how to implement logical ports and how to create an executable program that consumes an ABAP proxy, after creating a web service in PHP and a service consumer in Enterprise Services. To start with this article it is necessary to first build the PHP web service and define the service consumer in ES. This foundation is perfectly explained by Karthikeya Sastry in his great document: Creating Web service in PHP and consuming it in SAP. It is a very straightforward tutorial on how to consume PHP web services and, as always, the most complex solution is rarely the best one. After going through his document I was looking for a way to consume the service he created in an executable program. So, this tutorial is a complement showing how to configure the logical port in SOAMANAGER and set it as the default to be used in ABAP programs that consume the developed proxy.

 

 

Configuration of Consumer Proxies    


   The configuration of consumer proxies is implemented via logical ports. Consequently, a logical port is created based on the requirements of the provider. A logical port refers to a service endpoint that is available under a unique location in the provider system. A logical port additionally contains a runtime configuration for accessing the service endpoint, as well as the logon data that is required for calling the service methods. You can create multiple logical ports for each consumer proxy, but a logical port can only refer to one endpoint. You manage and configure consumer proxies via the SOA Management tool (transaction SOAMANAGER). The figure below illustrates the relationship between logical ports and service endpoints. A service consumer establishes a connection by sending a call via a logical port. A logical port can send a call only to one service endpoint, but a service endpoint can be called via various logical ports.

 

1.png

After following the steps to develop the PHP web service in Karthikeya Sastry's tutorial, you should have the enterprise service below:

 

2.jpg

 

PHP development


In my case the MySQL DB has the structure below; given the titel column as a key, the PHP service must return the entire row:

 

  3.jpg

 

- My adjusted PHP program, according to the model from the tutorial:

 

 

<?php
class MyClass {

    function test($val) {
        $host    = "xxxx";
        $user    = "xxxx";
        $senhabd = "xxxx";
        $db      = "cdcol";

        $connection = mysql_connect($host, $user, $senhabd) or die('Não foi possivel conectar: ' . mysql_error());
        $db = mysql_select_db($db, $connection) or die("erro database");

        $sql = "SELECT * FROM CDS WHERE TITEL = '".$val."'";

        $result = mysql_query($sql) or die(mysql_error());

        $tbl = mysql_fetch_array($result);

        $nome = $tbl["interpret"];

        echo $nome;
        return $nome;
    }
}

$server = new SOAPServer('service.wsdl');
$server->setClass('MyClass');
$server->handle();
?>

 

 

 

-  The WSDL that I generated from the tutorial:

 

wsdl.jpg

 

Configuration of Logical Port


      Go to transaction SOAMANAGER and select the Service Administration tab. Then choose Single Service Configuration and search for the consumer proxy you created. Create the logical port with the URL of your generated PHP WSDL path. Up to this point you can only test your proxy, but you are not yet able to use it in your ABAP program. So, the trick here is to define this logical port as the default for your consumer proxy by checking the box 'Logical Port is default', as in the pictures below. In older versions you could also do this with transaction LPCONFIG, although that is not advisable, since LPCONFIG must not be used to create logical ports for proxy classes generated in versions after SAP NetWeaver 2004s. SOAMANAGER overrides logical ports created in transaction LPCONFIG: when two logical ports with the same name exist, the logical port created in SOAMANAGER is used. This also applies to the behavior of default logical ports. If SOAMANAGER is used to create a default logical port for a particular proxy class, the default logical port created for this class with LPCONFIG is no longer used.
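 

If you prefer not to rely on the default flag, the generated consumer proxy's constructor also accepts an explicit logical port name (a minimal sketch; 'ZLP_PHP' is a made-up port name):

DATA lo_proxy TYPE REF TO zconsco_service.

* With a default logical port, no parameter is needed:
CREATE OBJECT lo_proxy.

* Alternatively, address a specific logical port explicitly:
CREATE OBJECT lo_proxy
  EXPORTING
    logical_port_name = 'ZLP_PHP'.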

 

soa1.jpg

soa2.png

 

ABAP Proxy


    Finally, after setting the logical port as default, we can consume the service in an ABAP program. According to the parameters we defined for the web service, we fill the input structure with the key for the records of the MySQL DB, in order to retrieve the record in the output structure and display its columns.

 

 

REPORT  z_proxy_o_php.

DATA: o_proxy    TYPE REF TO zconsco_service,
      o_ex1      TYPE REF TO cx_root,
      v_val      TYPE zconstest,
      v_response TYPE zconstest_response,
      v_text1    TYPE string.

START-OF-SELECTION.

  v_val-new_operation_request = 'Beauty'.

  TRY.
      CREATE OBJECT o_proxy.

      CALL METHOD o_proxy->test
        EXPORTING
          input  = v_val
        IMPORTING
          output = v_response.

      COMMIT WORK.

    CATCH cx_ai_system_fault INTO o_ex1.
      v_text1 = o_ex1->get_text( ).
  ENDTRY.

  IF v_response IS NOT INITIAL.
    WRITE: v_response-new_operation_response.
  ENDIF.

 

 

Result


According to our PHP web service, the second column of the MySQL DB table will be displayed:

 

result.jpg

Hi all,

Today I want to write about a topic every one of us (the developers) has gone through.

 

Creating and changing code. To be more specific, the part that we think drives us crazy when trying to refactor it.

 

Not because we don't have a transport or technical documentation at hand. No, because our initial changes influenced further development, and now it is our turn again, because we built the first pieces. If you took part in a development (and thought: "Please, never show up in my todo list again"), it always comes back to you eventually.

That is an unwritten rule, isn't it?

 

But now, what can you do as a developer to improve this and reduce such messy work?

 

1st point Do not be just a recipient


Many of you might now be thinking: I'm a developer, so what, it is my job, and if someone requests it, I'll get it done!

 

That is an argument and yes, I agree with you in a way, but not entirely.

 

Of course, there is always this stuff which is really unnecessary but has to be done anyway. That is the same in every project. People love doing things the same way they did before. So that is the reason for it, and if it doesn't affect SAP code, I just let it happen. In these cases I see great opportunities to develop from the very beginning, and I can do it my way (the development; the result is given by others).

 

But when it affects SAP or established source code, I always talk to the consultants and ask questions about it. The goal is not to argue with the consultants; I want to understand what they are trying to achieve in the end and why the available functionality in the system doesn't match. You know, nobody knows everything.


2nd point Bring people together


What does that mean?

If you get a concept in front of you, just read the given task. Take a minute to remember similar requests. If you remember one, let both requesters know about each other (of course, they should work for your company). That's it.

I have seen more than once that similar requests got different solutions. And most of the time, one solution is the better one. And if it is not better, it might be easier to implement… Got the point?


3rd point Take your time to prove the concept


You passed the second point and it is something new, so most of our requests start with a pre-technical concept (if not, there might be something wrong). This is a very important point to me. Take your time to prove the concept; of course the points above already imply that in a way, but you have to develop it, and you are the last one in the row. Once you say OK, it is written in stone, and it is a heavy lift to change the requester's (customer's) mind again.

 

4th point Expect the unexpected


A big problem is that some developers fear asking questions. You know, you got the concept proved, the development starts, and there it is: new questions appear out of it. Now is not the time to hide behind your computer. Contact the relevant persons and tell them what's going on. Nobody will kill you for asking the questions right now, but perhaps they will if you don't.


5th point Update the documents


A lot of requests don't match the concept in the end. You know, the consultant crosses your way, and during a coffee break another point is added to the request. Most of the time it isn't a big story, and therefore we (me too, I know that and I work on it) just implement it right away, and then it is gone. That's the point: no, it isn't! Weeks later you get a question about why something works the way it does and not like the stuff written in the concept. If you're lucky, you can remember more than just the good coffee, can you?

You see, it is messy work, but it's very important; and if you're a smart developer, most of the time others have to do it.

You just have to handle it


6th point Keep it simple



Plan your code before you write it.

That is the best way to see as much of the full picture as you can, and it enables you to keep it as simple as possible.

If you think it is a crux, contact another developer around you and let him/her take a look at it.

I'm pretty sure that most of the time there is an easier way to be found.


7th point Don’t reinvent the wheel.


Many tasks you want to do have been done before, and many of them are available prebuilt. That means: before you work through a lot of tables, take your time and search for helpful documents. The internet, especially SCN, and older requests (if you have a ticket system) might help you save a lot of time.

Searching is the most undervalued timesaver at the moment. (But that is a personal statement)



Some words between
Now there are a few points which don't exactly fit the timeline of things to do before changing code. But I felt they should be added here too, so I'll just go ahead.



8th point Know your tools


That means: use all the provided tools, and plan the things you need to plan at the beginning. Unit tests / Code Inspector variants / analysis tools / breakpoint IDs / test sets, and all the stuff I forgot to mention here. So take your time to work through your tools and play with them. You will be very surprised at what is possible. I can't tell you how often this has happened to me in the past… and will in the future.

 

9th point Always be a hungry developer


Use your time and try out new things; it doesn't matter if you need them right now. Read a lot and try things out. Just because you don't need something right now doesn't mean you won't need it next week.


10th point Feel like the guy using your stuff


I know, it is a very general statement, but I think development improves with such thinking. If you like the feeling of using it, I'm pretty sure the end user likes it too. That doesn't mean changing essential things written in the concept. Just talk with the relevant persons about it and share your thoughts about the usability.

 

That's it. I know, all the stuff I told you here is not new, but it is very important to remember it. That's why I shared this blog. At the moment I am working through Managing Custom Code in SAP. A big topic there is of course how to identify unnecessary or cloned code in your system.

The simplest possibility: Don’t create cloned or unnecessary stuff.

 

Thank you for reading to the end. Feel free to add points you think are important or just leave a comment.

 

Cheers

Florian

 

PS: I'm not sure, if crux is a word, but I think you understand the meaning of it.

PPS: Thank you Matthew Billingham for bringing that book under my pillow




Introduction

 

Fund Management documents are generated based on the budgeting process defined in customizing. The following budgeting processes are pre-configured in SAP:

 

Process                       Meaning
Enter                         Enter new budget data
Balanced Entry                Balanced budget increase
Carryover                     Carry over budget data from one year to another
Revenues Increasing Budget    Transfer budget data using a Revenues Increasing the Budget (RIB) rule
Transfer                      Transfer budget data
Return                        Return budget data
Supplement                    Supplement for budget data
Transfer Cover Eligibility    Transfer budget data using a manual cover eligibility rule

 

Fund Management documents are primarily managed using transaction code FMBB, the Budgeting Workbench. It covers all the activities: from preposting a Fund Management document to posting a preposted document, and from undoing a preposted document to holding a posted document.

 

fm_01.JPG


The scope of this document is handling the posting of Fund Management documents using ABAP, following an object-oriented approach for handling the relevant documents.


Overview

For handling Fund Management documents, SAP provides an object-oriented way through the class CL_FMKU_ENTRYDOC_HANDLER (FM entry document handler), which handles all the activities related to Fund Management documents. Its activities/methods include mass reversal check and post, getting document contents and posting, creating and deleting held documents, and FM prepost document handling.

 

fm_02.JPG

 

Digging Deep

 

Simple ABAP Objects concepts can be used to create the reference objects and use the relevant methods provided for Fund Management document handling. Let's walk through the basic steps, taking as example posting or undoing a preposted document.

 

1. Declaring the basic data for class before action.

 

DATA: REFOBJ             TYPE REF TO CL_FMKU_ENTRYDOC_HANDLER,
      F_DOCID            TYPE FMED_S_DOCID,
      E_F_HEADER         TYPE FMED_S_HEADER,
      E_T_LINES          TYPE FMED_T_LINES,
      E_T_LINE           TYPE FMED_S_LINE,
      E_FLG_FROM_ARCHIVE TYPE XFELD,
      I_REF_MSG          TYPE REF TO CL_BUBAS_APPL_LOG_CTX,
      I_LOG_HEADER       TYPE BAL_S_LOG,
      E_LOG_HANDLE       TYPE BALLOGHNDL.

 

2. Creating required objects for the functioning.

 

CREATE OBJECT REFOBJ
  EXPORTING
    I_FM_AREA = 'XXXX'.   "=> XXXX here is the FM area

CREATE OBJECT I_REF_MSG.

 

3. Making Log Handler ready for the call.

 

CALL METHOD I_REF_MSG->CREATE_LOG_HANDLE
  EXPORTING
    I_LOG_HEADER            = I_LOG_HEADER
*   I_ONLY_ERROR            =
  IMPORTING
    E_LOG_HANDLE            = E_LOG_HANDLE
  EXCEPTIONS
    LOG_HEADER_INCONSISTENT = 1
    OTHERS                  = 2.
IF SY-SUBRC <> 0.
* Implement suitable error handling here
ENDIF.

 

 

4. Reading the contents of an existing FM Document.

 

F_DOCID-FM_AREA = 'XXXX'.    "-> Same as above
F_DOCID-DOCYEAR = FDOCYEAR.  "-> From screen
F_DOCID-DOCNR   = DOCNR.     "-> From screen

CALL METHOD REFOBJ->DOC_GET_CONTENTS
  EXPORTING
    I_F_DOCID            = F_DOCID
*   I_FLG_ALLOW_ARCHIVED = ' '
  IMPORTING
    E_F_HEADER           = E_F_HEADER
    E_T_LINES            = E_T_LINES
    E_FLG_FROM_ARCHIVE   = E_FLG_FROM_ARCHIVE
  EXCEPTIONS
    NOT_FOUND            = 1
    OTHERS               = 2.
IF SY-SUBRC <> 0.
* Implement suitable error handling here
ENDIF.

 

5. After reading the FM document's contents, it is time to change the document state from preposted to posted or undone.

 

* 1  Posted
* 2  Preposted
* 3  Preposted posted
* 4  Preposted undone

E_F_HEADER-DOCSTATE = 4.

 

6. Calling Prepost document to be Posted or Prepost document to be Undone.

 

CALL METHOD REFOBJ->PREPOSTED_POST
  EXPORTING
    I_F_DOCID          = F_DOCID
    I_F_HEADER         = E_F_HEADER
    I_REF_MSG          = I_REF_MSG
    I_FLG_POPUP        = 'X'
    I_POST_W           = 'X'
  CHANGING
    C_T_LINES          = E_T_LINES
  EXCEPTIONS
    NOT_PREPOSTED      = 1
    ORIGINAL_NOT_FOUND = 2
    REF_MSG_INITIAL    = 3
    DOCNR_ERROR        = 4
    LOCK_ERROR         = 5
    OTHERS             = 6.
IF SY-SUBRC <> 0.
* Implement suitable error handling here
ENDIF.


Or


CALL METHOD REFOBJ->PREPOSTED_UNDO
  EXPORTING
    I_F_DOCID          = F_DOCID
    I_F_HEADER         = E_F_HEADER
    I_T_LINES          = E_T_LINES
    I_REF_MSG          = I_REF_MSG
  EXCEPTIONS
    NOT_PREPOSTED      = 1
    ORIGINAL_NOT_FOUND = 2
    ALREADY_LOCKED     = 3
    DOCNR_ERROR        = 4
    OTHERS             = 5.
IF SY-SUBRC <> 0.
* Implement suitable error handling here
ENDIF.

 

7. At the end, don't forget to commit the document after changes.

 

CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'.

IF SY-SUBRC IS INITIAL.
  MESSAGE S001(00) WITH 'Document:' DOCNR 'updated successfully.'.
ENDIF.

 

 

Concluding Remarks


ABAP Objects makes handling Fund Management documents in ABAP very simple. As seen in this case, the class CL_FMKU_ENTRYDOC_HANDLER provided by SAP proves to be a savior and makes the handling easy and smooth.

Recently we have been handling a customer complaint about a performance issue in their production system. We are investigating both the ABAP side and the DB side for potential improvements. Regarding the DB side, we are working together with the customer on possible DB index optimization.

 

On the other hand, we observed that the database tables causing the performance pain have a huge number of records (between 50 and 200+ million). We do see the possibility to archive some historical data; however, we are not sure whether the customer would get a performance gain after archiving.

So I did a small experiment to get a rough idea of the relationship between table record count and performance.

 

Test environment and test data preparation

 

Database: HANA DB, release 1.00.74.00.390550. The test was done on our internal system. It would make more sense to test in the customer's system, since the customer is currently using a non-HANA DB, but unfortunately I could not: I needed to write several reports to generate the massive test data, and my request for a user in their sandbox system didn't get approved. I hope I will eventually have the chance to repeat this experiment on the customer's sandbox system.

 

The table causing the performance issue is PPFTTRIGG, a standard table in SAP CRM that stores transaction data of action framework processing details.

In the customer system it has 75 million records. In ST12 its DB time always ranks first in the trace result.

 

For the table itself:

 

Red color: Key field

Blue color: Index field

Yellow color: Non index field

clipboard1.png

I copied the standard table into ten new Z tables with exactly the same technical settings, and filled each Z table with 10 million to 100 million records.

For the fields os_guid, appl_oid and medium_oid, I used function module GUID_CREATE to generate new GUIDs and assigned them.

For other fields like applctn, context and ttype, I read all possible values from their value tables and distributed them evenly over the table records.

 

Table name      Table records
ZPPFTTRIGG1     10 million
ZPPFTTRIGG2     20 million
ZPPFTTRIGG3     30 million
ZPPFTTRIGG4     40 million
ZPPFTTRIGG5     50 million
ZPPFTTRIGG6     60 million
ZPPFTTRIGG7     70 million
ZPPFTTRIGG8     80 million
ZPPFTTRIGG9     90 million
ZPPFTTRIGG10    100 million

 

The following measurements were done against the ten Z tables. Unless stated otherwise, all times recorded in this blog are measured in milliseconds.

 

Read via Index field, single record returned for each table

 

I use the index field appl_oid, which ensures that only a unique record is returned for each Z table. The time is measured in microseconds.
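 

The blog does not show the measurement code itself; a minimal sketch of how such a timing could be taken is shown below (assumptions: the field names and Z table as described above; GET RUN TIME returns microseconds, matching the unit used here):

DATA: lv_t0       TYPE i,
      lv_t1       TYPE i,
      lv_appl_oid TYPE os_guid,    " one of the generated test GUIDs
      ls_trigg    TYPE zppfttrigg1.

GET RUN TIME FIELD lv_t0.
SELECT SINGLE * FROM zppfttrigg1 INTO ls_trigg
  WHERE appl_oid = lv_appl_oid.    " read via index field, unique hit
GET RUN TIME FIELD lv_t1.
WRITE: / 'Elapsed microseconds:', lv_t1 - lv_t0.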

 

clipboard2.png

I ran the test repeatedly, and the result always shows that the time does not increase linearly with the number of table records.

 

Read via table key field OS_GUID

 

The time is measured in microseconds.

clipboard3.png

The result shows that reading via the table key field is on average a little faster than reading via an index field.

 

Read via Non-index fields, one record returned

 

In this case I query the ten tables via the non-index field MEDIUM_OID. Each query returns only one result for each Z table. The time is measured in microseconds.

clipboard4.png

All three kinds of read operations ensure that a unique record is returned by the SQL statement. Sorted by efficiency: read via key > read via index field > read via non-index field.

 

Read via Index fields, multiple and different number of records returned for each table

 

The read is performed on the index fields applctn and context.

clipboard5.png

The X axis represents the number of records returned via SELECT * INTO TABLE <record set> FROM <ztable> WHERE applctn = XX AND context = XXX for each Z table.

 

Read via Index fields, multiple and fixed number of records returned

 

Similar to the test above, but with UP TO 6000 ROWS added to force the SQL on each Z table to always return a fixed number of records.

clipboard6.png

Read via Non-index fields, multiple and fixed number of records returned

 

I perform the read to retrieve all records for each table which have the flag is_changed set to abap_true. Before testing, I manually changed the table entries to ensure that all ten tables have exactly the same number of records with is_changed set to abap_true.

clipboard7.png

Table entry mass update

 

The update operation is done on the non-index field is_changed. First I use SELECT UP TO XX to retrieve a given record set with is_changed = abap_true, and then use UPDATE <Z table> FROM TABLE <record set> to update the table. The execution time of the statement UPDATE <Z table> FROM TABLE <record set> is measured

when doing the mass change on the ten tables one by one in the same session, with <record set> containing 10000 entries to be updated:

clipboard8.png

clipboard9.png

clipboard10.png

Before my testing, my assumption was that the time consumed for updating would increase linearly with the number of table records.

However, according to this test result, it seems that the number of table records does not degrade the update performance, at least in HANA DB.

 

Summary

 

Based on this test result, it seems that in HANA DB there would be no significant performance improvement for read operations which only return a unique record, even if the number of table records is reduced dramatically (from 100 million to 10 million). The same holds true for update operations via UPDATE <DB table> FROM TABLE <record set>.

For read operations with multiple records returned, no matter whether index or non-index fields are used and whether a fixed or varying number of records is returned, the time consumed always increases almost linearly with the number of table entries. In this case the customer could still benefit from table archiving.


Hi All,

This blog refers to one of the threads (questions) on SCN (http://scn.sap.com/thread/3373098) where the requirement was to send an email with a PDF attachment.

As I had implemented the same for a similar requirement, I replied to that question and wanted to share the steps/code. I had documented all the steps and code in an MS Word document, but as SCN does not allow uploading Word documents, I asked readers to send an email to my personal email ID (mentioned in the reply) and I would send them the document.

 

Thereafter I kept receiving a lot of email requests to share the Word document. So I thought of creating this blog with all those steps and code, so that anyone who needs them can get them here on SCN itself, immediately, instead of waiting for my email reply.

 

I will link this blog to the original thread/question.

 

Here are the steps and code for the development of the class and method that send an email with a PDF attachment (the content of an internal table), and also one sample program to test it.

 

Step 1: Go to tcode SE24 and provide the class name

 

  class1.png

 

Properties:

 

2.png

 

Step 2: Declare the method SEND_PDF_FILE

 

PDF: Parameters:

 

3.png

 

Parameters list to copy and paste:

 

*----------------------------------------------------------------------------------------------------------------------------------------------------------*

IM_FILE_HEADER_TEXT    Importing  Type  STRING          Header text
IM_EMAIL_SUBJECT_LINE  Importing  Type  SO_OBJ_DES      Subject line
IM_EMAIL_BODY_TEXT     Importing  Type  BCSY_TEXT       Body text
IM_EMAIL_ADDRESS       Importing  Type  ZTP_SMTP_TAB    Email addresses
IM_EMAIL_DIST_NAME     Importing  Type  SO_OBJ_NAM      Email distribution name
IM_EMAIL_DIST_ID       Importing  Type  SO_OBJ_ID       Email distribution ID
IM_ATTACHMENT_NAME     Importing  Type  SOOD-OBJDES     Attachment name
IM_FILE_DELIMETER      Importing  Type  C               File delimiter
IM_ITAB_1              Importing  Type  ZTP_TEXT_TAB    Itab with data content
IM_ITAB_2              Importing  Type  SOLI_TAB        Itab with column headers
EX_SEND_TO_ALL         Exporting  Type  OS_BOOLEAN      Send email
IM_EMAIL_DO_NOT        Importing  Type  CHAR1           Flag: whether mail needs to be sent or not
IM_LAYOUT              Importing  Type  SYPAART         Layout mode: portrait or landscape
EX_PDF_TABLE           Exporting  Type  RCL_BAG_TLINE   Contains the converted data in PDF format

 

*-------------------------------------------------------------------------------------------------------------------------------------------------------------*

 

Custom types are highlighted; please find them below:

 

Step 3: Create the custom types used above:

Email Table Type :

4.png

 

Corresponding structure:

 

5.png

 

Data Content table type:

 

7.png

 

Corresponding structure:

 

8.png

 

Exceptions:

 

9.png

 

Step 4: Paste the code below into the method body.

 

*--------------------------------------------------------------------------------------------------------------------------------------------------*

Please find the code in the attached file , with file name : Source code _method.txt

*--------------------------------------------------------------------------------------------------------------------------------------------------*
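 

Since the full method source is only in the attached file, here is a minimal, independent sketch of the BCS pattern such a send method typically follows (this is not the author's code; the variable names and the pre-built PDF XSTRING lv_pdf are assumptions):

DATA: lo_send_request TYPE REF TO cl_bcs,
      lo_document     TYPE REF TO cl_document_bcs,
      lo_recipient    TYPE REF TO if_recipient_bcs,
      lt_body         TYPE bcsy_text,
      ls_line         TYPE soli,
      lt_pdf_hex      TYPE solix_tab,
      lv_pdf          TYPE xstring.   " PDF content built beforehand

ls_line-line = 'Please find the report attached as PDF.'.
APPEND ls_line TO lt_body.

* Convert the PDF xstring into a binary table for the attachment
CALL FUNCTION 'SCMS_XSTRING_TO_BINARY'
  EXPORTING
    buffer     = lv_pdf
  TABLES
    binary_tab = lt_pdf_hex.

TRY.
    lo_send_request = cl_bcs=>create_persistent( ).

*   Body as plain text document
    lo_document = cl_document_bcs=>create_document(
                    i_type    = 'RAW'
                    i_text    = lt_body
                    i_subject = 'Report with PDF attachment' ).

*   Attach the PDF
    lo_document->add_attachment(
      i_attachment_type    = 'PDF'
      i_attachment_subject = 'Report'
      i_att_content_hex    = lt_pdf_hex ).

    lo_send_request->set_document( lo_document ).

    lo_recipient = cl_cam_address_bcs=>create_internet_address(
                     'someone@example.com' ).
    lo_send_request->add_recipient( i_recipient = lo_recipient ).

    lo_send_request->send( ).
    COMMIT WORK.
  CATCH cx_bcs.
*   Implement suitable error handling here
ENDTRY.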

 

 

Step 5: Activate the method and class, and write the small executable test program below. Go to SE38 and create the program:

 

10.png

 

Code :

 

*-------------------------------------------------------------------------------------------------------------------------------------------------------------*

Please find the code in the attached file , with file name : Source code _program.txt

*--------------------------------------------------------------------------------------------------------------------------------------------------------------*

 

Step 6: Activate the program and execute it. Please change the email ID to your own to receive the email in Lotus Notes / Outlook / another configured email client.

 

Execute the program and test whether the email is sent successfully (check in transaction SOST):

 

12.png

 

Open the PDF file attached in the email and make sure the content is correct:

 

13.png

 

Please let me know if you face any issues while developing or testing it.

 

Thanks & Regards,

Hafizul Mollah,

SAP ABAP Consultant @Capgemini India Pvt. Ltd.

During our recent ECC implementation, business requested that the service material value (populated in user field AFVGD-USR00) should be auto-populated into a purchase requisition for an external operation.

 

The purchase requisition (PR) is created as soon as a production order with operation control keys relevant for external processing, such as 'EXTL' or 'EXTP', is saved. In standard SAP, the PR is created with the operation text as material description, and it does not carry a material code in the PR material code field. To provide better visibility to buyer and supplier, business requested that a PR created from a production order with external operations should address the following.

 

The purchase requisition should:

  • Auto-populate the material code in the PR with the material code maintained in the "Service Matl" field on the operation details screen for the external operation number in the production order (see the screenshots)

screenshot1.png

 

screenshot2.png

  • Copy into the item long text: the material short and long text from the material master, and the engineering revision level from Basic Data 1 of the material master.

screenshot3.png

 

The business also requested some pre-validations when auto-populating the material code.

For example:

1. The service material code should begin with ‘SM_’. 

2. Material should be available in MARA table.

3. Material and plant combination should exist in MARC table.

 

The validation errors should be visible on the operation details screen of the production order.
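 

A minimal sketch of what these checks might look like in the AT_SAVE validation (illustrative only; lv_service_matnr would come from AFVGD-USR00 and lv_werks from the order header):

DATA: lv_service_matnr TYPE matnr,
      lv_werks         TYPE werks_d,
      lv_matnr         TYPE mara-matnr.

* 1. Naming convention check
IF lv_service_matnr(3) <> 'SM_'.
* collect/raise error: service material must begin with 'SM_'
ENDIF.

* 2. Material must exist in MARA
SELECT SINGLE matnr FROM mara INTO lv_matnr
  WHERE matnr = lv_service_matnr.
IF sy-subrc <> 0.
* collect/raise error: material does not exist
ENDIF.

* 3. Material/plant combination must exist in MARC
SELECT SINGLE matnr FROM marc INTO lv_matnr
  WHERE matnr = lv_service_matnr
    AND werks = lv_werks.
IF sy-subrc <> 0.
* collect/raise error: material not maintained in the plant
ENDIF.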

 

It took me some time to debug the end-to-end transaction to find appropriate enhancement points for the different business requirements. I am listing the enhancements used in the table below.

I hope this is useful for others attempting to map similar requirements when a PR is automatically created/changed as the production order is created/changed.

 

Enhancement Details:

1. User exit EXIT_SAPLCOVG_001
  • We used this exit to collect all modified external operation records.
  • This exit is triggered only when an external operation is modified.
  • However, there were a few scenarios in which this exit is not triggered at all (e.g. when there is no change to an external operation, or when an external operation is deleted), and hence we also implemented EXIT_SAPLCOZF_001.
  • Populate the external operation data obtained in this exit into a global internal table using a custom method.

2. Enhancement spot ES_SAPLCOZF, point \PR:SAPLCOZF\EX:SAPLCOZF_OLC_002\EI
  • This enhancement spot is triggered when an external operation is deleted.
  • Export the deleted external operation record (AFVGD_IMP) and the respective deletion flag (DELKNZ_BANF) using an ABAP memory ID.

3. User exit EXIT_SAPLCOZF_001
  • This exit is triggered once for each external operation.
  • We used this exit to collect external operation records which cannot be fetched using exit EXIT_SAPLCOVG_001.
  • However, this exit is not triggered when an existing external operation record is modified; in that case exit EXIT_SAPLCOVG_001 is the one to use.
  • Also import the deleted external operations using an ABAP memory ID (this data is exported by the 2nd enhancement).
  • Populate the external operation data obtained in this exit into a global internal table using a custom method.

4. BAdI WORKORDER_UPDATE, method AT_SAVE
  • This method is used for adding any custom validation on external operation fields.
  • Bypass the validation for deleted external operations with the help of the deletion flag.
  • Fetch all the external operations from the global internal table populated by the two exits above.
  • Appropriate error messages can be displayed using this BAdI method.
  • If all the external operations are valid, export the external operation internal table using an ABAP memory ID for further processing.

5. Enhancement spot ES_SAPLEBNE, point \PR:SAPLEBNE\EX:LEBNEF08_01\EI
  • This enhancement spot is triggered before execution of the standard FM 'ME_UPDATE_REQUISITION', which updates the PR data.
  • Import the valid external operation table using an ABAP memory ID.
  • Modify the purchase requisition table XEBAN using the external operation data.
  • In our business scenario we modified the PR item material number and short description using external operation fields.

6. BAdI WORKORDER_UPDATE, method IN_UPDATE
  • At this point the PR is already created, and hence this method can be used to populate the various long texts available in the PR.
  • In our business scenario, we used this method to update the PR item long text based on the external operation records. The hand-over between all these enhancements via ABAP memory is sketched below.
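 

The hand-over between these enhancements relies on ABAP memory; the pattern is roughly as follows (the memory ID and table type are illustrative):

DATA lt_ext_operations TYPE STANDARD TABLE OF afvgd.

* Sender side (user exits / BAdI AT_SAVE):
EXPORT lt_ext_operations TO MEMORY ID 'Z_EXT_OPS'.

* Receiver side (enhancement spot before ME_UPDATE_REQUISITION):
IMPORT lt_ext_operations FROM MEMORY ID 'Z_EXT_OPS'.
IF sy-subrc = 0.
  FREE MEMORY ID 'Z_EXT_OPS'.
* ... modify XEBAN from lt_ext_operations ...
ENDIF.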

 

This is a continuation of my blog A Way of Reading Huge Excel Files in which I described the usage of the sXML parser for an alternative Excel file reader with a massively reduced memory footprint. I followed the suggestion of some of my blog readers (Kay Streubel, Rainer Hübenthal) to add the code to the abap2xlsx repository at ivanfemia/abap2xlsx · GitHub. The abap2xlsx package now contains a subclass ZCL_EXCEL_READER_HUGE_FILE of the existing reader class ZCL_EXCEL_READER_2007.


I owe special thanks to Ivan Femia for helping me with the first steps into the abap2xlsx realm. :-)


Only the sheet data and shared string table are parsed differently in ZCL_EXCEL_READER_HUGE_FILE, as these usually are the heavy-load parts of an Excel file. The other parts - like style details - continue to be analyzed in the superclass and are therefore parsed with iXML parsers, as before.

 

How to Use It

 

The public interface - and therefore the usage - is the same as for ZCL_EXCEL_READER_2007. If you have an XSTRING with the .xlsx file content, you work with the LOAD( ) method; if you have a filename and want abap2xlsx to read it (from application or presentation server), you would use LOAD_FILE( ).

 

The test report ZZ_TEST_EXCEL_READER (code below) shows the idea.

 

 

It's Just Another Object Type

 

You can use the report to load an excel file from the presentation or application server, doing the parsing with the "standard" reader or with the new "huge file reader". As you see, the difference between the two reader versions is only in the CREATE OBJECT lo_reader TYPE ... statement.

 

Decision to Read Files From Presentation Server or Application Server

 

The possibility to decide between application server and presentation server explicitly is a new feature. Formerly, this distinction was made automatically, depending on the SY-BATCH flag. This will still be the default value for the new optional import parameter I_FROM_APPLSERVER. But now, the distinction can also be made by the caller.

 

It's good practice to have the reader as a local object in its own, separate code unit (here: a form routine). This way, all the memory that was necessary during the read process will be freed once the excel object has been built and the reading unit is left.

 

 

report zz_test_excel_reader.
parameters:
   p_file  type string lower case default `C:\temp\test.xlsx`,
   p_appl  type flag,
   p_huge  type flag.
start-of-selection.
   perform start.
* ---
form start.
   data: lo_excel type ref to zcl_excel,
         lo_excpt type ref to zcx_excel.
   try.
       perform read using p_file p_appl p_huge
                    changing lo_excel.
       break-point. " <<<--- Inspect lo_excel here
* Parsed data are usually contained in the following ITAB:
* LO_EXCEL->WORKSHEETS->WORKSHEETS->COLLECTION[1]->SHEET_CONTENT
     catch zcx_excel into lo_excpt.
       message lo_excpt type 'I'.
   endtry.
endform.                    "start
* ---
form read using i_filename           type csequence
                 i_from_applserver    type flag
                 i_huge               type flag
           changing e_excel           type ref to zcl_excel
           raising zcx_excel.
* Use the reader instance as a local variable in a separate unit,
* so its memory will be released after leaving the unit.
   data: lo_reader type ref to zif_excel_reader.
   if i_huge eq 'X'.
     create object lo_reader type zcl_excel_reader_huge_file.
   else.
     create object lo_reader type zcl_excel_reader_2007.
   endif.
   e_excel = lo_reader->load_file( i_filename         = i_filename
                                    i_from_applserver = i_from_applserver ).
endform.                    "read
 

The Main Loop

 

An Excel workbook is saved as a zip archive containing several XML files. The files sheet1.xml, sheet2.xml etc. contain the actual worksheet data, among other things like style definitions. Here is a typical example of a sheet.xml: As you see, the data start with a separate element named sheetData; in this example, the cells contain pointers to the shared string table (which is defined in another file sharedStrings.xml).

 

sheetData.png

 

The parser (we use "token-based sXML parsing", see ABAP Keyword Documentation) for this file can safely skip forward until it meets the opening of element sheetData. Once this is reached, it should read elements and their contents one after another, as follows:

  • When an opening c element is detected, a new cell data structure has to be filled from the element c's attribute values;
  • When a text content of an element is detected, this content should usually be treated as value of the current cell
    • in some cases, however, the value has to be fetched by index from the shared string table
  • When a closing c element is detected, the current cell data should be added into the ABAP worksheet representation
  • When a closing sheetData element is detected, the loop should be left.

 

And this is the above text written in ABAP:

 

method read_worksheet_data.
   data: ls_cell   type t_cell.
* Skip to <sheetData> element
   skip_to(  iv_element_name = `sheetData`  io_reader = io_reader ).
* Main loop: Evaluate the <c> elements and its children
   while io_reader->node_type ne c_end_of_stream.
     io_reader->next_node( ).
     case io_reader->node_type.
       when c_element_open.
         if io_reader->name eq `c`.
           ls_cell = fill_cell_from_attributes( io_reader ).
         endif.
       when c_node_value.
         case io_reader->name.
           when `f`.
             ls_cell-formula = io_reader->value.
           when `v`.
             if ls_cell-datatype eq `s`.
               ls_cell-value = get_shared_string( io_reader->value ).
             else.
               ls_cell-value = io_reader->value.
             endif.
           when `t` or `is`.
             ls_cell-value = io_reader->value.
         endcase.
       when c_element_close.
         case io_reader->name.
           when `c`.
             put_cell_to_worksheet( is_cell = ls_cell io_worksheet = io_worksheet ).
           when `sheetData`.
             exit.
         endcase.
     endcase.
   endwhile.
endmethod.
 

A caveat for the sXML parser is necessary: the instance attributes like name, value, etc. are never initialized. The caller has to keep track of which values are outdated, by watching the three events "element open", "element close" and "node value".

 

When using the sXML reader, special caution is necessary for elements with mixed content. In the example,

 

<x><y>v</y>w</x>
the following events will be raised:


 

Event co_nt_element_open  name: x   value:
Event co_nt_element_open  name: y   value:
Event co_nt_value         name: y   value: v
Event co_nt_element_close name: y   value: v
Event co_nt_value         name: y   value: w
Event co_nt_element_close name: x   value: w
Event co_nt_final         name: x   value: w
 

Observe that when the event for value 'w' is raised, the name attribute still contains 'y', although y has already been closed and the name is thus an outdated value. The sXML parser doesn't keep track of this outdating - if it did, it would have to keep track of the document structure (for example by maintaining a stack of the parents of the current node). In the sXML philosophy, however, everything that requires knowledge of the document structure is left to the caller; the reading process is restricted to the detection of opening and closing elements, of attribute values, text nodes, etc.
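 

If a caller does need the current parent chain, it can maintain such a stack itself, along these lines (a sketch, not part of the reader class):

data: lt_parents type standard table of string.

case io_reader->node_type.
  when if_sxml_node=>co_nt_element_open.
    append io_reader->name to lt_parents.         " push current element
  when if_sxml_node=>co_nt_element_close.
    delete lt_parents index lines( lt_parents ).  " pop closed element
endcase.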

 

Some Remarks on Tests

 

A parser is an ideal candidate for unit tests, since its output depends only on the input stream - there are usually no dependencies on other states (database, API calls, ...). So I implemented all code of the reader using unit tests.

 

Testing Private Methods

 

The huge file reader has no public methods of its own - the only public methods are inherited from ZCL_EXCEL_READER_2007, which is out of the scope of my tests. For this reason, I decided to test private methods. There is a price for this decision: we are not as free as we could be in renaming and deleting private methods, since all such changes also require the test classes to be adapted.

 

To make private methods testable, we start the unit test section with a declaration of friendship:

 

class lcl_test definition deferred.
class zcl_excel_reader_huge_file definition local friends lcl_test.
Reading a Number

 

With some help methods in the test class, I let the code of a test look more intentional and to-the-point: everybody can read from the code of the following test method that the cell datatype "number" is under test: a cell with numerical content (17) should propagate to the worksheet in the expected way:

  method test_number.
    data lo_reader type ref to if_sxml_reader.
    lo_reader = get_reader(
      `<c r="A1" t="n"><v>17</v></c>`
      ).
    out->read_worksheet_data( io_reader = lo_reader io_worksheet = worksheet ).
    assert_value_equals( `17` ).
    assert_datatype_equals( `n` ).
  endmethod.                    "test_number
 

Here, OUT is the "object under test". As usual for unit tests, a new reader instance is created for each test method in the setup method. The same holds for the worksheet instance, which holds the result.

 

Many tests just call the private central method read_worksheet_data( ) and then inspect the produced worksheet instance. This method imports an instance of an sXML reader to read the input data, and an instance of ZCL_EXCEL_WORKSHEET to write the result. Apart from the expected changes in the worksheet's state, there are no side effects on other data. And the only further external dependency is that the table of shared strings (parsed earlier from the sharedStrings.xml file) should already be available in the private attribute shared_strings, a simple list of strings (of type STRINGTAB).

 

In order to restrict the code of the test methods to the essential, specific parts, the get_reader( ) method only receives the XML code snippet for the cell to be parsed (or multiple cells in a row): before creating the reader instance, this nucleus is wrapped into a document skeleton to emulate a full XML document:

 

 

  method get_reader.
    data: lv_full type string.
    concatenate `<root><sheetData><row>` iv_xml `</row></sheetData></root>` into lv_full.
    eo_reader = cl_sxml_string_reader=>create( cl_abap_codepage=>convert_to( lv_full ) ).
  endmethod.                    "get_reader
 

On Shared Strings and the Boundary Values Problem

 

As a local friend, the test class is allowed to pre-fill the shared_strings table attribute. Therefore, we can pretend to have already parsed the sharedStrings.xml file by adding some values to the shared_strings table. Afterwards, we call the reader and check that a cell with a string reference is correctly resolved. Here is the test for correct reading of the string table:

 

 

  method test_shared_string.
    data lo_reader type ref to if_sxml_reader.
    append `Test1` to out->shared_strings.
    append `Test2` to out->shared_strings.
    lo_reader = get_reader(
      `<c r="A1" t="s"><v>1</v></c>`
      ).
    out->read_worksheet_data( io_reader = lo_reader io_worksheet = worksheet ).
    assert_value_equals( `Test2` ).
    assert_datatype_equals( `s` ).
  endmethod.                    "test_shared_string
 

 

For shared strings, the content of the <v> element is the 0-based index for reading the string table. Therefore, the cell content in this example has to be 'Test2' after parsing.

 

In a former version, I had been too minimalistic: I used only one entry in the string table, and the <v> element contained 0, not 1, to point to this one entry.


The test in its former version passed, but it didn't detect a bug in the code, due to the "boundary value" anti-pattern: 0 is a special value, since it is also the initial value for integer variables. The bug was that the index was not transferred at all to the variable holding it, which thus remained at its initial value, zero. Only by extending the test with a second string and working with the non-special index value 1 could I reproduce the bug - and then fix it.

 

Test Coverage

 

While working on the code and testing it, it's good to have a look at the test coverage. It shows possibly uncovered code sections in which a bug might be hiding.

 

coverge.png

Of course, some parts are intentionally uncovered. In this class, the methods load_shared_strings( ), load_worksheet( ) and get_sxml_reader( ) have zero coverage on purpose, since they provide data from outside. I only test how the class works on these data. But the rest of the methods have 100% statement coverage.

 

 

The branch coverage (a column which is hidden in the coverage analyzer's default layout) shows that some conditions could still be added. Indeed, I have no particular tests for invalid sheet.xml documents - for example, I always expect the sheetData element to be there, as the Open XML specification requires.

 

As the boundary value problem above shows, full statement coverage - or even full branch coverage - is not sufficient for detecting all bugs sleeping in the code. An instinct for potential bugs is necessary while writing the test method. It's an interesting change of hat: when writing test methods, we are supposed to turn into the most fervent critics of ourselves!


SAP provides an inbuilt text symbol analysis tool.

 

This tool is used for the following purposes:

a) To delete text symbols which are not used in the report.

b) To create text symbols that need to be added to the text pool.

c) To correct text symbols which are defined repeatedly or differently.

 

Let's learn more about the ABAP text symbol analysis tool.

 

I have created a test report Z_TEXTSYMBOL_TEST to explain the tool, as below:


 

In this report you can see there are 10 texts. "Text1", "Text5", "Text6" and "Text9" have text IDs 001, 005, 006 and 009 respectively. However, "Text6" is defined differently in the text pool, as below:
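 

In the report source, such texts appear either as literals with a text ID or as direct references to the text pool, e.g. (illustrative):

WRITE: / 'Text1'(001),  "literal with text ID 001, compared against the text pool
       / text-005.      "direct reference to text symbol 005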


   


 

You can also notice that the text "Unused Text" with text ID 011 is not used in the report.

 

 

a) Delete unused text from text pool:


     Using the "Delete Row" button we can delete a text. However, for large reports it is difficult to find which text symbols are not used. To delete unused text symbols from the text pool, go to SE38, type the report name (e.g. Z_TEXTSYMBOL_TEST), select the "Text elements" radio button and click on Change.


 

 

You can see the change text symbol screen. Now click on "Compare Text Symbols" as shown in the figure below, or press Ctrl+Shift+F7, to start the ABAP text symbol analysis tool.



 

You can see the ABAP Text Symbol Analysis screen as below.



You can also access this tool via transaction SE32: go to SE32 and click on "Analysis".

 

 

 

 

Now select the first radio button, i.e. "Text symbols that can be deleted from the text pool", and click on the "Edit" button.

 

 

You can see below screen.

 

Now check the text symbols which you want to delete from the text pool and click on Delete Symbol. You will get the message "Text symbols may still be used elsewhere. Delete anyway?". Click "Yes" to delete the selected texts.

 

 

If you select all the unused texts and delete them, you will see the screen below, which means there are now no unused text symbols in the text pool.



 

 

b) To create text symbols that need to be added to the text pool:


In the same way, select the second radio button, "Text symbols that need to be added to the text pool", and click on "Edit". You will then see all the texts with text IDs (e.g. "Text1" (001)) that are present in the report but not added to the text pool, as below:

 

 

Now check the texts which you want to add to the text pool and click on Create, as below.



 

After creating the selected texts in the text pool, you will see the screen below, which shows that there are no more texts with text IDs in the report that are not maintained in the text pool.

 

 

 

 

c)  To correct text symbols which are defined repeatedly or differently:

 

Select the third option, "Text symbols defined repeatedly/differently in program", and click on Edit to see texts which are maintained differently in the report than in the text pool, as below:

 

 

You can see that for text ID 006 there are two texts. The unmarked text will be replaced with the marked text.

Here "Text6" is marked, as we want to keep "Text6" for text ID 006. Click on Replace, as shown below, to replace "Defined Differently in report" with "Text6".



 

Now you can see that all inconsistencies in the text pool have been corrected, as below.



 

Now click on Save to save the changes made in the text pool. If you go back, you can see the updated text pool as below:

Hi All,

Here is my code to generate an ALV tree with up to 5 hierarchy levels dynamically (or fewer, according to your needs, with no change in the code), changing just the parameters of one PERFORM. I don't know if there is an easier or better way to do it, but this one works.

 

If you know another way, please share.

 

So follow the code:

 

Declarations:

  • My output table and work area are called it_output and wa_output, respectively.
  • My field catalog is declared as fieldcat TYPE lvc_t_fcat.

 

DATA: tree1 TYPE REF TO cl_gui_alv_tree.

DATA: v_icon1 TYPE lvc_s_layi,
      v_icon2 TYPE lvc_s_layi,
      v_icon3 TYPE lvc_s_layi,
      v_icon4 TYPE lvc_s_layi,
      v_icon5 TYPE lvc_s_layi.

FIELD-SYMBOLS: <field1>    TYPE any,
               <field2>    TYPE any,
               <field3>    TYPE any,
               <field4>    TYPE any,
               <field5>    TYPE any,
               <reffield1> TYPE any,
               <reffield2> TYPE any,
               <reffield3> TYPE any,
               <reffield4> TYPE any,
               <reffield5> TYPE any.

 

 

Logic:

  • The hierarchic field parameter must be named EXACTLY like the field declared in your output table.
  • Ref tab and ref field are used to fetch the field information (data element). <reffieldX> is used to compare the previous value with the current one and either create a new node or keep the information grouped. It's really important to get this right! You can't compare bananas to apples, right?!
  • If you want just 3 levels, pass empty values for #4 and #5 in the PERFORM below (keep the '' though). As simple as that.

 

*                             Hierarchic Field / Ref Tab / Ref Field
PERFORM f_field_symbol USING: 'TYPED'   'T6B1T'  'VTEXT'    "1
                              'DESC'    'TB038B' 'TEXT'     "2
                              'KNUMA'   'KONA'   'KNUMA'    "3
                              'KSCHL'   'VAKEVB' 'KSCHL'    "4
                              'KOTABNR' 'VAKEVB' 'KOTABNR'. "5

*&---------------------------------------------------------------------*
*&      Form  F_FIELD_SYMBOL
*&---------------------------------------------------------------------*
*       text
*----------------------------------------------------------------------*
*      -->P_0299   text
*      -->P_0300   text
*      -->P_0301   text
*----------------------------------------------------------------------*
FORM f_field_symbol USING field1 table1 param1
                          field2 table2 param2
                          field3 table3 param3
                          field4 table4 param4
                          field5 table5 param5.

  DATA: r_elemdescr  TYPE REF TO cl_abap_elemdescr,
        r_field      TYPE REF TO data,
        data_element TYPE dd04d-rollname.

  IF field1 IS NOT INITIAL.
    ASSIGN COMPONENT field1 OF STRUCTURE wa_output TO <field1>.
    CHECK sy-subrc = 0.

    PERFORM f_fieldinfo USING table1 param1
                        CHANGING data_element.

    r_elemdescr ?= cl_abap_elemdescr=>describe_by_name( data_element ).
    CREATE DATA r_field TYPE HANDLE r_elemdescr.
    ASSIGN r_field->* TO <reffield1>.
  ENDIF.

  IF field2 IS NOT INITIAL.
    ASSIGN COMPONENT field2 OF STRUCTURE wa_output TO <field2>.
    CHECK sy-subrc = 0.

    PERFORM f_fieldinfo USING table2 param2
                        CHANGING data_element.

    r_elemdescr ?= cl_abap_elemdescr=>describe_by_name( data_element ).
    CREATE DATA r_field TYPE HANDLE r_elemdescr.
    ASSIGN r_field->* TO <reffield2>.
  ENDIF.

  IF field3 IS NOT INITIAL.
    ASSIGN COMPONENT field3 OF STRUCTURE wa_output TO <field3>.
    CHECK sy-subrc = 0.

    PERFORM f_fieldinfo USING table3 param3
                        CHANGING data_element.

    r_elemdescr ?= cl_abap_elemdescr=>describe_by_name( data_element ).
    CREATE DATA r_field TYPE HANDLE r_elemdescr.
    ASSIGN r_field->* TO <reffield3>.
  ENDIF.

  IF field4 IS NOT INITIAL.
    ASSIGN COMPONENT field4 OF STRUCTURE wa_output TO <field4>.
    CHECK sy-subrc = 0.

    PERFORM f_fieldinfo USING table4 param4
                        CHANGING data_element.

    r_elemdescr ?= cl_abap_elemdescr=>describe_by_name( data_element ).
    CREATE DATA r_field TYPE HANDLE r_elemdescr.
    ASSIGN r_field->* TO <reffield4>.
  ENDIF.

  IF field5 IS NOT INITIAL.
    ASSIGN COMPONENT field5 OF STRUCTURE wa_output TO <field5>.
    CHECK sy-subrc = 0.

    PERFORM f_fieldinfo USING table5 param5
                        CHANGING data_element.

    r_elemdescr ?= cl_abap_elemdescr=>describe_by_name( data_element ).
    CREATE DATA r_field TYPE HANDLE r_elemdescr.
    ASSIGN r_field->* TO <reffield5>.
  ENDIF.

  PERFORM build_sort_table USING field1 field2 field3 field4 field5.

ENDFORM.                    " F_FIELD_SYMBOL

*&---------------------------------------------------------------------*
*&      Form  F_FIELDINFO
*&---------------------------------------------------------------------*
*       text
*----------------------------------------------------------------------*
*      -->P_TABLE1  text
*      -->P_PARAM1  text
*      <--P_DATA_ELEMENT  text
*----------------------------------------------------------------------*
FORM f_fieldinfo USING    table
                          param
                 CHANGING data_element.

  DATA: BEGIN OF dfies OCCURS 100.
          INCLUDE STRUCTURE dfies.
  DATA: END OF dfies.

  DATA: tablenm TYPE ddobjname,
        fieldnm TYPE dfies-fieldname.

  MOVE table TO tablenm.
  MOVE param TO fieldnm.

***  Fname Description
  IF NOT fieldnm IS INITIAL.
    CALL FUNCTION 'DDIF_FIELDINFO_GET'
      EXPORTING
        tabname        = tablenm
        fieldname      = fieldnm
        langu          = sy-langu
      TABLES
        dfies_tab      = dfies
      EXCEPTIONS
        not_found      = 1
        internal_error = 2
        OTHERS         = 3.
    IF sy-subrc = 0.
      READ TABLE dfies INDEX 1.
      data_element = dfies-rollname.
    ENDIF.
  ENDIF.

ENDFORM.                    " F_FIELDINFO


Creating hierarchies

  • Basically, it changes according to what you passed as parameters to PERFORM f_field_symbol.
  • Each node has its own PERFORM because, if you want to fill the 2nd node with something other than the standard calculations (sum, avg, max, min, ... - flag H_FTYPE in the field catalog and call method tree1->update_calculations; see the sketch below), you need to use the same work area type as your detail line and arrange it to show what you want.
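
As a hedged illustration of the H_FTYPE remark above (the column name KBETR is only an example, not from the original post):

" flag one field-catalog column for automatic totals per hierarchy node
FIELD-SYMBOLS: <fcat> TYPE lvc_s_fcat.

READ TABLE fieldcat ASSIGNING <fcat> WITH KEY fieldname = 'KBETR'.
IF sy-subrc = 0.
  <fcat>-h_ftype = 'SUM'.   " alternatives: 'AVG', 'MAX', 'MIN'
ENDIF.

" after all nodes have been added:
CALL METHOD tree1->update_calculations.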

 

*&---------------------------------------------------------------------*
*&      Form  CREATE_HIERARCHY
*&---------------------------------------------------------------------*
*       text
*----------------------------------------------------------------------*
*  -->  p1        text
*  <--  p2        text
*----------------------------------------------------------------------*
FORM create_hierarchy.

* add data to tree
  DATA: l_last_key  TYPE lvc_nkey,
        l_kotabnr   TYPE vakevb-kotabnr,
        l_knuma     TYPE vakevb-knuma,
        l_desc(100) TYPE c,
        l_kschl     TYPE vakevb-kschl,
        l_add       TYPE c.

  DATA: l_param_key  TYPE lvc_nkey,
        l_param2_key TYPE lvc_nkey,
        l_param3_key TYPE lvc_nkey,
        l_param4_key TYPE lvc_nkey,
        l_param5_key TYPE lvc_nkey.

  LOOP AT it_output INTO wa_output.

**** LEVEL 1
    IF <field1> IS ASSIGNED.
      IF <field2> IS ASSIGNED. "level 2 exists
        IF <field1> IS NOT INITIAL.
          IF <reffield1> NE <field1>.
            PERFORM level1 USING ''
                                 v_icon1-fieldname
                        CHANGING l_param_key.

            IF <field2> IS ASSIGNED.
              CLEAR: <reffield2>.
            ENDIF.

            IF <field3> IS ASSIGNED.
              CLEAR: <reffield3>.
            ENDIF.

            IF <field4> IS ASSIGNED.
              CLEAR: <reffield4>.
            ENDIF.

            APPEND l_param_key TO it_expand_nodes.

          ENDIF.
        ENDIF.
      ELSE.
        "if the next level is empty, finish the hierarchy with a detail line
        PERFORM add_complete_line USING '' 2
                               CHANGING l_last_key.
        l_add = 'X'.
      ENDIF.
    ENDIF.

**** LEVEL 2
    IF <field2> IS ASSIGNED.
      IF <field3> IS ASSIGNED. "level 3 exists
        IF <field2> IS NOT INITIAL.
          IF <reffield2> NE <field2>.
            PERFORM level2 USING l_param_key
                                 2
                                 v_icon2-fieldname
                        CHANGING l_param2_key.

            IF <field3> IS ASSIGNED.
              CLEAR: <reffield3>.
            ENDIF.

            IF <field4> IS ASSIGNED.
              CLEAR: <reffield4>.
            ENDIF.

            APPEND l_param2_key TO it_expand_nodes.

          ENDIF.
        ENDIF.
      ELSE.
        "if the next level is empty, finish the hierarchy with a detail line
        PERFORM add_complete_line USING l_param_key 3
                               CHANGING l_last_key.
        l_add = 'X'.
      ENDIF.
    ENDIF.

*** LEVEL 3
    IF <field3> IS ASSIGNED.
      IF <field4> IS ASSIGNED. "level 4 exists
        IF <field3> IS NOT INITIAL.
          IF <reffield3> NE <field3>.
            PERFORM level3 USING l_param2_key
                                 3
                                 v_icon3-fieldname
                        CHANGING l_param3_key.

            IF <field4> IS ASSIGNED.
              CLEAR: <reffield4>.
            ENDIF.

            APPEND l_param3_key TO it_expand_nodes.

          ENDIF.
        ENDIF.
      ELSE.
        "if the next level is empty, finish the hierarchy with a detail line
        PERFORM add_complete_line USING l_param2_key 4
                               CHANGING l_last_key.
        l_add = 'X'.
      ENDIF.
    ENDIF.

*** LEVEL 4
    IF <field4> IS ASSIGNED.
      IF <field5> IS ASSIGNED. "level 5 exists
        IF <field4> IS NOT INITIAL.
          IF <reffield4> NE <field4>.
            PERFORM level4 USING l_param3_key
                                 4
                                 v_icon4-fieldname
                        CHANGING l_param4_key.

            IF <field5> IS ASSIGNED.
              CLEAR: <reffield5>.
            ENDIF.

            APPEND l_param4_key TO it_expand_nodes.

          ENDIF.
        ENDIF.
      ELSE.
        "if the next level is empty, finish the hierarchy with a detail line
        PERFORM add_complete_line USING l_param3_key 5
                               CHANGING l_last_key.
        l_add = 'X'.
      ENDIF.
    ENDIF.

*** LEVEL 5
    IF <field5> IS ASSIGNED.
      IF <field5> IS NOT INITIAL.
        IF <reffield5> NE <field5>.
          PERFORM level5 USING l_param4_key
                               5
                               v_icon5-fieldname
                      CHANGING l_param5_key.

          APPEND l_param4_key TO it_expand_nodes.

          PERFORM add_complete_line USING l_param5_key 6
                                 CHANGING l_last_key.
          l_add = 'X'.

        ENDIF.
      ENDIF.
    ENDIF.

    IF l_add IS INITIAL.
      PERFORM add_complete_line USING l_param5_key 5
                             CHANGING l_last_key.
    ENDIF.

    IF <field1> IS ASSIGNED.
      <reffield1> = <field1>.
    ENDIF.

    IF <field2> IS ASSIGNED.
      <reffield2> = <field2>.
    ENDIF.

    IF <field3> IS ASSIGNED.
      <reffield3> = <field3>.
    ENDIF.

    IF <field4> IS ASSIGNED.
      <reffield4> = <field4>.
    ENDIF.

    IF <field5> IS ASSIGNED.
      <reffield5> = <field5>.
    ENDIF.

    CLEAR: l_add.

  ENDLOOP.

ENDFORM.                    " CREATE_HIERARCHY
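
The original post stops at the loop. As an assumption, something along these lines usually follows it, sending the nodes to the frontend and expanding the keys collected in it_expand_nodes:

" refresh totals, push the tree to the frontend, expand collected nodes
CALL METHOD tree1->update_calculations.
CALL METHOD tree1->frontend_update.
CALL METHOD tree1->expand_nodes
  EXPORTING
    it_node_key = it_expand_nodes.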

 

*&---------------------------------------------------------------------*
*&      Form  LEVEL1
*&---------------------------------------------------------------------*
*       text
*----------------------------------------------------------------------*
*      -->P_WA_DATA  text
*      -->P_L_PARAM2_KEY  text
*      -->P_3      text
*      -->P_1748   text
*      <--P_L_PARAM3_KEY  text
*----------------------------------------------------------------------*
FORM level1 USING p_relat_key TYPE lvc_nkey
                  p_icon
            CHANGING p_node_key.

  DATA: l_node_text TYPE lvc_value,
        relat       TYPE int4,
        wa_refe     TYPE tab_type,
        wa_level    TYPE ty_output.

* set item-layout
  DATA: lt_item_layout TYPE lvc_t_layi,
        ls_item_layout TYPE lvc_s_layi.

  DATA: ls_node TYPE lvc_s_layn.
  ls_node-n_image   = space.
  ls_node-exp_image = space.

  ls_item_layout-t_image   = p_icon.
  ls_item_layout-style     = cl_gui_column_tree=>style_intensified.
  ls_item_layout-fieldname = tree1->c_hierarchy_column_name.
  APPEND ls_item_layout TO lt_item_layout.

* add node
  l_node_text    = <field1>.
  wa_level-level = 1.

  ls_node-isfolder = 'X'.

  CALL METHOD tree1->add_node
    EXPORTING
      i_relat_node_key     = p_relat_key
      i_relationship       = cl_gui_column_tree=>relat_last_child
      i_node_text          = l_node_text
      is_outtab_line       = wa_level
      is_node_layout       = ls_node
      it_item_layout       = lt_item_layout
    IMPORTING
      e_new_node_key       = p_node_key
    EXCEPTIONS
      relat_node_not_found = 1
      node_not_found       = 2
      OTHERS               = 3.

ENDFORM.                    " LEVEL1

 


*&---------------------------------------------------------------------*
*&      Form  LEVEL2
*&---------------------------------------------------------------------*
*       text
*----------------------------------------------------------------------*
*      -->P_L_PARAM1_KEY  text
*      -->P_2      text
*      -->P_1721   text
*      <--P_L_PARAM2_KEY  text
*----------------------------------------------------------------------*
FORM level2 USING p_relat_key TYPE lvc_nkey
                  hierarchy
                  icon
            CHANGING p_node_key TYPE lvc_nkey.

* set item-layout
  DATA: lt_item_layout TYPE lvc_t_layi,
        l_node_text    TYPE lvc_value,
        ls_item_layout TYPE lvc_s_layi,
        relat          TYPE int4,
        wa_level       TYPE ty_output.

  ls_item_layout-t_image   = icon.
  ls_item_layout-style     = cl_gui_column_tree=>style_intensified.
  ls_item_layout-fieldname = tree1->c_hierarchy_column_name.
  APPEND ls_item_layout TO lt_item_layout.

* add node
  l_node_text    = <field2>.
  wa_level-level = hierarchy.

  relat = cl_gui_column_tree=>relat_last_child.
  CALL METHOD tree1->add_node
    EXPORTING
      i_relat_node_key = p_relat_key
      i_relationship   = relat
      i_node_text      = l_node_text
      is_outtab_line   = wa_level
      it_item_layout   = lt_item_layout
    IMPORTING
      e_new_node_key   = p_node_key.

ENDFORM.                    "LEVEL2


*&---------------------------------------------------------------------*
*&      Form  LEVEL3
*&---------------------------------------------------------------------*
*       text
*----------------------------------------------------------------------*
*      -->P_L_PARAM1_KEY  text
*      -->P_2      text
*      -->P_1721   text
*      <--P_L_PARAM2_KEY  text
*----------------------------------------------------------------------*
FORM level3 USING p_relat_key TYPE lvc_nkey
                  hierarchy
                  icon
            CHANGING p_node_key TYPE lvc_nkey.

* set item-layout
  DATA: lt_item_layout TYPE lvc_t_layi,
        l_node_text    TYPE lvc_value,
        ls_item_layout TYPE lvc_s_layi,
        relat          TYPE int4,
        wa_level       TYPE ty_output.

  ls_item_layout-t_image   = icon.
  ls_item_layout-style     = cl_gui_column_tree=>style_intensified.
  ls_item_layout-fieldname = tree1->c_hierarchy_column_name.
  APPEND ls_item_layout TO lt_item_layout.

* add node
  l_node_text = <field3>.

  wa_level-knuma = wa_output-knuma.

  CONCATENATE wa_output-datab+6(2)
              wa_output-datab+4(2)
              wa_output-datab(4)
         INTO wa_level-zzbrandd
    SEPARATED BY '.'.

  CONCATENATE wa_output-datbi+6(2)
              wa_output-datbi+4(2)
              wa_output-datbi(4)
         INTO wa_level-kunnr
    SEPARATED BY '.'.

  wa_level-matnr = wa_output-waers.
  wa_level-kondm = wa_output-agnotes.
  wa_level-level = hierarchy.

  relat = cl_gui_column_tree=>relat_last_child.
  CALL METHOD tree1->add_node
    EXPORTING
      i_relat_node_key = p_relat_key
      i_relationship   = relat
      i_node_text      = l_node_text
      is_outtab_line   = wa_level
      it_item_layout   = lt_item_layout
    IMPORTING
      e_new_node_key   = p_node_key.

ENDFORM.                    " LEVEL3

*&---------------------------------------------------------------------*
*&      Form  LEVEL4
*&---------------------------------------------------------------------*
*       text
*----------------------------------------------------------------------*
*      -->P_L_PARAM1_KEY  text
*      -->P_2      text
*      -->P_1721   text
*      <--P_L_PARAM2_KEY  text
*----------------------------------------------------------------------*
FORM level4 USING p_relat_key TYPE lvc_nkey
                  hierarchy
                  icon
            CHANGING p_node_key TYPE lvc_nkey.

* set item-layout
  DATA: lt_item_layout TYPE lvc_t_layi,
        l_node_text    TYPE lvc_value,
        ls_item_layout TYPE lvc_s_layi,
        relat          TYPE int4,
        wa_level       TYPE ty_output.

  ls_item_layout-t_image   = icon.
  ls_item_layout-style     = cl_gui_column_tree=>style_intensified.
  ls_item_layout-fieldname = tree1->c_hierarchy_column_name.
  APPEND ls_item_layout TO lt_item_layout.

* add node
  l_node_text = <field4>.

  wa_level-zzbrandd = wa_output-combin.
  wa_level-level    = hierarchy.
  wa_level-knuma    = wa_output-knuma.

  relat = cl_gui_column_tree=>relat_last_child.
  CALL METHOD tree1->add_node
    EXPORTING
      i_relat_node_key = p_relat_key
      i_relationship   = relat
      i_node_text      = l_node_text
      is_outtab_line   = wa_level
      it_item_layout   = lt_item_layout
    IMPORTING
      e_new_node_key   = p_node_key.

ENDFORM.                    "LEVEL4

*&---------------------------------------------------------------------*
*&      Form  LEVEL5
*&---------------------------------------------------------------------*
*       text
*----------------------------------------------------------------------*
*      -->P_L_PARAM1_KEY  text
*      -->P_2      text
*      -->P_1721   text
*      <--P_L_PARAM2_KEY  text
*----------------------------------------------------------------------*
FORM level5 USING p_relat_key TYPE lvc_nkey
                  hierarchy
                  icon
            CHANGING p_node_key TYPE lvc_nkey.

* set item-layout
  DATA: lt_item_layout TYPE lvc_t_layi,
        l_node_text    TYPE lvc_value,
        ls_item_layout TYPE lvc_s_layi,
        relat          TYPE int4,
        wa_level       TYPE ty_output.

  ls_item_layout-t_image   = icon.
  ls_item_layout-style     = cl_gui_column_tree=>style_intensified.
  ls_item_layout-fieldname = tree1->c_hierarchy_column_name.
  APPEND ls_item_layout TO lt_item_layout.

* add node
  l_node_text = <field5>.

  wa_level-zzbrandd = wa_output-combin.
  wa_level-level    = hierarchy.
  wa_level-knuma    = wa_output-knuma.

  relat = cl_gui_column_tree=>relat_last_child.
  CALL METHOD tree1->add_node
    EXPORTING
      i_relat_node_key = p_relat_key
      i_relationship   = relat
      i_node_text      = l_node_text
      is_outtab_line   = wa_level
      it_item_layout   = lt_item_layout
    IMPORTING
      e_new_node_key   = p_node_key.

ENDFORM.                    "LEVEL5

*&---------------------------------------------------------------------*
*&      Form  add_complete_line
*&---------------------------------------------------------------------*
*       add a complete detail line to the tree
*----------------------------------------------------------------------*
*      -->P_RELAT_KEY   relatkey
*      -->HIERARCHY     hierarchy level
*     <-->P_NODE_KEY    new node-key
*----------------------------------------------------------------------*
FORM add_complete_line USING p_relat_key TYPE lvc_nkey
                             hierarchy
                       CHANGING p_node_key TYPE lvc_nkey.

  DATA: l_node_text TYPE lvc_value.

* set item-layout
  DATA: lt_item_layout TYPE lvc_t_layi,
        ls_item_layout TYPE lvc_s_layi.

  ls_item_layout-fieldname = tree1->c_hierarchy_column_name.
  APPEND ls_item_layout TO lt_item_layout.

  l_node_text = wa_output-bonem.

  IF l_node_text IS INITIAL.
    MESSAGE s021(zgeral) DISPLAY LIKE 'E'.
    LEAVE LIST-PROCESSING.
  ENDIF.

  DATA: ls_node TYPE lvc_s_layn.
  ls_node-n_image   = space.
  ls_node-exp_image = space.

  wa_output-level = hierarchy.

  CALL METHOD tree1->add_node
    EXPORTING
      i_relat_node_key = p_relat_key
      i_relationship   = cl_gui_column_tree=>relat_last_child
      is_outtab_line   = wa_output
      i_node_text      = l_node_text
      is_node_layout   = ls_node
      it_item_layout   = lt_item_layout
    IMPORTING
      e_new_node_key   = p_node_key.
ENDFORM.                               " add_complete_line

 

 

 

I think that is everything. Let me know if I missed something or if anyone needs help understanding it.

Hope it helps.

Regards,

Andréa

Symptoms of complex & tricky issues

 

During my seven years working at SAP China, I have resolved hundreds of internal and customer tickets. Among them there are a few kinds that give me a headache:

 

 

1. The issue needs complex steps to reproduce

  For example, I once resolved a customer ticket where I had to (1) create a new sales order, (2) create a new customer demand based on the sales order, (3) create a pick list, and (4) release the generated delivery note. The issue could only be reproduced by releasing the delivery note, so I had to repeat the lengthy steps (1)-(3) every time before debugging the note release.

 

2. Different software components involved

  I bet most of you know the feeling: if the issue occurs purely within your own component, you are always confident that it can be resolved sooner or later, since you are the owner of your API and quite familiar with it. However, if your API is called by another software component or from another system with a complex context, you have to spend more time getting a basic understanding of the whole story: how your API is called, and whether your original design of the API can sustain a challenge you never thought about before.

 

3. The issue could only be reproduced in the customer's production system

  In most of the cases I have met, the reason is the data setup. For example, in the customer's test system the test data is not set up well, so the buggy code never gets executed there. Sometimes there are technical limitations or other reasons that make it impossible to ask customers to set up exactly the same data in the test system as in production. The worst situation is when the issue occurs during a write operation, for example when the pricing calculation is wrong while a business document is being saved. In this case you cannot simply debug the save process, as it would influence the customer's business; you have to coordinate with the customer on how to proceed.

 

4. The issue could only be reproduced in background job execution mode but not in online mode

  The first step in checking such an issue is to find out whether some FMs or methods are used that must not run during background execution, when no presentation server is attached.

 

5. The issue could only be reproduced in normal execution, but when you debug the program, everything works perfectly

  Every day I use the debugger to fight bugs. When a bug cannot be found via debugging, although it definitely exists, I feel helpless, since this powerful weapon cannot help me out this time. Then I have to read and analyze the code and run it in my head. In most cases the issue finally turns out to be related to time-dependent processing in the program.

 

  As ABAPers we are lucky, since we do not always have to struggle with such time-dependent issues. When I was developing an Android application for SAP CRM Customer Briefing in 2012, I suffered a lot from these kinds of issues. Just two examples:

 

  a. When you touch the Android tablet with a single finger and swipe, 5 or 6 different kinds of events are triggered sequentially. My event handlers registered for these events work with the coordinates of the events that occurred. Those coordinates become invalid if the code is stopped in the debugger, so I had to write many System.out.println calls to print the coordinates to the console for analysis.

 

  b. Deadlocks in multi-threading. Such issues are hard to reproduce in a debugger.

 

In fact, some issues do not simply fall into one or two of the categories listed above but combine several of them. I have never encountered a customer issue with all five features above, and I pray I NEVER will.

 

An example of how to resolve such an issue

 

Recently I worked on one ticket which took me almost 10 hours in total to resolve. I will share how I analyzed this issue, step by step.

 

I am the owner of the SAP CRM IBASE component CRM-MD-INB. The issue: colleagues from the Solution Manager development team complained that when they create a new IBASE component, delete it afterwards in the same session, and then save, there is an ST22 dump in the middleware processing.

 

clipboard1.png

I knew nothing about Solution Manager development before.

This issue could only be reproduced in background execution (the program is designed to run only in background).

The issue is not always reproducible. 囧

 

clipboard2.png

Step1 Understand how and when my API is called

 

I quickly went through the Solution Manager program - tens of thousands of lines of code. I set breakpoints inside my API (the IBASE create, update and delete function modules), then identified every call site and the contents of the importing parameters.

 

 

Step2 Write a simulation report and ensure the issue can be reproduced with it

 

As the scenario is really complex - CRM, SOL and middleware are involved - I spent one hour debugging without finding any hint. Judging purely at code level, there are too many factors that influence the program. To let me concentrate on my API, I planned to develop a simulation report that also calls IBASE create, update and delete and then performs the save. The idea is to decouple the API calls from the Solution Manager logic. If the issue can also be reproduced in my simulation report, life becomes easier: I then only have to work on the simulation report, which contains only 200 lines.

 

I spent another hour finishing the simulation report. Unfortunately, I could not reproduce the issue with it. After checking again with the issue reporter, I realized that the report did not simulate the real program 100% regarding the IBASE operations, and I changed it to close the gap.

The simulation report is uploaded as an attachment to this blog.

 

clipboard3.png

 

Step3 Isolate the faulty call within the simulation report

 

Since the simulation report is owned by me, it is very convenient to change it for issue analysis.

 

a. Comment out all IBASE-related code.

b. Uncomment the IBASE component creation FM and execute the report - no dump.

c. Additionally uncomment the IBASE component change FM and execute - no dump.

d. Additionally uncomment the IBASE component deletion FM and execute - dump!!!

 

So now I know that this issue is related to the IBASE deletion.

 

Step4 Investigation of the ST22 dump

 

Now the issue could be reproduced during normal execution of the simulation report, but everything worked perfectly during debugging.

My previous experience told me that it might be caused by some time-dependent processing logic in the code. So I checked the position of the code which raises the error (line 103) and found lots of time-handling logic in the same include:

clipboard4.png

 

The aim of this include is to find the IBASE and fill it into es_ibinadm. It first checks the buffer table gt_ibinadm_by_in_guid; if that fails, it tries the FM in line 91 (see the first screenshot of this blog) as the last line of defence. In the normal case, es_ibinadm is expected to be filled, but in this issue the last defence fails as well, so the X message is raised. I set a breakpoint in this include; during my debugging, the variable es_ibinadm was successfully filled in line 54 and everything worked perfectly. However, the dump was indeed there when I executed the report directly.

 

So I ran the report once again and went straight to ST22. This way the dump there is "fresh", and the Debugger button is available only in this fresh state, so I could observe the variable values in the debugger at the moment the dump occurred.

 

clipboard5.png

I soon found the root cause: valfr and valto of the buffer entry are the same,


clipboard6.png

so during normal execution the check in line 53 fails and the code has to try the last defence, calling FM CRM_IBASE_COMP_GET_DETAIL - which then raises the X message, although the entry from the buffer table should have been returned. When the code is executed in the debugger, valto is always greater than valfr, so the code directly returns the entry to its caller without the further call to FM CRM_IBASE_COMP_GET_DETAIL.
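
To illustrate the effect with a self-contained sketch (this is not the original middleware code; the exact form of the interval check is an assumption based on the screenshots):

DATA: lv_valfr TYPE timestamp VALUE '20140514180000',
      lv_valto TYPE timestamp VALUE '20140514180000',  " same second!
      lv_comp  TYPE timestamp VALUE '20140514180000'.  " comparison timestamp

* a strict "<" rejects an entry whose validity interval has collapsed
* into a single second
IF lv_valfr <= lv_comp AND lv_comp < lv_valto.
  WRITE / 'entry returned from the buffer'.
ELSE.
  WRITE / 'buffer check fails -> CRM_IBASE_COMP_GET_DETAIL -> X message'.
ENDIF.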


clipboard7.png

I will not go deep into the IBASE valfr and valto generation logic, as it is CRM-specific and I am not an expert on it. (A default creation of an IBASE component sets its valid-to timestamp to a never-expiring date, 99991231235959. The comparison timestamp is set to the valid-from timestamp.)


clipboard8.png

After I added the following code to ensure that the check in line 53 of the screenshot above always succeeds, the issue was resolved - no more dump in background job execution.

clipboard9.png


I guess it would also work if the "<" were changed to "<=" in line 53. However, this code is owned by the middleware software component, and I cannot change it; maybe I can discuss it with the responsible colleague.


clipboard10.png


Summary

 

1. Benefit of simulation report

 

Although it took me an hour to develop the simulation report, it was definitely worth it, since it freed me from spending lots of time and effort debugging the unfamiliar Solution Manager program and let me concentrate on the core code that might be related to the dump.

Sometimes you have findings and need to change the code which calls your API for verification, but you cannot really do this, since that code is not owned by you. In this case the simulation report plays its role: you can change it at will for verification.

 

2. The Mini-System methodology for issue-isolation

 

In the first decade of the 21st century it was very popular in China to assemble a PC yourself, DIY style: you buy the CPU, memory, hard disk, motherboard and other parts from different hardware manufacturers and assemble them. The most common issue is that after assembly the computer does not boot at all. Then we use the "Mini-System" for troubleshooting: as a first step, we try to boot the computer with only the LEAST necessary hardware (CPU + power supply + motherboard: these three components constitute the so-called "Mini-System"). If the first attempt succeeds, we add further components, but only ONE new component in EACH step. This iteration lets us find which piece of hardware makes the boot fail.

 

clipboard11.png

Compared with a computer system, our ABAP programs are much more complex, and issue isolation is therefore necessary for root-cause investigation.

In my issue processing I used the "Mini-System" methodology to finally identify that the dump is related to the incorrect call of the IBASE delete function module.

 

3. Try to gain a perspective of the overall situation of the issue

 

In this issue processing I initially spent quite a lot of time debugging why function module CRM_IBASE_COMP_GET_DETAIL raises an X message.

Inside, this FM calls some deeper APIs which are not owned by me, so I wasted lots of time trying to understand their logic. Later, after I read the whole source code of the includes where CRM_IBASE_COMP_GET_DETAIL is called, I asked myself: should it be called at all? Why is it called during normal execution to get data from the DB, although the entry is already available in the buffer?

 

Do not think solely, think holistically

 

It makes sense to spend time and effort debugging the code where the exception is raised, to understand why. It makes MORE sense to investigate the code holistically: analyze the call stack and the execution context. If the code (method or function module) fails to generate the output you expect, ask yourself: should it be called at all?

 

Hope this blog helps with your issue analysis. You are also welcome to share your tips and experience with tough issue processing.

 

Update on 2014-05-14 to correct a mistake

The solution of forcing the valid-to timestamp to be 1 second later than the valid-from timestamp via the following ABAP code is wrong:

 

lv_valid_to = ls_comp_det-valfr + 1.

 

Suppose the component creation and deletion are both done at timestamp 20140514180000 (for simplicity, do not consider time zones).

 

sy-timlo: 180000

 

So lv_datlo (20140514) and lv_timlo (180001) are passed into the FM below. However, the design of the function module does not support deletion in the future:

 

clipboarda1.png

clipboarda2.png

So currently the only available solution is to add a WAIT UP TO 1 SECONDS before the call of the delete function module, which ensures that the deletion always occurs at least 1 second after the creation. Of course this degrades performance, but fortunately it is impossible for a customer to run into this issue in the UI: they cannot create the component, save it and delete it within the same second.
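
A minimal sketch of the workaround; the create/save/delete calls themselves are only indicated by comments, since their signatures are not reproduced in this blog:

* ... CALL FUNCTION for IBASE component creation, then save ...

WAIT UP TO 1 SECONDS. " pushes the deletion timestamp into the next second

* ... CALL FUNCTION for IBASE component deletion, then save ...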

 

The dump will only occur if the following two prerequisites are met at the same time:

 

1. The customer or partner develops their own code which calls the creation, save and deletion FMs sequentially.

2. The system performance is good enough that the three FM calls (create, save and delete) complete within the same second.

Hi ABAP fans!
Let me do a bit of promotion for ABAP on SAP HANA.

Here's our brand-new ABAP for SAP HANA end-to-end development example, guiding you all the way from HANA via ABAP to an SAP Fiori-like application. Within the guide we touch CDS views as well as ABAP-managed database procedures, two of the new features provided with AS ABAP 7.4 SP5, create SAP NetWeaver OData services, and show the data in a (rather simple) Fiori-like application.

Interested?
Just try it out: Brand-new ABAP 7.4 for SAP HANA End to End Development Guide with Latest ABAP 7.4 SP5 Features

 

Meet you there,

  Jasmin

My Favorite List Management guideline

 

In my daily work I like to use the Favorites list to make my work more efficient. My personal guidelines for managing the favorites list:

 

 

  1. The most frequently used tcodes that are short themselves (e.g. SE24, SE38) are not put into the favorites list.
  2. The more frequently a tcode is used, the higher its position in the favorites list. (As a CRM developer I have been using BSP_WD_CMPWB every day for two years, but I still cannot remember it, so I had to put it into the favorites.) The less frequent, the lower.
  3. I never use nested folders in the favorites list. Suppose I put a tcode into a deeply nested folder "debug tool"; it would then take me 4 clicks to launch the tcode. I don't want to waste time on these 4 clicks, so there are no nested folders in my favorites list.

clipboard1.png

Pain points for Favorite list organization

 

In my opinion, the favorites-maintenance functionality in SAP GUI is not convenient to use.

For example, if I want to drag a tcode from the bottom area of the favorites tree and drop it into a folder further up,

clipboard2.png

I need to hold the mouse button and scroll up the favorites tree until the destination folder appears on the screen, then drop the tcode into the folder. The scroll speed is quite low. Alternatively, I can delete the original tcode at the bottom, scroll up to the target folder, and re-add the tcode there.

clipboard3.png

So I exported the favorites list into a local text file, planning to edit that file locally.

 

Unfortunately, the format of the text file is still not straightforward, and it is difficult to change manually:

clipboard4.png

  1. Each line consists of four columns:

     a. menu type: TR indicates a transaction code, space means a folder

     b. object ID: each record must have a unique ID, and the hierarchy relationship between a folder and the tcodes it wraps is represented by the ID

     c. tcode technical name

     d. tcode description, or folder name

  2. If I want to change the text file manually, I have to manipulate the object IDs on my own. Special attention must be paid when moving a given tcode from one folder to another. A corrupt file (wrong object IDs) would prevent you from seeing the favorites list at all: during my attempt, after I uploaded the manually edited text file, I could not use the system any more, since there is a dump when SAP GUI tries to render the favorites list from the corrupt file:

clipboard5.png

Tool development

 

So I developed a simple tool which uploads a text file in a new format defined by myself. It has only two columns (an illustrative example follows below):

 

Column1: the first level of the favorites tree. It can be a folder, a tcode (wrapped in "[ ]"), or a Web Dynpro application (wrapped in "( )").

Column2: the second level of the favorites tree. It must be either a tcode or a Web Dynpro application. No nested folder is allowed since, as I said, I do not need nested folders in my daily work at all. The hierarchy relationship between a folder and the tcodes it wraps is represented by the Tab key.
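
A hypothetical example of such an upload file (the folder name, the tcodes and the Web Dynpro application are made up for illustration; <TAB> stands for a literal Tab character at the start of each second-level line):

Debug Tools
<TAB>[SAT]
<TAB>[ST05]
[SE93]
(ZMY_WDA_APP)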

 

clipboard6.png


After I run the tool to upload the text file, my favorites list is rendered with exactly the structure defined in the text file.

clipboard7.png

Benefits of using this tool:

 

  1. There is no need to manipulate object IDs any more; the tool calculates them automatically.
  2. No need to maintain the tcode or Web Dynpro application description: just maintain the technical name and the tool fetches the description automatically.
  3. The text file itself is very easy to understand. If you need to move a tcode from one folder to another, just cut & paste in the text file.

 

I have uploaded my own favorites list, which contains 290 entries, for your reference.

 

Further reading

 

1. If you would like to know where the tcodes in the drop-down list are stored on your laptop, read this document.

clipboard9.png

2. Some more details about SAP GUI settings storage can be found in this blog.

Updating variant configuration data for a sales order item turned out to be a little tricky for us in an ongoing implementation. After spending some hours investigating the correct combination of data to pass, we were able to post the document correctly. As no detailed documentation is available online for this scenario, I hope this post helps the community with similar requirements.

 

To update variant configuration (VC) data for a sales order item, we need to populate the tables below of the standard FM or BAPI (e.g. SD_SALESDOCUMENT_CREATE). A consolidated code sketch follows after the last table.

Note that the standard FM or BAPI normally does not return any error message if the configuration data is not updated successfully.

·         SALES_SCHEDULES_IN: the required-date field (REQ_DATE) should be populated with an appropriate value.

·         SALES_ITEMS_IN: field PO_ITM_NO should be populated with an appropriate value.

 

·         SALES_CFGS_REF:

1.       This table should have one record per item.

2.       The combination of CONFIG_ID and ROOT_ID should be unique across line items.

Field          Sample value
POSEX          000010
CONFIG_ID      000001
ROOT_ID        00000001
SCE            1
COMPLETE       T
CONSISTENT     T
CBASE_ID_TYPE  G

 

·         SALES_CFGS_INST:

1.       This table should have one record per item.

2.       The combination of CONFIG_ID and INST_ID should be unique across line items.

Field            Sample value
CONFIG_ID        000001
INST_ID          00000001
OBJ_TYPE         MARA
CLASS_TYPE       300
OBJ_KEY          MATNR value
QUANTITY         quantity value
QUANTITY_UNIT    quantity unit
COMPLETE         T
CONSISTENT       T
OBJECT_GUID      MATNR value
PERSIST_ID_TYPE  G

 

 

 

·         SALES_CFGS_VALUE:

1.       The combination of CONFIG_ID and INST_ID should be unique across line items.

2.       We can have multiple characteristics for a material; in that case, one record per characteristic should be inserted into this table. Note that CONFIG_ID and INST_ID must be the same for all rows you insert into this table for the characteristics of one material.

3.       The characteristic value must be in SAP internal format.

Field      Sample value
CONFIG_ID  000001
INST_ID    00000001
CHARC      material characteristic
VALUE      material characteristic value

 

 

·         SALES_CFGS_VK:

1.       The combination of CONFIG_ID and INST_ID should be unique across line items.

2.       We can have multiple characteristics for a material; in that case, one record per characteristic should be inserted into this table. Note that CONFIG_ID and INST_ID must be the same for all rows you insert into this table for the characteristics of one material.

Field      Sample value
CONFIG_ID  000001
INST_ID    00000001
VKEY       material characteristic
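
Putting the pieces together, here is a hedged sketch of how these tables could be filled for one item before calling SD_SALESDOCUMENT_CREATE. The structure names BAPICUCFG, BAPICUINS, BAPICUVAL and BAPICUVK are my assumption of the table types of the FM interface (verify them in SE37), and the material, quantity and characteristic values are placeholders:

DATA: lt_cfgs_ref   TYPE STANDARD TABLE OF bapicucfg,
      lt_cfgs_inst  TYPE STANDARD TABLE OF bapicuins,
      lt_cfgs_value TYPE STANDARD TABLE OF bapicuval,
      lt_cfgs_vk    TYPE STANDARD TABLE OF bapicuvk,
      ls_ref        TYPE bapicucfg,
      ls_inst       TYPE bapicuins,
      ls_value      TYPE bapicuval,
      ls_vk         TYPE bapicuvk,
      lv_matnr      TYPE matnr VALUE 'CONFIGMAT01'. " placeholder material

* one configuration reference per item
ls_ref-posex         = '000010'.
ls_ref-config_id     = '000001'.
ls_ref-root_id       = '00000001'.
ls_ref-sce           = '1'.
ls_ref-complete      = 'T'.
ls_ref-consistent    = 'T'.
ls_ref-cbase_id_type = 'G'.
APPEND ls_ref TO lt_cfgs_ref.

* one instance per item
ls_inst-config_id       = '000001'.
ls_inst-inst_id         = '00000001'.
ls_inst-obj_type        = 'MARA'.
ls_inst-class_type      = '300'.
ls_inst-obj_key         = lv_matnr.
ls_inst-quantity        = '1'.        " placeholder quantity
ls_inst-quantity_unit   = 'ST'.       " placeholder unit
ls_inst-complete        = 'T'.
ls_inst-consistent      = 'T'.
ls_inst-object_guid     = lv_matnr.
ls_inst-persist_id_type = 'G'.
APPEND ls_inst TO lt_cfgs_inst.

* one row per characteristic, value in SAP internal format
ls_value-config_id = '000001'.
ls_value-inst_id   = '00000001'.
ls_value-charc     = 'Z_COLOUR'.      " placeholder characteristic
ls_value-value     = 'RED'.           " placeholder internal value
APPEND ls_value TO lt_cfgs_value.

* one row per characteristic here as well
ls_vk-config_id = '000001'.
ls_vk-inst_id   = '00000001'.
ls_vk-vkey      = 'Z_COLOUR'.
APPEND ls_vk TO lt_cfgs_vk.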

PROJECT LINKS:


IDEA

For a long time, only the OLE/DOI approach was available for creating spreadsheets. As you know, the OLE/DOI approach is quite flexible, but it has some restrictions. For example, it requires an installed MS Excel application (which makes it impossible to generate a spreadsheet in a background job). Moreover, the OLE approach has low performance (this can be partly solved by using VBA, but that requires manually changing the MS Excel security settings on each front-end computer).

 

Since ZIP-folder processing tools became available in the SAP environment, we are able to generate spreadsheets in the new Open XML format (XLSX). This approach does not have the drawbacks of OLE.

 

But in my opinion, the flaw of all existing solutions for generating spreadsheets from ABAP is the many lines of ABAP code required just to create and format the spreadsheet layout. As a consequence, the same tasks are implemented quite differently by different developers, which complicates further support of these developments. My idea is to create a tool for designing spreadsheets that has a graphical user interface, for example like SMARTFORMS, and whose resultant spreadsheet has XLSX format.

 

 

WHAT DOES THE DEVELOPMENT PROCESS LOOK LIKE?

Development via the XLSX Workbench comprises three components:

  • FORM. The principle of visually designing forms in the XLSX Workbench has some similarities with editors such as SMARTFORMS or Adobe Forms. Forms are stored in the SAP Web Repository (transaction SMW0, binary data for WebRFC applications) and are represented as XLSX files. All data about the form's structure is inside the file.

 

  • CONTEXT. Here the notion of the context is a little simplified compared with Adobe Forms.

 

  • PRINTING PROGRAM. It must contain the following steps (see the sketch after this list):
      • Declaration of the context (internal table or structure)
      • Code for filling the context
      • A call of function module 'ZXLWB_CALLFORM'
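
A hedged sketch of such a printing program; the context here is a simple internal table, and the parameter names of 'ZXLWB_CALLFORM' are assumptions for illustration only - check the actual interface of the function module shipped with the XLSX Workbench:

REPORT z_xlwb_printing_demo.

* 1. declaration of the context (an internal table in this sketch)
TYPES: BEGIN OF ty_line,
         matnr TYPE char18,
         menge TYPE i,
       END OF ty_line.
DATA: gt_context TYPE STANDARD TABLE OF ty_line,
      gs_line    TYPE ty_line.

START-OF-SELECTION.
* 2. code for filling the context
  gs_line-matnr = 'MAT-001'. gs_line-menge = 10.
  APPEND gs_line TO gt_context.
  gs_line-matnr = 'MAT-002'. gs_line-menge = 25.
  APPEND gs_line TO gt_context.

* 3. call the form-processing FM (parameter names are ASSUMPTIONS -
*    verify against the real 'ZXLWB_CALLFORM' interface)
  CALL FUNCTION 'ZXLWB_CALLFORM'
    EXPORTING
      iv_formname = 'ZDEMO_FORM'   " form created in the Workbench (SMW0)
    CHANGING
      ct_context  = gt_context.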

 

 

HOW DO I GET THE RESULTANT XLSX FILE?

You run the printing program, which prepares the source data (filling the context) and passes it to FM 'ZXLWB_CALLFORM'. The FM generates the resultant XLSX file using the form's structure and the context data. Everything is processed on the application server, and the result, at the developer's discretion:

  • can be displayed on the front-end computer via the MS Excel application;
  • can be saved on the front-end computer as a file (.XLSX) under a specified path;
  • can be returned to the printing program as raw data.

 

 

 

TITLE_21.png
