
ABAP Testing and Troubleshooting


Hi Community!


I'd like to share a unit-testing tool that my team and I recently developed for our internal use.


The tool was created to simplify data preparation and loading for SAP ABAP unit tests. In one of our projects we had to prepare a lot of table data for unit tests, for example a set of content from the BKPF, BSEG and BSET tables (an FI document). The output to be validated is also often a table or a complex structure.


Data loader


Hard-coding all of that data was not an option: too much to code, difficult to maintain, and terrible for code readability. So we decided to write a tool that reads the data from TAB-delimited .txt files, which in turn can be conveniently prepared in Excel. We set the following objectives:


  • all the test data should be combined in one file (zip)
  • ... and uploaded to SAP, so that the test data is part of the dev package (a W3MI binary object fits well)
  • the loading routine should identify the file structure (fields) automatically and verify its compatibility with the target container (structure or table)
  • it should also be able to safely skip fields missing from the .txt file, if required (non-strict mode), e.g. when processing structures (like an FI document) with many fields, most of which are irrelevant to a specific test


Test class code would look like this:



call method o_ml->load_data " Load test data (structure) from mockup
  exporting i_obj       = 'TEST1/bkpf'
  importing e_container = ls_bkpf.

call method o_ml->load_data " Load test data (table) from mockup
  exporting i_obj       = 'TEST1/bseg'
            i_strict    = abap_false
  importing e_container = lt_bseg.

call method o_test_object->some_processing " Call to the code being tested
  exporting i_bkpf   = ls_bkpf
            it_bseg  = lt_bseg
  importing e_result = l_result.





The code takes TAB-delimited text files (e.g. bseg.txt in the TEST1 directory) from a ZIP file uploaded as a binary object via transaction SMW0...



1000  10    2015  1     40    S     ...

1000  10    2015  2     50    S     ...


... and loads it (applying conversion exits such as ALPHA where needed) into an internal table with the BSEG line type.
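The idea behind such a loader can be sketched as follows. This is a simplified illustration, not the actual mockup loader code: the variables lv_header_line, lv_data_line, ls_target and iv_strict are assumed inputs, and the real tool additionally handles conversion exits and type checks.

```abap
" Simplified sketch (NOT the actual tool code): map TAB-delimited cells
" to structure components by field name, taken from the file's header line.
DATA: lt_fields TYPE TABLE OF string, " field names from the header line
      lt_cells  TYPE TABLE OF string, " cell values of one data line
      lv_field  TYPE string,
      lv_cell   TYPE string,
      lv_index  TYPE i.

FIELD-SYMBOLS: <lv_value> TYPE any.

SPLIT lv_header_line AT cl_abap_char_utilities=>horizontal_tab INTO TABLE lt_fields.
SPLIT lv_data_line   AT cl_abap_char_utilities=>horizontal_tab INTO TABLE lt_cells.

LOOP AT lt_fields INTO lv_field.
  lv_index = sy-tabix.
  ASSIGN COMPONENT lv_field OF STRUCTURE ls_target TO <lv_value>.
  IF sy-subrc = 0.
    READ TABLE lt_cells INDEX lv_index INTO lv_cell.
    <lv_value> = lv_cell. " the real tool also applies conversion exits here
  ELSEIF iv_strict = abap_true.
    " field found in the file but not in the target -> error in strict mode
  ENDIF.
ENDLOOP.
```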




Later, another objective was identified: some code is quite difficult to test when it has a SELECT in the middle. Good code design would, of course, isolate DB operations from the business logic, but that is not always possible. So we needed a way to substitute SELECTs in the code with a simple call that takes the prepared test data instead whenever a test environment is detected. We came up with a solution we called the Store. (By the way, it might work nicely with the newly announced TEST-SEAM feature.)
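For illustration, with test seams (available from ABAP 7.50) the same substitution could look roughly like this. This is a sketch only: the seam name and the variables (ls_bkpf, iv_bukrs, etc.) are invented for the example.

```abap
" Production code: the SELECT is wrapped in a seam.
TEST-SEAM read_fi_header.
  SELECT SINGLE * FROM bkpf INTO ls_bkpf
    WHERE bukrs = iv_bukrs AND belnr = iv_belnr AND gjahr = iv_gjahr.
END-TEST-SEAM.

" Test class: the injection replaces the SELECT with stored mockup data.
TEST-INJECTION read_fi_header.
  zcl_mockup_loader=>retrieve( EXPORTING i_name = 'BKPF'
                               IMPORTING e_data = ls_bkpf ).
END-TEST-INJECTION.
```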


Test class would prepare/load some data and then "store" it:



call method o_ml->store " Store some data with 'BKPF' label
  exporting i_name = 'BKPF'
            i_data = ls_bkpf. " One line structure



... And then "real" code is able to extract it instead of selecting from DB:



if some_test_env_indicator = abap_false. " Production environment
  " Do DB selects here
else.                                    " Test environment
  call method zcl_mockup_loader=>retrieve
    exporting i_name  = 'BKPF'
    importing e_data  = me->fi_doc_header
    exceptions others = 4.
endif.

if sy-subrc is not initial.
  " Data not selected -> do error handling
endif.




When there are multiple test cases, it can also be convenient to load a number of table records and then filter them by some key field available in the code under test. This is also possible:


Test class:



call method o_ml->store " Store some data with 'BKPF' label
  exporting i_name   = 'BKPF'
            i_tabkey = 'BELNR'  " Key field for the stored table
            i_data   = lt_bkpf. " Table with MANY different documents



"Real" code:



if some_test_env_indicator = abap_false. " Production environment
  " Do DB selects here
else.                                    " Test environment
  call method zcl_mockup_loader=>retrieve
    exporting i_name  = 'BKPF'
              i_sift  = l_document_number " Filter key from real local variable
    importing e_data  = me->fi_doc_header " Still a flat structure here
    exceptions others = 4.
endif.

if sy-subrc is not initial.
  " Data not selected -> error handling
endif.




As a result, we can run fully dynamic unit tests in our projects, covering most of the code, including DB-select-related code, without actually accessing the database. Of course, the mockup loader alone does not ensure this: it requires careful design of the project code, separating DB selection from processing logic. But the mockup loader and the "store" functionality make it much more convenient.




Links and contributors


The tool is the result of the work of my team, including:


The code is freely available at our project page on GitHub: sbcgua/mockup_loader · GitHub


I hope you find it useful!


Alexander Tsybulsky

With SAP NetWeaver AS, add-on for code vulnerability analysis 7.5, scanning ABAP sources for security weaknesses has become even easier. Besides allowing more systems to be scanned for more types of defects, the new release is also more flexible and can now be deployed centrally.


Using the new central security scan support, customers can now overcome the release limitations of previous versions. With this approach, only one SAP NetWeaver AS 7.5 basis system is required; the systems containing the code to be scanned can be on releases down to SAP NetWeaver AS ABAP 7.00 (for details, check SAP Note 2190113).



A further benefit of this approach is that, in the future, an upgrade of the central scan system makes the latest checks available for all remote systems.


Using the updated scan engine, you can now analyze BSP pages and even navigate directly into the BSP sources to fix your web applications in case of security issues.

In addition, there are new checks, such as checks that identify code with insufficient authorization checks. You can find more details on the new and revised checks in SAP Note 1921820 - SAP NetWeaver AS, add-on for code vulnerability analysis - support package planning.


If you want more details, check our new roadmap https://service.sap.com/~sapidb/011000358700000256742014E.pdf on the SAP Service Marketplace (SMP).

It is common knowledge that buffering database tables improves system performance, provided the buffering is done judiciously, i.e. only those tables that are read frequently and updated rarely are buffered. But how exactly can we determine whether a table is read frequently or updated rarely?

Also, the state of a buffered table in the buffer area is a runtime property that keeps changing over time. How can an ABAP developer know whether a buffered table actually exists in the buffer at a given instant? This is a critical question when analyzing the performance of queries on buffered tables.

This blog post attempts to answer the above questions.

This blog post is divided into 3 sections and structured as follows:

  • Section 1: Prerequisites (Recap of Table Buffering Fundamentals and its Mechanism)
  • Section 2: How to use the Table Call Statistics Transaction
  • Section 3: Interpreting the results of the Table Call Statistics Transaction to answer the questions posed above.

You might find the blog slightly lengthy, but the content will NOT be more than you can chew. Trust me!

Section 1: Prerequisites (Recap of Table Buffering Fundamentals and its Mechanism)

Buffering is the process of storing table data (which is always present in the database) temporarily in the RAM of the application server. Buffering is specified in the technical settings of a table's definition in the DDIC.

The benefits of buffering are:

  • Faster query execution – A query is at least 10 times faster when it fetches data from the buffer than when it fetches data from the database, because the delays involved in waiting for the database and the network that connects it are eliminated. The performance of the application using the query improves accordingly.
  • Reduced DB load and reduced network traffic – Since not every query has to hit the DB, the load on the DB is reduced, as is the network traffic between the application layer and the database layer. This improves the performance of the entire system.

The buffering mechanism can be visualized in Figure 1 below:



          Figure 1: Buffering Mechanism

The SAP work processes of an application server have access to the SAP table buffer. The buffers are loaded on demand via the database connection. If a SELECT statement is executed on a table selected for buffering, the SAP work process initially looks up the desired data in the SAP table buffer. If the data is not available in the buffer, it is loaded from the database, stored in the table buffer, and then copied to the ABAP program (in the internal session). Subsequent accesses to this table would fetch the data from the buffer and the query need not go to the database to fetch it.
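This behavior can be observed directly in code. For example, the BYPASSING BUFFER addition of the SELECT statement forces a database read even for a buffered table; the standard, generically buffered table T100 is used here purely as an illustration:

```abap
DATA ls_t100 TYPE t100.

" Served from the SAP table buffer whenever the T100 content is valid there:
SELECT SINGLE * FROM t100 INTO ls_t100
  WHERE sprsl = 'E' AND arbgb = '00' AND msgnr = '001'.

" Always reads from the database, ignoring the table buffer:
SELECT SINGLE * FROM t100 INTO ls_t100 BYPASSING BUFFER
  WHERE sprsl = 'E' AND arbgb = '00' AND msgnr = '001'.
```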

It must be understood that RAM space in the application server is limited. Let’s say – dbtab1 is a buffered table whose data is present in the buffer. When there is a query on another buffered table - dbtab2, its data will have to be loaded into the buffer. This might result in the data of dbtab1 getting displaced from the buffer.

When there is a write access to a buffered table, the change is done in the database and the old table data which is present in the buffer (of the application server from which the change query originated) is just flagged as “Invalid”. At this instant, the buffer and the database hold different data for the same table. A subsequent read access to the table would initiate a reload of the table data from the database to the buffer. Now the buffer holds the same data as the database.

Buffering a table that gets updated very frequently might actually end up increasing the load on the DB and increasing the network traffic between the application layer and the database. This would slow down the system performance and defeat the purpose of buffering.

Key Takeaways from Section 1:

  • The contents of a buffered table in the buffer area are completely runtime dependent. At one instant the data might be present; at another it might not be.
  • Only a table with the following characteristics must be buffered:

          (a)     Read frequently

          (b)     Updated rarely

          (c)     Contains less data

Section 2: How to use the Table Call Statistics Transaction

This is accessed by the Tcode – ST10. The following is the initial screen:


                         Figure 2: ST10 - Initial Screen

A few points may be noted in Figure 2:

  • An access to every table, regardless of whether it is non-buffered/single record buffered/generic area buffered/fully buffered, would be reported by this transaction.
  • Analysis of the table accesses may be restricted to a specified time frame (by choosing the radiobuttons - This day/Previous Day/This Month/Previous Month etc). Or the transaction may be run without any restriction on the time period by choosing - “Since Startup”.
  • If the SAP system consists of multiple application servers, the accesses (i.e. queries) to the tables originating from any of the servers can be reported by this transaction. On the other hand, the transaction may be restricted to table accesses originating from a specific server (by either choosing the radiobutton – “This Server” or by explicitly specifying the server).


Let’s explore the results returned by the transaction when the radiobutton – “Not Buffered” is chosen.


     Figure 3: Results of ST10 when the radio buttons - “Non-Buffered”, “This Server” and “From startup” are chosen

Let me explain the significance of each column –

  • Direct Reads – This gives the number of SELECT queries on a particular table in which the entire primary key was specified in the WHERE clause.
  • Seq. Reads – This gives the number of SELECT queries on a particular table in which the entire primary key was NOT specified in the WHERE clause. There can be more than one record satisfying the WHERE clause.
  • Changes – This gives the number of write accesses (INSERT/UPDATE/MODIFY/DELETE) on a particular table.
  • Total – This is the total number of accesses (read + write) to a particular table in the time frame chosen and in the server chosen.

                    Total = Direct Reads + Seq. Reads + Changes.

  • Changes/Total % = Also termed as “Change Rate”, this is the % of accesses that are write accesses.
  • Rows Affected – This is not very relevant for an ABAP developer. Any operation that accesses the database increases Rows Affected; SWAPs also increase this count.

Let’s explore the results when the radio button – “Generic Key Buffered” is chosen:


          Figure 4: Results of ST10 when the radio buttons - “Generic Key Buffered”, “This Server” and “From startup” are chosen

There are some new columns here, which were not present in Figure 3. They are:

  • Buf key opt – This describes the buffering type of the table. Its possible values are:

          (a)     SNG – Single Record Buffered Table

          (b)     FUL – Fully Buffered Table

          (c)     GEN – Generic Area Buffered Table

  • Buffer State – This describes the state of the table in the buffer. For all the possible values and their meaning, I would recommend that you just place the cursor on this column and press F1. The following is a brief description of some of the possible values of Buffer State:

          (a)     VALID - The table content in the buffer is valid. Read access takes place in the buffer.

          (b)     ABSENT – The table has not been accessed yet. So the table buffer is not yet loaded with data.

          (c)     DISPLACED – The table has been displaced from the buffer

          (d)     INVALID - The table content is invalid and there are open transactions that modify the table content. Read access takes place in the database.

          (e)     ERROR - The table content could not be placed in the buffer, because of insufficient space.

          (f)      LOADABLE – The table content in the buffer is invalid, but can be loaded on the next access.

          (g)     MULTIPLE – Relevant only in the context of Generic Area Buffered Tables. These have different buffer statuses.

  • Invalidations: Specifies how often the table was invalidated because of “Changes” (i.e. write accesses).

NOTE: All the table buffers in the current application server can be cleared by entering the Tcode- “/$TAB”.

Note that the user can toggle between one result set and another by using the buttons in the Application Toolbar (as shown in Figure 5):


          Figure 5: Application Toolbar of the primary list screen of ST10.

  • While the result screen of ST10 is open in one session, there may be accesses to tables in other sessions or by other users. Use the "Refresh" button so that the transaction shows the latest data: the latest buffer state of the tables, the latest number of accesses, etc.
  • The "Reset" button sets all the counts (number of reads/changes/DB calls etc.) to zero.
  • Detailed information about buffer administration etc. may be viewed by double-clicking any entry, or by placing the cursor on a row and clicking the "Choose" button in the application toolbar. The secondary list will look like Figure 6.


               Figure 6: Secondary List

Section 3: Interpreting the results of the Table Call Statistics Transaction to answer the questions posed above.

How to determine a non-buffered table which is suited to be buffered?

  • Begin the ST10 transaction by clicking on the “Non-Buffered” radiobutton.
  • Notice the “Change Rate” value for each table. The higher the Change Rate for a table, the less suited it is for buffering.
  • The non-buffered tables with the following properties may be considered for buffering:

        (a)     Low Change Rate (under 0.5%)

        (b)     High number of reads (Direct Reads + Seq.Reads)

        (c)     Data volume not too large


If it is to be buffered, what should be its buffering type?

  • We should be guided by the relative numbers of Direct Reads and Seq. Reads. If most of the reads are Direct Reads, categorize the table as "Single Record Buffered".
  • On the other hand, if most of the reads are Seq. Reads, classify it as either Generic Area Buffered or Fully Buffered. If the data volume is small, the table can be considered for full buffering. If the data volume is larger, or if certain "groups" of data in the table are accessed frequently, then classify it as "Generic Area Buffered".

How to determine the efficiency of the buffer setting of already buffered tables?

  • Begin the ST10 transaction by clicking either the "Generic Key Buffered" or the "Single Record Buffered" radiobutton (depending on the table whose buffer setting you would like to verify).
  • Notice the “Change Rate” value for each table. The higher the Change Rate for a table, the less suited it is for buffering. One might consider switching OFF the buffering for such tables.
  • A wrong decision with respect to the Buffering Type may also be diagnosed here. For a Single Record Buffered table, if the no. of Seq. Reads is higher relative to the number of Direct Reads, one might consider changing the buffering type from Single Record to Fully Buffered or Generic Area Buffered.

NOTE: Ensure that the time frame for which the transaction is run is significant enough such that all the reports/applications were run in that period and all business scenarios occurred in that period. Only then, can this transaction guide us effectively in deciding which table’s buffer settings are to be altered.

Case Study:

Based on the above guidelines, let’s consider some examples in Figure 7, which shows the Non-Buffered Tables:



               Figure 7: List of accesses to non-buffered tables.


I would like to draw your attention to the 3 tables enclosed by the green rectangle. Based on the trends for these three tables, we can tentatively conclude that:

  • The table – ABDBG_LISTENER is NOT a candidate for buffering, because it has a high change rate.
  • The table – ABDBG_INFO can be considered for buffering and it may be set as “Single Record Buffered” table since all of its accesses were Direct Reads.
  • The table – ADCP can be considered for either Full Buffering or Generic Area Buffering. This is because most of its accesses were Seq. Reads.

  The above points are not final decisions, just guidelines. Other aspects like data volume, size category and access frequency are also to be considered.

How can an ABAP developer, know whether a buffered table actually exists in the buffer, at a given time instant?

  • Clear all the table buffers from the buffer area by running the tcode – “/$TAB”.
  • Consider the single record buffer table – TSTC. Its buffer state would say – LOADABLE as shown in Figure 8 below:


          Figure 8: Buffer State of TSTC table after clearing the buffers using - /$TAB.

  • Now, run the following code snippet in a program:
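The snippet, reconstructed from the description that follows (the original screenshot is not preserved), is a single read on TSTC with the full primary key specified:

```abap
DATA ls_tstc TYPE tstc.

" Full primary key specified -> candidate for the single record buffer:
SELECT SINGLE * FROM tstc INTO ls_tstc WHERE tcode = 'SE38'.
```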


  • After running the above code snippet, press the “Refresh” button in the Application Toolbar of the ST10 transaction. This would reflect the new buffer state of the TSTC table – VALID.


          Figure 9: Buffer State of TSTC table after the above code snippet is run

  • Basically, the SELECT SINGLE query first looked for the relevant record in TSTC's table buffer. It did not find it there (the table buffer had no data; its state was LOADABLE earlier), so the query fetched the relevant record from the database (this can be confirmed from the ST05 SQL trace in Figure 10) and loaded that data into the buffer.


          Figure 10: ST05-SQL Trace when the above code snippet is run for the first time. Data is fetched from database.

  • The subsequent reads of TSTC looking for the same record (i.e. TCODE = 'SE38') fetch the data from the buffer itself (this can be confirmed from the ST05 buffer trace in Figure 11). This fetch is several times faster than fetching from the DB.


          Figure 11: ST05-Buffer Trace when the above code snippet is run for the second time. Data is fetched from buffer.


ST10 is a very useful transaction that can guide you in answering the following questions:

  • Based on the accesses over a period of time from a particular server, can a non-buffered table be buffered?
  • Can a table that was wrongly buffered, be identified?
  • How can an ABAP developer, know whether a buffered table actually exists in the buffer, at a given time instant?



You need to exchange an ST12 trace with your counterpart (e.g. SAP support).

You have created traces in transaction ST12 as described here:

Single Transaction Analysis (ST12) – getting started [http://scn.sap.com/community/abap/testing-and-troubleshooting/blog/2009/09/08/single-transaction-analysis-st12-getting-started]


or here


ST12 – tracing user requests (Tasks & HTTP) [http://scn.sap.com/community/abap/testing-and-troubleshooting/blog/2010/03/22/st12-tracing-user-requests-tasks-http]



To store your trace in a file, do the following:


  1. Start transaction ST13.
  2. Enter tool name: ANALYSISBROWSER.
  3. Execute (F8).
  4. Select your analysis.
  5. Select menu: Download -> Text file download -> Export to frontend.
  6. Enter file name and format (leave ASC), click Transfer.





To upload the trace from the file, perform the following steps:


  1. Repeat steps 1 – 3 from above.
  2. Select menu: Download -> Text file download -> Import from frontend.
  3. Select your file (e.g. D:\trace1.trc) and click Open.
  4. Click "Yes" on the Import Analysis popup.



Hint: When exchanging trace files, don't forget to compress them. You can use RAR or ZIP archivers for that.

ANST, the Automated Notes Search Tool, is a powerful tool that helps you search for SAP Notes for issues you encounter in your SAP system. As this tool is now part of the SAP standard and has been of great use to end customers, partners and development teams, in this blog I explore the possibilities of using it from a testing point of view, for quality engineers.


Before doing any scenario testing and test automation, it is most important to ensure that the required customizing is correct and complete enough to support the execution of the test case. This tool can be of great help in ensuring this and in achieving more effective testing.


Let's start from the point when we design a test case: it is very important to correctly define the prerequisite steps here, including the required customizing. Often, one way of finding the important customizing tables involved in process testing is through development colleagues or the application responsible; however, in that case we depend on the correctness and completeness of the information provided.


Here is another way to do it using ANST: find the right tables/views to ensure all customizing is in place in test automation prior to scenario testing. ANST is capable of listing all the tables used in a particular test execution. Before automating, to get maximum coverage of the tables that may impact the test execution, manually perform each test step while ANST is recording. The trace will capture all the tables from the different components touched during the test execution. After capturing the tables, you can select the area you want to test/automate and navigate from there to the corresponding tables/views.

With this, you can now ensure that all of the required customizing is included in your automation script, avoiding customizing errors during test execution. This tool offers the excellent capability of getting all the customizing tables in one place for the scenario or transaction to be tested.


Let's try this with a simple scenario where a user wants to create a warranty claim using transaction WTY:


Steps :


Log in to the test system and start transaction ANST. Enter the transaction you are testing and a description.

I suggest you give a meaningful description, as this will help you find the trace later if needed.




After executing, the tool will take you to the transaction screen. Enter the necessary parameters and perform the transaction.





On completion of the transaction, click the "Customizing Tables" button on the screen below.





The screen below shows all the tables touched during this test. There are component-specific table lists as well, and the important tables can be scanned for data checks.





You can double-click a particular table to navigate to its details. With this analysis, you can decide which customizing steps should be included as prerequisite steps in the test automation script for this transaction.


You can also check the trace later by opening it with the description saved earlier, as below:





I hope this helps you design better automated tests. If you want to know more about ANST, you can refer to some of the other blogs on the topic:


What is ANST....and why aren't you using it?

The power of tools - How ANST can help you to solve billing problems yourself!

We faced a weird issue today in the production system that we hadn't faced in quality. The scenario: we had to send the location information of vendors from ECC to an external system. For this we use message type CREMAS with basic type CREMAS05, extended with a custom segment containing custom fields. Change pointers were configured for these custom fields as well. When the IDocs were triggered using BD21 for CREMAS after changing a vendor, everything worked fine in the quality system.

However, when we moved the custom code, i.e. the enhancement implementation in function module MASTERIDOC_CREATE_CREMAS, along with the change pointers and all the other configuration (partner profiles etc.), it did not work as expected in the production system.

The difference was that no filters were maintained in the quality system; or rather, the IDocs were working fine for the filters maintained in the quality system, but not for the filters maintained in the production system.

The scheduled background job for program RBDMIDOC was failing with an error saying that the custom segment created via the extension does not exist.


Exact Error message: "Segment <our Y custom segment name> does not exist for message type CREMAS"


However, when we checked, all the transports had been imported correctly and we were able to view the custom segment in WE30 in the production system.

We then checked the partner profile too, to see if the extension had been missed; but no, even that was maintained correctly.


After scratching our heads for a few days and trying everything possible under the sun, we figured out that it was the filters in the distribution model that were causing the issue: on removing the filters, the IDocs were triggered fine. So we narrowed it down to the filters. Searching SDN and the internet in general, we stumbled across a few posts saying it had something to do with conversion routines, etc. Finally, after a lot of trial and error with the various solutions we had found, the one that worked for us was to pass the name of the custom segment to function module MASTER_IDOC_DISTRIBUTE, which is called at the end of the code in the function module MASTERIDOC_CREATE_CREMAS that we were using.

The structure F_IDOC_HEADER, which is a work area, contains the field CIMTYP, which needs to be populated with the name of the extension created for the standard IDoc.


So, on adding one line of code:




before the line



solved our problem.
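Reconstructed from the description above (the code screenshots are not preserved, and the extension name is a placeholder), the fix amounts to:

```abap
" In the enhancement at the end of FM MASTERIDOC_CREATE_CREMAS,
" populate the extension name before the IDoc is distributed:
f_idoc_header-cimtyp = 'YCREMAS_EXTENSION'. " placeholder: your IDoc extension name

" ...placed immediately before the existing call to 'MASTER_IDOC_DISTRIBUTE'.
```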


Now the CREMAS IDoc flows fine even with the filters for company codes and purchasing organizations maintained in the distribution model.

SAP has been doing some really good work upgrading its tools. We recently upgraded to SAP_ABAP 740. I'm an advocate of ABAP unit testing, and this upgrade gave me the opportunity to try an example with the new Test Double Framework. Prajul Meyana's ABAP Test Double Framework - An Introduction says that the new framework is available from SP9. We're on SP8, but I couldn't wait to test-drive it. So I started poking around, and one of my colleagues pointed out that CL_ABAP_TESTDOUBLE is delivered with the release. YEY!

Below is an example of behavior verification using the framework, and it appears to work. Maybe later I'll make a much simpler cut; at this stage I just wanted to run it through a real-life example within our code base.


Below is my application code. It's a simple custom service implementation that creates Chart of Authority records for OpenText Vendor Invoice Management. (It's not relevant here, but note that we use FEH to manage exceptions for enterprise service errors. Maybe I can show a test of that exception in a later blog.)


Further below is one of my test classes, with one of the test methods implemented.


The test double framework does three important things in this example:

  • It sets the behavior of get_manager_id( ) for when the test is executed.
  • It sets the expected invocation parameters of set_coa_details( ).
  • verify_expectations( ) verifies that set_coa_details( ) on the service interface has been invoked as expected.


I use a factory implementation to inject test doubles. Some of you won't like that; I understand. I hope it doesn't distract from the intent. Have fun. As I said when my colleague Custodio de Oliveira pointed out that it's available: "Let's break it".
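For orientation, the behavior-verification pattern in such a test method looks roughly like the sketch below. It uses the interface from this example, but the double variable, the manager id and the expected table content are illustrative:

```abap
DATA: lo_user_double TYPE REF TO zif_opentext_vim_coa_user,
      lt_expected    TYPE zopentext_coa_details_tt.

" Create a test double for the interface under test:
lo_user_double ?= cl_abap_testdouble=>create( 'ZIF_OPENTEXT_VIM_COA_USER' ).

" 1. Configure get_manager_id( ) to return a fixed manager id:
cl_abap_testdouble=>configure_call( lo_user_double )->returning( 'MGR001' ).
lo_user_double->get_manager_id( ).

" 2. Expect exactly one call to set_coa_details( ) with the expected table:
cl_abap_testdouble=>configure_call( lo_user_double )->and_expect( )->is_called_once( ).
lo_user_double->set_coa_details( lt_expected ).

" ... inject lo_user_double via the factory and run the code under test ...

" 3. Verify that the expected interaction actually happened:
cl_abap_testdouble=>verify_expectations( lo_user_double ).
```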


App Code




    DATA:
      lo_coa_user            TYPE REF TO zif_opentext_vim_coa_user,
      lo_cx_opentext         TYPE REF TO zcx_opentext_service,
      lo_cx_coa_user         TYPE REF TO zcx_opentext_service,
      ls_main_error          TYPE bapiret2,
      lt_coa_details         TYPE zopentext_coa_details_tt,
      ls_coa_details         TYPE LINE OF zopentext_coa_details_tt,
      lv_manager_id          TYPE /ors/umoid,
      lv_max_counter         TYPE /opt/counter.

    FIELD-SYMBOLS:
      <ls_process_coa_details> TYPE LINE OF zopentext_coa_detl_process_tt.


    me->_s_process_data = is_process_data.




            lo_coa_user   = zcl_vim_coa_user_factory=>get_instance( )->get_coa_user( iv_windows_id        = _s_process_data-windows_id
                                                                                     iv_active_users_only = abap_false ).
            lv_manager_id = lo_coa_user->get_manager_id( ).
          CATCH zcx_opentext_service INTO lo_cx_coa_user.
            CLEAR lv_manager_id.


        LOOP AT _s_process_data-coa_details[]

          ASSIGNING <ls_process_coa_details>

          WHERE start_date <= sy-datum

          AND   end_date   >= sy-datum" Record removed in ECC if not in validity date

          ADD 1 TO lv_max_counter.

          ls_coa_details-counter        = lv_max_counter.

          ls_coa_details-expense_type   = <ls_process_coa_details>-expense_type.

          ls_coa_details-approval_limit = <ls_process_coa_details>-approval_limit.

          ls_coa_details-currency       = <ls_process_coa_details>-currency.

          ls_coa_details-bukrs          = '*'. " Functional requirement in ECC to set CoCode to *. Assumption : From corp - 1 user = 1 co code

          ls_coa_details-kostl          = '*'.

          ls_coa_details-internal_order = '*'.

          ls_coa_details-wbs_element    = '*'.

          ls_coa_details-manager_id     = lv_manager_id. " For new entries, Manager Id is the same as that on existing COA entries for the user.

          APPEND ls_coa_details TO lt_coa_details.

        ENDLOOP.



        " Ignore the message

        IF ( lo_cx_coa_user IS NOT INITIAL OR lo_coa_user->is_deleted( ) )     " The user is deleted or does not exist

           AND lt_coa_details IS INITIAL.                                      " AND All the inbound records are deletions

          RETURN. " Ignore transaction - finish ok.
        ENDIF.



        " Raise missing user

        IF lo_cx_coa_user IS NOT INITIAL.

          RAISE EXCEPTION lo_cx_coa_user.
        ENDIF.



        " Updates

        IF lo_coa_user->is_deleted( ).

          " User &1 is deleted. COA cannot be updated.

          ""****  ZCX_FEH EXCEPTION RAISED HERE *****
        ENDIF.



        lo_coa_user->set_coa_details( lt_coa_details[] ).

        lo_coa_user->save( ).


      CATCH zcx_opentext_service INTO lo_cx_opentext.

        "****  ZCX_FEH EXCEPTION RAISED HERE *****






Local Test Class







    METHODS: setup.

    METHODS: test_2auth                        FOR TESTING.

*    METHODS: test_2auth_1obsolete              FOR TESTING.

*    METHODS: test_missinguser_coadeletions     FOR TESTING.

*    METHODS: test_update_on_deleted_user       FOR TESTING.

*    METHODS: test_opentext_error               FOR TESTING.


    DATA : mo_coa_user TYPE REF TO zif_opentext_vim_coa_user.

    CLASS-DATA : mo_coa_user_factory TYPE REF TO zif_vim_coa_user_factory.

    DATA : mo_si_opentext_delegauth_bulk TYPE REF TO ycl_si_opentext_coa.






  METHOD setup.

    mo_si_opentext_delegauth_bulk ?= ycl_si_opentext_coa=>s_create( iv_context = zcl_feh_framework=>gc_context_external ).
  ENDMETHOD.



  METHOD test_2auth.


*  This tests the scenario where the user has 2 authority records      *

*  and both are saved properly.                                        *


    DATA ls_process_data               TYPE zopentext_deleg_auth_process_s.

    DATA ls_coa_details_process        TYPE zopentext_coa_detl_process_s.


    DATA lt_coa_details                TYPE zopentext_coa_details_tt.

    DATA ls_coa_details                TYPE LINE OF zopentext_coa_details_tt.


         "config the test double call to manager id

    mo_coa_user ?=  cl_abap_testdouble=>create( 'ZIF_OPENTEXT_VIM_COA_USER' ).  

    cl_abap_testdouble=>configure_call( mo_coa_user )->returning( 'WILLIA60' ).

    mo_coa_user->get_manager_id( ).


    " expected results

    ls_coa_details-counter        = 1.

    ls_coa_details-currency       = 'NZD'.

    ls_coa_details-approval_limit = 200.

    ls_coa_details-expense_type   = 'CP'.

    ls_coa_details-bukrs          = '*'.

    ls_coa_details-kostl          = '*'.

    ls_coa_details-internal_order = '*'.

    ls_coa_details-wbs_element    = '*'.

    ls_coa_details-manager_id     = 'WILLIA60'.

    APPEND ls_coa_details TO lt_coa_details.



    ls_coa_details-counter        = 2.

    ls_coa_details-currency       = 'NZD'.

    ls_coa_details-approval_limit = 300.

    ls_coa_details-expense_type   = 'SR'.

    ls_coa_details-bukrs          = '*'.

    ls_coa_details-kostl          = '*'.

    ls_coa_details-internal_order = '*'.

    ls_coa_details-wbs_element    = '*'.

    ls_coa_details-manager_id     = 'WILLIA60'.

    APPEND ls_coa_details TO lt_coa_details.


         "configure the expected behavior of the set_coa_details( )

    cl_abap_testdouble=>configure_call( mo_coa_user )->and_expect( )->is_called_times( 1 ).

    mo_coa_user->set_coa_details( lt_coa_details ).



        " Inject the test double into the factory which will be used inside the  method under test.


        TRY.
            zcl_vim_coa_user_factory=>get_instance( )->set_coa_user( mo_coa_user ).
          CATCH zcx_opentext_service ##no_handler.
        ENDTRY.



    " SETUP - INPUTS To the Method under test

    ls_process_data-windows_id = 'COAUSER'.



    ls_coa_details_process-currency   = 'NZD'.

    ls_coa_details_process-approval_limit = 200.

    ls_coa_details_process-expense_type   = 'CP'.

    ls_coa_details_process-bukrs          = '1253'.

    ls_coa_details_process-start_date     = '20060328'.

    ls_coa_details_process-end_date       = '29990328'.

    APPEND ls_coa_details_process TO ls_process_data-coa_details.


    ls_coa_details_process-currency   = 'NZD'.

    ls_coa_details_process-approval_limit = 300.

    ls_coa_details_process-expense_type   = 'SR'.

    ls_coa_details_process-bukrs          = '1253'.

    ls_coa_details_process-start_date     = '20060328'.

    ls_coa_details_process-end_date       = '29990328'.

    APPEND ls_coa_details_process TO ls_process_data-coa_details.


    " EXECUTE the method under test


    TRY.
        mo_si_opentext_delegauth_bulk->zif_feh~process( is_process_data = ls_process_data ).
      CATCH zcx_feh  ##no_handler.
    ENDTRY.




    " Verify interactions on test double

    cl_abap_testdouble=>verify_expectations( mo_coa_user ).
  ENDMETHOD.
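Stripped of the declarations and data setup, the interaction-testing flow of cl_abap_testdouble used in this test method condenses to the following sketch (all calls taken from the code above):

```abap
" 1. Create a test double for the interface
mo_coa_user ?= cl_abap_testdouble=>create( 'ZIF_OPENTEXT_VIM_COA_USER' ).

" 2. Record behaviour: the next call on the double defines what is stubbed
cl_abap_testdouble=>configure_call( mo_coa_user )->returning( 'WILLIA60' ).
mo_coa_user->get_manager_id( ).

" 3. Record an expectation: set_coa_details( ) must be called exactly once
cl_abap_testdouble=>configure_call( mo_coa_user )->and_expect( )->is_called_times( 1 ).
mo_coa_user->set_coa_details( lt_coa_details ).

" 4. Inject the double, execute the code under test, then verify
zcl_vim_coa_user_factory=>get_instance( )->set_coa_user( mo_coa_user ).
mo_si_opentext_delegauth_bulk->zif_feh~process( is_process_data = ls_process_data ).
cl_abap_testdouble=>verify_expectations( mo_coa_user ).
```

Steps 2 and 3 follow the record-then-call pattern of the framework: configure_call( ) arms the double, and the very next method invocation on it is recorded rather than executed.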









Some Test Tools available in SAP_ABA 740

Test Summary - 1 test method successful



Test Coverage - only 1 test >> so it's pretty poor




Test Coverage - lots of  untested code in red!


(Sorry, Eclipse fans. I re-flashed my PC to 64-bit and haven't had the chance to reinstall my Eclipse tools. Those coverage tools are there too!)

In a previous blog, STOP filling your Custom ABAP Code with Business hard coding, I started a discussion about a popular bad coding practice that affects most SAP ERP systems.

In a week, the blog got more than 2,000 visits and a 5-star rating, and the several interesting comments are even more valuable than the blog itself.


To get rid of Business hard code, I'm describing here a way to scan your SAP system (e.g. SAP ECC) and get a clear picture of its occurrences.


With the term Business hard code, I'm referring to the practice of hard coding strings (literals) corresponding to codes (IDs) related to Organizational Units or Document Types and even Master Data. Examples are Company Codes, Purchase Organizations, Sales Organizations, Accounting document types and also Country Codes.


Problem Domain

The ABAP Workbench provides lots of tools to perform source code scanning.

Occurrences of a given literal (e.g. 'IT01') can easily be obtained via report RS_ABAP_SOURCE_SCAN or the Code Inspector check Scan for ABAP Tokens. In real life, this is the use case of a Split & Merge when, for example, a Company Code is going to be merged with another.

Here I try to solve the problem of obtaining occurrences of any literal referring to business-related domains without knowing the values to be found.


Techedge@SCN ALM

Before deep diving into the solution, let me confirm the attitude we have at TechedgeGroup of sharing stuff (for free) on SCN.

First, we are proud of the idea and first implementation of abap2xlsx by Ivan Femia. I think it is one of the most popular SCN projects in terms of downloads, usage and software contributors.

Specifically in the domain of Application Lifecycle Management (ALM), here follows a short list of ideas and tools we have shared over the years with the community:



Download and install

This time Techedge is sharing with the SCN community the product Doctor ZedGe - Hard!Code, which you can get for free without worrying about licenses or expiration.

Doctor ZedGe - Hard!Code is the Community Edition of the larger product Doctor ZedGe, which includes an advanced dashboard to analyze ABAP Test Cockpit results and publish them in nice-looking MS Excel reports, as well as a specific ABAP report to download the ATC results, including the statements with issues, to MS Excel.

So, at the bottom of the page Doctor ZedGe | Techedge you will find instructions to order Doctor ZedGe. Simply ask for the Community Edition. You'll soon receive the comprehensive documentation and the complete source code, installable via a simple Copy & Paste.

This time we decided to distribute Doctor ZedGe - Hard!Code from our Techedge web site, and not only because Code Exchange has been closed.

Indeed, since we deliver the software ourselves, we can assure enterprises that it is secure, well developed and well documented. We'll also provide technical support in case of issues.


Thanks to the step-by-step guides you will get something like the following ATC result in less than an hour.




Or, if you prefer, here is the result in ABAP in Eclipse (AiE):


As you know, ABAP Test Cockpit provides a handy statistics overview (top), the worklist (middle) and the finding detail (bottom). Navigation to the code (picture on the right) is a click away.



How does it work?

The idea of this custom Code Inspector check is first to get the hard-coded string (literal), then discover the corresponding operand (context) and recognize whether it refers to a Business entity.


As you know, ABAP syntax is very flexible and the challenge is determining the context (the related operand) of a given literal. In the above example, the related operand of the literal '3000' is field LT_FILE-PLANT2. This time it is on the left of the operator '='.

In case of '3200' it is instead GT_FILE-WERKS that is on the right of the operator '='.


CASE and WHEN are even more challenging:


In the above example, the context of both 3000 and 3200 is found by jumping back to the CASE statement to identify LT_FILE-PLANT2 as context (related operand).
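To make the parsing challenge concrete, here is a sketch of the three shapes described above (the field and literal names are taken from the examples; the surrounding declarations are omitted):

```abap
" Operand on the LEFT of '=' : context of '3000' is LT_FILE-PLANT2
lt_file-plant2 = '3000'.

" Operand on the RIGHT of '=' : context of '3200' is GT_FILE-WERKS
IF '3200' = gt_file-werks.
  " ...
ENDIF.

" CASE/WHEN: the context of both literals is found only by jumping
" back to the CASE statement (LT_FILE-PLANT2)
CASE lt_file-plant2.
  WHEN '3000'.
    " ...
  WHEN '3200'.
    " ...
ENDCASE.
```

In the first two cases the operand sits in the same statement as the literal; in the CASE/WHEN case the check has to walk back through the token stream to an earlier statement.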



Literals are strings and represent the hard coding Anti-pattern.

The Code Inspector check Doctor ZedGe - Hard!Code includes a Literal length parameter defaulted to consider those with length between 2 and 18.


In this version, the analysis considers the following statements, which cover most of the scenarios:

  • COMPUTE including the implicit form like A = B
  • conditional statements IF, ELSEIF, CHECK or WHEN
  • MOVE or WRITE (for WRITE x TO y)
  • Open SQL statements SELECT, INSERT, UPDATE or DELETE

Known limitations

It's easy to catch hard-coded strings (literals), but a naive scan yields a huge number of false positives that make it unusable.

It's not so easy, instead, to distinguish the literals related to business entities. After scanning millions of lines of code, we believe that Doctor ZedGe - Hard!Code can identify around 95% of the Business hard coding related to the following set of critical domains:


BUKRS - Company Code
EKORG - Purchase Org.
VKORG - Sales Organization
VTWEG - Distribution Channel
LGORT - Storage Location
GSBER - Business Area
MSHEI - Unit of Measurement
PARVW - Partner Function
KTOKD - Customer Account Group
KTOKK - Vendor Account Group
MTART - Material Type
AUART - Sales Document Type
LFART - Delivery Type
FKART - Billing Type
BSART - Purchasing Document Category
BWART - Movement Type
VSBED - Shipping Condition
PSTYP - Item Category in Purchasing Document
PSTYV - Sales Document Item Category
KSCHL - Condition Type
BLART - Document Type (FI)
PLTYP - Price List Type
KUNNR - Customer Number
SKA1 - G/L Account Master (Chart of Accounts)
BELNR - Accounting Document Number


Note that, should you need it, it's very easy to extend the code to take other domains into account.

Keep in mind that, since we target an installation performed via one copy & paste (one ABAP class), in this version we avoided using tables to define the list of domains to be analyzed. We'll see in the future whether that makes sense.


In addition, since Doctor ZedGe - Hard!Code leverages the power of ABAP Test Cockpit and Code Inspector, it also suffers from the same known limitations:

  • it works only on workbench objects belonging to a custom main object. It can analyze BAdIs and Customer Exits (CMOD), but it cannot analyze ABAP code contained in user-exit includes (e.g. SAPMV45A is an SAP standard object, so MV45FZ01 cannot be analyzed; per Best Practices, such includes should just call custom Customer Exit function modules or custom BAdIs)
  • it works on PROGRAMs (PROG), FUNCTION MODULEs (FUGR) and CLASSes (CLAS), but not on SAPscript forms and Smart Forms.

What's next?

Doctor ZedGe - Hard!Code could provide value not only to Developers and Quality Managers but also to Functional specialists, Team leaders, Project managers and even IT managers.


Here follows a list of possible use cases:

  • get weekly system certification in terms of Business Process standardization (no hidden different behaviors)
  • before starting a roll-out, get the list of Business hard code that will require adjustments
  • during the handover phase, when the Project team describes to the AMS what has been realized, get the list of Business hard code and evaluate it accurately
  • to discourage the bad practice, you may want to run ABAP Test Cockpit checks at Change Request release

Updated: you may also want to check out subsequent blog Get rid of Business Hard Code from your custom ABAP Code 



IMHO, "Business hard coding" is one of the worst and most underestimated ABAP programming practices.

Here is just an intro to the topic; in a subsequent blog you'll find useful stuff to help get rid of it.


I have always considered hard coding a really bad practice but, only recently, I got real evidence of how much it is used. It happened during the Custom ABAP Code review services we're delivering at TechedgeGroup.

Hard coding requires the program's source code to be adjusted any time the context changes and in business, it happens quite often.

With "Business hard coding" I'm referring to the practice of hard coding strings (literals) corresponding to codes (IDs) related to Organizational Units, Document Types and even Master Data; it is one of the worst kinds of hard coding.


Some examples are Company Codes, Purchase Organizations, Sales Organizations and Accounting document types.



Instead, I would not be too worried about "Technical hard coding", the practice of hard coding strings corresponding to technical stuff like dictionary objects and output formats (e.g. tables, fields, colors, icons).


In addition, hard-coded strings returned to end users as part of messages, titles and column headers belong to a different bad practice, related to the internationalization (i18n) topic.


A couple of examples

For a better comprehension, a couple of examples follow.

In the next picture, method ADDRESS_CONTROLS_IN contains two hard coded strings used to differentiate message severity. The first is related to Company Code and the second to Purchasing Group. Here hard code is even used generically to check everything starting with IN*.


I would guess that India has a specific business requirement.


In the next picture, method MANDATORY_VATCODE contains multiple hard-coded strings to differentiate message severity. The first is related to Country Code, then to Company Code, and my favorite one verifies that the GL Account begins with '004'.


I would guess that Poland has a specific business requirement to be combined with a type of GL Account.

Why are real-life SAP systems full of Business hard code?

I'm sure most developers will justify the use of Business hard coding by explaining that they were in a hurry and there was no time to create new customizing tables or a BRF+ rule. In part they are right: I know that customers (internal or external) often demand very fast results, and developers operate accordingly.

I also have evidence that a large number of developers consider Business hard coding the only way to go and, let's say, even a good practice.

When discussing with them, to demonstrate they are wrong, I like to point out that in hundreds of millions of lines of standard SAP code there is no occurrence of "Business hard coding" (to be honest, with very few exceptions like country codes and partner functions).


Why is Business hard coding so bad?

Probably the hard-coder (the author) will be proud to show his/her skills by solving issues and adjusting the business hard code only he/she is aware of (lock-in). Even when everything at customizing level is correct and identical to a working scenario, different behaviors of a transaction/report are often due to business hard code.


Time saved during the development phase will lead to much additional effort during the next roll-out, or at the next Merge & Split when business requirements change.

Where is Business hard coding acceptable?

In reality, Business hard coding is an acceptable practice in:

  • Throw away objects
  • Short living projects


It may also be useful to classify the above exceptions by assigning the objects to specific throw-away Packages (Development Classes), similar to $TMP but transportable to production.

Speaking about serious and productive Custom ABAP Code, I'm sure you want to get rid of Business hard coding as soon as possible.

Best alternatives

In modern SAP systems, there are lots of alternatives to Business hard coding, for example:


I'm also going to share very soon the way we at TechedgeGroup perform a full scan of your Custom ABAP Code looking for Business hard coding, and I'm very interested to hear your experiences and ideas.
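To close with a minimal sketch of one such alternative, the customizing table: the table name ZFI_SPECIAL_BUKRS below is hypothetical, invented for illustration only.

```abap
" Business hard coding (bad): behavior locked to one Company Code
IF ls_bkpf-bukrs = 'IT01'.
  " special handling
ENDIF.

" Alternative: drive the behavior from a customizing table
" (ZFI_SPECIAL_BUKRS is a hypothetical, maintainable Z-table
"  listing the Company Codes that need the special handling)
DATA lv_bukrs TYPE bukrs.
SELECT SINGLE bukrs FROM zfi_special_bukrs
  INTO lv_bukrs
  WHERE bukrs = ls_bkpf-bukrs.
IF sy-subrc = 0.
  " special handling
ENDIF.
```

With this shape, extending the behavior to a new Company Code is a table entry (transportable customizing), not a code change.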

Documentation is an important aspect of scripting. Good documentation should always go hand in hand with the automation script, and it should clearly explain the whole purpose of the script. Moreover, nothing beats documentation that is easily accessible to the user. Normally the documents are stored in folders on local servers. If, for some reason, the server is down, these documents are not accessible. We might even end up losing the documents if the server crashes.


The reason I'm writing this blog is to create awareness and also share my experience about one of the useful features of eCATT, which allows attaching documents (usually eCATT specification/design documents) to the eCATT script. It provides an option either to attach the document directly or to provide a link to it. Once the documentation is attached, it is visible from the Test Catalog and also from the eCATT log file. Anybody who executes the eCATT script can easily find the documentation as part of the eCATT log file. This documentation serves as a ready reckoner and one-point reference for information regarding the script. It therefore helps the script executor understand what the script does and also troubleshoot any issue faced. Using this feature has helped me maintain the documents effectively, and it has freed up local server space. I no longer need to go searching for the script documentation. It has also immensely helped in easy and effective handover of the scripts to new joiners in the team.



  • Documentation is readily available along with the eCATT log file.
  • Helpful in script maintenance.
  • Easy Troubleshooting by comparing the log file with the document.
  • Quite useful during handover of automated scripts.
  • Frees up server space.



  • Consumption of ECA storage space, if the documents are directly attached. Nevertheless, the document has to be stored somewhere, so why not with the script itself! To avoid this situation, a link to the documentation can be provided instead, but in this case the document has to be maintained on the local server.


Steps to be followed:

    1. Call Transaction Code SECATT and give the eCATT Test Configuration name in “Test Configuration” field.

    2. Navigate to the “Attributes” tab and then navigate to “Attachments” tab.

    3. Attach the document either as a file or as a link at the Test Configuration level. If you have maintained individual documents for each variant within the test configuration, you can attach them for each variant.



  • Once the above steps are done, the documents will be visible in the Test Catalog.


  • eCATT Documentation is also  available within the eCATT Log file. Just click on the Doc icon to open the document.


Hope this information is useful. This has helped me and I am sure that this is going to help you as well.

Ask a layman what he understands by "Automation" and the most expected answer is "Doing something automatically" .

     Right!!! When something is done without the intervention of a human, it is automation. And how would you answer "Why automation?" Is it because we trust machines more than humans, or because machines can work tirelessly, or because they can do the same job tenfold faster?

The answer is "All of it and much more”.

     Automation helps us with all of this. But keep in mind that we are humans, and 'to err is human'. What if the creator of this unit of automation (in our context, the automated script) does it the wrong way? The "wrong" also gets multiplied, and multiplies faster than we can realize something is not right. The whole idea is to do it the right way, at the very beginning. It is those small things we ignore in the initial stages that later manifest as huge problems when automation happens at large scale. Everything multiplies, including the mistakes we have made, and it becomes very difficult to correct them.

This is one of the reasons why some people still prefer manual testing: they think more time goes into the maintenance and correction of scripts, in addition to their creation and execution.

     The power of automation has always been undermined by the lack of an organized, structured and methodical approach to its creation. An automated script is best utilized when it is most reliable and reusable. These two factors contribute to easy execution (once the scripts are ready), maintenance (whenever there's a change in the application) and accuracy of results (when the scripts are executed).


     A reliable script can be created only when the tester has a good understanding of the application, its usage and the configurations behind it. This requires a lot of reading and investigation of the application to know how it behaves under a given circumstance. Once this is done, the script can be created such that it handles the application for all possible application flow.


     A reusable script truly defines the meaning and purpose of automation. With a perfectly reusable script, further automation of upcoming applications becomes easier and faster. Maintenance is another take away from this attribute of a script. Reusability is a result of standardization of a script in all aspects like structure and naming convention. Let us look at them individually and see how they add to the script’s reusability.


Structure of an automated script: A well-structured script becomes easy to understand and adapt especially to those who take it over from others. It makes the script crisp without any unwanted coding. It is important to strictly limit the script to its purpose and keep only the absolute necessary.

For example, the validation part, which can be done in many ways (message check, status check, table check, field check and so on), might not be required for every case. Also remember that a DB table check takes extra effort from the script to connect to the system and read the table. One execution may not make a difference, but on a large-scale execution, it does matter.


Such additional coding needs to be identified and eliminated. Let us analyze the necessary coding according to the purpose of the test:

1.      Functional Correctness: Validation is required before and after the execution of the application to see how the transaction has affected the existing data.

                         Validation before test --> Execution of tcode under test  --> Validation after test

2.      Performance Measurement: Performance is considered only after the application has been tested for its functional correctness. Validation has no purpose here as the focus of the test is non-functional.

         Execution of tcode under test

3.      Creation of Data for Performance: Usually massive data is required for Performance measurement.

      For example, 1,000 customers with 150 line items each… and the same could be repeated for vendors, cost centers, and so on. Table checks at this scale of execution would create a huge load on the system, and it would take hours to create such data, maybe even days in some cases. It is best to avoid validations/table reads of any kind. Another point to keep in mind here is that using a function module or a BAPI to create data saves a lot of time and effort. A TCD recording or a SAPGUI recording should only be the last option.

                        Execution of tcode for data creation --> Validation after test

4.      Creation of data for system setup: this is usually done on a fresh system, with no data. Hence verification only at the end would suffice.

                         Execution of tcode for data creation --> Validation after test


There is also a subtle aspect of being structured…  The Naming Convention.

Testers usually tend to name their scripts to suit their own needs, ignoring the fact that these transactions can be used by anyone in an integrated environment. Searching for existing scripts becomes easy when proper rules are followed while naming them. It may happen that more than one script exists for the same purpose; such duplication has to be avoided. Attributes like the purpose (unit testing, integration testing, customizing or performance), the tcode executed, the action (change, create or delete) and the release need to be kept in mind while setting up the rules.

The same goes for parameters as well. Work becomes easy when binding the called script and the calling script (script references). Quick log analysis is another takeaway from common naming conventions for parameters.

There is another factor that makes automation more meaningful and complete in every sense: documentation. Documentation is a record of what exactly is expected of the script. Its importance is realized at the time of handover, maintenance and adaptation. However, 'document creation' itself can be dealt with as a separate topic. The idea is that document creation should not be disregarded as unimportant.

Having done all this, we need to watch out for the scope of the test. With new functionality being developed on top of the old (e.g. enhancement of features), re-prioritization needs to be done regularly. Old functionality may no longer be relevant, or it may be stable enough to be omitted from the focus topics. This way the new features/developments get tested better.

Now let us summarize the write-up. The aspects mentioned above are not things we cannot do without; automation can still happen without any of them. However, the benefits we draw from them can make a huge difference to the time and effort of both automation and maintenance. Understanding a script authored by someone else, knowledge transfer, adaptation, corrections... these are just a few of the advantages.

The world of automation is very vast and its benefits still remain unexplored.

In this blog I would like to describe the idea of data-driven testing and how it can be implemented in ABAP Unit.


Data-driven testing is used to separate test data and expected results from unit test source code.

It allows running the same test case on multiple data sets without the need of modifying test code.


It does not replace techniques such as test doubles and mock objects. It is still a good idea to abstract your business logic in a way that allows you to test independently of data. But even if your code is built that way, you can still benefit from parametrized testing and the ability to check many inputs against the same code.

It is particularly useful for methods which implement more complex computational formulas and algorithms. The input space is very wide in such cases and there are many boundary cases to consider. It is then easier to maintain them outside of the code.


Other xUnit frameworks, like .NET NUnit and Java JUnit, provide built-in capabilities to run parametrized test cases and implement data-driven testing.

I was missing such features in ABAP Unit and started looking for potential solutions.


The solution which I will present is based on eCATT test data containers and eCATT API.

eCATT data containers are used to store input parameters and expected results. ABAP Unit is used as the execution framework for the unit tests.


For the sake of example, let's take a simple class with a method which determines the triangle type.

It returns:

  • 1 for Scalene (no two sides are the same length)
  • 2 for Isosceles (two sides are the same length and one differs)
  • 3 for Equilateral (all sides are the same length)

and throws an exception if the provided input is not a valid triangle:


METHODS get_type
  IMPORTING
    a TYPE i
    b TYPE i
    c TYPE i
  RETURNING VALUE(triangle_type) TYPE i
  RAISING lcx_invalid_param.
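The implementation itself is not shown in this post; a minimal sketch consistent with the contract above (types 1/2/3, exception on invalid input; the concrete logic is my assumption) might look like:

```abap
METHOD get_type.
  " Not a valid triangle: non-positive side or triangle inequality violated
  IF a <= 0 OR b <= 0 OR c <= 0 OR
     a + b <= c OR b + c <= a OR a + c <= b.
    RAISE EXCEPTION TYPE lcx_invalid_param.
  ENDIF.

  IF a = b AND b = c.
    triangle_type = 3. " Equilateral: all sides equal
  ELSEIF a = b OR b = c OR a = c.
    triangle_type = 2. " Isosceles: exactly two sides equal
  ELSE.
    triangle_type = 1. " Scalene: no two sides equal
  ENDIF.
ENDMETHOD.
```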

Now we proceed with creating unit tests.

There are two typical approaches:

- Creating a separate test method for each test case

- Bundling test cases in single method with multiple assertions


Usually I'm in favor of the second approach as it provides a better overview in the test logs when some of the test cases are failing. It is also easier to debug a single test case.


Example test case could look like this:


METHODS test_is_equilateral FOR TESTING.


METHOD test_is_equilateral.

    cl_abap_unit_assert=>assert_equals(
      act = lcl_triangle=>get_type( a = 3

                                    b = 3

                                    c = 3 )

      exp = lcl_triangle=>c_equilateral ).

ENDMETHOD.

Each time we want to add coverage and test some additional inputs, either a new test method has to be created or a new assertion has to be added.


To overcome this we create a test data container in transaction SECATT.



And define test variants




In the ABAP code we define a test method which uses the eCATT API class CL_APL_ECATT_TDC_API to retrieve variant values:


METHOD test_get_type.

    DATA: a TYPE i,

          b TYPE i,

          c TYPE i,

          exp_type TYPE i.


    DATA: lo_tdc_api TYPE REF TO cl_apl_ecatt_tdc_api,

          lt_variants TYPE etvar_name_tabtype,

          lv_variant TYPE etvar_id.


    lo_tdc_api = cl_apl_ecatt_tdc_api=>get_instance( 'ZTRIANGLE_TEST_01' ).

    lt_variants = lo_tdc_api->get_variant_list( ).


    "skip default variant

    DELETE lt_variants WHERE table_line = 'ECATTDEFAULT'.


    " execute test logic for all data variants

    LOOP AT lt_variants INTO lv_variant.

      get_val: 'A' a,

              'B' b,

              'C' c,

              'EXP_TRIANGLE_TYPE' exp_type.



      cl_abap_unit_assert=>assert_equals(
          exp = exp_type

          act = lcl_triangle=>get_type( a = a

                                        b = b

                                        c = c )

          quit = if_aunit_constants=>no ).

    ENDLOOP.
  ENDMETHOD.





DEFINE get_val.
  lo_tdc_api->get_value(
    EXPORTING
      i_param_name   = &1
      i_variant_name = lv_variant
    IMPORTING
      e_param_value  = &2 ).
END-OF-DEFINITION.


In my project I ended up creating a base class for parametrized unit tests which takes care of reading variants and running test methods.

It has one method which does all the job:


METHOD run_variants.

  DATA: lt_variants TYPE etvar_name_tabtype,

        lo_ex TYPE REF TO cx_root.


  "SECATT Test Data Container

  TRY .

      go_tdc_api = cl_apl_ecatt_tdc_api=>get_instance( imp_container_name ).

      " Get all variants from test data container

      lt_variants = go_tdc_api->get_variant_list( ).

    CATCH cx_ecatt_tdc_access INTO lo_ex.


      cl_abap_unit_assert=>fail(
          msg  = |Variant { gv_current_variant } failed: { lo_ex->get_text( ) }|
          quit = if_aunit_constants=>no ).
      RETURN.
  ENDTRY.




  "skip default variant

  DELETE lt_variants WHERE table_line = 'ECATTDEFAULT'.


  " execute test method for all data variants

  " method should be parameterless and public in child unit test class

  LOOP AT lt_variants INTO gv_current_variant.

    TRY .

        CALL METHOD (imp_method_name).

      CATCH cx_root INTO lo_ex.


        cl_abap_unit_assert=>fail(
            msg  = |Variant { gv_current_variant } failed: { lo_ex->get_text( ) }|
            quit = if_aunit_constants=>no ).
    ENDTRY.
  ENDLOOP.
ENDMETHOD.




Modified test class using this approach looks as follows:



CLASS ltc_test_triangle DEFINITION FOR TESTING
  RISK LEVEL HARMLESS DURATION SHORT
  INHERITING FROM zcl_zz_ca_ecatt_data_ut.

  PUBLIC SECTION.

    METHODS test_get_type FOR TESTING.

    METHODS test_get_type_variant.

    METHODS test_get_type_invalid_tri FOR TESTING.

    METHODS test_get_type_invalid_tri_var.

ENDCLASS.



CLASS ltc_test_triangle IMPLEMENTATION.

  METHOD test_get_type.

    "run method TEST_GET_TYPE_VARIANT for all variants from container ZTRIANGLE_TEST_01

    run_variants(
        imp_container_name = 'ZTRIANGLE_TEST_01'
        imp_method_name  = 'TEST_GET_TYPE_VARIANT' ).

  ENDMETHOD.



  METHOD test_get_type_variant.

    DATA: a TYPE i,

          b TYPE i,

          c TYPE i,

          exp_type TYPE i.


    get_val: 'A' a,

            'B' b,

            'C' c,

            'EXP_TRIANGLE_TYPE' exp_type.



    cl_abap_unit_assert=>assert_equals(
      exp = exp_type
      act = lcl_triangle=>get_type( a = a
                                    b = b
                                    c = c )
      quit = if_aunit_constants=>no
      msg = |Wrong type returned for variant { gv_current_variant }| ).

  ENDMETHOD.



  METHOD test_get_type_invalid_tri.

    "run method TEST_GET_TYPE_INVALID_TRI_VAR for all variants from container ZTRIANGLE_TEST_02

    run_variants(
        imp_container_name = 'ZTRIANGLE_TEST_02'
        imp_method_name  = 'TEST_GET_TYPE_INVALID_TRI_VAR' ).

  ENDMETHOD.



  METHOD test_get_type_invalid_tri_var.

    DATA: a TYPE i,

          b TYPE i,

          c TYPE i.

    get_val: 'A' a,

            'B' b,

            'C' c.

    TRY .

        lcl_triangle=>get_type( a = a

                                b = b

                                c = c ).



        cl_abap_unit_assert=>fail(
            msg = |Expected exception not thrown for invalid triangle - variant { gv_current_variant }|
            quit = if_aunit_constants=>no ).

      CATCH lcx_invalid_param.

        " OK - expected

    ENDTRY.

  ENDMETHOD.

ENDCLASS.





As you can see, this approach makes it very easy to create parametrized test cases whose data is maintained in an external container. Adding a new case requires nothing more than adding a new variant to the TDC.

It proved very useful for test cases that check complex logic and need to cover multiple input sets.


There are also some challenges with this approach:

- you need to remember to pass quit = if_aunit_constants=>no in assertions, otherwise the test will stop at the first failed variant

- in the ABAP Unit results report only one method is visible, and it does not reflect the number of variants tested


For those challenges I would love to see some improvements in future versions of ABAP Unit, similar to what is available in other xUnit frameworks.

Ideally there would be a declarative way to provide the variants, and they would be visible as separate nodes in the test run results.


Kind regards,



I have used this testing technique during one of my test phases, in which we were testing portal applications.

This technique is applicable where we have a portal application and equivalent functionality in R/3 (back-end) as well.

I will take examples from EAM, where both portal and R/3 transactions are available to create/change/display the objects Equipment, Functional Location, Orders, Task Lists, Notifications etc.

Portal applications have their own benefits; the end user need not remember all the transactions. But at the same time it is mandatory that the functionality behaves the same whether it is launched from the portal or from R/3 transactions.

We tested different combinations and ensured that the functionality behaves in the same manner in all cases. Wherever it deviates from the expected behavior we can analyze the behavior further and report an issue.

If we test both (R/3 and portal) separately, without comparison, it is difficult to validate the exact and expected behavior.



  • Portal configuration for the system should already be taken care of.
  • User roles should be in place.


I have described below a few aspects of the functionality which we should test.

A few combinations which we validated were:

Open the object in change mode in the portal & try changing it in R/3, and vice versa.

Expected results are: the object should be locked & not available for changes.

Change some customizing in R/3 and check the impact in the portal.

Expected results are: the customizing change should have an impact on the portal too.

Create an object in the portal & check it in R/3 transactions/tables, and vice versa.

Expected results are: objects created in the portal should be available in the database tables of the back-end system.

Set the object status to inactive in R/3.

Expected results are: the status should be updated in the portal for the respective object & we should not be able to change it any further.

There are many other cases/combinations which can be compared. With this test technique we can ensure that the functionality is robust & identical, and does not change its behavior with a change in test environment or technology.

This article might be useful for testers who are testing the portal & will help them design their testing even better.

I will further share my findings & new ways of testing for new functionality from my future test phases.



This blog is about changing the way the Source Code Inspector (transaction SCI) works, especially when Transport Organizer integration is activated. Transport Organizer integration can be activated using transaction SE03. Thanks to SAP for this capable and flexible tool.



In one of our projects we needed to separate SCI checks according to the creation date of objects. We needed this because the aforementioned project was started 12 years ago and, as you can guess, quality and security standards have changed over time. At some point the integration of SCI/ATC with the Transport Organizer (SE01) was activated, so developers cannot release a request before handling the errors reported by SCI. But how can you force a developer who made just a single line of change to a huge program? How can he or she handle all the errors reported by the checks without knowing the semantics of this huge program? What if this change must be transported to the production system immediately? The solution was to separate check variants according to the creation date of objects.

Also, periodic checks can be planned to bring old objects up to the new standards.


In this blog I will try to explain what I did to work around this issue. To benefit from the solution you should be familiar with adding your own test class to SCI.

You can find information about adding your own test class to SCI at : http://scn.sap.com/community/abap/blog/2006/11/02/code-inspector--how-to-create-a-new-check and http://wiki.scn.sap.com/wiki/download/attachments/3669/CI_NEW_CHECK.pdf?original_fqdn=wiki.sdn.sap.com


Solution summary

First, I created a test class ZCL_SCI_TEST_BYDATE (derived from CL_CI_TEST_ROOT) that has just two parameters: a date (mv_credat) and a check variant (mv_checkvar). By checking the creation date, this class decides whether the tests in mv_checkvar are required for the object under test. If the object is 'new', it runs the additional tests.


Secondly, I created two SCI check variants: BASIC_VARIANT and EXTENDED_VARIANT. The first one is for old development objects, while the second contains additional tests for 'new' objects, i.e. objects created after a certain date (ZCL_SCI_TEST_BYDATE->mv_credat). The first check variant includes the custom test mentioned above (ZCL_SCI_TEST_BYDATE), with EXTENDED_VARIANT given as the mv_checkvar parameter. The second check variant is complementary to the first and includes different tests.


Finally, to enable navigation by double clicking at check results I had to make one simple repair and 2 enhancements.


Step 1 : ZCL_SCI_TEST_BYDATE class :


The most important method of this class is, naturally, run().

The run method checks whether the object was created after the date mv_credat, gets the test list for EXTENDED_VARIANT and starts a new test procedure for that test list.


METHOD run.

  DATA lo_test_ref TYPE REF TO cl_ci_tests .

* Check whether the object is created after mv_credat
  IF me->is_new_object( ) NE abap_true .
    EXIT.
  ENDIF.

  me->modify_insp_chkvar( RECEIVING eo_test_list = lo_test_ref ).

* RUN_BEGIN
  lo_test_ref->run_begin(
    EXPORTING
      p_no_aunit    = 'X'
      p_no_suppress = 'X'
      p_oc_ignore   = 'X' ).

* RUN
  lo_test_ref->run( p_object_type  = object_type
                    p_object_name  = object_name
                    p_program_name = program_name ).

* RUN_END
  lo_test_ref->run_end( ).

ENDMETHOD.
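The helper method is_new_object( ) is not shown in the blog. Below is a minimal sketch of how it could look. It is an assumption on my part that the creation date is read from the repository catalog TADIR (the CREATED_ON field only exists in newer releases) and that the method has a returning parameter rv_new of type abap_bool:

```abap
METHOD is_new_object.
* Hedged sketch - not from the original blog.
* object_type and object_name are inherited from CL_CI_TEST_ROOT;
* rv_new is assumed to be declared as RETURNING VALUE(rv_new) TYPE abap_bool.
  DATA lv_created_on TYPE d.

* Read the creation date of the object under test from the repository catalog
  SELECT SINGLE created_on FROM tadir INTO lv_created_on
    WHERE pgmid    = 'R3TR'
      AND object   = object_type
      AND obj_name = object_name.

* Treat the object as 'new' when it was created on or after mv_credat
  IF sy-subrc = 0 AND lv_created_on >= mv_credat.
    rv_new = abap_true.
  ENDIF.
ENDMETHOD.
```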


Another important method is modify_insp_chkvar, which returns the test list for EXTENDED_VARIANT.


METHOD modify_insp_chkvar.
* Returns test list for mv_checkvar (EXTENDED_VARIANT).
* Also this method combines BASIC_VARIANT and EXTENDED_VARIANT's test lists
* on INSPECTION. Just needed when check results are double clicked.
* (I could not handle it with the MESSAGE event of CL_CI_TEST_ROOT)

  DATA lo_check_var TYPE REF TO cl_ci_checkvariant .
  DATA lo_check_var_insp TYPE REF TO cl_ci_checkvariant .
  DATA lt_var_test_list TYPE sci_tstvar .
  FIELD-SYMBOLS : <l_var_entry> TYPE sci_tstval .

  CLEAR eo_test_list .

* Get reference for EXTENDED_VARIANT - additional checks for new objects
  cl_ci_checkvariant=>get_ref(
    EXPORTING
      p_user = ''
      p_name = mv_checkvar
    RECEIVING
      p_ref = lo_check_var
    EXCEPTIONS
      chkv_not_exists   = 1
      missing_parameter = 2
      OTHERS            = 3 ) .
  IF sy-subrc NE 0 .
    MESSAGE e001(z_sci_msg) WITH mv_checkvar description . "Check variant &1-&2 does not exist.
  ENDIF.

  IF lo_check_var->variant IS INITIAL .
    lo_check_var->get_info(
      EXCEPTIONS
        could_not_read_variant = 1
        OTHERS                 = 2 ) .
    IF sy-subrc NE 0 .
      EXIT .
    ENDIF.
  ENDIF.

* Get test list of EXTENDED_VARIANT - additional checks for new objects
  cl_ci_tests=>get_list(
    EXPORTING
      p_variant       = lo_check_var->variant
    RECEIVING
      p_result        = eo_test_list
    EXCEPTIONS
      invalid_version = 1
      OTHERS          = 2 ) .
  IF sy-subrc NE 0.
    EXIT .
  ENDIF.
*...



That covers the important points of the custom class definition. I have attached the full source code.

If you want to add parameters to your own test classes, look at the query_attributes, get_attributes and put_attributes methods of ZCL_SCI_TEST_BYDATE.


To add the new test class to the SCI test list I opened SCI -> Management of tests, chose my new test class and clicked the save button.



Step 2 : Check variants

As I mentioned before, I created two check variants. Below is BASIC_VARIANT, which is valid for all programs. The selected test list in the figures below is just an example. Notice that my new test 'Additional tests for new programs' is selected. The parameters of the new test can be seen in this picture.



The next picture depicts the second check variant, which is valid for objects created after '01.01.2014' (mv_credat).



PS: SE01 uses the SCI check variant TRANSPORT as default, but there is a way to change this, thanks to SCI: I replaced the default check variant with my BASIC_VARIANT. To achieve this I changed the SCICHKV_ALTER table record which has 'TRANSPORT' in the CHECKVNAME_DEF field.

Note that, AFAIK, the DEFAULT check variant is used by SE80, so it can be changed the same way.
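Before touching SCICHKV_ALTER it is worth inspecting which record is currently in effect. A small hedged snippet to do that; only the CHECKVNAME_DEF field name is taken from the text above, so verify the remaining columns in SE11/SE16 before changing anything:

```abap
* Hedged sketch: list the record(s) that redirect the default check
* variant TRANSPORT. The replacement variant is stored in one of the
* other columns of SCICHKV_ALTER - check the table definition first.
DATA lt_redirects TYPE STANDARD TABLE OF scichkv_alter.

SELECT * FROM scichkv_alter INTO TABLE lt_redirects
  WHERE checkvname_def = 'TRANSPORT'.
```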

Step 3 : Adding check results of custom test class to SCI.


(This step is not related to the main idea; the first two steps are sufficient to express it.)


After creating the new test class and check variants I was able to run additional checks for new objects, but the SCI result list did not navigate to EXTENDED_VARIANT's test results when I double clicked. My guess is that SCI is only aware of BASIC_VARIANT's test list and cannot navigate to an unknown test's results. I had to add my additional tests to the inspection object's test list.

I made a single line of repair (CL_CI_INSPECTION->EXECUTE_DIRECT) and an enhancement to CL_CI_TESTS->GET_LIST. The aim of these modifications is to fill the 'inspection' property of ZCL_SCI_TEST_BYDATE. (ZCL_SCI_TEST_BYDATE inherits a property named inspection from CL_CI_TEST_ROOT, but it is empty when the tests are running. I don't know whether this is a bug or not.)


PS: The CL_CI_TEST_ROOT class has the method 'inform' and the event MESSAGE, but I was not able to pass my additional check results to the SCI result list that way. I will work on this, and if it works, step 3 will become unnecessary.


CL_CI_TESTS->GET_LIST enhancement



* Enhancement point: Class CL_CI_TESTS, Method GET_LIST, End
*$*$-Start: (1)---------------------------------------------------------$*$*
ENHANCEMENT 1  Z_SCI_ENH_IMP2.    "active version
*
  IF p_inspection IS NOT INITIAL.
    p_result->inspection = p_inspection .
    LOOP AT p_result->list INTO l_ref_test .
      l_ref_test->if_ci_test~inspection = p_inspection .
    ENDLOOP.
  ENDIF.
ENDENHANCEMENT.
*$*$-End:  (1)----------------------------------------------------------$*$*

CL_CI_INSPECTION->EXECUTE_DIRECT, repair

*...
CALL METHOD cl_ci_tests=>get_list
  EXPORTING
    p_variant    = chkv->variant
    p_inspection = me        "added line
  RECEIVING
    p_result     = l_test_ref
  EXCEPTIONS
    invalid_version = 1
    OTHERS          = 2.
*....

While there are quite a few good documents about the setup of the ABAP Test Cockpit (ATC) on SDN (cf. http://scn.sap.com/docs/DOC-32791 and http://scn.sap.com/docs/DOC-32628), I haven't seen any experience reports about an ATC roll-out yet. Therefore I decided to blog about my current experience of rolling out the ATC in our development organization.


Step 0: Some Background

Before starting to describe what we did in our ATC roll-out I want to give you some background about the environment. At my company we manage and maintain several SAP system landscapes for different customers. A typical customer landscape consists of a SAP CRM, a SAP IS-U (ERP) and a SAP BW system together with several non-SAP systems (e.g. an output management system and an archive system). In addition, we have a central development system which is used to develop core functionality and distribute it across the customer systems. These core functionalities are typically developed in our own namespace. Therefore, each of our customer systems contains a set of custom developments in the customer namespace and a set of developments in our own namespace.

The second important aspect of our environment is the diversity of the developers working in the system. Firstly, we have a core development team. This team consists of people with deep knowledge of software development and mostly some formal training in the area (e.g. a computer science degree). Secondly, we have a team of functional consultants with a wide range of development skills, from basic ABAP knowledge to very deep expertise. And finally we usually have several external consultants developing in the different customer systems as well.

As you might have guessed the result of this environment is a quite diverse code base containing anything from well designed, reusable components to unmaintainable one-time reports.


Step 1: Analysis of our Custom Code

The first step I took in the ATC roll-out was to perform an initial check run, using a default check variant, in the customer system with the largest code base as well as in our central development system. The result was quite disillusioning: the first run of the default ATC check variant across this code base reported roughly 700 priority 1 errors, 2,500 priority 2 errors and nearly 10,000 priority 3 errors.


Step 2: Discussion within the Core Developer Team

The next step was to discuss the check results with the core development team. This discussion basically consisted of two parts.


Firstly, when I presented the tool everyone agreed that it would be very useful and that we should use it. When we then had a detailed look at the check results from the two systems, people were not that positive any more. The main criticism concerned the errors raised by the ATC. Especially some of the more common errors led to quite some discussion about whether the reported error was really an error or rather a false positive. Furthermore, it turned out that some of the default checks are simply not valid in our system landscape. An example is the Extended Program Check test for conditions that are always false. In the context of SAP IS-U the pattern "IF 1 = 2. MESSAGE ..." is used extensively throughout the SAP standard, and consequently it is also widely used in our custom code. However, the Extended Program Check reported each of these IF statements; the reason is that the check only accepts the pattern "IF 0 = 1. MESSAGE ...".
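For illustration, the pattern in question looks like this (the message class ZMSG and message number are made up for the example):

```abap
* This branch is never executed - the condition is always false.
* The MESSAGE statement exists only so that the message shows up
* in the where-used list and can be found by translation tools.
IF 1 = 2.
  MESSAGE e001(zmsg).
ENDIF.
```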


Secondly, we discussed extensively how to approach the large number of issues in our code base. It was obvious that we wouldn't be able to fix all reported issues, and doing so would not have been very sensible anyway: many of the programs for which issues were reported might not be in use any more.


As a result of the discussion we decided to:

  • define a custom check variant including only the relevant checks
  • define a custom object set.


Step 3: Definition, Testing and Rollout of a custom Check Variant

The next step we took was the definition of a custom check variant, which happened in several parts. We started by defining an initial set of checks that we wanted to use. Furthermore, we adjusted the priorities of the checks to our needs. It's pretty obvious that each error that might cause a short dump needs to be a priority 1 error. With other checks, however, the correct priority is not that clear. Consider for example the check for an empty WHERE clause in a SELECT: a program containing such a statement might cause severe performance problems in production if it is executed on a large table, yet it might be fine in a small correction program that is only executed once. Last but not least, we modified some of the default checks (cf. the IF 1 = 2 pattern mentioned above) to suit our needs. Unfortunately, modifying the default checks required a modification of the SAP standard in some cases.

After the initial definition of the check variant we set up daily check runs in the QA system, including replication of the results into the development system. We worked with this setup for some weeks and iteratively refined our default check variant.


Step 4: Definition of a custom Object Set

Besides the set of executed checks, we also needed an approach to cope with the large number of errors present in our code base. We decided that from now on we only wanted to transport objects into the production system that were free of priority 1 and priority 2 errors. However, we also decided that we didn't want to correct legacy code unless we were modifying it anyway (for example as part of a bug fix or a new feature request). Therefore we created a custom object set with a custom object collector. The custom object collector only adds objects to the object set if they have been modified after a certain date. This way we get check results only for new or recently modified objects.

Note that this approach has an important drawback. If, for example, the interface of a method is changed (e.g. by adding an additional required parameter), this might cause a syntax error in some other program using the class. With our custom object collector, however, ATC will not be able to find this error, as the program using the class is itself unchanged. Nevertheless, this was the approach we chose to cope with the large amount of legacy code.


Step 5: Rollout across all Developers

After the core development team had been working with the described setup for a while, we were quite comfortable with the results the ATC produced. Therefore we decided to roll out the ATC to all developers working in our systems. This was done by informing everybody about the ATC and by setting up execution of the ATC checks upon release of a transport request. Note that, for now, we only execute the checks upon release of a transport but do not block any transports because of ATC errors.

As a result of executing ATC upon release of a transport request, basically every developer was immediately using ATC, even those who had not yet integrated it into their workflow. This proved very successful, especially with the less experienced developers. As the ATC provides useful explanations together with each error, it triggered quite some discussion and learning about good ABAP code that wouldn't have happened otherwise.


Summary and Next Steps

After working with the described setup for a few weeks now, the roll-out of ATC has proved quite successful in our development organisation. Especially the detailed documentation of the ATC errors helps to improve the knowledge across the organisation. With respect to the roll-out, I think involving the core developers from the very beginning was very important. Only by agreeing on a set of ATC checks, sometimes only after a few discussions, does everyone accept the raised errors and fix them. If we had simply used the default check variants without the adaptations mentioned above, I don't think the ATC would have been accepted as a tool to improve code quality (e.g. due to the large number of false positives).


The next step we will take is the roll-out of the ATC exemption process in our development organisation. The reason is that we have already noticed that some priority 2 errors can't be fixed due to various restrictions (e.g. usage of standard SAP functionality in custom code that leads to error messages). Therefore we need the exemption process in order to remove the errors in those special cases. Furthermore, I see the exemption process also as a prerequisite for blocking the release of transport requests as long as ATC errors are present.


Finally, I'd be happy to discuss experiences with other ATC users.



