
ABAP Testing and Troubleshooting


Ask a layman what he understands by "Automation" and the most likely answer is "Doing something automatically".

Right!!! When something is done without the intervention of a human, it is automation. And how would you answer "Why automation?" Is it because we trust machines more than humans, because machines can work tirelessly, or because they can do the same job tenfold faster?

The answer is "All of it and much more”.

Automation helps us with all of this. But keep in mind that we are human, and 'to err is human'. What if the creator of this unit of automation (in our context, the automated script) does it the wrong way? The "wrong" also gets multiplied, and it multiplies faster than we can realize something is not right. The whole idea is to do it the right way from the very beginning. It is the small things we ignore in the initial stages that later manifest as huge problems when automation happens at a large scale. Everything multiplies, including the mistakes we have made, and it becomes very difficult to correct them.

This is one of the reasons why some people still prefer manual testing: they think more time goes into the maintenance and correction of scripts, in addition to their creation and execution.

The power of automation is often undermined by a lack of organization, structure and method in its creation. An automated script is best utilized when it is reliable and reusable. These two factors contribute towards easy execution (once the scripts are ready), easy maintenance (whenever there is a change in the application) and accuracy of results (when the scripts get executed).

 

A reliable script can be created only when the tester has a good understanding of the application, its usage and the configuration behind it. This requires a lot of reading and investigation of the application to know how it behaves under a given circumstance. Once this is done, the script can be created such that it handles all possible application flows.

 

A reusable script truly defines the meaning and purpose of automation. With a perfectly reusable script, further automation of upcoming applications becomes easier and faster. Easier maintenance is another takeaway from this attribute of a script. Reusability is a result of standardization of a script in all aspects, like structure and naming convention. Let us look at them individually and see how they add to the script's reusability.

 

Structure of an automated script: A well-structured script is easy to understand and adapt, especially for those who take it over from others. It makes the script crisp, without any unwanted coding. It is important to strictly limit the script to its purpose and keep only what is absolutely necessary.

For example, the validation part, which can be done in many ways (message check, status check, table check, field check and so on), might not be required for every case. Also remember that a DB table check takes extra effort from the script to connect to the system and read the table. One execution may not make a difference, but on a large-scale execution it does matter.

 

Such additional coding needs to be identified and eliminated. Let us analyze the necessary coding according to the purpose of the test:

1.      Functional Correctness: Validation is required before and after the execution of the application to see how the transaction has affected the existing data.

                         Validation before test --> Execution of tcode under test  --> Validation after test

2.      Performance Measurement: Performance is considered only after the application has been tested for its functional correctness. Validation has no purpose here as the focus of the test is non-functional.

         Execution of tcode under test

3.      Creation of Data for Performance: Usually massive data is required for Performance measurement.

      For example, 1000 customers with 150 line items each… the same could be repeated for vendors, cost centers, and so on. Table checks on this scale of execution would create a huge load on the system, and it would take hours to create such data, maybe even days in some cases. It is best to avoid validation/table reads of any kind. Another point to keep in mind here is that using a function module or a BAPI to create data saves a lot of time and effort (a minimal sketch follows after this list). A TCD recording or a SAPGUI recording should only be the last option.

                        Execution of tcode for data creation --> Validation after test

4.      Creation of data for system Setup: this is usually done on a fresh system, with no data. Hence verification only at the end would suffice.

                         Execution of tcode for data creation --> Validation after test
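To illustrate the point about using a function module or a BAPI for mass data creation (point 3 above), here is a minimal sketch. The function module Z_CREATE_TEST_CUSTOMER and its parameters are hypothetical stand-ins for whichever BAPI or custom wrapper fits your scenario; the point is to create the data in a loop without any per-record table checks.

REPORT z_mass_data_creation.

DATA lv_customer TYPE kunnr.

DO 1000 TIMES.
  " Z_CREATE_TEST_CUSTOMER is a hypothetical wrapper around the real BAPI/FM
  CALL FUNCTION 'Z_CREATE_TEST_CUSTOMER'
    EXPORTING
      iv_reference_customer = '0000100001'  " template customer to copy from
      iv_line_items         = 150
    IMPORTING
      ev_customer           = lv_customer
    EXCEPTIONS
      creation_failed       = 1
      OTHERS                = 2.
  IF sy-subrc <> 0.
    WRITE: / 'Creation failed in iteration', sy-index.
  ENDIF.
ENDDO.

" One commit at the end (or at sensible intervals) instead of a validation per record
COMMIT WORK.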

 

There is also a subtle aspect of being structured…  The Naming Convention.

Testers usually tend to name their scripts to suit their own needs, ignoring the fact that these transactions can be used by anyone in an integrated environment. Searching for existing scripts becomes easy when proper rules are followed while naming them. It may happen that more than one script exists for the same purpose; such duplication has to be avoided. Attributes like the purpose (unit, integration, customizing or performance testing), the tcode executed, the action (change, create or delete) and the release need to be kept in mind while setting up the rules.

The same goes for parameters as well. Work becomes easier when binding the called script and the calling script (script references). Quick log analysis is another takeaway from common naming conventions for parameters.

There is another factor that makes automation more meaningful and complete in every sense: documentation. Documentation is a record of what exactly is expected of the script. Its importance is realized at the time of handover, maintenance and adaptation. However, 'document creation' itself can be dealt with as a separate topic. The idea is that document creation should not be disregarded as unimportant.

Having done all this, we need to watch out for the scope of the test. With new functionality getting developed on top of the older ones (e.g. enhancement of features), re-prioritization needs to be done regularly. Old functionality may not be relevant anymore, or it may be stable enough to be omitted from the focus topics. This way the new features/developments get tested better.

Now let us summarize the write-up. The aspects mentioned above are not things we cannot do without; automation can still happen without any of them. However, the benefits we draw from them can make a huge difference to the time and effort of both automation and maintenance. Understanding a script authored by someone else, knowledge transfer, adaptation, corrections... these are just a few advantages to list.

The world of automation is very vast and its benefits still remain unexplored.

By  Ramesh Vodela

A couple of months back I wrote a blog in the interoperability section (mobile development with C# and Xamarin). I felt it was too technical and wanted to write a blog which can be fun to read but also helps readers, and in which readers can participate to help others. I titled this blog SAP Consulting X issues (like the X-Files) as I found some issues quite strange. However, the issues I list here as X issues could be N issues (normal issues) for others. I would really encourage others to declassify my X issues as their N issues (if they have the answer), or to raise new X issues, so that readers can benefit by being aware of some issues and work out a suitable solution or avoid a potentially time-consuming issue.

 

X1) In 1996 I was given an SAP help CD (my first exposure to SAP; I am a developer) and randomly clicked on a topic, which turned out to be the Special Purpose Ledger (FI configuration). I came to the US in 1997 through a consulting company and went to my first project in Hershey, PA, for the Hershey Canada project, to develop Report Painter reports. I was in the FI team. In the first team meeting there was an issue that was becoming critical (to do with multi-currency reporting). Prior to my arrival the team had about 12 possible solutions for it. I suggested using the Special Purpose Ledger to create a ledger with the required data, and this became the 13th solution. The idea was accepted to be tried, and I was given a sandbox to try it out. I configured the SPL and could populate all the fields except two, which involved the use of ABAP exits. As a developer I thought this would be easy, since I had already done the configuration, which was not even my skill set. I wrote the exits and configured the ABAP program as mentioned in the documentation. But no matter what I did, control did not reach the exit, and hence the two fields could not be populated (batch population was not accepted). The manager was obviously disappointed. Some colleagues used to call me SAP ALL because, although I was a developer, I showed interest in functional modules - from SAP ALL I went to SAP NONE. After this I went about the job I had come to Hershey for - developed 50 Report Painter reports - the Hershey Canada project went live - there was a party for the go-live. My project ended - the next phase was Hershey US, which was to start later.

 

PS1) Late 2001 I was watching CNN news and heard that there were problems with the SAP implementation which affected share price.

PS2) Sydney 2003 - I was asked by a professor in Accounting to configure and document the Special Purpose Ledger. I had the exact same document which I had used at Hershey. I configured it and wrote the exits as well - the exit worked the very first time, with the exact same steps I had used at Hershey. I was dumbfounded and tried to search for an answer on the net. I am not 100% sure of the accuracy of what I read, which was "There is a Basis setting that actually makes sure that flow control does not come to the exit". This was a strange finding.

 

X2) After Hershey my consulting company sent me to another project in Wisconsin (1998). This project was reporting using the Logistics Information System (could the client put off BW reporting and manage with LIS reporting?). Having faced the exit issue, I made sure that all the exits were working in my company's system before heading off to Wisconsin. Again in this project I configured LIS and wrote the exits - and again I had the same issue: control was not reaching the exit. I spoke to the manager and we had decided to raise an OSS message - but before I could do that, the exits started working on their own. I find this strange as well.

 

X3) In 2006 I was doing application development with .NET C# and ABAP services - my ASP.NET screens invoked ABAP services. In one situation I was sending a char30 field to SAP. What I found in the debugger (I could step from ASP.NET into the ABAP code) was that one of the characters in the middle of the string was getting corrupted (it was not the same as the one sent from ASP.NET). This was happening only with one particular FM. I had no explanation, but could circumvent it by sending another duplicate variable which was not getting corrupted. I find this very strange.

 

X4) In 2013 I was developing ABAP in ECC with CRM and PI. The sales order creation starts in CRM and flows to ECC. I had to make a number of enhancements in ECC to implement some rules. As there were different teams working, and to make troubleshooting easy, I created a Z table to record some of the values CRM was sending, so that if any issue came up I could classify it as a CRM issue or an ECC issue and the problem could be resolved. To populate the Z table I implemented an enhancement in the FM in ECC which is the first point of entry from CRM to ECC. After a few weeks I found this table was not getting populated, and on closer examination I found that the FM being called (the sales order creation FM) was totally different from the FM that had been called before - the other FM, where I had been populating the table earlier, was not being called at all. The Basis people told me, after verifying the system, that they had made no changes. I find this issue strange as well.

 

If you have experienced such issues, do document them as it will help others.

 

In this blog I would like to describe the idea of data-driven testing and how it can be implemented in ABAP Unit.

 

Data-driven testing is used to separate test data and expected results from unit test source code.

It allows running the same test case on multiple data sets without the need of modifying test code.

 

It does not replace such techniques as test doubles and mock objects. It is still a good idea to abstract your business logic in a way that allows you to test independently of data. But even if your code is built in that way, you can still benefit from parametrized testing and the ability to check many inputs on the same code.

It is particularly useful for methods which implement more complex computational formulas and algorithms. The input space is very wide in such cases and there are many boundary cases to consider. It is then easier to maintain them outside of the code.

 

Other xUnit frameworks, like NUnit for .NET or JUnit for Java, provide built-in capabilities to run parametrized test cases and implement data-driven testing.

I was missing such features in ABAP Unit and started looking for potential solutions.

 

The solution which I will present is based on eCATT test data containers and eCATT API.

eCATT test data containers are used to store input parameters and expected results. ABAP Unit is used as the execution framework for the unit tests.

 

For the sake of example let's take a simple class with a method which determines the triangle type.

It returns:

  • 1 for Scalene (no two sides are the same length)
  • 2 for Isosceles (two sides are the same length and one differs)
  • 3 for Equilateral (all sides are the same length)

and raises an exception if the provided input is not a valid triangle.

 

METHODS get_type

  IMPORTING

    a TYPE i

    b TYPE i

    c TYPE i

  RETURNING value(triangle_type) TYPE i

  RAISING lcx_invalid_param.
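The blog does not show the implementation of lcl_triangle; below is a minimal sketch of how it could look, written as a class method to match the static calls (lcl_triangle=>get_type) used later. The validity check and the constant values are assumptions for illustration, apart from c_equilateral, which is referenced later in the blog.

CLASS lcl_triangle DEFINITION.
  PUBLIC SECTION.
    CONSTANTS: c_scalene     TYPE i VALUE 1,
               c_isosceles   TYPE i VALUE 2,
               c_equilateral TYPE i VALUE 3.
    CLASS-METHODS get_type
      IMPORTING
        a TYPE i
        b TYPE i
        c TYPE i
      RETURNING value(triangle_type) TYPE i
      RAISING lcx_invalid_param.
ENDCLASS.

CLASS lcl_triangle IMPLEMENTATION.
  METHOD get_type.
    " A valid triangle has positive sides and satisfies the triangle inequality
    IF a <= 0 OR b <= 0 OR c <= 0 OR
       a + b <= c OR a + c <= b OR b + c <= a.
      RAISE EXCEPTION TYPE lcx_invalid_param.
    ENDIF.

    IF a = b AND b = c.
      triangle_type = c_equilateral.
    ELSEIF a = b OR b = c OR a = c.
      triangle_type = c_isosceles.
    ELSE.
      triangle_type = c_scalene.
    ENDIF.
  ENDMETHOD.
ENDCLASS.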

Now we proceed with creating unit tests.

There are two typical approaches:

- Creating a separate test method for each test case

- Bundling test cases in single method with multiple assertions

 

Usually I'm in favor of the first approach, as it provides a better overview in the test logs when some of the test cases are failing. It is also easier to debug a single test case.

 

Example test case could look like this:

...

METHODS test_is_equilateral FOR TESTING.

...

METHOD test_is_equilateral.

  cl_abap_unit_assert=>assert_equals(

      act = lcl_triangle=>get_type( a = 3

                                    b = 3

                                    c = 3 )

      exp = lcl_triangle=>c_equilateral ).

ENDMETHOD.

Each time we want to add coverage and test some additional inputs, either a new test method has to be created or a new assertion has to be added.

 

To overcome this we create a test data container in transaction SECATT.

container1.PNG

 

And define test variants

 

container2.PNG

 

In ABAP code we define a test method which uses the eCATT API class CL_APL_ECATT_TDC_API to retrieve the variant values:

 

METHOD test_get_type.

    DATA: a TYPE i,

          b TYPE i,

          c TYPE i,

          exp_type TYPE i.

 

    DATA: lo_tdc_api TYPE REF TO cl_apl_ecatt_tdc_api,

          lt_variants TYPE etvar_name_tabtype,

          lv_variant TYPE etvar_id.

 

    lo_tdc_api = cl_apl_ecatt_tdc_api=>get_instance( 'ZTRIANGLE_TEST_01' ).

    lt_variants = lo_tdc_api->get_variant_list( ).

 

    "skip default variant

    DELETE lt_variants WHERE table_line = 'ECATTDEFAULT'.

 

    " execute test logic for all data variants

    LOOP AT lt_variants INTO lv_variant.

      get_val: 'A' a,

              'B' b,

              'C' c,

              'EXP_TRIANGLE_TYPE' exp_type.

 

      cl_abap_unit_assert=>assert_equals(

          exp = exp_type

          act = lcl_triangle=>get_type( a = a

                                        b = b

                                        c = c )

          quit = if_aunit_constants=>no ).

    ENDLOOP.

ENDMETHOD.

 

...

DEFINE get_val.

  lo_tdc_api->get_value(

          exporting

            i_param_name = &1

            i_variant_name = lv_variant

          changing

            e_param_value = &2 ).

END-OF-DEFINITION.

In my project I ended up creating a base class for parametrized unit tests which takes care of reading variants and running test methods.

It has one method which does all the work:

 

METHOD run_variants.

  DATA: lt_variants TYPE etvar_name_tabtype,

        lo_ex TYPE REF TO cx_root.

 

  "SECATT Test Data Container

  TRY .

      go_tdc_api = cl_apl_ecatt_tdc_api=>get_instance( imp_container_name ).

      " Get all variants from test data container

      lt_variants = go_tdc_api->get_variant_list( ).

    CATCH cx_ecatt_tdc_access INTO lo_ex.

      cl_aunit_assert=>fail(

          msg  = |Could not access test data container { imp_container_name }: { lo_ex->get_text( ) }|

          quit = if_aunit_constants=>no ).

      RETURN.

  ENDTRY.

 

  "skip default variant

  DELETE lt_variants WHERE table_line = 'ECATTDEFAULT'.

 

  " execute test method for all data variants

  " method should be parameterless and public in child unit test class

  LOOP AT lt_variants INTO gv_current_variant.

    TRY .

        CALL METHOD (imp_method_name).

      CATCH cx_root INTO lo_ex.

        cl_aunit_assert=>fail(

            msg  = |Variant { gv_current_variant } failed: { lo_ex->get_text( ) }|

            quit = if_aunit_constants=>no ).

    ENDTRY.

  ENDLOOP.

ENDMETHOD.

Modified test class using this approach looks as follows:

 

CLASS ltc_test_triangle DEFINITION FOR TESTING DURATION SHORT RISK LEVEL HARMLESS

  INHERITING FROM zcl_zz_ca_ecatt_data_ut.

  PUBLIC SECTION.

    METHODS test_get_type FOR TESTING.

    METHODS test_get_type_variant.

    METHODS test_get_type_invalid_tri FOR TESTING.

    METHODS test_get_type_invalid_tri_var.

ENDCLASS.

 

CLASS ltc_test_triangle IMPLEMENTATION.

  METHOD test_get_type.

    "run method TEST_GET_TYPE_VARIANT for all variants from container ZTRIANGLE_TEST_01

    run_variants(

        imp_container_name = 'ZTRIANGLE_TEST_01'

        imp_method_name  = 'TEST_GET_TYPE_VARIANT' ).

  ENDMETHOD.

 

  METHOD test_get_type_variant.

    DATA: a TYPE i,

          b TYPE i,

          c TYPE i,

          exp_type TYPE i.

 

    get_val: 'A' a,

            'B' b,

            'C' c,

            'EXP_TRIANGLE_TYPE' exp_type.

 

    cl_abap_unit_assert=>assert_equals(

      exp = exp_type

      act = lcl_triangle=>get_type( a = a

                                    b = b

                                    c = c )

      quit = if_aunit_constants=>no

      msg = |Wrong type returned for variant { gv_current_variant }| ).

  ENDMETHOD.

 

  METHOD test_get_type_invalid_tri.

    "run method TEST_GET_TYPE_INVALID_TRI_VAR for all variants from container ZTRIANGLE_TEST_02

    run_variants(

        imp_container_name = 'ZTRIANGLE_TEST_02'

        imp_method_name  = 'TEST_GET_TYPE_INVALID_TRI_VAR' ).

  ENDMETHOD.

 

  METHOD test_get_type_invalid_tri_var.

    DATA: a TYPE i,

          b TYPE i,

          c TYPE i.

    get_val: 'A' a,

            'B' b,

            'C' c.

    TRY .

        lcl_triangle=>get_type( a = a

                                b = b

                                c = c ).

 

        cl_abap_unit_assert=>fail(

            msg = |Expected exception not thrown for invalid triangle - variant { gv_current_variant }|

            quit = if_aunit_constants=>no ).

      CATCH lcx_invalid_param.

        " OK - expected

    ENDTRY.

  ENDMETHOD.

ENDCLASS.

 

As you can see, with this approach it's very easy to create parametrized test cases where the data is maintained in an external container. Adding new cases just requires modifying the TDC by adding a new variant.

It proved to be very useful for test cases checking complex logic requiring multiple input sets to be covered.

 

There are also some challenges with this approach:

- you need to remember to pass quit = if_aunit_constants=>no in assertions, otherwise the test will stop at the first failed variant

- in the ABAP Unit results report there is only one method visible, and it does not reflect the number of variants tested

 

For those challenges I would love to see some improvements in future versions of ABAP Unit, similar to what is available in other xUnit frameworks.

Ideally there should be a way to provide the variants in a declarative way and they should be visible as separate nodes in test run results.

 

Kind regards,

 

Tomasz

I have used this testing technique during one of my test phases, where we were testing the portal applications.

This test technique is applicable where we have a portal application & equivalent functionality in R/3 (back-end) as well.

I will take the examples from EAM where we have portal & R/3 transactions available to create/change/display the objects Equipment, Functional Location, Orders, Task list, Notifications etc.


Portal applications have their own benefits; the end user need not remember all the transactions. But at the same time, it is mandatory that the functionality behaves the same whether it is launched from the portal or from the R/3 transactions.

We have tested different combinations and ensured that the functionality behaves in the same manner in all cases. Wherever it deviates from the expected behavior, we can analyze the behavior further & report an issue.

If we test both (R/3 & portal) of them separately without comparison, it’s difficult to validate the exact & expected behavior.

 

Prerequisites:  

  • Portal configuration for the system should already be taken care of.
  • User roles

 

I have described below a few aspects of the functionality which we should test.

A few combinations which we validated were:


Open the object in change mode in the portal & try changing it in R/3, and vice versa.

Expected results are: object should be locked & not available for changes.


Change some customizing in R/3 and check the impact in the portal.

Expected results are: the customizing change should have an impact on the portal too.


Create an object in the portal & check it in the R/3 transaction/tables, and vice versa.

Expected results are: Objects created in portal should be available in database table in back-end system.


Block the Object status as inactive in R/3.   

Expected results are: the status should be updated on the portal for the respective object & we should not be able to change it any further.


There are many other cases/combinations which can be compared. With this test technique we can ensure the functionality is robust & identical, and does not change its behavior with a change in test environment or technology.


This article might be useful for testers who are testing the portal & will help them in designing their testing even better.

I will further share my findings & new ways of testing for any new  functionality from my future test phases.

                           

Summary

This blog is about changing the way the Code Inspector (tcode SCI) works, especially when Transport Organizer integration is activated. Transport Organizer integration can be activated using tcode SE03. Thanks to SAP for this capable and flexible tool.

 

Problem

In one of our projects we needed to separate the SCI checks according to the creation date of objects. We needed this because the aforementioned project was started 12 years ago and, as you can guess, quality and security standards have changed over time. At some point the integration of SCI/ATC and the Transport Organizer (SE01) was activated, so developers cannot release a request before handling the errors reported by SCI. But how can you force a developer who made just a single line of change to a huge program? How can he/she handle all the errors reported by the checks without knowing the semantics of this huge program? What if this change has to be transported to the production system immediately? The solution was to separate the check variants according to the creation date of objects.

Periodic checks can also be planned to bring old objects up to the new standards.

 

In this blog I will try to explain what I did to work around this issue. To benefit from the solution you should be familiar with adding your own test class to SCI.

You can find information about adding your own test class to SCI at : http://scn.sap.com/community/abap/blog/2006/11/02/code-inspector--how-to-create-a-new-check and http://wiki.scn.sap.com/wiki/download/attachments/3669/CI_NEW_CHECK.pdf?original_fqdn=wiki.sdn.sap.com

 

Solution summary

First, I created a test class ZCL_SCI_TEST_BYDATE (derived from CL_CI_TEST_ROOT) that has just two parameters: a date (mv_credat) and a check variant (mv_checkvar). This class decides whether the tests in mv_checkvar are required for the object under test by checking its creation date. If the object is 'new', it runs the additional tests.

 

Secondly, I created two SCI check variants: BASIC_VARIANT and EXTENDED_VARIANT. The first one is for old development objects, and the second one contains the additional tests for 'new' objects. 'New' means that the object was created after a certain date (ZCL_SCI_TEST_BYDATE->mv_credat). The first check variant includes my custom test mentioned above (ZCL_SCI_TEST_BYDATE), and EXTENDED_VARIANT is given as its mv_checkvar parameter. The second check variant is complementary to the first one and includes different tests.

 

Finally, to enable navigation by double-clicking on the check results, I had to make one simple repair and two enhancements.

 

Step 1 : ZCL_SCI_TEST_BYDATE class :


 

The most important method of this class is, naturally, run().

The run method checks whether the object was created after the date mv_credat, gets the test list for EXTENDED_VARIANT and starts a new test procedure for that test list.

 

METHOD run.
  DATA lo_test_ref TYPE REF TO cl_ci_tests.

* Check whether the object is created after mv_credat
  IF me->is_new_object( ) NE abap_true.
    EXIT.
  ENDIF.

  me->modify_insp_chkvar( RECEIVING eo_test_list = lo_test_ref ).

* RUN_BEGIN
  lo_test_ref->run_begin(
    EXPORTING
      p_no_aunit    = 'X'
      p_no_suppress = 'X'
      p_oc_ignore   = 'X' ).

* RUN
  lo_test_ref->run( p_object_type  = object_type
                    p_object_name  = object_name
                    p_program_name = program_name ).

* RUN_END
  lo_test_ref->run_end( ).
ENDMETHOD.
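The blog does not show is_new_object( ); the following is only a minimal sketch of what such a check could look like. It assumes the helper returns abap_true via a returning parameter rv_is_new and that the creation date of the main program can be read from table REPOSRC (field CDAT); other object types would need their own lookup, and these names are assumptions for illustration.

METHOD is_new_object.
  " Hypothetical helper: abap_true if the checked program was created on/after mv_credat
  DATA lv_cdat TYPE reposrc-cdat.

  SELECT SINGLE cdat
    FROM reposrc
    INTO lv_cdat
    WHERE progname = program_name   " attribute inherited from CL_CI_TEST_ROOT
      AND r3state  = 'A'.

  IF sy-subrc = 0 AND lv_cdat >= mv_credat.
    rv_is_new = abap_true.
  ENDIF.
ENDMETHOD.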

 

Another important method is modify_insp_chkvar, which returns the test list for EXTENDED_VARIANT.

 

METHOD modify_insp_chkvar.
* Returns the test list for mv_checkvar (EXTENDED_VARIANT).
* Also combines BASIC_VARIANT's and EXTENDED_VARIANT's test lists
* on INSPECTION. Only needed when the check results are double-clicked.
* (I could not handle it with the MESSAGE event of CL_CI_TEST_ROOT)
  DATA lo_check_var      TYPE REF TO cl_ci_checkvariant.
  DATA lo_check_var_insp TYPE REF TO cl_ci_checkvariant.
  DATA lt_var_test_list  TYPE sci_tstvar.
  FIELD-SYMBOLS <l_var_entry> TYPE sci_tstval.

  CLEAR eo_test_list.

* Get reference for EXTENDED_VARIANT - additional checks for new objects
  cl_ci_checkvariant=>get_ref(
    EXPORTING
      p_user            = ''
      p_name            = mv_checkvar
    RECEIVING
      p_ref             = lo_check_var
    EXCEPTIONS
      chkv_not_exists   = 1
      missing_parameter = 2
      OTHERS            = 3 ).
  IF sy-subrc NE 0.
    MESSAGE e001(z_sci_msg) WITH mv_checkvar description. "Check variant &1-&2 does not exist.
  ENDIF.

  IF lo_check_var->variant IS INITIAL.
    lo_check_var->get_info(
      EXCEPTIONS
        could_not_read_variant = 1
        OTHERS                 = 2 ).
    IF sy-subrc NE 0.
      EXIT.
    ENDIF.
  ENDIF.

* Get test list of EXTENDED_VARIANT - additional checks for new objects
  cl_ci_tests=>get_list(
    EXPORTING
      p_variant       = lo_check_var->variant
    RECEIVING
      p_result        = eo_test_list
    EXCEPTIONS
      invalid_version = 1
      OTHERS          = 2 ).
  IF sy-subrc NE 0.
    EXIT.
  ENDIF.
*...
ENDMETHOD.

 

 

That covers the important points about my custom class definition. I attached the full source code.

If you want to add parameters to your own custom test classes, look at the query_attributes, get_attributes and put_attributes methods of ZCL_SCI_TEST_BYDATE.

 

To add the new test class to the SCI test list, I opened SCI -> Management of Tests, chose my new test class and clicked the save button.

 

 

Step 2 : Check variants

As I mentioned before, I created two check variants. Below is BASIC_VARIANT, which is valid for all programs. The selected test list in the figures below is just an example. Notice that my new test 'Additional tests for new programs' is selected. The parameters of the new test can be seen in this picture.

 

 

The next picture depicts the second check variant, which is valid for objects created after '01.01.2014' (mv_credat).

 

 

PS: SE01 uses the SCI check variant TRANSPORT as the default. But there is a way to change this - thanks to SCI: I replaced the default check variant with my BASIC_VARIANT. To achieve this I changed the record in table SCICHKV_ALTER which has 'TRANSPORT' in the CHECKVNAME_DEF field.

Note that, as far as I know, the DEFAULT check variant is used by SE80, so it can be changed in the same way.


Step 3 : Adding check results of custom test class to SCI.

 

(This step is not essential to the main idea; the first two steps are sufficient to express it.)

 

After creating the new test class and check variants I was able to run additional checks for new objects, but the SCI result list did not navigate to EXTENDED_VARIANT's test results when I double-clicked. My guess is that SCI is only aware of BASIC_VARIANT's test list and cannot navigate to an unknown test's results. I had to add my additional tests to the inspection object's test list.

I made a single-line repair (CL_CI_INSPECTION->EXECUTE_DIRECT) and an enhancement to CL_CI_TESTS->GET_LIST. The aim of these modifications is to fill the 'inspection' attribute of ZCL_SCI_TEST_BYDATE. (ZCL_SCI_TEST_BYDATE inherits an attribute named inspection from CL_CI_TEST_ROOT, but it is empty while the tests are running. I don't know whether this is a bug or not.)

 

PS: The CL_CI_TEST_ROOT class has the method 'inform' and the event MESSAGE, but I was not able to pass my additional check results to the SCI result list through them. I will work on this, and if it works out, step 3 will become unnecessary.

 

CL_CI_TESTS->GET_LIST enhancement (note that GET_LIST also needs a new optional importing parameter p_inspection, which is what the repair shown afterwards passes)

 

 

"""""""""""""""""""""""""""""""""""""""""""""""""""$"$\SE:(1) Class CL_CI_TESTS, Method GET_LIST, End
*$*$-Start: (1)---------------------------------------------------------------------------------$*$*
ENHANCEMENT 1 Z_SCI_ENH_IMP2.    "active version
*
  IF p_inspection IS NOT INITIAL.
    p_result->inspection = p_inspection.
    LOOP AT p_result->list INTO l_ref_test.
      l_ref_test->if_ci_test~inspection = p_inspection.
    ENDLOOP.
  ENDIF.
ENDENHANCEMENT.
*$*$-End:  (1)---------------------------------------------------------------------------------$*$*

CL_CI_INSPECTION->execute_direct, repair

*...
CALL METHOD cl_ci_tests=>get_list
  EXPORTING
    p_variant    = chkv->variant
    p_inspection = me    "added line
  RECEIVING
    p_result     = l_test_ref
  EXCEPTIONS
    invalid_version = 1
    OTHERS          = 2.
*....

While there are quite a few good documents about the setup of the ABAP Test Cockpit (ATC) on SDN (cf. http://scn.sap.com/docs/DOC-32791 and http://scn.sap.com/docs/DOC-32628), I haven't seen any experience reports about a roll out of ATC yet. Therefore I decided to blog about my current experiences in rolling out the ATC in our development organization.

 

Step 0: Some Background

Before starting to describe what we did in our ATC roll out, I want to give you some background about the environment of the roll out. At my company we are managing and maintaining several SAP system landscapes for different customers. A typical customer landscape consists of a SAP CRM, a SAP IS-U (ERP) and a SAP BW together with several non-SAP systems (e.g. an output management system and an archive system). In addition to that, we have a central development system which is used to develop core functionality and distribute it across the customer systems. These core functionalities are typically developed in our own namespace. Therefore, each of our customer systems contains a set of custom developments in the customer namespace and a set of developments in our own namespace.

The second important aspect of our environment is the diversity of developers developing in the system. Firstly, we have a core development team. This team consists of people with a deep knowledge around software development and mostly some formal training (e.g. a computer science degree) in the area. Secondly, we have a team of functional consultants with a wide range of development skills, ranging from some basic ABAP knowledge to very deep knowledge. And finally we usually have several external consultants developing in the different customer systems as well.

As you might have guessed the result of this environment is a quite diverse code base containing anything from well designed, reusable components to unmaintainable one-time reports.

 

Step 1: Analysis of our Custom Code

The first step I took in order to roll out ATC was to perform a first check run using a default check variant in the customer system with the largest code base as well as in our central development system. The result of this first analysis was quite disillusioning. The first run of the default check variant of the ATC across this code base resulted in roughly 700 priority 1 errors, 2,500 priority 2 errors and nearly 10,000 priority 3 errors.

 

Step 2: Discussion within the Core Developer Team

The next step was to discuss the check results with the core development team. This discussion basically consisted of two parts.

 

Firstly, when I presented the tool, everyone agreed that it would be very useful and that we should use it. When we then had a detailed look at the check results from the two systems, they were not that positive any more. The main criticism was around the errors raised by the ATC. Especially some of the more common errors led to quite some discussion whether the reported error was really an error or rather a false positive. Furthermore, it turned out that some of the default checks are simply not valid in our system landscape. An example of such a check is the Extended Program Check that checks for conditions that are always false. In the context of SAP IS-U the pattern "IF 1 = 2. MESSAGE..." is used extensively throughout the SAP standard. Consequently, it is also widely used in our custom code. However, the Extended Program Check reported each of these if statements. The reason is that the check only allows the pattern "IF 0 = 1. MESSAGE....".

 

Secondly, we discussed extensively how we should approach the large number of issues in our code base. It was obvious that we wouldn't be able to fix all reported issues. This would also not have been very sensible. One reason is that a lot of the programs for which issues were reported might not be in use any more.

 

As a result of the discussion we decided to:

  • define a custom check variant including only the relevant checks
  • define a custom object set.

 

Step 3: Definition, Testing and Rollout of a custom Check Variant

The next step we took was the definition of a custom check variant. The process of defining the custom check variant consisted of several parts. We started by defining an initial set of checks that we wanted to use. Furthermore, we adjusted the priorities of the checks to our needs. It's pretty obvious that each error that might cause a short dump needs to be an error of priority one. However, with other checks the correct priority is not that clear. Consider for example the check for an empty where clause in a select. A program containing such a statement might cause severe performance problems in production if it is executed on a large table; nevertheless it might be fine in a small correction program that is only executed once. Last but not least we modified some of the default checks (cf. the IF 1 = 2 pattern mentioned above) to suit our needs. Unfortunately, the modification of the default checks required a modification of the SAP standard in some cases.

After the initial definition of the check variant we set up daily check runs in the QA system including the replication of the results into the development system. With this set up we worked for some weeks and iteratively refined our default check variant.

 

Step 4: Definition of a custom Object Set

Besides the executed checks we also needed an approach to cope with the large number of errors present in our code base. For this we decided that from now on we only wanted to transport objects into the production system without any priority 1 or priority 2 errors. However, we also decided that we didn't want to correct legacy code unless we were modifying it anyway (for example as a result of a bug fix or a new feature request). Therefore we created a custom object set and a custom object collector. The custom object collector will only add objects to the object set if they have been modified after a certain date. This way we were able to get check results only for new or recently modified objects. A sketch of the date filter behind this idea is shown below.
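The collector code itself is not part of this blog; the following sketch shows only the date filter at its core, under the assumption that programs are the objects of interest and that the last-changed date can be read from table REPOSRC (field UDAT). The method name, the table parameter ct_objects (assumed to carry TADIR-style OBJECT and OBJ_NAME fields) and the date parameter are made up for illustration; a real collector additionally has to plug into the Code Inspector's object collector interface.

METHOD filter_by_change_date.
  " Hypothetical helper: keep only programs changed on/after iv_changed_after
  DATA lv_udat TYPE reposrc-udat.
  FIELD-SYMBOLS <ls_object> LIKE LINE OF ct_objects.

  LOOP AT ct_objects ASSIGNING <ls_object> WHERE object = 'PROG'.
    SELECT SINGLE udat
      FROM reposrc
      INTO lv_udat
      WHERE progname = <ls_object>-obj_name
        AND r3state  = 'A'.
    IF sy-subrc <> 0 OR lv_udat < iv_changed_after.
      DELETE ct_objects.   " drops the current loop line
    ENDIF.
  ENDLOOP.
ENDMETHOD.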

Note that this approach has an important drawback. If for example the interface of a method is changed (e.g. by adding an additional required parameter), this might cause a syntax error in some other program using the class. However, with our custom object collector ATC will not be able to find this error, as the program using the class is itself not changed. Nevertheless this was the approach we chose to cope with the large amount of legacy code.

 

Step 5: Rollout across all Developers

After the core development team had been working with the described set up for a while, we were quite comfortable with the results that the ATC produced. Therefore we decided to roll out the ATC to all developers working in our systems. This was done by informing everybody about the ATC as well as setting up the execution of the ATC checks upon release of a transport request. Note that for now we only executed the checks upon release of a transport, but did not block any transports because of ATC errors.

As a result of executing ATC upon the release of a transport request basically every developer was immediately using ATC, even if they had not integrated it into their workflow yet. This proved very successful, especially with the less experienced developers. As the ATC provides useful explanations together with each error it resulted in quite some discussion and learning regarding good ABAP code that wouldn't have happened otherwise.

 

Summary and Next Steps

After working with the described set up for a few weeks now, the roll out of ATC has proved quite successful in our development organisation. Especially the detailed documentation of the ATC errors helps to improve the knowledge across the organisation. With respect to the roll out, I think the involvement of the core developers from the very beginning was very important. Only by agreeing on a set of ATC checks, sometimes only after a few discussions, does everyone accept the raised errors and fix them. If we had simply used the default check variants without the adaptations mentioned above, I don't think the ATC would have been accepted as a tool to improve the code quality (e.g. due to a large number of false positives).

 

The next step we will take is the roll out of the ATC exemption process in our development organisation. The reason is that we have already noticed that some priority 2 errors can't be fixed due to different restrictions (e.g. usage of standard SAP functionality in custom code that leads to error messages). Therefore we need the exemption process in order to remove the errors in those special cases. Furthermore, I see the exemption process as a prerequisite for blocking the release of transport requests as long as ATC errors are present.

 

Finally, I'd be happy to discuss experiences with other ATC users.

 

Christian

Bugs in your custom ABAP code can be quite expensive when they impact critical business processes, which is why quality assurance of custom ABAP code is receiving more and more attention in business. Detecting bugs early in the development stages, before they can be moved across the landscape, ensures that the cost and risk impact is minimal. To reach this goal, SAP offers the ABAP Test Cockpit (ATC) and the Code Inspector as quality assurance tools.

 

The ATC is available with EhP2 for SAP NetWeaver 7.0 support package stack 12 (SAP Basis 7.02, SAP Kernel 7.20) and EhP3 for SAP NetWeaver 7.0 support package stack 5 (SAP Basis 7.31, SAP Kernel 7.20).

 

General process for releasing a transport request

The transport organizer is a tool for managing the objects that gather the changes carried on during the development and configuration phases, and for transporting them across the landscape. The two kinds of objects used are the Request and the Task.

The Request is the main container, which contains zero to any number of Tasks. The CTS automatically creates one task for each user who adds objects to the Request. An ABAP transport request may contain many tasks that are assigned to different users. When you want to transport the Request, you have to first release all the tasks of the request, and then the request itself. When it is released, the transport is done automatically or manually by the administrator. The transport goes to the systems and clients defined in the transport routes.

 

Current behavior of Code Inspector checks during the release of a transport request or a transport task

Releasing a transport request or a task can be considered the first quality gate to ensure that poor-quality custom code is not transported across the landscape. Currently, Code Inspector checks can be activated during the release of a transport request. To activate this feature, perform the following steps:

  1. Go to transaction SE03
  2. Double click on the entry 'Global Customizing' (Transport Organizer)
  3. Under 'Check Objects when Request Released', select the option 'Globally Activated'.

 

Now this activates the check for a transport request. But there may also be a requirement to check the individual 'tasks'. Currently, automatically triggering Code Inspector checks during the release of a 'task' is not available as standard. To address this requirement, SAP provides the standard BAdI 'CTS_REQUEST_CHECK', which can be implemented by customers to trigger Code Inspector checks during the release of a task.

 

In this blog, I will illustrate the steps required to implement the BAdI, which when activated will trigger the checks during the release of a task.

(please adapt the naming conventions, texts, badi names, class names etc as per your requirement)

Steps for triggering Code Inspector Checks during the release of tasks

  1. Go to transaction SE19
  2. At the Create Implementation box provide the name of the classic BAdI 'CTS_REQUEST_CHECK' and click on the button 'Create Impl.'

Image_1.png

   3. Provide a BAdi implementation name

badi_impl_name.png

   4. Provide a short text. Click on the 'Save' button. Provide package details when prompted

badi_short_text.png

   5. Double-clicking on the method 'check_before_release' of the BAdI interface takes you to the method implementation of the generated ABAP Objects class that was created during the BAdI implementation creation.

5_badi.png

   6. In the method 'CHECK_BEFORE_RELEASE' first check whether the release concerns a transport request or a transport task. Code the following portion in the method 'CHECK_BEFORE_RELEASE' (a sketch of this check follows the screenshot below):

Code1.PNG
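In case the screenshot is hard to read, here is a rough sketch of such a check. It assumes the BAdI method provides the transport number in an importing parameter REQUEST and uses table E070, whose field STRKORR holds the parent request and is therefore only filled for tasks; treat these names as assumptions to verify in your system.

METHOD if_ex_cts_request_check~check_before_release.
  DATA lv_strkorr TYPE e070-strkorr.

  " A task carries its parent request in E070-STRKORR; for a request the field is empty
  SELECT SINGLE strkorr
    FROM e070
    INTO lv_strkorr
    WHERE trkorr = request.

  IF lv_strkorr IS INITIAL.
    RETURN.   " release of a request - nothing to do in this implementation
  ENDIF.

  " Release of a task: trigger the Code Inspector check here
  " (call the private method SCI_CHECK created in step 7)
ENDMETHOD.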

     7. For calling the actual Code Inspector check itself, create a new private method sci_check in the class 'ZCL_IM__CTS_REQUEST_CHECK'

  sci_chk1.png

     Provide the following parameters for the method SCI_CHECK

sci_check2.png

     create the method exception

sci_chk3.png

  8. The rest of the method SCI_CHECK contains the various steps of creating the Code Inspector check, assigning variants, object sets etc. It is sufficient to copy the piece of code from the attachment 'sci_check.txt.zip'. A rough outline of what the method does is sketched below.
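Since the attachment is not reproduced here, the following is only a rough outline of how a Code Inspector run can be started programmatically with the CL_CI_* API classes. The exact method names and signatures, the parameter it_objects and the check variant name Z_TASK_CHECK are assumptions that should be verified against the attached code and your release.

METHOD sci_check.
  DATA: lo_objectset TYPE REF TO cl_ci_objectset,
        lo_variant   TYPE REF TO cl_ci_checkvariant,
        lo_insp      TYPE REF TO cl_ci_inspection.

  " Object set built from the objects of the task (it_objects is assumed to be
  " a parameter of SCI_CHECK holding the task's object list)
  cl_ci_objectset=>save_from_list(
    EXPORTING p_objects = it_objects
    RECEIVING p_ref     = lo_objectset ).

  " Global check variant maintained in SCI (assumed name)
  cl_ci_checkvariant=>get_ref(
    EXPORTING p_user = ''
              p_name = 'Z_TASK_CHECK'
    RECEIVING p_ref  = lo_variant ).

  " Create a temporary inspection, assign variant and object set, and run it
  cl_ci_inspection=>create(
    EXPORTING p_user = sy-uname
              p_name = ''
    RECEIVING p_ref  = lo_insp ).
  lo_insp->set( p_chkv = lo_variant
                p_objs = lo_objectset ).
  lo_insp->run( ).

  " The results can then be read (e.g. via plain_list) and the method
  " exception raised to cancel the task release if errors are found.
ENDMETHOD.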


  9. Finally create the message class 'ZSCI' with the following values

msg_class.png

  10. Save and activate all your changes. Do not forget to activate the BAdI implementation in transaction SE19.

 

* Deactivate the BAdI in SE19, if you do not wish to use this feature

 

 

 


Sometimes we need to debug a process, but the logic that you need to debug comes after a button event, after three pop-ups. So, what do you do? Debug everything, trying to figure out where your point of interest starts... NO! You can create a shortcut on your desktop and drag and drop it onto your pop-up, or onto the screen before the event, and the debugger will start right after it.

 

Creating a debug shortcut:

Shorcut_step1.png

 

Change the title to help you identify the client, change the tcode to /h and choose a place to save the shortcut

Shorcut_step2.png

 

And Finish.

 

Go to your desktop and find your shortcut:

Shorcut_step3.png

 

Now, how the magic happens:

 

Shorcut_step3.png

 

A message will be shown...

Continue the process... and the debugger will start after your click event!

 

 

 

Hope it helps

Tarun Telang

Overview of Eclipse IDE

Posted by Tarun Telang Aug 5, 2013

Below is an easy-to-remember short description of the Eclipse IDE:

(E)ditor for many programming languages

(C)ode Faster

(L)ess Typing with Code Completion

(I)ntegrated Development Environment

(P)latform

(S)yntax Highlighting

(E)xtensible

 

Note: This is not an official expansion of Eclipse.

 

If you are still wondering what it means, please read ahead.

 

Following are the advantages of using Eclipse as a development tool:

  • An open and extensible development environment - the open plug-in architecture provides a suitable platform for extending it with more specific features and combining them together.
  • Coverage of the full software life cycle - design (modeling), construction (coding) and maintenance (deploying, debugging, monitoring, testing) tools, so applications can be developed, built, deployed and executed directly from the IDE.
  • Integrated environment - seamless integration

 

It offers openness and interoperability through standards and facilitates open-source integration.


Following are the Components of Eclipse Platform

  • Eclipse SDK
    • Eclipse JDT
    • Eclipse PDE
    • Eclipse Platform (RCP)
      • Eclipse UI
      • Eclipse File System
      • Eclipse Runtime
    • Eclipse Modeling Framework

 

References

The SAP Eclipse Story - http://www.sdn.sap.com/irj/sdn/nw-devstudio?rid=/library/uuid/10c671f2-6364-2a10-8d96-8b3145d4a478


Tutorials

  1. Eclipse IDE Tutorial - http://www.vogella.de/articles/Eclipse/article.html
  2. OSGi with Eclipse Equinox - Tutorial - http://www.vogella.de/articles/OSGi/article.html
  3. Eclipse Plugin Development Tutorial - http://www.vogella.de/articles/EclipsePlugIn/article.html
  4. Eclipse RCP Tutorial - http://www.vogella.de/articles/EclipseRCP/article.html
  5. Eclipse Modeling Framework (EMF) - Tutorial - http://www.vogella.de/articles/EclipseEMF/article.html

Introduction

In this video, we will discuss the test data container, and "internal" and "external" variants. We will see how to import the parameters we defined in our test script (discussed in part 5 of this video series). Finally, the internal variants defined will be used to create a template file for creating an external variant file.

 

Lesson 6 : Creating Test Data Container

 

Best Regards,

Gopal Nair.

In this video, we will be editing the test script we recorded, and replacing the hardcoded values with parameters.

 

Lesson 5 :Creating Test Script Parameters

 

Best Regards,

Gopal Nair.

Preface

Inspired by video tutorials made by Thomas Jung and also open sap course (http://open.sap.com), I decided to try my hands on video tutorials. I have always liked the "seeing and learning" experience, especially, when starting out with a new technology.

 

Lesson 4 :Test Script Recording Initial Dry Run

 

 

 

Best Regards,

Gopal Nair.

Preface

Inspired by video tutorials made by Thomas Jung and also open sap course (http://open.sap.com), I decided to try my hands on video tutorials. I have always liked the "seeing and learning" experience, especially, when starting out with a new technology.

 

Lesson 3 :Test Script Recording

 

Best Regards,

Gopal Nair.

Preface

Inspired by video tutorials made by Thomas Jung and also open sap course (http://open.sap.com), I decided to try my hands on video tutorials. I have always liked the "seeing and learning" experience, especially, when starting out with a new technology.

 

Lesson 2 : Test Script Initial Creation & Testing

 

 

Best Regards,

Gopal Nair.

Preface

Inspired by video tutorials made by Thomas Jung and also open sap course (http://open.sap.com), I decided to try my hands on video tutorials. I have always liked the "seeing and learning" experience, especially, when starting out with a new technology.

 

Lesson 1 : System Data Container

 

 

Best Regards,

Gopal Nair.
