In this blog I would like to describe the idea of data-driven testing and how it can be implemented in ABAP Unit.

Data-driven testing is used to separate test data and expected results from unit test source code.

It allows running the same test case on multiple data sets without the need to modify the test code.

It does not replace techniques such as test doubles and mock objects. It is still a good idea to abstract your business logic in a way that allows you to test it independently of data. But even if your code is built that way, you can still benefit from parametrized testing and the ability to check many inputs against the same code.

It is particularly useful for methods which implement more complex computational formulas and algorithms. In such cases the input space is very wide and there are many boundary cases to consider, so it is easier to maintain them outside of the code.

Other xUnit frameworks, such as NUnit for .NET or JUnit for Java, provide built-in capabilities to run parametrized test cases and implement data-driven testing.

I was missing such features in ABAP Unit and started looking for potential solutions.

The solution which I will present is based on eCATT test data containers and the eCATT API.

eCATT test data containers are used to store the input parameters and expected results, while ABAP Unit serves as the execution framework for the unit tests.

For the sake of example, let's take a simple class with a method which determines the triangle type.

It returns:

  • 1 for Scalene (no two sides are the same length)
  • 2 for Isosceles (two sides are the same length and one differs)
  • 3 for Equilateral (all sides are the same length)

and throws an exception if the provided input is not a valid triangle:


CLASS-METHODS get_type
  IMPORTING
    a TYPE i
    b TYPE i
    c TYPE i
  RETURNING VALUE(triangle_type) TYPE i
  RAISING lcx_invalid_param.
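
The original post only shows the method signature. Just for context, a minimal sketch of the classification logic could look as follows; the constants c_scalene and c_isosceles are assumptions named by analogy to c_equilateral (used later in the tests) and map to the values 1, 2 and 3 listed above.

" Minimal sketch only - not part of the original post.
" Assumed constants in lcl_triangle: c_scalene = 1, c_isosceles = 2, c_equilateral = 3.
METHOD get_type.
  " A valid triangle needs positive sides, and each side must be shorter
  " than the sum of the other two (strict triangle inequality).
  IF a <= 0 OR b <= 0 OR c <= 0 OR
     a >= b + c OR b >= a + c OR c >= a + b.
    RAISE EXCEPTION TYPE lcx_invalid_param.
  ENDIF.

  IF a = b AND b = c.
    triangle_type = c_equilateral.  " all sides equal
  ELSEIF a = b OR b = c OR a = c.
    triangle_type = c_isosceles.    " exactly two sides equal
  ELSE.
    triangle_type = c_scalene.      " no two sides equal
  ENDIF.
ENDMETHOD.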







Now we proceed with creating unit tests.

There are two typical approaches:

- Creating a separate test method for each test case

- Bundling test cases in a single method with multiple assertions

Usually I'm in favor of the first approach, as it provides a better overview in the test log when some of the test cases fail. It is also easier to debug a single test case.

An example test case could look like this:


...
METHODS test_is_equilateral FOR TESTING.
...
METHOD test_is_equilateral.
  cl_abap_unit_assert=>assert_equals(
      act = lcl_triangle=>get_type( a = 3
                                    b = 3
                                    c = 3 )
      exp = lcl_triangle=>c_equilateral ).
ENDMETHOD.

Each time we want to add coverage and test some additional inputs, either a new test method has to be created or a new assertion has to be added.

To overcome this, we create a test data container in transaction SECATT with the parameters A, B, C and EXP_TRIANGLE_TYPE.

There we define test variants holding the input values and the expected triangle type for each test case.
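
For illustration, the variants in such a container could look like this (variant names and values are made up; the parameter names match the ones read in the code below):

Variant        A   B   C   EXP_TRIANGLE_TYPE
TRIANGLE_01    3   4   5   1
TRIANGLE_02    3   3   4   2
TRIANGLE_03    3   3   3   3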

In the ABAP code we define a test method which uses the eCATT API class CL_APL_ECATT_TDC_API to retrieve the variant values:


METHOD test_get_type.

    DATA: a TYPE i,
          b TYPE i,
          c TYPE i,
          exp_type TYPE i.

    DATA: lo_tdc_api TYPE REF TO cl_apl_ecatt_tdc_api,
          lt_variants TYPE etvar_name_tabtype,
          lv_variant TYPE etvar_id.

    lo_tdc_api = cl_apl_ecatt_tdc_api=>get_instance( 'ZTRIANGLE_TEST_01' ).
    lt_variants = lo_tdc_api->get_variant_list( ).

    "skip default variant
    DELETE lt_variants WHERE table_line = 'ECATTDEFAULT'.

    " execute test logic for all data variants
    LOOP AT lt_variants INTO lv_variant.

      get_val: 'A' a,
               'B' b,
               'C' c,
               'EXP_TRIANGLE_TYPE' exp_type.

      cl_abap_unit_assert=>assert_equals(
          exp = exp_type
          act = lcl_triangle=>get_type( a = a
                                        b = b
                                        c = c )
          quit = if_aunit_constants=>no ).

    ENDLOOP.

ENDMETHOD.

...

" reads one parameter value of the current variant from the test data container
DEFINE get_val.
  lo_tdc_api->get_value(
      EXPORTING
        i_param_name   = &1
        i_variant_name = lv_variant
      CHANGING
        e_param_value  = &2 ).
END-OF-DEFINITION.

In my project I ended up creating a base class for parametrized unit tests which takes care of reading variants and running test methods.

It has one method which does all the work:


METHOD run_variants.

  DATA: lt_variants TYPE etvar_name_tabtype,
        lo_ex TYPE REF TO cx_root.

  "SECATT Test Data Container
  TRY.
      go_tdc_api = cl_apl_ecatt_tdc_api=>get_instance( imp_container_name ).
      " Get all variants from test data container
      lt_variants = go_tdc_api->get_variant_list( ).
    CATCH cx_ecatt_tdc_access INTO lo_ex.
      cl_abap_unit_assert=>fail(
          msg  = |Test data container { imp_container_name } could not be read: { lo_ex->get_text( ) }|
          quit = if_aunit_constants=>no ).
      RETURN.
  ENDTRY.

  "skip default variant
  DELETE lt_variants WHERE table_line = 'ECATTDEFAULT'.

  " execute test method for all data variants
  " method should be parameterless and public in child unit test class
  LOOP AT lt_variants INTO gv_current_variant.
    TRY.
        CALL METHOD (imp_method_name).
      CATCH cx_root INTO lo_ex.
        cl_abap_unit_assert=>fail(
            msg  = |Variant { gv_current_variant } failed: { lo_ex->get_text( ) }|
            quit = if_aunit_constants=>no ).
    ENDTRY.
  ENDLOOP.

ENDMETHOD.
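
The definition of the base class itself is not listed here. A possible sketch is shown below; the attribute and parameter names come from the code above, while the types, visibility and class options are assumptions. The get_val macro also has to be adjusted so that it reads through the class attributes instead of the local variables used in the earlier example.

" Sketch only - assumed definition of the base class for parametrized unit tests.
CLASS zcl_zz_ca_ecatt_data_ut DEFINITION PUBLIC ABSTRACT FOR TESTING.
  PROTECTED SECTION.
    DATA: go_tdc_api         TYPE REF TO cl_apl_ecatt_tdc_api,
          gv_current_variant TYPE etvar_id.
    METHODS run_variants
      IMPORTING imp_container_name TYPE etobj_name    " type assumed
                imp_method_name    TYPE seocpdname.   " type assumed
ENDCLASS.

" get_val redefined to work on the base class attributes
DEFINE get_val.
  go_tdc_api->get_value(
    EXPORTING
      i_param_name   = &1
      i_variant_name = gv_current_variant
    CHANGING
      e_param_value  = &2 ).
END-OF-DEFINITION.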



The modified test class using this approach looks as follows:


CLASS ltc_test_triangle DEFINITION FOR TESTING DURATION SHORT RISK LEVEL HARMLESS
  INHERITING FROM zcl_zz_ca_ecatt_data_ut.

  PUBLIC SECTION.
    METHODS test_get_type FOR TESTING.
    METHODS test_get_type_variant.
    METHODS test_get_type_invalid_tri FOR TESTING.
    METHODS test_get_type_invalid_tri_var.
ENDCLASS.

CLASS ltc_test_triangle IMPLEMENTATION.

  METHOD test_get_type.
    "run method TEST_GET_TYPE_VARIANT for all variants from container ZTRIANGLE_TEST_01
    run_variants(
        imp_container_name = 'ZTRIANGLE_TEST_01'
        imp_method_name  = 'TEST_GET_TYPE_VARIANT' ).
  ENDMETHOD.

  METHOD test_get_type_variant.
    DATA: a TYPE i,
          b TYPE i,
          c TYPE i,
          exp_type TYPE i.

    get_val: 'A' a,
             'B' b,
             'C' c,
             'EXP_TRIANGLE_TYPE' exp_type.

    cl_abap_unit_assert=>assert_equals(
      exp = exp_type
      act = lcl_triangle=>get_type( a = a
                                    b = b
                                    c = c )
      quit = if_aunit_constants=>no
      msg = |Wrong type returned for variant { gv_current_variant }| ).
  ENDMETHOD.

  METHOD test_get_type_invalid_tri.
    "run method TEST_GET_TYPE_INVALID_TRI_VAR for all variants from container ZTRIANGLE_TEST_02
    run_variants(
        imp_container_name = 'ZTRIANGLE_TEST_02'
        imp_method_name  = 'TEST_GET_TYPE_INVALID_TRI_VAR' ).
  ENDMETHOD.

  METHOD test_get_type_invalid_tri_var.
    DATA: a TYPE i,
          b TYPE i,
          c TYPE i.

    get_val: 'A' a,
             'B' b,
             'C' c.

    TRY.
        lcl_triangle=>get_type( a = a
                                b = b
                                c = c ).

        cl_abap_unit_assert=>fail(
            msg = |Expected exception not thrown for invalid triangle - variant { gv_current_variant }|
            quit = if_aunit_constants=>no ).

      CATCH lcx_invalid_param.
        " OK - expected
    ENDTRY.
  ENDMETHOD.

ENDCLASS.


As you can see, with this approach it is very easy to create parametrized test cases whose data is maintained in an external container. Adding a new case only requires adding a new variant to the TDC.

It proved to be very useful for test cases that check complex logic and require multiple input sets to be covered.

There are also some challenges with this approach:

- you need to remember to pass quit = if_aunit_constants=>no in assertions, otherwise the test will stop at the first failed variant

- in the ABAP Unit results report only one test method is visible, and it does not reflect the number of variants tested

For those challenges I would love to see improvements in future versions of ABAP Unit, similar to what is available in other xUnit frameworks.

Ideally there should be a way to provide the variants in a declarative way and they should be visible as separate nodes in test run results.

Kind regards,

Tomasz
