
Last month, I published my abstraction class to manage the Word OLE link. It can generate complete (and complex) Word documents, but it is a little slow for big tables.

 

OLE is an old technology... DOCX is an XML-based file format... Enough to change my mind about Word file generation. Exit OLE, welcome XML.

 

I updated my abstraction class to generate DOCX files from ABAP directly, without any OLE usage. I know there are currently some projects that aim to do that (abap2docx, for example), but I think these projects are too complex to use, or not yet usable in real life.

 

With my class, it has never been easier to generate DOCX. You never have to use or understand XML.

 

Here is the code of the "hello world" program.

docx_generation-sap-abap-hello-world[1].jpg
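
In case the screenshot doesn't render, here is a rough sketch of what such a "hello world" report could look like. The method names and parameters below are assumptions based on the feature list and on the earlier OLE version of the class, not the documented API:

report z_docx_hello_world.

data lo_word type ref to cl_word.
create object lo_word.                 " empty document, no template
lo_word->write_text( 'Hello world' ).  " plain text, default style
lo_word->save( 'hello_world.docx' ).   " assumed method and parameter names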

The class is simple, but can manage complex documents!

 

Here is the feature list:

  • Empty document creation or with use of template (docx, dotx, docm, dotm)
  • Write text with or without style (character style and/or paragraph style)
  • Option to manually apply bold, underline, italic, strikethrough, subscript, superscript, small caps, font name & size, font color, highlight color, letter spacing
  • Management of alignment, indent, spacing before/after paragraph
  • Break line, page, section, continuous section
  • Write table with or without style (and option to define cell format : bold, color...)
  • Write Header / footer
  • Write end note / foot note
  • Write comments
  • Write numbered label (figure, table...)
  • Write table of labels (figures, tables...)
  • Choose portrait/landscape, manage page border
  • Add images
  • Add canvas
  • Insert table of content (toc)
  • Add and manage document properties
  • Create and insert custom fields
  • Style creation (character/paragraph)
  • Manage files in SAP Web Repository for template/image (SAPWR, access with transaction SMW0)

 

In the download file, you will find a test program that contains the class CL_WORD and a demo of how to use it. You will also find some images and one template. These files are used by the test program, but are not necessary for the class itself.

 

A French presentation can be found here: SAP : Générer un document Word DOCX en ABAP

 

And here is a direct download link (remember that you will need SAPLINK to install the .slnk file): http://quelquepart.biz/telechargements&file=L2RhdGEvbWVkaWFzL1pDTF9XT1JEX0RPQ1guemlwKjU1NGMxNQ&source=SCN-DOCX

 

A use case of the class can be found in ZAUTODOC : Automatic generation of technical BW documentation

 

Feel free to comment here


My other blog posts:

LISTCUBE replacement : a new way to display data

ZAL11 : a replacement for AL11

ZTOAD - Open SQL editor

Abstraction class to generate MSWORD with SAP using OLE

ZAUTODOC : Automatic generation of technical BW documentation

ABAP CDS in TechEd Keynote

 

Bjoern Goerke showed ABAP CDS in his keynote at TechEd Barcelona – yes, ABAP!

 

Interestingly, he did not talk much about all the DDL language elements of ABAP CDS. In fact, he used quite a simple CDS view:

 

cds1.gif

 

The select statement of the view wraps the access to database table zbg_marsdata. Some of the modeling capabilities of CDS shine through with the association _MarsSite that joins Z_MarsRoverMissions with Z_MarsSites. But this was of marginal importance for the presentation.

 

What he did talk about were annotations!

 

cds2.gif

 

The DDL source code consists mainly of annotations and not of SQL! What's that all about?

 

With annotations, you can semantically enrich your data model. And as you see, this is abundantly done above. Let's have a look behind the curtain.

 

What are Annotations?

 

From the compiler's point of view, annotations are simply something that can be written at given positions and have to follow a prescribed syntax. As shown in an example of the documentation, you can write something like

 

@v_annot4:{ annot0, annot1:'abc', annot2:123 }

 

into the DDL source of a CDS view. The source is syntactically correct and can be activated. Of course, such an annotation has no meaning as long as nobody evaluates it. During activation of a DDL source, its annotations are saved in system tables and there are system classes available to evaluate them.

 

Some annotations are evaluated directly during activation and by the ABAP runtime environment.

 

 

Annotations before ABAP 7.50

 

Before ABAP 7.50, only a handful of annotations played a role. Those were the annotations that are evaluated during activation and by the ABAP runtime environment. We call those annotations ABAP annotations. They are documented as part of the ABAP CDS documentation, e.g. the ABAP view annotations. An important example is @ClientDependent, which defines the client handling when Open SQL is used to access a CDS entity. Other examples are the EndUserText annotations that denote translatable texts.

 

Annotations with ABAP 7.50

 

The usage of annotations is not restricted to the ABAP Dictionary's own needs and the ABAP runtime environment (e.g. Open SQL). As said above, you can enter whatever annotations you want, as long as you stay within the syntax rules. Of course, there must be someone who evaluates them. And that's what software components of SAP do with ABAP 7.50! SAP software components such as OData, UI, and Analytics prescribe sets of annotations that can be used to achieve a defined behavior, and these components provide frameworks that evaluate their framework-specific annotations and act accordingly. In other words, it's no longer the ABAP runtime environment alone that evaluates DDL source code! Accordingly, the documentation of these annotations is not part of the ABAP CDS reference documentation (so don't send your error messages there ...) but is delivered by the respective software components. There is a landing page where all SAP annotations are listed and where you find links to the detailed framework documentation.

 

As you can see in the screenshot of Bjoern's session above, he uses lots of framework-specific annotations such as @Search... and @UI.... While the syntax coloring and code completion of ADT recognize them, the ABAP runtime environment (e.g. Open SQL) does not care at all. You have to work with the respective frameworks in order to see the effects! Of course, Bjoern did exactly that.

 

Here is an example of documentation that describes what you have to do in order to expose a CDS view as an OData service:

 

Exposing CDS View as OData Service

 

Have fun!

 

 

Note

 

Please note that SAP currently does not recommend creating customer annotations. At the moment, you should work with SAP annotations (ABAP annotations and framework-specific annotations) only.

 

 

PS: Sorry for the blurred screen shots, but they are taken from the TechEd video.

 



Hi Community!

 

I'd like to share a unit testing tool that my team and I have developed recently for our internal usage.

 

The tool is created to simplify data preparation/loading for SAP ABAP unit tests. In one of our projects we had to prepare a lot of table data for unit tests. For example, a set of content from the BKPF, BSEG, and BSET tables (an FI document). The output to be validated is also often a table or a complex structure.

 

Data loader

 

Hard-coding all of that data was not an option: too much to code, difficult to maintain, and terrible code readability. So we decided to write a tool which would get the data from TAB-delimited .txt files, which, in turn, would be prepared in Excel in a convenient way. Certain objectives were set:

 

  • all the test data should be combined together in one file (zip)
  • ... and uploaded to SAP - test data should be a part of the dev package (a W3MI binary object would fit)
  • loading routine should identify the file structure (fields) automatically and verify its compatibility with a target container (structure or table)
  • it should also be able to safely skip fields missing in the .txt file, if required (non-strict mode), e.g. when processing structures (like an FI document) with too many fields, most of which are irrelevant to a specific test.

 

Test class code would look like this:

...
call method o_ml->load_data " Load test data (structure) from mockup
  exporting i_obj       = 'TEST1/bkpf'
  importing e_container = ls_bkpf.

call method o_ml->load_data " Load test data (table) from mockup
  exporting i_obj       = 'TEST1/bseg'
            i_strict    = abap_false
  importing e_container = lt_bseg.
...

call method o_test_object->some_processing " Call to the code being tested
  exporting i_bkpf   = ls_bkpf
            it_bseg  = lt_bseg
  importing e_result = l_result.

assert_equals(...).
...


The first part of the code takes the TAB-delimited text file bseg.txt from the TEST1 directory of a ZIP file, uploaded as a binary object via transaction SMW0...


BUKRS BELNR GJAHR BUZEI BSCHL KOART ...
1000  10    2015  1     40    S     ...
1000  10    2015  2     50    S     ...


... and puts it (applying the proper ALPHA conversion exits, etc.) into an internal table with the BSEG line type.
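
As a side note on those ALPHA exits: here is a minimal sketch of what the conversion does for a field like BELNR, so that test values compare correctly with what is actually stored in the database:

data lv_belnr type bkpf-belnr.      " NUMC 10 with ALPHA conversion exit
lv_belnr = |{ '10' alpha = in }|.   " -> '0000000010', the stored form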


Store/Retrieve


Later, another objective was identified: some code is quite difficult to test when it has a select in the middle. Of course, good code design would assume isolation of DB operations from the business logic code, but it is not always possible. So we needed a way to substitute the selects in the code with a simple call, which would return the prepared test data instead if a test environment is identified. We came up with a solution we called the Store. (BTW, it might nicely co-work with the newly announced TEST-SEAM feature.)


The test class would prepare/load some data and then "store" it:


...
call method o_ml->store " Store some data with 'BKPF' label
  exporting i_name = 'BKPF'
            i_data = ls_bkpf. " One-line structure
...


... And then the "real" code is able to extract it instead of selecting from the DB:


...
if some_test_env_indicator = abap_false. " Production environment
  " Do DB selects here
else.                                    " Test environment
  call method zcl_mockup_loader=>retrieve
    exporting i_name  = 'BKPF'
    importing e_data  = me->fi_doc_header
    exceptions others = 4.
endif.

if sy-subrc is not initial.
  " Data not selected -> do error handling
endif.
...


In case of multiple test cases, it can also be convenient to load a number of table records and then filter them based on some key field available in the working code. This option is also possible:


Test class:


...
call method o_ml->store " Store some data with 'BKPF' label
  exporting i_name   = 'BKPF'
            i_tabkey = 'BELNR'  " Key field for the stored table
            i_data   = lt_bkpf. " Table with MANY different documents
...


"Real" code:


...
if some_test_env_indicator = abap_false. " Production environment
  " Do DB selects here
else.                                    " Test environment
  call method zcl_mockup_loader=>retrieve
    exporting i_name  = 'BKPF'
              i_sift  = l_document_number " Filter key from real local variable
    importing e_data  = me->fi_doc_header " Still a flat structure here
    exceptions others = 4.
endif.

if sy-subrc is not initial.
  " Data not selected -> error handling
endif.
...


As the final result, we can perform completely dynamic unit tests in our projects, covering most of the code, including DB-select-related code, without actually accessing the database. Of course, it is not the mockup loader alone which ensures that: it requires an accurate design of the project code, separating DB selection from processing code. But the mockup loader and the "store" functionality make it more convenient.


illustration.jpg

Links and contributors


The tool is the result of the work of my team, including:

 

The code is freely available at our project page on github - sbcgua/mockup_loader · GitHub

 

I hope you find it useful.

 

Alexander Tsybulsky

Messages are basically short texts stored in database table T100. What makes them special is the ABAP statement MESSAGE. This statement sends a message with a short text from T100 and adds a message type (S, I, W, E, A, X). The system behavior after sending a message is extremely context dependent and I'm not really confident that the documentation covers all the possible situations.
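
For instance, a minimal call looks like this (the message class ZDEMO is hypothetical); the one-letter type prefix decides the behavior:

MESSAGE s001(zdemo) WITH 'some' 'values'. " type S: displayed in the status bar
MESSAGE e001(zdemo) WITH 'some' 'values'. " type E: behavior depends on context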

 

Historically, messages were invented for the PAI-handling of classical dynpros. There they can be used to conduct an error dialog. As a rule, messages should be restricted to that usage. But there is an important exception. Messages are also closely connected to exception handling:

 

  • In exception classes that implement IF_T100_MESSAGE, messages from T100 can serve as exception texts. Then they are part of the semantic properties of an exception class, besides the class name and its superclasses.

 

  • For non-class-based exceptions, messages can play the role of a poor man's exception text concept.

    • By raising a classical exception with MESSAGE RAISING instead of RAISE, you add the message text and type to the exception. After handling such a classical exception with the EXCEPTIONS addition of the CALL statement, you find the information in the well-known system fields sy-msg... .

    • You can catch messages sent with MESSAGE by specifying the predefined classical exception error_message behind EXCEPTIONS of the CALL statement, as sketched below.
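
A minimal sketch of that technique (the function module name is made up):

CALL FUNCTION 'Z_LEGACY_FUNCTION' " hypothetical legacy function module
  EXCEPTIONS
    error_message = 4             " catches messages of type E and A
    OTHERS        = 8.
IF sy-subrc = 4.
  " the message signature is now in sy-msgid, sy-msgty, sy-msgno, sy-msgv1 ... sy-msgv4
ENDIF.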

 

What's missing?

 

Nowadays you work with class-based exceptions in your application programs. But from time to time you have to call legacy procedures that throw classical exceptions that are bound to messages. If you cannot handle the reason for the exception in place, you want to pass it to your caller in the form of a class-based exception. The problem is: how do you find an appropriate exception class, and how do you convert the message-based exception text of the original exception into an exception text of the exception class?

 

Since the exception texts of an exception class are part of its semantics, you would need your own exception class, or at least an exception text, for each message that might occur. Then you can raise the class-based exception, e.g. as follows:

 

meth( EXCEPTIONS exception = 4 ).
IF sy-subrc = 4.
  RAISE EXCEPTION TYPE cx_demo_t100
    EXPORTING
      textid = cx_demo_t100=>demo
      text1  = CONV #( sy-msgv1 )
      text2  = CONV #( sy-msgv2 )
      text3  = CONV #( sy-msgv3 )
      text4  = CONV #( sy-msgv4 ).
ENDIF.

 

Here, meth is a method that raises the classical exception exception with MESSAGE RAISING, and cx_demo_t100 implements IF_T100_MESSAGE and denotes a message that fits the message passed by the classical exception. If there is no appropriate exception class at hand that is able to cover all the messages that might be sent by a called procedure, shrewd developers also proceed as follows:

 

meth( EXCEPTIONS exception = 4 ).
IF sy-subrc = 4.
  RAISE EXCEPTION TYPE cx_demo_t100
    EXPORTING
      textid = VALUE scx_t100key( msgid = sy-msgid
                                  msgno = sy-msgno
                                  attr1 = 'TEXT1'
                                  attr2 = 'TEXT2'
                                  attr3 = 'TEXT3'
                                  attr4 = 'TEXT4' )
      text1  = CONV #( sy-msgv1 )
      text2  = CONV #( sy-msgv2 )
      text3  = CONV #( sy-msgv3 )
      text4  = CONV #( sy-msgv4 ).
ENDIF.

 

This exploits the fact that you can pass any structure of type scx_t100key to the constructor of an exception class. By doing so, you define a message from T100 not statically as a message text but when raising the exception. Only the attributes for the replacement texts have to be there. Knowing that, you can create a kind of generic exception class for messages. But this is not recommended for exception classes implementing IF_T100_MESSAGE. For such an exception class, the exception text should not be dynamic, and you should pass only constants of that class to the parameter textid. Furthermore, the above coding is quite cumbersome. And further-furthermore, there's no way to pass the message type.

 

Solution with ABAP 7.50

 

Since the above scenario is a valid use case, a solution is provided with ABAP 7.50: a new interface IF_T100_DYN_MSG that contains IF_T100_MESSAGE. It adds an attribute msgty for the message type, and it also adds predefined attributes msgv1 to msgv4 for the replacement texts (placeholders) of a message.

 

If an exception class cx_demo_dyn_t100 implements IF_T100_DYN_MSG, you can profit from a new MESSAGE addition to the RAISE EXCEPTION statement:

 

meth( EXCEPTIONS exception = 4 ).
IF sy-subrc = 4.
  RAISE EXCEPTION TYPE cx_demo_dyn_t100
    MESSAGE ID     sy-msgid
            TYPE   sy-msgty
            NUMBER sy-msgno
            WITH   sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
ENDIF.

 

This does basically the same as the example above, but now in a well-behaved way. You can pass the full signature of a message to an exception class, including the message type, and the runtime environment does the rest for you. Also, you don't have to care about the names of the attributes for the placeholders any more. When handling the exception, you have access to the message, e.g. as follows:

 

CATCH cx_demo_dyn_t100 INTO DATA(oref).
  cl_demo_output=>display(
    |Caught exception:\n\n| &&
    |"{ oref->get_text( ) }" of type { oref->msgty }| ).

 

You get back the message text and, what's new, also the message type. From now on, this is the recommended way of converting classical messages to exceptions.

 

For more information and more examples see:

 

 

The MESSAGE addition is also available for THROW in conditional expressions, of course.
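
A sketch of that variant, reusing the exception class from above and assuming the MESSAGE clause accepts the same ID/TYPE/NUMBER form as RAISE EXCEPTION:

DATA(text) = COND string(
  WHEN sy-subrc = 0
  THEN `OK`
  ELSE THROW cx_demo_dyn_t100( MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
                                       WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4 ) ).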

I promised to tell you why the INTO clause should be placed behind all the other clauses in an Open SQL SELECT statement. One reason is that Open SQL also wanted to support the SQL syntax addition UNION. A UNION addition can be placed between SELECT statements in order to create the union of the result sets. ABAP CDS offered its UNION from the beginning (7.40, SP05). If you wanted to use it in Open SQL, you had to wrap it in a CDS view. What hindered Open SQL? Well, the position of the INTO clause before the WHERE, GROUP BY and ORDER BY clauses. These clauses can be part of any SELECT statement participating in unions, and there must be only one INTO clause at the end. Therefore, with 7.40, SP08, as a first step, the INTO clause was given a new position.

 

Now, with ABAP 7.50, we can bring in the harvest. Let me show you an example. The task is to get the names of all the ABAP source texts of a package. These might be needed for searching in the sources or for dumping them into a file. All programs can be found in the database table TRDIR. You know that the source code files of some ABAP program types, like class pools and function pools, are distributed over include programs. In order to select the correct technical names of the include programs, it is not a bad idea to construct a ranges table that does the search for you, based on some known features.

 

Before ABAP 7.50, the construction of such a ranges table might have looked as follows:

 

DATA prog_range TYPE RANGE OF trdir-name.

SELECT 'I' AS sign, 'EQ' AS option, obj_name AS low, ' ' AS high
       FROM tadir
       WHERE pgmid = 'R3TR' AND object = 'PROG' AND devclass = @devclass
       INTO TABLE @prog_range.

SELECT 'I' AS sign, 'CP' AS option, concat( rpad( obj_name, 30, '=' ), '*' ) AS low,
       ' ' AS high
       FROM tadir
       WHERE pgmid = 'R3TR' AND object = 'CLAS' AND devclass = @devclass
       APPENDING TABLE @prog_range.

SELECT 'I' AS sign, 'EQ' AS option, 'SAPL' && obj_name AS low, ' ' AS high
       FROM tadir
       WHERE pgmid = 'R3TR' AND object = 'FUGR' AND devclass = @devclass
       APPENDING TABLE @prog_range.

SELECT 'I' AS sign, 'CP' AS option, 'L' && obj_name && '+++' AS low, ' ' AS high
       FROM tadir
       WHERE pgmid = 'R3TR' AND object = 'FUGR' AND devclass = @devclass
       APPENDING TABLE @prog_range.

 

Four individual SELECT statements are used to fill one internal table prog_range with the help of the APPENDING addition. Note the usage of string expressions in the SELECT lists.

 

With ABAP 7.50 you can pack the four SELECT statements into one (this can be called code push down):

 

DATA prog_range TYPE RANGE OF trdir-name.

SELECT 'I' AS sign, 'EQ' AS option, obj_name AS low, ' ' AS high
       FROM tadir
       WHERE pgmid = 'R3TR' AND object = 'PROG' AND devclass = @devclass
UNION
SELECT 'I' AS sign, 'CP' AS option, concat( rpad( obj_name, 30, '=' ), '*' ) AS low,
       ' ' AS high
       FROM tadir
       WHERE pgmid = 'R3TR' AND object = 'CLAS' AND devclass = @devclass
UNION
SELECT 'I' AS sign, 'EQ' AS option, 'SAPL' && obj_name AS low, ' ' AS high
       FROM tadir
       WHERE pgmid = 'R3TR' AND object = 'FUGR' AND devclass = @devclass
UNION
SELECT 'I' AS sign, 'CP' AS option, 'L' && obj_name && '+++' AS low, ' ' AS high
       FROM tadir
       WHERE pgmid = 'R3TR' AND object = 'FUGR' AND devclass = @devclass
       INTO TABLE @prog_range.

 

The result is the same as above and can be used to get the program names, e.g. as follows:

 

SELECT name
       FROM trdir
       WHERE name IN @prog_range
       ORDER BY name
       INTO TABLE @DATA(programs).

 

(The example is not bullet-proof, but it is just an example and you might extend it...)

 

As shown here, with UNION you can unite the result sets of SELECT statements for one database table, but it is also possible to combine the result sets of different database tables, as long as the numbers of columns and the column types match.
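
For instance, here is a sketch that unites object names from TADIR with program names from TRDIR; both columns are character-like with the same length, so the column types match:

SELECT obj_name AS name
       FROM tadir
       WHERE devclass = @devclass
UNION
SELECT name
       FROM trdir
       WHERE name LIKE 'Z%'
       INTO TABLE @DATA(names).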

 

For more information and examples see SELECT - UNION.

Here I am trying to explain how to do an outer join of two internal tables using the new internal table functions available in ABAP release 7.40. The same method can be extended to add more tables to the join.

 

Sample program is below. I hope the inline comments are clear. If you need any explanation, please add a comment below and I will try to answer.

 

report outer_joins.

class lcl_outer_join definition.
  public section.
    methods:
*Main method
      perform.
  private section.
*Sample structures to hold the initial values
    types: begin   of   struc1,
             f1 type numc1,
             f2 type numc1,
           end     of   struc1,
           begin   of   struc2,
             f1 type numc1,
             f3 type numc1,
           end     of   struc2,
*Table structure to hold the output. The common key between the tables is field F1
           begin   of   struc3,
             f1 type numc1,
             f2 type numc1,
             f3 type numc1,
           end     of   struc3,
           tab1    type standard table of struc1 with non-unique key f1,
           tab2    type standard table of struc2 with non-unique key f1,
           tab3    type standard table of struc3 with non-unique key f1,
           ty_numc type numc1.
    data: table1 type tab1,
          strc2  type struc2,
          table2 type tab2.
    methods:
      build_tables,
      outer_join,
      line_value importing value(key) type numc1 returning value(result) type numc1.
endclass.

class lcl_outer_join implementation.
  method perform.
*Build input tables
    build_tables( ).
*Perform outer join
    outer_join( ).
  endmethod.

  method build_tables.
*Populate initial values
*Reference: ABAP News for 7.40, SP08 - Start Value for Constructor Expressions
    table1 = value tab1( ( f1 = '1' f2 = '2' )
                         ( f1 = '2' f2 = '8' )
                         ( f1 = '1' f2 = '9' )
                         ( f1 = '3' f2 = '4' ) ).

    table2 = value tab2( ( f1 = '1' f3 = '5' )
                         ( f1 = '3' f3 = '6' ) ).
  endmethod.

  method line_value.
*Buffer the last accessed line, to reduce table accesses
    if strc2-f1 ne key.
      try.
*Read the line from the 2nd table, with respect to the key
*Reference: ABAP News for Release 7.40 - Table Expressions
          strc2 = table2[ f1 = key ].
        catch cx_sy_itab_line_not_found.
*If no corresponding line was found, avoid a dump and populate a blank line
          clear strc2.
          strc2-f1 = key.
      endtry.
    endif.
*Pass the required field from the 2nd table as the result
    result = strc2-f3.
  endmethod.

  method outer_join.
*Perform the join and display the output
*Reference: ABAP News for 7.40, SP08 - FOR Expressions
    cl_demo_output=>display_data( value tab3( for data1 in table1
                                            ( f1 = data1-f1
                                              f2 = data1-f2
*Field F3 is populated by calling the method, passing the common key
                                              f3 = line_value( data1-f1 ) ) ) ).

*If you are sure that table2 will always have an entry corresponding to the F1 field,
*then there is no need to create a method to read the value.
*Meaning: since my table1 has a row with F1 = 2, but table2 doesn't,
*the statement below will result in a dump with the exception
*cx_sy_itab_line_not_found. But if my table2 also had an entry
*corresponding to F1 = 2, then this single statement would be
*enough to perform the join.
    cl_demo_output=>display_data( value tab3( for data1 in table1
                                            ( f1 = data1-f1
                                              f2 = data1-f2
                                              f3 = table2[ f1 = data1-f1 ]-f3 ) ) ).
  endmethod.
endclass.

start-of-selection.
  data: joins type ref to lcl_outer_join.
*Initialize and perform the join
  create object joins.
  joins->perform( ).

Have you gotten into Git yet? It is one of the new things to know when working with teams on SAPUI5 Projects. But first...

 

 

git (1).png

This comic comes from XKCD.com by Randall Munroe and is re-used under Creative Commons.

 

Flashback

Earlier today I was looking for a draft blog that I had started with a simple title as a memory jog for me to come back and flesh out. Then I found this blog, which I had written up over the course of several plane trips' worth of waiting in lounges last year. Given that I presented a similar introductory Git course for ABAPers this year (2015) at SAP TechEd, I thought I should dust this off and polish it up for your enjoyment and learning pleasure.

The Call for Papers

When there was a call for community sessions before #SAPtd, I jumped on the opportunity to present on two topics that I felt were going to be under-represented at the event.

  1. Developer Communication Skills
  2. Git

So I proposed two abstracts to the community talk selection committee, and they told me that while they were very excited about both topics, they could only take one, and it would be the first one. Well, I was excited but, to be honest, also a little disappointed. As much as I wanted to talk on communication skills for developers, I really wanted to introduce ABAPers to Git as a method of source code control, as I know it will become very important as we move toward HCP and OpenUI5.

As we got closer to the conference, I was advised that one of the other speakers that was selected had to withdraw, and I was asked to prepare my second talk.

This presentation was only delivered at #SAPtd in Las Vegas, so all the people going to Berlin (2014) will miss out. For this reason, and because there should be a bit of an intro to Git here on SCN, I thought I would distill my off-the-cuff talk into a short blog for the benefit of the ABAPers in the universe.

Welcome to Source Control

So source control is not a new subject to ABAPers. We are used to putting our code into transport requests, which then enables the code to be delivered across the landscape and into production in an orderly manner.

Let's for a moment think about what happens when you put a code artifact into a transport request. It's not that hard. You lock the object so that you and only you can work on the object at one time. Until you release your transport, everyone else is prevented from doing anything to that object.

This is all well and good, but what happens in the following scenario?

  1. You are working on a new feature request.
  2. A bug is discovered in production that relates directly to your code object.

Well, resolving the bug will obviously trump the new feature, but you are halfway through your change and it will need to be parked while the emergency is dealt with.

The challenge is your change request might have a whole bunch of other objects that are not ready to go to prod, let alone ready for regression testing.

So mostly you do something like the following:

  1. Copy all of your changes to notepad* and save the file for safekeeping.
  2. Hack the lock on the transport request and remove your file from your transport request
  3. Copy the code from the prior transport (or back from production if they are different)
  4. Open a new transport request.
  5. Make the hotfix.
  6. Release the new transport.
  7. Add your object back into the old request and merge your changes in from the file you saved.
  8. When you are ready you release the transport.

While this is functional, it is not the best. Also note that only one person can be working on each file at once.

Enter Git

So what's so great about Git? Well, for starters, the file types involved in an OpenUI5 project don't have to be edited in SE80. In fact, you can set up your system to work on your project locally on your machine, and there are many great blogs here on SCN to help you with that. This means that you can work on your project on the move, rather than having to be connected to the ABAP app server in the way you need to in the ABAP world.

 

Back to the start.

So Git is a distributed source code control system where pretty much everything happens locally, and you only need a network connection when you are pushing to the remote server.

You don't even need to have a remote system to use Git. You can install git on your Linux, Mac or Windows system and use it to version whatever files you like without ever sharing your code. Personally, I have done this when I was working on a project even though I wasn't working with other team members. This enabled me to version code and roll back changes if they didn't work.
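
One thing worth doing before your first ever commit, by the way, is telling git who you are, so that your commits are attributed correctly:

git config --global user.name  "Your Name"
git config --global user.email "you@example.com"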

For most scenarios, though, you will need a remote system that all members of your team can access. For this, GitHub is the answer to your needs. Is it the only answer? No, there are others, but it is a great place to start.

So surf on over to GitHub and sign up for an account.

(insert image of GitHub initial screen)

There are several options to consume GitHub content: native apps, the command line and the GitHub site itself. I will focus on the command line in this blog because even though the native GUI applications simplify everything so nicely, sometimes the power that the command line affords is the only way to get out of the muddle. So I like to build up the muscle memory in my fingers so that when it hits the fan and my colleague's Windows commit has clobbered my commit, I can recover without getting into a "tizzy".

First the init

Let's assume that we will start locally, because you are on a long-haul flight (with power) and have had a great new idea you want to code or write about. Let's call it NextBigThing.

The first thing you are going to do is to initialise a folder to be tracked under source control.

As I said we will start with the command line and look at other options later:

05:58:42 ~/squarecloud$ mkdir NextBigThing
05:58:59 ~/squarecloud$ cd NextBigThing/
05:59:04 ~/squarecloud/NextBigThing$ ll
total 8
drwxrwxr-x  2 nigeljames nigeljames 4096 Nov  4 17:58 ./
drwxrwxr-x 19 nigeljames nigeljames 4096 Nov  4 17:58 ../
05:59:05 ~/squarecloud/NextBigThing$ git init
Initialized empty Git repository in /home/nigeljames/squarecloud/NextBigThing/.git/
05:59:11 (master) ~/squarecloud/NextBigThing$ ll
total 12
drwxrwxr-x  3 nigeljames nigeljames 4096 Nov  4 17:59 ./
drwxrwxr-x 19 nigeljames nigeljames 4096 Nov  4 17:58 ../
drwxrwxr-x  7 nigeljames nigeljames 4096 Nov  4 17:59 .git/
05:59:13 (master) 

So here we have created a new directory for our exciting new project and initialised it so that git can track its contents.

Next we will start editing our documents and/or code. I will start by creating a ProjectOverview.md to get my ideas down.

After editing that document for a while I need to see what is going on. I check the status back on the command line.

~/squarecloud/NextBigThing$ subl ProjectOverview.md
05:59:28 (master) ~/squarecloud/NextBigThing$ git status
On branch master

Initial commit

Untracked files:
  (use "git add <file>..." to include in what will be committed)

     ProjectOverview.md
nothing added to commit but untracked files present (use "git add" to track)
06:00:23 {master} ~/squarecloud/NextBigThing$

 

So let's see what is going on:

You can see that the command line prompt has changed from being round brackets and green to red curly brackets. This is my visual cue that my repository is not up to date.

 

So looking at the directory listing I can see my new file and by checking git's status I can see that I have one untracked file.

Git is nice and tells us what to do most of the time. It is telling me to add the file to be tracked. So let's do that:

 

06:00:23 {master} ~/squarecloud/NextBigThing$ git add ProjectOverview.md 
06:00:44 {master} ~/squarecloud/NextBigThing$ git status
On branch master

Initial commit

Changes to be committed:
  (use "git rm --cached <file>..." to unstage)

     new file:   ProjectOverview.md
06:00:47 {master} ~/squarecloud/NextBigThing$

It is now telling me that the tracked file needs to be committed. You can think of committing as being like saving a file. It is a snapshot of our file at that point in time.

So let's go ahead and commit the file:

06:00:47 {master} ~/squarecloud/NextBigThing$ git commit -m "Initial Idea"
[master (root-commit) cb62f3d] Initial Idea
 1 file changed, 5 insertions(+)
 create mode 100644 ProjectOverview.md
06:01:21 (master) ~/squarecloud/NextBigThing$

Did you notice that the brackets have now changed back to round?

This work-add-commit loop is what you will do most with git.

Share and share alike

We now need to share this idea with our stakeholders and team so we are going to create a repository on GitHub and then push our work there for all to see.

So we log onto GitHub and find the big green 'Add Repository' button. Personally, I love green for positive actions.

create-git-repo.png

 

We have to fill out the repository name, which has to be unique under your account. We enter a description and decide if we are making this public or private. I want the world to know about my new idea, so of course we are making this public.

There are a couple of other options:

  1. Initialize with a README
  2. Add .gitignore
  3. Add a licence

As we have content for our repository already, we are going to leave these blank, but if you were creating a repo from scratch on the GitHub site it would be pretty handy to choose these options.

A README.md is created if you select option one. This is a handy place to tell the world about your repo as it is displayed by default on the GitHub site.

The .gitignore file tells git which files to ignore when committing your code. This is handy if your IDE creates project files or there are secure files that are not appropriate to be shared publicly.
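
For example, a small .gitignore for a UI5-style project might look like this (the entries are purely illustrative):

# IDE project files
.project
.settings/
# dependencies, build output and logs
node_modules/
dist/
*.log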

Lastly, the licence is a handy feature for open source projects, which need a licence to be considered open source.

So, with all that out of the way, we press another Big Green Button and create our project.

 

repo-created.png

 

GitHub now presents us with options as to how to clone or push our repo.

Since we have content already we are going to push our repo.

06:01:21 (master) ~/squarecloud/NextBigThing$ git remote add origin git@njames.github.com:njames/NextBigThing.git
06:04:58 (master) ~/squarecloud/NextBigThing$ git push -u origin master
Counting objects: 3, done.
Delta compression using up to 8 threads.
Compressing objects:  50% (1/2)   
Compressing objects: 100% (2/2)   
Compressing objects: 100% (2/2), done.
Writing objects:  33% (1/3)   
Writing objects:  66% (2/3)   
Writing objects: 100% (3/3)   
Writing objects: 100% (3/3), 320 bytes | 0 bytes/s, done.
Total 3 (delta 0), reused 0 (delta 0)
To git@njames.github.com:njames/NextBigThing.git
 * [new branch]      master -> master
Branch master set up to track remote branch master from origin.
06:05:22 (master) ~/squarecloud/NextBigThing$     

Now if we refresh the GitHub repo page we can see all the commits made to it.

 

repo-after-push.png

 

The rest of the world can now clone our repo and find out about the NextBigThing.

 

Clone me baby, one more time

If we look at the repository properties, we can see there is a URL that we can use to clone the repo. There are several protocols we can use. Firstly, we can use http or (and this is my preference) we can use ssh. I prefer ssh because once I have added my public ssh key to GitHub, that is the way I am identified, and it is seamless.

The Windows client uses http, and you need to add your username and password.
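
If you want to go the ssh route, generating a key pair is a one-liner; you then paste the public key into your GitHub account settings:

ssh-keygen -t rsa -C "you@example.com"
cat ~/.ssh/id_rsa.pub   # copy this into GitHub > Settings > SSH keys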

So staying with the command line, this is how we clone:

git clone git@github.com:njames/NextBigThing.git 

Summary

So in this blog, we have learned that git is a great distributed source control system.

We have learned how to:

  1. init
  2. add
  3. commit
  4. push
  5. clone

There is a lot more to get into with git but you can get started and then learn as you go.

I hope you have found this a useful introduction, and if this topic is of interest, I will expand on some of these topics.

You can also refer to my session slides from my recent session at TechEd Las Vegas 2015

ABAP is getting more and more database-specific features – and this makes me worry. Please don't get me wrong – I have no problem with SAP's technological strategy, but with SAP's communication and documentation. And I'm convinced that this will lead to severe problems unless SAP decides to deliver better documentation.

 

SAP said it loud and clear: not every feature of the ABAP language is supported on every database platform – typical examples are AMDP and CDS with parameters. With parameterized CDS you can do amazing things: operational reporting, fast queries, and it is a very powerful tool for code pushdown to the database.

 

Before reading the next section, I would like you to answer the following questions about NW 7.40:

  • Do you know which DBMS of the PAM support parameterized CDS?
  • Suppose you have a complex SAP landscape with different DBMS: how do you find out whether parameterized CDS is supported in your landscape?
  • Suppose you are an SAP partner and your solution uses parameterized CDS – do you know on which DBMS your solution runs?

 

The answer is difficult, since in NW 7.40 parameterized CDS is not supported on every DBMS of the PAM, and SAP doesn't provide any official information about which database platforms are supported. This restriction is documented in the ABAP Help with two sentences: if you use parameterized CDS on a DBMS that doesn't support this feature, you get an exception. There is also a class that provides the information whether the system database supports the feature.
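
For reference, the class in question is presumably CL_ABAP_DBFEATURES (available as of 7.40, SP08); a runtime check could look like this sketch:

IF abap_true = cl_abap_dbfeatures=>use_features(
                 requested_features = VALUE #(
                   ( cl_abap_dbfeatures=>views_with_parameters ) ) ).
  " the current database supports CDS views with parameters
ELSE.
  " a fallback implementation for other databases is needed here
ENDIF.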

 

Where’s the Problem?

The answer is simple: how can we make technological decisions when SAP doesn't give precise information about which feature of ABAP (and of course of AS ABAP, too) works on which database? How can we be sure that our software runs in a system landscape with different DBMS? This is common in larger development systems and the general case for SAP partners.


We have the following possibilities:

  1. We can develop a solution that supports anyDB. This can be difficult and expensive since we have to develop two variants, say one with parameterized CDS and one without. Dual development has its costs, and if we have the choice we would like to avoid it. In the end I think we have to avoid it for economic reasons.
  2. We don't know or don't care about the restrictions. But then the solution crashes when it runs on a DBMS which doesn't support the feature. In the worst case this could lead to production downtime, which is of course no option. Unfortunately, parameterized CDS has some unique features, and when they are at the core of your application, a change could be problematic and expensive.
  3. We know about a restriction and change the DBMS. Of course this is expensive.
  4. We decide not to use the new techniques at all. This is annoying since we are paying for the platform. Without using the new features, the platform loses value.

 

This is the dilemma. Without knowing the facts, we could get into trouble if our system landscapes are running on multiple DBMS. Without this information, SAP partners can neither answer the question whether their solution runs on a specific DBMS nor make reasonable decisions.

 

What do we need?

The answer is simple: whenever there is a restriction, SAP customers and partners should be informed about it. Here we need a list of supported databases. As I explained above, it does not suffice that there is a solution only at runtime. I need this information in a release note, so that it is an eye-catcher no one can miss by chance.

 

Why I am worried?

At the moment there are only a few restrictions, but as I explained above, the times are over when AS ABAP was database agnostic and the whole PAM was supported. Of course, it was always possible to create database-specific solutions using Native SQL/ADBC, but the Code Inspector / ABAP Test Cockpit contains checks so that we can ensure at development time that our solutions are platform independent – yet I know of no such automated checks for parameterized CDS so far.

 

I think it is very likely that SAP will continue to develop ABAP features that are no longer database agnostic. As I said above, I have no problem with this, but I need explicit information about restrictions. Everyone with complex system landscapes needs this information, especially partners. If SAP continues adding database-proprietary features to the NetWeaver platform without precise information about the supported DBMS, we will start developing solutions where no one can tell on which DBMS they can run.

In case you don’t know about ABAP CDS let me short introduce the basic facts. CDS allows building database views with many features of SQL-92 that are available for HANA and anyDB. You can use it for

  • data models for operational analytics
  • fast queries – identification of business objects or building packages for parallelization
  • fast data access for OData services where joins are only used if necessary
  • you can use it to "redefine" DDIC information in SELECTs, which can help you to overcome restrictions in many frameworks like BRFplus
  • it will be one major cornerstone for building next-generation applications in NW 7.5 and S/4HANA, which was/will be explained at this year's SAP TechEd.

You can read more about it in the following blogs:

 

This is only a short blog entry, but it reflects the experiences with ABAP CDS in releases NW 7.40 SP8 and SP11 that we had in the development of two complex CDS models. These hints may sound easy, but if I had known them earlier, many things would have been easier.

 

Get the latest ADT tools

You need ABAP in Eclipse to create and edit DDL sources. Please don't work with older versions of the ADT tools: until summer, they allowed some dangerous manipulations that will get you into big trouble.

 

Learn SQL-92 and apply SQL best practices

CDS is about view building. Improve your SQL skills before starting with CDS; otherwise you will most likely create severe errors. Without SQL knowledge it is possible that you will get bad runtime results, but if you are a skilled SQL developer you will get amazing results.

 

Start slowly

With CDS you can build complex data models. But complexity is no value in itself – simplicity rules. So start slowly and explore all the features. The complexity will arise soon enough when you are building view on view, which is the usual programming model.

 

Learn about restrictions

Sometimes restrictions are hidden in the documentation: http://help.sap.com/abapdocu_740/en/abenselect_cds_para_abexa.htm and the ABAP Keyword Documentation. So please read it carefully. By the way: I don't like the way SAP communicates the restrictions, but this is the topic of another blog entry.

 

Use the ABAP package concept

CDS models tend to get complex and complicated within a short time. I recommend implementing CDS views with different purposes (operational reporting, OData, ...) in different ABAP packages. One reason for this is to control reuse. My experience is that many ABAP developers don't understand the concept of reuse and apply it whenever it is possible, not only when it is necessary.

 

Study OSS Notes

CDS is a new technology and there are problems like the one here: http://service.sap.com/sap/support/notes/2023690. So I recommend studying the OSS notes from time to time, e.g. the following note: http://service.sap.com/sap/support/notes/2238561.

 

In case of activation errors the following notes are helpful:

 

Be careful before changing data models

Our experience is that changing components of transparent tables can lead to activation errors when there are CDS views with parameters defined on top of them. In this case you have to perform all necessary changes manually. Because of those errors, we decided to test changes in CDS models in a sandbox first.

 

Look at transport protocols

As I mentioned before, CDS is a new technology and we had some surprises with the transport behavior. So keep an eye on the transport protocols.

 

Visit this years’s SAP TechEd

CDS is a cornerstone of new business applications – so I recommend attending ABAP lectures like "DEV106 - The ABAP Programming Model in SAP S/4HANA" (see https://scn.sap.com/community/abap/hana/blog/2015/09/08/abap-at-sap-teched-2015 for example).

Hello,

 

I wrote an abstraction class to generate complex MS Word documents from SAP. This class is delivered "ready to use" and covers, I hope, a large range of uses.

 

Why this class?

OLE syntax is not easy, and finding help is a pain. I learned all I needed to know and, to never have to do it again, I wrote this class.

 

Now, to write text into Word, I simply use the method "write_text", or "write_table" to write a complete table, as sketched below.
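
A minimal sketch of that usage (the parameter handling is simplified here; see the demo program in the download for the real signatures):

data lo_word type ref to cl_word.
create object lo_word.
lo_word->write_text( 'Hello from SAP' ). " write a simple text
lo_word->write_table( lt_data ).         " write a complete internal table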

 

What you can actually do with the class:

 

  • Create document from scratch or with a template (dotx)
  • Write text with or without style (font style or paragraph style)
  • Use option bold, italic, underline, choose font, font size, font color
  • Break line, break page, section, continuous section
  • Write tables, with or without a table style, and an option to define the format by cell (bold, background color...)
  • Footnote
  • Write simple header/footer
  • Choose orientation landscape/portrait
  • Add image
  • Add canvas
  • Insert table of content
  • Insert custom field
  • Change the title of the word window
  • Save document and close word

 

OLE is an old method and seems to be a little slow for big documents (in particular if you have a lot of tables). Beyond that, this class could help all people who have a problem with OLE syntax: it contains almost all the solutions.

 

In the download, you will find a test program that contains the class CL_WORD and a sample of how to use it. You will also find some images and one template. These files are used by the test program, but are not necessary for the class itself.

 

A French presentation can be found here: Faire communiquer SAP et MS WORD grâce à OLE - Quelquepart

 

And here is a direct download link (remember that you will need SAPLINK to install it): http://quelquepart.biz/telechargements&file=L2RhdGEvbWVkaWFzL1pDTF9XT1JEX09MRS56aXAqOGQwZmVh&source=SCN-OLE

 

Feel free to comment here


My other blog posts:

LISTCUBE replacement : a new way to display data

ZAL11 : a replacement for AL11

ZTOAD - Open SQL editor

Generate DOCX file in ABAP

ZAUTODOC : Automatic generation of technical BW documentation


The Delivery Barrier

Posted by Volker Wegert Oct 29, 2015

Introduction

 

In the last years, ABAP development has become a lot easier. The individual developer's perspective has improved considerably through advanced tooling (ADT!), better online documentation and community support. When I started ABAP development around 2001, you still needed a dedicated machine with quite a price tag just to run the development system of our landscape at a decent speed. Today, that's easily accomplished using a virtual machine, even on cheap off-the-shelf hardware (which is obviously not recommended for a production system, but hey – what do you think the average Hudson CI server in a small development shop runs on?). With pre-packaged demo systems, a complete ABAP development environment is within reach for most people – I just installed a system within a few hours (including downloading 15 GB of installation files, setting up the VirtualBox server and the underlying OS, while doing other stuff alongside). If you don't want to run the system on your own hardware, you can get a CAL account and run the systems in the cloud as well.

From an organizational point of view, things have become easier (that is: cheaper) as well. For the most part, you no longer need to invest in large-scale hardware (see above), and there are plans available (see below) that will provide you with up to 25 developer licenses, a registered namespace and access to pretty much everything you need for very reasonable prices.

A lot has been done to lower the entry barrier, to get individual developers and possibly small startup companies to embrace the ABAP ecosystem more easily, and I am very grateful for all the hard work that must have gone on behind the scenes to make this a reality. If you know ABAP and have some great ideas for additional products that complement some existing solution, just build your product, test it thoroughly, write whatever documentation you deem necessary, sell it to your customers and then deliver it.

"Just deliver it" – unfortunately, that is the one point that is still easier said than done. This has not been a problem so far, when ABAP development was only available to medium-to-large-scale enterprises anyway. For a small company comprising only a few enthusiastic employees, ABAP development has now become rather easy and cheap, while the delivery process has yet to undergo that transformation. There are a couple of options available that I would like to discuss in this article. I'm very well aware that this has become a rather lengthy article, but I hope it's worth the read – given the discussions of the last weeks, it most certainly needed writing.


The Obvious: “No. Nonononono.”


Okay, that’s not a serious alternative. You could, if you were so inclined, manually copy your solution over to each and every customer system. Don’t. Even. Think. About. It.


The Automated Obvious: “No.”


There’s a great tool available to get development objects out of the system and back into the system: SAPlink. Don’t get me wrong, it really is a great tool, if used by a knowledgeable mind for the right purposes. Software delivery is definitely not one of those.

SAPlink has a few advantages, which do look compelling, from a distance:

  • It’s free, which is always good, right?
  • It’s comparatively easy to learn – no complex concepts, just export to XML and reimport. Anyone can learn that fast.
  • It’s extensible – if it doesn’t support what you need, just add the missing pieces yourself.

However, when using it as a delivery tool, there are some serious issues.

  • It is most definitely not supported by SAP. Some of the import/export implementations available might not even use supported APIs to extract and insert development objects – probably because there are none. While this might not be huge deal for in-house use of SAPlink, the situation changes when you’re an external solution provider. You’ll be providing components for mission-critical enterprise systems – better make sure you’re covered when it comes to maintenance issues.
  • There are virtually no integrity checks during either the export or the import process. A lot of things can go wrong when packaging software, and SAPlink is simply not designed to handle any of these issues.
  • There is no support for object deletion. This happens frequently in product maintenance – you no longer need a class, so you delete it. SAPlink might be able to deliver the class, but it can’t deliver the deletion.
  • There’s no dependency resolution during import. Imagine a class that uses a structure that contains a data element that in turn refers to an interface. You might need to import some of these objects in the right order because it largely depends on the object type import/export plugin whether you can import inactive objects that refer to other objects that don’t yet exist. Sometimes you can’t, and then you have to keep tabs on the dependencies manually. Not cool.
  • Speaking of the plugins, the support for certain objects heavily relies on the plugins working correctly. Since the plugins come from a handful of loosely connected volunteers, they will naturally vary in quality, so YMMV.
  • One of the most important points might be that on the target side, SAPlink actually does little more than automate the object creation. You can create a class using SAPlink just like you could manually – and vice versa, you can't (legally, and most of the time technically as well) create a class that you couldn't create manually as well. That means that you have to develop and deliver your solution in a namespace that is fully writeable by your prospective customer. Either you use Y or Z and risk conflicts with existing customer objects (besides demonstrating that you probably shouldn't be delivering a product just yet), or you give the customer full access (production key) to a registered namespace, essentially rendering its protective character useless. Welcome to maintenance hell – oh and don't forget that some of the object type plugins don't (yet) support objects in registered namespaces.

Compared to the Copy&Paste approach, SAPlink certainly is a huge step – but for professional software delivery, a huge step in an entirely wrong direction.


The Slightly Less Obvious: “Not A Good Idea.”


Anyone who knows a bit about the ABAP environment will know what’s coming next: Transports. From my personal experience, this is probably the most popular way to deliver add-ons (it certainly is within the Healthcare sector) – we even get add-ons from the consulting departments that are as close to SAP as can be via transports. And why not – this method does have a number of advantages:

  • It is supported by SAP. Perhaps not exactly for the use case of delivering software to customers, but it is somehow covered by the standard maintenance.
  • Not only is it officially supported, but it is also widely tested, and it is guaranteed to be implemented in every system landscape you might want to deliver software to – simply because everyone needs it.
  • There is support for all relevant object types – obviously, since you need that capability within a customer system landscape anyway.
  • It does a really great job of handling stuff like import-level ordering, inactive import and mass activation and import post-processing. There is a huge amount of complexity involved that is cleverly hidden underneath a tool that every ABAP developer uses without even thinking about it. I’d highly recommend the course ADM325 to every ABAP developer to get a deeper knowledge about the inner workings of the system.

This looks like the perfect solution, but: the CTS/TMS that you know from your development or customer landscape was designed for transports within the system landscape. It was not designed for software delivery between different landscapes, although it is frequently used for this. Because of this off-label use, there are some nasty issues that are just waiting to bite you:

  • Transport identifiers are generated automatically using the system ID (SID) of the exporting system (<sid>K9<nnnnn>). Not considering some special values, there is no telling what SIDs you might encounter in a customer system landscape. If the SID of your delivery system exists in a target landscape, transport identifiers will collide sooner or later, with very unpleasant effects. That means trouble, and there is no easy way to solve this.
  • Only an extremely limited consistency check is performed before exporting a transport. Basically, if it’s a valid and mostly consistent object, you can export it. That includes Z-objects, test stubs, modifications and a few other things that might easily slip into a delivery transport unnoticed. You can implement checks for this using CTS extension points, but you have to be aware of the danger and prepare for it.
  • There is no support for modification adjustments, no SPAU/SPDD upgrade assistance. Your customer can modify your delivered objects (provided that you supply the modification key of the namespace, which you should), but then what? With the next delivery of that object, the customer has to back up his modifications (manually), have your transport overwrite them and re-implement the modifications (again, manually).
  • There is no integrated dependency management. The CTS/TMS is supposed to be used in a system landscape that has a homogeneous arrangement of software versions, so whatever you can safely export from the development box, you can probably import safely into the other boxes, right? If the transport originates from a system where HR happens to be installed and maybe you used some HR data elements or function modules just because they seemed convenient, you can export the transport easily. If the customer doesn’t have HR installed, you won’t land on the moon today – and you have no way of ensuring this beforehand, you’ll just notice during the import. The same pattern applies if you want to supply multiple add-ons that rely on each other – you can’t ensure that your customer will import them in the right order and only using matching versions.
  • Speaking of versions – there is no version management to speak of; you’ll have to store the version number and patch level manually if you want to, and build your own display function. Not a big deal, but cumbersome nonetheless.
  • The import order of individual transports is not governed in any way. This not only affects dependencies (as discussed above), it also allows for mishaps like partial downgrades, import of patches in the wrong order and numerous other issues that will keep your support staff unnecessarily busy. Even worse, unintended downgrading of database tables might lead to data loss.
  • One rather subtle problem lies hidden in the area of object deletion and skipped transports. With CTS/TMS transports, it’s easily possible to export a deletion record for an object that will cause the object to be deleted in the target system as well. Let’s assume you export that deletion record with version 5. The customer decides (consciously or by accident) to skip version 5 and upgrade directly from 4 to 6. In that case, the deletion record is not imported and the object stays in the system. In most cases, that won’t be a problem, but if you think of class hierarchies, interface implementations and other heavily interconnected objects, you might end up with leftovers of the unpleasant sort. This isn’t easy to solve, either: It’s not trivially possible to add a deletion record to the transport of version 6 because the TADIR entry of the object was deleted when exporting version 5, and you can’t add the deleted object to the transport of version 6 without creating the TADIR entry first. It’s possible, but not trivial – BTDT.
  • There’s a procedural trapdoor that might lead to unexpected results as well. Since you’re essentially using the normal change management software logistics system of the customer system landscape, your software upgrades might inadvertently be imported by the TMS along with regular in-landscape transports (which they technically are!). If that happens at the wrong time – especially when importing into the production system – bad things might occur. Avoid if possible.
  • As a last hidden obstacle: There’s no support for a clean upgrade path between releases. Your software will inevitably use some components of the NetWeaver Basis, ECC, CRM or any other SAP product – I’ll simply call this “the foundation” from now on. For different releases of the foundation your product relies upon, you will frequently have to deliver slightly different product versions. This means that during a major upgrade, objects may have to be deleted while others might have to be added or changed. You have to figure out a way to support this manually – there’s nothing in the CTS/TMS that will help you with that.

As you can see, while this obvious solution will be workable with a number of limitations for a wide variety of scenarios, it is far from ideal, it places a huge burden on the people managing the export and import process, and it will totally collapse in certain situations (SID collisions). One would think that there just has to be a better way.


Pricey Professional Product Preparation


Fortunately, there is a better way – after all, SAP manages to deliver a wide range of software based on the NetWeaver ABAP platform, and it all has to be packaged somehow. The software to do so (or at least a piece of software that is capable of doing so) is available; it’s known as the Add-On Assembly Kit (AAK for short; the software component is AOFTOOLS, probably for Add-On Factory Tools).

The online documentation of the AAK is freely available at http://help.sap.com/aak, so I won’t try to replicate all that’s written there. In a nutshell, the AAK provides two tools that assist with defining, checking and maintaining the contents of a deliverable software package (Software Delivery Composer) and with turning that package into an installable file (Software Delivery Assembler). While the AAK uses TMS tools internally, the entire process is much more sophisticated and specifically designed to support the “delivery to anonymous customer” scenario.

Note that although the AAK is an installable unit, it’s not a product that you can buy a license for. You sign a separate contract with the SAP Integration and Certification Center to have your solution certified, and as part of the process you get the AAK. The details are specified here and here as well as in note 929661.

The advantages of this approach, to name only a few, are:

  • Supported by SAP, used by SAP. What better reference could you wish for?
  • There’s extensive documentation available – an online reference, a 130-page PDF, even SAP Tutor introductory videos when I last had the chance to use it. In addition, you’ll be assigned a personal contact, and at least for the people I’ve had the pleasure to work with, their competence and professionalism leave nothing to be desired.
  • The import tools, namely transactions SAINT and SPAM, are known to most basis admins. In contrast to the common TMS transport operations, they are designed for imports of large software packages and deal with all kinds of issues.
  • If you ever wondered where the strange software component names come from – with the AAK, you get your shot at creating your own software component. The name of the software component is based on a registered namespace and therefore guaranteed to be unique; the delivery object lists contain the software component identifier and are therefore unique as well. The final EPS delivery files contain the system ID and installation number, which in this combination are unique as well (at least unique enough for all intents and purposes). Collisions with customer system IDs are thus avoided.
  • There are extensive consistency checks during the packaging and export process that can even be extended by customer checks. For instance, as an i.s.h.med developer, you may want to stop some generated function groups from being delivered because they need to be generated on the customer system. Writing a custom check for this is rather straightforward.
  • As the import uses the well-known SAINT/SPAM route, you’ll get full SPDD/SPAU modification support, including modification adjustment transports that can be used to automatically adjust the modifications on QA and production systems.
  • There’s an integrated dependency management system that allows you to specify which software components have to be present or may not be present in which versions. These dependencies are checked early during the import process – if a dependency is not met, the entire import won’t happen.
  • The AAK provides support for various upgrade or cross-grade scenarios, including release upgrades. You can build special Add-On Exchange Upgrade packages that allow you to cleanly remove objects that are no longer required in the new release and import whatever new objects are needed. This is fully integrated into the upgrade process itself.
  • Hardly worth mentioning, but of course there is full support for deleting objects. With the rather recently released version 5.0, there’s even support for deletable add-ons.
  • Compared with transports, a lot of additional checks take place during the installation process, including version checks (e.g. downgrade protection) and collision checks with other application components.
  • Since an entirely different set of tools is used for import, AAK Add-Ons can’t be mixed up with regular transports.
  • Aside from regular versions (“Releases”), the AAK provides support for patches. The patch import process enforces the correct order of patches, thus ensuring a consistent software state on the customer system.
  • Your software will be listed in the component version list via System > Status, which is something that every software developer should aspire to.
  • Finally, the certification process will get you a check by SAP and a certification logo, as well as a listing in the partner directory.

Considering the many advantages, one would have to think that this just has to be the solution to all problems. So why isn’t everyone using it? Obviously, there have to be some drawbacks. Let’s see…

  • It’s largely unknown to the general public, or at least that’s my observation. I hope I’m helping to change that, bit by bit.
  • The certification isn’t exactly cheap. One of the articles mentioned above has the figures – 15k€ for the first year and 10k€ for every subsequent year.
  • An extensive system landscape is required or at least strongly recommended for the packaging and testing process. The usual rule of thumb is three systems per foundation release supported – depending on your requirements, this number might change in either direction, but it’s a good estimate for starters.
  • The delivery process looks huge at first. It can be cut down once you get to know the system better and once the scenario is well-defined – there are many special cases that might not apply in your scenario, but you’ll have to think about them and decide whether to handle them consciously.

That last point is actually not necessarily a drawback: The AAK forces you to think about a lot of things that could go wrong and devise ways to prevent them. It imposes a certain level of quality – you can’t just deliver anything, at least not without consciously suppressing the warning telling you that you’re about to make a mistake.


The Steep Incline


After going through the delivery processes in detail, what’s the point?

Starting SAP development is nowadays relatively easy and cheap – if I read the SAP PartnerEdge Program for Application Development information correctly, you get a full-fledged package for 25 named developers for 2.5k€ per year. This will enable you to develop software and deliver it using transports, which – as we have seen – is not an optimal solution.

If you want to upgrade to the professional solution, you’ll easily end up paying ten or twenty times as much. You have to consider the basic certification fee (15k€ for the first year) for one thing, but you will also have to provide and maintain the system landscape. This might be an ideal opportunity for the cloudizationalized world – just use some pre-configured instances available from the SAP Cloud Appliance Library, link them up in a transport domain and there you are, right? Right, just that each system will easily set you back 1k€ or more per month in CAL and cloud provider fees. So for certification and three cloud-based systems during the first year, you’ll probably end up with a total bill in the area of 30-50 k€. Add to this the effort required to set up the delivery process if nobody is familiar with the AAK and the basis administration tasks (which is probably the case), and the point becomes clear:

Getting the software to the customer using the tool that is ideally suited for the job is so expensive in time and money that many (most?) small companies and even some of the largest there are resort to a sub-optimal solution. One would hope this might change, so that now that we have the development environment available for a fraction of the cost of a few years ago, the same would apply for the delivery solution.

In ABAP, as a rule, the name is not always the game (see an entertaining recent discussion about that).

 

But as you all know there is a prominent exception to that rule: All the syntax forms involving CORRESPONDING for assigning structure components that (by chance) have the same name.

 

  • Before ABAP 7.40, these were mainly MOVE-CORRESPONDING for the components of structures, the CORRESPONDING addition to Open SQL's SELECT, and some obsolete calculation statements.

  • With ABAP 7.40 MOVE-CORRESPONDING was enabled to handle structured internal tables and a new constructor operator CORRESPONDING was introduced that allows an explicit mapping of structure components with different names.

 

What was still missing?  A dynamic mapping capability! And this was introduced with ABAP 7.50.

 

The new system class CL_ABAP_CORRESPONDING allows you to assign components of structures or internal tables with dynamically specified mapping rules.

 

The mapping rules are created in a mapping table that is passed to a mapping object, e.g. as follows:

 

DATA(mapper) =
  cl_abap_corresponding=>create(
    source      = struct1
    destination = struct2
    mapping     = VALUE cl_abap_corresponding=>mapping_table(
     ( level   = 0
       kind    = cl_abap_corresponding=>mapping_component
       srcname = '...'
       dstname = '...' )
     ( level   = 0
       kind    = cl_abap_corresponding=>mapping_component
       srcname = '...'
       dstname = '...' )
     ( level   = 0
       kind    = cl_abap_corresponding=>mapping_component
       srcname = '...'
       dstname = '...' ) ) ).

 

This is a simple example, where all structure components are on the top level (0) and where all components are to be mapped (kind = cl_abap_corresponding=>mapping_component). More complicated forms involve nested structures and exclusions. With srcname and dstname the component names can be specified dynamically. The table setup is similar to the MAPPING clause of the CORRESPONDING operator.
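For comparison, the static counterpart of such a mapping with the CORRESPONDING operator and its MAPPING addition would look like this (the component names id, text, key, and txt are invented for illustration):

struct2 = CORRESPONDING #( struct1 MAPPING key = id
                                           txt = text ).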

 

After creating the mapping object, all you have to do is to execute the assignment as follows:

 

mapper->execute( EXPORTING source      = struct1
                 CHANGING  destination = struct2 ).

 

You can do that again and again for all structures or internal tables that have the same types as those used for creating the mapping object.
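To see the whole round trip in one place, here is a minimal, self-contained sketch; the types, component names, and values are invented for illustration. Note that the component names are passed as (uppercase) character strings, so they could just as well come from a variable determined at runtime:

TYPES: BEGIN OF src_type,
         id   TYPE i,
         text TYPE string,
       END OF src_type,
       BEGIN OF dst_type,
         key  TYPE i,
         txt  TYPE string,
       END OF dst_type.

DATA(struct1) = VALUE src_type( id = 1 text = `Hello` ).
DATA struct2 TYPE dst_type.

" Map ID -> KEY and TEXT -> TXT with dynamically specified names
DATA(mapper) =
  cl_abap_corresponding=>create(
    source      = struct1
    destination = struct2
    mapping     = VALUE cl_abap_corresponding=>mapping_table(
      ( level = 0 kind = cl_abap_corresponding=>mapping_component
        srcname = 'ID'   dstname = 'KEY' )
      ( level = 0 kind = cl_abap_corresponding=>mapping_component
        srcname = 'TEXT' dstname = 'TXT' ) ) ).

mapper->execute( EXPORTING source      = struct1
                 CHANGING  destination = struct2 ).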

 

Not much more to say about that. For details and more examples see CL_ABAP_CORRESPONDING - System Class.

 

 

Outlook

 

Up to now, only the basic form of the CORRESPONDING operator is mirrored in CL_ABAP_CORRESPONDING. But a variant for using a lookup table is already in the queue.

Hey There


I keep hearing questions along the lines of “How can we print data on the next page based on a certain condition?”


So here is the answer: use the Command node in Smart Forms.


The next question is: how do we use the Command node to achieve this requirement?


Below is a step-by-step guide that will help you use the Command functionality correctly.



Example: This example shows how to print the data of 3 different internal tables on 3 different pages.

 

Step 1. Create a form with two pages.

                         Img1.png


Step 2. Create a main window on both pages (with the same dimensions):

                                                                   Page1 – Main Window

      Img2.png

     

                                                                  Page2 – Main Window

    Img3.png

 

 

Step 3. Inside the main window of the first page, create 3 tables and 2 commands:

The 3 tables are for the 3 different internal tables that we want to show on 3 different pages. Also set the attributes of the commands.

                                           

                                                            Img4.png

 

1st table loop: loop over the first internal table, which is to be displayed on the first page.

                                                       Img5.png

2nd table loop: loop over the second internal table, which is to be displayed on the second page.

                                                      Img6.png

3rd table loop: loop over the third internal table, which is to be displayed on the second page layout (i.e. the third physical page).

                                                     Img7.png


 

Command #1: On the attributes tab, tick the “Go to new page” check box and enter PAGE2 as the next page, as shown below:

                                      Img8.png

Command #2: On the attributes tab, tick the “Go to new page” check box and enter PAGE2 as the next page as well, as shown below:

                                                   Img9.png


Try the COMMAND node in your Smart Form to print different details on different pages.


From the main window of the first page we can thus control which data is printed on the following pages.


Note: One thing to take care of is that the main window of each page should have the same width.
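If you need a quick way to test the form, here is a minimal driver sketch; the form name ZSF_THREE_PAGES and the interface parameters IT_TAB1/IT_TAB2/IT_TAB3 are assumptions and must match your own form interface:

DATA lv_fm_name TYPE rs38l_fnam.

" Look up the function module generated for the Smart Form
CALL FUNCTION 'SSF_FUNCTION_MODULE_NAME'
  EXPORTING
    formname           = 'ZSF_THREE_PAGES'  " assumed form name
  IMPORTING
    fm_name            = lv_fm_name
  EXCEPTIONS
    no_form            = 1
    no_function_module = 2
    OTHERS             = 3.

IF sy-subrc = 0.
  " lt_tab1/2/3 are the three internal tables declared in the form interface
  CALL FUNCTION lv_fm_name
    EXPORTING
      it_tab1          = lt_tab1
      it_tab2          = lt_tab2
      it_tab3          = lt_tab3
    EXCEPTIONS
      formatting_error = 1
      internal_error   = 2
      send_error       = 3
      user_canceled    = 4
      OTHERS           = 5.
ENDIF.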


  Keep Learning 


Stay Awesome

   Romit Raina


After a long time of stagnation, the development of Open SQL awoke from its deep slumber and took some major steps in ABAP 7.40 in order to comprise as many features as possible from SQL92 and to offer about the same functionality as the SELECT statement of the DDL of ABAP CDS. In order to do so, a new foundation for Open SQL was laid by introducing a new SQL parser into the ABAP runtime environment. One consequence of this is the fact that Open SQL now plays a somewhat different role in ABAP than before. While before 7.40 Open SQL was regarded more as a part of the ABAP language itself, the SQL character has meanwhile become more and more pronounced.

One of the major indications of this is the new role of host variables. Before 7.40, you used ABAP variables in Open SQL statements as is done in all other ABAP statements. In fact, this prevented further development quite effectively. Open SQL statements are executed on the database after being transformed to native SQL. In order to push down more sophisticated things than simple comparisons with ABAP variables in WHERE conditions - say, SQL expressions in many operand positions - the Open SQL parser must be able to distinguish clearly between operands that are evaluated by the database and ABAP variables whose contents have to be passed to the database. In order to fulfill this task, ABAP variables in Open SQL statements meanwhile fully play the role of host variables, as ABAP variables always did in static native SQL (EXEC SQL). You can and should prefix ABAP host variables in Open SQL with @. In fact, you can use the new SQL features introduced to Open SQL starting with Release 7.40 only if you do so. Other fundamental changes that were introduced to Open SQL in order to make it fit for the future were comma-separated lists and placing the INTO addition of a SELECT statement behind the authentic SQL clauses.
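For example, a simple SELECT with the @ prefix, a comma-separated list, and the INTO clause at the end looks like this (lv_carrname being an ordinary ABAP variable):

DATA lv_carrname TYPE scarr-carrname VALUE 'Lufthansa'.

SELECT carrid, carrname
       FROM scarr
       WHERE carrname = @lv_carrname
       INTO TABLE @DATA(result).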

 

As a first benefit of these measures, fundamental new features were rolled out in Open SQL already with ABAP 7.40, comprising SQL expressions in various operand positions and the possibility of inline declarations. With ABAP 7.50 this development continues, and this blog introduces some of the new features (more to come).

 

Host Expressions

 

In almost all positions where you can place host variables – including the operand positions of SQL expressions (available from 7.40 on) and the work areas of the writing Open SQL statements – you can now use host expressions with the syntax

 

... @( abap_expression ) ...


A host expression abap_expression can be any ABAP expression, that is a constructor expression, a table expression, an arithmetic expression, a string expression, a bit expression, a built-in function, a functional method, or a method chain, inside parentheses () prefixed with @. The host expressions of an Open SQL statement are evaluated from left to right and their results are passed to the database as is done for the contents of host variables. In fact, you can see host expressions as shortcuts for assigning ABAP expressions to ABAP helper variables and using those as host variables. The following example shows a table expression that reads a value from an internal table carriers on the right-hand side of a WHERE condition.


SELECT carrid, connid, cityfrom, cityto
       FROM spfli
       WHERE carrid =
         @( VALUE spfli-carrid( carriers[ KEY name
                                          carrname = name ]-carrid
                                          OPTIONAL ) )
       INTO TABLE @DATA(result).


I personally like the following:


DATA(rnd) = cl_abap_random_int=>create(
               seed = CONV i( sy-uzeit ) min = 1 max = 100 ).

INSERT demo_expressions FROM TABLE @(
   VALUE #(
    FOR i = 0 UNTIL i > 9
      ( id = i
        num1 = rnd->get_next( )
        num2 = rnd->get_next( ) ) ) ).


An internal table is constructed and filled with random numbers inside an INSERT statement. A cool feature for the ABAP documentation's demo programs ...


For more information see Host Expressions.

 

SQL Expressions

 

With ABAP 7.50, the usage of SQL expressions was extended as follows:

 

  • Besides using them in the SELECT list, you can use them as left-hand sides of comparisons in WHERE, HAVING, ON, and CASE, and as operands of a CAST expression. Note that this includes host variables and host expressions as operands of SQL expressions.

  • The following SQL functions can be used in SQL expressions now: ROUND, CONCAT, LPAD, LENGTH, REPLACE, RIGHT, RTRIM, SUBSTRING. The COALESCE function can have up to 255 arguments now.

As an example of an arithmetic expression on the left hand side of a WHERE condition see:

 

SELECT carrid, connid, fldate, seatsmax, seatsocc,
       seatsmax - seatsocc AS seatsfree
       FROM sflight
       WHERE seatsmax - seatsocc > @( meth( ) )
       INTO TABLE @DATA(result).

 

As an example for string functions see the following concatenation of columns into one column with CONCAT:

 

SELECT CONCAT( CONCAT( carrid,
                       LPAD( carrname,21,' ' ) ),
               LPAD( url,40,' ' ) ) AS line
       FROM scarr
       INTO TABLE @DATA(result).

 

This concatenation is not possible with the operator && that is available since ABAP 7.40.
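The other new string functions listed above can be used in the same way; here is a small additional sketch against SCARR (the column aliases are invented):

SELECT carrid,
       SUBSTRING( carrname,1,3 ) AS short_name,
       LENGTH( url ) AS url_length
       FROM scarr
       INTO TABLE @DATA(result).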

 

For more information see SQL Expressions.

 

Path Expressions

 

Path expressions are something you know from CDS already (duh!). If a CDS view exposes an association, the same or another view can access it using a path expression.

 

For example, the following CDS view uses path expressions in its SELECT list:

 

@AbapCatalog.sqlViewName: 'DEMO_CDS_USE_ASC'
@AccessControl.authorizationCheck: #NOT_REQUIRED
define view demo_cds_use_assocs
  with parameters p_carrid:s_carrid
  as select from demo_cds_assoc_scarr as scarr
{ scarr.carrname,
  scarr._spfli.connid,
  scarr._spfli._sflight.fldate,
  scarr._spfli._sairport.name }
where scarr.carrid = :p_carrid

 

The names of the associations are prefixed with an underscore _; the associations are defined in the following views:

 

@AbapCatalog.sqlViewName: 'DEMO_CDS_ASC_CAR'
@AccessControl.authorizationCheck: #NOT_REQUIRED
define view demo_cds_assoc_scarr
  as select from scarr
            association to demo_cds_assoc_spfli as _spfli
              on scarr.carrid = _spfli.carrid
     { _spfli,
       carrid,
       carrname }

 

 

@AbapCatalog.sqlViewName: 'DEMO_CDS_ASC_SPF'
@AccessControl.authorizationCheck: #NOT_REQUIRED
define view demo_cds_assoc_spfli
  as select from spfli
            association to sflight as _sflight
              on spfli.carrid = _sflight.carrid and
                 spfli.connid = _sflight.connid
            association [1..1] to sairport as _sairport
              on spfli.airpfrom = _sairport.id
     { _sflight,
       _sairport,
       carrid,
       connid,
       airpfrom }

 

With ABAP 7.50 Open SQL's SELECT can also use such path expressions in its SELECT list or FROM clause when accessing CDS views. The following Open SQL statement does the same as the first CDS view above:

 

SELECT scarr~carrname,
       \_spfli-connid AS connid,
       \_spfli\_sflight-fldate AS fldate,
       \_spfli\_sairport-name AS name
       FROM demo_cds_assoc_scarr AS scarr
       WHERE scarr~carrid = @carrid
       ORDER BY carrname, connid, fldate
       INTO TABLE @DATA(result).

 

Looks not too different, eh? Only the dots have to be replaced by backslashes \ (and because of this, the path expressions look like those for meshes). When compiling such an Open SQL statement, the path expressions are converted to joins on the database. Check it out with ST05.

 

For more information see Path Expressions.

 

More News

 

That's not all about Open SQL in ABAP 7.50. In an upcoming blog I will show you an enhancement to SELECT that became possible because the INTO clause can and should be placed at its end ...

Recently I had a requirement to provide a search help for a POWL selection screen field. The twist: the search help result had to depend on the value of another POWL selection screen field.

 

SAP provides the interface IF_POWL_OVS, which can be used to add an OVS search help to a POWL.

 

First, create an OVS event handler class by implementing the interface IF_POWL_OVS. As a prerequisite, set this class name as the OVS handler for the required field. This is done in the GET_SEL_CRITERIA method of the POWL feeder class by passing the class name to OVS_HANDLER_NAME.
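A minimal sketch of that registration; the field name S_004, the handler class name ZCL_POWL_OVS_PARTNER, and the component name SELNAME of the selection criteria table are illustrative assumptions:

METHOD if_powl_feeder~get_sel_criteria.
  " ... definition of the selection criteria ...

  " Register the OVS handler class for the field that needs the value help
  READ TABLE c_sel_criteria ASSIGNING FIELD-SYMBOL(<ls_crit>)
       WITH KEY selname = 'S_004'.
  IF sy-subrc = 0.
    <ls_crit>-ovs_handler_name = 'ZCL_POWL_OVS_PARTNER'.
  ENDIF.
ENDMETHOD.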

The interface has four separate methods: HANDLE_PHASE0, HANDLE_PHASE1, HANDLE_PHASE2 and HANDLE_PHASE3. The implementation is similar to other OVS implementations.

HANDLE_PHASE0 sets the configuration, such as the title, selection mode, row count, etc. Here the SET_CONFIGURATION method of I_OVS_CALLBACK is called.

scn_blog2.png

 

HANDLE_PHASE1 defines the selection fields and default values of the search help. Here the SET_INPUT_STRUCTURE method of I_OVS_CALLBACK is called. This is optional and only required if a dialog selection is needed to restrict the search result (search help type C – complex dialog).

 

scn_blog3.png

HANDLE_PHASE2 builds the search help result. Here the SET_OUTPUT_TABLE method of I_OVS_CALLBACK is called with the result set.

 

scn_blog4.png

HANDLE_PHASE3 exports the result value back to the entry field by setting the value of the context attribute.

scn_blog5.png

 

Here I want to emphasize the HANDLE_PHASE2 method, because it is where the dynamic result is provided. PHASE2 is used to build the search result set. The POWL interface provides one additional importing parameter, IT_RELATED_FIELDS, which contains the POWL selection screen fields. This importing table can be used to read the current selection criteria of the POWL.

 

IT_RELATED_FIELDS contains a field called M_ID, which is nothing but the selection screen field name that we assigned when defining the GET_SEL_CRITERIA method. For each selection screen field, the importing table contains properties like the type (parameter / select option), checkbox, dropdown, read-only, etc. The values that have been selected on the screen are available in the field M_VALUE (for a parameter) and MT_RANGE_TABLE (for a select option).

 

My requirement was to provide a business partner value help depending on the voyage that had been selected on the POWL screen. Hence I had to read which voyage was selected. To get the voyage, I read IT_RELATED_FIELDS with M_ID = S_003 (the screen field name of the voyage field).

scn_blog6.png

From the screenshot you can see that I’m reading the voyage by passing M_ID = S_003 (voyage), which is nothing but the screen name I defined for the voyage field in the GET_SEL_CRITERIA method. ET_R_VOYAGE is of type RSELOPTION.
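In code, the read shown in the screenshot looks roughly like this (a sketch; it assumes that MT_RANGE_TABLE can be converted to RSELOPTION):

" Read the voyage restriction entered on the POWL selection screen
DATA et_r_voyage TYPE rseloption.

READ TABLE it_related_fields ASSIGNING FIELD-SYMBOL(<ls_field>)
     WITH KEY m_id = 'S_003'.
IF sy-subrc = 0.
  " For a select option, the selected values come as a range table
  et_r_voyage = CORRESPONDING #( <ls_field>-mt_range_table ).
ENDIF.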

 

scn_blog7.png

Later I used the voyage information to read the partners related to it, and passed the final result set as the output for the search result list display.

scn_blog8.png

Runtime values IT_RELATED_FIELDS,

scn_blog9.png

 

Result:

 
