Introduction

Most of the retail customers that are EDI trading partners of Lindt companies demand that EDI INVOIC messages be labelled with a unique and sequential Interchange Control Number (ICN) without gaps. As already mentioned in a previous blog (What SAP may have forgotten to integrate), we do not use a full-blown EDI adapter like Seeburger but just an "Add-On" (for more details contact http://www.resource.ch about their rEDI Adapter). This rEDI Adapter is highly versatile and does an excellent job for all kinds of EDI messages we employ, but it had one major drawback: the generated ICNs were not part of the XML-EDI message and, therefore, could not be searched for in the XI monitoring. This became a dilemma as soon as INVOIC messages were rejected by customers (see the blog What SAP may have forgotten to integrate).

As a consequence I decided to use SAP standard means for generating document numbers (ICN = document number of an EDI message):

1. Use Number Range Objects (SNRO) to generate numbers.

2. Use ABAP Class mapping to incorporate the numbers as ICN into the EDI messages.

 

Defining Number Ranges for ICN Generation

As a prerequisite a number range object (SNRO) containing multiple number ranges is created. Each number range may be used for either all outbound messages or a certain message type (e.g. INVOIC) sent to an EDI partner.

The NR object ZEDICH_N06 defines 6-digit number ranges for multiple customers. The first customer (range = 00) has already received 1,001 INVOIC messages, the second customer 146, the third one 1,750, and so on.

 image

Within the ABAP mapping class the correct NR object and number range are set in the CONSTRUCTOR method.
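A minimal sketch of what such a CONSTRUCTOR might look like (the attribute names MD_NR_OBJECT and MD_NR_RANGE are my own illustration, not taken from the original coding):

METHOD constructor.
* Number range object and interval are fixed per mapping class / EDI partner
  md_nr_object = 'ZEDICH_N06'.   " NR object created via SNRO
  md_nr_range  = '00'.           " interval assigned to this customer
ENDMETHOD.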

    

 

Implementation of Interface IF_MAPPING

In order to create an ABAP mapping class we need to implement method IF_MAPPING~EXECUTE as described, for example, in Michal Krawczyk's blog and in the SAP Online Help (ABAP Mappings). For maintenance reasons I split the sample coding of the Online Help into distinct methods (e.g. INIT_INPUT_STREAM, PARSE_DOCUMENT, INIT_OUTPUT_STREAM).

image
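A condensed sketch of how IF_MAPPING~EXECUTE might delegate to these methods (the helper RENDER_DOCUMENT and the exact call sequence are my own simplification; SOURCE and RESULT are the xstring parameters of the interface):

METHOD if_mapping~execute.
* The mapping is split into small, maintainable steps
  init_input_stream( source ).    " xstring -> iXML input stream
  parse_document( ).              " build the DOM (me->mo_document)
  preprocessing( ).               " hook: e.g. XML-IDoc stream
* ... actual mapping logic ...
  postprocessing( ).              " hook: e.g. add the ICN to the XML-EDI stream
  init_output_stream( ).
  result = render_document( ).    " DOM -> xstring
ENDMETHOD.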

 

Methods PREPROCESSING and POSTPROCESSING enable the developer to pre-process e.g. the XML-IDoc stream and to post-process the XML-EDI stream. Here I will focus on the POSTPROCESSING method because our target is to add the new Interchange Control Number (ICN) to the XML-EDI stream.

 

Post-Processing of the XML-EDI Stream (Adding the ICN)

Method POSTPROCESSING is protected and needs to be re-defined to match the requirements of the customer. Method MAP_ICN is responsible for adding the ICN into the XML-EDI stream.

NOTE: Adding LOG-POINTs throughout the mapping class is quite useful when it comes to troubleshooting.

 

The coding of method MAP_ICN is split into the following two parts, corresponding to the two major functions being executed:

1. Get a new number (fm NUMBER_GET_NEXT) and generate the ICN according to the customer's requirements.

2. Insert the ICN into the XML-EDI stream. 

 

First we get a new number from the number range, followed by a MODULO calculation (depending on the size of the number range object -> here: 6 digits). Then we use this number to generate the ICN.

The ICN may be identical to the NR number. However, some customers demand mnemonics for both supplier and customer as prefix. And others may have even more complicated "formulas" to generate the ICN.
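A minimal sketch of this first part, reusing the instance attributes MD_NR_OBJECT / MD_NR_RANGE from the CONSTRUCTOR sketch above (the LNDS/MUMU prefixes are just the example from this customer's guidelines):

DATA: ld_number TYPE nrlevel,          " number returned by the NR object
      ld_seqno  TYPE n LENGTH 6,
      ld_icn    TYPE string.

CALL FUNCTION 'NUMBER_GET_NEXT'
  EXPORTING
    nr_range_nr = md_nr_range          " e.g. '00'
    object      = md_nr_object         " e.g. 'ZEDICH_N06'
  IMPORTING
    number      = ld_number
  EXCEPTIONS
    OTHERS      = 1.
IF sy-subrc <> 0.
* raise a mapping exception / write a LOG-POINT
ENDIF.

* 6-digit number range -> wrap around using MODULO
ld_seqno = ld_number MOD 1000000.

* Customer-specific "formula", e.g. supplier and customer mnemonics as prefix
CONCATENATE 'LNDS' 'MUMU' ld_seqno INTO ld_icn.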

NOTE: In case of TRADACOMS messages we would map the File Generation Number (FLGN) into the XML-EDI stream.

 

As you can see from the data elements (D_0020) we are mapping the ICN into an EDIFACT/EANCOM message. Instance attribute ME->MO_DOCUMENT (IF_IXML_DOCUMENT) is our handle to the XML-EDI stream. This instance attribute was created within method IF_MAPPING~EXECUTE.

Using method GET_ELEMENTS_BY_TAG_NAME (of IF_IXML_DOCUMENT) we specifically fetch the D_0020 data elements in the UNB and UNZ segment and overwrite their values with the generated ICN.
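A sketch of this second part, iterating over all D_0020 elements found in the document and overwriting their value with the ICN generated above:

DATA: lo_collection TYPE REF TO if_ixml_node_collection,
      lo_iterator   TYPE REF TO if_ixml_node_iterator,
      lo_node       TYPE REF TO if_ixml_node.

* D_0020 occurs in both the UNB and the UNZ segment
lo_collection = me->mo_document->get_elements_by_tag_name( name = 'D_0020' ).
lo_iterator   = lo_collection->create_iterator( ).
lo_node       = lo_iterator->get_next( ).

WHILE lo_node IS BOUND.
  lo_node->set_value( value = ld_icn ).   " overwrite with the generated ICN
  lo_node = lo_iterator->get_next( ).
ENDWHILE.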

The ICN has now become an integral part of the XI message.

 

And by extending the standard a little bit (see What SAP may have forgotten to integrate), the ICN (and the XI GUID) can be displayed within SAP R/3.

image

 

Summary

In this blog I wanted to demonstrate how you can utilize SAP standard number ranges on SAP-PI in combination with ABAP class mapping in order to sequentially number your outbound EDI messages.

However, even more importantly, I wanted to show you the simplicity and elegance of ABAP class mapping on SAP-PI, which will be lost on a Java-only stack.

Introduction

When dealing with the manipulation of XML data (e.g. XSLT transformations, ABAP class mappings) we often face the problem of visualizing the changed XML data in an easy way. As a consequence, we usually need more iterations to adjust our program logic until we get the required results.

Another challenge arises when we receive XML data as xstring input. If errors occur, how can we exclude (or confirm) that the XML itself was erroneous? Example: the input of the eBPP Customer component consists of base64-encoded DocumentSets. During the processing of these records, XML INVOIC IDocs are generated.

Finally, another trigger for writing this blog was a comment on Michal Krawczyk's blog requesting a better output of the XML data.

 

Further Readings:

Upload XML file to internal table

 

Class CL_XML_DOCUMENT - The jack-of-all-trades

The following piece of coding shows that as soon as you have created an instance of CL_XML_DOCUMENT you are just 2 simple steps away from displaying XML data received as xstring:

(1) Parse the XML Stream (XString) to DOM

(2)  Display document
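A minimal sketch of these two steps (method names as I recall them; if PARSE_XSTRING should not be available on your release, convert the xstring to a string first and use PARSE_STRING instead):

DATA: lo_xml     TYPE REF TO cl_xml_document,
      ld_xstring TYPE xstring,
      ld_rc      TYPE i.

* ld_xstring contains the XML data received as xstring
CREATE OBJECT lo_xml.

* (1) Parse the XML stream (xstring) to DOM
ld_rc = lo_xml->parse_xstring( ld_xstring ).

* (2) Display the document in the HTML viewer
IF ld_rc = 0.
  lo_xml->display( ).
ENDIF.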

 

The XML document is displayed in an HTML viewer. To search through the XML, simply use CTRL+F.

image

 

 

Irrespective of whether the XML input is provided as a file, internal table, string, or xstring, class CL_XML_DOCUMENT provides the appropriate methods for loading (file) or parsing the XML input (not shown; see Appendix: report ZUS_SDN_XML_DISPLAY).

 

Use Cases for XML Visualization

Currently I have two use cases in which I apply the versatility of class CL_XML_DOCUMENT:

(1) All ABAP mapping classes on SAP-PI have a static TEST method in which I can display the XML before and after the mapping.

(2) eBPP Customer (troubleshooting): As soon as I execute the processing of DocumentSets in debugging mode, I can display the intermediate XML IDocs in the HTML viewer for visual inspection. 

 

Summary

Whenever you need to quickly visualize XML data, take advantage of the versatile methods of class CL_XML_DOCUMENT.

 

 

Appendix

Coding of sample report ZUS_SDN_XML_DISPLAY.

Introduction

Quite often developers struggle with the communication between ALV grid instances. As soon as a record is selected in the "main" ALV list, the details for this record should be displayed in the "detail" ALV list (e.g. Customer -> SalesAreas). Whereas the first double-click event works as expected, quite frequently the second event fails, as in these forum threads:

CALL METHOD ob_grid1->set_table_for_first_display does not change display

Problem with Custom container ->cl_gui_custom_container/cl_gui_alv_grid

Text edioter in Pop up window

Can I use Docking container in Subscreen

 

In the past I have answered such questions several times, but even I have problems recovering my own sample reports from SDN. Therefore, I document this solution one last time in this blog and show that it makes almost no difference whether the two communicating (S)ALV grids are displayed on the same screen, on two different main screens, or even in a popup.  

 

(1) Example: Both ALV Lists Displayed in Same Main Screen

Below you see the (single) main screen showing the ALV list containing customer records.

Communicating ALV Grids: Initial State

As soon as the user double-clicks on a record the details (Salesareas of the customer) are displayed in the second ALV list. For displaying the two ALV grids together on the same screen a splitter container is used.

image

Double-clicking on a different customer row automatically refreshes the second ALV grid. 

image

 

(2) Each ALV List Displayed on Different Main Screen (not shown)

In this case each grid instance is displayed using a separate docking container.

 

(3) Detailed ALV List Displayed on Popup Screen

As soon as the user double-clicks on a record the details (Salesareas of the customer) are displayed in a popup.


image

 

Screens - Keep It Simple and Stupid

Provided that the ALV lists are the only elements displayed on the screens we can keep them quite simple: basic flow logic and NO elements (not even a custom control because it is not required)!

image

 

 image

 

NOTE: The sample report ZUS_SDN_TWO_ALV_GRIDS contains 3 screens (0100, 0200, 0205). The flow logic of all 3 (empty) screens is described in the documentation header of the report (see Appendix: report ZUS_SDN_TWO_ALV_GRIDS).

 

Initialize & Display Controls Before Calling any Screen

Even though virtually all SAP sample reports (BCALV_...) show the creation and display of controls (container, grid, tree) within the flow logic of the displaying screen, this is not mandatory. On the contrary, I prefer to do the entire control initialization BEFORE calling any screen. Below you see the START-OF-SELECTION section of my sample report. Within routine INIT_CONTROLS all container and grid instances are created. In addition, I even call method SET_TABLE_FOR_FIRST_DISPLAY within this routine.

As the next step I link the main ALV list (parent container = GO_DOCKING) to the target screen '0100'. Finally, the main screen is called.
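A sketch of this START-OF-SELECTION section, assuming GO_DOCKING is the docking container of the main ALV list (routine names are illustrative):

DATA: go_docking TYPE REF TO cl_gui_docking_container.

START-OF-SELECTION.

* Create all containers/grids and call SET_TABLE_FOR_FIRST_DISPLAY
* before any screen is called
  PERFORM init_controls.

* Link the main ALV list (parent container GO_DOCKING) to screen 0100 ...
  CALL METHOD go_docking->link
    EXPORTING
      repid = syst-repid
      dynnr = '0100'.

* ... and only now call the main screen
  CALL SCREEN 0100.

FORM init_controls.
  CREATE OBJECT go_docking
    EXPORTING
      repid = syst-repid
      dynnr = '0100'
      ratio = 90.
* create the grid instance(s) and call set_table_for_first_display here as well
ENDFORM.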

 

 

What is the advantage of this approach? Well, if you want to display the ALV list on another screen of your application, then just do it - by calling the LINK method of the container again.

 

Displaying ALV Grid in Main Screen vs. Popup - So What?

The only difference is the dynpro level:
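My reading of this, as a hedged sketch: the only thing that changes is which dynpro the detail grid's container is linked to and how that dynpro is called (screen numbers as in the sample report; GO_DOCKING_DETAIL is an illustrative name):

* Detail grid on main screen 0200
go_docking_detail->link( repid = syst-repid dynnr = '0200' ).
CALL SCREEN 0200.

* Detail grid on popup screen 0205
go_docking_detail->link( repid = syst-repid dynnr = '0205' ).
CALL SCREEN 0205 STARTING AT 10 5.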

 

Leave the Event Handler and Trigger PAI

The advantage of this approach is that you always end up with a clearly defined status. After handling the user command at PAI, the flow moves on to PBO, where flushing of the controls occurs by default.
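A sketch of this pattern (handler, module, and routine names are illustrative; MS_ROW is a static attribute of the local event handler class):

* Event handler: only remember the selected row and trigger PAI
METHOD handle_double_click.
  ms_row = e_row.
  cl_gui_cfw=>set_new_ok_code( new_code = 'DETAIL' ).
ENDMETHOD.

* PAI of screen 0100: refresh the detail grid here; the succeeding PBO
* flushes the controls automatically
MODULE user_command_0100 INPUT.
  CASE ok_code.
    WHEN 'DETAIL'.
      PERFORM refresh_detail_grid.
  ENDCASE.
ENDMODULE.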

 

Classic vs. New ALV Object Model - Nothing Special

Regarding the different modes of displaying ALV lists there is no difference whether you are using class CL_GUI_ALV_GRID or CL_SALV_TABLE because the LINK method belongs to the container controls and not to the grid controls (see Appendix: report ZUS_SDN_TWO_SALV_GRIDS).  

 

Summary

Making ALV grids communicate is a standard ABAP development task, and it is not complicated by the different modes of displaying the ALV lists. 

The approach described here can be adapted for any kind of communicating controls (e.g. tree & grid).

 

 

 

Appendix

(A) Report ZUS_SDN_TWO_ALV_GRIDS

 

 

(B) Report ZUS_SDN_TWO_SALV_GRIDS

Introduction

One major implicit agreement in the EDI business is that "No News is Good News". A customer ordering via EDI assumes that its purchase orders will be fulfilled completely as long as the customer is not informed otherwise by the supplier. A supplier assumes that its EDI invoices have been accepted and will be paid unless they are rejected by the customer. Given the fact that we do not use a full-blown EDI adapter like Seeburger at Lindt (CH) but just an "Add-On" (for more details contact http://www.resource.ch about their rEDI Adapter) which does the conversion from XML to flat file and vice versa, we do not (yet) have an end-to-end monitoring of our EDI processes. How do we cope with this situation in case of EDI incidents?

The Challenge

If EDI invoices are rejected by the customer, we are informed by their EDI support team. Many customers tell us not only that a transmission failed but also which invoice (i.e. the invoice number) was affected. However, some customers just let us know that "Transmission (LNDSMUMU004392) failed" (see below).

E-Mail Notification

The cryptic number LNDSMUMU004392 is the sequential transmission number (or ICN = Interchange Control Number) as defined in the customer's EDIFACT EDI guidelines (LNDS = supplier prefix; MUMU = customer prefix; 004392 = sequential number). In case of TRADACOMS messages (UK-specific standard) the FIL segment is used for the numbering (field FLGN = File Generation Number). The EDI error described in the e-mail is that we sent a wrong invoicee party (NAD+IV) due to a wrong assignment of partner roles in the customer master data.

Now how do we find the affected invoice knowing only its transmission number, sent date and value? Using standard means (e.g. WE02) may become tedious if many invoices were sent to this customer on the same day (which actually is the case for this particular customer). Another option would be to check table VBRK for the specific net value. Or we could search through the XML messages on SAP-XI (SXMB_MONI) in the possible period of time. However, none of these ways is really efficient and all of them are too time-consuming. And what if multiple incidents happen within a short period of time?



eXtended IDoc Monitoring at Lindt

To tackle this acute support problem I developed an "eXtended IDoc Monitor" for Lindt UK (our EDI "pacemaker" among the Lindt group). The three main features of this extended monitor are:

1. Display the business object (e.g. invoice, delivery, etc.) on the IDoc list
2. Display the SAP-XI message ID (GUID) on the IDoc list
3. Display the ICN (or transmission number) of outbound IDocs on the IDoc list


In the standard IDoc monitor (WE02) you need 3-4 mouse clicks in order to navigate from the IDoc on the list to the business document. Since the link between IDoc and business document can be retrieved by standard means, I just removed this unnecessary navigation and brought the business document to the surface (i.e. onto the IDoc list).

On SAP-XI we can see the link between IDocs and the XI message ID (GUID) using transaction IDX5. Provided a suitable RFC connection is available, we can jump from SAP-XI into the business system and display the IDoc. Thus, we again have standard means to retrieve this link (IDoc - GUID).

In summary: all I have done is to grab the Lego bricks that SAP has built already and merge what obviously belongs together. The result is shown below.

eXtended IDoc Monitor: Selection Screen

As soon as the selection is started, the message displayed in the status bar shows the additional selection of business object data.

eXtended IDoc Monitor: Status Bar

The columns related to the business document are coloured in dark green.

eXtended IDoc Monitor: List Overview

As soon as we restrict the IDoc list to a particular message type (e.g. INVOIC), additional data specific to this business document type can be displayed (e.g. net value and payer in case of invoices). By simply sorting the IDoc list by column ICN (= Interchange Control Number) we found out that the rejected transmission LNDSMUMU004392 belonged to invoice 90081069.

eXtended IDoc Monitor: List Details

By right-clicking on the affected IDoc we can quickly call the appropriate function (e.g. "Maintain Store (code) mapping") in order to solve this incident. Needless to say, a double-click on either the business document number or the GUID forwards the user to the corresponding display transaction (e.g. VF03 for invoices) or to the list "XML Messages in Adapter" (IDX5) on SAP-XI, respectively.

eXtended IDoc Monitor: Context Menu

Business Activity Monitoring

You may have noticed that the selection screen of the eXtended IDoc Monitor (ZWE02) has an additional tabstrip "Bus.Object" compared to the standard. Wouldn't it be nice to follow the business activities of a document based on the sent and received IDocs?

eXtended IDoc Monitor: Bus.Object Selection Screen

The selection for a specific outbound delivery gives the following result:

• The warehouse sent the picking confirmation (WHSCON) back.
• The inbound WHSCON triggered an outbound SHPORD as ASN (= Advance Shipping Notice) to the customer (by means of a VOFM condition).
• Finally the warehouse sent a proof of delivery (STPPOD) which allows billing of the delivery.

Or in other words: we can see all business activities of the outbound delivery in the eXtended IDoc Monitoring.

eXtended IDoc Monitor: Business Activity Monitoring

Introduction

My comment about the importance of ABAP naming conventions to Thorsten Franz' blog Great Programmers will their Code to the new Guy triggered an interesting discussion about naming conventions for ABAP. The opinions range from "...is important" to "...We do not advise you on naming conventions, do whatever you want".

The latter statement sounds like a capitulation. This resignation might be due to the following questions:

  1. Are there useful naming conventions for ABAP? Apparently not even SAP has them.
  2. Do naming conventions really improve ABAP coding and how?
  3. How can we check the compliance of ABAP program objects with the naming conventions?


ABAP Naming Conventions: "Yes, we exist"
I started my SAP career more than 10 years ago at Cirrus Consulting. The very first thing I got there was a comprehensive documentation about ABAP naming and programming conventions which they had created for one of their biggest customers. And at the Zürcher Kantonalbank they had such a document, too.

Admittedly there were deviations from these conventions, in particular because many external developers worked at these customers and the Code Inspector (SCI) was not yet available at that time. Nevertheless these conventions facilitated the readability of the coding and the handover (for maintenance) to other developers. And if you look around carefully you will realize that even SAP (or at least a subset of its developers) uses naming conventions.

Now let us imagine a "...do whatever you want" customer where a developer creates a report (containing a constant for the speed of light) which in the subsequent years is maintained by three other developers. Three of the four use their own naming conventions and the fourth one uses none at all, so we may end up with the following representations of a single constant:

  1. GC_LIGHTSPEED or LC_LIGHTSPEED
  2. CO_LIGHTSPEED
  3. C_LIGHTSPEED

The fourth developer may replace all previous representations with his own SPEED_OF_LIGHT without any prefix, because everybody "knows" that this is a (physical) constant. This medley of naming conventions will definitely confuse every developer. And this is just a simple example compared with mixing up the naming of global and local variables.

In the following section I will present my own naming conventions which are a "best-of-breed" blend of SAP naming conventions and those of my previous employers.

 

Call a Spade a Spade

Naming conventions must be concise, short and distinct. Any ambiguity is a burden for maintenance in the future. Using three categories of criteria it is possible to unambiguously name virtually all variables and most signature parameters:

  1. Visibility: Global / Local / Class Context
  2. Type: Field / Structure / Table Type / Reference / Constants
  3. Structural Context: FORM routines / Function Modules / Methods

 

There are two general naming conventions:

  • Variables and Class Attributes: <Visibility>_<Type>_<Description>
  • Signature Parameters: <Structural Context>_<Type>_<Description>

 

 

Visibility & Type

We have three kinds of visibility: Global, Local and Class Context (which includes Interfaces as well) and seven different types (see below).

 

 

Visibility prefix G (Global):

  • Field (type prefix D): GD_MATNR, e.g. DATA: gd_matnr TYPE matnr.
  • Structure (S): GS_KNB1, e.g. DATA: gs_knb1 TYPE knb1.
  • Table Type (T): GT_VBAK, e.g. DATA: gt_vbak TYPE TABLE OF vbak.
  • Class/Interface (O): GO_GRID, GO_MSGLIST, e.g. DATA: go_grid TYPE REF TO cl_gui_alv_grid. DATA: go_msglist TYPE REF TO if_reca_message_list.
  • Data Object (DO): GDO_DATA, e.g. DATA: gdo_data TYPE REF TO data.
  • Constant (C): GC_LIGHTSPEED, e.g. CONSTANTS: gc_lightspeed TYPE i VALUE '300000'.

Visibility prefix L (Local):

  • Field (D): LD_MATNR, e.g. DATA: ld_matnr TYPE matnr.
  • Structure (S): LS_KNB1, e.g. DATA: ls_knb1 TYPE knb1.
  • Table Type (T): LT_VBAK, e.g. DATA: lt_vbak TYPE TABLE OF vbak.
  • Class/Interface (O): LO_GRID, LO_MSGLIST, e.g. DATA: lo_grid TYPE REF TO cl_gui_alv_grid. DATA: lo_msglist TYPE REF TO if_reca_message_list.
  • Data Object (DO): LDO_DATA, e.g. DATA: ldo_data TYPE REF TO data.
  • Constant (C): LC_LIGHTSPEED, e.g. CONSTANTS: lc_lightspeed TYPE i VALUE '300000'.

Visibility prefix M (Class Context):

  • Field (D): MD_MATNR, e.g. DATA: md_matnr TYPE matnr.
  • Structure (S): MS_KNB1, e.g. DATA: ms_knb1 TYPE knb1.
  • Table Type (T): MT_VBAK, e.g. DATA: mt_vbak TYPE TABLE OF vbak.
  • Class/Interface (O): MO_GRID, MO_MSGLIST, e.g. DATA: mo_grid TYPE REF TO cl_gui_alv_grid. DATA: mo_msglist TYPE REF TO if_reca_message_list.
  • Data Object (DO): MDO_DATA, e.g. DATA: mdo_data TYPE REF TO data.
  • Constant (C): MC_LIGHTSPEED, e.g. CONSTANTS: mc_lightspeed TYPE i VALUE '300000'.

 

The vast majority of variables within program objects are either global or local. And in the future there will be a remarkable shift towards class attributes (ABAP-OO). Thus, assigning a unique prefix to each group makes their visibility unambiguous for every developer.

Class attributes (instance and static) are special because they are global within the class/instance whereas they appear local from outside the class/instance. Without referring to a class(name) or instance these attributes are "hidden" (i.e. local).

Question: Does the type prefix offer any benefit?

In ABAP forum posts you may find the following definitions for ALV grid instances:

  • GRID: global (?), local (?), class context (?); class reference (?)
  • G_ALV_GRID: global (!?), local (?), class context (?); class reference (!?)
  • GO_GRID: global class reference OUTSIDE any class context (my naming convention)

Answer: Yes, because we meet the developer's expectation and anticipation.

You may argue: What a big fuss about such a little subtlety. My answer to this is: Every SAP developer who comes across a GO_GRID variable in any of my programs knows in advance(!) the meaning and scope of it.

 

 

Structural Context (1): FORM routine Signature

The SAP system does not care about which kind of formal parameters (USING or CHANGING) are used in FORM routine signatures. Both kinds of formal parameters can be changed within the routine and the modified contents transferred back to the calling program. This is ambiguity at its worst.

In order to make the signature of a FORM routine crystal clear, we define every input as a USING parameter and every output as a CHANGING parameter. Within the FORM routine all USING parameters should be regarded as "constants", meaning that they are neither changed nor is any change transferred back to the calling program.

 

USING (parameter prefix U):

  • Field (type prefix D): UD_MATNR
  • Structure (S): US_KNB1
  • Table Type (T): UT_VBAK
  • Class/Interface (O): UO_GRID, UO_MSGLIST
  • Data Object (DO): UDO_DATA

CHANGING (parameter prefix C):

  • Field (D): CD_MATNR
  • Structure (S): CS_KNB1
  • Table Type (T): CT_VBAK
  • Class/Interface (O): CO_GRID, CO_MSGLIST
  • Data Object (DO): CDO_DATA

 

Again, by defining Input = USING (prefix 'U') and Output = CHANGING (prefix 'C') we meet the developer's expectations and ease the understanding of the FORM routine.
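A small illustrative FORM routine following these conventions (the routine and parameter names are invented for the example):

* Input = USING (prefix U...), Output = CHANGING (prefix C...)
FORM determine_payer
  USING    ud_vbeln   TYPE vbeln_vf
           us_knb1    TYPE knb1
  CHANGING cd_payer   TYPE kunnr
           ct_return  TYPE bapiret2_t.

* ud_* / us_* parameters are treated as constants and never changed
  cd_payer = us_knb1-kunnr.

ENDFORM.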

 

 

Structural Context (2): Function Module Signature

The same logic applies to function module parameters. In addition, we can facilitate the understanding of TABLES parameters (yes, I know they are obsolete, yet some still like to use 'em) by reflecting their input/output behaviour in their names:

  • IT_ITAB = Input only
  • ET_ITAB = Output only
  • XT_ITAB = Input & Output

 

IMPORTING (parameter prefix I):

  • Field (type prefix D): ID_MATNR
  • Structure (S): IS_KNB1
  • Table Type (T): IT_VBAK
  • Class/Interface (O): IO_GRID, IO_MSGLIST
  • Data Object (DO): IDO_DATA

EXPORTING (parameter prefix E):

  • Field (D): ED_MATNR
  • Structure (S): ES_KNB1
  • Table Type (T): ET_VBAK
  • Class/Interface (O): EO_GRID, EO_MSGLIST
  • Data Object (DO): EDO_DATA

CHANGING (parameter prefix C):

  • Field (D): CD_MATNR
  • Structure (S): CS_KNB1
  • Table Type (T): CT_VBAK
  • Class/Interface (O): CO_GRID, CO_MSGLIST
  • Data Object (DO): CDO_DATA

TABLES:

  • "Importing" (IT): IT_VBAK
  • "Exporting" (ET): ET_VBAK
  • "Changing" (XT): XT_VBAK


Of course there is no technical difference whatsoever between "Importing", "Exporting" and "Changing" TABLES parameters. Yet the different naming already gives the developer an idea of the module's function without looking into the coding. Or in other words: try to "express" the function of the module in its signature.

 



Structural Context (3a): Method Signature

 

The logic explained above is just applied to method parameters as well and extended to the additional RETURNING parameters of methods.

 

 

IMPORTING (parameter prefix I):

  • Field (type prefix D): ID_MATNR
  • Structure (S): IS_KNB1
  • Table Type (T): IT_VBAK
  • Class/Interface (O): IO_GRID, IO_MSGLIST
  • Data Object (DO): IDO_DATA

EXPORTING (parameter prefix E):

  • Field (D): ED_MATNR
  • Structure (S): ES_KNB1
  • Table Type (T): ET_VBAK
  • Class/Interface (O): EO_GRID, EO_MSGLIST
  • Data Object (DO): EDO_DATA

CHANGING (parameter prefix C):

  • Field (D): CD_MATNR
  • Structure (S): CS_KNB1
  • Table Type (T): CT_VBAK
  • Class/Interface (O): CO_GRID, CO_MSGLIST
  • Data Object (DO): CDO_DATA

RETURNING (parameter prefix R):

  • Field (D): RD_MATNR
  • Structure (S): RS_KNB1
  • Table Type (T): RT_VBAK
  • Class/Interface (O): RO_GRID, RO_MSGLIST
  • Data Object (DO): RDO_DATA

 

 

Structural Context (3b): Static vs. Instance Attributes

Static attributes are special because they exist only once for all instances of this class. Modifying a static attribute within a given instance makes this change visible to all other instances. Both types of attributes have the prefix 'M' (class context) in my naming convention. In order to distinguish between static and instance attributes I apply the following convention:

  • Instance Attribute: with self-reference me-> (e.g. me->md_key)
  • Static Attribute: without self-reference (e.g. ms_row)

You can find an example of this naming convention in the coding of INCLUDE ZRSWBOSDR_C01 in my blog Multi-Purpose ALV Programming.  

Another unambiguous convention is to drop the self-reference prefix and use the fully qualified name for static attributes:

  • Instance Attribute: md_key (with or without self-reference prefix)
  • Static Attribute: lcl_eventhandler=>ms_row
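A small sketch of both conventions within a local event handler class (attribute names MD_KEY and MS_ROW as used above):

CLASS lcl_eventhandler DEFINITION.
  PUBLIC SECTION.
    CLASS-DATA: ms_row TYPE lvc_s_row.       " static attribute
    DATA:       md_key TYPE i.               " instance attribute
    METHODS:    handle_double_click
                  FOR EVENT double_click OF cl_gui_alv_grid
                  IMPORTING e_row.
ENDCLASS.

CLASS lcl_eventhandler IMPLEMENTATION.
  METHOD handle_double_click.
    me->md_key = e_row-index.       " instance: with self-reference me->
    ms_row     = e_row.             " static: without self-reference
*   alternative convention: lcl_eventhandler=>ms_row = e_row.
  ENDMETHOD.
ENDCLASS.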

 

 

Field-Symbols

We can apply the same naming conventions even to field-symbols, which makes them much more readable and understandable for every developer. This is particularly important because the contents of field-symbols cannot be analyzed statically but are only determined at runtime.  

Visibility prefix G (Global):

  • Field (type prefix D): <GD_MATNR>
  • Structure (S): <GS_KNB1>
  • Table Type (T): <GT_VBAK>
  • Class/Interface (O): <GO_GRID>, <GO_MSGLIST>
  • Data Object (DO): <GDO_DATA>

Visibility prefix L (Local):

  • Field (D): <LD_MATNR>
  • Structure (S): <LS_KNB1>
  • Table Type (T): <LT_VBAK>
  • Class/Interface (O): <LO_GRID>, <LO_MSGLIST>
  • Data Object (DO): <LDO_DATA>
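For illustration, the same prefixes applied to a few declarations (a sketch, not taken from a real report):

FIELD-SYMBOLS: <ld_matnr> TYPE matnr,             " local field
               <ls_knb1>  TYPE knb1,              " local structure
               <lt_vbak>  TYPE STANDARD TABLE.    " local table type

DATA: ls_knb1 TYPE knb1.

ASSIGN ls_knb1 TO <ls_knb1>.   " the prefix reveals scope and type at a glance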

 

SAP and Lego

Why has Lego become one of the world's most famous toys? Part of the answer can be found here:

"Lego pieces of all varieties are a part of a universal system. \ Despite variation in the design and purpose of individual pieces over \ the years, each remains compatible in some way with existing pieces. \ Lego bricks from 1958 still interlock with those made in 2009, and Lego \ sets for young children are compatible with those made for teenagers.

\ Bricks, beams, axles, mini figures, and all other parts in the Lego \ system are manufactured to an exacting degree of precision. When \ snapped together, pieces must have just the right amount of strength \ and flexibility mixed together to stick together. They must stay \ together until pulled apart. They cannot be too easy to pull apart, or \ the resulting constructions would be unstable; they also cannot be too \ difficult to pull apart, since the disassembly of one creation in order \ to build another is part of the Lego appeal. In order for pieces to \ have just the right "clutch power", Lego elements are manufactured \ within a tolerance of 2 µm."

 

QUESTION: "Have you ever seen a child which sits in front of a pile of Lego bricks claiming it can't build anything because all bricks have the same size???" 

 

For me SAP is just an incredibly huge pile of Lego bricks (for grown-ups). Consistent naming conventions are an indispensable standard which enables us to manufacture long-lasting developments that are easy to understand and maintain (pull apart and reassemble).

 

 

Code Inspector: Check Compliance with your Naming Conventions

The Code Inspector (transaction SCI) provides you with all means to check compliance with your naming conventions. Within the check variant you find an explicit check Programming Conventions.

CodeInspector_1_Overview.png

When you open the check Naming Conventions, a popup appears where you can define your conventions for data variables and signatures (FORM routines, function modules, classes).

Since I have not mentioned macros (DEFINE) yet, here is my convention:

  • MAC_<description>: e.g. mac_suppress_toolbar_btn


Another set of frequently used global variables are (select-)parameters and select-options:

SELECT-OPTIONS:

  S_MATNR      FOR mara-matnr.

PARAMETERS:

  P_MATKL      TYPE matkl.

CodeInspector_2_NamingConventions.png

Looking at the check Extended Naming Conventions for Programs you will find my 7 types (Field, Structure, Table Type, Class, Interface, Data Object, Constant) grouped into 5 prefixes:

  • Elementary Type includes Field and Constant
  • Object Reference includes Class and Interface

CodeInspector_3_ExtendedNamingConventions.png

The prefixes shown above are the default values when opening the check, and I do not agree with all of them. For example, according to my conventions the prefix "R" indicates a global range:

DATA: rt_matnr    TYPE RANGE OF matnr.

 

Summary

It is possible to define useful naming conventions for ABAP. Naming conventions significantly help in understanding a program and navigating around it. Both the new and the senior guy will benefit in particular when it comes to maintenance because they can focus their analysis on the program logic and not the "logic" of the previous developer.

 

Conclusions

Naming conventions are not impositions for creative developers but, on the contrary, side rails for long-lasting development.

 

Addendum

I no longer think it makes sense to explicitly distinguish between classes and interfaces because at the end of the day you always work with an interface implementing class (even if the static type is that of an interface). Therefore, the two separate rows for "Class" and "Interface" have been merged.

 

Further Readings

Great Programmers will their Code to the New Guy

The Little SE80 of Horrors

Random Ramblings of an obsolete programmer 

Introduction

When I came back from my holidays, one of the first issues I had to solve at work was how to distribute complete IDocs for material master data generated from change pointers. Fortunately a blog by Michal Krawczyk hit the bull's eye and provided the solution.

However, since we are still on SAP release 4.6C, enhancement techniques are not yet available. Modifying function module CHANGE_POINTERS_READ was not an option either because we would need to dig deeply into the ALE layers.

 

Chain of Calls

We have scheduled report RBDMIDOC which generates the MATMAS IDocs using change pointers. This report does not yet generate the IDocs but is responsible for locking table BDCPS (Change pointer: Status) for the selected message type.

The report calls another report RBDMIDOX which makes a call to fm MASTERIDOC_CREATE_SMD_CLSMAS in which the change pointers are read using fm CHANGE_POINTERS_READ. Thus, the chain of calls looks like this:

RBDMIDOC -> RBDMIDOX -> MASTERIDOC_CREATE_SMD_CLSMAS -> CHANGE_POINTERS_READ

I assume you now understand my reluctance to make any modification. 

 

The Change Pointer Trick

Change pointer records are saved in table BDCP. The field BDCP-FLDNAME contains the name of the field that was changed in the master data, resulting in a change pointer. 

The quintessence of the trick devised by Michal is: when we change the contents of BDCP-FLDNAME to 'ALELISTING', the master data record is transferred in its entirety.

 

The Solution (available on 4.6c, too)

Instead of changing the change pointer records dynamically at the end of the process my solution modifies the records persistently at the beginning of the process.

1. Copy RBDMIDOC to ZRBDMIDOC

In the selection-screen section of the report I just added a new checkbox parameter used to trigger full distribution.   

 

 

2. Persistently modify change pointer records on demand

In the main section of the report I added the routine for updating the change pointer records. Note that finally the standard report RBDMIDOC is executed.

 

This routine does the same as already described by Michal, except that the changes are persisted in the DB table.
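A condensed sketch of ZRBDMIDOC (the name of RBDMIDOC's message-type selection parameter and the exact WHERE clause are assumptions on my part; the full coding is in the Appendix):

REPORT zrbdmidoc.

PARAMETERS: p_mestyp TYPE edi_mestyp DEFAULT 'MATMAS',
            p_full   AS CHECKBOX.       " trigger full distribution

START-OF-SELECTION.

  IF p_full = 'X'.
    PERFORM modify_change_pointers.
  ENDIF.

* Finally the standard report is executed
* ('mestyp' = assumed name of RBDMIDOC's message type parameter)
  SUBMIT rbdmidoc WITH mestyp = p_mestyp AND RETURN.

FORM modify_change_pointers.
* Persistently force full distribution: change pointers with field name
* 'ALELISTING' make the ALE layer send the complete master record
  UPDATE bdcp SET fldname = 'ALELISTING'
         WHERE mestype = p_mestyp
           AND fldname <> 'ALELISTING'.
  COMMIT WORK.
ENDFORM.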

 

The following screenshot shows the different selection-screens of the standard and the Z-report:

 

Summary

The advantages of the solution described above are:

  • The functional difference between standard and Z-report is highly visible to the user by just looking at the selection-screens. If the flag is unmarked the Z-report just executes the standard report.
  • There is no need for any user-exit or modification (I subsume "enhancements" under modifications...).
  • The solution can be implemented on older SAP releases (e.g. 4.6c). 

 

Admittedly the direct DB update is "bad" programming, but at the end of the day that is how we wanted the change pointer records to look.

 

Conclusion

Before rushing towards user-exits or modifications (and enhancements) twist your solution around and see it from a different angle. You may come up with a tiny little "plug-in" that you can put on top (or ahead) of the standard and whose function is obvious to the user.

 

Appendix

Here is the entire coding of ZRBDMIDOC:

Introduction

Currently more and more Lindt companies are on their way to integrating more tightly with their customers and suppliers by means of EDI (= Electronic Data Interchange).

EDI Ordering is the process where the customer sends its purchase orders e.g. as EDIFACT ORDERS messages. With respect to EDI Ordering we can distinguish between:

    1. Static EDI Ordering: the customer does not get any response about the reception of the EDI order but only a functional acknowledgement from the receiving EDI system (in our system landscape we use ODEX Enterprise).
    2. Dynamic EDI Ordering: the customer not only requires a CONTRL message as an immediate response but, in addition, an ORDRSP message that must be sent within a defined time frame after the ORDERS message has been received.


    The CONTRL message is an Acknowledgement/Rejection Advice Message:

     

    "Based on the United Nations Electronic Data Interchange for Administration,<br />Commerce and Transport (UN/EDIFACT) format, CONTRL is a message that<br />syntactically acknowledges or rejects an interchange.<br />Within the interchange it might also be used to notify the trading partners<br />of an acknowledgement or rejection of any of its functional groups or messages.<br />Explanations for the rejection are also clarified and communicated within the message."

     

     

    The ORDRSP message is a Purchase Order Response Message:

     

    "A message from the seller to the buyer,<br />responding to a purchase order message or a purchase order change request message."

     

     

    Whereas the CONTRL message is still an acknowledgement on the functional (syntactical) level, the ORDRSP message is a full-blown business response. At the beginning of the stylesheet I defined variables for the Message Function and the Response Type (see the Business Rules section above).

    Summary

    Obviously it would have been possible to do the entire XSLT transformation ORDRSP IDoc -> ORDRSP EDIFACT within a single stylesheet. However, splitting the transformation into a pre-processing of the XML-IDoc and a main processing (XML-IDoc -> XML-EDI) clearly reduced the complexity of the stylesheet. In addition, the stylesheets are easier to read and understand which facilitates maintenance in case of new or changed business rules (or modified IDoc contents) in the future.

     

     

    Conclusion

    One of the fundamental principles of Systems Engineering is to split a large project into smaller ones in order to reduce complexity and increase manageability. The same principle can be applied to complex XSLT transformations required for EDI.

    Further Readings

    For more details about the EDIFACT message types you may refer to:

     

    Syntax and service report message for batch EDI (message type-CONTRL)

    UN/EDIFACT ORDRSP: http://www.unece.org/trade/untdid/d01b/trmd/ordrsp_c.htm

    Edifactory ORDRSP: http://www.edifactory.de/msginfo.php?s=D01A&m=ORDRSP

     

     

    Introduction

    It seems that my first blog about organizing Function Module-Exits (CMOD / SMOD) touched a sore point, judging by the lively discussion that followed its publication. Whereas I suggest organizing the exits in independent function groups, others proposed using BAdI-like technologies (e.g. Implementing a BAdI in an enhancement Project (CMOD)). My major objections against this proposal are:

    • Using interfaces does not really increase transparency in case of the Multiple-Use exits
    • We should not mix two different kinds of Exit-technologies 

    The latter point does not imply that we should refrain from using ABAP-OO within Function Module-Exits as you will see below.

     

    Lindt UK: EDI vs. Retail Orders

    Customers of Lindt UK sending their purchase orders via EDI are obliged to order our delicious chocolate products in so-called cases (= traded units, e.g. a box of chocolate bars) instead of consumer units (e.g. a single chocolate bar).

    Example: 1 case = 20 consumer units, i.e. a box of chocolate bars contains 20 bars.

    If a customer nevertheless orders in consumer units, the ordered quantity is converted into cases within a user-exit (using fm MATERIAL_UNIT_CONVERSION).

    Recently Lindt UK opened a couple of LindtShops where the products are sold in consumer units. The shop sales are sent via EDI to the R/3 system, where they create (Retail) sales orders containing the quantities in consumer units. EDI orders and Retail orders can be distinguished based on the order type (AUART).

     

    Below you see the package ZCMOD in the R/3 system of Lindt UK. Already at a glance we see that

    • a User-Exit has been implemented for SD-EDI
    • two different requirements have been implemented (EDI vs. Retail) 

    Lindt UK: Package ZCMOD

     

    The implementation of this User-Exit is simple and straightforward:

    • The Frame User-Exit ZEXIT_SAPLVEDA_011 is called within include ZXVEDU13 (not shown)
    • If the order type belongs to Retail we call fm ZEXIT_SAPLVEDA_011_RETAIL
    • ELSE we call fm ZEXIT_SAPLVEDA_011_EDI

    User-Exit ZEXIT_SAPLVEDA_011
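    A sketch of the frame exit's routing logic (the real ZEXIT_SAPLVEDA_011 mirrors the signature of EXIT_SAPLVEDA_011 one-to-one; the parameter DXVBAK and the range RT_RETAIL_AUART below are placeholders for illustration only):

FUNCTION zexit_saplveda_011.
* dxvbak / rt_retail_auart are illustrative names, not the real signature.
* rt_retail_auart (Retail order types) is filled in the TOP include.

  IF rt_retail_auart[] IS NOT INITIAL AND
     dxvbak-auart IN rt_retail_auart.
    CALL FUNCTION 'ZEXIT_SAPLVEDA_011_RETAIL'
      EXPORTING
        is_vbak = dxvbak.
  ELSE.
    CALL FUNCTION 'ZEXIT_SAPLVEDA_011_EDI'
      EXPORTING
        is_vbak = dxvbak.
  ENDIF.

ENDFUNCTION.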

     

    A few months after implementing this User-Exit, Lindt UK came up with a new Retail order type. A quick look at the TOP-include of the User-Exit function group revealed that I just had to add a single line of coding in order to take this new Retail order type into account.

    TOP-include of User-Exit Function Group

     

    Lindt Italy (IT): Default vs. EDI Invoices

    Lindt IT sends its outbound invoices to two destinations:

    1. 3rd party sales system
    2. EDI customers (as EDIFACT messages)

     

    Below you see the package ZCMOD in the R/3 system of Lindt IT. Already at a glance we see that

    • a User-Exit has been implemented for Billing Document Output
    • two different requirements have been implemented (Default vs. EDI) 

    Lindt IT: Package ZCMOD

     

    All outbound INVOIC IDocs are enhanced by default, irrespective of their destination.

    Again, the implementation of this User-Exit is simple and straightforward:

    • The Frame User-Exit ZEXIT_SAPLVEDF_002 is called within include ZXEDFU02 (not shown)
    • All outbound invoices pass fm ZEXIT_SAPLVEDF_002_DEFAULT
    • In addition, invoices for EDI customers pass fm ZEXIT_SAPLVEDF_002_EDI

     User-Exit ZEXIT_SAPLVEDF_002

     

    Outbound invoices for EDI customers are well defined by their output attributes. The User-Exit service class ZCL_EXIT_SAPLVEDF_002_SERVICES (nomen est omen!) provides two methods:

    • ADD_CONSUMER_UNITS_PER_TU: The base unit of the sold products is cases (= traded units). The ratio Traded Unit : Consumer Units (CU) is missing in the standard INVOIC02 Idoc yet required by the EDI customers.
    • ADD_CONSUMER_UNITS_EAN: The base unit of the sold products is cases (= traded units) and, therefore, the TU-EAN (European Article Number) appears in the INVOIC02 IDoc. However, the EDI customers require to receive the CU-EAN as well

    These two methods are called within User-Exit ZEXIT_SAPLVEDF_002_EDI. Mapping of these enhanced INVOIC02 IDocs to EDIFACT D.96A INVOIC messages happens on SAP-XI where the additional data show up in the LIN, PIA and PRI segments.

    User-Exit ZEXIT_SAPLVEDF_002_EDI

     

    Lindt Poland (PL): EDI Invoicing without β-blockers

    Lindt PL is about to catch up with other Lindt companies in terms of EDI trading. Namely, they want to start EDI invoicing with several customers.

     

    Below you see the package ZCMOD in the R/3 system of Lindt PL. Already at a glance we see that

    • many more User-Exits are implemented on this system
    • the same User-Exits are used by different Lindt companies (Switzerland, Spain, Poland, and - not yet implemented - Sweden)

    Lindt PL: Package ZCMOD

     

    And here is the implementation of this Multiple-Use User-Exit:

    • The Frame User-Exit ZEXIT_SAPLVEDF_002 is called within include ZXEDFU02 (not shown)
    • Depending on the client, the company-specific User-Exit is called (e.g. ZEXIT_SAPLVEDF_002_ES for Spain and ZEXIT_SAPLVEDF_002_PL for Poland)

    User-Exit ZEXIT_SAPLVEDF_002
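    A sketch of this Multiple-Use routing (again, the table parameter INT_EDIDD and the client constants are illustrative placeholders; the real frame exit simply passes its whole signature through to the company-specific function modules):

FUNCTION zexit_saplvedf_002.
* int_edidd / gc_mandt_* are illustrative names only

  CASE sy-mandt.
    WHEN gc_mandt_es.                       " Lindt Spain
      CALL FUNCTION 'ZEXIT_SAPLVEDF_002_ES'
        TABLES
          int_edidd = int_edidd.
    WHEN gc_mandt_pl.                       " Lindt Poland
      CALL FUNCTION 'ZEXIT_SAPLVEDF_002_PL'
        TABLES
          int_edidd = int_edidd.
    WHEN OTHERS.
*     no company-specific exit logic for this client
  ENDCASE.

ENDFUNCTION.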

     

    The implementation of the EDI requirements in User-Exit ZEXIT_SAPLVEDF_002_PL is done by an external consultant mandated by Lindt PL. The User-Exit Framework ensures structural independence among the different Lindt companies on the same R/3 system. Despite the fact that external consultants are currently working on a business-critical User-Exit in our R/3 system, I can sleep well without the need for any β-blockers (see my previous blog Dangerous Liaisons in User-Exits and How to Avoid Them).

     

    Lindt Australia (AU): ToBeDone...

    Lindt AU does not yet use any User-Exits (or to be more precise: I have not yet done the refactoring of existing User-Exits). However, you need little imagination to guess

    • what package you will find on the R/3 system of Lindt AU,
    • the naming convention of the User-Exits and
    • how they will be organized  

     

    Summary

    In this weblog I have presented the benefits of the User-Exit Framework that we are currently reaping among the Lindt companies. These benefits include:

    • High transparency of implemented User-Exits
    • High transparency of implemented requirements
    • Reusability of implementations due to recurring requirements among different Lindt companies
    • Structurally independent implementations (resulting in a significant drop in sleepless nights...)

    The User-Exit Framework consists of "old" technology (function groups and modules). However, this technology is entirely sufficient for the structural organization of different requirements and Multiple-Use implementations of User-Exits. Yet as soon as the structural purpose is served, I prefer to switch to ABAP-OO means, i.e. to implement the required services within methods. 

     

    Conclusion

    In the long-term my User-Exit Framework, based on function groups, is likely to be superseded by the Enhancement Framework. However, what remains is that a smart organization of Exits / Enhancements is a prerequisite for high transparency and reusability. Both factors will speed up your implementations and eventually will reduce the development costs.

    Introduction

    Approved change requests (CR) initiate the creation of transport requests within our SAP systems. Each change request has a unique identifying number. In order to link change requests and transport requests we have defined a mandatory transport attribute Z_CR in which the users have to maintain the change request number. Using the expanded selection screen of transaction SE03 (option Find Requests) we can search for attribute Z_CR and change request numbers.

    The internal control system at Lindt (LICS) demands that a request must not be imported into the productive system by the owner of the request. Admittedly this is not the most sophisticated transport control, yet in the context of Lindt (size and complexity of the SAP system landscape) it makes sense - and it works efficiently.

    In SAP terminology the importing user is the so-called Admin user. But where can we find the Admin user? Neither the standard list nor the detailed list of the import queue (transaction STMS) reveals the Admin user.

     Import Queue (STMS): Standard list 

    Import Queue (STMS): Detailed list

     

    Exploiting the SAP Standard

    Looking under the hood of transaction STMS you will find that the function module behind the import history is TMS_TM_IMPORT_HISTORY (alternative: TMS_TU_IMPORT_HISTORY). Having the main functionality at our disposal, we just need an appropriate selection screen and can then build the ALV list around the collected data. Regarding the selection screen we can resort to another SAP standard function module: TRINT_SELECT_REQUESTS. This function module offers us the same selection screen as transaction SE03 (option Find Requests => report RSWBOSDR), and this is exactly what we want. A user who is already familiar with this standard function can use the same selection screen yet gets a different output (see below).

    Utility Class ZCL_REQUEST

    Instead of using the standard function modules directly, I created the utility class ZCL_REQUEST in order to collect all required data of a selected transport request. Within the CLASS_CONSTRUCTOR of this class I determine the transport configuration using function module TMS_CI_READ_DOMAINCONFIG. Here I have made the following assumptions (which are correct for our system landscape but may be wrong in a different environment):

    1. The first system of the configuration list is the development system (DEV).
    2. The last system of the configuration list is the productive system (PRD).

    At Lindt we have 3-system landscapes (DEV -> TST -> PRD). Thus, for each selected transport request we generate three rows in the ALV list, one for each system (DEV, TST, PRD). The data for the ALV list are built from three different sources:

    • Transport Request
    • Transport History (fm TMS_TM_IMPORT_HISTORY)
    • Transport Log (fm TR_READ_GLOBAL_INFO_OF_REQUEST)

    The Art of ALV Programming

    The three different data sources are reflected by different column colours on the ALV list:

    1. Transport Request: light and dark blue
    2. Transport History: yellow
    3. Transport Log: green

    In addition, I added three status columns:

    1. Status:RC: LED showing the last return code (0 = green, 4 = yellow, ELSE = red)
    2. Status:Admin: Admin = Owner => EQUAL icon; Admin <> Owner => NOT EQUAL icon
    3. S=Attribut: attribute Z_CR maintained => ATTRIBUTE icon; ELSE empty

    In case of Admin = Owner the status icon depends on the target system:

    1. DEV, TST: normal EQUAL icon, because here the owner is allowed to release or import his or her request
    2. PRD: red EQUAL icon, because here the owner violated the internal control rules

    Sorting of the list:

    1. Request number
    2. Counter for systems (1 = DEV; 2 = TST; 3 = PRD)

    NOTE: A MUST read for all ALV programmers is the excellent tutorial An Easy Reference For ALV Grid Control.

    The Power of ALV Lists

    The initial purpose of this report was to reveal violations of the internal control rule that the owner must not import his own transport request into the productive system. All we have to do is to define a new layout with the following filtering options:

    1. Status:Admin = red EQUAL icon
    2. Target System = LP0 (productive system) (optional)

    And here we have the "bad guys" caught in the act: the SAP user USCHIEFERST imported his own requests into the productive system!

    Single ALV List - Multiple Purposes

    Going back to the full ALV list you will notice the following: the Admin user is empty

    • when the request is not yet released on DEV (pseudo-RC = NREL)
    • when the request is not yet imported into TST or PRD (pseudo-RC = NIMP)

     

    A request which is released but not yet imported into the PRD system may cause a problem after a system copy (PRD -> TST), because these changes are lost and need to be re-imported into TST. Thus, we define a new layout with the following filtering options:

    • Status:Admin = TRANSPORT icon
    • Target System = LP0 (productive system)

     

    And now the ALV displays a list of transport requests that we should import prior to the next system-copy.

    CL_RECA_MESSAGE_LIST - The ultimate message handler

    This class (interface IF_RECA_MESSAGE_LIST) is the ultimate message handler (collector) in ABAP at least for me.

    About two years ago I used this class very successfully in a data migration project (purchase orders including the entire history; total volume ca. 800 Mio. CHF; see Message Handling - Finding the Needle in the Haystack). I created a custom validation class and used CL_RECA_MESSAGE_LIST to collect and display the error messages in a tree view. Initially we started with about 150 thousand(!) error messages in the tree (no joke). Approx. 30% of the messages were overhead due to the hierarchical structuring of the messages. And we had a multiplication of errors, e.g. 5000 purchase orders had 4 vendors assigned in the history, all of which contained some error => 5000 x 4 = 20'000 error messages.

    Surprisingly, the tree display had no problems with this huge number of nodes. The message handler helped us tremendously in finding the errors in the migration data. Within a single day we were able to reduce the number of error messages from 150'000 (morning) to about 12'000 (lunchtime) to about 500 (tea-time).

    Do not miss the sample reports in package SZAL, e.g. SBAL_DEMO_04_DETLEVEL.

     

    CL_RECA_GUI_SERVICES

    Still using function module TH_CREATE_MODE to open a transaction within a new GUI mode? If you have this class available on your SAP system then have a look at its static method CALL_TRANSACTION. Using parameter IF_NEW_EXTERNAL_MODE you can decide whether you want to open a new window or not.
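    A one-liner sketch (IF_NEW_EXTERNAL_MODE as described above; the name of the transaction code parameter, ID_TCODE, is my assumption - check the method signature on your system):

* Display transaction SU01 in a new external GUI window
cl_reca_gui_services=>call_transaction(
    id_tcode             = 'SU01'          " parameter name assumed
    if_new_external_mode = abap_true ).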

    I have not used any of the other available methods yet, but they may be useful as well:

    • GET_GUI_FUNC_OF_GUI_STATUS: Gets All Functions of a GUI Status
    • GET_ICON_FOR_BUSOBJ: Gets Icon with Quick Info for Business Object
    • GET_QUICKINFO_FOR_ICON: Gets the Quick Info for an Icon
    • MSGLIST_RAISE_AND_FREE: Exception of Message List with FREE

     

     

     

     

     

    CL_RECA_DATE

    I believe all of us have already encountered the problem of checking whether two periods overlap. Two periods overlap if the beginning or the end of one period lies within the other period, or if one period lies completely within the other. In case of overlapping periods the static method CHECK_INTERSECTION will raise the exception PERIODS_HAVE_INTERSECTION.

     

    CL_RECA_GUID - Making object identifiers unique

    If you need to get a unique identifier for your objects then get your GUID using the static method GET_NEW_GUID.

     

    There are many more interesting classes in package RE_CA_BC, e.g. CL_RECA_COMM_SERVICES (sending e-mail) or CL_RECA_STRING_SERVICES (string utilities), which may be worth investigating.

     

    CL_REEXC_COMPANY_CODE & CL_REEXC_CONTROLLING_AREA

    If you need data related to company codes (BUKRS) and controlling areas (KOKRS), these two classes provide many useful methods.

     

    CL_PT_EMPLOYEE - The Infotype Broker

    You require access to all kinds of PAnnnn infotype data? Use this class (interface IF_PT_EMPLOYEE). It is as simple as that (Unified Access to All HR Infotypes).

     

    CL_ABAP_CONTAINER_UTILITIES - Being Unicode-Compatible

    If you need to shuffle data between structured and unstructured variables and you do not want to run into any problems on Unicode systems then this is the right class to use.

    • FILL_CONTAINER_C: Fill Container of Type C or STRING with Content (structured -> unstructured)
    • READ_CONTAINER_C: Read Container of Type C or STRING (unstructured -> structured)
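    A short sketch of the round trip (the parameter names IM_VALUE / EX_CONTAINER / IM_CONTAINER / EX_VALUE are quoted from memory - please verify them in the Class Builder; error handling is omitted):

DATA: ls_knb1      TYPE knb1,
      ld_container TYPE string.

* structured -> unstructured (Unicode-safe)
cl_abap_container_utilities=>fill_container_c(
  EXPORTING im_value     = ls_knb1
  IMPORTING ex_container = ld_container ).

* unstructured -> structured
cl_abap_container_utilities=>read_container_c(
  EXPORTING im_container = ld_container
  IMPORTING ex_value     = ls_knb1 ).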

     

    unicode - program giving dump

    Unicode - Transfer structure with packed fields (type p, x) into c-field

     

    CL_GUI_CFW - Mastering Control Event Handling

    Controls (like ALV grid, ALV tree, etc.) are very powerful and user-friendly development tools which should be used whenever appropriate.

    However, there are stumbling blocks when working with controls which can be circumvented if you keep a few basic principles in mind:

    1. Refreshing or Updating the control occurs automatically when passing PBO
    2. Control events usually do NOT trigger PAI (and therefore there is no succeeding PBO - see (1.)).
    3. Control events can be handled as system events (done by the control framework) or application events.
    4. Using controls we have a frontend (= control) and a backend (e.g. the itab used for data display in an ALV grid).
    5. For editable controls the data displayed at the frontend can differ from the data in the backend. Special control-specific methods are required to ensure synchronization between frontend and backend.


    The effect of such a "synchronization" method is demonstrated in the threads put X into checkbox of alv when button select all entries (CHECK_DATA_CHANGED of CL_GUI_ALV_GRID) and ALV Tree not getting refreshed (UPDATE_CALCULATIONS and FRONTEND_UPDATE of CL_GUI_ALV_TREE).

     

    In almost all cases I use the control events as system events. Below you see what the SAP online documentation says:

    You construct the tables using a special ABAP Objects Control Framework method (control->set_registered_events).  When you register the event, you must specify whether the event is to be processed as a system event or as an application event.

    • System events are triggered before any automatic field checks (for example, required fields) have taken place on the screen, and before any field transport. The PAI and PBO events are not triggered. Consequently, you cannot access any values that the user has just changed on the screen. Furthermore, there is no field transport back to the screen after the event, so values that you have changed in the event handling are not updated on the screen.

    The handler method that you defined for the event is called automatically by the system. However, you can use the method set_new_ok_code to set a new value for the OK_CODE field. This then triggers the PAI and PBO modules, and you can evaluate the contents of the OK_CODE field as normal in a PAI module.

    • Application events are triggered automatically at the end of the PAI event. Consequently, all field checks and field transports have taken place. If you want the event handler method to be called at a particular point during PAI processing, you must trigger the event handler using the static method CL_GUI_CFW=>DISPATCH.

     

    Not using application events means no automatic field transport, no triggering of PAI (followed by PBO) and a potential discrepancy between frontend and backend.

    So why am I still in favour of system events? Because they give me the freedom of choice.

    If the control event is just used to display additional data (e.g. double-click on a user name in an ALV list -> call transaction SU01 for this user), there is no additional effort required and I explicitly do not want to trigger PAI.

    However, if the control event is used to change data (see the thread Hotspot-clicking (Insert function) plus ALV Sorting), then I call method CL_GUI_CFW=>SET_NEW_OK_CODE to trigger PAI. Thus, the system event has become an application event. All required updating of the ABAP backend (i.e. the itab) is done here at PAI (and not within the event handler method). The succeeding PBO takes care of refreshing/updating the control (automatic flushing).

     

    Conclusion

    SAP provides many valuable and powerful classes and it is worth spending some time to search and find them. And, of course, adopt them.

    Introduction

    "The R/3 enhancement concept allows you to add your own functionality to SAP's standard business applications without having to modify the original applications." (Introduction to the Enhancement Concept).

    This blog is about a specific type of exit, namely Function Module Exits. Assuming that you are familiar with transactions CMOD and SMOD I will show you how User-Exits can be organized in order to achieve four major goals:

    1. Easy Maintenance
    2. High Transparency
    3. Process-Independent Testing
    4. Avoidance of Dangerous Liaisons 

    The Problem

    User-Exits are normally implemented "On-Demand". Having found out that a specific behaviour of a standard application cannot be realized by means of customizing, the next step is to look for appropriate User-Exits. An enhancement project (CMOD) is created and the required enhancements are assigned. Now it is time to create the implementation of the User-Exit, which occurs in a ZX-include predefined by SAP.

     

    These ZX-includes have to be assigned to customer packages. Depending on who implements the ZX-includes and for which purposes they are implemented, these User-Exit includes are likely to be scattered around many packages.

    On one of our ERP systems the situation looks like this:

    • 98 implemented ZX-includes are assigned to 15 different packages

    The ZX-includes of User-Exit XLTO (User Exits for TO Processing) have been assigned to four(!) different packages demonstrating the historical evolution of this User-Exit.

    Considering the fact that these User-Exits are shared by three different companies residing on three different clients, this is already a bad situation regarding the maintenance and transparency of the User-Exits.

     

    From Bad to Worse

    Being responsible for SAP-XI & EDI support at our company I sooner or later had to modify existing User-Exits for Billing-Doc-Output (XEDF). However, I almost got a heart attack when I saw the implementation of include ZXEDFU02 (EXIT_SAPLVEDF_002) for the first time (see below).

     

     

    While it is easy to spot the two companies (Spain & Switzerland) that have used this User-Exit, it is less obvious to identify and understand the manifold requirements that have been implemented.

     

    Step 1 - Organizing the Mess

    In order to organize the existing and new User-Exits in a more meaningful way I created a new package ZCMOD. This package is intended to contain all User-Exit related repository objects.

    The next step was to clearly separate the company specific coding. This was achieved by a generic Five-Step approach:

    1. Create a Z-exit function group (if not already existing)
    2. Copy the Exit-Function Module to a Z-Exit Function Module
    3. Create company specific Z-Exit Function Modules
    4. Replace coding in ZX-include with function module call
    5. Reassign ZX-includes to the new package ZCMOD

     

    The result of the first three steps is shown below:

    A new function group ZXEDF was created and assigned to the package ZCMOD. The Exit-Function module EXIT_SAPLVEDF_002 was copied to ZEXIT_SAPLVEDF_002 in function group ZXEDF. This function module was again copied twice to yield ZEXIT_SAPLVEDF_002_CH and ZEXIT_SAPLVEDF_002_ES. The main point is that all Z-exit function modules have the same interface as the SAP standard Function Module-Exit.

     

    Implementing User-Exits: Once and for All

    If we need to touch SAP standard objects we should always try to do it once and for all. In case of Function Module-Exits this can be done quite easily:

    The ZX-include contains nothing but a call to the frame Z-exit function module (see below).
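    A minimal sketch of such a ZX-include (the parameter names shown are illustrative only; in reality the complete interface of EXIT_SAPLVEDF_002 is passed through 1:1):

    " Include ZXEDFU02, called from EXIT_SAPLVEDF_002:
    " nothing but the delegation to the frame Z-exit function module.
    CALL FUNCTION 'ZEXIT_SAPLVEDF_002'
      EXPORTING
        control_record_out = control_record_out      " illustrative parameter
      TABLES
        int_edidd          = int_edidd.              " illustrative parameter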

    Using this approach we obviously never ever have to touch this ZX-include again.

     

    The Frame Z-Exit Function Module

    The frame function module ZEXIT_SAPLVEDF_002 serves to separate the coding at the highest hierarchical level, which in our case is the company:
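    A sketch of the frame module's body (the decision by client and the parameter name are illustrative; the company specific modules share the interface of the exit):

    " Body of the frame module ZEXIT_SAPLVEDF_002 (sketch).
    CASE sy-mandt.
      WHEN '100'.                                    " e.g. Lindt Switzerland
        CALL FUNCTION 'ZEXIT_SAPLVEDF_002_CH'
          TABLES
            int_edidd = int_edidd.                   " illustrative parameter
      WHEN '200'.                                    " e.g. Lindt Spain
        CALL FUNCTION 'ZEXIT_SAPLVEDF_002_ES'
          TABLES
            int_edidd = int_edidd.                   " illustrative parameter
    ENDCASE.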

     

    The company specific function modules contain the company specific coding:

     

     

    Step 2 - Complete Structural Separation of Concerns

    Even though we have now separated the company specific coding, the function modules are still linked together in a single function group. Thus, if developers need to change the User-Exit for different companies at the same time they might interfere with each other. Therefore the next level of organizing User-Exits is required: the complete structural separation of concerns.

    For each company a specific Z-Exit function group is created:

    • ZXMGV - Frame Z-Exit function group
    • ZXMGV_CH - specific for Lindt Switzerland
    • ZXMGV_ES - specific for Lindt Spain
    • ZXMGV_PL - specific for Lindt Poland

     

    Within the frame function module ZEXIT_SAPMV01_002 we determine the name of the company specific function module and dynamically call it.
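    A sketch of this dynamic dispatch (the derivation of the suffix and the data object names are illustrative):

    DATA: lv_suffix TYPE char2,                      " e.g. 'CH', 'ES', 'PL'
          lv_fname  TYPE rs38l_fnam.

    lv_suffix = 'CH'.                                " illustrative: derived per company
    CONCATENATE 'ZEXIT_SAPMV01_002_' lv_suffix INTO lv_fname.

    TRY.
        CALL FUNCTION lv_fname.                      " pass the exit interface through here
      CATCH cx_sy_dyn_call_illegal_func.
        " no company specific module implemented -> keep the standard behaviour
    ENDTRY.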

     

    There are two obvious advantages of this structural organization of User-Exits:

    1. Errors accidentally introduced into a User-Exit will not affect any other company.
    2. Different companies can modify the same User-Exit at the same time without interfering with each other.

     

    Summary

    In this weblog I presented a possible strategy for organizing User-Exits (CMOD / SMOD). The major steps of this approach are:

    • Create a specific package (e.g. ZCMOD) containing all User-Exit related repository objects
    • Copy the Exit-Function module to a Z-Exit function module having the very same interface. Only the frame function module is called in the ZX-include.
    • Implement the ZX-include once and for all.
    • Create Z-Exit function groups for the highest hierarchical level that employs the User-Exit

     

    An example for a hierarchy of Z-Exit function modules is given below:

    1. Frame: ZEXIT_SAPLVEDA_001
    2. Country specific: ZEXIT_SAPLVEDA_001_CH
    3. Country & Customer specific: ZEXIT_SAPLVEDA_001_CH_SHOP

     

    If I need to change the User-Exit for inbound ORDERS IDocs intended for our Lindt Shops, I immediately know that I have to change function module ZEXIT_SAPLVEDA_001_CH_SHOP. Thus, based on the requirements (SD-EDI User-Exit & Lindt Shops) I can deduce the affected Z-Exit function module.

     

    Is there another advantage of replacing the coding in the ZX-includes with the call to the frame Z-Exit function module? 

    Yes, there is.

    The Z-Exit function modules enable you to test the User-Exit independently of the entire process / transaction!

    If you are already working on SAP releases >= 6.20 you can define ABAP Unit tests for each Z-Exit function module (a minimal sketch follows the list below). Having assigned all User-Exit related repository objects (ZX-includes and Z-Exit function groups) to a single package, you can then use the Code Inspector (transaction SCI) to easily test all your User-Exits at once:

    • Define an object set containing this package
    • Define a check variant executing all dynamic tests (ABAP Unit)
    • Run an inspection with object set and check variant created before
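    A minimal ABAP Unit sketch for one company specific Z-Exit function module (everything except the function module name is illustrative, including the exit parameters):

    CLASS ltc_zexit_saplveda_001_ch DEFINITION FOR TESTING. "#AU Risk_Level Harmless
                                                            "#AU Duration Short
      PRIVATE SECTION.
        METHODS shop_orders_are_enriched FOR TESTING.
    ENDCLASS.

    CLASS ltc_zexit_saplveda_001_ch IMPLEMENTATION.
      METHOD shop_orders_are_enriched.
        " Call the Z-Exit module with a prepared test table
        " (the parameter name is hypothetical - use the real exit interface).
        DATA lt_dedidd TYPE STANDARD TABLE OF edidd.

        CALL FUNCTION 'ZEXIT_SAPLVEDA_001_CH_SHOP'
          TABLES
            dedidd = lt_dedidd.

        cl_aunit_assert=>assert_not_initial(
          act = lt_dedidd
          msg = 'Exit should have added at least one segment' ).
      ENDMETHOD.
    ENDCLASS.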

     

    Finally, if you intend to get rid of obsolete User-Exits and revert back to the SAP standard processes, a package like ZCMOD will give you a head start for your project.

     

    Conclusion

    Function Module-Exits (CMOD / SMOD) can be organized by simple means into a useful structure thereby achieving major goals like

    • Easy Maintenance
    • High Transparency
    • Process-Independent Testing

    and at the same time avoiding dangerous liaisons by separating concerns into completely independent structural components.

     

    Addendum

    The topic of this blog has been continued in:

    Dangerous Liaisons in User-Exits - Revisited

    Introduction

    In the thread deactivate 'APPROVE' button in the ALV in RCATS_APPROVE_ACTIVITIES it was asked how to inactivate an ALV toolbar function within a CATS approval transaction. Having done little work on CATS and none on CATS approval, I was nevertheless able to provide the specific solution.

    In the following sections I describe the strategy by which I found this solution, based on my experience combined with some learning by doing. Perhaps these insights may help others to improve their own solution-finding strategies as well.

     

    Background

    About three years ago I developed a report for a customer where CATS data needed to be collected, evaluated and aggregated and then used to update cost objects in assessment cycles (transaction KSU2). During this development I became acquainted quite well with class CL_DBSEL_CATS and its BAdI enhancement (see CONSTRUCTOR method):

    CONSTRUCTOR method of CL_DBSEL_CATS

     

    Following this customer report development I worked for about six months in a localization project for an Add-On of RE-FX (which was one of the best ABAP-OO trainings I have ever experienced; see Understanding ABAP Objects). In this project I came across an ALV based approval report for rental contracts which I assumed should be somehow similar to the CATS approval report (see below).

    Finally, about half a year ago I answered a question related to BAdI CATS_REPORTING (see Select-options in CATS_DA using BAdI).

     

    The Problem

    Below you see the ALV list with the approval toolbar function that should be inactivated (report RCATS_APPROVE_ACTIVITIES).

    ALV list for CATS approval

    Thus, the problem can be split into two parts:

    • Trivial aspect: Inactivate toolbar function of ALV grid
    • Non-trivial aspect: How to achieve this in the context of CATS approval?

     

     

    My Suspicion

    My first idea was that the non-trivial aspect of this problem might be solved by using BAdI CATS_REPORTING (interface IF_EX_CATS_REPORTING).

    The solution for the trivial aspect has been described in detail already many times, e.g.:

     

     

    The Solution

    I created an implementation for BAdI CATS_REPORTING with the implementing class ZCL_IM_US_SDN_CATS_REPORTING. In order to manipulate the ALV toolbar I added the event handler method HANDLE_TOOLBAR which handles event TOOLBAR of CL_GUI_ALV_GRID.

     

     

    (1) Searching for a Suitable Interface Method
    Looking at the interface IF_EX_CATS_REPORTING, its method IF_EX_CATS_REPORTING~BEFORE_DISPLAY_APPR immediately caught my attention. Based on its description (Arbeitszeit & Reise Genehmigung vor der Anzeige am Schirm = Working Times & Trip Approval before display on screen) I was convinced that I had found the right method. The IMPORTING parameter IM_ALV_GRID granted me access to the grid instance used for the approval list.

     

    (2.a) Method Implementation: 1st Try

    The first try-out was based on the assumption that I was responsible for both making the toolbar interactive and setting an event handler.

    Below you see the first implementation I tried...
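    The coding was roughly along the following lines (a sketch reconstructed from the description, not the original screenshot; the handler name is illustrative):

    METHOD if_ex_cats_reporting~before_display_appr.
      " First attempt: register the handler AND make the toolbar
      " interactive myself.
      SET HANDLER me->handle_toolbar FOR im_alv_grid.
      im_alv_grid->set_toolbar_interactive( ).
    ENDMETHOD.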



    ...which resulted in a short dump when I executed the report. Apparently the ALV toolbar instance had not yet been instantiated when I called method SET_TOOLBAR_INTERACTIVE of the grid instance.

     

     (2.b) Method Implementation: 2nd Try

    In a silly attempt I tried to catch the exception which - of course - failed because CL_GUI_ALV_GRID does not raise class-based exceptions at its public interface.

    (I also tried to replace method BEFORE_DISPLAY_APPR with BEFORE_DISPLAY, yet the report never stopped at the break-point, meaning that this method was not called during report execution.)

    Thus, the assumption on which this coding was based appeared to be wrong and needed to be refined.

     

     (2.c) Method Implementation: 3rd Try

    I modified my assumption that perhaps the standard report would take care of making the toolbar interactive. In this case I would only need to set the event handler method.

    Below you see the third implementation I tried... 
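    In essence it boils down to the following sketch (IM_ALV_GRID is the interface parameter mentioned above; the handler name is illustrative):

    METHOD if_ex_cats_reporting~before_display_appr.
      " Only register the event handler - the standard report takes
      " care of making the toolbar interactive itself.
      SET HANDLER me->handle_toolbar FOR im_alv_grid.
    ENDMETHOD.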

    ...and it worked!

    The report execution stopped at the break-point in the event handler method. Looking through the entries in E_OBJECT->MT_TOOLBAR it was obvious that function 'CX_APPROVE' was the one that should be deactivated.

     

    A quick look at the data definitions of report RCATS_APPROVE_ACTIVITIES showed that the grid instance was based on class CL_GRID_APPROVAL_ACTEXP, which contains the function codes as public constants. The final version of the HANDLE_TOOLBAR implementation is shown below.
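    A sketch of that implementation (E_OBJECT->MT_TOOLBAR and the function code 'CX_APPROVE' stem from the debugging session described above; in the final version the literal is replaced by the corresponding public constant of CL_GRID_APPROVAL_ACTEXP):

    METHOD handle_toolbar.
      " Handler for event TOOLBAR of CL_GUI_ALV_GRID
      " (defined with IMPORTING e_object e_interactive).
      FIELD-SYMBOLS <ls_button> TYPE stb_button.

      LOOP AT e_object->mt_toolbar ASSIGNING <ls_button>
           WHERE function = 'CX_APPROVE'.
        <ls_button>-disabled = 'X'.
      ENDLOOP.
    ENDMETHOD.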

     

    And this is what the final result looks like.

     

    Summary

    In this weblog I have tried to explain one of the strategies that I apply to find a hitherto unknown solution to a new problem. This scientific approach consists of combining experience with the formulation of working hypotheses, which are then tested in a systematic manner.

    My first hypothesis that BAdI CATS_REPORTING is involved in the solution proved to be correct whereas the second hypothesis (ALV event handling) required a few cycles of refinement.

     

    Conclusion

    The more you know about the problem domain, the more systematic your approach will be. At a certain level of experience this procedure will lead you almost inevitably to the solution.
