
ABAP Development


A couple of frequently asked questions in the SCN forum:

1. How many secondary indexes can be created on a database table in SAP?

2. How many fields can be included in a secondary index (SAP)?

 

Having seen many threads on the above questions in the SCN forum marked as 'Answered' (correctly) with different answers, I decided to test the limits of secondary indexes myself. The answers given vary: 9, 10 (1 primary and 9 secondary), 15, 16 (15 secondary, 1 primary), or no such limit.

 

So, to check, I created secondary indexes on table SFLIGHT.

 

1. How many Secondary Indexes can be created on a database table in SAP?

Ans. I created 18 secondary indexes, and the system did not object at 9, 10, 15 or even 16.

 

Capture.PNG

 

So I believe there is no fixed limit on the number of secondary indexes that can be created on a database table in SAP. However, it is not at all recommended to create more than 5 secondary indexes on a database table.

 

2. How many fields can a secondary index contain?

 

To test this, I created a secondary index on table EKKO and assigned all of the table's fields (134) to it. The system then raised an error message saying that a maximum of 16 fields can be assigned.

 

error.PNG

 

So a secondary index on a database table can contain a maximum of 16 fields. However, it is recommended that a secondary index not exceed 4 fields.

 

 

> These are the points to be remembered before creating an Index.


a. Create secondary indexes for tables that you mainly read. Every time a database table is updated, all of its indexes are updated as well; for a table that receives hundreds of new or changed entries per day, avoid additional indexes.

b. An index should not have more than 4 fields, and a database table should not have more than 5 indexes. Otherwise the optimizer may choose the wrong index for a particular selection.

c. Place the most selective fields at the beginning of an Index.

d. Avoid creating an index on a field that is not always filled, i.e. whose value is initial (null) for most entries in the table.

 

> These are the points to be remembered while coding ABAP programs for effective use of indexes, i.e. to avoid a full table scan (a short example follows after this list).

a. In the SELECT statement, always specify the condition fields in the same order as they appear in the index. The sequence is very important here.

b. If possible, use positive conditions such as EQ and LIKE instead of negative conditions such as NOT.

c. The optimizer might stop using the index if you use an OR condition. Try to use the IN operator instead.

d. The IS NULL operator can cause a problem for the index, because some database systems do not store null values in the index structure.
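To make this concrete, here is a minimal sketch (the carrier, connection and date values are made up) of an index-friendly SELECT: the condition fields appear in the same order as in the index and only positive, AND-combined conditions are used.

DATA: lt_flights TYPE STANDARD TABLE OF sflight.

SELECT * FROM sflight
  INTO TABLE lt_flights
  WHERE carrid = 'LH'          " condition fields in the same order as the index
    AND connid = '0400'        " positive (EQ) conditions, combined with AND
    AND fldate = '20150801'.   " most selective fields come first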

 

 

Thanks and Regards,

Vijay Krishna G

Today I was going through the SAP help for BRFplus and came across an introduction to the ABAP code composer.

 

I would like to share with you a very simple example to demonstrate its logic.

clipboard1.png

How do you find the above help document quickly? Just google the keywords "ABAP CODE COMPOSER" and click the first hit.

clipboard2.png

Below are the steps for generating ABAP code that contains a singleton pattern using the ABAP code composer.

 

1. Create a new program with type "INCLUDE":

clipboard3.png

Paste the following source code into the include and activate it:

 

*---------------------------------------------------------------------* 
*       CLASS $I_PARAM-class$ DEFINITION 
*---------------------------------------------------------------------* 
*       Instance pattern: SINGLETON 
*---------------------------------------------------------------------* 
CLASS $I_PARAM-class$ DEFINITION 
@if I_PARAM-GLOBAL @notinitial 
  \ PUBLIC 
@end 
\ FINAL CREATE PRIVATE. 
  PUBLIC SECTION. 
    INTERFACES: 
      $I_PARAM-interface$. 
    CLASS-METHODS: 
      s_get_instance 
        RETURNING 
          value(r_ref_instance) TYPE REF TO $I_PARAM-interface$ 
@if I_PARAM-exception @notinitial 
        RAISING 
          $I_PARAM-exception$ 
@end 
\. 
  PRIVATE SECTION. 
    CLASS-DATA: 
      s_ref_singleton TYPE REF TO $I_PARAM-interface$. 
    CLASS-METHODS: 
      s_create_instance 
        RETURNING 
          value(r_ref_instance) TYPE REF TO $I_PARAM-class$ 
@if I_PARAM-exception @notinitial 
        RAISING 
          $I_PARAM-exception$ 
@end 
\. 
ENDCLASS.                    "$I_PARAM-class$ DEFINITION 
*---------------------------------------------------------------------* 
*       CLASS $I_PARAM-class$ IMPLEMENTATION 
*---------------------------------------------------------------------* 
*       Instance pattern: SINGLETON 
*---------------------------------------------------------------------* 
CLASS $I_PARAM-class$ IMPLEMENTATION. 
************************************************************************ 
*       METHOD S_CREATE_INSTANCE 
*----------------------------------------------------------------------* 
*       Constructs an instance of $I_PARAM-class$ 
*......................................................................* 
  METHOD s_create_instance. 
*    RETURNING 
*      value(r_ref_instance) TYPE REF TO $I_PARAM-class$ 
@if I_PARAM-exception @notinitial 
*    RAISING 
*      $I_PARAM-exception$ 
@end 
************************************************************************ 
@if I_PARAM-exception @notinitial 
    DATA: 
      l_ref_instance TYPE REF TO $I_PARAM-class$. 
************************************************************************ 
    CREATE OBJECT l_ref_instance. 
@slot object_construction 
*   Construction of the object which can lead to $I_PARAM-exception$ 
@end 
    r_ref_instance = l_ref_instance. 
@else 
    CREATE OBJECT r_ref_instance. 
@end 
  ENDMETHOD.                    "s_create_instance 
************************************************************************ 
*       METHOD S_GET_INSTANCE 
*----------------------------------------------------------------------* 
*       Keeps track of instances of own class -> only one 
*......................................................................* 
  METHOD s_get_instance. 
*    RETURNING 
*      value(r_ref_instance) TYPE REF TO $I_PARAM-interface$ 
@if I_PARAM-exception @notinitial 
*    RAISING 
*      $I_PARAM-exception$ 
@end 
************************************************************************ 
    IF s_ref_singleton IS NOT BOUND. 
      s_ref_singleton = s_create_instance( ). 
    ENDIF. 
    r_ref_instance = s_ref_singleton. 
  ENDMETHOD.                    "s_get_instance 
ENDCLASS.                    "$I_PARAM-class$ IMPLEMENTATION

A string wrapped in a pair of $ characters, for example "$I_PARAM-class$", acts as an importing parameter of the code composer: during code generation you must tell the code composer the actual class name to use in the generated code by passing it to this parameter.

 

This activated include will act as the code generation template. We now have the following importing parameters:

 

  • $I_PARAM-class$
  • $I_PARAM-global$
  • $I_PARAM-interface$
  • $I_PARAM-exception$

 

2. Create another driver program which calls the code composer API to generate the code using the template include created in step 1. The complete source code of this program can be found in the attachment.

clipboard5.png

I just use cl_demo_output=>display_data( lt_tab_code ) to print out the generated source code.

 

In the output we see that all of the placeholders ( $XXXX$ ) in the template have been replaced with the hard-coded values we specified in the driver program.

clipboard6.png


Although the Google results show that the code composer API is marked as for SAP internal use only and thus may not be used in application code, I think we can still leverage it to build tools that improve our daily work efficiency.

clipboard7.png

Jerry Wang

ABAP keyword syntax diagram

Posted by Jerry Wang Sep 10, 2015

As a Fiori developer, I am now reading this famous Javascript book.

clipboard1.png

In this book, the following graph is used to explain the Javascript grammar in a very clear way.

clipboard2.png

Today I found that the ABAP help documentation also contains similar syntax diagrams illustrating the grammar of each keyword.

 

Just open any ABAP report, select a keyword and press F1, and you will find "ABAP syntax diagrams".

clipboard3.png


Double click on it and choose one keyword like "APPEND" in the right area:

clipboard4.png

Then the syntax diagram is opened. Click the small "+" icon to drill down.

clipboard5.png

Click the "?" icon to get the meaning of each legend used in the graph.

clipboard6.png

I hope this small tip helps ABAP newbies fall in love with ABAP.

Introduction

Usually when I blog on SCN I write about some specific development problem and the solution I found for it. In contrast this blog is about a more abstract topic, namely how to efficiently debug code. While it is quite easy to debug SAP code (the business suite is open source after all, at least the applications written in ABAP) debugging a certain problem efficiently is sometimes quite complex. As a result I've seen even seasoned developers getting lost in the debugger, pondering over an issue for hours or days without being close to a solution. In my opinion there are different reasons for this. One, however, is that some special approaches or practices are necessary in order to find the root cause of complex bugs using debugging.

In this blog I try to describe the approaches that in my experience are successful. However, I'd also be interested in which approaches you use and what your experiences are. Therefore I'm looking forward to some interesting comments.

 

Setting the scene

First I'd like to define what I would classify as complex bugs. In my opinion there are basically two categories of bugs: the simple ones and the complex ones. Simple bugs are all the bugs that you would be able to find and fix with a single debugger run or even by simply looking at the code snippet. For example, copy-and-paste errors or missing checks of boundary conditions fall into this category. By simply executing the code once in the debugger every developer is usually able to immediately spot and correct these bugs.

The complex ones are the ones that occur in the interaction of complex frameworks or APIs. In the SAP context these frameworks or APIs are usually very sparsely documented (if documentation is available at all). Furthermore, in most cases the actual behaviour of the system is influenced not only by the program code but also by several customizing tables. In this context identifying the root cause of a bug can become quite complex. Everyone who has ever tried to debug, for example, transaction BP and the underlying function modules (which I believe were the inspiration for the Geek & Poke comic below), or even better a contract replication from ERP to CRM, knows what I'm talking about. The approaches I will be discussing in the remainder of this blog are the ones I use to debug in those complex scenarios.

http://geekandpoke.typepad.com/.a/6a00d8341d3df553ef016767875265970b-800wi

Know your tools

As said in the introduction, I want to focus on the general approach to debugging in this blog. Nevertheless, an important prerequisite for successful debugging is knowing the available tools. In order to get to know the tools you need to do two things. First, it's important to keep up to date with new features. In the context of ABAP development SCN is a great resource to do so. For example, Olga Dolinskaja wrote several excellent blogs regarding new features in the ABAP debugger (cf. New ABAP Debugger – Tips and Tricks, News in ABAP Debugger Breakpoints and Watchpoints, Statement Debugging or News in ABAP External Debugging – Request-based Debugging of HTTP and RFC requests). Also Stephen Pfeiffer's blog on ABAP Debugger Scripting: Basics and Jerry Wang's blog Six kinds of debugging tips to find the source code where the message is raised are great resources to learn more about the different features of the tools. Besides the debugger, tools like checkpoint groups (Checkgroups - ABAP Development - SCN Wiki; a tiny example follows below) or the ABAP Test Cockpit (Getting Started with the ABAP Test Cockpit for Developers by Christopher Kaestner) can be very useful for identifying the root cause of problems. However, reading about new features and tools is not enough. In my opinion it is important to once in a while take some time to play with the new features you discovered. Only if you have tried a feature in a toy scenario and understood what it is able to do and what not will you be able to use it to track down a complex bug in a productive scenario.
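As a tiny illustration of checkpoint groups (the group name ZMY_CHECKPOINTS and the logged variables are made up; the group itself would be created and activated in transaction SAAB), conditional log points and breakpoints can be left in the code permanently and switched on only when needed:

" writes the field values to the checkpoint group log without stopping the program
LOG-POINT ID zmy_checkpoints SUBKEY 'PRICING' FIELDS lv_matnr lv_price.

" stops in the debugger only while the checkpoint group is activated in SAAB
BREAK-POINT ID zmy_checkpoints.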

Besides the development tools there are other important tools you should be able to use. Recently I adopted the habit of replying to colleagues who ask whether I know the cause of a certain bug by first asking whether they have already performed a search on SCN and in the SAP support portal. In a lot of cases the answer is no. However, in my opinion searching for hints on SCN and in the SAP support portal should be the first step whenever you encounter a complex bug. Although SAP software is highly customizable and probably no two installations are the same, those searches usually yield valuable information. Even if you don't find the complete solution, you will at least get information about which areas the cause of the bug might be in. And last, but not least, an internet search also usually turns up some interesting links.

 

Thinking about the problem...

The starting point for each debugging session is usually an error ticket. Most likely these tickets were created by a tester or a user who encountered unexpected behaviour. Alternatively, the unexpected behaviour could also be encountered by the developer during developer testing (be it automated or manual). In the first case the next step is normally to reproduce the error in the QA system. Once a developer is able to reproduce the error it is usually quite easy to identify the code that causes an error message or an exception (using the tools described in the previous chapter). If no error message or exception but rather an unexpected result is produced, identifying the starting point for debugging can already become quite challenging.

In both cases I recently adopted the habit of not starting up the debugger immediately. Instead I start by reasoning about the problem. In general I start this process off by asking myself the following questions:

  • What business process triggers the error?
    The first question for me is always which business process triggers a certain error. Without a detailed understanding of which business process (and in which context) causes the error, identifying the root cause might be impossible.
  • What does the error message tell me?

In the case of a dump this is pretty easy: the details of the dump clearly show what happened and where it happened. In the case of an error message, however, the first step should always be to check whether a long text with detailed explanations is available. Most error messages don't have a detailed description available, but if one is available it is usually quite helpful.

Even error messages without detailed descriptions can be very helpful. For example, error messages following the pattern "...<some key value> not available." or "...<some key value> is not valid." usually point to missing customizing. In contrast, a message like "The standard address of a business partner cannot be deleted" points to a problem in the process flow. Once one gets used to reading error messages according to these kinds of patterns, they are quite useful for narrowing down the root cause of an error.

  • Which system causes the error?

Even if it seems to be a trivial question, it is in my opinion quite an important one. Basically all software systems in use today are connected to other software systems. So in order to identify the root cause of an error it is important to understand which system (or which process in which system) is responsible for triggering the error. While this might be easy to answer in most cases, there are a lot of cases where answering this question is far from trivial. For example, consider an SAP Fiori application that is built using OData services from different back-end systems.

  • In which layer does the error occur?

Once the system causing an error is identified, it is important to understand in which layer of the software the error occurs. Usually each layer has different responsibilities (e.g. providing the UI, performing validation checks or accessing the database). For example, in an SAP CRM application the error could occur in the BSP component building the UI, the BOL layer, the GenIL layer or the underlying APIs. Understanding on which layer an error occurs helps to take shortcuts while debugging. If the error occurs in the database access layer it's probably a good idea not to perform detailed debugging on the UI layer.

 

Usually I try to get a good initial answer to these questions. In my opinion it is important to come up with sensible assumptions as answers. If the first answers obtained by reasoning about the error are not correct, the iterative process described below will help to identify and correct them.

 

...and the code

The next step I take is looking at the code without using the debugger. After answering the questions mentioned in the previous section I usually have a first idea in which part of the software the error occurs. By navigating through the source code I try to come up with a first assumption of what the program code is supposed to do and which execution path leads to the error. This way I get a first idea of what I would expect to see in the debugger and can also test the assumptions I have come up with so far.

Note that trying to understand the code might not be a sensible approach in all cases. Especially when dealing with very generic code it is usually far easier to understand what happens using the debugger. Nevertheless, my experience is that first trying to understand the code without the debugger allows me to debug much more efficiently afterwards.

 

Debugging as an experiment

After all the thinking it is time to get to work and start up the debugger. I try to think about debugging as performing an experiment. After understanding the scenario and context in which the error occurs (by thinking about the problem) and getting a first assumption of what the cause of the error might be (by thinking about the code), I use the debugger to test my assumptions. So basically I use the cycle depicted below to structure my debugging sessions.

debugging_as_experiment.png

First I try to think of an "experiment" to test my assumptions about the problem. Usually this is simply performing the business process that causes the error. Especially if an error occurs in a complex business process it might be better to find a way to test the assumptions without performing the whole complex process. The next step is to execute the "experiment" in order to test the assumptions. This basically is the normal debugging everyone is used to. If the root cause of the problem is identified during debugging, the cycle ends here. If not, the final step of the cycle is to refine the assumptions based on the insights gained during debugging. On the basis of the new assumptions we can redesign the experiment and start the cycle over again. In this step it is important to move forward in small increments. If you change too many parameters between two debugging sessions it might be very difficult to identify the cause of a different system behaviour. For example, consider a situation where an error occurs during the address formatting for a business partner. In order to identify the root cause of the problem it might be sensible to first test the address formatting code with a BP of type person and after that with a BP of type organization with the same address. This makes it possible to check whether the BP type is part of the formatting problem or not.

 

<F5> vs. <F6> vs. <F7>

During the debug step of the cycle presented above, the important question at each debugging step is whether to hit <F5>, <F6> or <F7> (step into, step over or step out, respectively). Using <F5> it is easy to end up deep down in some code totally unrelated to the problem at hand. On the other hand, using <F6> at the wrong position might result in not seeing the part of the source code causing the problem.

In order to decide whether to step into a particular function or method or to step over it, I use a simple heuristic that has proven very useful for me:

  • The more individual (custom) a function or method is, the more likely I am to use <F5>.
  • The more widely used a function or method is, the more likely I am to use <F6>.

Using this heuristic basically leads to the following results:

  1. I will almost always inspect custom code using <F5>. The only exception is when I'm sure the function or method is not the cause of the problem.
  2. I will only debug SAP standard code if I wasn't able to identify the root cause of a problem in the custom code.
  3. I will basically never debug widely used standard function modules and methods and instead focus on new ones (e.g. those delivered recently with a new EhP).

As an example, consider an error in some SEPA (https://en.wikipedia.org/wiki/Single_Euro_Payments_Area) related functionality. When debugging this error I would first focus on the custom code around SEPA. If this doesn't lead to the root cause of the error I would also start debugging SEPA-related standard functions and methods. The reason is that this code has only recently been developed (compared to the general BP function modules). If I encountered function modules like BAPI_BUPA_ADDRESS_GETDETAIL or GUID_CREATE in the process I would always step over them using <F6>. These function modules are so common that it is highly unlikely they are the root cause of the problem.

Nevertheless, it might turn out that in rare cases everything points to a function module or method like BAPI_BUPA_ADDRESS_GETDETAIL as the root cause of an error. In this case I would always check the SAP support portal first before debugging these function modules or methods. As they have been widely used for quite some time, it is highly unlikely that I'm the first one to encounter the given problem. Only if everything else fails would I start debugging those function modules or methods as a last resort.

 

The right mind set

For all the techniques described before it is important to be in the right mind set. I don't know how often I have heard sentences like "How stupid are these guys at SAP?" or "Have you seen this crappy piece of code in XYZ?". I must admit I might have used sentences like these once or twice myself. However, I think this is the wrong mind set. The developers at SAP are neither stupid nor mean. Therefore, whenever I see something strange I try to think about what might have been the reason to build a particular piece of code a certain way, and what business requirement they tried to fulfil with it. This usually has the nice effect that with each debugging session I learn something new about some particular area of the system, which will help me identify the root cause of new issues more quickly in the future.

 

And probably the most important technique of all is the ability to take a step back. It has happened to me numerous times that I was working on a problem (be it a bug or trying to implement a new feature) for a while without any progress. For whatever reason I had to stop what I was doing (e.g. because the night guard walked in and asked me to finally leave the building). After coming back to the problem the next day I quickly found the solution. It then always seemed as if I had been blind to the solution the day before. So whenever I get stuck working on a problem I now force myself to step back, do something else, and revisit the problem afresh a few hours later.

 

What do you think?

Finally, I'd like to hear from you what your approaches to debugging are. Do you use similar practices? Which ones do you find useful in identifying the root cause of complex errors?

 

Christian

Introduction

In order to upload pricing conditions into an SAP system, we need to create a conversion program which caters to all condition tables and uploads the respective data. Since all condition tables have different structures, key fields and fields, the BDC approach can't solve the problem unless we create recordings for all condition types. Also, if a new condition type is added, we need to add a new recording to the code, which increases the development/maintenance hours.

Each time pricing conditions need to be uploaded into the SAP system, a technical resource is required to create a conversion program which uploads the data. This tool helps in uploading the pricing conditions for the SD and MM modules, thereby eliminating repeated manual intervention.


Solution Details

As the requirement is to upload any condition table, we have to design a solution which caters to all condition type uploads.

So, in order to make it generic for all condition types, we expect the condition table name to be part of the upload fields. Using this table name, we fetch the schema of the corresponding condition table and map the fields dynamically. We also apply conversion exits to the various field values using the field information returned for each table.

 

For example, say the condition table name coming in the upload file is A652. We use FM "CATSXT_GET_DDIC_FIELDINFO" to get the details of the table fields (see the sketch after this list). From this FM we get all the fields with their attributes, such as:-

  • Key Flag (denotes whether the field is a part of Primary Key or not)
  • Domain Name
  • Data Element Name
  • Check Table Name
  • Length
  • Field Labels
  • Conversion Exit, etc..
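As a minimal sketch of this step: the text above uses CATSXT_GET_DDIC_FIELDINFO, but the same kind of attributes can also be read with the standard FM DDIF_FIELDINFO_GET, which is used here as a stand-in (variable names are assumptions):

DATA: lt_dfies TYPE TABLE OF dfies,
      ls_dfies TYPE dfies.

CALL FUNCTION 'DDIF_FIELDINFO_GET'
  EXPORTING
    tabname   = 'A652'         " condition table name taken from the upload file
  TABLES
    dfies_tab = lt_dfies
  EXCEPTIONS
    not_found = 1
    OTHERS    = 2.

LOOP AT lt_dfies INTO ls_dfies.
  " ls_dfies-keyflag    -> part of the primary key?
  " ls_dfies-domname    -> domain, ls_dfies-rollname -> data element
  " ls_dfies-checktable -> check table, ls_dfies-leng -> length
  " ls_dfies-convexit   -> conversion exit, ls_dfies-scrtext_l -> field label
ENDLOOP.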

 

The basic common fields in the upload structure are:-

  • KAPPL (Application)
  • KSCHL (Condition Type)
  • TABLE (Condition Table Name)

 

Now the question is how we map the data from the file to the different condition tables, as each table has a different structure and a varying number of fields.

A database table can have at most 16 fields as part of its primary key, and there are 5 key fields which are common to all A* tables:-

  • MANDT (Client)
  • KAPPL (Application)
  • KSCHL (Condition type)
  • KFRST (Release status)
  • DATBI (Validity end date of the condition record)

 

So the remaining number of key fields is 11 (16 - 5), and the upload structure of the file has 11 generic fields. We therefore keep FLD1-FLD11 of type FIELDNAME (CHAR 30).

 

The other fields in the upload file structure (common to all condition types) are:-

  • DATAB (Start Date of Condition Record)
  • DATBI (End Date of Condition Record)
  • KBETR (Condition Value)
  • KPEIN (Condition Price Unit)
  • MEINS (Unit of Measurement)
  • KRECH (Calculation Type for Condition)

 

So the final upload structure is:-

Field Name | Data Type | Description
KAPPL | KAPPL | Application
KSCHL | KSCHL | Condition Type
TABLE | TABNAME | Table Name
FLD1 | FIELDNAME | Field Name
FLD2 | FIELDNAME | Field Name
FLD3 | FIELDNAME | Field Name
FLD4 | FIELDNAME | Field Name
FLD5 | FIELDNAME | Field Name
FLD6 | FIELDNAME | Field Name
FLD7 | FIELDNAME | Field Name
FLD8 | FIELDNAME | Field Name
FLD9 | FIELDNAME | Field Name
FLD10 | FIELDNAME | Field Name
FLD11 | FIELDNAME | Field Name
DATAB | KODATAB | Validity start date of the condition record
DATBI | KODATBI | Validity end date of the condition record
KBETR | KBETR_KOND | Rate (condition amount or percentage) where no scale exists
KPEIN | KPEIN | Condition pricing unit
MEINS | MEINS | Base Unit of Measure
KRECH | KRECH | Calculation type for condition


Now, since in every A* table the first 3 key fields are:-

  • MANDT
  • KAPPL
  • KSCHL

(NOTE: the other 2 common fields, KFRST and DATBI, are the last 2 key fields)

And the first 3 fields in the upload file are:-

  • KAPPL
  • KSCHL
  • TABLE

 

So the rest of the key fields of the A* table are mapped to the upload file fields FLD1-FLD11 based on the number of primary key fields. Thus we start mapping from field 4 of the condition table to FLD1, FLD2 and so on up to FLD11 (a code sketch follows after the example below).

If there are 3 more key fields (excluding the 5 common key fields), then the upload file has values in FLD1, FLD2 and FLD3. If any other field has a value, the record is erroneous; likewise, none of these 3 fields may be blank (as they are part of the primary key).


For instance, let's consider table A652 (refer to the snapshot in the attachments).

The mapping of upload file to Condition Table would be like:-

1.      FLD1 --> VBELN

2.      FLD2 --> MATNR

3.      FLD3 --> VRKME

4.      FLD4 to FLD11 would remain blank.
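A minimal sketch of the mapping just illustrated (ls_upload, lt_dfies and the surrounding logic are assumptions; lt_dfies is the field list read in the earlier sketch): the non-common key fields of the condition table are assigned to FLD1, FLD2, ... dynamically.

DATA: lv_idx  TYPE i VALUE 0,
      lv_comp TYPE string.
FIELD-SYMBOLS: <lv_value> TYPE any.

LOOP AT lt_dfies INTO ls_dfies
     WHERE keyflag = 'X'
       AND fieldname <> 'MANDT' AND fieldname <> 'KAPPL'
       AND fieldname <> 'KSCHL' AND fieldname <> 'KFRST'
       AND fieldname <> 'DATBI'.
  lv_idx  = lv_idx + 1.
  lv_comp = |FLD{ lv_idx }|.                        " FLD1, FLD2, ...
  ASSIGN COMPONENT lv_comp OF STRUCTURE ls_upload TO <lv_value>.
  IF sy-subrc = 0 AND <lv_value> IS NOT INITIAL.
    " <lv_value> holds the upload value for key field ls_dfies-fieldname
  ELSE.
    " blank value for a required key field -> erroneous record
  ENDIF.
ENDLOOP.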

 

Also, the data in these fields must form a continuous chain. For instance, if FLD1, FLD2 and FLD4 have values and FLD3 is initial, the record is also erroneous.

 

1.      In the above erroneous situation, the error message "Discontinuity in Variable Key Fields" is appended.

 

2.      Validate the processing status against table T686E. If no valid record is found, append the error message "Invalid Processing status for conditions".

 

3.      Using the field information returned by FM "CATSXT_GET_DDIC_FIELDINFO", check whether a conversion exit is applicable; if so, apply it to the value from the upload file field and pass the converted value to the IDoc structures (a sketch follows below). If an error occurs, append the message returned by the conversion exit.
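A minimal sketch (variable names assumed) of applying such a conversion exit dynamically, using the CONVEXIT value returned in the field information:

DATA: lv_fm_name TYPE rs38l_fnam,
      lv_value   TYPE c LENGTH 50.     " raw value from the upload file field

IF ls_dfies-convexit IS NOT INITIAL.
  CONCATENATE 'CONVERSION_EXIT_' ls_dfies-convexit '_INPUT' INTO lv_fm_name.
  CALL FUNCTION lv_fm_name
    EXPORTING
      input         = lv_value
    IMPORTING
      output        = lv_value
    EXCEPTIONS
      error_message = 1
      OTHERS        = 2.
  IF sy-subrc <> 0.
    " append the message raised by the conversion exit to the error log
  ENDIF.
ENDIF.
" lv_value (now in internal format) is then passed on to the IDoc segments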

 

4.      Check whether the field is present in the segments

  • E1KOMG,
  • E1KONH, and
  • E1KONP

If the field is found, pass the value into the respective segment(s).

 

5.      Also, concatenate the key field values into a string called the variable key.

 

6.      After all key fields have been processed with the above steps, check the length of the variable key. If it is greater than 100 characters, append the message "Variable Key too big".

 

7.      Get the logical system name from table T000 where client = SY-MANDT. If no record is found, append the error message "No Partner Function Found".

 

8.      Concatenate 'SAP' and SY-SYSID to form the port.

 

9.      If no error has been found so far and a test run is not requested, populate the IDoc segments as follows:

    a. Pass control record
      • Pass the sender and receiver information
      • IDoc type: COND_A04
      • Message type: COND_A
      • Basic type: COND_A04
      • Direction: Inbound

    b. Pass data records - pass the prepared data into the segments:
      • E1KOMG: Application, Condition Type, Variable Key, Region
      • E1KONH: Start Date, End Date
      • E1KONP: Condition Type, Condition Value, Condition Unit, Condition Price Unit, Calculation Type for Condition

    c. DIRECT POST - post the data using FM "IDOC_INPUT_COND_A"
      • Pass all the above prepared data into the FM.
      • If an error is returned, append it to the log.
      • If no error is found and the data is posted successfully, check for status 53. If found, append the success message "Changes done successfully".

    d. IDOC POST - post the data using FM "IDOC_INBOUND_WRITE_TO_DB"
      • Pass the data records into the FM.
      • If an error is returned from the FM, append it to the log to be displayed to the user. If no error is found, commit work and append the message "Idoc successfully posted:" with the IDoc number.


Business benefits

The approach explained above uploads all the relevant condition records into the SAP system for the SD and MM modules using the IDoc approach (which is faster than using BDCs or LSMWs).

The only thing crucial for using this tool is understanding the mapping of the condition table to the upload file format. Once the mapping is done and a tab-delimited text file is provided to the program, it uploads the data into the desired tables, saving around 80% of the estimated time: for instance, the general effort of developing a conversion program is about 40 hours, versus 8 hours spent using this tool.


In addition, no maintenance is required when further change requests need to be catered for.

 

Thus this solution minimizes:

  • The functional effort of manually entering condition records one by one.
  • The technical effort of developing conversion programs using BDCs for different condition tables. The number of these conversion programs can vary depending on the conditions to be uploaded.
  • The maintenance effort required as and when new condition types are added.

Quick intro: This post is part of a series in which I show you some interesting ABAP tips and tricks. I'll present this in the context of our own developments at STA Consulting to have a real-life example and to make it easier to understand.

 

Requirement: we have a Business Object displayed in a field of an ALV grid and we want to add the Generic Object Services to our custom context menu.

 

Background information:

 

Business Objects

 

In practically all ALV grids you will find fields that contain Business Objects (BOs for short). For example, a plant, a vendor or a material is a BO defined by SAP. You can display BOs using transaction SWO1. Our example will be BUS1001 (Material).

 

http://sta-technologies.com/wp-content/uploads/2015/08/blog_GOS_01_resized.jpg

 

In order to uniquely identify a BO, there is a link to at least one field of a database table. BO BUS1001 is linked to MARA-MATNR, which is the unique identifier of a material.

 

http://sta-technologies.com/wp-content/uploads/2015/08/blog_GOS_02_resized.jpg

Click on the image above to see the full screenshot

 

This makes it easy to identify whether an ALV field contains a BO or not: simply check the field catalog of the ALV. If there is a reference to a table field which is also referenced by a BO, we can add the GOS menu to it (see the sketch below).
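A minimal sketch (variable names are assumed, and the lookup from table field to BO type is left as a hypothetical custom mapping) of scanning the field catalog for such candidate fields:

DATA: lt_fcat TYPE lvc_t_fcat.
FIELD-SYMBOLS: <ls_fcat> TYPE lvc_s_fcat.

" lo_grid is the CL_GUI_ALV_GRID instance whose context menu we want to enhance
lo_grid->get_frontend_fieldcatalog( IMPORTING et_fieldcatalog = lt_fcat ).

LOOP AT lt_fcat ASSIGNING <ls_fcat>
     WHERE ref_table <> space AND ref_field <> space.
  " e.g. ref_table = 'MARA', ref_field = 'MATNR'  ->  BO type BUS1001
  " whether a BO references this table field is determined by a custom mapping
ENDLOOP.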

 

Generic Object Services (GOS)

 

GOS is a very useful standard tool that allows us to do certain things with BOs. You can add notes and attachments, start and display workflows, link BOs together, send BOs as attachments in messages etc. I'm sure you've seen the classic toolbar menu of GOS in many transactions like MM03:

 

http://sta-technologies.com/wp-content/uploads/2015/08/blog_GOS_04_resized.jpg

 

Why is it needed?: The basic reason we made this development is that the GOS menu is only available in certain transactions. For example, if you want to attach a file to a material, you have to launch MM03. In order to do this, you have to open a new window, copy-paste the material number, hit enter etc. It would be great to attach the file in the transaction you are in.

 

Solution: let's assume that we have already identified which ALV field contains the material number. After this, we will use a standard class to add the GOS menu to our context menu.

 

First declare and create the object:

 

DATA: lo_gos TYPE REF TO cl_gos_manager.
CREATE OBJECT lo_gos
   EXPORTING
     ip_no_commit = 'R'
   EXCEPTIONS
     others       = 1.

It is important to add the parameter ip_no_commit to control database commits made by GOS, which may interfere with the current program. Space and 'X' are pretty self-explanatory; 'R' means that updates are performed using an RFC call. Naturally you have to add your own error handling in case an error occurred (a minimal sketch follows).
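A minimal check after the CREATE OBJECT call could look like this (the message text is made up):

IF sy-subrc <> 0.
  MESSAGE 'Could not create the GOS manager' TYPE 'S' DISPLAY LIKE 'E'.
  RETURN.
ENDIF.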

 

The next step is to get the GOS menu as a context menu object. We have to supply the BO type and the BO key (BUS1001 and the material number the user right-clicked on):

 

DATA: lo_gos_menu TYPE REF TO cl_ctmenu,
       ls_object   TYPE borident.
ls_object-objtype = 'BUS1001'.
ls_object-objkey  = lv_matnr.
CALL METHOD lo_gos->get_context_menu
   EXPORTING
     is_object = ls_object
   IMPORTING
     eo_menu   = lo_gos_menu.

The object reference received in parameter eo_menu will be exactly the same as in the toolbar of MM03.

 

The last step is to add this context menu to the context menu of the ALV grid. There are hundreds of forum posts about creating your own custom context menus, so I won't elaborate on it here. There is a standard demo program where you can check it out: BCALV_GRID_06. The bottom line is that you will have a context menu object that you can manipulate:

 

CALL METHOD lo_alv_context_menu->add_submenu
   EXPORTING
     menu     = lo_gos_menu
     text     = text-027.     " Generic Object Services

The end result will look like this (we have actually added the GOS menu under our own nested submenus "STA ALV Enhancer - Material"):

 

http://sta-technologies.com/wp-content/uploads/2015/08/blog_GOS_05.jpg

Click on the image above to see the full screenshot


Conclusion: This is pretty useful because now you can access the GOS in any ALV you want. Naturally if you attach a file using this context menu, it will be visible in MM03 and vice versa.

 

I hope you liked this first post, there are lots more things to come. Have a nice day!

 

p.s.: Actually it is possible to dynamically add this menu to all BOs in the ALVs of all standard and custom reports, so 'BUS1001' is not hardcoded...

 

Daniel Mead

Spool List with blank line

Posted by Daniel Mead Aug 27, 2015

We found the list spool containing one blank line in every 10 lines. This is because in NW 7.40 (SAPKB74010) the SAP standard FM 'RSPO_GET_LINES_AND_COLUMNS' behaves differently: since the default layout X_PAPER defines lines as 10 and the following condition is not met, lines stays at 10 and the spool is listed with a blank line every 10 lines (i.e. a 10-line page):

 

    if lines < 5.
      lines = 0.
    endif.

 

 

In NW 7.31 the parameter show_realheight is always false, so lines becomes 0 and the spool does not show the blank line:

 

    if lines < 5 OR show_realheight = abap_false.
      lines = 0.
    endif.

 

 

 

In order not to show the blank line in the spool, one possible solution was to modify the CFI content builder so that it does not use the default layout X_PAPER but X_SPOOLERR instead. However, this solution would probably require very heavy regression testing, and I am not sure whether it is worth it.

 

 

Applying SAP Note 2169148 resolved the issue for us.

 

 

Cheers,

 

 

Dan Mead

Praveer Sen

Let's chat...

Posted by Praveer Sen Aug 24, 2015

                                                                scn.jpg

Hi,

 

Every day we use lots of applications to share our information (personal, professional, etc.). But what about business information, our work information, process information? It could be any type of information. Let's take an example: sometimes an end user has to inform another business user about a created document, and so he makes a call or sends a mail to deliver that information.

 

Here I have made a simple program, a kind of chat program, where a user can see who is logged in to the system, send them a message instantly, and have a conversation without making any call or sending any mail.

 

For privacy reasons, the program stores every chat message in a custom table for later use.

 

The logic and steps are explained below.

 

    Initial Screen:

I made the initial screen the same size as in other chat applications.

                                                  chat.JPG

In the above screen, two users are logged in. On this screen I use the CL_GUI_TIMER class to check the online user information every second (the interval value is set in the class attributes); the chat login screen is refreshed only if a new user appears or a user disappears (see the timer sketch below).

 

If a new user logs in or a user leaves the system, the respective user information is added to or removed from the above screen. That means the screen only shows the logged-in/online users and bookmarked users.
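A minimal sketch (the handler class and the refresh logic are assumptions) of the CL_GUI_TIMER polling described above:

DATA: go_timer TYPE REF TO cl_gui_timer.

CLASS lcl_handler DEFINITION.
  PUBLIC SECTION.
    CLASS-METHODS on_finished FOR EVENT finished OF cl_gui_timer.
ENDCLASS.

CLASS lcl_handler IMPLEMENTATION.
  METHOD on_finished.
    " re-read the logged-on users here and refresh the screen only if the list changed
    go_timer->run( ).                 " restart the timer for the next check
  ENDMETHOD.
ENDCLASS.

START-OF-SELECTION.
  CREATE OBJECT go_timer.
  go_timer->interval = 1.             " check every second
  SET HANDLER lcl_handler=>on_finished FOR go_timer.
  go_timer->run( ).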

 

In the logon exit, logic has been implemented to check whether the user is already using the chat program, and whether the user's SAP GUI crashed for some reason; in that case the old login information is deleted from the custom table.

 

Exit Information: SUSR0001 [User exit after logon to SAP System].

The code is attached in ZXUSRU01.txt.

 

Make the first chat:

To chat with a user, select any user from the chat screen above. After selecting a user, a chat screen opens.

 

              chat screen.JPG

The selected user's name becomes the title of the screen, as displayed above (I selected Rohan Sen). To send a message to the selected user, enter the text in the open text window.

              enter.JPG

 

After pressing 'ENTER', the system checks whether the selected user is logged in and whether their chat screen is open. If the chat screen is not open for the selected user, a pop-up message is shown on that user's screen, as below.

 

              popmsg.JPG

 

Used Function Module: TH_POPUP


Otherwise, the message is stored in the maintained table.
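The TH_POPUP call mentioned above might look roughly like this (the receiver and the text are made up; check the FM interface in your own system):

DATA: lv_receiver TYPE sy-uname VALUE 'ROHANSEN',   " assumed receiver user
      lv_message  TYPE c LENGTH 60.

lv_message = 'New chat message received'.

CALL FUNCTION 'TH_POPUP'
  EXPORTING
    client         = sy-mandt
    user           = lv_receiver
    message        = lv_message
  EXCEPTIONS
    user_not_found = 1
    OTHERS         = 2.
IF sy-subrc <> 0.
  " receiver is not reachable -> store the message in the custom table instead
ENDIF.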


After executing the chat transaction, the user gets the initial screen (as above); if the user has received a message from another user, a blue chat icon is displayed in place of the yellow chat icon, as in the screen below.


If the user is not online, all received messages are stored in the maintained table, and when the user logs in to the system a pop-up message box appears as in the screen below, showing the message sender and date information.

  

                    message receive.JPG

A user can only send messages to offline users if those users have been bookmarked.

 

 

Message Received

 

When both users are using the application and sharing information, the view looks as below.

 

    chat start.jpg

 

It is like any other chat screen: the user's own entries appear on the right side and the receiver's entries on the left (as above), together with other necessary information such as time and date. If the date is the current date, it is displayed as "today".

 

One more tool is provided to view earlier conversations.

 

          erlier conversation.JPG

By clicking the above button, the next earlier day's available conversation is loaded into the chat screen as below.

 

            earlier load.JPG

If the conversation details exceed the screen, a scroll bar appears automatically (see the red area in the above screen); it is very thin and only displayed while the user scrolls.

 

By default only the current conversation is shown, i.e. the scroll position is always at the bottom of the chat screen.

 

 

Technical Information:

Although the whole solution is based on HTML, CSS and JavaScript, some additional settings are required on the SAP GUI / local system side.

 

1. SAP GUI installation. Please see the screen below for the required SAP GUI components.

              SAPGUI.JPG

    For more information, please go through the discussion below.

    https://scn.sap.com/thread/3782213

 

2. JavaScript settings. Internally SAP uses Internet Explorer in the CL_GUI_HTML_VIEWER class, so to make the JavaScript functionality available, enable the corresponding setting in IE (a small rendering sketch follows below).

 

    IE setting.JPG

    IE-Setting->Advance Tab
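For reference, a minimal sketch (container, HTML content and variable names are assumptions) of how CL_GUI_HTML_VIEWER can render the HTML/CSS/JavaScript chat view inside the SAP GUI:

DATA: go_html_viewer TYPE REF TO cl_gui_html_viewer,
      lt_html        TYPE TABLE OF char255,
      lv_url         TYPE char255.

" go_container is an already created CL_GUI_CUSTOM_CONTAINER on the chat screen
CREATE OBJECT go_html_viewer
  EXPORTING
    parent = go_container.

APPEND '<html><body><div id="chat">Hello!</div></body></html>' TO lt_html.

go_html_viewer->load_data( IMPORTING assigned_url = lv_url
                           CHANGING  data_table   = lt_html ).
go_html_viewer->show_url( url = lv_url ).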


3. MIME settings. Create a new folder SAP -> PUBLIC -> SAP_CHAT and import the jpg and png files from the attached zip file.



All the respective programs, includes and classes are in the attached zip in the Google Drive link below for further analysis.

 

SAP Communicator

 

Program Name : zcommunication

Class Name    : zcl_communication

Tables Name  : zcom_books (bookmark information)

                        zcom_run    (Communicator execution information)

                        zcommun    (Current Communication Information)

                        zcommun_his (Communicator history)

 

The nugget file (NUGG_ZCOMM.nugg) is attached in the above link.

 

Note: The application was built on SAP NetWeaver 7.40.


Thanks & Regards,

Praveer


Welcome to another ABAP Trapdoors article. If you are interested in the older articles, you can find a link list at the bottom of this post.

 

There are various ways to handle XML data in ABAP, all of them more or less well-documented. If you need a downwards-compatible event-based parsing approach, for example, you might want to use the iXML library with its built-in SAX-style parser. (Note that iXML still constructs the entire document, so it's more like a DOM parser with a SAX event output attached to it. If you're looking for a strictly serial processing facility, check out the relatively new sXML library instead.)

 

The iXML documentation has a, let's say, distinctive writing style, and the library proudly distinguishes itself from the remaining ABAP ecosystem (for example, by using zero-based indexes instead of one-based lists in various places), but all things considered, it's a viable and stable solution. That is, if you observe the first rule of SAX: Size Does Matter. Consider the following example:

 

REPORT ztest_ixml_sax_parser.
CLASS lcl_test_ixml_sax_parser DEFINITION CREATE PRIVATE.
  PUBLIC SECTION.
    CLASS-METHODS run.
ENDCLASS.
CLASS lcl_test_ixml_sax_parser IMPLEMENTATION.
  METHOD run.
    CONSTANTS: co_line_length TYPE i VALUE 100.
    TYPES: t_line   TYPE c LENGTH co_line_length,
           tt_lines TYPE TABLE OF t_line.
    DATA: lt_xml_data       TYPE tt_lines,
          l_xml_size        TYPE i,
          lr_ixml           TYPE REF TO if_ixml,
          lr_stream_factory TYPE REF TO if_ixml_stream_factory,
          lr_istream        TYPE REF TO if_ixml_istream,
          lr_document       TYPE REF TO if_ixml_document,
          lr_parser         TYPE REF TO if_ixml_parser,
          lr_event          TYPE REF TO if_ixml_event,
          l_num_errors      TYPE i,
          lr_error          TYPE REF TO if_ixml_parse_error.
    DATA: lr_ostream TYPE REF TO cl_demo_output_stream.
    " prepare the output stream and display
    lr_ostream = cl_demo_output_stream=>open( ).
    SET HANDLER cl_demo_output_html=>handle_output FOR lr_ostream.
    " prepare the data to be parsed
    lt_xml_data = VALUE #( ( '<?xml version="1.0"?>' )
                           ( '<foo name="bar">' )
                           ( '  <baz number="1"/>' )
                           ( '  <baz number="2"/>' )
                           ( '  <baz number="4"/>' )
                           ( '</foo>' ) ).
    " determine the size of the table - since the lines have a fixed length, that should be easy
    l_xml_size = co_line_length * lines( lt_xml_data ).
    " initialize the iXML objects
    lr_ixml = cl_ixml=>create( ).
    lr_stream_factory = lr_ixml->create_stream_factory( ).
    lr_istream = lr_stream_factory->create_istream_itable( table = lt_xml_data
                                                           size  = l_xml_size ).
    lr_document = lr_ixml->create_document( ).
    lr_parser = lr_ixml->create_parser( stream_factory = lr_stream_factory
                                        istream        = lr_istream
                                        document       = lr_document ).
    lr_parser->set_event_subscription( if_ixml_event=>co_event_attribute_post +
                                       if_ixml_event=>co_event_element_pre +
                                       if_ixml_event=>co_event_element_post ).
    " the actual event handling loop.
    lr_ostream->write_text(
        iv_text   = 'iXML Parser Events'
        iv_format = if_demo_output_formats=>heading
        iv_level  = 1
    ).
    DO.
      lr_event = lr_parser->parse_event( ).
      IF lr_event IS INITIAL. " if either the end of the document is reached or an error occurred
        EXIT.
      ENDIF.
      CASE lr_event->get_type( ).
        WHEN if_ixml_event=>co_event_element_pre.
          lr_ostream->write_text( |new element '{ lr_event->get_name( ) }'| ).
        WHEN if_ixml_event=>co_event_attribute_post.
          lr_ostream->write_text( |attribute '{ lr_event->get_name( ) }' = '{ lr_event->get_value( ) }'| ).
        WHEN if_ixml_event=>co_event_element_post.
          lr_ostream->write_text( |end of element '{ lr_event->get_name( ) }'| ).
      ENDCASE.
    ENDDO.
    " error handling
    l_num_errors = lr_parser->num_errors( ).
    IF l_num_errors > 0.
      lr_ostream->write_text(
          iv_text   = 'iXML Parser Errors'
          iv_format = if_demo_output_formats=>heading
          iv_level  = 1
      ).
      DO l_num_errors TIMES.
        lr_error = lr_parser->get_error( sy-index - 1 ). " because iXML is 0-based
        lr_ostream->write_text( |{ lr_error->get_severity_text( ) } at offset { lr_error->get_offset( ) }: { lr_error->get_reason( ) }| ).
      ENDDO.
    ENDIF.
    lr_ostream->close( ).
  ENDMETHOD.
ENDCLASS.
START-OF-SELECTION.
  lcl_test_ixml_sax_parser=>run( ).

You can copy this program into your system and execute it, it doesn't do anything harmful: It simply assembles a simple XML document (in a real application, you would get this from a file, a database, a network source - whatever), constructs an input stream around it, passes it to a parser and executes a parse-evaluate-print loop until either the end of the document is reached or something bad happens.

 

If your system is a non-unicode (NUC) system (you can easily check if this is the case using System --> Status), the program will run just fine, producing an output similar to the following image:

 

OutputNormal.png

 

If your system happens to be a unicode (UC) system, the program won't behave quite the same way - you will get a rather nondescriptive error message (error at offset 0: unexpected symbol; expected '<', '</', entity reference, character data, CDATA section, processing instruction or comment).

 

OutputError.png

 

It certainly does not help that the parser does not return an offset (or a line and column number) when assembling the error message. However, the events logged prior to the error messages provide a hint: The error always occurs after half of the lines of the table have been processed. You can easily verify this by changing the number of baz elements in the sample above. Since I've already mentioned that this issue occurs on UC systems only, it's now easy to deduce what went wrong here:

 

iXMLInterface.png

 

The iXML stream factory expects the size to be the number of bytes, not the number of characters. The code works as long as a character is represented by a single byte, but in UC systems, that's not the case. The solution - or maybe one of the solutions - is relatively simple:

 

    " determine the size of the table for both UC and NUC systems
    l_xml_size = co_line_length * lines( lt_xml_data ) * cl_abap_char_utilities=>charsize.

This trapdoor is a rather devious contraption because it will not be detected by the standard unicode checks, and the error message is about as misleading as it can get. Also, whether you get to see the message at all depends on the actual implementation of the parsing program. If the original developer thought that error handling might be left to be implemented by those who follow - well, it's a long way down...

 

Older ABAP Trapdoors articles

Hi sdnmates,

While writing an RFC-enabled function module today I came across some interesting stuff (an example of how much SAP concentrates on performance) that is worth sharing!

 

Scenario :

So here it goes:

 

Untitled.png

You will get this warning popup (labeled as information) when declaring a parameter that holds internal table data as an Importing, Exporting or Changing parameter of an RFC-enabled FM.

 

So, what does SAP suggest? You should declare it as a Tables parameter (even though Tables parameters have already been marked obsolete).

 

Sounds strange, right? Probably yes, if you do not know the reason.

 

Reason :

So, I searched for the root cause :

This information/check was introduced with OSS Note 736660 - RFC: Implementing performance checks in transaction SE37.


For releases lower than 7.2 (or 7.0 EHP2), SAP uses the internal binary format for flat types and Tables parameters, and xRFC for deep parameters, as per the protocol defined for RFC communication between systems.


For releases 7.2 and higher (or 7.0 EHP2 and higher), SAP uses basXML (Binary ABAP Serialized XML), which is again expected to change in coming releases.


In terms of performance, the internal binary format is the fastest, followed by basXML, and then xRFC.



Prevention :

So, if you do not want this popup and you are on a higher release supporting basXML, make the following changes:

1. Specify the transfer protocol as basXML in SM59.

   

Untitled.png
2. Tick the "basXML supported" checkbox in SE37.

 

Untitled.png

 

Please note: if your RFC FM has only flat parameters, then basXML will result in a loss of performance. It helps you achieve better performance only if your RFC FM has many complex (deep) parameters.

 

Suggestion :

Please add any comments or suggestions that can bring further value to this article.

 

Thanking You All..!!

Later this month SAP Press will be introducing its new E-Bite publication format to the SAP community. These small electronic books concentrate on a specific topic that, due to practical constraints, is covered only generally in conventional books, where it is just one of a larger collection of topics - at best having its own chapter and at worst being reduced to only a few pages. E-Bites overcome this limitation by taking a deep dive into the details of a specific subject and thoroughly exploring the nuances of its associated concepts, and I am honored to be included as one of the authors of this inaugural release of the E-Bites series.


I first learned about E-Bites during a conversation in late March 2015 with SAP Press editor Kelly Weaver, who was aware I was an ABAP programmer and, through an exchange of emails, had become familiar with some of my articles on Agile Software Development. She explained to me that she felt I had a good, clear writing style and that perhaps I would be interested in becoming an E-Bite author on one of the ABAP topics being considered for publication. I was flattered at what I considered such high praise coming from a representative of a well-regarded publishing company, but initially declined the invitation to write an E-Bite. 


A week later I called Kelly saying I had reconsidered and thought I could do an acceptable job on a book about using regular expressions with ABAP, one of the topics she had mentioned in our previous conversation. Thus began my journey as an E-Bite author.


The journey begins


After my initial chat with Kelly I had begun searching the internet for articles dealing with regular expressions, finding little of anything aimed at beginners. This, I thought, could explain why so many programmers avoided the use of regular expressions – there was no easy way to learn about the concept, and the dearth of such information is what prompted me to contact Kelly and accept the challenge of filling this void via E-Bite. Integrating regular expressions into ABAP programs would require the developer to be familiar not only with ABAP syntax but also with the syntax associated with the regular expression language, a syntax so cryptic it is suspected of causing headaches, stomach cramps and cases of glazed eyes, so no wonder it is shunned by programmers unfamiliar with it.
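As a tiny, self-contained illustration of that mix of syntaxes (the pattern and the value are made up), here ABAP's FIND statement is combined with a regular expression that checks for an ISO-style date:

DATA: lv_input TYPE string VALUE '2015-09-10'.

FIND REGEX '^\d{4}-\d{2}-\d{2}$' IN lv_input.
IF sy-subrc = 0.
  WRITE: / 'Looks like an ISO date'.
ENDIF.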


I spent my spare time over the next few weeks thoroughly researching the subject and writing sample ABAP programs illustrating the use of regular expressions, composing my initial E-Bite draft as I went along. At first I was not convinced I could manage to fill the 50 to 100 pages of text recommended as the size of an E-Bite, but soon found it necessary to eliminate content that would have caused the book to exceed this high limit.  Paul Hardy, in his superb account of his experience writing the book ABAP To The Future (http://scn.sap.com/community/abap/blog/2015/03/27/my-monster-its-alive-its-alive), also expresses his initial panic with not being able to identify enough topics to fill 15 chapters of a book, but then over time identifying more than 15 topics and having to decide what to leave out.


I found that writing about regular expressions caused me to learn much more about the subject, and eventually I found a way to introduce programmers to its language syntax in small, manageable bites, hopefully avoiding the anxiety many might experience while trying to learn it on their own. At long last I had a complete draft I felt could convey the necessary concepts to seasoned programmers who were new to regular expressions.


Overcoming technical difficulties


Over the past few years I had been writing articles using the LibreOffice Writer application running on the Ubuntu operating system. These files are saved in the "open document text" format, with the file extension ".odt". Naturally, I intended to use the same application for the E-Bite draft. The folks at Rheinwerk Publishing, Inc., however, required a book draft to be formatted using a Microsoft Word template and saved in the ".doc" format.


I did not own a copy of Microsoft Word, and we agreed at the start to find a way to exploit the features both applications had in common, and to persevere and resolve any problems as we encountered them. This was unexplored territory for all of us, and we were learning as we moved through the process: the E-Bite format was new and had yet to be tested in the marketplace, and this seemed to be the first time Rheinwerk Publishing, Inc. had dealt with an author using an open source document editor. To their credit, the technicians at Rheinwerk Publishing, Inc. created for me a LibreOffice Writer template equivalent to the one used with Microsoft Word, with detailed instructions on how to make it available during editing sessions.


My editor, Hareem Shafi, and I soon discovered many of the incompatibilities between Microsoft Word and LibreOffice, but eventually we found ways to overcome the challenges presented by these different applications. Hareem was very patient with the difficulties we were experiencing, and I commend her for the magnificent job she did wrestling my draft into submission. In some ways I felt we were trailblazers helping to establish a process by which OpenDocument files could be used as the basis for future book drafts.


The dawn of a new day


Now that the work of writing the book is complete, I feel privileged that this E-Bite will accompany the other E-Bites in the first release of this new book format. With its potential for providing a book on a narrowly focused topic, without requiring readers to acquire a book that also deals with a host of other concepts, perhaps this new E-Bite format will appeal to the SAP community.


Jim


https://www.sap-press.com/


Hi again.

 

In the previous post I described the basic concepts of programming with SE91 messages:

 

How to use messages properly in the code. Common rules.

 

If you are used to OO programming, your logic probably relies on class-based exceptions.

 

In most cases I would choose the IF_T100_MESSAGE variant to explain the reason for the error (Rule #4).

 

However, sometimes you are dealing with foreign code that you are not supposed to modify, and this code raises an exception.

 

Here we are talking about the case where you want to output the message immediately. To keep things abstract, let's just use cx_root as an example.

 

If you go the easiest way:

try.
    do_something( ).
  catch cx_root into data(lo_cx).
    message lo_cx->get_text( ) type 'I'.
endtry.

you will get the popup:

Снимок.PNG

 

but unfortunately the F1 button won't work here. The debugger is on you, my friend.

 

But let's just imagine that we press F1 and see documentation like this:

 

Снимок.PNG

and when we click the "Navigate to source" link we go directly to the source code where the exception was raised:

Снимок.PNG

 

Pretty cool, isn't it?! =)

 

Let's see how many actions we need to achieve this. As mentioned before, I wanted to reuse the standard SAP UI without creating my own screen.


1. We need 3 SET/GET parameters.


Go to SE80.


Edit object (Shift+F5) -> Enhanced options -> SET/GET parameter ID -> type zcw_nav_prog -> Create (F5).


Repeat these steps for the zcw_nav_incl and zcw_nav_line parameters.

 

2. Go to SE38 and create a very simple program:

 

program zcw_navigate_to_source.
parameters:
  p_prog type syrepid memory id zcw_nav_prog,
  p_incl type syrepid memory id zcw_nav_incl,
  p_line type num10 memory id zcw_nav_line.
start-of-selection.
  /iwfnd/cl_sutil_moni=>get_instance( )->show_source(
      iv_program    = p_prog    " Source Program
      iv_include    = p_incl    " Source Include
      iv_line       = conv #( p_line )   " Source Line
      iv_new_window = ''    " New Window
  ).

I really hope you have this component available. If not, you can find something similar in the where-used list for the 'RS_ACCESS_TOOL' function module.

 

3. Create ZCW_NAV_SRC transaction in SE93.


Choose a report transaction and assign the ZCW_NAVIGATE_TO_SOURCE report to it.

 

4. We need a real SE91 message.


Just create a message with the text &1&2&3&4. Remove the self-explanatory flag and go to the long text.


Put the cursor where you wish to place a link -> Insert menu -> Link.


Choose "Link to transaction and skip first screen" as Document class, use the transaction from step 3.


"Name in Document" is the real text that you see on the screen like "Navigate to source".


5. Now we're ready to code.

try.
    do_something( ).
  catch cx_root into data(lo_cx).
    " get the source code position where the exception was raised
    lo_cx->get_source_position(
      importing
        program_name = data(lv_prog)   " main program
        include_name = data(lv_incl)   " include
        source_line  = data(lv_line)   " line number
    ).
    " an integer cannot be stored as a SET/GET parameter value, so convert it
    data(lv_line_c) = conv num10( lv_line ).
    " export the parameter values for the navigation transaction
    set parameter id 'ZCW_NAV_PROG' field lv_prog.
    set parameter id 'ZCW_NAV_INCL' field lv_incl.
    set parameter id 'ZCW_NAV_LINE' field lv_line_c.
    types:
      begin of message_ts,
        msgv1 type bal_s_msg-msgv1,
        msgv2 type bal_s_msg-msgv2,
        msgv3 type bal_s_msg-msgv3,
        msgv4 type bal_s_msg-msgv4,
      end of message_ts.
    " split the exception text into the four message variables
    data(ls_message) = conv message_ts( lo_cx->get_text( ) ).
    " output - don't forget we always use a static message definition.
    " Use the message created in step 4 here.
    message id 'ZCW_COMMON' type 'I' number 124
      with ls_message-msgv1
           ls_message-msgv2
           ls_message-msgv3
           ls_message-msgv4.
endtry.

That's it! What I actually did was put this handling logic into a minimalistic method, ZCL_MSG=>CX( lo_cx ), which I actively use in my code.
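The method itself is not shown here, but a rough sketch of what such a wrapper could look like is given below; everything in it is my own reconstruction based on the snippet above, not the actual ZCL_MSG class.

class zcl_msg definition public final create public.
  public section.
    class-methods cx
      importing io_cx type ref to cx_root.
endclass.

class zcl_msg implementation.
  method cx.
    " remember where the exception was raised, for the "Navigate to source" link
    io_cx->get_source_position(
      importing
        program_name = data(lv_prog)
        include_name = data(lv_incl)
        source_line  = data(lv_line) ).
    data(lv_line_c) = conv num10( lv_line ).
    set parameter id 'ZCW_NAV_PROG' field lv_prog.
    set parameter id 'ZCW_NAV_INCL' field lv_incl.
    set parameter id 'ZCW_NAV_LINE' field lv_line_c.
    " split the exception text into the four message variables
    types:
      begin of ty_msg,
        msgv1 type symsgv,
        msgv2 type symsgv,
        msgv3 type symsgv,
        msgv4 type symsgv,
      end of ty_msg.
    data(ls_msg) = conv ty_msg( io_cx->get_text( ) ).
    message id 'ZCW_COMMON' type 'I' number 124
      with ls_msg-msgv1 ls_msg-msgv2 ls_msg-msgv3 ls_msg-msgv4.
  endmethod.
endclass.

With such a wrapper, the whole catch block in application code shrinks to a single call: zcl_msg=>cx( lo_cx ).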

 

I hope you enjoyed it.

 

Petr.

 



Dear all,

 

Today I'm going to discuss message handling with you.

 

The message code is basically something important to a support team. When we have this code, we can navigate to SE91 and use the where-used list for the message to find all the places in the code where this message occurs. However, developers sometimes don't care about future support issues and put together a quick solution based just on a text:

  • message 'Some message' type 'S'

 

In this case we have a generic message without any long text description at all. Finding the reason for such a message is a much more difficult debugging task than when a message code is available.

 

Rule #1: Use a message code as much as you can.

 

Instead of a direct text, try to use an SE91 message number.
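As a quick illustration (the message class ZSD_ORDERS and message number 001 are made up for this example), compare:

" hard to trace: this text exists only at this one place in the code
message 'Order 4711 is locked by another user' type 'S'.

" traceable: message 001 of class ZSD_ORDERS is maintained in SE91
" and can be found again via its where-used list
data(lv_order_id) = '4711'.
message s001(zsd_orders) with lv_order_id.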

 

These are the simple steps to find out the source of a message:

 

1. Double-click on the message at the bottom of the screen, or press the F1 key for a popup message of type 'I'.

2. Go to the technical information

Безымянный.png

3. In the popup window, double-click on the message number

Безымянный.png

4. Put the cursor on the message number and go to the Where-Used-List (Ctrl+Shift+F3)

Безымянный.png

5. Execute the search to see all the possible places where the message can occur

 

Rule #2. Try to keep the number of places with the same message code very low

 

I guess you know very well the case where you have some standard SAP message, you look for the place where it is raised, and you get a list of dozens of different programs using the same code. It is then very difficult to find the place where your particular message occurred.

 

In general terms, I would cover this rule with a more abstract one:

 

Rule #2.1 Don't copy the same code twice.

Even if it is just a message call, if you are going to use it widely, provide a dedicated program unit for it.

 

Rule #3. Use static dummy message calls alongside dynamic message declarations whenever possible.

 

Sometimes we do not need to output the message immediately, but rather store it in a log.

 

So in code like this:

 

DATA ls_msg TYPE bal_s_msg.

CLEAR ls_msg.
ls_msg-msgty     = 'I'.
ls_msg-msgid     = 'FMFEES'.
ls_msg-msgno     = '68'.
ls_msg-probclass = '3'.

CALL FUNCTION 'BAL_LOG_MSG_ADD'
  EXPORTING
    i_log_handle = iv_log_handle
    i_s_msg      = ls_msg.





Just don't forget to add a very simple but very important line:

message i068(fmfees) into data(lv_dummy).





This tiny five-second effort can save hours for the person who will eventually debug your code.
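Since MESSAGE ... INTO also fills the sy-msgid, sy-msgno and sy-msgv* system fields, the dummy call and the log entry can even be combined so that the two never drift apart. A small sketch, reusing ls_msg and the log handle from the snippet above:

" the dummy call makes the message traceable via SE91 and fills the sy-msg* fields
message i068(fmfees) into data(lv_dummy).

ls_msg = value bal_s_msg( msgty     = sy-msgty
                          msgid     = sy-msgid
                          msgno     = sy-msgno
                          msgv1     = sy-msgv1
                          msgv2     = sy-msgv2
                          msgv3     = sy-msgv3
                          msgv4     = sy-msgv4
                          probclass = '3' ).

call function 'BAL_LOG_MSG_ADD'
  exporting
    i_log_handle = iv_log_handle
    i_s_msg      = ls_msg.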

 

Rule #4. When creating your own exceptions that are going to be used as output messages, implement IF_T100_MESSAGE.


As an example, you can check the CX_SALV_X_MSG class.

Снимок.PNG

Conversely, if you perform the steps from Rule #1, you will navigate to this class.

 

Please notice that in this case once the exception has been caught:

 

catch cx_salv_x_msg into data(lo_cx).

 

you should output the message not like this:

message lo_cx->get_text( ) type 'S'.

 

but like this:

message lo_cx type 'S'.
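For reference, a minimal sketch of what such an exception class could look like (the class name, the message class ZCW_COMMON and message number 001 are placeholders of my own):

class zcx_my_error definition
  public
  inheriting from cx_static_check
  final
  create public.

  public section.
    interfaces if_t100_message.
    constants:
      begin of locked_by_user,
        msgid type symsgid value 'ZCW_COMMON',
        msgno type symsgno value '001',
        attr1 type scx_attrname value 'MV_USER',
        attr2 type scx_attrname value '',
        attr3 type scx_attrname value '',
        attr4 type scx_attrname value '',
      end of locked_by_user.
    data mv_user type syuname read-only.
    methods constructor
      importing iv_user type syuname optional.
endclass.

class zcx_my_error implementation.
  method constructor.
    super->constructor( ).
    mv_user = iv_user.
    " the T100 key decides which SE91 message is displayed by MESSAGE lo_cx TYPE ...,
    " and ATTR1 tells the runtime to substitute MV_USER into placeholder &1
    if_t100_message~t100key = locked_by_user.
  endmethod.
endclass.

Raising it with "raise exception type zcx_my_error exporting iv_user = sy-uname." then produces a message whose where-used list and long text behave exactly as described in Rule #1.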

 

In the case where you have only an abstract cx_root instance, you can try the following approach:

Advanced navigation to a source code from the message long text.

 

Rule #5. Use long text explanation.

Although source code navigation is important, don't forget that the main goal of our message is to explain the reason for the error to the end user. Properly documented software lets users sort the problem out without contacting the support team at all, which automatically moves your software to the next level of quality.

 

Therefore, remove the self-explanatory checkbox and provide some key details on how the user can get rid of this message on their own.

 


By following these simple rules you can provide a much better solution.


 

I hope you liked it.

 

Adios.

Hi All,

 

I have seen many posts about downloading data from an internal table to a PC, and many replies to them. Many people have suggested different approaches, but I noticed those posts were still marked Not Answered. Some complained that although they are able to download with the field names, the field names are truncated to 10 characters.

 

For all of these, I found a suitable way to download with proper field names. Some might have tried this method already, while others may be seeing it for the first time. I thought of sharing it anyway.

 

Here I will be using two internal tables:

1. The final internal table with the data to be downloaded.

2. An internal table holding the field names of the final internal table.

 

 

Fetching data and getting field names.

sap1.PNG

 

 

Downloading the Field names internal table.

 

sap2.PNG

 

 

After calling the GUI_DOWNLOAD function module, call GUI_DOWNLOAD again and pass the final internal table with the data.

 

Downloading the Final Internal table

 

sap3.PNG

 

 

Check the exporting parameters passed to the function module in both calls.
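Since the code itself is only visible in the screenshots above, here is a minimal sketch of the same idea under my own assumptions (SFLIGHT as the example table, a hard-coded file name, and the long field labels taken from the DDIC); the actual program in the screenshots may differ in its details.

data: lt_data   type standard table of sflight,
      lt_header type standard table of string,
      lv_header type string.

" 1. fetch the data to be downloaded
select * from sflight into table lt_data up to 100 rows.

" 2. build one header line from the DDIC field labels, separated by tabs
data(lo_struct) = cast cl_abap_structdescr(
                    cl_abap_typedescr=>describe_by_name( 'SFLIGHT' ) ).
data(lt_fields) = lo_struct->get_ddic_field_list( ).
loop at lt_fields into data(ls_field).
  if lv_header is initial.
    lv_header = ls_field-scrtext_l.
  else.
    lv_header = lv_header && cl_abap_char_utilities=>horizontal_tab && ls_field-scrtext_l.
  endif.
endloop.
append lv_header to lt_header.

" 3. download the header line first (the tabs are already part of the line) ...
call function 'GUI_DOWNLOAD'
  exporting
    filename = 'C:\temp\sflight.xls'
    filetype = 'ASC'
  tables
    data_tab = lt_header.

" 4. ... then call GUI_DOWNLOAD again and append the data below the header
call function 'GUI_DOWNLOAD'
  exporting
    filename              = 'C:\temp\sflight.xls'
    filetype              = 'ASC'
    append                = 'X'
    write_field_separator = 'X'
  tables
    data_tab              = lt_data.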

 

Result:

sap4.PNG

Hi to everybody,

 

This is the third part of my blog series about using persistent classes as a quick tool for generating query classes and reusing them conveniently.

 

This is the beginning:

 

Part 1:

Persistent classes: revival of a spirit. Query by range.

 

and part 2:

Persistent classes: single get( ) method instead of multiple get_xxx() methods calls

 

 

Using this query class, I noticed that when we use the same request twice, the selection is executed every time.

 

To improve performance, we need to create a buffer for retrieving the results of previously executed requests.

 

So what do we have as incoming parameters? A request, represented by a structure of any type, and the agent class.

 

Снимок.PNG

To solve this abstract task I decided to use the serialization technique again.

Снимок.PNG

 

To be honest, this is the first time I have applied logic based on serialization and checksum calculation.
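To make the idea concrete, here is a rough sketch of such a buffer under my own assumptions (the real implementation is in the ZCL_OS_API example linked below and may look different): the request structure is serialized with CALL TRANSFORMATION id, and the serialized form is used as the key of a hashed buffer table. A checksum over that XML, for example via cl_abap_message_digest, could be used instead if you want to keep the keys short.

class lcl_query_buffer definition.
  public section.
    types ty_results type ref to data.   " whatever the query class returns
    methods get
      importing is_request        type any
      returning value(rr_results) type ty_results.
  private section.
    types:
      begin of ty_buffer_line,
        key     type xstring,
        results type ty_results,
      end of ty_buffer_line.
    data mt_buffer type hashed table of ty_buffer_line with unique key key.
    methods select_from_db
      importing is_request        type any
      returning value(rr_results) type ty_results.
endclass.

class lcl_query_buffer implementation.
  method get.
    " serialize the request structure; identical requests yield identical XML
    data lv_key type xstring.
    call transformation id source request = is_request result xml lv_key.

    read table mt_buffer into data(ls_buffer) with table key key = lv_key.
    if sy-subrc = 0.
      rr_results = ls_buffer-results.   " buffer hit: no database access
      return.
    endif.

    " buffer miss: run the actual selection once and remember the result
    rr_results = select_from_db( is_request ).
    insert value #( key = lv_key results = rr_results ) into table mt_buffer.
  endmethod.

  method select_from_db.
    " placeholder for the selection done by the generated query/agent class
  endmethod.
endclass.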

 

Has anyone used similar methods for generic hashing? What is your opinion about the overall performance of frequently called transformations and checksum calculations?

 

Thanks.

 

You can find the full example here: ZCL_OS_API
