

Hi, please be gentle, as I dare to enter my first blog post.

 

The following might be a bit simple, child's play for many of you, but I wish someone had told me this little trick when I first started developing Smartforms.

 

I used to get this question a lot: “Martin – we have a problem with the invoice print, and we can’t re-create the error in the development system. Can you please debug and find the error in the production system?”

 

I really needed to set a session breakpoint directly in the Smartform in the production system, but how? Where was the button in the Smartforms transaction? No way was I going to create a transport request with the Smartform holding a coded breakpoint.

 

Sure, I could go to VF03 and use '/h', but that is a rather tedious way to go about it.

 

Instead I did this:

 

     1. Go to the Smartforms transaction, type in the Smartform name, and hit "Test". You then get the name of the function module that renders the Smartform in the given system.

 

01.png

 

     2. Copy the FM name to the clipboard and go to SE80.

 

02.png

 

     3. Select program and paste the name in.

 

03.png

 

 

4. Put an 'L' just before the last part of the name, append 'F01', and hit Enter. For example, the generated function module /1BCDWB/SF00000123 becomes the include /1BCDWB/LSF00000123F01.

 

    04.png 

 

You now get the include that holds all the form routines representing every node in the Smartform. Search for the one you need and create your session breakpoint in the production system.
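As a side note, the generated function module name can also be fetched in code with the standard FM SSF_FUNCTION_MODULE_NAME; a minimal sketch, assuming a Smartform named Z_MY_INVOICE:

DATA: lv_formname TYPE tdsfname VALUE 'Z_MY_INVOICE',
      lv_fm_name  TYPE rs38l_fnam.

"Returns the name of the generated FM (e.g. /1BCDWB/SF00000123)
"for the given Smartform in the current system
CALL FUNCTION 'SSF_FUNCTION_MODULE_NAME'
  EXPORTING
    formname           = lv_formname
  IMPORTING
    fm_name            = lv_fm_name
  EXCEPTIONS
    no_form            = 1
    no_function_module = 2
    OTHERS             = 3.
IF sy-subrc = 0.
  WRITE: / 'Generated FM:', lv_fm_name.
ENDIF.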

 

(Global initialization in the Smartform is in subroutine 'GLOBAL_INIT'.)

 

Please note that this does not work for all Smartforms, but in my experience most SD forms can be debugged this way in production and QA systems.

 

/Martin

Calculate dunning charges through BTE '00001071' at the time of dunning (transaction F150)

This scenario covers calculating dunning charges and posting them to the customer line items. The charges are calculated on the basis of the customers' dunning levels.

The following is the job log when we dun the customer through transaction code F150.

Dun1.png

We can see the posted dunning charges for the first level in transaction FBL5N.

Dun2.png

This is implemented using BTE '00001071'. The enhancement details follow.

Go to transaction code FIBF, then Settings->Products->...of a customer. Suppose we create a product here named ZDUNN.

Dun3.png

 

 

Then go again to Settings->Process Modules->...of a customer and activate the BTE.

Create a function module by copying 'SAMPLE_PROCESS_00001071' to Z_EVENT_001071.

Then assign the product ZDUNN to the newly created FM. This is shown below:

Dun4.png   

Write the below code in function module Z_EVENT_001071
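Since Z_EVENT_001071 is a copy of SAMPLE_PROCESS_00001071, it inherits that module's interface; for orientation, the two parameters the body below relies on (names as they appear in the code) are:

* Interface parameters used by the code below (inherited from the
* copied sample module):
*   I_MHNK  - dunning header data of the current dunning notice
*   C_MHNGH - the dunning charge to be determined/posted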

 

DATA: l_is_documentheader TYPE bapiache09,          "Document header
      l_is_customer       TYPE bapiacar09,
      l_it_customer       TYPE TABLE OF bapiacar09, "Accounts receivable
      l_is_currencyamount TYPE bapiaccr09,
      l_it_currencyamount TYPE TABLE OF bapiaccr09, "Currency amounts
      l_is_accountgl      TYPE bapiacgl09,
      l_it_accountgl      TYPE TABLE OF bapiacgl09, "G/L account lines
      l_is_return         TYPE bapiret2,
      l_it_return         TYPE TABLE OF bapiret2,   "BAPI return messages
      l_gl_account        TYPE bapiacgl09-gl_account,
      l_item_count        TYPE bapiacgl09-itemno_acc.

CONSTANTS: l_c_x TYPE c VALUE 'X'.

"Fixed G/L account
l_gl_account = '123456789'.

"Conversion for the G/L account
CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
  EXPORTING
    input  = l_gl_account
  IMPORTING
    output = l_gl_account.

"Fill the structures of BAPI_ACC_DOCUMENT_POST
"Document header
l_is_documentheader-bus_act    = 'RFBU'.
l_is_documentheader-username   = sy-uname.
l_is_documentheader-header_txt = 'Charges'.
l_is_documentheader-comp_code  = i_mhnk-bukrs.
l_is_documentheader-doc_date   = i_mhnk-laufd.
l_is_documentheader-pstng_date = i_mhnk-laufd.
l_is_documentheader-doc_type   = 'XY'.

l_item_count = l_item_count + 1.

"Accounts receivable (customer) line
l_is_customer-itemno_acc = l_item_count.
l_is_customer-customer   = i_mhnk-kunnr.
l_is_customer-comp_code  = i_mhnk-bukrs.
APPEND l_is_customer TO l_it_customer.

"Currency amounts: the dunning charge on the customer line ...
l_is_currencyamount-itemno_acc = l_item_count.
l_is_currencyamount-currency   = i_mhnk-waers.
l_is_currencyamount-amt_doccur = c_mhngh.
APPEND l_is_currencyamount TO l_it_currencyamount.

"... and the offsetting amount on the G/L line
l_is_currencyamount-itemno_acc = l_item_count + 1.
l_is_currencyamount-currency   = i_mhnk-waers.
l_is_currencyamount-amt_doccur = c_mhngh * -1.
APPEND l_is_currencyamount TO l_it_currencyamount.

"G/L account line
l_is_accountgl-itemno_acc = l_item_count + 1.
l_is_accountgl-gl_account = l_gl_account.
l_is_accountgl-pstng_date = sy-datum.
APPEND l_is_accountgl TO l_it_accountgl.

"Document posting
CALL FUNCTION 'BAPI_ACC_DOCUMENT_POST'
  EXPORTING
    documentheader    = l_is_documentheader
  TABLES
    accountgl         = l_it_accountgl
    accountreceivable = l_it_customer
    currencyamount    = l_it_currencyamount
    return            = l_it_return.

DELETE ADJACENT DUPLICATES FROM l_it_return
  COMPARING type id number message_v1 message_v2 message_v3 message_v4.

LOOP AT l_it_return INTO l_is_return.
  IF l_is_return-type EQ 'S'.
    CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
      EXPORTING
        wait = l_c_x.
  ELSEIF l_is_return-type EQ 'E' OR l_is_return-type EQ 'A'.
    CALL FUNCTION 'BAPI_TRANSACTION_ROLLBACK'.
    "MESSAGE 'Error in posting the document' TYPE 'E'.
  ENDIF.
ENDLOOP.

In Application Performance Engineering, all phases are equally crucial, from identifying the performance test scenario to analysis and tuning. Performance improvement is normally an iterative process that continues until the product reaches a stable performance state that meets all performance standards.

     1. Identifying what to test

In this phase, if we do not identify the precise performance test scenarios, the complete cycle will focus on the wrong areas for performance tuning. In this step we need to establish the performance tuning goals and the performance baseline; the difference between the two is the performance gap that the process is targeted to eliminate.

While choosing the performance test scenario, we should also consider that different business scenarios involve different portions of the program code. If the performance test case is not related to the performance issue, the "bad" code will not be traced. So it is critical to the performance tuning process that the right performance test case is identified.

     2. Performance tracing & Measurements

In this phase, measurements and traces for the scenario are taken with the help of the performance tools. The various traces and tools available are ST30 for capturing the statistical data from STAD; ST05 for SQL, RFC, buffer, HTTP, and enqueue traces; and SE30/SAT for the ABAP trace.

While tracing any scenario, you should first execute the scenario twice or three times in order to eliminate the impact of buffering on the measurements.
If the scenario is very time consuming, you should take several traces at different times during the test execution instead of a single trace.

     3. Identifying Performance Issues & tuning opportunities

In this phase we need to analyze the performance traces to identify violations in the ABAP logic or SQL operations.
To pinpoint problems in the ABAP coding, we can analyze the SE30 ABAP trace; the major performance issues that cause high CPU time are mostly inappropriate uses of internal tables, whether in loops, reads, or sorts.

If the DB time of your application is high, we can refer to the SQL trace to get to the root cause of the issue.

3.1 Analysis Steps for DB Time Analysis

      3.1.1 Identical Selects:

Open the ST05 trace, summarize it by SQL statement, and sort the identical-selects column in descending order. All statements with an identical-selects value greater than zero are candidates for performance tuning.

1.png

There are several issues due to which identical selects occur:

     1. Buffering is not allowed on the table. The solution can be to buffer the table according to the scenario; if this is not possible, we could implement a buffer module. Also, if the same data is fetched again and again, we could append the data to an internal table on the first database access and read it from the internal table afterwards (see the sketch after this list).

     2. Buffering is allowed but bypassed; certain queries bypass the buffer and hit the database directly (refer to the list below).

2.png

     3. Buffering is allowed but the WHERE clause fetches no data, so no records are found in the buffer and the database table is accessed multiple times. The solution is to implement a "not found" buffer, which the sketch below also covers.
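A minimal sketch of the caching idea from points 1 and 3, using a material description read as an illustration (the routine and its names are made up for this example):

"Buffer rows read from the database in a static internal table and also
"remember keys that were NOT found, so the database is not hit again
FORM read_makt_cached USING    iv_matnr TYPE matnr
                      CHANGING cs_makt  TYPE makt.
  TYPES: BEGIN OF ty_cache,
           matnr     TYPE matnr,
           makt      TYPE makt,
           not_found TYPE abap_bool,  "the 'not found' buffer of point 3
         END OF ty_cache.
  STATICS st_cache TYPE HASHED TABLE OF ty_cache WITH UNIQUE KEY matnr.

  DATA ls_cache TYPE ty_cache.
  FIELD-SYMBOLS <ls_cache> TYPE ty_cache.

  READ TABLE st_cache ASSIGNING <ls_cache> WITH TABLE KEY matnr = iv_matnr.
  IF sy-subrc <> 0.
    "First access for this key: hit the database once, remember the result
    ls_cache-matnr = iv_matnr.
    SELECT SINGLE * FROM makt INTO ls_cache-makt
           WHERE matnr = iv_matnr AND spras = sy-langu.
    IF sy-subrc <> 0.
      ls_cache-not_found = abap_true.
    ENDIF.
    INSERT ls_cache INTO TABLE st_cache ASSIGNING <ls_cache>.
  ENDIF.

  IF <ls_cache>-not_found = abap_false.
    cs_makt = <ls_cache>-makt.
  ELSE.
    CLEAR cs_makt.
  ENDIF.
ENDFORM.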


3.1.2 Buffer Bypass:

Sort the BfTp column in descending order. For statements on tables with buffering switched off, this column is blank; otherwise the statements are candidates for performance optimization. Here we need to analyze why the buffers are bypassed and what can be done to avoid it.

3.png

3.1.3 Proper Indexes

Arrange the Summarized SQL statements in descending order of time/execution.


4.png

 

If the time/execution of a statement is greater than 10 ms, it can be a candidate for performance tuning. The various issues due to which this occurs:

     1. No index is present on the table that the WHERE clause can use. Solution: if possible, create a new index, or rewrite the WHERE clause so that it uses an existing index.

     2. An index is available, but the statement is written in such a way that it doesn't use the index. For example, a query with ORDER BY a, x, c against a table with an index on (a, c, x) will not use the index. Solution: rewrite the query in such a way that it uses the index.

     3. INSERT/UPDATE statements on a few tables might take a lot of time, which may be because the table has too many indexes and the time/execution also includes the time needed to update those indexes.


3.2 Analysis Steps for CPU Time Analysis

If the CPU time of the application is too high, the analysis can be done using the ABAP trace (SAT/SE30).

 

3.2.1 ABAP trace (SAT/SE30)

The SAT traces help to identify hot spots in an application. This can be done by opening the trace and sorting it in descending order by net time. Here you may encounter issues due to (see the sketch after this list):

  1. Expensive operations on internal tables
  2. Unnecessary calls to processing blocks or entire code branches
  3. Long running modularization units
  4. Nested loops
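For example, the classic nested-loop hot spot (point 4) can often be removed just by giving the inner table a suitable key. A minimal sketch with SD tables (variable names are illustrative):

DATA: lt_vbak TYPE STANDARD TABLE OF vbak,
      lt_vbap TYPE SORTED TABLE OF vbap WITH NON-UNIQUE KEY vbeln,
      ls_vbak TYPE vbak,
      ls_vbap TYPE vbap.

SELECT * FROM vbak INTO TABLE lt_vbak UP TO 100 ROWS.
IF lt_vbak IS NOT INITIAL.
  SELECT * FROM vbap INTO TABLE lt_vbap
    FOR ALL ENTRIES IN lt_vbak
    WHERE vbeln = lt_vbak-vbeln.
ENDIF.

LOOP AT lt_vbak INTO ls_vbak.
  "With a SORTED table and a WHERE condition on its key, the inner
  "loop positions via the key instead of scanning the whole table
  "for every outer row
  LOOP AT lt_vbap INTO ls_vbap WHERE vbeln = ls_vbak-vbeln.
    "process the item
  ENDLOOP.
ENDLOOP.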

 

3.2.2 Volume Scalability Single User Check

The major performance issues in an application occur when it handles large volumes of data (customer-like data volumes); such high-volume scenarios are generally difficult to simulate and to analyze for the flow of data.
However, the SAT runtime analysis tool provides an option for testing the scalability of an application without creating huge volumes of data. This is done by comparing traces of the same scenario handling different amounts of data.

 

5.png

The tool compares the statements in both traces with respect to the amount of data processed per statement/call and the net time taken by it. If both grow in the same ratio, the application is scalable; otherwise that statement/call is a candidate for performance tuning.

 

Mostly these issues are due to erroneous use of internal tables, e.g. sorting after every append, or using several different sort orders on the same internal table, which can easily be analyzed using the debugger.

 

3.2.3 Roundtrips & Network Delay

If the application makes more than two roundtrips, the CPU time as well as the response time will be high, depending on the locations of your test system and server, due to network delay.

You can analyze this in the SAT trace: the net time of the event "RFC wait" will be high and can be located in the trace, or else you can search the trace for the flush calls.

Such calls made directly by application programs should be avoided.

 

4. Performance Tuning, Re-test & Analysis


In this phase the solutions need to be implemented to improve the performance.
If testing shows that program performance after the change still falls short of expectations, you have to repeat the whole process. It is important to execute the same test scenario in the same SAP system with exactly the same data before and after the change; using a different test box just adds another variable, which makes the performance comparison harder.

SAP transaction SCI, known as the "SAP Code Inspector", can be used to scan ABAP code for common performance pitfalls such as SQL statements without a WHERE clause. This is a static performance check; its output can be part of the improvement proposal together with the findings from the trace analysis. Static code checking is relatively simple, and it is a good habit to use SCI.

I was recently debugging a sales order user exit and wanted to see data present in all the internal tables at that point.

Since all the internal tables had a header line, their record counts were not visible in the Variable Fast Display of the New Debugger.

As a result, it was inconvenient to double-click on 226 internal tables only to find that most of them were empty.

 

This blog will cover:

  1. How to filter the internal tables' names from a huge list of variables in New Debugger.
  2. How to easily see the record count of a given list of internal tables (especially the ones with a header line)

 

Scenario demonstration

A breakpoint is set in USEREXIT_READ_DOCUMENT of program MV45AFZZ and a sales order is displayed.

Navigate to Variable Fast Display > Globals tab in New Debugger.

 

 

 

The table icon in the second column indicates that the variable is an internal table.

We can't tell whether a table has any records just by looking at the Globals tab.

To see the number of records, we need to suffix the internal table names with [] (e.g. XVBAP[]).

 

 

 

Entering the names manually is time consuming, and adding the suffix is even more so.

 

 

 

Filtering internal tables from global variables list

We can first sort the globals list so that all internal tables appear together.

There is no sort button visible, but sorting can be done via the settings button on the right side.

 

 

 

Select the second column, click on the settings button, and sort.

 

 

 

The sorted list shows only 20+ items at a time.

Since there were 226 internal tables, cycling through Block selection (Ctrl+Y) and PageDown wasn't a good way to copy the names.

So I used the same settings button again to save the list to a local spreadsheet.

We now have the list of internal tables, but they need to be suffixed with [].

 

 

 

Adding suffix to a list

I used Notepad++ to select every line with the regex (.*) and replace it with \1[] to append [] to every name.

However, MS Excel can also be used to do it easily using concatenation.

Just like ABAP has CONCATENATE and the && operator, MS Excel has the & operator to join values.

Below screenshot shows the formula used to add suffix. Column A has the list, Column B has [].

Column C has the formula =A[RowNumber]&B[RowNumber], e.g. =A1&B1 in the first row, copied down.

 

 

 

If I hadn't known about the concatenation operator, I would have pasted columns A and B into Notepad and replaced the TAB character with nothing.

 

The Variables 1 tab will then be able to show the record counts for this data.

 

 

 

Unfortunately, a "paste from clipboard" option is not present in the Variables 1 tab.

The visible area allows me to display only 25 variables at once.

I copied the suffixed names in batches of 25 to see the record counts.

This method may seem lengthy, but it is better than double-clicking on 226 internal table names.

An easter egg is always a nice way to let others enjoy your programming. But placing evil hacks inside the code can be taking it slightly too far:


evil_hack.png

 

And for those who think that all this internet stuff is eternal, here comes a warning:

close_internet.jpg


Thank God there is no COMMIT after closing the internet. The last one out turns the lights off.

Calling all ABAP developers who need/want to learn JavaScript...

 

In Kevin Small's excellent blog he informed ABAP developers that:

a) It's OK to take other programming languages seriously   :-)

b) JavaScript is one such language

c) You really need to learn this language

 

So for those ABAPers who now want to take the next step and plunge into the low-level details of the language, I have revamped and released some training slides I wrote about a year ago.

 

One of the main conceptual differences ABAPers will need to understand is the fact that JavaScript is a highly dynamic language!  This requires you to think in a completely different manner about how you construct your software.  I attempt to explain the differences in a step-by-step manner, without asking you to make any leaps of understanding.

 

These slides cover the JavaScript language from the ground up and have been designed with the assumption that the reader has no prior knowledge of the language.  You will be guided in gradual steps from the simplest concepts of language syntax and data types, right up to advanced topics such as creating prototype chains and the use of the functional programming style (as opposed to the more familiar imperative programming style used by ABAP).

 

Chapter 1: Introduction

Chapter 2: Data Types

Chapter 3: Syntax

Chapter 4: Scope

Chapter 5: Functions

Chapter 6: Inheritance

Chapter 7: Functional Programming

 

Since these slides are focused only on the JavaScript language itself, they do not cover the use of JavaScript within the specific context of a browser (E.G. DOM programming and event handling are not covered); neither are JavaScript frameworks such as jQuery, Sencha or SAPUI5 covered. These subsequent topics should be addressed only after you have built a solid foundation in the language itself.  For instance, once you have gone through these slides, you will be completely ready to start SAPUI5 training.

 

All seven chapters are contained in this ZIP file in PowerPoint SlideShare format.  Because the low level details of learning a language can be rather dry, I've taken a somewhat tongue-in-cheek approach and thrown in a few amusing comments and asides just to lighten things up.  :-)

 

Unfortunately, I have had no time to create any exercises to accompany these slides; however, if you open the Chrome or Firefox browser and then open the Developer Tools, you will have access to a JavaScript console in which you can execute JavaScript commands and create simple objects.

 

Alternatively, if you're feeling somewhat more adventurous, you could install NodeJS and then have a JavaScript runtime environment that does not require a browser (server-side JavaScript).

 

Hope that helps!

 

Chris W

Hi all,

I just thought I'd write a new blog about good habits for an ABAP developer.

OK, I think most of the points apply to all developers.

At first I thought this might be a blog for all the freshers out there, but while collecting the facts and the things I think everybody should do, I realized, from all the coding I work through, that it is more than just the new developers who need to know this (again). No, I'm not saying they don't do so, but I think some of us may have to focus on such points again, so that we (me included, of course) remember them.

Ok, let us have a look at the list.

Of course, everybody is invited to add additional points in the comments. A very cool thing would be if we ended up with a document. I searched SCN and found a lot of good content, but no single big guide or wiki.

 

1st of all, know your guidelines!

Make yourself familiar with the programming guidelines given to you by your customer or company. Don't mess with the guidelines: if there are rules in them, follow them. It's not up to you to change the rules. If there might be a mistake, and yes, there will be some, report it to the person who is responsible. I'm sure that person will lend a hand and explain or change the rule.


The 2nd point is: introduce yourself to the consultants.

Make sure that you and the people giving you input talk on the same level. You can easily achieve that by repeating their concept in your own words and sending it to the consultant. It confuses me every time I see people developing things which are not what the other side imagined. Make sure that you are not the weak link in the chain!


The 3rd point: be a smart developer and do not start coding as if the world collides tomorrow.

Most of our developments will run for more than just a few months. If you have to enhance or rebuild something, make sure that you pick the correct spots. You can easily verify a spot by searching SCN: whenever I wasn't sure, most of the time I found someone else whose question had already been answered; if not, be the first to ask. By the way, talking to colleagues about things like that always helps and gives another perspective on it. Luckily I'm surrounded by some good ones.

 

 

The 4th point is: think about your development twice.

Before starting, as mentioned above, and afterward. It is not wasted time to work through your code again and add comments where comments are missing. It is not good programming style to be the only one who understands what is going on inside. You will get into trouble if you do that, for sure!

Use variables that speak to people and say things like:

"Hi, I'm Mr. Table, I store the dataset which is used for checking the orders." (OK, I'm being a bit silly at the moment, you know what I mean.)

 

5th Use good techniques inside your written code

Today it is more and more important to separate the view from the logic itself. Use the patterns and techniques people taught you. MVC is a big thing right now, and it might be more important than ever before. Which techniques to prefer and so on can't be summarized in a few sentences here, so I'm not going to try.

6th Use the Code Inspector!

This is the easiest way to check your code. Back to the 1st point: use the check variant if there is one in the system. If not, ask the author of the guidelines why there isn't one; until then, use the default variant. Better to use that one than not to use it at all.

The Code Inspector helps to make everybody's development safer and easier. Again, use this awesome feature. If you are on a newer release, the ATC (ABAP Test Cockpit) might also be available, but that would be too much for the moment.

 

7th Implement unit tests if possible

You know, unit testing might be messy work at the moment, but there is no program that is guarded against changes. Most developments live on, and the customers/users get new ideas about what they want to do with them. So the unit tests might not help you much right now, but when the object comes back, you don't need to fight with what was done earlier: just press the button and see whether all scenarios still work fine after developing the new cases with it. (That is another big story to tell; there is a lot out there, and perhaps I'll write another one someday.)


8th Make performance tests.

Don't just tell yourself "it works with my data". Think about the scenarios hitting your code in productive areas. Make sure that you tested, or better, had others test, your development in different scenarios. You are the developer and you know the critical things inside. So do not leave the testers alone; make it public and tell people what they have to focus on in your eyes.


9th Don't waste too much time with technical documentation.

Only spend time on TDs if you're going to make AWESOME ones.

Give the result to another developer and just let him look at it for 10 minutes. It should give you a feeling of whether what you are doing is understandable. It doesn't matter if he understands everything, but the feeling should be there. When writing the technical documentation, remember this blog:

Stop writing Technical Documentation nobody will ever read

We discussed it in the office, and yes, he is absolutely right.

 

As usual, a question in the end:

Who remembers when they last worked through their company's guidelines? How often is that document updated? Is there also a style guide included?

 

Thanks for reading to the end. If I'm wrong about a point, let me know and I will rethink it.

Regards

Florian

 

PS: It just popped into my mind, so I added the Star Wars picture *haha*


18.02.14: Updated point number 9 based on the comments. I agree with Mauricio Cruz and Bruno Esperanca that the revised description gives a better clue about what is meant.

Recently I came across a requirement to send the output of a report developed with SALV as an Excel attachment. To achieve this we can use the method TO_XML of class CL_SALV_TABLE, which returns the formatted output content as an XSTRING value.

 

Here are the steps.

 

a. Variable Declaration.

scn_blog_alv_decl02.png

b. Data Selection/ALV Customizing Calls.

 

scn_blog_alv_call02.png

scn_blog_alv_call03.png

scn_blog_alv_call04.png

 

c. Call to Convert the ALV Output to the Internal XML Format.

scn_blog_alv_call05.png

 

d. E-Mail Data Declaration.

scn_blog_alv_call06.png

e. E-Mail Content Conversion/Body/Attachment Creation.

scn_blog_alv_call07.png

f. Send E-Mail.

 

scn_blog_alv_call08.png

 

 

Excel Output

scn_blog_alv_call09.png

 

We can also use the TO_XML method to download the content as XLS or XML documents in ABAP Web Dynpro based applications.

 

Here is the sample source code for this approach.
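Since the attached source itself is not reproduced here, the following is a minimal sketch of the core calls under the same approach; the demo table SFLIGHT, the attachment type, the subject, and the recipient address are placeholders, the IF_SALV_BS_XML=>C_TYPE_XLSX constant depends on your release, and error handling is trimmed:

DATA: lo_salv    TYPE REF TO cl_salv_table,
      lt_sflight TYPE STANDARD TABLE OF sflight,
      lv_xml     TYPE xstring,
      lo_bcs     TYPE REF TO cl_bcs,
      lo_doc     TYPE REF TO cl_document_bcs.

SELECT * FROM sflight INTO TABLE lt_sflight UP TO 50 ROWS.

TRY.
    "Build the SALV object for the data to be mailed
    cl_salv_table=>factory( IMPORTING r_salv_table = lo_salv
                            CHANGING  t_table      = lt_sflight ).

    "Render the ALV output; TO_XML returns it as an XSTRING
    lv_xml = lo_salv->to_xml( xml_type = if_salv_bs_xml=>c_type_xlsx ).

    "Wrap the XSTRING as an attachment and send it via BCS
    lo_doc = cl_document_bcs=>create_document(
               i_type    = 'RAW'
               i_text    = cl_bcs_convert=>string_to_soli( `See attachment` )
               i_subject = 'ALV output' ).
    lo_doc->add_attachment(
      i_attachment_type    = 'XLS'
      i_attachment_subject = 'output'
      i_att_content_hex    = cl_bcs_convert=>xstring_to_solix( lv_xml ) ).

    lo_bcs = cl_bcs=>create_persistent( ).
    lo_bcs->set_document( lo_doc ).
    lo_bcs->add_recipient(
      cl_cam_address_bcs=>create_internet_address( 'user@example.com' ) ).
    lo_bcs->send( ).
    COMMIT WORK.
  CATCH cx_salv_msg cx_bcs.
    "handle/log the error
ENDTRY.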

Hi SCN community,

 

This blog post is a logical follow-up to this blog post of mine, where I shared my design for a region-specific implementation framework using ABAP OO and the factory pattern strategy design (what a fancy name!). From the discussion that followed, the idea came up to try to make use of the BAdI technology available in SAP, and this is my proposal for a design using it!

 

This proposal is based on a lot of other blog posts, like this one by Andrea Olivieri, this one by Thomas Weiss, and many others. Check my bookmarks; I usually bookmark interesting stuff. These blogs show a more or less static view of BAdIs. What I found out while designing and implementing this approach is that the BAdI technology is actually amazingly flexible and powerful. Hopefully you'll share my point of view after reading this post.

 

Thanks to Debopriyo Mallick and Suhas Saha and their valuable contributions in the comments I have revised my design and, consequently, this blog post. I believe the design is now one step closer to that "idyllic", probably non-existent, perfect solution. By the way, if you're looking for an interesting way to activate/deactivate several BAdI implementations at once on different levels you should check this blog post that Suhas shared in the comments.

 

The premise

 

The premise remains pretty much the same as before, so I'm not going to go into it again in great detail. If you're working in a system shared internationally, or by different regions, eventually you'll end up with user-exits containing so many IF and CASE statements, shared by so many developers, that you will definitely want a framework in place to cope with the specific requirements of each region/country/sales organization/whatever.

 

In my previous blog, the solution presented doesn't have any flaws "per se" (none that I can think of), but it does represent a very strict and formal solution. If you want more flexibility and freedom while retaining the same advantages, I think this BAdI design might be the way to go. It also doesn't feel like you're reinventing the wheel.

 

Let's get to it! I'm going to showcase this design with the user-exit INCLUDE MV45AFZZ. Anyone that has ever developed anything for SD should probably know this user-exit, so I think it's the perfect candidate.

 

Oh, by the way, this is NOT meant to teach you how to create a BAdI. If you are unfamiliar with BAdIs, please take a look into the blog posts I mentioned above.

 

Let's go!

 

We start by creating the BAdI definition. The multiple use setting is arguable; for this example I will leave it on, and I guess this would be pretty much the standard unless you want to make sure that ONLY ONE implementation of the BAdI is executed. Keep in mind that when using this option you cannot have parameters defined as exporting/returning, as this would go against the idea behind it (read the comments for a more detailed explanation, or press F1 on it). Developers implementing this (or any multiple use) BAdI should pay attention to the fact that other implementations could also be executed and affect variables they are trying to determine, so proper documentation and descriptions of the BAdIs are not a bad idea.

Also, in the first version of this blog post I had defined one BAdI for the entire MV45AFZZ include, with one method per routine. After Debopriyo pointed out, rightfully so, that this would mean many unimplemented methods in the BAdI implementations, I decided to revise this and have one BAdI definition per routine; I agree this makes more sense.

Now, nabheet madan asked in the comments whether it would also be a good idea to define one BAdI per method when applying this design to a standard BAdI. Quite honestly, I'm not sure; I guess that depends on the BAdI. If the BAdI has methods that are related to each other and should all be implemented, one BAdI definition with the same interface as the standard BAdI makes more sense. Otherwise, to avoid having several unimplemented methods, it might not be a bad idea to define one BAdI per method. If you have something to say about this, leave a comment below!

 

Picture1.png

Figure 1 - BAdI definition

 

Now, in my situation we have different clients per region, so the client filter will certainly come in very handy. So I set it up straight from the start.

 

Picture2.png

Figure 2 - BAdI filter definition

 

In the discussion in the comments it was also pointed out that a BAdI with many parameters in its interface is a poorly designed BAdI, and this could be a problem in user-exits from include MV45AFZZ, since there is no formally defined interface. To "solve" this problem, I thought of defining one structure for the BAdI's interface, like this.

 

Picture3.png

Figure 3 - Defining a structure to use in the BAdI's interface

 

In this case I've already defined the internal item table XVBAP, but if you don't think you're going to need anything special for now, you can just declare some dummy field, or not create this structure at all and create it only when you need it. It will not cause you any pain afterwards, even if you already have BAdI implementations created, as they will simply not use the newly created parameters. So, after revising the design thanks to the discussion in the comments, the BAdI interface now has only one method (in this example I'm showing the method for USEREXIT_MOVE_FIELD_TO_VBAK).

 

Picture4.png

Picture5.png

Figure 4 - BAdI method definition

 

And that's it for the BAdI definition (for now)! It wasn't that difficult. Now all you have to do is call it from your routine.

 

Picture6.png

Figure 5 - Calling the BAdI from your user-exit
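For reference, the call in the figure boils down to something like the following; every name here (the BAdI definition ZBADI_SD_VBAK, the filter MANDT, the method, and the structure) belongs to this example, not to any standard object:

DATA: lo_badi TYPE REF TO zbadi_sd_vbak,   "the BAdI definition
      ls_data TYPE zsd_badi_parameters.    "the interface structure

"Fill the interface structure with whatever the routine can offer
ls_data-xvbap = xvbap[].

"Instantiate all implementations whose filter matches the client
GET BADI lo_badi
  FILTERS
    mandt = sy-mandt.

"Call every matching implementation (multiple use BAdI)
CALL BADI lo_badi->move_field_to_vbak
  CHANGING
    cs_data = ls_data.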

 

Ok so now we implement it! For this implementation, the requirement is specific to client 077. Couldn't be easier.

 

Picture7.png

Figure 6 - Implementing a BAdI with a filter value

 

The rest of it is standard, yes? All you have to do is implement the method with the requirement you want. Personally I think the best approach is one implementation per requirement. The major advantage is total independence per requirement: you can have one developer per requirement working at the same time on as many requirements as you'd like, with no problems with object locking. The disadvantage could be low awareness between developments, meaning that a developer implementing a new requirement should take a look at the already existing implementations to check that there will not be conflicting implementations (like a field being overwritten or something).

 

Now comes the really interesting part for me. What if a new requirement comes along which needs a new parameter? At first glance I would think this would mean a lot of trouble: going through every implementation and adjusting it. Well, not really; at least I could do it without much trouble (but you'd better not change the already existing parameters, just add new ones)! You change the structure we created earlier and add your new parameters:

 

Picture8.png

Figure 7 - Adjusting the BAdI's interface

 

Now we need to populate this variable in the code and that's it. The interface will update every implementation's method signature, and if the parameters aren't used... well... they're not used, no problem there.

 

Picture9.png

Figure 8 - Adjusted BAdI call

 

You can now implement your second requirement easily. So, we now have two BAdI implementations, filtered to be executed only for client 077. What if we now get a third requirement which is to be executed globally, a core requirement? That also couldn't be easier: we implement a filterless BAdI!

 

Picture10.png

Figure 9 - Implementing a filterless BAdI

 

The rest you already know. What happens at runtime? At runtime, regardless of which client you are in, the filterless BAdI is executed, and if you happen to be in client 077, the previous BAdIs get executed as well. As you should already know, there is no guarantee as to which BAdI runs first, so make sure one implementation does not rely on a result from another implementation, and also try not to change the same fields, because only one value will prevail and you have no idea which one. This means you can't implement default values in the "core" implementation and hope that the specific implementations will prevail; you have to implement this accordingly.

Actually, as Sougata Chatterjee pointed out in the comments, there is a way to sort BAdIs in the new Enhancement Framework: you implement BAdI "BADI_SORTER" (you can find the documentation on this here). Personally, I would try to avoid this. Even though it could be interesting to have a BAdI implementation for a global requirement executed first, and then the local/specific implementations executed afterwards overriding it, I think this adds too much complexity and could be hard to maintain. I would rather have a requirement be either global or specific, and if it's not global, each region has to implement it its own way. But now I know (and you know) the option is available. There's no such thing as too much flexibility, is there?

 

Ok, last but not least, let's say that even though we have successfully separated implementations per client, we're still getting many conflicts, and we want a new filter per sales organization. That's also not a problem: we change the BAdI's definition! This is mostly relevant if you are implementing multiple requirements in the same BAdI implementation, but I'll keep it here for educational purposes.

 

Picture11.png

Figure 10 - Adding a filter to the BAdI's definition

 

The existing implementations will not care! They will keep being executed as long as the values for their filters match. How cool is that? But of course, you will have to adjust the BAdI's call, otherwise you'll get a nasty runtime error!

 

Picture12.png

Figure 11 - Adjusting the BAdI's instantiation for the new filter

 

Done properly, this will give you a nice overview of the enhancements and requirements implemented in your system. You can also use the search feature in the enhancement spot implementation's "Implemented BAdIs" tab to search by filter value, which is nice.

 

Picture13.png

Figure 12 - Enhancements overview

 

 

Picture14.png

Figure 13 - Checking BAdI implementations for a certain filter value combination

 

Conclusions

 

That's it from me! I'll admit, I think this approach is very elegant and powerful. Some care must be taken to make sure there are no conflicting implementations, but I don't think there's any way you can avoid that risk.

 

Let me know what you think and what you would do differently!

 

All my best,

Bruno

The IDoc WPUWBW for goods movements has the basic type WPUWBW01, which consists of several segments:



 

 

 

As you can see in the image, the IDoc has three segments:

 

  • E1WPG01: header segment, which holds the transaction information (IT_TRANSACTION)
  • E1WPG02: item segment, which holds the goods movement information (TRANSACTION-GOODSMOVEMENT)
  • E1WXX01: segment used to map extra information (customer enhancement)

 

 

In my POSDM system the GOODSMOVEMENT information is not mapped, so I have to fill it from the RETAILLINEITEM information. For this purpose I need to implement the /POSDW/TASK BAdI, and in the CALL method of my implementation I receive these parameters:

 

 

   

 

 

Furthermore, in this method I call the FM /POSDW/IDOC_OUTPUT_WPUWBW, which creates the IDoc.

 

 

 

 

At this point the IDoc is generated, and for each transaction we can see its segments (one E1WPG01 header and one or more E1WPG02 segments with the goods movement information) in the IDoc. If you need to add some enhancement data to the IDoc, you will need to fill the GOODSMOVEMENT-EXTENSIONS table in the CALL method that I showed before. This table has three main fields:

 

 

 

 

 

And the E1WXX01 segment has the same fields but with other names:

 

 

 

 

For each EXTENSIONS entry that you map in the CALL method there will be a E1WXX01 segment in the IDoc.
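As an illustration only, filling one entry inside the CALL method could look like the sketch below; LS_GOODSMOVEMENT stands for the goods movement line being processed, and the component names of the extensions structure are assumptions, so check /POSDW/EXTENSIONS in your release:

DATA ls_extension TYPE /posdw/extensions.

ls_extension-fieldgroup = '01'.        "grouping of the extra fields (assumed name)
ls_extension-fieldname  = 'ZZREASON'.  "hypothetical custom field
ls_extension-value      = 'DAMAGED'.   "value that ends up in the E1WXX01 segment
APPEND ls_extension TO ls_goodsmovement-extensions.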

 

 

For example, I have mapped one EXTENSIONS entry in the GOODSMOVEMENT-EXTENSIONS table:

 

 

 

 

The result of my IDoc is:

 

 

 

 

 

 

 

In addition, if you need to make some extra modifications to the IDoc information, you can implement the /POSDW/IDOCOUTPUT BAdI to modify the EDIDC or EDIDD data. The CALL method of this BAdI receives as a parameter the CT_EDIDD table, which contains all segments that have been created for all transactions.

 

 

 


ABAP Memory Inspectors

Posted by Jayashree Desai Feb 12, 2014

The amount of memory an ABAP program consumes depends on the amount of data being processed, which is typically stored in some type of in-memory structure (such as internal tables) that grows dynamically to accommodate the stored data. If the amount of data to be loaded into system memory exceeds the size of the available storage area, the program terminates, possibly with runtime errors such as SYSTEM_NO_ROLL or TSV_TNEW_PAGE_ALLOC_FAILED.

 

Because these errors can arise for a variety of reasons, the root cause may not be immediately obvious. For example, the runtime error TSV_TNEW_PAGE_ALLOC_FAILED occurs when the system can't increase the size of an internal table due to insufficient available memory; however, this internal table might not be the reason why memory is exhausted. What you need in this situation is a tool that helps you determine the real reason the application ran out of memory.

 

The ABAP Memory Inspector provides you with an overview of dynamically allocated data (that is, all dynamic in-memory structures) at a particular time, which can be very helpful for diagnosing memory consumption problems, as well as a specialized transaction for analyzing this data.

 

Using the ABAP Memory Inspector

 

Analyzing the memory consumption of an application typically involves two types of scenarios:

 

  • You’re interested in the current memory consumption of a running program in order to check if it is unexpectedly high.
  • You want to compare the memory consumption of a program at different times in order to find out if it increases in an undesirable way and to identify which memory objects contribute to the increase.

 

 

Creating the Memory snapshots

 

There are several ways to create a memory snapshot:

  • When debugging an application, select Development -> Memory Analysis -> Create Memory Snapshot from the ABAP Debugger menu bar. A completion message indicates when the file is ready.
  • Enter the command /hmusa in the command field on any SAP GUI screen (you don't need to be in the debugger). When the file is ready, you will see the same completion message as for the previous option.

 

Analyzing and Comparing Memory Snapshots

 

The ABAP Memory Inspector provides a dedicated transaction for displaying the content of stored memory snapshots.

 

You start this transaction via transaction code S_MEMORY_INSPECTOR, or via Memory Analysis -> Compare Memory Snapshots from the menu bar.

 

 

For Example:

Let's say you want to analyze the memory consumption of some program (demo program ZTEST_MEMORY).

Run the report ZTEST_MEMORY.
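(The report's source is not shown here; a minimal stand-in that simply allocates a large internal table, so that there is something to see in the snapshot, could look like this:)

REPORT ztest_memory.

TYPES ty_buffer TYPE c LENGTH 1000.
DATA: gt_buffer TYPE STANDARD TABLE OF ty_buffer,
      gv_lines  TYPE i.

"Allocate roughly 100 MB so the table shows up prominently
"in the memory snapshot
DO 100000 TIMES.
  APPEND INITIAL LINE TO gt_buffer.
ENDDO.

gv_lines = lines( gt_buffer ).
WRITE: / 'Table filled with', gv_lines,
         'rows - now enter /hmusa in the command field.'.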


Memory.png

 

You will get the output as below.


Memory1.png

Now enter /hmusa in the command field.


memory2.png

A message will be displayed at the bottom indicating that the memory snapshot has been created.


Memory3.png

To view the memory snapshot, go to transaction S_MEMORY_INSPECTOR.

 

memory4.png

Double-click on an entry in the list to open a snapshot and display its contents in the lower part of the screen. You can have up to two snapshots open at a time; opening a third automatically closes one of the others based on creation time. The first opened snapshot is referred to as (t_0), the second as (t_1). To select an open snapshot for display, use the (t_0) and (t_1) buttons at the top of the screen, or use the Memory Snapshot dropdown list above the display tree in the lower part of the screen.

 

 

memory5.png

Hi everybody, in this blog post I try to explain transactional RFC, common issues with tRFC, and how to troubleshoot them. The information below was gathered from various SCN discussions while solving tRFC issues reported to me from the customer side, and I thought I would share it as a blog post.

 

The blog post contains,

 

  1. Transactional RFC
  2. tRFC process flow diagram
  3. Common issues and trouble shooting
  4. Important transaction codes
  5. SAP Notes

 

Transactional RFC

 

Remote Function Call (RFC) is the standard SAP interface for communication between SAP systems. RFC calls a function to be executed in a remote system.

Transactional RFC is an asynchronous communication method that executes the called function module exactly once in the RFC server. The remote system need not be available at the time the RFC client program executes a tRFC. The tRFC component stores the called RFC function, together with the corresponding data, in the SAP database under a unique transaction ID (TID). We can use the function module ID_OF_BACKGROUNDTASK to retrieve the TID.

 

If the target system is down, the call remains in the local queue of the source system until a later time. The calling program can proceed without waiting to see whether the remote call was successful. If the target system does not become active within a certain amount of time, the call is scheduled to run in batch.

 

Transactional RFC calls use the addition IN BACKGROUND TASK:

CALL FUNCTION 'function module name' IN BACKGROUND TASK DESTINATION 'destination name'.

 

As with synchronous calls, the DESTINATION parameter defines a program context in the remote system. As a result, if you call a function repeatedly (or different functions once) at the same destination, the global data for the called functions may be accessed within the same context.

 

The system logs the remote call request in the database tables ARFCSSTATE and ARFCSDATA with all of its parameter values. You can display the log file using transaction SM58. When the calling program reaches a COMMIT WORK, the remote call is forwarded to the target system.

All tRFCs with a single destination that occur between one COMMIT WORK and the next belong to a single logical unit of work (LUW).
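In code, recording and releasing a tRFC LUW looks like this (the function module Z_UPDATE_REMOTE and the destination name are placeholders):

DATA lv_tid TYPE arfctid.

"Record the call; nothing is sent yet, the request is stored in
"ARFCSSTATE/ARFCSDATA under a new TID
CALL FUNCTION 'Z_UPDATE_REMOTE'
  IN BACKGROUND TASK
  DESTINATION 'TARGET_SYSTEM'
  EXPORTING
    iv_key = '4711'.

"Optional: read the TID of the LUW currently being recorded
CALL FUNCTION 'ID_OF_BACKGROUNDTASK'
  IMPORTING
    taskid = lv_tid.

"Only now is the LUW closed and handed over for transfer
COMMIT WORK.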

 

Disadvantages of tRFC

 

• tRFC processes all LUWs independently of one another. Due to the number of activated tRFC processes, this procedure can significantly reduce performance in both the sending and the target system.

• In addition, the sequence of LUWs defined in the application cannot be kept. It is therefore impossible to guarantee that the transactions will be executed in the sequence dictated by the application; the only guarantee is that all LUWs are transferred sooner or later.

 

tRFC process flow diagram

 

process1.jpg

 

process2.jpg

 

Common issues in tRFC queue

 

1. SM58 with status "Transaction Recorded"

'Transaction recorded' is the status when the SM58 entry has been triggered for execution at the target but no work process is available to process the request.

Check SMQS to see whether the destination (CL3RCV003 in this example) is registered in the outbound scheduler for tRFC processing.

 

pic-3.jpg

 

Look at the Type field; it shows 'R' for registered.

 

pic-4.jpg

If entries remain in SM58 with status "Transaction recorded" and the destination is registered on the outbound scheduler for tRFC processing, the only way to speed up the processing of these entries is to increase the "Max. conn." value for that destination in SMQS. If the destination is not registered in SMQS for tRFC processing, the entries in SM58 can be reprocessed by scheduling report RSARFCEX.

 

The number of maximum connections can be seen in SMQS.

 

pic-5.jpg

Destination CL3RCV003 is registered (Type "R") on the outbound scheduler. The "Max. conn." value is 1, which means that at most one dialog work process is used for this destination; this may cause a bottleneck, so the number can be increased.

 

To do this, highlight the destination and choose "Edit" and "Registration":

 

pic-6.jpg

 

If you increase the "Max. conn." value, check that there are enough resources available. To do this from SMQS, choose "Goto" in the menu and then "qRFC Resources":

 

pic-7.jpg

This "Transaction recorded" issue usually happens with IDoc processing and BW loads.

 

2. SM58 with status "Transaction Executing"

 

'Transaction executing' is the status when the SM58 entry is triggered for execution at the target and the source system is waiting for a response from the target system. This status can occur when connecting to another R/3 system or connecting with an external program.

 

Check in the target system whether there are still running processes (SM66) for the destination user (the user you set up in transaction SM59 on the source system for logging on to the target system). This user can be found in the "Logon & Security" tab of the RFC destination used.

 

pic-8.jpg

 

 

 

If nothing is running in the target system that corresponds to these SM58 entries in the source system, it is possible that network connectivity was lost.

pic-9.jpg

 

3. SM58 with status TSV_TNEW_PAGE_ALLOC_FAILED

 

This issue can happen when applications register a huge number of tRFC calls in the queue under the same TID; upon commit, when the standard program tries to fetch the entries from ARFCSDATA in order to execute the registrations, the available memory can be exhausted.

In SMQ1 you can choose "qRFC" in the menu and then "Reorganize"; this deletes ALL queues in SMQ1. If you want to delete only selected queues, choose "Edit" in the menu and then "Delete selected objects".

 

 

1. Deleting all the queue entries

 

pic-10.jpg

2. Deleting selected entries

 

pic-11.jpg

 

Check and enhance the calling program in the source system to prevent a large number of registrations in the queue; for example, restrict the entries to a particular number and then trigger the commit.

 

Important transaction codes

 

 

SMQ1  - qRFC Monitor (Outbound Queue)
SMQS  - qRFC Monitor (QOUT Scheduler)
SMQR  - qRFC Monitor (QIN Scheduler)
SM66  - Global Work Process Overview
SMQ2  - qRFC Monitor (Inbound Queue)
SARFC - Server Resources
RZ11  - Maintain Profile Parameters
RZ12  - RFC Server Group Maintenance

 

SAP Notes

 

527481  - tRFC or qRFC calls are not processed
1051445 - qRFC scheduler does not use all available resources
532918  - "RFC trace generation scenarios", section 2 "Communication from ABAP to an external program"
1403974 - Determining the maximum connections in transaction
1623430 - Outbound queue scheduler does not process all LUWs

Dynamic table creation using RTTS.

 

I was reading the blog published by Pieter Lemaire (Dynamic tables in ALV with RTTI), where he explains some dynamic functionality using RTTS. I thought the way he creates dynamic internal tables using RTTS could be achieved in a much simpler way, so I wrote this blog to show how easily we can generate dynamic internal tables and work areas from user definitions using RTTS. Please see the code below.

 

PARAMETERS: p_table(30) TYPE c.

DATA: it_component TYPE abap_component_tab,
      wa_component TYPE abap_componentdescr,
      o_ref_type   TYPE REF TO cl_abap_typedescr,
      o_ref_struct TYPE REF TO cl_abap_structdescr,
      o_ref_table  TYPE REF TO cl_abap_tabledescr,
      o_table      TYPE REF TO data,
      o_workarea   TYPE REF TO data.

FIELD-SYMBOLS: <fs>       TYPE any,
               <fs_table> TYPE ANY TABLE.

* Get the type description for the name entered on the selection screen
cl_abap_typedescr=>describe_by_name(
  EXPORTING
    p_name         = p_table
  RECEIVING
    p_descr_ref    = o_ref_type
  EXCEPTIONS
    type_not_found = 1 ).

CHECK o_ref_type IS BOUND.
o_ref_struct ?= o_ref_type.

* Get the components of the structure
it_component[] = o_ref_struct->get_components( ).

IF it_component[] IS NOT INITIAL.
  CLEAR o_ref_struct.

* Factory method: build a structure description from the components
  o_ref_struct = cl_abap_structdescr=>create( it_component ).

  CHECK o_ref_struct IS BOUND.
  CREATE DATA o_workarea TYPE HANDLE o_ref_struct.
  ASSIGN o_workarea->* TO <fs>.

* Factory method: build a standard table description from the line type
  o_ref_table = cl_abap_tabledescr=>create(
    p_line_type  = o_ref_struct
    p_table_kind = cl_abap_tabledescr=>tablekind_std ).

  CHECK o_ref_table IS BOUND.
  CREATE DATA o_table TYPE HANDLE o_ref_table.
  ASSIGN o_table->* TO <fs_table>.
ENDIF.
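Once <fs_table> is assigned, it can be used generically; for example, if p_table names a transparent table, it can be filled with a dynamic SELECT:

"Read a few rows from the table whose name the user entered and
"loop over them via the generically typed field symbols
SELECT * FROM (p_table) INTO TABLE <fs_table> UP TO 10 ROWS.

LOOP AT <fs_table> ASSIGNING <fs>.
  "each row is typed at runtime by the generated structure
ENDLOOP.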



I faced a scenario where I had an ALV grid where all columns were of the same generic type (let's make it simple) CHAR255. Each column has its name in field catalog in format TABNAME-FIELDNAME (+ more human readable column header texts of course).

What I needed to achieve was to validate the data that the user entered in the ALV cells. Since cell validation is not available by default (because of the generic CHAR255 type), I had to do it dynamically.

In this article I'd like to share my solution.

 

 

If you know the Data Dictionary table name and field name, you can get the search help name (if one exists) by calling the FM F4IF_DETERMINE_SEARCHHELP.

 

 

If this module returns valid data, you can use it to call a second FM, F4IF_SELECT_VALUES, which returns an internal table with the values normally displayed when the search help is triggered.

 

 

Generally the second FM can return an enormous number of results, so it's wise to limit the search with a filter (filtering on the one and only value which was entered by the user).

 

 

If the second FM returns any result, it means the value is accepted as valid.

If no result is returned, it means the value entered by the user is not valid for the given field.

 

 

Now let's take a look at how exactly this can be implemented:

 

DATA:
* Table and field name you get during runtime
  g_tabname TYPE dfies-tabname,
  g_fieldname TYPE dfies-fieldname,

* Search help helper variables
  gs_shlp TYPE shlp_descr,
  gt_shlp_tab TYPE TABLE OF shlp_descr, "expanded search helps, filled by DD_SHLP_EXPAND_HELPMETHOD
  gt_allowed_values TYPE TABLE OF ddshretval.

* Constants used for testing
CONSTANTS:
  gc_test_werks_ok    TYPE werks_d VALUE '2021',
  gc_test_werks_error TYPE werks_d VALUE '6058'.

FIELD-SYMBOLS:
  <fs_selopt> TYPE ddshselopt.

* We are testing against MARC table and its field WERKS
g_tabname   = 'MARC'.
g_fieldname = 'WERKS'.

* Get the search help if it exists/is defined
CALL FUNCTION 'F4IF_DETERMINE_SEARCHHELP'
  EXPORTING
    tabname           = g_tabname
    fieldname         = g_fieldname
  IMPORTING
    shlp              = gs_shlp
  EXCEPTIONS
    field_not_found   = 1
    no_help_for_field = 2
    inconsistent_help = 3
    OTHERS            = 4.
IF sy-subrc = 0.
* Check if its a collective search help - in this case pick first one from list of included search helps
  CALL FUNCTION 'DD_SHLP_EXPAND_HELPMETHOD'
    EXPORTING
      shlp_top = gs_shlp
    IMPORTING
      shlp_tab = gt_shlp_tab.

  CLEAR gs_shlp.
  CHECK gt_shlp_tab[] IS NOT INITIAL.

  READ TABLE gt_shlp_tab INDEX 1 INTO gs_shlp.

* Test with correct plant
  APPEND INITIAL LINE TO gs_shlp-selopt ASSIGNING <fs_selopt>.
  <fs_selopt>-shlpname = gs_shlp-shlpname.
  <fs_selopt>-shlpfield = g_fieldname.
  <fs_selopt>-sign = 'I'.
  <fs_selopt>-option = 'EQ'.
  <fs_selopt>-low = gc_test_werks_ok.

  CLEAR gt_allowed_values[].
* Collect values from search help filtered
* by the plant user entered
  CALL FUNCTION 'F4IF_SELECT_VALUES'
    EXPORTING
      shlp           = gs_shlp
      call_shlp_exit = 'X'
    TABLES
      return_tab     = gt_allowed_values.
  IF gt_allowed_values[] IS INITIAL.
    WRITE:/ ' Plant ', gc_test_werks_ok, ' is not valid'.
  ELSE.
    WRITE:/ ' Plant ', gc_test_werks_ok, ' is OK'.
  ENDIF.

* Test with invalid plant
  CLEAR gs_shlp-selopt[].
  APPEND INITIAL LINE TO gs_shlp-selopt ASSIGNING <fs_selopt>.
  <fs_selopt>-shlpname = gs_shlp-shlpname.
  <fs_selopt>-shlpfield = g_fieldname.
  <fs_selopt>-sign = 'I'.
  <fs_selopt>-option = 'EQ'.
  <fs_selopt>-low = gc_test_werks_error.

  CLEAR gt_allowed_values[].
  CALL FUNCTION 'F4IF_SELECT_VALUES'
    EXPORTING
      shlp           = gs_shlp
*     call_shlp_exit = 'X'
    TABLES
      return_tab     = gt_allowed_values.
  IF gt_allowed_values[] IS INITIAL.
    WRITE:/ ' Plant ', gc_test_werks_error, ' is not valid'.
  ELSE.
    WRITE:/ ' Plant ', gc_test_werks_error, ' is OK'.
  ENDIF.
ENDIF.

 

 

The output (depending on the data in your system) will look like the picture below:

 

Validation using search help

 

 

 

The original post is on my blog at oprsteny.com


Step 1: Create a new form interface in transaction SFP

clipboard1.png

Click the "Interface" tab and add a new importing attribute QRCODE_INPUT of type string to the form interface. This attribute holds the content entered by the end user, which will be used to generate the QR code. Activate the interface.

clipboard2.png

Step 2: Create a new form template in transaction SFP


Specify the interface you created in step 1 as its interface:

clipboard3.png

In Context tab, drag the attribute QRCODE_INPUT to form Context:

clipboard4.png

Click the "Layout" tab and drag a QR Code control from the Adobe Form Designer object library:

clipboard5.png

Specify its data binding to the context attribute that we dragged in from the form interface:

clipboard6.png

Activate the form template.

 

 

Step 3: Create a new ABAP Web Dynpro application

It has a text edit where the end user can enter a string that will be used to generate the QR code, a button to trigger the PDF generation, and an interactive form element to display the rendered PDF with the QR code.

 

clipboard7.png

Select the interactive form element, maintain the template source with the ZPF_GRCODE template we created in step 2, and choose Yes to let the framework generate the necessary context for us:

clipboard6.png

Bind the text edit to the automatically generated context attribute:

clipboard7.png

Test

 

 

Type some test string and click the Generate button; the generated QR code is displayed in the interactive form element.

clipboard8.png

And I can use the QR code scanner installed on my cellphone to parse the QR code successfully.

clipboard9.png
