R. Bailey

LSMW read ahead technique

Posted by R. Bailey May 5, 2015


Although SAP currently promotes “best practice data migration” using Business Objects Data Services, LSMW is still the tool of choice for many projects.

LSMW is free and simple to use and handles many things more or less automatically.


In this and subsequent blogs I want to share some of my experiences from recent projects. I am not a programmer so any ABAP code that I show will not necessarily be the best that could be used. However, data migration is a one-off so it usually isn’t important if the code is beautiful and efficient (unless of course you have very large data volumes). I am assuming that you are already familiar with LSMW. If you are a beginner then there are plenty of other SCN posts that will help you.


One of the advantages of LSMW can sometimes be a disadvantage. LSMW controls the flow of data in the program generated in the convert step. It reads the input records, applies the conversion logic, writes (for each segment if there are multiple segments) a record to the output buffer and, after processing all records relating to a transaction, it writes the transaction. However, sometimes you would like to know the content of the next input record and use this information while processing the current record. Unfortunately when LSMW reads a record, the previous record has already been written and is no longer available. This can be solved by using a “read ahead” technique.
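The idea can be pictured with a small, language-neutral sketch (written here in Python, since the LSMW convert program itself is generated ABAP). The `read_ahead` helper and the record layout are purely illustrative, not LSMW identifiers:

```python
def read_ahead(records):
    """Yield (current, next_record_or_None) so the caller sees one record ahead."""
    prev = None
    for rec in records:
        if prev is not None:
            yield prev, rec      # prev can now be processed knowing its successor
        prev = rec
    if prev is not None:
        yield prev, None         # last record: no successor (end of file)

# Example: flag the last item of each group, i.e. a key change or end of file
rows = [("1000", 10), ("1000", 20), ("2000", 30)]   # (document key, amount)
last_of_group = [nxt is None or cur[0] != nxt[0] for cur, nxt in read_ahead(rows)]
# last_of_group == [False, True, True]
```

The point is simply that a record is held back until its successor is visible, which is exactly what LSMW's default flow does not allow.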


Use cases

Here are some examples of where a read ahead technique might be used:

  • Processing GL bookings with RFBIBL00 using flat file input. Normally a header record is followed by item records, so LSMW detects each new header automatically; with a flat file you only know that a new document starts when the key changes on the next record.
  • Processing WBS elements where operational indicators should be set on the lowest level. If the depth of the WBS structure is not fixed then you only know you reached the lowest level when you read the next record.
  • Processing vendor records from a legacy system where there are multiple records per vendor and you need to process all of the records before writing an output record.

All of the above occurred in a recent project of mine. I’ll now explain the technique using the RFBIBL00 example.

Worked Example

If you want to process GL bookings, AR open items or AP open items then SAP provides the standard batch input program RFBIBL00, which you can select in the LSMW object attributes (standard batch/direct input):




For the transfer of opening balances in our project the input file provided from the legacy system was a flat file containing a limited number of fields. The Oracle Balancing Segment in the input file is used to determine a Profit Centre. The input account is actually an Oracle GL account which is converted using a lookup table in LSMW.



The input file is sorted by Company Code, Currency Key and Oracle Balancing Segment. A separate GL document is written for each combination of these values. The document is balanced by a booking to an offset account. If the balances have been loaded correctly then the balance of the offset account will of course be zero. During testing the GL conversion table was incomplete so some code was added to allow processing even if some input records were invalid – in this case the offset account will have a balance but we can see what is processed.


The structure relations are as you would expect:



With a flat input file we need to determine for ourselves when the key has changed and we will only know this when we read the next record. Therefore we change the flow of control in the LSMW program so that we can "read ahead" to the next record.


LSMW normally writes an output record in the END_OF_RECORD block and a transaction in the END_OF_TRANSACTION block. With the read ahead technique we do this in the BEGIN_OF_TRANSACTION block. At this point we still have the previous converted record and the next input record is also available so we can check whether there is a change of key. There are two things that have to be handled:

  • When processing the first input record we should not write any output
  • When we reach the end of the input file, the last transaction has not yet been written, and since we will not come back to the begin block it must be written in a separate end-of-file step
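Putting these pieces together, the reshaped control flow can be sketched as follows. This is a hedged Python illustration of the logic, not the generated ABAP; `convert`, `key` and the sample records are invented for the example:

```python
def convert(records, key):
    """Group records into documents, writing the previous document on key change."""
    documents, current = [], []
    prev_key = None
    for rec in records:
        if prev_key is not None and key(rec) != prev_key:  # change of key:
            documents.append(current)                      # write the previous document
            current = []
        current.append(rec)
        prev_key = key(rec)
    if current:                      # end of file: the last document has not
        documents.append(current)    # been written yet, so flush it here
    return documents

docs = convert([("1000", "A"), ("1000", "B"), ("2000", "C")], key=lambda r: r[0])
# docs == [[("1000", "A"), ("1000", "B")], [("2000", "C")]]
```

Note how the first record writes nothing (there is no previous document yet) and the last document is flushed explicitly — the two special cases listed above.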


Let’s now look at the code for each block. Since an offset booking has to be written at various places, the code for this has been put into a FORM routine.




* On change of company code, currency or balancing segment a new
* document is needed. We write the balancing entry of the prior
* document here and a new header at end of record for BBKPF

if not prev_bukrs is initial and ( infile-bukrs ne prev_bukrs or

   infile-waers ne prev_waers or infile-balseg ne prev_balseg ).

  perform offset_booking.  "FORM routine for the offset entry (name illustrative)

  h_writehdr = 'X'.    "Check this at end of BBKPF record

endif.



We have defined some variables to contain the previous key field values: prev_bukrs, prev_waers and prev_balseg. When we read the first record these have an initial value. Otherwise, if a value changes, we write the booking to the offset account and set a flag to write the header record for the new document.




* at_first_transfer_record.

if g_cnt_transactions_read = 1.

  transfer_record.

endif.

if g_cnt_transactions_group = 5000.

  g_cnt_transactions_group = 0.

  transfer_record.

endif.




BGR00 is the batch input session record. This is the standard coding except that we replaced the “at first” test with a test on the count of transactions read.




* On change of company, currency code or balancing segment

* start a new document

if h_writehdr = 'X' or prev_bukrs is initial.

* check prev_bukrs to get first header

  transfer_record.

  h_writehdr = ''.

* Set previous values here

  prev_bukrs = infile-bukrs.

  prev_waers = infile-waers.

  prev_balseg = infile-balseg.

endif.


BBKPF is the document header. We write a header for the first record and whenever the key changes. We also update the previous key values here.




if g_skip_record ne yes.

  transfer_record.

* Update running totals for the balancing item

  if INFILE-NEWBS = '40'.

    g_wrbtr_sum = g_wrbtr_sum + h_wrbtr.

    g_dmbtr_sum = g_dmbtr_sum + h_dmbtr.

  else. "Posting key 50

    g_wrbtr_sum = g_wrbtr_sum - h_wrbtr.

    g_dmbtr_sum = g_dmbtr_sum - h_dmbtr.

  endif.

  g_item_count = g_item_count + 1.

  if g_item_count = 949.   "Split the document after 949 items

    perform offset_booking.  "Balancing entry closes the current block

    g_item_count = 0. "Reset the item count after writing record

    transfer_this_record 'BBKPF'.  "Write header for next block

  endif.

endif.




If the record is valid (our program contains various validity checks) then an output record is written and the cumulative values in local and foreign currency are updated. This coding block also contains a document split: once a document reaches 949 items, a balancing entry is written followed by a new document header.
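The split logic itself is simple to sketch. The following Python illustration (all names invented for the example) closes a document with a balancing entry once the item limit is reached; 949 leaves headroom below the 999 line-item maximum of an FI document:

```python
MAX_ITEMS = 949  # limit used in the text; an FI document allows at most 999 items

def split_into_documents(amounts, max_items=MAX_ITEMS):
    """Return documents, each a list of amounts ending with its balancing entry."""
    documents, current, running_sum = [], [], 0
    for amount in amounts:
        current.append(amount)
        running_sum += amount
        if len(current) == max_items:
            current.append(-running_sum)   # balancing entry: document sums to zero
            documents.append(current)
            current, running_sum = [], 0
    if current:
        current.append(-running_sum)       # flush the last (partial) document
        documents.append(current)
    return documents

docs = split_into_documents([100, -40, 25], max_items=2)
# docs == [[100, -40, -60], [25, -25]]; every document sums to zero
```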




if g_flg_end_of_file = 'X'.




This is where we handle the problem of the last record. LSMW contains a number of global variables; a useful one that is not included in the LSMW documentation is g_flg_end_of_file. When this has the value X we have reached the last record and a final offset booking should be written.




  if g_wrbtr_sum ne 0 or g_dmbtr_sum ne 0.

* Offset entry not required if the document balances!

    bbseg-newko = p_offset.  "Use suspense account

    bbseg-zuonr = 'DATA MIGRATION'.

    bbseg-sgtxt = 'Balancing entry'.

    bbseg-prctr = h_prctr.

    bbseg-xref2 = '/'.  "Ensure this is empty here

    bbseg-valut = '/'.  "Empty on the offset booking

    bbseg-mwskz = '/'.  "Empty on the offset booking

*  bbseg-xref1 = '/'.  "Empty on the offset booking

    if g_wrbtr_sum ge 0.

      bbseg-newbs = '50'.    "Credit entry

      bbseg-wrbtr = g_wrbtr_sum.

    else.

      bbseg-newbs = '40'.    "Debit entry

      bbseg-wrbtr = - g_wrbtr_sum.

    endif.

    if g_dmbtr_sum ne 0.

      if g_dmbtr_sum ge 0.

        bbseg-dmbtr = g_dmbtr_sum.

      else.

        bbseg-dmbtr = - g_dmbtr_sum.

      endif.

    endif.

    translate bbseg-wrbtr using '.,'.

    translate bbseg-dmbtr using '.,'.

    g_wrbtr_sum = 0.

    g_dmbtr_sum = 0.

    g_skip_record = no.  "LSMW carries over status of previous rec!!

    transfer_this_record 'BBSEG'.

  endif.




There is no need for an offset booking if by chance the document is already in balance. Otherwise we create the offset booking. The offset account is an input parameter of our program, and some other fields have fixed values. Our system is configured to use a decimal comma, so we need to change the value fields to the format expected on an input screen. At the end we write the balancing record and the transaction.
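The `translate bbseg-wrbtr using '.,'` statements perform a one-character substitution: every '.' becomes ','. A minimal Python equivalent of that substitution:

```python
def to_decimal_comma(amount_text: str) -> str:
    """Replace '.' with ',' the way ABAP's TRANSLATE ... USING '.,' does."""
    return amount_text.replace(".", ",")

assert to_decimal_comma("1234.56") == "1234,56"
```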



This is a simple technique that can be useful in a variety of situations.

As SAPPHIRE starts in Orlando tomorrow, we, customers and partners, will be bombarded with information about S/4HANA. It is clear that it is dead center in the SAP strategy for the coming years.


The value of a renewed Business Suite will become clearer and clearer the more we hear about Simple Finance and now Simple Logistics. The roadmap looks very exciting, with Fiori apps covering great scope and with the upcoming Business Suite merge (different components of the Business Suite, like CRM and SCM, will merge back into ERP to form a single system).


This transformation is still in its early stages. Financials are far ahead, while logistics is coming soon. The roadmap for the “repatriation” of external Business Suite functions back to the ERP core is just starting. I foresee a roadmap of 3-5 years until we see a complete fusion.


But what is clear is that SAP has addressed two of its most important issues: simplification of the SAP footprint AND the user interface. We will soon see customers running ERP on HANA (S/4HANA) with complete Business Suite functions like Global ATP and EWM back in the core ERP. No more parallel SCM and/or CRM landscapes. No data replication. Shorter implementations, lower TCO.


So, all this is great, but there is a catch: current SAP customers looking at jumping into S/4HANA need to revise their current SAP solution and consider returning (as much as possible) to standard functionality.


Customers have been running SAP for many years (some for decades), and they built on top of the Business Suite for two reasons: either to implement something that was not available at the time (early customers) or to implement customer-specific requirements.

What we see more clearly now than before is that the price of implementing and maintaining these “customizations” is much higher than just the development hours. They may hinder the adoption of future functionality. This is exactly what is happening now. Here are two major examples:


  1. Custom code – Customers often support thousands of custom-built programs. Migrating to ERP on HANA requires a revision of this code so it can run properly (not talking about performance here; some database practices differ, and bad ABAP code of today will NOT run correctly on HANA, so it NEEDS fixing).
  2. The more custom code and advanced configuration a customer has, the farther they will be from adopting two of the most important values of the S/4HANA proposition: Guided Configuration and standard Fiori apps.

So, now that it is more evident than ever that it pays to adopt standard practices and reduce customization as much as possible, isn’t it time to adopt SAP’s pace of innovation and stop trying to build IT solutions ourselves? Wasn’t THAT the original proposition of buying ERP software in the first place?

Hi fellows,


I want to share with you an experience about milestones after go-live in the business where I work.


We went live 9 months ago, and since then we have had many problems with users, internal communication, process speed and productivity. Additionally, the business had changed its natural processes for other processes. As the days went by, the problems kept increasing.


The CEO of the factory is a person very committed to the processes and to the SAP system, and he always held meetings to communicate to all managers the importance of using SAP correctly; however, after each meeting the clutter returned.


That was the moment I became aware of the importance of Change Management: we were spending all day, every day, supporting different areas such as FI, CO, PS, MM, SD, PP, PM, ETM, HCM and QM.


After that, over the following days, we started a program with Change Management at its core. With this tool the improvement was evident in:


- Processes

- Staff communication

- Performance and workability

- Knowledge of the processes & business


I simply wanted to share this experience about the importance of tools such as Change Management, which are very interesting and helpful in our work.


Currently we are designing the roll-out to another centre of the company and are going to start implementing WPB, certainly including Change Management in our Lessons Learned.



In December 2014, I had the chance to attend a press/analysts presentation at Cirque Du Soleil’s HQ on their Cloud adoption and overall IT strategy. Thanks SAP for the invite.



Even though I am from Montreal and Cirque is a client of ours, I was very surprised to see the client’s level of maturity and their path forward.



First, the client’s current SuccessFactors adoption was presented.  SuccessFactors apps were being deployed fast and across departments. The reasons were two: a fit with the out-of-the-box solution and the simplicity of cloud consumption.



Another very important point presented was the adoption of the Ariba sourcing tool. I can’t recall the exact number, but a very large part of the high-dollar RFQ sourcing is now processed in the Ariba tool, including most freight and hotel sourcing. This is huge!  It was by far the highest adoption I have seen to date.



But what caught my attention the most was that the customer has reached a point in their evolution where they realize the importance of sticking to standard processes. What we often refer to as “Vanilla” in SAP actually yields much faster adoption, leaner/simpler deployments and the best position to benefit from future functionality. In the SAP market, SAP ERP, with its incredible flexibility, has allowed us to design very tailor-made solutions, very custom processes and unique solutions. Do the custom processes, often referred to as “competitive advantage”, really pay off? Customers often look only at the development cost of building a custom solution. But it actually costs a lot more than that. We should factor in the upgrade hiccups/retests/adjustments and, more importantly, the fact that the customer may often end up on the wrong track (bad designs etc.).



The head of Cirque’s Procurement clearly stressed the importance of returning to standard. They are impressed by how much value can be gained from little investment by using the out-of-the-box cloud apps, but they also think that returning to standard processes will allow them to further integrate with the Ariba Network.



I wanted to bring the point about return to standard because SAP is introducing a lot of innovation these days. S/4HANA is a huge game changer. Even the base ERP on HANA brings a lot of benefits. But both will prove challenging to migrate to for very customized customers. In the case of S/4HANA, the benefits of Guided Configuration and the newly delivered Fiori apps will be much better adopted by customers close to standard and best practices.



Will 2015-2016 be the years when customers start seeing the challenges of running very customized solutions? Wasn’t the adoption of out-of-the-box solutions the initial motivation to jump into SAP in the first place?


DISCLAIMER: This is not a challenge ONLY for SAP customers. In fact, any customer that runs a packaged software will face the dilemma of Standard vs customized. It is just that SAP is introducing so much innovation that these customers will be the first to realize it.

In many projects, we have needed status management for our application. Many times we end up creating a new framework or functionality for this, not knowing there is a standard feature for it. Here is what you need to do to configure and set up a new one.


  1. Define a new Object type (Tcode BS12)

Create a new object type which identifies a status object.



2. Maintain status profile (BS02)

Create a new status profile, which can be made up of one or more statuses. Each status has a number, and you also specify the lowest and the highest status number you can navigate to from it. It is not possible to specify individual transitions like “from A you can move to C but not to D”, but if you place the statuses in the right order, the number ranges let you define such transitions carefully.
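The effect of the status numbers can be sketched as follows. This is a hypothetical Python model, not SAP code; the status names, numbers and bounds are invented for illustration:

```python
PROFILE = {
    # status: (number, lowest_reachable, highest_reachable)
    "CRTD": (10, 10, 20),   # created: may only advance to released
    "REL":  (20, 10, 30),   # released: may fall back or complete
    "TECO": (30, 30, 30),   # completed: final, no way back
}

def transition_allowed(profile, frm, to):
    """A transition is allowed when the target's number lies in the source's range."""
    _, low, high = profile[frm]
    target_number = profile[to][0]
    return low <= target_number <= high

assert transition_allowed(PROFILE, "CRTD", "REL")
assert not transition_allowed(PROFILE, "CRTD", "TECO")   # would skip a level
assert not transition_allowed(PROFILE, "TECO", "REL")    # final status
```

This is why the ordering of the statuses matters: the lowest/highest numbers are the only transition control the profile itself offers.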



3. Maintain Business transactions (BS32)

The business transactions are like actions in the system. Some of the actions are possible only in certain statuses, and some of them can also cause a status change.





4. Business transactions valid for Object type

In transaction BS12, double click on the object type, then select all the business transactions that are eligible for this object type.



5. Transaction Control

In BS02, double click on each configured status to define the transaction control.

It is possible to specify which business transactions are possible in a given status and which are not.

As a second step, it is also possible to specify that a certain transaction sets a certain status.
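Conceptually, the transaction control defined here behaves like a lookup table: for each status, which business transactions are allowed, and which follow-on status a transaction sets. A hypothetical Python sketch (all status and transaction names invented for illustration):

```python
CONTROL = {
    # status: allowed business transactions, and the status each one sets
    "CRTD": {"allowed": {"RELEASE"},  "sets": {"RELEASE": "REL"}},
    "REL":  {"allowed": {"COMPLETE"}, "sets": {"COMPLETE": "TECO"}},
    "TECO": {"allowed": set(),        "sets": {}},
}

def run_transaction(status, transaction):
    """Reject a transaction not allowed in the current status; otherwise
    return the follow-on status (or the unchanged status)."""
    entry = CONTROL[status]
    if transaction not in entry["allowed"]:
        raise ValueError(f"transaction {transaction} not allowed in status {status}")
    return entry["sets"].get(transaction, status)

assert run_transaction("CRTD", "RELEASE") == "REL"
```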




ABAP Code for status transition:

* Run business transaction in simulation/update mode
* (the CALL FUNCTION line was missing here; STATUS_CHANGE_FOR_ACTIVITY
*  is the function module matching these parameters and exceptions)
CALL FUNCTION 'STATUS_CHANGE_FOR_ACTIVITY'
  EXPORTING
    check_only           = iv_simulation_flag
    objnr                = lv_objnr
    vrgng                = lv_biz_transaction
  EXCEPTIONS
    activity_not_allowed = 1
    object_not_found     = 2
    status_inconsistent  = 3
    status_not_allowed   = 4
    wrong_input          = 5
    warning_occured      = 6
    OTHERS               = 7.
IF sy-subrc <> 0.
  MESSAGE e075(zawct_msg) WITH lv_biz_transaction INTO lv_message.
  PERFORM bapi_message_collect CHANGING et_messages.
ENDIF.



I see many questions posted on SCN that could be resolved by a single note or KBA.  Considering the number of notes/KBAs, I know it can be quite difficult to find the right note/KBA to resolve a specific problem.  So I'd like to share some tricks and experience for searching for suitable notes/KBAs.


Where is the Note/KBA search tool?

  1. Go to the link :Home | SAP Support Portal
  2. Click the link "Note and KBA  search tool" on the right
    support portal.png
  3. Click the button "Launch the SAP Note and KBA search"
    search tool.png
  4. The search tool will be opened.
    search criterias.png

What search options can be used?

  1. There are 3 checkboxes for option "Language": German, English and Japanese

    If the developers are located in Germany, they tend to write the original notes in German and then translate them into English or Japanese.  Since translation may take time, if you search with the option English or Japanese you may not get a full list of the newest notes/KBAs.
    So my suggestion is:
    If you can read German and you know the developers for the application area are located in Germany, use the language "German" to search. 
    If you cannot read German, but you suspect this is a new bug/error which should already be covered by a note, you can still search with the language "German", but use key words like the program name, field name, error message number etc.
  2. Choose your search terms carefully.

    Use error message number instead of short text.
    If an error occurs, you can find the error message number in the long text by double clicking the error.  For example, search by "CP466" or "CP 466".
    Use field technical name instead of field description
    For example, use ELIKZ instead of "delivery completed".

    Use full name instead of abbreviation.
    eg. purchase requisition instead of PR, planned independent requisition instead of PIR

    Use function modules/programs if you know which is called and is behaving strangely
    eg. error occurs when converting planned orders to production orders, the function module CO_SD_PLANNED_ORDER_CONVERT is called.

    Use different t-codes
    eg. although error occurs in CO02, you can also try to use t-code CO01 during search.

    For short dumps, use the key words suggested in the dump.
    You can usually see the following statement in the dump:
    If the error occurs in a non-modified SAP program, you may be able to find an interim solution in an SAP Note. If you have access to SAP Notes, carry out a search with the following keywords:
  3. Choose a right application area to restrict the selected note/KBA numbers, or do not enter the application area to expand the selection to all components (this could be useful if you don't know the right component or the issue crosses many applications).
    You can simply use a wildcard * after a main application (eg. PP* or PP-SFC*) if you are not sure about the sub-components.
  4. Choose the validity of your system release in the field "Restriction" by selecting "Restrict by Software Components".
    If the problem only appears after a system upgrade, it's better to specify your system release to filter out old notes.
    For example, you can find your own system release by t-code SPAM.  Click the button "Package level".  Usually for application areas like PP, QM, you should look for the component SAP_APPL.
    package level.png
    During search, input the software component, release from, release to. ( "From support package" is optional.)
    Here is the result from the above search criteria:
  5. Choose a note/KBA category.
    If you are just looking for some explanation of system logic, you can choose category consulting and FAQ.  They usually don't include any coding.
    If you suspect a system error (for example, the issue only appears after a system upgrade), you can choose the categories "program error" and "modification".  They usually contain correction codes.



Common questions regarding notes/KBAs:


The note contains code corrections.  How can I see the codes?

Take the note 1327813 for example.

In the section " Correction Instructions", you can find a correction number corresponding to the valid system release.  Click  the number. A new window will be opened.  You can then click the object name to see the codes.correction instructions.png



How do I know if the note is valid for my system?

Take the note 1327813 for example.

See the section "Validity".  It shows the main SAP_APPL release for which the note is valid.
See the section "Support Packages & Patches".  It lists the exact SP and patch that imports the note.  If your system is on a lower SP and patch level than the one listed, the note can be implemented with SNOTE.
sp and patch.png

The note includes Z reports.  How do I implement them?

Many notes contain Z reports to correct inconsistencies. For example the notes listed in blog Often used PP reports in SAP Notes

These Z reports usually cannot be imported by SNOTE.  You have to create them manually in t-code SE38 and copy in the source code from the notes.


Where are the related notes?

In the section "References".  You can see which notes are referenced to and referenced by.  Sometimes they can be quite useful to see other related notes.


Also refer to the following KBAs about searching notes/KBAs:


2081285 - How to enter good search terms to an SAP search?

1540080 - How to search for KBs, KBAs, SAP Notes, product documentation, and SCN


Amazing tool for automatic note search:


I have to mention the amazing tool which enables automatic note search.  It enables the user to find out which note correction is missing on your system automatically.  See the following note:

1818192 - FAQ: Automated Note Search Tool


Be aware that this tool can only find notes with correction codes.  So if you are looking for consulting and explanatory notes/KBAs, you still have to use the tips above to search by yourself.


Tips to search on SCN, see document:

My tips on searching knowledge content the SCN

We present the installation of an SAP system with high availability for the ABAP instance:

This installation will be on Windows Server 2012 R2 as the platform and SQL Server 2012 SP1 at the database level.

We will use Software Provisioning Manager 1.0 SP06 and Kernel 7.2.

We have divided this installation into the following parts:

Part 1- Install SQL Server Cluster.

Installing SAP HA on SQL Part1: Install SQL Server Cluster.

Installing SAP HA on SQL Part1: Install SQL Server Cluster step2

Part 2- Install First Cluster Node.

Install First Cluster Node.

Part 3- Install DB instance.

Install DB Instance

Part 4- Install Additional Cluster Node.

Part 5- Install Central Instance.

Part 6- Install dialog Instance.

We will now start directly with Part 4 - Install Additional Cluster Node.


Using the Software Provisioning Manager with the latest update (unCAR it) and the latest kernel update for the same version, start by:

Choose System Copy --> MS SQL Server --> Target System Installation --> High-Availability System --> Based on AS ABAP --> Install Additional Cluster Node.

Additional Cluster Node 01.PNG

In the first step, you must choose the cluster group which you created in Install First Cluster Node, and choose the local drive on the additional cluster node on which to install the local instance.

Additional Cluster Node 02.PNG

Enter the password for SAP System administrator and service user.

Additional Cluster Node 03.PNG

Specify the location of the Unicode NetWeaver kernel and select it.

Additional Cluster Node 04.PNG


Additional Cluster Node 05.PNG

Additional Cluster Node 06.PNG

In this step you must configure your swap size as recommended.

Additional Cluster Node 07.PNG


Additional Cluster Node 08.PNG

Here you can choose the domain in which the SAP system accounts for the SAP Host Agent are created.

Additional Cluster Node 09.PNG

Enter the password for the operating system users on additional node.

Additional Cluster Node 10.PNG

In this step, you must enter the instance number for the Enqueue Replication Server.

Additional Cluster Node 11.PNG

This is the summary of all customized installation selections before the installation starts.

Additional Cluster Node 12.PNG

Start the installation of the additional cluster node.

Additional Cluster Node 13.PNG


Additional Cluster Node 16.PNG

Said Shepl

Install DB Instance

Posted by Said Shepl Dec 9, 2014

We are present the installation of SAP System with a high availability for ABAP Instance:

This installation will be on Windows Server 2012 R2 as a platform and a SQL Server 2012 SP1 as adatabase level.

we well use Software provision manager 0.1 SP 06 and Kernel 7.2:

also we are divided this installation to the following Parts:

Part 1- Install SQL Server Cluster.

Installing SAP HA on SQL Part1: Install SQL Server Cluster.

Installing SAP HA on SQL Part1: Install SQL Server Cluster step2

Part 2- Install First Cluster Node.

Install First Cluster Node.

Part 3- Install DB instance.

Part 4- Install Additional Cluster Node.

Part 5- Install Central Instance.

Part 6- Install dialog Instance.

We will now start directly with Part 3 - Install DB Instance.

We must prepare our DB source, either a DB backup or a DB export created using the SWPM export tool.

You must download the Software Provisioning Manager with the latest update (unCAR it) and the latest kernel update for the same version, and start by:

Choose System Copy --> MS SQL Server --> Target System Installation --> High-Availability System --> Based on AS ABAP --> Install DB Instance.

DB Instance 01.PNG


DB Instance 02.PNG

We will choose standard system copy/migration (load-based) because we used the export DB tool.

If you restored your DB using SQL Server, you can use the other choice, homogeneous system copy.


DB Instance 03.PNG

Choose the MS SQL Server instance name which you created during the SQL Server cluster installation.

DB Instance 04.PNG


DB Instance 05.PNG

In this step, we provide the SAP installation with the SAP Kernel CD location.

DB Instance 06.PNG

Choose the LABELIDX.ASC file for the required kernel.

DB Instance 07.PNG


DB Instance 08.PNG

In this step, we provide the SAP installation with the location of the DB export which we took from the source system.

DB Instance 10.PNG

In this step, you can enter the password of SAP DB schema.

DB Instance 11.PNG

You can choose the required number of data files according to the number of CPU cores of your server,

as illustrated: large system (16-32 CPU cores), medium system (8-16 CPU cores) and small system (4-8 CPU cores).

DB Instance 12.PNG


DB Instance 13.PNG


DB Instance 14.PNG

In this step, you can enter the number of parallel jobs to run at the same time.

DB Instance 15.PNG

In this step, you select the kernel database .SAR file to unpack into the kernel directory.

DB Instance 16.PNG


DB Instance 17.PNG


DB Instance 18.PNG

We received the following error during the installation:

DB Instance 20.PNG

We used the following link and SAP Note 455195 (R3load: Use of TSK files) to solve this issue:

SWPM: Program 'Migration Monitor' exits with error code 103

DB Instance 25.PNG

The problem was solved and the installation continued.

DB Instance 27.PNG


DB Instance 29.PNG

This step shows that the installation has completed successfully.

DB Instance 30.PNG



Said Shepl

We present the installation of an SAP system with high availability for the ABAP instance:

This installation will be on Windows Server 2012 R2 as the platform and SQL Server 2012 SP1 at the database level.

We will use Software Provisioning Manager 1.0 SP06 and Kernel 7.2.

We have divided this installation into the following parts:

Part 1- Install SQL Server Cluster.

Installing SAP HA on SQL Part1: Install SQL Server Cluster.

Installing SAP HA on SQL Part1: Install SQL Server Cluster step2

Part 2- Install First Cluster Node.

Part 3- Install DB instance.

Part 4- Install Additional Cluster Node.

Part 5- Install Central Instance.

Part 6- Install dialog Instance.



We will now start directly with Part 2 - Install First Cluster Node.

You must download the Software Provisioning Manager with the latest update (unCAR it) and the latest kernel update for the same version, and start by:

Choose System Copy --> MS SQL Server --> Target System Installation --> High-Availability System --> Based on AS ABAP --> First Cluster Node

First Cluster Node 01.PNG


First Cluster Node 02.PNG


We received the following error:

We solved this issue by following SAP Note 1676665:

First Cluster Node 02 Sol.PNG

Download Vcredist_x64 and install it.

In this case, install the Microsoft Visual C++ 2005 Service Pack 1 Redistributable Package ATL Security Update, which is available at: http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=14431

Retry the installation with SWPM.


First Cluster Node 03.PNG


First Cluster Node 04.PNG


First Cluster Node 05.PNG


Note: In this step, you must create an A record in your DNS for the SAP virtual instance host name:

First Cluster Node 06.PNG


First Cluster Node 07.PNG


First Cluster Node 08.PNGFirst Cluster Node 09.PNG


First Cluster Node 10.PNG

First Cluster Node 11.PNG


We received this error because we chose Cluster Disk 1, which is assigned to SQL Server. We then chose Cluster Disk 3, which is in available storage, as follows:

First Cluster Node 12.PNG


First Cluster Node 13.PNG

First Cluster Node 14.PNG


First Cluster Node 15.PNG

Select the LABELIDX.ASC file on the kernel drive.

First Cluster Node 16.PNG


First Cluster Node 17.PNG


First Cluster Node 18.PNG

Reconfigure the swap size in the OS.


First Cluster Node 20.PNG



First Cluster Node 22.PNG


First Cluster Node 23.PNG


First Cluster Node 24.PNG


First Cluster Node 25.PNG


First Cluster Node 26.PNG


First Cluster Node 27.PNG


First Cluster Node 28.PNG

LSMW.  I wonder where that 4 letter acronym ranks in terms of frequency of use here on SCN.  I'm sure it's in the top 10 even with stiff competition from SAP, ERP, BAPI, ABAP, and some others.


Why is that?  Well, it's a very useful tool and comes up frequently in the functional forums.  I remember when I got an email from a fellow SAP colleague introducing me to it.  That was back sometime in the fall of 1999, but I know version 1.0 came out a year earlier and was supported as far back as R/3 3.0F.  I dove into it and the guide that SAP had published, and it was really great.  I could see immediately that for basic data conversions, I could handle the entire conversion process without the help of a developer.  Back then, that was a fairly big deal and one where I'm sure the ABAPers had no problem ceding territory.


Just a year earlier I was using CATT to do legacy conversion.  It had a similar transaction-code-based recording mechanism, a way to define import parameters, and a loading mechanism to map a .txt file to those parameters.  But CATT was not designed specifically for data conversion, so it could be a pain to deal with.  In particular, tracking load errors was very tedious, which forced you to do a large number of mock loads on your data to ensure that it was perfect.


My History with LSMW

Back in 1999, it was obvious to me that LSMW was a big improvement over CATT for a few reasons:

  • I could incorporate standard load programs and BAPIs.  Using screen recordings was no longer the only way to load data.  I hate screen recordings.  They eventually break, and you have to handle them with kid gloves at times... you have to trick them into handling certain OK codes or work around validations/substitutions.
  • LSMW allowed you to use screen recordings as a way to define your target structures.  I love screen recordings!  Why?  Because, as a last resort, they let me change any record in the system using an SAP-supported dialog process.  If you can get to it manually at a transaction code for a single record, then you can create/change/delete that same data in batch using a custom screen recording.
  • I could do the transformation within SAP rather than in Excel.  That saved a lot of time, especially if I had certain transformations (e.g., a cost center lookup) that were used in different loads.  Define once, use multiple times.
  • I could load multiple structures of data.  Again, this saved time because I didn't have to rearrange the data in Excel to force it into a particular structure format which might contain numerous fields that I had no interest in populating.  That left my source Excel file relatively clean which was far easier to manage.
  • Organization.  LSMW had a way to categorize each load by Project, Sub-Project, and Object.
  • No more developers!  While the tool allows you to insert custom logic, it's not required to do so.  If you know your data well enough and you have a typical legacy source file, there's no reason why a functional person such as myself can't load everything on his own.



Once word spread about LSMW inside SAP, it seemed that every functional consultant I worked with was using it.  Eventually we started using it for purposes other than legacy data conversion.  Mass changes, mass creation of new data that wasn't legacy related, etc.  Other non-functional areas used it too; I've seen security teams upload mass changes to userID records.



This is how I Really Feel

But... I didn't write this to praise LSMW.  Now, in the year 2014, I can't stand working with it.  Its limitations have been bugging me for years and SAP hasn't done anything to improve it.  My gripes:


  1. Poor organization.  The simple Project / Sub-Project / Object classification is too limiting.  It seems to be a quasi-hierarchy of the individual LSMW objects... but why not release a fully functional hierarchy?  If we had a real hierarchy we could use multiple levels, parent-child relationships, drag-and-drop, etc.  Some customers don't use it that much and may only need a hierarchy one level deep; others might need five or more.  Either party is currently forced into the existing two-deep classification of Project / Sub-Project.  What I most often see is a horrible organization of the underlying LSMW objects.  That fault lies with the customers for not enforcing and administering this hierarchy.  But if the tool made it easier to classify and organize the various scripts, maybe it wouldn't be as messy as I've come to expect.
  2. The prompts are inconsistent.  This is a minor gripe, but the function keys are different per screen.  To read/convert your data file you navigate to a (very limited) selection screen and press F8 to execute.  To read the contents of those data files within SAP, you get a pop-up window and have to hit Enter to execute it.  No one limits the reading to a selection of records (or very rarely does), so I could do away with that prompt entirely.
  3. Another personal gripe but I'm so tired of the constant Read Data, Convert Data, Load Data...  Whoops!  Error!  Change in Excel, save to .txt, Read Data, Convert Data, etc.  The process has too many steps and I have to flip between SAP, Excel, my text editor, and my file manager (Directory Opus).  Or, why can't I link directly to my Excel file and view it within SAP?
  4. There isn't a good way to quickly test or validate some basics of the data.  I get that each area and load mechanism is different (i.e., BAPI versus screen recording) but there should be a quick way within the tool to better validate the data in a test format so that we know right away if the first 10 records are OK.
  5. Speed.  I traded some tweets with Tammy Powlas this past weekend.  She used RDS for an upload (Initial Data Migration Over, The Fun Has Just Begun).  The upload of 600k records took an hour, and I highly doubt that LSMW could beat that.
  6. The solution was great back in 1998 for the reasons I noted above.  Back then I would happily double click between my source and target fields, assign rules, create lookup tables, etc.  But it's 2014.  I'd rather use a Visio type of tool to maintain my data relationships.
  7. Lack of Development.  Here's the version we are running at my customer.  2004...  seriously?  No changes in 10 years?  I recall the early versions of LSMW... v1, v1.6, v1.7... but I don't remember there being a v2 or v3.  So how did we jump from v1.7 to v4, and what are the delta changes?  Seems like some upper management mandated creative version management to me.  My guess is that LSMW has been upgraded based on changes to WAS and to keep it in step with ERP 5.0 and 6.0... but the product itself hasn't changed in terms of functionality.  LSMW still feels like a v2 product to me.


screenshot - 2014.11.12 - 08.31.12.png




My Biggest Gripe

But my biggest gripe isn't with the tool.  It's how it's used by the SAP community.


It seems that every consultant I know uses LSMW as their go-to source for all data changes.  I've walked into customers that have been using an LSMW to maintain some object for 10+ years!!!!  How the heck can something like that happen?  This is an area where LSMW's flexibility works against it... or rather, works against the customer's long term success with SAP.  The problem here is that it allows us functional folks to quickly develop a 'tool' to maintain data.  It's the quickest way to develop a solution on the Excel-to-SAP highway that accountants et al. need throughout the year.  For a truly ad-hoc requirement to do just about any process in SAP based on data in Excel, it works fine.  I don't have an issue with that and would recommend LSMW in those appropriate cases.  But it's not a long term solution.  Period, end of story.



Other Solutions

Mass Maintenance Tool

If you have a recurring need to mass change master data, you should be using the mass maintenance tool.  Just about every module has developed a solution using this tool to change the most important master data records in the system.


screenshot - 2014.11.12 - 08.56.29.png



Be Friendly to your ABAPer

Anyone heard of a BAPI?  If you have a recurring need to upload transaction data, make changes to certain POs, sales orders, etc., or maintain a master record not in the list above, there is a BAPI that will do that for you.  Get with your ABAPer, develop a suitable selection screen, add a test-run parameter, build a nice ALV-based output report, and then get the tcode created.  Done...  that's a good solution using an SAP-supported protocol that is far better, safer, more consistent, and easier to work with than a screen-based recording that dumps your data into a BDC session.  In my opinion, if part of your solution has the letters 'SM35' in it, you've done something wrong.
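The "test-run first" pattern described above is language-agnostic, so here is a minimal Python sketch of the idea. The `bapi_create` stub and all field names are hypothetical stand-ins; in a real ABAP report you would call the actual BAPI and commit only on a real run.

```python
# Sketch of the "test-run first" upload pattern described above.
# bapi_create is a hypothetical stand-in for a real BAPI call.

def bapi_create(record):
    """Pretend BAPI call; returns (success, message)."""
    if not record.get("material"):
        return False, "Material is missing"
    return True, "OK"

def upload(records, test_run=True):
    """Process records and collect a message log; commit only on a real run."""
    log = []
    for rec in records:
        ok, msg = bapi_create(rec)
        log.append({"record": rec, "status": "S" if ok else "E", "message": msg})
        if ok and not test_run:
            pass  # real run: this is where the commit would happen

    return log

result = upload([{"material": "M-100"}, {"material": ""}], test_run=True)
errors = [entry for entry in result if entry["status"] == "E"]
```

The point of the design is that a test run produces the exact same message log as a real run, so the functional user can validate the whole file before a single record is posted.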


Why would anyone recommend to a customer that they should use this crummy process (read data, convert data, display converted data...) as the long term solution for making changes like this?  That's not a solution, it's a lame recommendation.

Final Word

LSMW and other similar screen-recording-based tools (WinRunner et al.) are flexible, and it's tempting for people... and I'm talking primarily to the consultants out there who over-use and over-recommend LSMW... to keep going back to them.  It's a useful tool, but there are problems when you don't have enough tools in your toolbox: you're limited in options and you keep going back to what you know.


Have you heard the phrase "When you have a hammer, everything looks like a nail"?  It comes from noted psychologist Abraham H. Maslow in his 1966 book The Psychology of Science.


Maslow quote.png



His quote is also part of something called the Law of the Instrument.  A related concept is the notion of the Golden Hammer, which was written about in AntiPatterns: Refactoring Software, Architectures, and Projects in Crisis: William J. Brown, Raphael C. Malveau, Hays W.…  The book covers the factors that come up repeatedly in bad software projects.  Among them is what they call the Golden Hammer, which is "a single technology that is used for every conceivable programming problem".


LSMW's time as my hammer of choice passed a long time ago.  It's a useful tool and should be in everyone's toolbox but we shouldn't use it unless there is an actual nail sticking out.



Data conversion in SAP project - continuation

Introduction :

I recently wrote a blog about the data conversion process, in which I specified the basic steps of this important process (Data conversion in SAP project).

It's advisable to read that blog before this one. As my PM project advances successfully, I have discovered a new, very important step in this critical process. I wish to share this step: data cleansing.


Data cleansing step:

After you have finished analyzing the errors in the Excel file, you will discover that some conversion failures are due to "garbage" data in the customer's legacy system.

It's important to point out that during this step, the legacy system is probably still active (users are still using it).

As I explained in the previous blog, the Excel file stores the output result from Process Runner next to each record that was uploaded to SAP. That is, you can see the reason for failure next to each record.

Step 1: Filter out all records whose failures are due to "garbage" data.

Example from my project: I tried to upload 2,000 records of the customer's machines and tools, which are PM pieces of equipment. The two major reasons for failure were: 1. Material X doesn't exist in SAP (the material number was a field in every legacy equipment record, and it has to be valid in SAP).

2. The short text description is too long (I had mapped the legacy equipment short text to the SAP equipment short text; the problem is that the SAP equipment short text is limited to 40 characters only).
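As a sketch, the two failure causes from my project could also be caught before the upload with a simple pre-validation pass. The field names, the material list, and the sample records below are all illustrative, not from the real project:

```python
# Pre-validate legacy equipment records against the two failure causes
# described above: unknown material numbers and over-long short texts.

MAX_SHORT_TEXT = 40  # SAP equipment short text limit

def validate(record, known_materials):
    """Return the list of reasons why this record would fail the upload."""
    reasons = []
    if record["material"] not in known_materials:
        reasons.append(f"material {record['material']} doesn't exist in SAP")
    if len(record["short_text"]) > MAX_SHORT_TEXT:
        reasons.append("short text longer than 40 characters")
    return reasons

known = {"M-100", "M-200"}  # hypothetical materials already created in SAP
records = [
    {"material": "M-100", "short_text": "Electric generator"},
    {"material": "M-999", "short_text": "X" * 50},
]

# Collect the "garbage" records together with their failure reasons.
garbage = []
for rec in records:
    reasons = validate(rec, known)
    if reasons:
        garbage.append((rec, reasons))
```

Running a pass like this over the Excel extract before the load shortens the read/upload/analyze cycle, because the obvious failures never reach SAP in the first place.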

Step 2: Send your customer the Excel records that were filtered out in the previous step, so they understand why those records will not be transferred to SAP.

Remark: This conversion process, including the cleansing step, begins in the SAP Dev environment and afterwards moves to the other SAP environments: Trn, QA, Pre-prod and Prod. Each time, the process runs in a cycle from beginning to end. The data extracted from the legacy system is extracted from the production environment only, each and every time.

Step 3: Your customer should decide what to do with those records. They might direct you to erase those records from the Excel file, or they might decide to cleanse the relevant data in the legacy system. Only the customer is responsible for cleansing the data in the legacy system; you can't do it for them.

Step 4: After they decide what to do with those records, you should repeat steps 6, 4 and 5 from the previous blog: delete and archive all data that was uploaded in this cycle (in order to "make room" for the same but "cleaner" data to be uploaded), run the extraction program again to extract the "cleaner" data from the legacy system, upload the cleaner data to SAP, analyze the new failures, and so on.

On average, you should execute the cleansing step along with steps 4-6 two or three times until the data is uploaded to SAP without failures.
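The repeated delete/extract/upload/analyze cycle described above can be sketched as a small loop. All of the functions here are hypothetical stand-ins for the manual steps (the real extraction runs in the legacy system and the upload runs through Process Runner or LSMW):

```python
# Sketch of the cleansing cycle: repeat delete / extract / upload /
# analyze until a load finishes without failures.

def cleansing_cycle(extract, upload, delete_loaded, max_cycles=3):
    """Return the cycle number of the first clean load, or None."""
    for cycle in range(1, max_cycles + 1):
        data = extract()            # step 5 of the previous blog: extract from legacy
        failures = upload(data)     # step 4: load into SAP, collect failure log
        if not failures:
            return cycle            # clean load achieved
        delete_loaded()             # step 6: make room for the cleaner data
    return None

# Simulated run: the first extract still contains garbage, the second is clean.
attempts = iter([["bad record"], []])
cycles_needed = cleansing_cycle(
    extract=lambda: next(attempts),
    upload=lambda data: [r for r in data if "bad" in r],
    delete_loaded=lambda: None,
)
```

The `max_cycles=3` default mirrors the "two or three times on average" observation above; a real project would keep cycling until the customer signs off on the data.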

Data conversion in SAP project - conversion from legacy system to SAP ECC.

Introduction :

I would like to share my experience of the data conversion process with the SAP community. Data conversion is one of the most critical processes in a successful SAP implementation project. It is part of the realization step in the "ASAP" methodology (step 1: project preparation; step 2: blueprint; step 3: realization; step 4: final preparation; step 5: go-live). SAP advisors and local implementers are usually responsible for carrying out the data conversion from the legacy system to SAP ECC. I have also heard of SAP projects in which the Basis team carried out this process.

The converted data is used only to set up master data in ECC. It is not used to set up historical transactional data from the legacy system.

There are different tools for converting data: 1. the tool built into SAP ECC, via the LSMW transaction code; 2. an external tool named Process Runner, which communicates easily with ECC. I used Process Runner, which was purchased by my company.


Two of the most important qualities required to succeed in this process are: 1. thoroughness; 2. communicating with and understanding your customer's needs.


As mentioned above, the data conversion process is part of the realization step. The realization step begins after the advisors (or local implementers) have finished writing and submitting the blueprint documents for the customer's approval. After the approval, the implementers start customizing and writing specification documents for the new developments in the Development environment in ECC. Only then is it possible to start the data conversion process.

The data conversion consists of these sub-steps:

1. Mapping the necessary fields of the ECC object that will be filled with data (e.g., the Equipment object in the PM module)

Here you need to be well aware of what is written in blueprint documents regarding your SAP objects.

It's recommended to differentiate between the object's obligatory and non-obligatory fields. Sometimes object classification is needed; this happens when the object's regular fields are not enough to store all the data from the legacy system. I used classification on equipment objects that represented electric generators.

2. Creating one instance master data manually

The purpose of this step is to verify that the implementer is able to create the master data manually before making the recording.

3. Recording master data set up via Process Runner (or LSMW).

If the recording is not accurate, or changes to how the master data is set up are needed after recording, the recording has to start all over again. Thus it is important to be certain how to set up the object's master data correctly. If the recording was accurate and you saved it, Process Runner creates an Excel file with the proper columns to be filled (according to the fields you entered in the recording) in order to set up several instances automatically.

For example: you have recorded the setup of master data for one piece of equipment with certain data. After you save the recording, Process Runner creates the proper structure in Excel. You can then fill the Excel file's columns with as many pieces of equipment as needed and execute the recording again to set them up. In this way, multiple pieces of equipment are created via Process Runner.

4. Creating an extraction program to extract data from the legacy system

In this step you need to specify precisely to the legacy system administrator (usually a mainframe programmer) which fields, and from which tables, you need the data. The second thing to consider is the data population to be extracted (e.g., only active pieces of equipment, or only data created after a certain date; your customer will know the answer to this question). The administrator should then prepare the program in the legacy system for future use. In my project, the legacy system was a mainframe system written in ADABAS NATURAL. I sent the administrator specification documents listing the fields and the data population to extract.

If some kind of data manipulation is needed (e.g., 1. the equipment type in the legacy system contains the values A, B, C while the ECC equipment type was customized to contain AA, BB, CC respectively; 2. changing the format of date values; etc.), the administrator has to code it into the program.
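The two manipulations mentioned above are simple to sketch. The field names and date formats below are illustrative (the legacy format and the target format depend on the actual systems involved):

```python
# Sketch of the two manipulations described above: mapping legacy
# equipment types to the customized ECC values, and reformatting dates.
from datetime import datetime

TYPE_MAP = {"A": "AA", "B": "BB", "C": "CC"}  # legacy value -> ECC value

def convert(record):
    out = dict(record)
    # 1. value mapping for the equipment type
    out["equipment_type"] = TYPE_MAP[record["equipment_type"]]
    # 2. date reformatting, e.g. legacy DD.MM.YYYY -> SAP-internal YYYYMMDD
    out["start_date"] = datetime.strptime(
        record["start_date"], "%d.%m.%Y"
    ).strftime("%Y%m%d")
    return out

converted = convert({"equipment_type": "A", "start_date": "31.12.2013"})
```

Whether this logic lives in the legacy extraction program (as in my project) or in the conversion tool itself, it is worth keeping the mapping table in one place so every load cycle applies the same rules.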

It's very advisable that this program sorts the output columns identically to the column order in the Excel file from the previous step; the administrator should order the columns accordingly. Eventually, the extraction program creates an Excel file full of extracted data that fits the structure and format of the Excel file from the previous step.
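Matching the template's column order can also be enforced on the receiving side. Here is a sketch that writes extracted rows out in a fixed template order, dropping any extra legacy columns; the column names are illustrative:

```python
# Sketch: reorder extracted rows so the columns match the recording's
# Excel template order (column names are illustrative).
import csv
import io

TEMPLATE_COLUMNS = ["equipment", "short_text", "material", "start_date"]

def to_template(rows):
    """Write rows as CSV in the template's column order."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=TEMPLATE_COLUMNS,
                            extrasaction="ignore")  # drop unmapped legacy fields
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

out = to_template([{"material": "M-100", "equipment": "E1",
                    "short_text": "Pump", "start_date": "20140101"}])
```

A guard like this catches column-order mistakes immediately, instead of discovering them as cryptic load failures halfway through a cycle.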

5. Analyzing the error log file and fixing the extraction program

In this step the Excel file is full of data to be loaded to SAP ECC. Try loading 50 percent of all rows in the file. Process Runner will create output results; if there are any errors while the program tries to create master data, it will indicate the reasons. You should analyze the errors and fix the program accordingly.

6. Preparing a deletion and archiving procedure in SAP ECC

Eventually there is a chance you will need to delete some of the data that was loaded, for whatever reason. So first you need to distinguish the data that was converted and loaded to SAP ECC from the data that was created manually by users. The best way to do this is to use a standard SAP report and specify, in the report's selection screen, the SAP user that created the data. For example, in my project a certain programmer used Process Runner to load the data, so everything he loaded was created under his user code and was easy to distinguish. After the report extracts that data, mark whatever is necessary for deletion and use the SARA tcode to archive the data (I will post a separate guide on how to archive data in SAP using the SARA tcode).
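The "distinguish by creating user" idea above boils down to a simple filter. The user name and field names in this sketch are hypothetical; in SAP the created-by user is a standard field on the master record:

```python
# Sketch: distinguish converted records from manually created ones by
# the creating user, as described above (names are illustrative).

LOAD_USER = "PR_LOADER"  # hypothetical user that Process Runner logged in as

def loaded_by_conversion(records):
    """Return only the records created by the conversion load user."""
    return [r for r in records if r["created_by"] == LOAD_USER]

records = [
    {"equipment": "E1", "created_by": "PR_LOADER"},  # loaded by conversion
    {"equipment": "E2", "created_by": "JSMITH"},     # created manually
]
to_archive = loaded_by_conversion(records)
```

This is why it pays to run every conversion load under a dedicated user: the selection criterion stays trivially reliable across all cleansing cycles.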


I hope this information helps you in your work with SAP. For any question regarding this process, feel free to ask me.


I've been an SCN member since 2006 and have watched the involvement from others increase over the years.  This is both good and bad at the same time.  It's good to see more people get involved, but I'm not sure the collective quality of SAP knowledge on the site increases at the same rate.  I suppose this isn't unexpected given SCN's growth rate.  However, with the increased size, scope, and viewership of SCN, I think there is a risk to SAP customers that rely on the information being presented here.


I'm blogging today because lately I've seen a growing number of recommendations from community members that the OP should solve their problem by either 1) running an SAP correction program or 2) debugging their way into a table update.  Hacking table updates has been covered a few times already.  Just search on the appropriate key terms (I'd rather not list them) and you'll see plenty of discussions on it.


The point of this blog is to talk about the other technique (correction programs) and their consequences.



What are correction programs?

Correction programs are used to fix table inconsistencies that cannot otherwise be fixed through a normal dialog transaction.  The programs are developed by SAP, and the intent is to solve specific errors.  This is a critical point, because these programs cannot be used in all circumstances.  It's also important to note the audience of these programs.  They were developed to be used by SAP developers... i.e., the folks in Germany, or now, in AGS worldwide.  These were not originally intended to be customer-facing tools.



What's the big deal?

Most, if not all, of these programs are direct table updates with little validation of the data in the existing table or of the parameters entered at the time of execution.  There is little, if any, program documentation.  Most of them are crude utilities... and I'm not saying that to be critical.  Instead, I want to make the point that these are not sophisticated programs that can be used in a variety of scenarios and will safely stop an update from occurring if it doesn't make sense to do so (from a functional perspective).


Because of this, there is an element of risk in executing them.  The original data usually cannot be recovered.  If the programs are executed incorrectly, it's possible for inaccurate results to occur.  SAP doesn't advertise or document these programs because their stance is that they should only be executed by SAP resources (or under their guidance).  That means if you run a program and cause a bigger problem, SAP isn't obligated to help you out of that situation.



When is it appropriate to run a correction program?

A correction program should only be executed after you've gone through the following four-point checklist.


  1. Only when you have specific instructions from SAP via the help portal.
  2. Only after thorough testing in a quality/test system.  This can be difficult because the unusual nature of these problems makes them hard to replicate.  However, if at all possible, I would test the program as best I can and substantiate the results with appropriate screenshots, table downloads, reports, etc.
  3. Only after you have tried to solve the problem using normal transactions.  If there is a way to solve a GL issue using re-postings and such, I'd always go that route before utilizing a crude utility such as a correction program.
  4. Only as a last resort.



When is it not appropriate to run a correction program?

Most importantly... and I can't stress this enough...  these programs should not be executed without a thorough understanding of the problem at hand, the tables impacted, and the updates being performed by the program.  If you can't read code and weren't guided by SAP about the updates being performed, I wouldn't run it.  If you can't talk in complete detail about the error, can't prove that the error is triggered by a table inconsistency, or don't have the knowledge or tools to fix a potentially bigger problem if the correction program causes one, I wouldn't run it.




I'll show a few examples but I'll stay away from the more dangerous ones.


The first one has a clear warning message.  Most of the newer programs that I've seen have similar warnings even on the selection screen.


screenshot - 2014.08.11 - 14.56.53.png


Here's an old one.  No program documentation, no selection criteria, and very little information in the title.  If you can't read ABAP, how will you know what this program does?  What exactly does 'debug' mean in this context?


screenshot - 2014.08.13 - 17.10.31.png




The problem with topics such as this one is that a lot of people want to blast out the information to show off what they know.  My gripe is that we all need to realize that the responsibility (and blame) for running a correction program without proper consent or guidance from SAP is quite high.  Do so at your own risk.



How many times have you seen SAP projects generate disappointing feedback in the user community months after those "successful" go-lives?





Once the first invoice is issued, the first picture is shared, and thousands of congratulatory emails are sent, the real business problems appear.




Don't we have to change how we define success in our implementations?

Where are key concepts such as sustainability, business benefits, and savings?

Should we consider it a success only if an invoice can be printed, the first production order confirmed, or the first purchase order received?




Implementing SAP (an ERP system) does not mean that processes are being redefined, let alone improved. The redefinition of the business processes should be one more item in our implementation projects. This is the key step that ensures the company uses SAP to its full potential, following best practices. By redefining processes we ensure real benefits, sustainability and savings that will let companies see in SAP a good way to improve processes and achieve operational excellence.




Does this mean there is an opportunity to enhance the ASAP methodology by adding this business re-engineering as a pre-preparation phase?


I was associated with one of the world's largest SAP implementation projects, in the oil and gas domain, for more than 7 years. I had the unique opportunity and experience of working across different functions of the project organization: functional analysis, integration/test management, global deployment, and offshore management. I have tried here to outline my experiences and share some best practices in the context of a large SAP rollout programme.


Objective of the Programme - reduce complexity and bring change in three ways: through simple business models, standard global processes and common IT systems.

Journey: 10 years, 35 countries, 39,000+ users


Pathfinder - Low-risk and diversified implementations done to test the methodology

Ramping up - Building the global design and processes

Complete Global Design - Completing the global design and focusing on deployment to multiple countries

Embedding - Retrofitting the earlier go-live countries (pathfinder & ramping up) to the current level of design (the global template); deploying change requests to live countries

Extending Business Value - Bringing in countries with high business value & deploying improvement change requests


Global Rollout Approach:

  1. Global Template Development


  2. Local Rollout Deployment



Programme Management Framework -


Critical Success Factors:

  • Robust Governing Model - to deliver a programme of this size and scope, a robust governing model is required, with regular reviews and connects across teams and across the management chain
  • Effective Planning (Milestone Driven) - advanced planning (a couple of years in advance) with details of milestones and activities on a single spreadsheet, giving all staff greater visibility of the programme's activities
  • Diverse Team - the central team was based in one location but included a good mix of resources across the globe to cover different time zones and local coordination
  • Dynamic Delivery Organisation - changed as the programme needed. It started with one central team responsible for both design and deployment. The team was later divided into design and deploy to drive global deployments, and a new team was added to cater to live-country requirements. This led to siloed working, demarcation and dilution of knowledge across teams. Eventually the teams were merged and organised in hubs across the globe to remove inefficiencies.
  • Resourcing - building the right skillsets and domain knowledge
  • Process Adherence
  • Innovation and value additions


Definition of a Successful Go-Live: on time, on budget and with minimal disruption to the business


  • Business Continuity at Go-Live - businesses are fully compliant and continue to serve customers smoothly and efficiently
  • A stable system that enables the business to carry out its day-to-day activities
  • Customers and vendors are aligned with the new processes and policies and experience their full benefits
  • Accurate, meaningful and transparent management information
  • Staff find it easier to work within the new processes and systems and ultimately have more time for their customers
  • There is a plan in place to drive through the benefits – everyone knows the part they play in realising the benefits and continuing to increase efficiency
  • Legally and fiscally compliant

