Introduction

 

This is another blog to share my experiences of using LSMW for data migration. In a recent project there was a requirement to migrate SEPA mandates from an existing legacy system. This was achieved using two LSMW programs, in preference to writing standalone ABAP programs.

 

For some background on SEPA mandates I recommend that you read this document which includes useful information on the transactions and BAPIs that were used in LSMW.

 

I am assuming that you are already familiar with LSMW. If you are a beginner then there are plenty of other SCN posts that will help you.


Basic functional requirements

 

Since the mandates already existed in a legacy system an external mandate reference was used.

 

It is assumed that more than one mandate is allowed per customer (for example if the customer bank account changes).

 

When a mandate is added the collection authorization indicator should be flagged on the relevant bank account.

 

A customer may have more than one bank account.

 

It must be possible to distinguish between first use of a mandate and recurring use.


Approach overview

 

Before the mandates are migrated, the customers, including their bank account details, are created.

 

The creation of the mandates is based on recordings of FSEPA_M1. If a customer has more than one bank account, then a pop-up appears. Therefore two recordings of FSEPA_M1 are required.

 

If the mandate is successfully created then the collection authorization indicator is updated using a recording of XD02.

 

The above 3 recordings are processed in a single LSMW program.

 

In a second step a dummy usage record is created for mandates that have already been used. This is achieved by using a dummy LSMW recording and a direct update using a BAPI function. This is explained below under “Details of add usage record”.

 

In both LSMWs various function modules are used.


Details of initial mandate creation

 

Most of the fields in a mandate are filled by default from the customer master and from the company code which is the vendor, so very little real input is needed.

In our project the following was sufficient:

 

Please note that you need to specify the BIC (SWIFT) code for the FSEPA_M1 recordings to work correctly.

 

As stated earlier, three recordings were used:

 

In the BEGIN_OF_TRANSACTION block we do the following:

1. Check that the customer already exists. In our example the legacy customer number was encoded in the mandate id.
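A minimal sketch of that check (the six-character offset is an assumption about how the legacy number is encoded in your mandate ids; h_altkn, h_kunnr and the yes flag are the help fields used in the snippets below):

* Sketch only: adjust the offset/length to however the legacy
* number is encoded in your mandate ids.
h_altkn = infile-mndid(6).
* KNB1-ALTKN holds the previous (legacy) account number
select single kunnr from knb1 into h_kunnr
       where altkn = h_altkn.
if sy-subrc ne 0.
  write: /001 'Legacy Number:', h_altkn, 'not found in SAP.'.
  g_skip_transaction = yes.
endif.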

2. Check if the mandate already exists:

* Customer exists, now check if the mandate already exists
* It is assumed that more than 1 mandate is allowed
    h_sel_criteria-snd_type = 'BUS3007'.
    h_sel_criteria-snd_id = h_kunnr.
    h_sel_criteria-anwnd = 'F'.
    CALL FUNCTION 'SEPA_MANDATES_API_GET'
      EXPORTING
        I_SEL_CRITERIA     = h_sel_criteria
      IMPORTING
        ET_MANDATES        = h_mandates
        E_MESSAGE          = h_emessage
        ET_MANDATES_FAILED = h_mandatesfail.
    if not h_mandates is initial.
      loop at h_mandates into wa_mandates.
        if wa_mandates-mndid = infile-mndid.
          write: /001 'Legacy Number:',h_altkn,'SAP Customer:',
                h_kunnr, 'mandate already in SAP:', wa_mandates-mndid.
          g_skip_transaction = yes.
        endif.
      endloop.
    endif.

 

3. Check how many bank accounts the customer has. If there is more than one bank account then a pop-up appears in FSEPA_M1, so a separate recording will be used. You need to set a flag and then test it at the beginning of each FSEPA_M1 recording.
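A minimal sketch of this check (h_bankcount and g_multibank are assumed names; KNBK is the customer bank details table also used in step 4):

* Count the customer's bank accounts; if more than one, FSEPA_M1
* shows a bank selection pop-up, so the second recording is needed.
data: h_bankcount type i.
select count(*) from knbk into h_bankcount
       where kunnr = h_kunnr.
if h_bankcount > 1.
  g_multibank = 'X'.   "tested at the start of each FSEPA_M1 recording
else.
  clear g_multibank.
endif.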

4. Determine which bank account in the customer master needs to be updated. If there is more than one, then the bank accounts in the table on the “payment transactions” tab are sorted by country, bank key, and bank account. Of course your input will be the IBAN!

 

* check which customer bank a/c matches the IBAN, if any
* load the bank accounts into an internal table and then search it
  refresh it_cbanks.
  select KUNNR BANKS BANKL BANKN BKONT
  into table it_cbanks
  from knbk where kunnr eq h_kunnr.
  if g_skip_transaction ne yes.
    if it_cbanks[] is initial.
      write: /001 'Customer:',h_altkn,'SAP Customer:',h_kunnr,
         'Mandate:',infile-mndid, 'Customer has no SAP bank accounts.'.
      g_skip_transaction = yes.
    else. "Locate the bank account to update
      w_found = ''.
      w_index = 0.
      LOOP AT it_cbanks INTO wa_cbanks.
        select single iban from tiban into h_iban
                where BANKS = wa_cbanks-BANKS
                 and  BANKL = wa_cbanks-BANKL
                 and  BANKN = wa_cbanks-BANKN
                 and  BKONT = wa_cbanks-BKONT.
        if sy-subrc eq 0 and h_iban = infile-SND_IBAN.
          w_index = sy-tabix.
          w_found = 'X'.
        endif.
      ENDLOOP.
      if w_found ne 'X'.
        write: /001 'Customer:',h_altkn,'SAP Customer:',h_kunnr,
              'IBAN:',infile-snd_iban,
              'IBAN not available for this Customer in SAP.'.
        skip_transaction.
      endif.
    endif.
  endif.

 

The value of w_index determined in the code above is then used in the conversion rules of the XD02 recording for each collection authorization field in the recording. The screen shot just shows the first two:
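As an illustration, the conversion rule for the authorization checkbox of the first table row could look like this (the recording field name XEZER_01 is an assumption; use the field names from your own recording):

* Flag collection authorization only on the bank row that matched
if w_index = 1.
  XEZER_01 = 'X'.
else.
  XEZER_01 = '/'.   "leave the other rows unchanged
endif.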

 

 

Details of add usage record

 

After all of the mandates have been created a second LSMW is used to add a dummy usage record when the LASTUSEDATE in the input file is not empty. The same input file is used.

 

There isn’t an SAP transaction to add a dummy usage record, so we need to add it using a function module. We can process this through LSMW using a little trick.

 

We create a dummy recording. I made a recording of XD03 (display customer) and saved it as DUMMY. Since I won’t actually be using the recording, it doesn’t matter which transaction I use. In the “field mapping and conversion rules” the ABAP code is inserted to perform a direct update of the SAP table.

 

This means that when the LSMW program is used, you should only process up to the convert step.

 

It is also important that you carefully test your LSMW! In this case standard function modules are used so the danger of corrupting the SAP database is very low.

 

Below I have included the complete code used in our project. Please note the value of USE_DOCID where 0668 is a company code value.

 

* __GLOBAL_DATA__
data: h_mndid type sepa_mndid.
selection-screen: begin of block 1 with frame title title1.
selection-screen: begin of line, comment 1(31) para1, position 33.
PARAMETERS: p_year(4) type c OBLIGATORY default '2014'.
selection-screen: end of line.
selection-screen: begin of line, comment 1(31) para2, position 33.
PARAMETERS: p_test as checkbox.
selection-screen: end of line.
selection-screen: end of block 1.

at selection-screen output.
  title1 = 'User processing parameters'.
  para1 = 'Ref year for dummy usage document'.
  para2 = 'Data validation only'.

start-of-selection.

 

* __BEGIN_OF_TRANSACTION__
if infile-status = 'R'. "status revoked
  skip_transaction.
  write: /001 'Mandate:',infile-sepa_mndid,'Revoked mandate skipped'.
else.
  tables: SEPA_MANDATE.
  data: p_usage type SEPA_MANDATE_USE,
        h_sel_criteria like SEPA_GET_CRITERIA_MANDATE,
        h_mandates type SEPA_TAB_DATA_MANDATE_DATA,
        wa_mandates LIKE LINE OF h_mandates,
        h_emessage like BAPIRET1,
        h_mandatesfail type SEPA_TAB_MANDATE_KEY_EXTERNAL,
        h_usedate(8) type c.

  if not infile-usedate is initial. "input is DD-MM-YYYY
    replace all occurrences of regex '[^0-9]' in infile-usedate with ''.
*   rearrange DDMMYYYY into the internal format YYYYMMDD
    concatenate infile-usedate+4(4)
                infile-usedate+2(2)
                infile-usedate+0(2)
                into infile-usedate.
*   write: /001 'last used date:',infile-usedate.
    h_usedate = infile-usedate.
* Get the GUID of the mandate
    h_sel_criteria-snd_type = 'BUS3007'.
    h_sel_criteria-mndid = h_mndid.
    h_sel_criteria-anwnd = 'F'.
    CALL FUNCTION 'SEPA_MANDATES_API_GET'
      EXPORTING
        I_SEL_CRITERIA     = h_sel_criteria
      IMPORTING
        ET_MANDATES        = h_mandates
        E_MESSAGE          = h_emessage
        ET_MANDATES_FAILED = h_mandatesfail.
    if h_mandates is initial.
      write: /001 'Mandate:',h_mndid,'not migrated to SAP'.
    else.
      read table h_mandates into wa_mandates index 1.
      p_usage-MANDT = '360'.
      p_usage-MGUID = wa_mandates-mguid.
      p_usage-USE_DATE = h_usedate.
      p_usage-USE_DOCTYPE = 'BKPF'.
      concatenate '0668 9999999999 ' p_year into p_usage-USE_DOCID.
      if p_test <> 'X'.
        call function 'SEPA_MANDATE_ADD_USAGE'
          EXPORTING
            i_usage = p_usage.
        commit work.
      else.
        write: /001 h_mndid,p_usage-MGUID,h_usedate,p_usage-USE_DOCID.
      endif.
    endif.
  endif.
endif.

 

Conclusion

 

With a few recordings and some standard function modules, it is possible to migrate SEPA mandates (including usage information) with LSMW.

This blog also gives an example of how to use a dummy recording and direct update in LSMW

Introduction

 

Although SAP currently promotes “best practice data migration” using Business Objects Data Services, LSMW is still the tool of choice for many projects.

LSMW is free and simple to use and handles many things more or less automatically.

In this and other blogs I want to share some of my experiences from recent projects. I am not a programmer so any ABAP code that I show will not necessarily be the best that could be used. However, data migration is a one-off so it usually isn’t important if the code is beautiful and efficient (unless of course you have very large data volumes). I am assuming that you are already familiar with LSMW. If you are a beginner then there are plenty of other SCN posts that will help you.

 

How many LSMW programs to create an SAP project?


There is no standard SAP program, IDoc or BAPI available for the creation of an SAP project and its WBS elements, so recordings have to be used. The project builder transaction CJ20N is an “Enjoy Transaction” and is therefore not recordable.

 

The common approach is therefore to use CJ01 to create a project header and then to use CJ02 to add WBS elements. The question that then arises is how many recordings and how many LSMW programs to use.

 

I adopted the classic approach of first creating the project header in LSMW and then separately adding the WBS elements.

The problem with adding WBS elements is the positioning of the cursor. The elements are added in a table and with an LSMW recording you can’t reposition the cursor with program code. Typically when adding records to a table, you need as a minimum to have a recording to add the first element and a recording to add subsequent elements. For WBS elements you only need these two recordings if you set them up correctly. These two recordings can be combined into a single LSMW program.

 

The reason that I am writing this blog is that I have seen examples with 3 or 4 separate recordings and LSMW programs being used and this seems unnecessarily complicated to me.

 

So, in order to add WBS elements to the projects created by a previous LSMW program, you need two recordings of CJ02:

  • Recording to add the first WBS element to a project. This has to be separate because there is no parent WBS
  • Recording to add subsequent WBS elements

 

Both recordings are included in a single LSMW program. In LSMW on the Maintain Object Attributes screen use the More Recordings button on the line where you add the recording. In the pop-up box you can then add extra recordings:

 

 

The “top” recording to add the first WBS element is straightforward. Here are some example screen shots:

 

 

 

The second CJ02 recording is what gives the most trouble. The way to set it up is as follows:

  • On the first screen enter the project and the parent WBS of the WBS element that you want to insert and then press enter, for example:
    • Project = ZB.999999
    • Parent WBS = ZB.999999.001
  • On the next screen which shows some WBS elements press the last page button followed by the next page button. This ensures that the second row of the table is always the place where you will enter the data for the new WBS. (When you do the recording with only your top WBS element present this seems a bit strange, but it’s the way to do it)
  • On the (empty) second line of the table enter the level, the WBS element and the description. Then go to the WBS detail screen either via the menu Details – WBS Detail Screen or CTRL F9
  • Now you can enter data on each of the tabs that you are using. Make sure that you analysed all of the tabs needed before you make the recording to avoid having to redo the recording in the future!
  • When you have entered all the data press the save button

 

You now need to be a little bit careful with the assignment of field names to the recording. You can use the “default all” button but you must then change some of the field names afterwards. The reason is that POSID on the initial screen is the parent WBS but on subsequent screens it is the WBS that you are adding. You should also be aware that in the separate recording to add the top WBS element, POSID is consistently the WBS that you are adding.

 

The actual field names that you use don’t really matter as long as you are consistent. In my LSMW I also had some help fields as follows:

  • h_pspid is the PROJECT identification
  • h_posid is the parent WBS
  • h_ident is the WBS to be added

and in the recordings I used the same convention.

 

Here are some example screen shots of the recording:

 

 

 

 

After defining the recordings you need to define the input file. The input file for the program should be sorted into the sequence of the WBS elements:

STUFE   C(003)   Level

PSPID    C(024)   Project Definition

POSID   C(024)   WBS Element

 

The remaining input fields will of course depend on the attributes being used.

 

Since multiple recordings are being used, we need some ABAP code in the BEGIN_OF_TRANSACTION block:

  • Determine which recording to use. This is easy: If STUFE = 1 then it’s a top level WBS
  • Determine the value for the parent WBS element. Obviously this depends on the template for the project type. For example in my case the level 3 element ZB.999999.001.01 would have a parent element of  ZB.999999.001
  • It is easier to work with these codes without the “.” symbols. The function module CONVERSION_EXIT_PROJN_INPUT can be used for this, as sketched below
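For example, with the fixed mask of our project the parent WBS can be derived roughly as follows. This is only a sketch, assuming every WBS code follows the ZB.999999.001.01 pattern; h_posid and h_ident are the help fields described above:

* Derive the parent WBS by cutting the last segment off the mask.
data: h_len type i.
call function 'CONVERSION_EXIT_PROJN_INPUT'
  exporting
    input  = infile-posid
  importing
    output = h_ident.           "WBS to be added, without the '.' symbols
h_len = strlen( h_ident ).
case infile-stufe.
  when '2'.
    h_posid = h_ident(8).       "parent is the project-level code
  when '3'.
    h_len = h_len - 2.
    h_posid = h_ident(h_len).   "drop the last two-character segment
  when others.
    clear h_posid.              "a level 1 WBS has no parent
endcase.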

 

In my case all of the project types used had the same mask for the WBS structure, so it was fairly easy to code.

 

We also decided to determine in the LSMW program the values of the operative indicators using logical rules based on the project type.

 

 

The indicators were only set at the lowest level WBS element, and in order to know if the current element was at the lowest level, I implemented “read ahead” logic. I explained how to do this in a previous blog. With read ahead logic it is easy to decide if we are at the lowest level: if the current record does not go a level deeper than the previous one, then the previous record was at the lowest level, as sketched below. When read ahead logic is used we test this condition in the BEGIN_OF_TRANSACTION block and write the previous record there.
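A minimal sketch of that test (prev_stufe and h_lowest are assumed help fields; prev_stufe buffers the level of the previous record):

* BEGIN_OF_TRANSACTION: the buffered previous WBS element was at the
* lowest level if the new record does not go one level deeper.
if prev_stufe is not initial and infile-stufe le prev_stufe.
  h_lowest = 'X'.   "set the operative indicators on the previous record
else.
  clear h_lowest.
endif.
prev_stufe = infile-stufe.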

 

The LSMW was also designed to handle multiple project types where the attributes that need to be filled (and sometimes their values) differ per project type. In the BEGIN_OF_TRANSACTION block the project profile was retrieved from table PROJ. This can then be used in the coding per field of the recordings. The input file included the full set of attributes that might be used. In the spreadsheet that users used to provide the input, it was indicated which columns were needed per project type.

 

Conclusion

When set up correctly you only need three recordings and two LSMW programs to create projects and WBS elements. This is easier to manage during data migration.

R. Bailey

LSMW read ahead technique

Posted by R. Bailey May 5, 2015

Introduction

Although SAP currently promotes “best practice data migration” using Business Objects Data Services, LSMW is still the tool of choice for many projects.

LSMW is free and simple to use and handles many things more or less automatically.

 

In this and subsequent blogs I want to share some of my experiences from recent projects. I am not a programmer so any ABAP code that I show will not necessarily be the best that could be used. However, data migration is a one-off so it usually isn’t important if the code is beautiful and efficient (unless of course you have very large data volumes). I am assuming that you are already familiar with LSMW. If you are a beginner then there are plenty of other SCN posts that will help you.

 

One of the advantages of LSMW can sometimes be a disadvantage. LSMW controls the flow of data in the program generated in the convert step. It reads the input records, applies the conversion logic, writes (for each segment if there are multiple segments) a record to the output buffer and, after processing all records relating to a transaction, it writes the transaction. However, sometimes you would like to know the content of the next input record and use this information while processing the current record. Unfortunately when LSMW reads a record, the previous record has already been written and is no longer available. This can be solved by using a “read ahead” technique.

 

Use cases

Here are some examples of where a read ahead technique might be used:

  • Processing GL bookings with RFBIBL00 using flat file input. Normally you have a header record followed by items, so LSMW automatically detects each new header; with a flat file you have to detect the start of each new document yourself.
  • Processing WBS elements where operational indicators should be set on the lowest level. If the depth of the WBS structure is not fixed then you only know you reached the lowest level when you read the next record.
  • Processing vendor records from a legacy system where there are multiple records per vendor and you need to process all of the records before writing an output record.

All of the above occurred in a recent project of mine. I’ll now explain the technique using the RFBIBL00 example.


Worked Example

If you want to process GL bookings, AR open items or AP open items then SAP provides the standard batch input program RFBIBL00, which you can select in LSMW:

 

 

For the transfer of opening balances in our project the input file provided from the legacy system was a flat file containing a limited number of fields. The Oracle Balancing Segment in the input file is used to determine a Profit Centre. The input account is actually an Oracle GL account which is converted using a lookup table in LSMW.

 

 

The input file is sorted by Company Code, Currency Key and Oracle Balancing Segment. A separate GL document is written for each combination of these values. The document is balanced by a booking to an offset account. If the balances have been loaded correctly then the balance of the offset account will of course be zero. During testing the GL conversion table was incomplete so some code was added to allow processing even if some input records were invalid – in this case the offset account will have a balance but we can see what is processed.

 

The structure relations are as you would expect:

 

 

With a flat input file we need to determine for ourselves when the key has changed and we will only know this when we read the next record. Therefore we change the flow of control in the LSMW program so that we can "read ahead" to the next record.

 

LSMW normally writes an output record in the END_OF_RECORD block and a transaction in the END_OF_TRANSACTION block. With the read ahead technique we do this in the BEGIN_OF_TRANSACTION block. At this point we still have the previous converted record and the next input record is also available so we can check whether there is a change of key. There are two things that have to be handled:

  • When processing the first input record we should not write any output
  • When we get to the end of the input file the last record hasn’t been written and we won’t come back to the begin block so the last record won't get written

 

Let’s now look at the code for each block. Since an offset booking has to be written at various places, the code for this has been put into a FORM routine.

 

BEGIN_OF_TRANSACTION

 

* On change of company or currency code a new document is needed
* We write the balancing entry of the prior document here and
* a new header at end of record for BBKPF
if not prev_bukrs is initial and ( infile-bukrs ne prev_bukrs or
   infile-waers ne prev_waers or infile-balseg ne prev_balseg ).
  PERFORM OFFSET_ENTRY.
  h_writehdr = 'X'.    "Check this at end of BBKPF record
endif.

 

We have defined some variables to contain the previous key field values: prev_bukrs, prev_waers and prev_balseg. When we read the first record these have an initial value. Otherwise, if the value changes, we write the booking to the offset account and set a flag to write the header record for the new document.

 

END_OF_RECORD (BGR00)

 

* at_first_transfer_record.
if g_cnt_transactions_read = 1.
  transfer_record.
endif.
if g_cnt_transactions_group = 5000.
  g_cnt_transactions_group = 0.
  transfer_record.
endif.

 

BGR00 is the batch input session record. This is the standard coding except that we replaced the “at first” test with a test on count of transactions read.

 

END_OF_RECORD (BBKPF)

 

* On change of company, currency code or balancing segment
* start a new document
if h_writehdr = 'X' or prev_bukrs is initial.
* check prev_bukrs to get first header
  transfer_record.
  h_writehdr = ''.
endif.
* Set previous values here
prev_bukrs = infile-bukrs.
prev_waers = infile-waers.
prev_balseg = infile-balseg.

 

BBKPF is the document header. We write a header for the first record and whenever the key changes. We also update the previous key values here.

 

END_OF_RECORD (BBSEG)

 

if g_skip_record ne yes.
  transfer_record.
* Update running totals for the balancing item
  if INFILE-NEWBS = '40'.
    g_wrbtr_sum = g_wrbtr_sum + h_wrbtr.
    g_dmbtr_sum = g_dmbtr_sum + h_dmbtr.
  else. "Posting key 50
    g_wrbtr_sum = g_wrbtr_sum - h_wrbtr.
    g_dmbtr_sum = g_dmbtr_sum - h_dmbtr.
  endif.
  g_item_count = g_item_count + 1.
  if g_item_count = 949.   "Split the document after 949 items
    PERFORM OFFSET_ENTRY.
    g_item_count = 0. "Reset the item count after writing record
    transfer_this_record 'BBKPF'.  "Write header for next block
  endif.
endif.

 

If the record is valid (our program contains various validity checks) then an output record is written and the cumulative value in local and foreign currency is updated. This coding block also contains a document split. If there are more than 949 items then a balancing entry is written followed by a new document header.

 

END_OF_TRANSACTION

 

if g_flg_end_of_file = 'X'.
  PERFORM OFFSET_ENTRY.
endif.

 

This is where we handle the problem of the last record. LSMW contains a number of global variables, and a useful one that is not included in the LSMW documentation is g_flg_end_of_file. When this has the value X we have reached the last record and a final offset booking should be written.

 

FORM OFFSET_ENTRY

 

  if g_wrbtr_sum ne 0 or g_dmbtr_sum ne 0.
* Offset entry not required if the document balances!
    bbseg-newko = p_offset.  "Use suspense account
    bbseg-zuonr = 'DATA MIGRATION'.
    bbseg-sgtxt = 'Balancing entry'.
    bbseg-prctr = h_prctr.
    bbseg-xref2 = '/'.  "Ensure this is empty here
    bbseg-valut = '/'.  "Empty on the offset booking
    bbseg-mwskz = '/'.  "Empty on the offset booking
*   bbseg-xref1 = '/'.  "Empty on the offset booking
    if g_wrbtr_sum ge 0.
      bbseg-newbs = '50'.    "Credit entry
      bbseg-wrbtr = g_wrbtr_sum.
    else.
      bbseg-newbs = '40'.    "Debit entry
      bbseg-wrbtr = - g_wrbtr_sum.
    endif.
    if g_dmbtr_sum ne 0.
      if g_dmbtr_sum ge 0.
        bbseg-dmbtr = g_dmbtr_sum.
      else.
        bbseg-dmbtr = - g_dmbtr_sum.
      endif.
    endif.
    translate bbseg-wrbtr using '.,'.
    translate bbseg-dmbtr using '.,'.
    g_wrbtr_sum = 0.
    g_dmbtr_sum = 0.
    g_skip_record = no.  "LSMW carries over status of previous rec!!
    transfer_this_record 'BBSEG'.
  endif.
  transfer_transaction.

 

There is no need for an offset booking if by chance the document is already in balance. Otherwise we create the offset booking. The offset account is an input parameter in our program and some other fields have fixed values. Our system is configured to use a decimal comma, so we need to change the value fields to what is expected on an input screen. At the end we write the balancing record and the transaction.

 

Conclusion

This is a simple technique that can be useful in a variety of situations.

As SAPPHIRE starts in Orlando tomorrow, we, customers and partners, will be bombarded with information about S/4HANA. It is clearly dead center in the SAP strategy for the coming years.

 

The value of a renewed Business Suite will become clearer and clearer the more we hear about Simple Finance and now Simple Logistics. The roadmap looks very exciting, with Fiori apps covering great scope and with the upcoming Business Suite merge (different components of the Business Suite, like CRM and SCM, will now merge back with ERP to form a single system).

 

This transformation is still in its early stages. Financials are far ahead while logistics is coming soon. The roadmap for the “repatriation” of external Business Suite functions back to the ERP core is just starting. I foresee a roadmap of 3-5 years until we see a complete fusion.

 

But what is clear is that SAP has addressed two of its most important issues: simplification of the SAP footprint AND the user interface. We will soon see customers running ERP on HANA (S/4HANA) with complete Business Suite functions like Global ATP and eWM back in the core ERP. No more parallel SCM and/or CRM landscapes. No data replication. Shorter implementations, lower TCO.

 

So, all this is great, but there is a catch: current SAP customers looking at jumping to S/4HANA need to revise their current SAP solution and consider returning (as much as possible) to standard functionality.

 

Customers have been running SAP for many years (some for decades), so they built on the Business Suite for two reasons: either to implement something that was not available at the time (early customers) or to implement customer-specific requirements.

What we see more clearly now than before is that the price to implement and maintain these “customizations” is much higher than just the development hours. They may hinder adoption of future functionality. This is exactly what is happening now. Here are two major examples:

 

  1. Custom code – Customers often support thousands of custom-built programs. Migrating to ERP on HANA requires a revision of this code so it can run properly (not talking about performance here; some DB practices differ, and bad ABAP code of today will NOT run correctly on HANA. It NEEDS fixing).
  2. The more custom code and advanced configuration, the farther a customer will be from adopting two of the most important values of the S/4HANA proposition: Guided Configuration and standard Fiori apps.


So, now that it is more evident than ever that it pays to adopt standard practices and reduce customization to the max, isn’t it time to adopt SAP's pace of innovation and stop trying to build IT solutions ourselves? Wasn’t THAT the original proposition of buying ERP software in the first place?

Hi fellows,

 

I want to share with you an experience about milestones after Go Live in the business where I work.

 

We went live 9 months ago, and since then we have had many problems with users, internal communication, process speed, and productivity. Additionally, the business had changed its natural processes for other processes. As the days passed, the problems kept increasing.

 

The CEO of the factory is a person very committed to the process and to the SAP system, and he always called meetings to communicate to all managers the importance of using SAP correctly; however, after each meeting the clutter returned.

 

That was the moment when I became aware of the importance of Change Management; we were spending all day, every day, supporting different areas such as FI, CO, PS, MM, SD, PP, PM, ETM, HCM, QM.

 

After that, over the following days, we started a program with Change Management at its core. With this tool the improvement was evident in:

 

- Processes

- Staff communication

- Performance and workability

- Knowledge of the processes & business

 

I simply wanted to share this experience about the importance of tools such as Change Management, which are very interesting and helpful for our work.

 

Currently we are designing the roll-out to another centre of the company and are going to start implementing WPB, surely including Change Management in the Lessons Learned book.

 

Thanks

In December 2014, I had the chance to attend a press/analysts presentation at Cirque Du Soleil’s HQ on their Cloud adoption and overall IT strategy. Thanks SAP for the invite.

 

 

Even though I am from Montreal and Cirque is a client of ours, I was very surprised to see the level of maturity of the client and their path forward.

 

 

First, the client's current SuccessFactors adoption was presented.  SuccessFactors apps were being deployed fast and across departments. The reasons were two: a fit with the out-of-the-box solution and the simplicity of cloud consumption.

 

 

Another very important point presented was the adoption of the Ariba sourcing tool. I can’t recall the exact number, but a very large part of the high-dollar RFQ sourcing is now processed in the Ariba tool, including most freight and hotel sourcing. This is huge!  It was by far the highest adoption I have seen to date.

 

 

But what caught my attention the most was that the customer has reached a point in their evolution where they realize the importance of sticking to standard processes. What we often refer to as “vanilla” in SAP actually yields much faster adoption, leaner/simpler deployments, and the best position to benefit from future functionality. In the SAP market, SAP ERP, with its incredible flexibility, has allowed us to design very “tailor-made” solutions, very custom processes, and unique solutions. Do the custom processes, often referred to as “competitive advantage”, really pay off? Customers often look only at the development cost of building a custom solution. But it actually costs a lot more than that. We should factor in the upgrade hiccups/retests/adjustments and, more importantly, the fact that the customer may often end up on the wrong track (bad designs etc.).

 

 

The head of Cirque’s Procurement clearly stated the importance of returning to standard. They are impressed by how much value can be gained from little investment by using the out-of-the-box cloud apps, but they also think that returning to standard processes will allow them to further integrate with the Ariba Network.

 

 

I wanted to bring the point about return to standard because SAP is introducing a lot of innovation these days. S/4HANA is a huge game changer. Even the base ERP on HANA brings a lot of benefits. But both will prove challenging to migrate to for very customized customers. In the case of S/4HANA, the benefits of Guided Configuration and the newly delivered Fiori apps will be much better adopted by customers close to standard and best practices.

 

 

Will 2015-2016 be the years when customers start seeing the challenges of running very customized solutions? Wasn’t the adoption of out-of-the-box solutions the initial motivation to jump into SAP in the first place?

 


DISCLAIMER: This is not a challenge ONLY for SAP customers. In fact, any customer that runs a packaged software will face the dilemma of Standard vs customized. It is just that SAP is introducing so much innovation that these customers will be the first to realize it.

In many projects we have needed status management for our application. Many times we end up creating a new framework or functionality for it, not knowing that there is a standard feature for this. Here is what you need to do to configure and set up a new one.

 

  1. Define a new Object type (Tcode BS12)

Create a new object type which identifies a status object.

bs12.png

 

2. Maintain status profile (BS02)

Create a new status profile, which can be made up of one or more statuses. Each status has a number, and you also specify the lowest and the highest status you can navigate to from it. It is not possible to specify individual status transitions such as “from A you can move to C but not to D”, but if you place the statuses in the right order it should be possible to carefully define such transitions.

bs02.png

 

3. Maintain Business transactions (BS32)

The business transactions are like actions in the system. Some of the actions are possible only in certain statuses, and some of them can also cause a status change.

 

bs32.png

 

 

4. Business transactions valid for Object type

In transaction BS12, double click on the object type and select all the business transactions that are eligible for this object type.

img4.png

 

5. Transaction Control

In BS02, double click on each of the statuses configured to define the transaction control.

It is possible to specify which business transactions are possible in a given status and which are not.

As a second step, it is also possible to specify that a certain transaction sets a certain status.

 

img5.png

 

ABAP Code for status transition:

* Run business transaction in simulation/update mode
CALL FUNCTION 'STATUS_CHANGE_FOR_ACTIVITY'
  EXPORTING
    check_only           = iv_simulation_flag
    objnr                = lv_objnr
    vrgng                = lv_biz_transaction
  EXCEPTIONS
    activity_not_allowed = 1
    object_not_found     = 2
    status_inconsistent  = 3
    status_not_allowed   = 4
    wrong_input          = 5
    warning_occured      = 6
    OTHERS               = 7.
IF sy-subrc <> 0.
  MESSAGE e075(zawct_msg) WITH lv_biz_transaction INTO lv_message.
  PERFORM bapi_message_collect CHANGING et_messages.
  RETURN.
ENDIF.
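To read the statuses an object currently has (for example to display them or to drive your own logic), the standard function module STATUS_READ can be used. A minimal sketch, assuming the same object number lv_objnr as above:

* Read all active statuses of the object into a JEST-type table.
DATA: lt_status TYPE STANDARD TABLE OF jest,
      ls_status TYPE jest,
      lv_stsma  TYPE jsto-stsma.

CALL FUNCTION 'STATUS_READ'
  EXPORTING
    objnr            = lv_objnr
    only_active      = 'X'
  IMPORTING
    stsma            = lv_stsma     "status profile of the object
  TABLES
    status           = lt_status
  EXCEPTIONS
    object_not_found = 1
    OTHERS           = 2.
IF sy-subrc = 0.
  LOOP AT lt_status INTO ls_status.
    WRITE: / ls_status-stat.        "user statuses start with 'E'
  ENDLOOP.
ENDIF.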

I see many questions posted on SCN that could be resolved by a single note or KBA.  Considering the number of notes/KBAs, I know that it can be quite difficult to find the right note/KBA to resolve a specific problem.  So I'd like to share some tricks and experiences for finding suitable notes/KBAs.

 

Where is the Note/KBA search tool?

  1. Go to the link :Home | SAP Support Portal
  2. Click the link "Note and KBA  search tool" on the right
    support portal.png
  3. Click the button "Launch the SAP Note and KBA search"
    search tool.png
  4. The search tool will be opened.
    search criterias.png


What search options can be used?

  1. There are 3 checkboxes for option "Language": German, English and Japanese

    If the developers are located in Germany, they tend to write original notes in German and then translate the notes to English or Japanese.  Since translation may need time, if you search with the option English or Japanese, you may not get a full list of the newest notes/KBAs.
    So my suggestion is :
    If you can read German and you know the developers for the application area are located in Germany, use the language "German" to search. 
    If you cannot read German, but you suspect this is a new bug/error which should have been covered by a note, you can still search with the language "German", but search with key words like program name, field name, error message number etc.
  2. Choose your search terms carefully.

    Use error message number instead of short text.
    If an error occurs, you can find the error message number in the long text by double clicking the error.  For example, search by "CP466" or "CP 466".
    error.png
    Use field technical name instead of field description
    For example, use ELIKZ instead of "delivery completed".

    Use full name instead of abbreviation.
    eg. purchase requisition instead of PR, planned independent requisition instead of PIR

    Use function modules/programs if you know which is called and is behaving strangely
    eg. error occurs when converting planned orders to production orders, the function module CO_SD_PLANNED_ORDER_CONVERT is called.

    Use different t-codes
    eg. although error occurs in CO02, you can also try to use t-code CO01 during search.

    For short dumps, use the key words suggested in the dump.
    You can usually see the following statement in the dump:
    If the error occurs in a non-modified SAP program, you may be able to find an interim solution in an SAP Note. If you have access to SAP Notes, carry out a search with the following keywords:
  3. Choose a right application area to restrict the selected note/KBA numbers, or do not enter the application area to expand the selection to all components (this could be useful if you don't know the right component or the issue crosses many applications).
    You can simply use a wildcard * after a main application (eg. PP* or PP-SFC*) if you are not sure about the sub-components.
  4. Choose the validity of your system release in the field "Restriction" by selecting "Restrict by Software Components".
    If the problem only appears after a system upgrade, it's better to specify your system release to filter out old notes.
    You can find your own system release with t-code SPAM.  Click the button "Package level".  Usually, for application areas like PP and QM, you should look for the component SAP_APPL.
    package level.png
    During search, input the software component, release from, release to. ( "From support package" is optional.)
    search.png
    Here is the result from the above search criteria:
    result.png
  5. Choose a note/KBA category.
    If you are just looking for some explanation of system logic, you can choose category consulting and FAQ.  They usually don't include any coding.
    If you suspect a system error (for example, the issue only appears after a system upgrade), you can choose the categories "program error" and "modification".  They usually contain correction codes.

 

 

Common questions regarding notes/KBAs:

 

The note contains code corrections.  How can I see the codes?

Take the note 1327813 for example.

In the section " Correction Instructions", you can find a correction number corresponding to the valid system release.  Click  the number. A new window will be opened.  You can then click the object name to see the codes.correction instructions.png

 

 

How do I know if the note is valid for my system?

Take the note 1327813 for example.

See the section "Validity".  It shows the main SAP_APPL release for which the note is valid.
validity.png
See the section "Support Packages & Patches".  It lists the exact SP and patch that imports the note.  If your system release is lower than the listed SP and patch level, the note should be able to be implemented by SNOTE.
sp and patch.png

The note includes Z reports.  How to implement it?

Many notes contain Z reports to correct inconsistencies. For example the notes listed in blog Often used PP reports in SAP Notes

These Z reports usually cannot be imported by SNOTE.  You have to create them manually in t-code SE38 and copy in the source code from the notes.

 

Where are the related notes?

In the section "References".  You can see which notes are referenced to and referenced by.  Sometimes they can be quite useful to see other related notes.

 

Also refer to the following KBAs about searching notes/KBAs:

 

2081285 - How to enter good search terms to an SAP search?

1540080 - How to search for KBs, KBAs, SAP Notes, product documentation, and SCN

 

Amazing tool for automatic note search:

 

I have to mention the amazing tool that enables automatic note search.  It enables the user to find out automatically which note corrections are missing in your system.  See the following note:

1818192 - FAQ: Automated Note Search Tool

 

Be aware that this tool can only find notes with correction codes.  So if you are looking for consulting and explanatory notes/KBAs, you still have to use the tips above to search by yourself.

 

Tips to search on SCN, see document:

My tips on searching knowledge content the SCN

We present the installation of an SAP system with high availability for the ABAP instance.

This installation will be on Windows Server 2012 R2 as the platform and SQL Server 2012 SP1 at the database level.

We will use Software Provisioning Manager 1.0 SP06 and Kernel 7.2.

We have divided this installation into the following parts:

Part 1- Install SQL Server Cluster.

Installing SAP HA on SQL Part1: Install SQL Server Cluster.

Installing SAP HA on SQL Part1: Install SQL Server Cluster step2

Part 2- Install First Cluster Node.

Install First Cluster Node.

Part 3- Install DB instance.

Install DB Instance

Part 4- Install Additional Cluster Node.

Part 5- Install Central Instance.

Part 6- Install dialog Instance.

We will start directly now with Part 4 - Install Additional Cluster Node.

 

Using the Software Provisioning Manager with the latest update (unCAR it) and the latest kernel for the same version, start by choosing:

System Copy --> MS SQL Server --> Target System Installation --> High-Availability System --> Based on AS ABAP --> Install Additional Cluster Node.

Additional Cluster Node 01.PNG

In the first step, you must choose the cluster group that you created in 'Install First Cluster Node' and choose the local drive on the additional cluster node on which to install the local instance.

Additional Cluster Node 02.PNG

Enter the password for the SAP system administrator and the service user.

Additional Cluster Node 03.PNG

Specify the location of the Unicode NetWeaver kernel and select it.

Additional Cluster Node 04.PNG

 

Additional Cluster Node 05.PNG

Additional Cluster Node 06.PNG

In this step you must configure your swap size as recommended.

Additional Cluster Node 07.PNG

 

Additional Cluster Node 08.PNG

Here you can choose the domain in which the SAP system accounts for the SAP Host Agent are created.

Additional Cluster Node 09.PNG

Enter the password for the operating system users on the additional node.

Additional Cluster Node 10.PNG

In this step, you must enter the instance number for the Enqueue Replication Server.

Additional Cluster Node 11.PNG

This is the summary of all the installation settings selected before the installation starts.

Additional Cluster Node 12.PNG

Start the installation of the additional cluster node.

Additional Cluster Node 13.PNG

 

Additional Cluster Node 16.PNG

Said Shepl

Install DB Instance

Posted by Said Shepl Dec 9, 2014

We present the installation of an SAP system with high availability for the ABAP instance.

This installation will be on Windows Server 2012 R2 as the platform and SQL Server 2012 SP1 at the database level.

We will use Software Provisioning Manager 1.0 SP06 and Kernel 7.2.

We have divided this installation into the following parts:

Part 1- Install SQL Server Cluster.

Installing SAP HA on SQL Part1: Install SQL Server Cluster.

Installing SAP HA on SQL Part1: Install SQL Server Cluster step2

Part 2- Install First Cluster Node.

Install First Cluster Node.

Part 3- Install DB instance.

Part 4- Install Additional Cluster Node.

Part 5- Install Central Instance.

Part 6- Install dialog Instance.

We will start directly now with Part 3 - Install DB Instance.

We must prepare our DB source first, either a DB backup or a DB export created using the SWPM export tool.

You must download the Software Provisioning Manager with the latest update (unCAR it) and the latest kernel for the same version, and start by choosing:

System Copy --> MS SQL Server --> Target System Installation --> High-Availability System --> Based on AS ABAP --> Install DB Instance.

DB Instance 01.PNG

 

DB Instance 02.PNG

We will choose 'Standard system copy/migration (load-based)' because we used the DB export tool.

If you restored your DB using SQL Server, you can use the other choice, 'Homogeneous system copy'.

 

DB Instance 03.PNG

Choose the MS SQL Server instance name that you created during the SQL Server cluster installation.

DB Instance 04.PNG

 

DB Instance 05.PNG

In this step, we provide the SAP installation with the location of the SAP kernel CD.

DB Instance 06.PNG

Choose the LABELIDX.ASC file for the required kernel.

DB Instance 07.PNG

 

DB Instance 08.PNG

In this step, we provide the SAP installation with the location of the DB export, which we exported from the source system.

DB Instance 10.PNG

In this step, you enter the password of the SAP DB schema.

DB Instance 11.PNG

You can choose the required number of data files according to the number of CPU cores of your server:

large system (16-32 CPU cores), medium system (8-16 CPU cores) and small system (4-8 CPU cores).

DB Instance 12.PNG

 

DB Instance 13.PNG

 

DB Instance 14.PNG

In this step, you can enter the number of parallel jobs to run at the same time.

DB Instance 15.PNG

In this step, you select the kernel database .SAR file to unpack into the kernel directory.

DB Instance 16.PNG

 

DB Instance 17.PNG

 

DB Instance 18.PNG

We received the following error during the installation:

DB Instance 20.PNG

We used the following link and SAP Note 455195 - R3load: Use of TSK files to solve this issue:

SWPM: Program 'Migration Monitor' exits with error code 103

DB Instance 25.PNG

The problem was solved and the installation continued.

DB Instance 27.PNG

 

DB Instance 29.PNG

This step shows that the installation completed successfully.

DB Instance 30.PNG

 

Regards

Said Shepl

We present the installation of an SAP system with high availability for the ABAP instance.

This installation will be on Windows Server 2012 R2 as the platform and SQL Server 2012 SP1 at the database level.

We will use Software Provisioning Manager 1.0 SP06 and Kernel 7.2.

We have divided this installation into the following parts:

Part 1- Install SQL Server Cluster.

Installing SAP HA on SQL Part1: Install SQL Server Cluster.

Installing SAP HA on SQL Part1: Install SQL Server Cluster step2

Part 2- Install First Cluster Node.

Part 3- Install DB instance.

Part 4- Install Additional Cluster Node.

Part 5- Install Central Instance.

Part 6- Install dialog Instance.

 

 

We will start directly now with Part 2 - Install First Cluster Node.

You must download the Software Provisioning Manager with the latest update (unCAR it) and the latest kernel for the same version, and start by choosing:

System Copy --> MS SQL Server --> Target System Installation --> High-Availability System --> Based on AS ABAP --> First Cluster Node

First Cluster Node 01.PNG

 

First Cluster Node 02.PNG

 

We received the following error:

We solved this issue by following SAP Note 1676665:

First Cluster Node 02 Sol.PNG

Download Vcredist_x64 and install it.

In this case, install the Microsoft Visual C++ 2005 Service Pack 1 Redistributable Package ATL Security Update, which is available at: http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=14431

Retry the installation with SWPM.

 

First Cluster Node 03.PNG

 

First Cluster Node 04.PNG

 

First Cluster Node 05.PNG

 

Note: In this step, you must create an A record in your DNS for the SAP virtual instance host name:

First Cluster Node 06.PNG

 

First Cluster Node 07.PNG

 

First Cluster Node 08.PNGFirst Cluster Node 09.PNG

 

First Cluster Node 10.PNG

First Cluster Node 11.PNG

 

We received this error because we chose Cluster Disk 1, which is assigned to SQL Server. We then chose Cluster Disk 3 instead, which is available storage, as follows:

First Cluster Node 12.PNG

 

First Cluster Node 13.PNG

First Cluster Node 14.PNG

 

First Cluster Node 15.PNG

Select the kernel LABELIDX.ASC file.

First Cluster Node 16.PNG

 

First Cluster Node 17.PNG

 

First Cluster Node 18.PNG

Reconfigure the swap size at the OS level.

 

First Cluster Node 20.PNG

 

First Cluster Node 09.PNG

First Cluster Node 22.PNG

 

First Cluster Node 23.PNG

 

First Cluster Node 24.PNG

 

First Cluster Node 25.PNG

 

First Cluster Node 26.PNG

 

First Cluster Node 27.PNG

 

First Cluster Node 28.PNG

LSMW.  I wonder where that 4 letter acronym ranks in terms of frequency of use here on SCN.  I'm sure it's in the top 10 even with stiff competition from SAP, ERP, BAPI, ABAP, and some others.

 

Why is that?  Well, it's a very useful tool and comes up frequently in the functional forums.  I remember when I got an email from a fellow SAP colleague introducing me to it.  That was back sometime in the fall of 1999 but I know version 1.0 came out a year earlier and was supported as far back as R/3 3.0F.  I dove into it and the guide that SAP had published and it was really great.  I could see immediately that for basic data conversions, I could handle the entire conversion process without the help of a developer.  Back in 1998, that was a fairly big deal and one that I'm sure the ABAPers had no problem ceding territory in.

 

Just a year earlier I was using CATT to do legacy conversion.  It had a similar transaction code based recording mechanism, a way to define import parameters, and a loading mechanism to map a .txt file to those parameters.  But CATT was not designed specifically for data conversion so it could be a pain to deal with.  In particular, tracking load errors was very tedious which required you to do a large number of mock loads on your data to ensure that it was perfect.

 

My History with LSMW

Back in 1999, it was obvious to me that LSMW was a big improvement over CATT for a few reasons:

  • I could incorporate standard load programs and BAPIs. Using screen recordings was no longer the only way to load data.  I hate screen recordings.  They eventually break and you have to handle them with kid gloves at times... you have to trick them into handling certain OK codes or work around validations/substitutions.
  • LSMW allowed you to use screen recordings as a way to define your target structures.  I love screen recordings!  Why?  Because, as a last resort, they let me change any record in the system using an SAP supported dialog process.  If you can get to it manually at a transaction code for a single record, then you can create/change/delete that same data in batch using a custom screen recording.
  • I could do the transformation within SAP rather in Excel.  That saved a lot of time especially if I had certain transformations (i.e., a cost center lookup) that were used in different loads.  Define once, use multiple times.
  • I could load multiple structures of data.  Again, this saved time because I didn't have to rearrange the data in Excel to force it into a particular structure format which might contain numerous fields that I had no interest in populating.  That left my source Excel file relatively clean which was far easier to manage.
  • Organization.  LSMW had a way to categorize each load by Project, Sub-Project, and Object.
  • No more developers!  While the tool allows you to insert custom logic, it's not required to do so.  If you know your data well enough and you have a typical legacy source file, there's no reason why a functional person such as myself can't load everything on his own.

 

 

Once word spread about LSMW inside SAP, it seemed that every functional consultant I worked with was using it.  Eventually we started using it for purposes other than legacy data conversion.  Mass changes, mass creation of new data that wasn't legacy related, etc.  Other non-functional areas used it too; I've seen security teams upload mass changes to userID records.

 

 

This is how I Really Feel

But... I didn't write this to praise LSMW.  Now, in the year 2014, I can't stand working with it.  Its limitations have been bugging me for years and SAP hasn't done anything to improve it.  My gripes:

 

  1. Poor organization.  The simple Project / Sub-Project / Object classification is too limiting.  It seems to be a quasi hierarchy of the individual LSMW objects... but why not release a fully functional hierarchy?  If we had a real hierarchy we could use multiple levels, parent-child relationships, drag-n-drop, etc.  There are some customers that don't use it that much and may only need a single level deep hierarchy.  Others might need 5 or more.  Either party is currently forced into using the existing 2 deep classification of Project / Sub-Project.  What I most often see is a horrible organization of the underlying LSMW objects.  That fault lies with the customers for not enforcing and administering this hierarchy.  But if the tool made it easier to classify and organize the various scripts, maybe it wouldn't be as messy as I've come to expect.
  2. The prompts are inconsistent. This is a minor gripe but the function keys are different per screen.  To read/convert your data file you navigate to a selection screen (a very limited one) and press F8 to execute.  To read the contents of those data files within SAP, you get a pop-up window and have to hit Enter to execute it.  No one limits the reading to a selection of records (or, very rarely do they) so I could do away with that prompt entirely.
  3. Another personal gripe but I'm so tired of the constant Read Data, Convert Data, Load Data...  Whoops!  Error!  Change in Excel, save to .txt, Read Data, Convert Data, etc.  The process has too many steps and I have to flip between SAP, Excel, my text editor, and my file manager (Directory Opus).  Or, why can't I link directly to my Excel file and view it within SAP?
  4. There isn't a good way to quickly test or validate some basics of the data.  I get that each area and load mechanism is different (i.e., BAPI versus screen recording) but there should be a quick way within the tool to better validate the data in a test format so that we know right away if the first 10 records are OK.
  5. Speed.  I had some tweets with Tammy Powlas this past weekend.  She used RDS for an upload (Initial Data Migration Over, The Fun Has Just Begun).  The upload of 600k records took an hour but I highly doubt that LSMW could beat that.
  6. The solution was great back in 1998 for the reasons I noted above.  Back then I would happily double click between my source and target fields, assign rules, create lookup tables, etc.  But it's 2014.  I'd rather use a Visio type of tool to maintain my data relationships.
  7. Lack of Development.  Here's the version we are running at my customer.  2004...  seriously?  No changes in 10 years?  I recall the early versions of LSMW... v1, v1.6, v1.7... but I don't remember there being a v2 or v3.  So how did we jump from v1.7 to v4 and what are the delta changes?  Seems like some upper management mandated creative version management to me.  My guess is that LSMW has been upgraded based on changes to WAS and to keep it along with ERP 5.0 to 6.0... but the product itself hasn't changed in terms of functionality.  LSMW still feels like a v2 product to me.

 

screenshot - 2014.11.12 - 08.31.12.png

 

 

 

My Biggest Gripe

But my biggest gripe isn't with the tool.  It's how it's used by the SAP community.

 

It seems that every consultant I know uses LSMW as their go-to source for all data changes.  I've walked into customers that have been using an LSMW to maintain some object for 10+ years!!!!  How the heck can something like that happen?  This is an area where LSMW's flexibility works against it... or rather, works against the customer's long term success with SAP.  The problem here is that it allows us functional folks to quickly develop a 'tool' to maintain data.  It's the quickest way to develop a solution on the Excel-to-SAP highway that accountants et al. need throughout the year.  For a truly ad-hoc requirement to do just about any process in SAP based on data in Excel, it works fine.  I don't have an issue with that and would recommend LSMW in those appropriate cases.  But it's not a long term solution.  Period, end of story.

 

 

Other Solutions

Mass Maintenance Tool

If you have a recurring need to mass change master data, you should be using the mass maintenance tool.  Just about every module has developed a solution using this tool to change the most important master data records in the system.

 

screenshot - 2014.11.12 - 08.56.29.png

 

 

Be Friendly to your ABAPer

Anyone heard of a BAPI?  If you have a recurring need to upload transaction data or make changes to certain POs, sales orders, etc, or have a master record not in the list above, there is a BAPI that will do that for you.  Get with your ABAPer, develop a suitable selection screen, get a test-run parameter on there, get a nice ALV based output report, and then get the tcode created.  Done...  that's a good solution using an SAP supported protocol that is far better, safer, consistent, and easier to work with than a screen based recording that dumps your data into BDC.  In my opinion, if part of your solution has the letters 'SM35' in it, you've done something wrong.
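As a rough illustration of what I mean, here is a minimal skeleton of such a program (the report name, the placeholder BAPI call and the file-reading step are all hypothetical; substitute the real BAPI for your object and its parameters):

REPORT z_mass_change.    "hypothetical example, not a finished tool

PARAMETERS: p_file TYPE rlgrap-filename LOWER CASE, "file with the change data
            p_test AS CHECKBOX DEFAULT 'X'.         "test run: validate only

DATA: lt_return TYPE STANDARD TABLE OF bapiret2,
      ls_return TYPE bapiret2.

START-OF-SELECTION.
* 1. Read p_file into an internal table (not shown).
* 2. For each record, call the BAPI for your object. The name below
*    is a placeholder - use the real BAPI and fill its parameters.
*   CALL FUNCTION 'BAPI_<OBJECT>_CHANGE'
*     EXPORTING ...
*     TABLES return = lt_return.
* 3. Commit only when this is not a test run.
  IF p_test IS INITIAL.
    CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
      EXPORTING
        wait = 'X'.
  ELSE.
    CALL FUNCTION 'BAPI_TRANSACTION_ROLLBACK'.
  ENDIF.
* 4. Show the messages (a simple list here; use ALV in a real tool).
  LOOP AT lt_return INTO ls_return.
    WRITE: / ls_return-type, ls_return-message.
  ENDLOOP.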

 

Why would anyone recommend to a customer that they use this crummy process (read data, convert data, display converted data...) as the long-term solution for making changes like this?  That's not a solution; it's a lame recommendation.



Final Word

LSMW and other similar screen-based recording tools (WinRunner et al.) are flexible, and it's tempting for people... and I'm talking primarily to the consultants out there who over-use and over-recommend LSMW... to keep going back to them.  It's a useful tool, but when you don't have enough tools in your toolbox you're limited in options and you keep going back to what you know.

 

Have you heard of the phrase "When you have a hammer, everything looks like a nail"?  It comes from noted psychologist Abraham H. Maslow in his 1966 book The Psychology of Science.

 

[Image: Maslow quote.png]


His quote is also part of something called the Law of the Instrument.  A related concept is the notion of the Golden Hammer, which was written about in AntiPatterns: Refactoring Software, Architectures, and Projects in Crisis by William J. Brown, Raphael C. Malveau, Hays W.…  The book covers the factors that come up repeatedly in bad software projects.  Among them is what the authors call the Golden Hammer: "a single technology that is used for every conceivable programming problem".

 

LSMW's time as my hammer of choice passed a long time ago.  It's a useful tool and should be in everyone's toolbox but we shouldn't use it unless there is an actual nail sticking out.

 

[Image: Golden_Hammer_April_2014.png]

Data conversion in SAP project - continuation


Introduction:

I have recently written a blog about the data conversion process, in which I specified the major basic steps of this important process (Data conversion in SAP project).

It's advisable to read that blog before this one.  As my PM project moves successfully forward, I have discovered a new, very important step in this critical process that I wish to share: data cleansing.

 

Data cleansing step:

After you have finished analyzing the errors in the Excel file, you will discover that some conversion failures are due to "garbage" data in the customer's legacy system.

It's important to note that during this step the legacy system is probably still active (users are still working in it).

As I explained in the previous blog, the Excel file stores the output result from Process Runner next to each record that was uploaded to SAP, so you can see the reason for failure next to each record.

Step 1: Identify and filter all records whose failure reason is "garbage" data.

Example from my project: I tried to upload 2,000 records of the customer's machines and tools, which are PM equipment.  The two major reasons for failure were: 1. Material X doesn't exist in SAP (the material number was a field in every legacy equipment record and it has to be valid in SAP).

2. The short text description is too long (I mapped the legacy equipment short text to the SAP equipment short text; the problem is that the SAP field is limited to 40 characters).
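
As a side note, once the common failure reasons are known, a quick pre-validation of the extract can catch them before the upload.  Below is a minimal ABAP sketch, assuming the Excel extract has been loaded into an internal table; the record structure and field names are hypothetical, and the two checks simply mirror the failure reasons above.

* Hedged sketch: record structure and field names are hypothetical
TYPES: BEGIN OF ty_rec,
         equnr_leg TYPE c LENGTH 18,   " legacy equipment id
         matnr     TYPE matnr,         " material referenced by the record
         shtxt     TYPE string,        " legacy short text
       END OF ty_rec.

DATA: lt_recs  TYPE STANDARD TABLE OF ty_rec,
      ls_rec   TYPE ty_rec,
      lv_matnr TYPE matnr.

LOOP AT lt_recs INTO ls_rec.
* Reason 1: the material must already exist in SAP (table MARA)
  SELECT SINGLE matnr FROM mara INTO lv_matnr
    WHERE matnr = ls_rec-matnr.
  IF sy-subrc <> 0.
    WRITE: / ls_rec-equnr_leg, 'material not found in SAP:', ls_rec-matnr.
  ENDIF.
* Reason 2: the SAP equipment short text is limited to 40 characters
  IF strlen( ls_rec-shtxt ) > 40.
    WRITE: / ls_rec-equnr_leg, 'short text exceeds 40 characters'.
  ENDIF.
ENDLOOP.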

Step 2: Send your customer the Excel records that were filtered in the previous step, so they understand why those records will not be transferred to SAP in the future.

Remark: This conversion process, including the cleansing step, begins in the SAP Dev environment and then moves to the other SAP environments: Trn, QA, Pre-prod and Prod.  Each time, the process runs from beginning to end as a full cycle, and the data is always extracted from the legacy production environment.

Step 3: Your customer decides what to do with those records.  They might direct you to erase them from the Excel file, or they might decide to cleanse the relevant data in the legacy system.  Only the customer is responsible for cleansing data in the legacy system; you can't do it for them.

Step 4: After they decide what to do with those records, repeat steps 6, 4 and 5 from the previous blog: delete and archive all data that was uploaded in this cycle (to "make room" for the same but cleaner data), run the extraction program again to extract the cleaner data from the legacy system, upload it to SAP, analyze any new failures, and so on.

On average, you should expect to execute the cleansing step along with steps 4-6 two or three times until the data is uploaded to SAP without failures.

Data conversion in SAP project - conversion from legacy system to SAP ECC.


Introduction:

I would like to share my experience of the data conversion process with the SAP community.  Data conversion is one of the most critical processes in successful SAP implementation projects.  It is part of the realization step in the ASAP methodology (step 1: project preparation; step 2: blueprint; step 3: realization; step 4: final preparation; step 5: go-live).  SAP consultants and local implementers are usually responsible for carrying out data conversion from the legacy system to SAP ECC.  I have also heard of SAP projects in which the Basis team carried out this process.

The converted data is used only to set up master data in ECC; it is not used to load historical transactional data from the legacy system.

There are different tools for converting data: 1. the SAP ECC built-in tool, reached via the LSMW transaction code; 2. an external tool named Process Runner, which communicates easily with ECC.  I used Process Runner, which my company had purchased.

 

Two of the most important qualities required to succeed in this process are: 1. thoroughness; 2. communicating with and understanding your customer's needs.


Body:

As mentioned above, the data conversion process is part of the realization step.  The realization step begins after the consultants (or local implementers) have finished writing and submitting the blueprint documents for the customer's approval.  After the approval, the implementers start customizing and writing specification documents for new developments in the Development environment in ECC.  Only then is it possible to start the data conversion process.

These are the sub-steps of data conversion:


1. Mapping the necessary fields in the ECC object that will be filled with data (e.g., the equipment object in the PM module)

Here you need to be well aware of what is written in the blueprint documents regarding your SAP objects.

It's recommended to differentiate between the object's obligatory fields and its optional fields.  Sometimes object classification is needed; this happens when the object's regular fields are not enough to store all the data from the legacy system.  I used classification on equipment objects that represented electric generators.

2. Creating one master data instance manually

The purpose of this step is to verify that the implementer is able to create master data manually before conducting the recording.

3. Recording the master data set-up via Process Runner (or LSMW)

If the recording is not accurate, or changes to the master data set-up are needed after recording, the recording has to start all over again.  It is therefore important to be certain how to set up the object's master data correctly.  If the recording was accurate and you saved it, Process Runner creates an Excel file with the proper columns to be filled (according to the fields you entered in the recording) in order to set up several instances automatically.

For example: you have recorded setting up the master data of one piece of equipment with certain data.  After you save the recording, Process Runner will create the proper structure in Excel.  You can then fill the proper columns of the Excel file with as many pieces of equipment as needed, and execute the recording again when you wish to set them up.  In this way, multiple pieces of equipment are created via Process Runner.

4. Creating an extraction program to extract data from the legacy system

In this step you need to specify precisely to the legacy system administrator (usually a mainframe programmer) which fields and which tables you need the data from.  The second thing to consider is which data population should be extracted (e.g., only active pieces of equipment, or only data created after a certain date; your customer will know the answer to this question).  The administrator should then prepare the program in the legacy system for future use.  In my project the legacy system was a mainframe system written in ADABAS NATURAL; I sent the administrator specification documents listing the fields and the data population to extract.

If some kind of data manipulation is necessary (e.g., 1. the legacy equipment type contains the values A, B, C while the ECC equipment type was customized to contain AA, BB, CC respectively; 2. changing the format of date values, etc.), the administrator has to code it in the program.  A small illustration follows below.
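
As described above, this logic belongs in the legacy extraction program (NATURAL in this case); the ABAP below is used purely to show the idea of a value translation, and the field names are hypothetical.

* Illustration only - the real mapping lives in the legacy program
DATA: lv_legacy_type TYPE c LENGTH 1,
      lv_ecc_type    TYPE c LENGTH 2.

CASE lv_legacy_type.
  WHEN 'A'. lv_ecc_type = 'AA'.
  WHEN 'B'. lv_ecc_type = 'BB'.
  WHEN 'C'. lv_ecc_type = 'CC'.
  WHEN OTHERS.
*   Unknown value - leave it empty for the data cleansing step
    CLEAR lv_ecc_type.
ENDCASE.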

It's very advisable that this program orders the output columns identically to the column order in the Excel file from the previous step; the administrator should sort the columns accordingly.  Eventually, the extraction program creates an Excel file full of extracted data that fits the structure and format of the Excel file from the previous step.

5. Analyzing the error log file and fixing the extraction program

In this step the Excel file is full of data to be loaded into SAP ECC.  Try loading 50 percent of all rows in the file.  Process Runner will create output results; if any errors occur while it is trying to create master data, it will indicate the reasons.  You should analyze these and fix the extraction program accordingly.

6. Preparing a deletion and archiving program in SAP ECC

Eventually there is a chance you will need to delete some of the loaded data for one reason or another.  First you need to distinguish the converted data that was loaded into SAP ECC from data created manually by users.  The best way to do this is to use a standard SAP report and specify in its selection screen the SAP user who created the data.  For example, in my project a certain programmer used Process Runner to load the data, so everything he loaded was created under his user code and was easy to identify.  After the report has extracted that data, mark whatever needs to be deleted and use the SARA transaction to archive it (I will post a separate guide on how to archive data in SAP using SARA).
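
For equipment, a minimal sketch of that selection could look like the report below.  EQUI-ERNAM holds the user who created each equipment record, so everything loaded under the conversion user can be listed; 'LOADUSER' is a hypothetical user code.

* Hedged sketch - 'LOADUSER' is a hypothetical user code
REPORT z_list_loaded_equipment.

PARAMETERS p_ernam TYPE equi-ernam DEFAULT 'LOADUSER'.

DATA: BEGIN OF ls_equi,
        equnr TYPE equi-equnr,   " equipment number
        erdat TYPE equi-erdat,   " creation date
      END OF ls_equi,
      lt_equi LIKE STANDARD TABLE OF ls_equi.

SELECT equnr erdat FROM equi
  INTO CORRESPONDING FIELDS OF TABLE lt_equi
  WHERE ernam = p_ernam.

LOOP AT lt_equi INTO ls_equi.
  WRITE: / ls_equi-equnr, ls_equi-erdat.
ENDLOOP.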

 

I hope this information helps you in your work with SAP.  For any question regarding this process, feel free to ask me.

 

I've been an SCN member since 2006 and have watched involvement from others increase over the years.  This is both good and bad at the same time.  It's good to see more people get involved, but I'm not sure the collective quality of SAP knowledge on the site is increasing at the same rate.  I suppose this isn't unexpected given SCN's growth rate.  However, with the increased size, scope, and viewership of SCN, I think there is a risk to SAP customers that rely on the information being presented here.

 

I'm blogging today because lately I've seen a growing number of recommendations from community members that the OP should solve their problem by either 1) running an SAP correction program or 2) debugging their way into a table update.  Hacking table updates has been covered a few times already; just search on the appropriate key terms (I'd rather not list them) and you'll find plenty of discussions.

 

The point of this blog is to talk about the other technique (correction programs) and its consequences.


What are correction programs?

Correction programs are used to fix table inconsistencies that cannot otherwise be fixed through a normal dialog transaction.  The programs are developed by SAP with the intent of solving specific errors.  This is a critical point, because these programs cannot be used in all circumstances.  It's also important to note the audience of these programs: they were developed to be used by SAP developers... i.e., the folks in Germany or, now, in AGS worldwide.  They were never intended to be customer-facing tools.


What's the big deal?

Most, if not all, of these programs are direct table updates with little validation of the data in the existing table or of the parameters entered at execution time.  There is little, if any, program documentation.  Most of them are crude utilities... and I'm not saying that to be critical.  Instead, I want to make the point that these are not sophisticated programs that can be used in a variety of scenarios and that will safely stop an update from occurring if it doesn't make sense to do so (from a functional perspective).

 

Because of this, there is an element of risk in executing them.  The original data usually cannot be recovered.  If the programs are executed incorrectly, inaccurate results can occur.  SAP doesn't advertise or document these programs because its stance is that they should only be executed by SAP resources (or under their guidance).  That means if you run a program and cause a bigger problem, SAP isn't obligated to help you out of that situation.


When is it appropriate to run a correction program?

A correction program should only be executed after you've gone through the following four-point checklist.

 

  1. Only with specific instructions from SAP via the help portal.
  2. Only after thorough testing in a quality/test system.  This can be difficult because the unusual nature of these problems makes them hard to replicate.  However, if at all possible, I would test the program as best I can and substantiate it with appropriate screenshots, table downloads, reports, etc.
  3. Always try to solve the problem using normal transactions first.  If there is a way to solve a GL issue using re-postings and such, I'd always go that route rather than utilize a crude utility such as a correction program.
  4. Only as a last resort.


When is it not appropriate to run a correction program?

Most importantly... and I can't stress this enough...  these programs should not be executed without a thorough understanding of the problem at hand, the tables impacted, and the updates being performed by the program.  If you can't read code or weren't guided by SAP about the updates being performed, I wouldn't run it.  If you can't talk in complete detail about the error and have proof that the error is triggered by a table inconsistency, and have the knowledge or tools to fix a potentially bigger problem if the correction program causes one, I wouldn't run it.


Examples?

I'll show a few examples but I'll stay away from the more dangerous ones.

 

The first one has a clear warning message.  Most of the newer programs that I've seen have similar warnings even on the selection screen.

 

[Image: screenshot - 2014.08.11 - 14.56.53.png]

 

Here's an old one.  No program documentation, no selection criteria, and very little information in the title.  If you can't read ABAP, how will you know what this program does?  What exactly does 'debug' mean in this context?

 

[Image: screenshot - 2014.08.13 - 17.10.31.png]


Conclusion

The problem with topics such as this one is that a lot of people want to blast out the information to show off what they know.  My gripe is that we all need to realize that the responsibility (and blame) for running a correction program without proper consent or guidance from SAP is quite high.  Do so at your own risk.
