
SAP Business Warehouse


Hello Fans of BW,


It has been a bit of a challenge to find out what has changed with a new SAP BW release or which features were introduced with a certain support package. In general, the most essential changes are described in the solution and product roadmap and the corresponding webinar sessions at http://service.sap.com/saproadmaps (SAP BW is under "Database & Technology").


On a more detailed level, you will always find the changes completely documented on the "What’s New – Release Notes" page of our online documentation at http://help.sap.com/nw74. This area describes all new features, enhancements to existing features, changes to existing features, and - in some cases - features that have been deleted. When we publish a new support package, the online documentation is updated accordingly. Naturally, the higher the support package, the fewer changes will be included.


If you have ever tried to get a complete overview - for example for SAP BW 7.4 from SP 0 to the current SP 9 - you will quickly have noticed that it's quite cumbersome to browse through all the pages, click, click, click.


We often get the question which features are only available with SAP BW powered by SAP HANA and which also work on other platforms. Did you know, for example, that 61% of the new features and enhancements are also relevant for non-HANA systems (86 of 142)? So there are plenty of reasons for all customers to upgrade to 7.4!


Or you would like to know what exactly was changed for a particular SAP BW component like Business Planning. Or you need to know the differences between two support package levels...


And this is where I have come up with a simplified solution: Drum roll... A consolidated spreadsheet!




The spreadsheet contains all release notes for 7.4


  • By support package
  • By component
  • With links to the documentation
  • A short description (so you don’t even have to click the links)
  • An indicator whether a feature is new, changed, deleted, or an enhancement
  • A flag for HANA
  • Of course you can filter, sort, and search quickly (no problem for you Excel Wizards, right?)


The magic of spreadsheets makes it possible to answer all those questions in a matter of seconds. For example, if you want to know which OLAP features are new but do not require HANA, just filter the component on “BW-BEX-OT-OLAP” and HANA-only on “No”. Or you can quickly find out the difference between, say, SP 6 and SP 8. It’s simple and easy and should be a great help for you, customers, partners, and of course our own field organization.


The first version of the spreadsheet has been published as an attachment to the "SAPBWNews" for BW 7.40 SP 9, which you can find in SAP Note 2030800. Going forward, an updated spreadsheet will be included with the corresponding SAP Note for each support package.


I hope you like the new format, but nothing is perfect. Please let me know any feedback or ideas on how to make it better.




Product Management SAP EDW (BW/HANA)

I recently came across a typical requirement: identifying new records in G/L Account master data. The business requirement was that every day a full load of G/L master data is brought into the analysis system. After the update, the mapping department downloads all data from the G/L Account P table and cross-checks the mapping done for each G/L account.


The mapping department (due to business ethics I cannot discuss what the mapping is) downloads the data, executes some manual steps and SQL statements, and comes back with a list of G/L accounts that are not mapped.


Every day they need to download a huge amount of data, and this comparison job consumes a lot of time. Hence they asked me about the possibilities of identifying only new G/L records.


After doing some research, I thought of creating a generic DataSource on the SID table of the G/L Account InfoObject.


As everybody knows, in the SID table a SID value is generated only when a new value arrives and is updated. Taking advantage of this, I created a generic DataSource on the SID table with a numeric pointer delta on the SID field.
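Conceptually, the numeric pointer delta behaves like the following selection on the SID table (a simplified sketch: the actual pointer bookkeeping is handled by the generic extractor, and the table name /BI0/SGL_ACCOUNT is the standard SID table name assumed for 0GL_ACCOUNT):

```abap
" Simplified sketch of what the numeric pointer delta does.
" lv_last_sid stands for the pointer value the extractor saved
" after the previous delta run.
DATA: lv_last_sid TYPE rssid,
      lt_new      TYPE STANDARD TABLE OF /bi0/sgl_account.

SELECT * FROM /bi0/sgl_account
  INTO TABLE lt_new
  WHERE sid > lv_last_sid.   " only SIDs created since the last extraction
```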


Here are a few screenshots which I attached for reference and a clear understanding of the delta on the SID field.



Generic DataSource created on the SID table of the G/L Account InfoObject.




Delta capability is selected as a numeric pointer on the SID field. Since a SID value is created whenever a new G/L account is created, we can extract only new G/L accounts.





Note: From BW 7.4 onwards, a generic delta is no longer required, because we have the option to see 0RECORDMODE values for master data records, as in a DSO.


In SAP BW on HANA 7.4 onwards, 0RECORDMODE is included by default if you check the option below.



Thanks for reading this blog. Any suggestions are welcome.






In our BW projects, we are frequently required to know the "BEx Query last execution" details from BI statistics. In this blog, I am going to show how to deal with this common requirement in a simple manner.

Let's suppose we require the last executed date, last modified date, last modified by and, last but not least, the last executing user. There is no standard BI statistics query that gives last executed user details. We need to hit either the statistics cubes or the tables.


The first table which comes to mind is RSDDSTAT_OLAP. This is basically a view over the following three tables:




We can find the last executed user name in this table (image below). However, this table only holds data up to the time limit maintained in table RSADMIN for object TCT_KEEP_OLAP_DM_DATA_N_DAYS. Usually, the value is 30 days. So we will only be able to get the last executed user of BEx queries that were executed in the last 30 days.


What if our requirement is to get the names of users who executed queries six months ago?


Even the tables RSZCOMPDIR and RSRREPDIR do not store last executed user details. They store information like last executed date, last modified date, last modified by, etc.


This is the situation where we need to hit the BI statistics cubes. If your BI statistics process chains run regularly, the historical statistics data will have been loaded into the cubes.


Create a simple BEx query on the cube 0TCT_C02 (Front-End and OLAP Statistics (Details)) like below:




The reason why I have used the highlighted InfoObjects in the rows pane is that they are compounded to the Query Runtime Object.

If we do not use these three InfoObjects in the query, the "Query Runtime Object" field will show data separated by "/".


To avoid this and show just the query name in the column, we need to use them as in the image above and choose "No Display" as below. All this is for better visibility in the report.




Drag all three key figures from the left side (cube) into the key figure pane. This is just to complete the definition. Create a new formula variable locally in a new formula for the latest date. This should be a replacement path variable on 0CALDAY.


Since our objective is to show the last executed user, we simply create a condition on the latest-date key figure with Top N = 1. This shows the record that was the latest within the last six months (as we have set a filter in the query).


To run this query, paste the query technical names into the selection screen (right side) as below. If we try direct input, they will not be found, because the Query Runtime Object field expects values in compounded form separated by slashes (/).



Upon executing the query, you will get a report like the one below, showing query name, calendar day and last executed user (up to six months back).




Thanks for reading!!

String Functions in SAP BW

While extracting data from ECC or third-party tools into BW, we sometimes get special-character errors. If we use string functions, we can avoid these errors.







1. SHIFT – by default, SHIFT moves a character string one position to the left.


DATA: v1(10) TYPE c VALUE 'ABCDEF',   " example values; the original snippet was incomplete
      v2(10) TYPE c VALUE 'ABCDEF',
      v3(10) TYPE c VALUE 'ABCDEF'.

SHIFT v1.                  " left by one place:  'BCDEF'
SHIFT v2 BY 2 PLACES.      " left by two places: 'CDEF'
SHIFT v3 CIRCULAR.         " rotate left:        'BCDEFA'

WRITE: / v1, v2, v3.

Program output

2. REPLACE – REPLACE replaces one set of characters with another set of characters in a character string.

DATA: text1 TYPE string,
      text2 TYPE string,
      text3(16) TYPE c.      " declarations restored; the original snippet was incomplete

text1 = 'SAP#BW'.
REPLACE '#' IN text1 WITH '-'.   " text1 = 'SAP-BW'

WRITE: / text1.

Program output


Note -

The WRITE statement prints the record.

SKIP skips one line.

3. TRANSLATE – TRANSLATE converts lower case to upper case and upper case to lower case.


DATA: text4(10) TYPE c VALUE 'abcdefgh',  " example values; the original snippet was incomplete
      text5(10) TYPE c,
      text6(10) TYPE c.

text5 = text4.
TRANSLATE text5 TO UPPER CASE.   " 'ABCDEFGH'
text6 = text5.
TRANSLATE text6 TO LOWER CASE.   " 'abcdefgh'

WRITE: / text5, text6.

Program output

4. CONCATENATE – CONCATENATE merges two or more character strings into one.


DATA: ctext1(10) TYPE c VALUE 'SAP',   " example values; the original snippet was incomplete
      ctext2(10) TYPE c VALUE 'BW',
      ctext3(25) TYPE c.

CONCATENATE ctext1 ctext2 INTO ctext3 SEPARATED BY space.   " 'SAP BW'

WRITE: / ctext3.

Program output

5. CONDENSE – CONDENSE removes leading spaces and reduces embedded runs of spaces to a single space.


DATA: lv_text(20) TYPE c VALUE '  SAP   BW'.   " example value; the original snippet was incomplete

CONDENSE lv_text.   " 'SAP BW'
WRITE: / lv_text.

Program output


6. SPLIT – SPLIT splits one character string into several, at a unique separator character.

DATA: lv_text6 TYPE string VALUE 'SAP#BW#HANA',  " example value; the original snippet was incomplete
      lv_t1 TYPE string,
      lv_t2 TYPE string,
      lv_t3 TYPE string.

SPLIT lv_text6 AT '#' INTO lv_t1 lv_t2 lv_t3.
WRITE: / lv_t1, lv_t2, lv_t3.

Program output



Hope it will help.




I recently noticed a new flag on the InfoObject (IO) attribute maintenance screen. It is available in transactions like RSA1 or RSD1 when an IO is displayed on its Attributes tab. The F1 help doesn’t say much beyond the flag name itself:

Delete Master Data with 0recordmode


To find out what it is about, I created a new IO and checked the flag. This added IO 0RECORDMODE as a new attribute of the IO. Similarly, if 0RECORDMODE is added manually to the list of the IO's attributes, the flag is turned on automatically.

I then tried to find the database table where the flag is stored. I thought of a table like RSDIOBJ, the directory of all InfoObjects, but there was no indication at all that the flag is stored there. I tried a couple of other tables without success. After some debugging I found out that there is no table which holds this flag. Its presence is determined on the fly while the system checks whether the particular IO has 0RECORDMODE as an attribute. If 0RECORDMODE is there, the flag is checked. The standard ABAP method IS_RECORDMODE_CONTAINED of class CL_RSD_IOBJ_UTILITIES does this via the following statement:

p_recordmode_exists = cl_rsd_iobj_utilities=>is_recordmode_contained( i_iobjnm ).

The code of the method goes down to the DB table RSDBCHATR and checks whether field ATTRINM contains a value equal to 0RECORDMODE. The field ATTRINM is also retrieved by, e.g., function module RSD_IOBJ_GET in its export table E_T_ATR.

OK, so now I knew some basic facts about the flag. But what does it really do? Next I turned to SCN to check if there is something about this feature. I found a few forum threads (here and here) but they do not say much about it. I continued searching SAP Notes/KBAs and found KBA “1599011 - BW master data loading: Changed time-dependent attribute is not updated into master data table from DSO”. And there it was.

The flag introduces delta handling in the master data update. Having 0RECORDMODE among a particular IO's attributes makes it possible to recognize new records (N) in a delta load. If the data flow of the master data IO has an underlying DSO object that can provide 0RECORDMODE, this can be leveraged to process deletions in the master data delta, as long as it is mapped to the corresponding IO attribute.

As per the KBA this feature was introduced in SAP BW 7.11 (SAP NW BW7.1 EhP 1).

PS: This blog is cross published on my personal blog site.

As you might know, SAP BW does not allow every possible character in character strings. An upload or activation can terminate in such cases. This occurs frequently with character strings where users (or interfaces) can freely enter text.

SAP delivers various Function Modules for validating character strings. Unfortunately, cleansing of character strings is not provided by SAP. It would be very helpful to have a comprehensive ABAP add-on for cleansing character strings.

In this blog I would like to introduce such an ABAP Add-on which I developed. Please refer to my document Implementing an Add-on for Cleansing Character Strings for detailed implementation instructions.

Issues with Character Strings

Let’s have a look at a typical example: a source field contains “forbidden hexadecimal characters” (between HEX00 and HEX1F) which are usually not permitted in BW.



Figure 1: Example error message in log



Figure 2: Detailed error message


The error message shows that the character at position 11 (hexadecimal HEX0D) is not permitted. The error message does not show that also the character at position 12 (hexadecimal HEX0A) is not permitted.

Please note by the way that ‘HEX23’ is a documentation error in message BRAIN 315.


Other typical examples are a source field that contains lowercase but the Characteristic only accepts uppercase, or a source field that contains “special characters” which are not allowed.

Allowed and Unallowed Characters

In SAP BW the default allowed characters are:

  • !"%&''()*+,-./:;<=>?_0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ
  • SPACE symbol


You can maintain permitted extra characters with t/code RSKC. You can also find it in the SAP Customizing Implementation Guide (IMG): SAP NetWeaver > Business Warehouse > General Settings > Maintain Permitted Extra Characters.



Figure 3: Maintenance of permitted extra characters


You can choose between the following three options:

  • ALL_CAPITAL (and nothing else): characters which are uppercase letters in the code page are permitted for data loading;
  • ALL_CAPITAL_PLUS_HEX (and nothing else): equal to ALL_CAPITAL plus hexadecimal HEX00 to HEX1F;
  • Specify “special characters” individually.


Option ALL_CAPITAL is mostly used and can be considered the “best practice”. SAP does not recommend using ALL_CAPITAL_PLUS_HEX.


Validation of uppercase and lowercase values is handled differently. The validation rules presented above are applicable to uppercase Characteristic values. The lowercase validation is less strict; only hexadecimal HEX00 to HEX1F are never allowed.


There are two additional exceptions:

  • A Characteristic value that consists only of the single character ‘#’ is not allowed;
  • A Characteristic value starting with ‘!’ is not allowed.

Cleansing Add-on

The Cleansing Add-on is an ABAP Objects Class and is intended to be used in a Transformation Rule. The logic is based on two standard SAP Function Modules in Function Group RSKC:

  • RSKC_CHAVL_CHECK - Check if characteristic value is permitted (uppercase)
  • RSKC_LOWCHAVL_CHECK - Check if characteristic value (with lowercase letters) is permitted


Those Function Modules only contain “validation” functionality. The “cleansing” functionality is added in an RSKC compliant way. The Cleansing Add-on has an easy-to-use interface and can be implemented in any Transformation Rule for a character string using a Routine.
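A call to such a validation Function Module from within a routine could be sketched as follows. The parameter and exception names below are assumptions; verify the actual interface of RSKC_CHAVL_CHECK in SE37 before using this.

```abap
" Hedged sketch: validate a characteristic value in a transformation routine.
" Parameter/exception names are assumptions - check the FM interface in SE37.
DATA: lv_chavl TYPE rschavl.

lv_chavl = SOURCE_FIELDS-/bic/ztext.   " placeholder source field

CALL FUNCTION 'RSKC_CHAVL_CHECK'
  EXPORTING
    i_chavl = lv_chavl
  EXCEPTIONS
    OTHERS  = 1.
IF sy-subrc <> 0.
  " value contains characters that are not permitted:
  " cleanse it here, or raise a monitor message / skip the record
ENDIF.
```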

Related Documentation

If you would like to read more about this topic, I can recommend reading the following documentation:



In this blog we discussed possible issues with character strings. We had a look at the allowed and unallowed characters in the context of SAP BW. Furthermore, I briefly introduced the Cleansing Add-on. Please refer to my document Implementing an Add-on for Cleansing Character Strings for more information regarding implementing this Cleansing Add-on and how to use it in Transformation Rules.

In order to overcome database growth issues, the following activities can be done, including deletion of unwanted data from table RSBERRORLOG:




NOTE: Perform these activities keeping the Basis team informed.




Please execute the programs below in the background.


  1. Program RSB_ANALYZE_ERRORLOG - helps you analyze the error log table.

This program is used to analyze the number of DTP requests present in the RSBERRORLOG table.







  2. Program RSBM_ERRORLOG_DELETE - helps to delete error log table data.

This program is used to delete the error logs generated by the various DTPs.

On executing the program, a screen appears in which you enter the technical name of the DTP whose logs are to be deleted. A date range can also be provided for which the logs need to be deleted.




After providing the proper input (DTP technical name and date range), the program needs to be executed in the background:






After the RSBERRORLOG deletion activity, ask the Basis team to regenerate the indexes of this table.





All of us who've been around SAP BW for a while will recognize the picture below: the 'good old' metadata repository available within a SAP BW system and accessible via transaction RSA1.

But how to access this repository from 'outside' SAP BW, i.e. without RSA1 and via a (web) URL?


  • First of all, we need to know which URL to use to access our BW system. Transaction SMICM comes in handy here. After executing SMICM, go to the 'Services' information (via SHIFT+F1) and the following information will be shown:


(For confidentiality purposes I've blurred/greyed out/removed some information in the above (and below) pictures)

  • When the host name is not completely shown (via SMICM -> Services), transaction SE37 offers a solution. Start SE37 and execute/test the function module RSBB_URL_PREFIX_GET.


  • Input parameter I_HANDLERCLASS should be filled with the value CL_RSR_WWW_HTTP


  • When executing the function module with the above mentioned input parameter value, the following output will be provided


  • Output parameter E_URL_PREFIX (marked with a red box in the picture above) gives you the complete URL which can be used to access the BW system via a web browser.

Beware: the port number, in this case 8100, is not correct! The port number delivered by transaction SMICM, in this case 3299 (see the red box in the first picture), should be used.

  • Use the URL_PREFIX in combination with the SMICM port number to generate the following string:


(**blur** : for confidentialty reasons I've removed parts of the URL)

(BW_O_TYPES: Enter this static text to receive an overview of active repository objects)

(BHDCLNT100: A concatenation of <systemID>, "CLNT" and  <clientnumber>)

  • The following information will be shown when entering the above URL in a webbrowser



  • When you're not interested in the entire BW metadata repository but, for example, in the metadata of a particular MultiProvider, small adaptations have to be made to the URL.

Let's assume that I'm interested in the metadata of MultiProvider MP_0039. In this case the following URL would do the trick:



  • This URL can now be used to access the BW metadata directly from (almost) any web page, for example when embedding SAP BW metadata within a SharePoint page.



PS: As mentioned by Martin Maruskin, the service SAP/BW/DOC/METADATA needs to be activated in transaction SICF.

Execute transaction SICF, navigate all the way down to SAP -> BW -> DOC -> METADATA, right-click on METADATA and select Activate.


Note implementation is really helpful to fix issues without having to carry out big system updates, and it avoids hassles.

However, sometimes we don't actually need a code solution, but only the manual solution steps provided by a note. Besides a code correction, the Solution section of a note may describe a configurable parameter, or deliver a report and instructions on how to run it, etc.


To easily identify whether the code correction of a note is valid for a system:

Each code correction note will have one or more Correction Instructions sections with a "Valid for" list. For example note 2068862:



  1. Check the line "Software Component" to know which component the code change belongs to.
    In this case, it is SAP_APPL. The system where you want to apply the note must have that component.
  2. Check the release of the software component on the lines below. In this case, releases 600, 602, 603, etc. If the system where you want to apply the note doesn't match any of the releases listed, then the code correction is not valid for that system.
  3. Check the text to the right of the release. It contains the ID of the support package, in this case, for example, "SAPKH60001", where:
  • "SAPKH" means SAP_APPL
  • "600" means the version of the component
  • "01" means the support package level.



*Please note: the support package ID "SAPKH60001" changes according to each component; it could have extra characters in it.


In the "Valid for" region, the text should be interpreted as follows:

  • "SAPKH60001 - SAPKH60026": the code correction of that module is valid to be implemented in a system with SAP_APPL version 600 from SP 01 to SP 26.
  • "To SAPKH60216": the code correction of that module is valid to be implemented in a system with SAP_APPL version 602 from SP 00 to SP 16. (That means the code correction is included in SP 17, so a system on SP 17 already has that correction and does not need to implement it.)
  • "All Support Package Levels": the code correction is valid to be implemented on all support packages of a release version. This is usually used in pilot notes.
  • "Fm SAPKH60001": the same behavior as "All Support Package Levels", but from a starting point - in this case, from SAP_APPL version 600 SP 01. If the system is on any version higher than that, the note should be valid.


Each note can carry several correction instructions, and each "Correction Inst." has the list mentioned above. In some cases, one correction instruction might not be valid for a support package version, but another correction instruction in the same note could be. If just one correction instruction is valid, then the note should be valid to be implemented on that system.


Support packages that deliver notes

You can also check the "Support Packages & Patches" section of the note. Example of note 2068862:


This section shows in which support package the code correction and note implementation are delivered. So if the system is on the version listed or above, the code correction has already been delivered.
That doesn't always mean the Solution section of the note has been implemented - for example, a parameter setup. Therefore you always have to carefully read the Solution section to make sure the solution has been correctly implemented.


Please consider this post just a quick reference. If you are not sure whether the note is valid, use transaction SNOTE to download the note and validate whether it can be implemented on that system.


Please also consider that SAP Notes can have versions; it is important to check that the version you have downloaded is the latest version of the note.


Thank you for reading my post.

If you have any comments or other tips, please let me know in the comment section below.

Many a time we use the rule type 'Read Master Data' to perform a lookup on a master data InfoObject to derive the value of one of its attributes.

However, what if the attribute that we want to read from the master data InfoObject is time dependent? In such cases we can use the time-dependent feature of Read Master Data.

So this is how it goes:

Suppose we have a scenario in which we want to read the profit center of a COPA document based on its posting date. The target does not contain the posting date field.

Now profit center (0PROFIT_CTR) is an attribute of 0COSTCENTER, and it is time dependent.


What can be done in this case? We could write a start routine that stores the values of the source table in a globally declared internal table, then read the posting dates from this internal table in the end routine, compare them with the Valid To and Valid From fields of the Q table of the master data InfoObject, and get the corresponding profit center.

Field routines should not be preferred in such cases, because they hit the database for every data record, which affects performance.


Now coming back to our read master data:

First we map cost center to profit center.

Then under 'Rule Type' we select Read Master Data:


In 'From Attr. of' we insert 0COSTCENTER.

Then we click on the clock next to it, which is Key Date Determination.

Under 'Time Dependency Reading Master Data' we select 'Start':



We select 'Start' and, under 'From', select the time characteristic, which in our case is posting date.




Once this is done, we make the following changes in the IO assignment:


Once this is done, we check whether our 'Read Master Data' rule is really working.


Below is our Cost Center master data:

For cost center 99991234, the profit center is D7209 from 01.01.2007 to 30.06.2013

and E7209 from 01.07.2013 to 31.12.9999.


Now we will test whether our rule is working. For that, we click on 'Test Rule' and enter the following values:


And voilà, it works. As you can see above, I entered posting date 01.07.2007, which falls in the range 01.01.2007-30.06.2013, and the corresponding profit center D7209 was derived.

So go ahead and give this a try.



We face many challenges in our BI projects when creating analysis authorizations in bulk. Users expect outstanding security for our BI reports, no matter what we do in the back end (modeling and Query Designer). I am going to share some important tips and techniques which I have learned through my experience, including some standard rules of thumb.


This blog explains the bulk creation of analysis authorizations; I am not going to talk about the standard steps. For example, suppose we need to create profit-center-wise analysis authorizations and corresponding roles, where the number of analysis authorizations is around 200 to 500. For such a situation I created a program into which we just upload the profit center numbers in CSV format.


The program generates the required technical names of the analysis authorizations along with customized descriptions (whatever we want) and creates all the analysis authorization objects within a fraction of a second.


Please go through the screenshots below, which show how one analysis authorization is created in the system.


1: Enter transaction RSECADMIN and then click on "Ind. Maint.".


2: Specify the technical name of the analysis authorization (AA) and click on "Create".


3: Specify the short, medium and long text, then click on the InfoProvider icon and select your InfoObjects. For example, I want to create profit-center-wise AA objects including cost center and controlling area.


4: Click on the profit center intervals and maintain the profit center value.


5: Maintain the InfoProvider values as shown below.



6: Save and activate.


The six steps above take at least a minute to create one analysis authorization. So I explored a bit and found that three tables are updated when one AA object is created: RSECVAL, RSECBIAU and RSECTXT.


I want to share this program with all of you, so that it can help us create AA objects in bulk.


NOTE: Please take care when running this program. If the 'text.csv' file is not present in C:\, it will not do any harm to our AA objects. Note, however, that I did not add any checks before updating these values.


Conclusion :


It is always better to keep an eye on the above points while developing/creating AA objects. This approach saves about three man-days of work and gives the output within seconds. I have tried my level best to make clear how to achieve this particular requirement. I am sure it is going to help, as many clients need this kind of analysis authorization setup.

Thank You

Every now and then we see a thread posted in the BW space seeking help with transformation routines.

Start routine, end routine, field routine and expert routine: which one to use, how to use it, when to use it. These questions can only be answered with respect to the logic that we need to apply in a particular case.


I would like to share here how I see, approach and write a start routine…


Start Routine:


When a data transfer process (DTP) executes a transformation, the start routine runs first. It is followed by the transformation rules and the end routine.

In medium and complex transformations, we will have a set of logic to implement. This logic includes exclusions, lookups, conversions, calculations, etc.


We should have a plan for what to write in the start routine, based on the fact that the start routine runs first.



When to write:


Scenarios which are good candidates for the start routine are:


1. Deleting unwanted data.

Example: you want to delete a record if its delivery flag is not set to 'X'; in this case you can use the start routine.
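Such a deletion in the start routine can be a one-liner. The flag field name /BIC/ZDELFLG below is a placeholder; use the actual field from your source structure:

```abap
* Drop every record whose delivery flag is not 'X'
* (/BIC/ZDELFLG is a placeholder field name)
DELETE SOURCE_PACKAGE WHERE /bic/zdelflg <> 'X'.
```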


2. Populating an internal table with a SELECT from a DB table, to be used later in lookups.


Example: in a schedule line DataSource the currency field is not filled for some records, and you want to fill it with the company code currency. For this you have to look up the company code master data. In the start routine you can fill an internal table with all the company code and currency details. The same can be done for transaction data as well.

3. Sorting the records. Further transformation rules can then rely on the sort order.


Example: in a goods movement DataSource, if you want to process your inward deliveries against the PO number chronologically, you can sort the source package in the start routine so that the transformation rules process them serially.


How to write;


Simple filter



It is better to delete unwanted records in the start routine, because they are then not processed unnecessarily in the subsequent steps, which reduces the data loading time.

Populating Internal table

SELECT comp_code country currency
  FROM /bi0/pcomp_code
  INTO CORRESPONDING FIELDS OF TABLE it_compcd   " internal table
  FOR ALL ENTRIES IN SOURCE_PACKAGE
  WHERE comp_code = SOURCE_PACKAGE-bukrs.

When you write a SELECT in a field routine, you are effectively writing a SELECT inside a loop: for every iteration, the SELECT statement hits the DB table, which results in performance issues. So it is good practice to write the SELECT statement in the start routine and fetch all possibly needed records from the DB table into an internal table.

This internal table can then be looked up using a READ statement.
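The lookup against the table filled in the start routine could then look like this in a rule or end routine (a sketch; the work area wa_compcd and the field names follow the example above):

```abap
" Sort once (e.g. at the end of the start routine) to allow binary search
SORT it_compcd BY comp_code.

" In the rule: read the buffered company code data instead of hitting the DB
READ TABLE it_compcd INTO wa_compcd
     WITH KEY comp_code = SOURCE_FIELDS-bukrs
     BINARY SEARCH.
IF sy-subrc = 0.
  RESULT = wa_compcd-currency.
ENDIF.
```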



SORT SOURCE_PACKAGE BY vendor createon po_number.


This code sorts the source package by vendor, creation date and PO number, which means the oldest PO is processed first in the transformation rules.



Note: The content in this blog is applicable to retail systems only.

Hi All,


In my project, we faced issues with delta extraction for 0MATERIAL_ATTR and 0MAT_PLANT_ATTR, and I thought of sharing the workaround.


For Retail Systems, delta doesn’t work for 0MATERIAL_ATTR and 0MAT_PLANT_ATTR.

Hence, it is suggested to use 0ARTICLE_ATTR and 0ART_PLANT_ATTR instead.


Also, please note that delta extraction for these DataSources is based on the concept of change pointers. It involves the different tables listed below:


1. Table ROOSGEN shows which message type (format: RSxxxx) has been assigned to a DataSource.

2. Table BDCPV contains the change pointers. Every time a change relevant to the DataSource is performed, a change pointer with message type RSxxxx is added to table BDCPV.

3. Table TBD62 shows the fields of tables that are relevant for a message type. That is, only changes to these fields will generate a change pointer (even if there are more fields in the DataSource).

4. Table ROMDDELTA stores, for each DataSource, which change document object is used and which table is used for generation of the change documents. The information stored in table ROMDDELTA is used to generate the entries in table TBD62. The entries in ROMDDELTA are normally created when activating a DataSource in RSA5.
    If they are not there, the DataSource should be re-activated (and replicated in BW, to be sure of consistency).

    For a DataSource, the TABNAME field in table ROMDDELTA should be the same as the TABNAME field in TBD62.

    Also, ensure that the 'Object' field in table ROMDDELTA has the value 'MAT_FULL' for both DataSources.

    For 0ART_PLANT_ATTR, if the TABNAME field in table ROMDDELTA has the value 'MARC', you need to run an ABAP report to change these entries to DMARC. Below is ABAP code that can be used.


UPDATE romddelta
  SET tabname = 'DMARC'
  WHERE tabname    = 'MARC'
    AND oltpsource = '0ART_PLANT_ATTR'.

(Note: restrict the update with a WHERE clause; without one, every row of ROMDDELTA would be changed. The DataSource key field is assumed here to be OLTPSOURCE, as in the other RO* extraction tables; verify the field names in SE11 before running the update.)



Hope this blog post helps you.




The original blog can be found here: ekessler.de

In this blog I describe the conversion of customer-specific implementations that may become necessary due to changes to SAP standard data types.

Note 1823174 - BW 7.4 conversions and customer-specific programs already covers this subject and also describes analysis and solutions. This blog is intended to supplement the note: it provides additional background information and assists in the search for the best way to carry out the changeover.

1.1     Why and when is the change necessary?

In BW 7.4 the maximum key length of characteristic InfoObjects was extended from 60 to 250 characters. As a result, the data type of the domain RSCHAVL (which is used by the data element RSCHAVL) was changed from CHAR 60 to SSTRING.
The data type SSTRING is a dynamic data type with a variable length of up to 1333 characters (see http://help.sap.com/abapdocu_731/en/abenbuilt_in_types_dictionary.htm).
The data element RSCHAVL is used in several BW structures in the context of selection options (range tables). Figure 1.1 shows the two selection structures that are used to process BEx variables in the context of the customer exits. The importing parameter I_T_VAR_RANGE is based on the structure RRRANGEEXIT (BW: Simplified structure for variable exit variable) and the exporting parameter E_T_RANGE is based on the structure RRRANGESID (range expanded by SID). Internally, both structures go back to the include structure RRRANGE.
Figure 1.1: Selection Structure in customer exit EXIT_SAPLRRS0_001
Changing the domain RSCHAVL does not in itself mean that customer-specific implementations need to be adjusted. However, the change turns all structures in which one or more components are based on the data element RSCHAVL into deep structures. Figure 1.2 shows coding examples that lead to syntax errors after the conversion.
Figure 1.2: Invalid ABAP Syntax in BW 7.4
The declaration of internal tables using the TABLES statement is allowed only for tables based on flat structures. The same applies to declarations using DATA with LIKE.
The LIKE operator can easily be replaced by the type-specific operator TYPE here. For declarations via the TABLES statement, the coding must be changed to a declaration via DATA.
The offset and length specifications typical for CHAR types, such as ls_e_range-low+6(2), are not allowed on string types and lead to syntax errors.
Offset and length specifications can be replaced by string operations (concatenation, substring) or by the use of string templates, see Figure 1.2.
Other examples of implementations that lead to syntax errors in NW 7.4 are listed in Note 1823174 - BW 7.4 conversions and customer-specific programs, in the section on syntax errors.
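The two fixes described here can be sketched as follows (a hedged sketch; the variable names are illustrative and not the exact coding from Figure 1.2):

```abap
* Invalid in BW 7.4, because RRRANGESID is now a deep structure:
*   DATA lt_range LIKE rrrangesid OCCURS 0.
*   lv_year = ls_e_range-low+6(2).   "offset/length on a string type

* Valid: type-specific declarations via TYPE ...
DATA lt_range   TYPE STANDARD TABLE OF rrrangesid.
DATA ls_e_range TYPE rrrangesid.
DATA lv_year    TYPE string.

* ... and a string function instead of the offset/length access
lv_year = substring( val = ls_e_range-low off = 6 len = 2 ).
```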

1.1     What changes are necessary, and where?

  • Customer exits for variables
  • Customer-specific programs
  • Transformations / DTPs / InfoPackages

1.2  How can the places with syntax errors be found?

Okay, now we know which implementations cause syntax errors and in which areas we need to look.
  • But how do we find the affected places in customer-specific implementations?
  • Do all reports / includes / function modules / classes / ... have to be reviewed manually?
  • What are the options for correcting the errors found?

1.2.1  Customer Exit for variables and user-defined programs

The answer here is not clear-cut. If we look at the three areas that we need to investigate, we can state for the first two, customer exits for variables and customer-specific programs, that they can be checked automatically.
Note 1823174 - BW 7.4 conversions and customer-specific programs provides two Word documents that describe how customer-specific implementations can be examined for syntax errors using the Code Inspector (transaction SCI). The note offers two variants of the Code Inspector check.
The first variant ("pre/post") is performed before the upgrade and again after the upgrade. Using the report ZSAP_SCI_DELTA, the delta between the two runs can be determined and compared.
The second variant described in the note, CodeInspector_Post, can be used if the syntax check could not be carried out before the upgrade.

Figure 1.3 shows the result of the Code Inspector variants.
The syntax errors found can be accessed directly from the result of the Code Inspector run using forward navigation and corrected there.
When correcting errors that occur within customer exit variables, customers are supported by SAP. In the blog New BAdI RSROA_VARIABLES_EXIT_BADI (7.3) I presented the new BAdI for processing customer exit variables that was introduced in BW 7.3. With SAP BW 7.4, SAP has extended the delivered default implementations by an additional BAdI implementation: in addition to the standard BAdI implementation SMOD_EXIT_CALL (Implementation: BAdI for Filling Variables), BW 7.4 ships the BAdI implementation CL_RSROA_VAR_SMOD_DIFF_TYPE (SMOD Exit Call with different Tables and Types) as an inactive version. The default BAdI implementation SMOD_EXIT_CALL continues to be delivered as the active implementation.

  • What is the difference between the two implementations?
  • When does which implementation have to be activated?
  • Can both implementations be active?
Both implementations serve as a wrapper/mediator for the call of the customer exit. Customers who start on a fresh system and have no "legacy" customer exit implementations in their system should implement their own BAdI implementations for processing exit variables, see New BAdI RSROA_VARIABLES_EXIT_BADI (7.3).
To explain the difference between the two implementations, we first look at the parameters of the customer exit EXIT_SAPLRRS0_001. The lower part of Figure 1.4 shows the importing and exporting parameters of the function module. In addition to the two parameters I_T_VAR_RANGE and E_T_RANGE, the parameters I_T_VAR_RANGE_C and E_T_RANGE_C have been added. In I_T_VAR_RANGE and E_T_RANGE, the components LOW and HIGH are based on the data element RSCHAVL (see above) and are thus fields of data type SSTRING.

In I_T_VAR_RANGE_C and E_T_RANGE_C, the components LOW and HIGH are based on the data element RSCHAVL_MAXLEN and are thus of data type CHAR. The parameters I_T_VAR_RANGE_C and E_T_RANGE_C can therefore be used analogously to the original parameters I_T_VAR_RANGE and E_T_RANGE; they are based on flat structures.
Figure 1.4: Optional Customer Exit Parameter
The parameter pairs I_T_VAR_RANGE_C / E_T_RANGE_C and I_T_VAR_RANGE / E_T_RANGE are two options. Which option is used depends on the currently active BAdI implementation. Figure 1.4 shows the relationship between the two SAP-delivered BAdI implementations SMOD_EXIT_CALL and CL_RSROA_VAR_SMOD_DIFF_TYPE and the parameter pairs they use: the BAdI implementation SMOD_EXIT_CALL (default) works with the parameters I_T_VAR_RANGE and E_T_RANGE, while the BAdI implementation CL_RSROA_VAR_SMOD_DIFF_TYPE works with the parameters I_T_VAR_RANGE_C and E_T_RANGE_C.

If the Code Inspector has found many syntax errors that are attributable to the conversion of the data element RSCHAVL, SAP recommends using the optional BAdI implementation CL_RSROA_VAR_SMOD_DIFF_TYPE. Figure 1.5 shows which steps are necessary to use the optional implementation. Start transaction SE18 (BAdI Builder), select the option BAdI Name, enter the BAdI name RSROA_VARIABLES_EXIT_BADI, and then select Display.

In the enhancement spot, expand the entry RSROA_VARIABLES_EXIT_BADI. Double-clicking on Implementations shows the list of BAdI implementations (1).

The yellow lamp indicates that the default implementation SMOD_EXIT_CALL is active. The gray lamp indicates that the implementation CL_RSROA_VAR_SMOD_DIFF_TYPE is not active. The definition of the BAdI allows several BAdI implementations to be active in parallel; therefore, the order of steps (2) and (3) is arbitrary.

In step two, we first deactivate the default implementation. Double-clicking on the implementation in the list to the right of (1) opens the BAdI implementation (2). To deactivate it, the indicator IMPLEMENTATION IS ACTIVE must be deselected. Subsequently, the implementation must be activated.

In step three, we activate the optional BAdI implementation CL_RSROA_VAR_SMOD_DIFF_TYPE. Double-clicking on the implementation in the list to the right of (1) opens the BAdI implementation (2). To activate it, the indicator IMPLEMENTATION IS ACTIVE must be selected. Subsequently, the implementation must be activated.

After the default implementation has been deactivated and the optional implementation has been activated, the colors of the lamps should look as shown in (4). If this is not the case, refresh the display.
Figure 1.5: Switch BAdI Implementation

Now that the optional BAdI implementation is active, the customer's own coding must be adapted to it. For this purpose, only the names of the objects used (structures, table types) need to be replaced within the customer's own implementations, as listed in Table 1. The editor function Search and Replace can be used for this.
Table 1: List of object names to be replaced
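Inside the customer exit, switching to the CHAR-based parameter pair then looks roughly like this (a sketch under the assumption that the exit logic only fills SIGN/OPT/LOW; the variable name ZCUST_VAR is illustrative):

```abap
* EXIT_SAPLRRS0_001 with CL_RSROA_VAR_SMOD_DIFF_TYPE active:
* read from I_T_VAR_RANGE_C and fill E_T_RANGE_C instead of the
* string-based parameters I_T_VAR_RANGE / E_T_RANGE.
DATA ls_var   LIKE LINE OF i_t_var_range_c.  "flat, CHAR-based line type
DATA ls_range LIKE LINE OF e_t_range_c.

READ TABLE i_t_var_range_c INTO ls_var
     WITH KEY vnam = 'ZCUST_VAR'.
IF sy-subrc = 0.
  ls_range-sign = 'I'.
  ls_range-opt  = 'EQ'.
  ls_range-low  = ls_var-low+0(4).  "offset/length works again on CHAR
  APPEND ls_range TO e_t_range_c.
ENDIF.
```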
The adaptation described here only makes sense if the syntax errors found by the Code Inspector (see above) can be traced back to the conversion of the data element RSCHAVL.

If the Code Inspector found no such errors, or the number of errors is very low, it is recommended to correct the syntax errors in the customer implementations and keep using the default BAdI implementation.

In principle, the same applies to customer-specific programs as to customer-specific implementations within the customer exits. The difference is that the data element RSCHAVL is also used in structures other than those listed in Table 1 (see above), and for these structures there are no CHAR-based alternatives. If, for example, the structure RSDRD_S_RANGE (Single Selection for an InfoObject in a Deletion Criterion) is used in a customer-defined program, it must be determined whether syntax errors occur there. This can be checked, as with the exit variables, using the Code Inspector.

If necessary, the object list in the Code Inspector must be adjusted. This is the case if the implementation for the variable exit is organized in a different development package than, for example, the maintenance programs for housekeeping.

1.2.2      Transformations / DTPs / InfoPackages

In the third area, transformations / DTPs / InfoPackages, it depends on how the implementation was done. From a technical perspective, a transformation is a report, the so-called generated program. Figure 1.6 shows how to get to the corresponding generated program of a transformation, and also what happens when you try to investigate the generated program for syntax errors using the Code Inspector.
Figure 1.6: Code Inspector and Transformations
The generated program is an SAP object and cannot be investigated by the code inspector.

For the sake of maintainability and reusability, many customers outsource the implementations of start, end, field, or expert routines. Here you will find different approaches, from the simple use of includes to complex class models. Since outsourced implementations are customer-specific code, they can be reviewed using the Code Inspector (transaction SCI).

If the implementation takes place directly within the transformation, the corresponding transformation must be checked manually.
Generated program of a transformation
From a technical perspective, a transformation is a report in which a local ABAP OO class is defined. The report and the local class are based on templates, and these templates are the basis of the generated program. The program code that is added within the definition of the transformation as a start, end, field, or expert routine is generated into the local class.


In order not to have to check each transformation individually and manually, transformations can be activated automatically using the report RSDG_TRFN_ACTIVATE. The report can also be scheduled as a background job.

Via the selection screen of the report RSDG_TRFN_ACTIVATE, you can control whether a single transformation, a list of transformations, or the transformations with specific source or target objects are to be activated. Figure 1.7 shows how the report is run for a single transformation. If all transformations are to be tested for syntax errors by re-activation, all selection fields must be left empty.

When a transformation is activated, the generated program is regenerated (this happens only when necessary) and checked for syntax errors. A transformation that contains faulty code cannot be activated and remains inactive.


Figure 1.7: Re-activation of transformations

The log result on the right side of Figure 1.7 shows that activating the transformation first deactivates the associated DTPs. The report then activates the dependent DTPs again.

For the transformations that remain inactive, it must now be checked manually why they could not be activated. As an entry point for this rework, the result in the application log (transformations) can be used (see Figure 1.7), or the table RSTRAN.


Code pushdown for transformations and DTPs
A positive side effect of the re-activation of the transformations is that a code pushdown of the transformations is implicitly performed where possible. This of course assumes that the database is SAP HANA.

Using the report RSDHA_TRFN_CHECK_HANA_PROCESS, you can check which transformations are potential candidates for a code pushdown.


Routines in the DTP filter and in the InfoPackage
The structures that are used in the context of selections via BEx variables of type customer exit, or via ABAP routines in DTPs and/or InfoPackages, are currently not affected by the change. That is, action is only required here if, within your own implementation, a structure is used in which a component is based on the data element RSCHAVL.


2     SAP Notes


1823174 - BW 7.4 conversions and customer-specific programs


1943752 - SYNTAX_ERROR Dump Occurs when Executing a BW Query with Customer Exit Variable after Upgrading to BW7.4





2098262 - SAP HANA Execution: Only RownumFunction() calculated attributes are allowed



3      Links




Programs for Activating BW Objects in a Productive System

In Berlin I attended two SAP BW on HANA sessions. On Wednesday there was a session on the SAP BW product and roadmap by Lothar Henkes, and on Thursday an end-to-end scenario session, again with Lothar Henkes but mainly Marc Hartz. Additionally, I started the openSAP course this week after TechEd && d-code, where I saw a familiar face… Hello Marc!

In this blog I share some of my thoughts on what I saw in Berlin. As I found some things in Opensap quite important in reiterating what SAP BW is supposed to do I included a paragraph on that subject.



Source: SAP.


In the image above you see the main points of the new things that have been developed in SAP BW 7.4 up until SP8.

In the first presentation the new developments were grouped into a couple of themes. As I was impressed at the time by the structured manner of the presentation, I will follow that structure and reference the other sessions where relevant.


But before we dive into the things I heard in Berlin, I will point to the openSAP course that started just a couple of days ago. As SAP BW is clearly going through some rapid changes, it was good to go back and look at what the goal of the application is. In one of the first slides in week 1 this overview was given:

Source: SAP.

SAP BW is an application on top of a database. What it wants to do is help you to manage the data warehouse.

As it is an application, BW basically lays an abstraction layer over the database. In the past, due to all kinds of technical constraints, BW felt more like a technical exercise to get performance, or to get it to work, period.

Now that HANA is doing the heavy lifting, BW seems to be getting its focus back on what it was originally meant to do: create a business layer over your database to build a data warehouse more easily.


You can find the course here: SAP Business Warehouse powered by SAP HANA - Marc Hartz and Ulrich Christ

Try it. It is free and the first week looked very promising.



Virtual Datawarehouse

The virtual data warehouse is a layer that works because of SAP HANA: only with SAP HANA do you get the performance you need to use virtual layers. What SAP BW delivers are ways to create virtual objects that leverage this technology. Using it you can, for example, create separate views for different departments without having to copy the data. It also brings more flexibility, as in the past reloading data was a big part of the time needed to get changes done.

Source: SAP.

In SP8 you have two main objects for your virtual view: the CompositeProvider and the Open ODS view. The latter is meant for virtual access to external sources.

The CompositeProvider looks like the main tool for modelling. It enables you to combine InfoProviders with JOINs (merging) or UNIONs (appending). You can even use other CompositeProviders as a source; note however that this is currently UNION only.

Basically this means that you theoretically can store data only once and build virtual layer upon layer on top of that.

Personally I think that you will keep some kind of staging area around when you don't know whether the source system is going to retain the data, use transformations to create a persistent single version of the truth (things like cleansing and checking the data), and from there go with virtual layers.



The picture seems clear enough. From a large number of objects we go back to only a couple:


Source: SAP.



I was really enthusiastic about this, and now, after a few days, I still am. However, I do need to warn you that there is still a lot of complexity hidden within the objects. The Advanced DataStore Object (ADSO), for example, has three checkboxes that can be set independently of each other. These checkboxes determine which of the three tables underneath the application layer will actually be used, which means you have 2^3 = 8 different setups to choose from. In the presentation there was a mention of templates for different situations, which should help. From an architecture point of view, you have to look at the options and determine which should be used in which circumstances.

All in all it looks good. In the end-to-end session Marc Hartz showed us a live demo of the editor of the CompositeProvider.


Source: SAP.

This looks a lot better than the old editors for MultiProviders. With the ability to use a CompositeProvider as a source for other CompositeProviders, you can create simple building blocks that together build your application.


Big Data

For Big data management SAP BW differentiates between three types of data based on the amount of usage: Hot, warm and cold. Hot data will be in HANA in memory, warm data will be in HANA, but on disk and finally cold data is stored in Near line storage on separate servers.

This should help you to achieve a more efficient usage as you’re only investing in expensive equipment for the hottest data and can keep a more modest budget for the rest.



In this image you see an example of how you could manage this. Basically you have different persistent objects that do or don't reside in memory. Based on usage, you move the less-used data to the warm objects. From these objects, data flows on to near-line storage based on age and/or usage.



To be short. Run on HANA and hold on for dear life ;-)

Basically SAP BW was a two-tier system, which you had to manage carefully to keep performance up. A lot of ABAP code was all about collecting a lot of data and changing it on the application layer. As a BW consultant you often used ABAP just to squeeze out a bit more performance. For example, before the improved master data lookup, you avoided the standard transformation and used ABAP in the start routine to collect a lot of data into a variable, so that in the transformation you could use an ABAP routine to read that variable.


Now with BW on HANA, processing gets pushed down from the application server to HANA. This means that for performance you are best off avoiding your own coding as much as possible. Standard transformations can be pushed down to HANA; your own creations less so. For those, the old round trip to the application layer and back still applies.


In the presentation, note 2063449 was mentioned. This note tells you what has been pushed down and what is still to come. But as a rule of thumb: develop as if it is already pushed down. Eventually it will be, and if you did it the right way from the start, you won't have to redo it to get all the performance.


In planning, a pushdown to HANA is also taking place. The PAK (Planning Application Kit) should now be feature-complete in comparison to BW-IP. Furthermore, the FOX formula handling is improved, and you can use a CompositeProvider based on unions for a planning scenario.

The fact that you are also able to enter comments is a very nice feature. Design Studio customers often ask for precisely this.




SAP BW is reinventing itself and focuses on its core function: offering an application or business layer over your database. HANA is the driving force behind this, providing the heavy lifting needed. In the future, more and more functions will be executed in HANA itself. I am just wondering how they will balance between the customers on HANA and those on other databases.

