
SAP Business Warehouse


Ever wondered where your DTP/Transformation spent its time?


This is how you can find out:


First, you have to create a small program like the following:




                " loading the dtp



         " here you have to place the technical name of your dtp

         I_DTP  = 'DTP_4YIEJ....'


         R_R_DTP = R_DTP.




" we have to run it in sync-mode (not in Batch)




Now run this program through transaction SE30 or SAT.

After the execution of the DTP you will receive the result. In our case the time was spent in a SELECT on the PSP (WBS) element. We created an index, and the DTP then needed only 2 minutes instead of 30.



Of course you can also simply call transaction RSA1 through SAT (using the processing mode "serially in the dialog process (for debugging)"). But that way you have to filter out the overhead that RSA1 adds to the performance log, and the data is not written to the target (it is only simulated), so you might miss a bottleneck in your DTP/transformation.


thanks for reading ...

Questions about fast-growing tables in SAP systems are asked very often in the SCN forums. Such tables consume disk space and, even more importantly, processing large volumes of data in these tables slows down the system. This is also true for BW systems. Therefore it is common for SAP Basis administrators to do regular housekeeping on these tables. There are SAP Notes that deal with those tables and advise how to reorganize them. A perfect example is Note 706478 - Preventing Basis tables from increasing considerably. The Note discusses many tables across different areas, including BW.

One of them is a table related to the process chain log: RSPCLOGCHAIN.

The table holds the logs of process chains. In large BW systems running many chains on a daily basis, the table can grow very quickly. The regular way to get rid of process chain logs is from transactions like RSPC or RSPC1. In the log view there is a Delete function in the menu:


To use this functionality in an automated way, the ABAP report RSPC_LOG_DELETE can be utilized. You can set up a job running this report on a regular basis for particular chains, with a selection on logs from date/time or log ID.
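As a sketch, a small wrapper report can submit RSPC_LOG_DELETE with a saved variant, and this wrapper (or the report itself with its variant) can then be scheduled as a periodic background job. The variant name ZLOG_DEL is an example, not from the original post:

```abap
REPORT z_pc_log_cleanup.

" run RSPC_LOG_DELETE with a pre-defined variant; the selection on chain,
" log date/time or log ID is maintained in the variant ZLOG_DEL (example name)
SUBMIT rspc_log_delete USING SELECTION-SET 'ZLOG_DEL' AND RETURN.
```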

I found this report quite useful in BW housekeeping tasks.


PS: This blog is cross published on my personal blog site.


APD Query Tip

Posted by V J Aug 21, 2014

While working on one of our APD requirements, we came across a strange issue.


As per the requirement, we wanted to add a few new key figures to the APD query and hide a few old ones.


We went ahead and added the new key figures and hid the others. We did the mapping and activated the APD.


When we started testing, we realized the data was not coming out in the correct order. It was strange, and we couldn't figure out the reason.


We used trial and error and finally found the solution.


We moved all the hidden key figures to the bottom of the column box and the APD output was corrected. It was a totally new and unexpected discovery which we were delighted to find, hence sharing it here.


Conclusion: whenever you have hidden columns in an APD query, make sure you move them to the bottom of the column list to get correct data.

Part 2 of this blog series covers some more important aspects of NLS technology with SAP BW.


1) A prerequisite of NLS data archival is to compress the InfoCubes that you want to archive up to the latest data load request before archival. In many landscapes this is normally done as part of maintenance activities, but if not, it might be a time-consuming process to implement on all cubes and might impact project timelines.


2) A question often asked about NLS is whether archived data can be restored back to the live SAP BW system. The answer is yes, this is possible, but it might lead to inconsistencies in the system and will defeat the purpose of archival. The data reload option should be used only if there is a critical business need.


3) From a BW security perspective there are some prerequisites as well. For BW developers to perform archival-related activities under the developer role, the authorization object S_ARCHIVE needs to be assigned, with activity code, area and object set to '*'.

For end users: to read archived data from NLS, users should have the authorization object S_RS_ADMWB with activity codes 03 and 16 assigned, restricted on object RSADMWBOBJ.

These prerequisites might be specific to your NLS vendor as well, and you may need to provide additional authorizations if mentioned in the documentation provided by your NLS vendor.

4) Another important step in NLS archival is the creation of virtual InfoCubes. To read the data from NLS you will need to create new virtual cubes which actually connect to NLS. If a query needs to read data from NLS, these virtual cubes do the trick. You will need to modify the existing MultiProviders and queries to include these virtual cubes so that data archived on NLS is available for reporting.

I will cover some more aspects in the next part of this series.

Cheers !

Part 1 of this blog series covers some important aspects of NLS archival technology with SAP BW.


Some important points to keep in mind before starting a data archival project are :


1) Choosing an NLS vendor. There are many SAP-certified NLS vendors, and a vendor should be chosen based on your requirements and the product offered.


2) Correct sizing of the NLS box is important so that current and future data archival needs can be met.


3) Data to be archived: archived data is always online for reporting while using NLS, but the reporting speed is not comparable to reports running on BWA or HANA appliances. Only data which is rarely accessed should be put on NLS.


The next step is to determine which InfoCubes, and what data volume, should be archived. This decision can be made based on BW statistics along with input from the business users. InfoCubes which are big in size and infrequently used are the best candidates for data archival.


Below is a quick chart of the logic that can be used to determine how a particular InfoCube should be archived:



Archival Strategy.jpg


NLS archival is done using time slices, so it is important that InfoCubes have time characteristics which can be used for data archival.


InfoCubes which are refreshed on a daily basis (full load) need not be archived. If the SAP ECC or other source system is also undergoing archival, then only the live data remaining after archival in the source system will be loaded by the daily full loads to such InfoCubes, and data which has been archived will not be available in the SAP BW system for reporting.


InfoCubes which are delta-enabled, and whose delta loads don't bring data for time slices which have already been archived, are the best candidates for NLS archival. The reason is that if you archive data from a cube for a particular time slice and the daily delta brings in data for the same time slice, the data loads will fail.

If your daily delta loads do bring in data for time slices which have already been archived, it is better to create copy cubes, move the data for those particular time slices into them, and archive the copy cubes instead of the original cubes. After validation, the data can be deleted from the original InfoCubes. Existing MultiProviders and reports will have to be enhanced to include the new copy InfoCubes.


Every business requirement and landscape is different, and there are multiple ways data archival can be done using NLS. The above is one approach that can be used.

The second part of this series will cover some more aspects of NLS archival technology.


Cheers !

This blog has been translated with Google Translate. The original blog can be found here: ekessler.de



If a semantically partitioned object (SPO) was created based on a template object, an error can occur during activation. Figure 1.1 shows the construction of a semantically partitioned DSO; a DSO is used as the template.



Figure 1.1: Construction of an SPO with template


When activating the SPO, the error "Not possible to create external SAP HANA view for object <SPO>00 / Message no. RS2HANA_VIEW001" occurs.


The error message indicates that the system attempts to generate an external SAP HANA view for an SPO (or a HybridProvider).




Figure 1.2: Error in activating


The cause of the error is, in this case, the template DSO. In the template DSO the indicator External SAP HANA View for Reporting is set, see Figure 1.3.




Figure 1.3: Indicator External SAP HANA View for Reporting in the template object


When creating the SPO, first the reference structure for the SPO is generated. The reference structure of the SPO is used as a template for each partition of the SPO.


When the reference structure <SPO>00 of the SPO is created, the metadata of the original object is copied. The metadata of the template object also includes the indicator External SAP HANA View for Reporting.


External SAP HANA views are currently not supported for semantically partitioned objects (and HybridProviders). Therefore the maintenance option for the indicator External SAP HANA View for Reporting is not available for a semantically partitioned object (see Figure 1.4).




Figure 1.4: Properties of the reference structure of the SPO


We find the indicator HANAMODELFL (External SAP HANA view for BW object) in the table RSDODSO (directory of all DataStores), see Figure 1.5.




Figure 1.5: Indicator External SAP HANA view for BW object


To activate the SPO, the indicator HANAMODELFL must be removed for the reference structure of the SPO. The reference structure has the naming convention <SPO name>00. Alternatively, you can search for the reference structure by the name of the SPO in table RSDODSO (for cubes, see table RSDCUBE). Figure 1.6 shows both variants.
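As a small sketch, the affected reference structures can also be listed with a SELECT on RSDODSO (the field names are those shown in Figure 1.5; the LIKE pattern is an assumption based on the <SPO name>00 naming convention):

```abap
" list active DataStore reference structures that still carry the HANA view flag
DATA: lt_odso TYPE STANDARD TABLE OF rsdodso.

SELECT * FROM rsdodso INTO TABLE lt_odso
  WHERE objvers     = 'A'        " active version
    AND hanamodelfl = 'X'        " external SAP HANA view flag
    AND odsobject LIKE '%00'.    " reference structures end in 00
```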




Figure 1.6: Searching for the reference structure of the SPO in RSDODSO


Remove the indicator HANAMODELFL (SAP HANA View), see Figure 1.7, and save.




Figure 1.7: Remove indicator HANAMODELFL (SAP HANA View)


Subsequently, the SPO can be activated without errors.

Hi All,


Many of you might already have a list of items that need to be tested, specific to your landscape, when doing an upgrade. But I thought of posting this blog with a complete list of all BW test items from my project experience.


You can make use of this list for projects like BW upgrades, Service Pack upgrades, source system upgrades, HANA migrations and BEx upgrades.


The test items below have been divided into the following test components: Workbench, Query Designer, BEx Analyzer, BEx Workbooks, Web Templates and Others.


Component / Testing Item

Workbench Tcode - RSA1, RSPC, RSPC1, RSPCM, RSRT, RSANWB, RSCRM_BAPI, RSA2, RSA3, RSA7, RSA6, SM50, SM51, SM66, SE38, SM49, SE37, ST12, SE09, SE03, SM12, RSA1OLD.

Workbench Authorizations & Roles

Workbench Master Data Load - Attribute (Source System)

Workbench Master Data Load - Attribute (Flat File)

Workbench Master Data Load - Text (Source System)

Workbench Master Data Load - Text (Flat File)

Workbench Maintain Master Data

Workbench Maintain Master Data Text

Workbench Hierarchy Data Load - Source System

Workbench Hierarchy Data Load - Flat File

Workbench Transaction Data Load - Source System

Workbench Transaction Data Load - Flat File

Workbench Query Execution in RSRT

Workbench APD Execution - All types (Flat File, Info Cube, Multi Provider, DSO..)

Workbench Open Hub - All Types

Workbench Program Execution - Standard & Custom

Workbench Data Reconciliation

Workbench Batch Run - All variants

Workbench check/create indexes on cube

Workbench check/refresh cube statistics

Workbench Compress Cube

Workbench Delete PSA in process chain

Workbench Maintain cube/Contents to examine data in cube

Workbench Create Info Package

Workbench Create DTP

Workbench Create DSO

Workbench Create Cube

Workbench Create Multiprovider

Workbench Create Transformation

Workbench Create Data Source

Workbench Create Web Template

Workbench Create Query

Workbench Create Process Chain

Workbench Create cube as copy

Workbench Create InfoObject as Reference

Workbench Capture objects in Transports

Workbench Add characteristic to Cube

Workbench Add characteristic to ODS

Workbench Change Identification in M/P

Workbench Data modelling - Multiprovider integrity

Workbench Add characteristic to M/Provider

Workbench Change existing InfoObject

Workbench Changing Transfer/Update Rules

Workbench Change existing InfoSet

Workbench Create Aggregate (Not Applicable in case of HANA Migration)

Workbench Modify Aggregate (Not Applicable in case of HANA Migration)

Workbench BIA Index (Not Applicable in case of HANA Migration)

Workbench DSO Activation

Workbench MDX Statements

Workbench Interrupts Execution

Workbench Create a Job

Workbench Schedule a job/Release a Job

Workbench Change the Job Status manually

Others  Performance of the MDX Execution pings/runtimes

Others  Interface with PI/XI/DB Connect/UD Connect/Flat File/Informatica/EDW/GDW/SSIS for data transfer

Query Designer Execute Query in Query Designer

Query Designer Variable Entry Screen

Query Designer F4 for help Window

Query Designer Variants in Variable Entry

Query Designer Report Output

Query Designer Context Menu

Query Designer Export to Excel

Query Designer Download to PDF

Query Designer Filter options

Query Designer Conditions/Exception

Query Designer Go To Option

Query Designer Drill Down/Across Option

Query Designer Create Bookmark

BEx Analyzer Execute Query in Analyzer

BEx Analyzer Variable Entry Screen

BEx Analyzer F4 for help Window

BEx Analyzer Variants in Variable Entry

BEx Analyzer Report Output

BEx Analyzer Context Menu

BEx Analyzer Export to Excel

BEx Analyzer Download to PDF

BEx Analyzer Filter options

BEx Analyzer Conditions/Exception

BEx Analyzer Go To Option

BEx Analyzer Drill Down/Across Option

BEx Workbook Execute Query in Analyzer

BEx Workbook Variable Entry Screen

BEx Workbook F4 for help Window

BEx Workbook Variants in Variable Entry

BEx Workbook Report Output

BEx Workbook Context Menu

BEx Workbook Export to Excel

BEx Workbook Download to PDF

BEx Workbook Filter options

BEx Workbook Conditions/Exception

BEx Workbook Go To Option

BEx Workbook Drill Down/Across Option

BEx Workbook Macro Add-on

Web Templates Execute Query in Query Designer

Web Templates Variable Entry Screen

Web Templates F4 for help Window

Web Templates Variants in Variable Entry

Web Templates Report Output

Web Templates Context Menu

Web Templates Export to Excel

Web Templates Download to PDF

Web Templates Filter options

Web Templates Conditions/Exception

Web Templates Go To Option

Web Templates Drill Down/Across Option

Web Templates Create Bookmark



Abhishek Shanbhogue

What are interrupts?


An interrupt is a process type used in process chains to trigger the chain after completion of specific steps, either in BW or in any other source system. Interrupts are very helpful in automating the batch process in BW in case you don't have a third-party scheduler.


Business scenario


I am considering a classic example where SAP BW extracts data from SAP R/3 (ECC), assuming the batch process chains have to wait until the data pre-calculations and dependent business processes are complete in the source system.






How do we achieve this?


Step 1: The Interrupt process type can be found under General Services.


Step 2: Create a new Process Chain and include the Interrupt Process soon after the Start Process



   Different options in the interrupt scheduling:


  • Immediate
  • Date/Time
  • After job
  • After event
  • At operation mode
  • Factory calendar day



Step 4: Create an event in BW. Tcode SM64 will help you create an event in SAP BW and ECC.




Step 5: In my scenario I am using an event-based trigger, so that when an ECC job completes, the event is triggered, which in turn triggers the BW process chain.

So I will make use of this event in the process chain and in the ECC program.



Step 6: To trigger an event in BW from ECC you will need a program to raise the event (we have a custom program available, so I am making use of the same program here).

Tcode SE38 > event-raise program > maintain the variant as Z_TEST, which was created earlier.
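If no custom program is available, a minimal event-raise sketch could use the standard function module BP_EVENT_RAISE. The event name Z_TEST matches the variant above; the RFC destination 'BWCLNT100' is an example value and must be replaced with the destination pointing to your BW system:

```abap
REPORT z_raise_bw_event.

PARAMETERS: p_event TYPE btceventid DEFAULT 'Z_TEST'.

" raise the background-processing event in the BW system via RFC
CALL FUNCTION 'BP_EVENT_RAISE' DESTINATION 'BWCLNT100'
  EXPORTING
    eventid = p_event
  EXCEPTIONS
    OTHERS  = 1.
IF sy-subrc <> 0.
  WRITE: / 'Event could not be raised.'.
ENDIF.
```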




Step 7: Create a job in ECC using Tcode SM36 and include the program which needs to be scheduled, followed by the event-raise step, and schedule this job as per the business needs.

Define the background job.

Assume Z1 is the ECC program the BW process chain depends on, so include Z1.





Step 8: If your ECC job is scheduled daily at 8 PM, then the BW process chain should also be scheduled at the same time. Once the ECC job completes, it will trigger the event, which will in turn trigger the BW process chain.


















SAP CI/DB split - Issues and Analysis:-


In an SAP production landscape, we often have the CI (Central Instance), including all application servers, and the DB on a single host. Now that the HANA database is on the market, SAP customers are splitting CI and DB: the CI is moved to one host (a Linux host) and the DB stays on the old host. Later they can move the database from IBM DB2 to HANA.



During this, the OS of the CI also has to change, for example from AIX to Linux: the CI is moved to Linux while DB2 is retained on AIX. The CI has to be moved to Linux because SAP HANA is not supported in an AIX environment, so moving the CI to Linux is the first step towards installing the SAP HANA database in the future. After moving the DB to HANA, both CI and DB can again be kept on the new Linux host; both then run on Linux.


Generally, after a CI/DB split in an SAP landscape we may face many issues regarding communication, online/offline backups, performance of the BW server, etc.


Some of the common issues are listed below, along with how the BW, Basis and DB teams work together. After a CI/DB split we may face performance issues on BW; sometimes it takes a long time to open a transaction code in BW.


In RSA1, go to Modeling and click on InfoProvider; the system then hangs. This happens for most work areas in SAP BW.


The first step is to ask SAP Basis and the DB team to check from their end. Both teams should work together, as CI and DB are on different hosts.


As per the Basis analysis and advice, the DB team can run update statistics on BW: they will run REORG and RUNSTATS. How long this takes depends on the number of tables present in the database and how much data they carry.


REORG helps DB2 ensure that indexes become aware of all new data, no longer include deleted data, and collapse empty page space created by the deletion of data and indexes.


RUNSTATS gathers updated statistics on the volume and distribution of data within tables and indexes. This information is stored in the system tables and is used when the data is queried.


As a result, all tables were reorganized and their statistics refreshed.


The DB team can add significant memory to the DB2 buffer pools, as much as possible. Then the BW team can validate whether the BW server performance has improved. If performance is still slow, they can run an SAP trace to see where the time is being spent.


SAP BASIS can restart SAP.


Stopping the SAP system: log in at the OS level as a user with SAP administrator authorization (<SID>adm).


Enter the command: stopsap [DB|R3|ALL].


DB stops the database system: stopsap DB


R3 stops the instances & associated processes of the SAP System: stopsap R3


ALL stops both the database system & the SAP System: stopsap ALL


Starting the SAP system: log on at the OS level as a user with SAP administrator authorization (<SID>adm).


Enter the command: startsap [DB|R3|ALL].


The following applies to this command:


DB starts the database system:     startsap DB


R3 starts the instances & associated processes of the SAP System:     startsap R3


ALL starts both the database system & the SAP System:    startsap ALL


The DB team can adjust the settings so that more memory is given to both SAP and DB2.


If the performance is still poor, Basis can analyze the performance issues further. They can clear the user buffer from their end. The screen below shows the user buffer area; to reset this area, follow this step:




Authorization Values –> Reset User Buffer


Now BW Team can validate the performance.


In some cases we see issues where the sequential read time of transaction codes differs between users: some user IDs take less time for Tcode RSA1, while for other users the Tcode hangs for a long time.


We tried accessing the RSA1 Tcode with other users (DDIC/USER1) and it opens quickly, whereas it does not for USER2. The response time logs of both users can be compared.





On further analysis, the issue now seems to happen for all user IDs. It hangs while accessing table RSDVCHA.



Tracing the execution of transaction RSA1 shows it hanging on multiple tables.



We can also go for a quick restart of the application and the server. If Basis is unable to stop the database on BW, the DB team can stop it from their end: we can tell them that the SAP application is down and ask them to stop the database. Once the DB is stopped, we can restart both the DB instance and the CI instance.


The application has been stopped on the CI instance.

In some scenarios, communication does not happen between CI and DB. We can check R3trans from the DB server to see whether processes are hanging.

R3trans Return Codes:-


R3trans sets a return code that shows whether or not the transport succeeded. You can view details of transports in the log file. These are the return codes:

0: No errors or problems have occurred.

4: Warnings have occurred but they can be ignored.

8: Transport could not be finished completely. Problems occurred with certain objects.

12: Fatal errors have occurred, such as errors while reading or writing a file or unexpected errors within the database interface, in particular database problems.

16: Situations have occurred that should not have.


If this happens, the DB team can dig into the DB; it occurs mainly due to multiple locks in the database, which should be cleared from the DB end, and they have to check the DB2 CLI driver used for communication. If possible, they can reinstall the DB2 CLI driver to verify that it works correctly.


Accessing the DB2 database using DB2 CLI: a DB2 CLI application uses a standard set of functions to execute SQL statements and related services at run time. The DB2 Call Level Interface is IBM's callable SQL interface to the DB2 family of database servers. It uses function calls to pass dynamic SQL statements as function arguments, and it does not require host variables or a precompiler.


Install the DB2 CLI driver on the database server. To do this, we require approximately 50 MB of free space for each operating system of the application servers.


We should also ensure that the main version of the DB2 CLI driver matches the main version of the DB2 software on the database server. The fix pack level of the DB2 CLI driver must be equal to or lower than the fix pack level of the DB2 software on the database server.


After installing, check whether the SAP system and the SAP programs can find the DB2 CLI driver and use it.


In any directory, execute the following command:


R3trans -x


This call should not return any errors and should report the following:

"R3trans finished (0000)."


BWadm 10> R3trans -d

This is R3trans version 6.24 (release 741 - 12.05.14 - 20:14:05).

Unicode enabled version

R3trans finished (0000).


By this we can ensure communication is happening between CI and DB.


In another scenario, R3trans responds fine and the application also connects fine, but the performance is again slow.


This may be due to online or offline backups. Tivoli Storage Manager (TSM) provides powerful, centralized backup, archive, and storage management. Tivoli Storage Manager also plays a role in the backup and recovery of DB2 Content Manager databases in the event of hardware failure. The DBAs and the storage team can work together to get backup issues solved. Sometimes the backup job hangs on a TSM resource (a storage device) for long periods of time, creating numerous locks that can impact all SAP jobs, which ends up hanging BW transaction codes. In this case we have to cancel the backup jobs and check the BW performance.


Daily online backups are taken of the DB, and in the case of an application upgrade, and before the CI/DB split, an offline backup should be taken. After the CI/DB split we should also monitor the backup jobs to see whether they create any locks.


SAP BW 7.4 -Analysis & issues:-

In SAP BW release 7.4 SPS2 the domain RSCHAVL was converted from CHAR 60 to SSTRING 1333. In 7.4 the characteristic 0TCTIOBJVAL references the data element RSCHAVL_MAXLEN and has the type CHAR 250. Since the maximum permitted length of characteristic values is 250 characters, the values of all characteristics can be stored in 0TCTIOBJVAL.

Compared with previous versions, the characteristic 0TCTIOBJVAL previously referenced the data element RSCHAVL directly, which uses the domain RSCHAVL. So the characteristic 0TCTIOBJVAL, and other objects that reference it, had to be changed in 7.4.

Data Element in SAP BW 7.4 Version:-




Data Element in SAP BW previous Version:-



The characteristics 0TCTLOW and 0TCTHIGH referenced the characteristic 0TCTIOBJVAL. Since both characteristics are frequently used together in the key of a DSO, they can no longer reference 0TCTIOBJVAL: both together would be 500 characters long, but the total key length of an ABAP Dictionary table in an SAP system must be shorter. Therefore a new characteristic 0TCTIVAL60 of type CHAR 60 was introduced, and the characteristics 0TCTLOW and 0TCTHIGH now both reference 0TCTIVAL60.


They previously had the type CHAR 60 and still have the same type after the upgrade. As a result, all objects that use these two characteristics together are still executable; however, they only work for applications whose characteristic values are no longer than 60 characters.

During the SUM tool run, the XPRA program RSD_XPRA_REPAIR_0TCTIOBJVL_740 copies the contents of the SID table of 0TCTIOBJVAL (/BI0/STCTIOBJVAL) to the SID table of 0TCTIVAL60 (/BI0/STCTIVAL60).


The characteristics 0TCTLOW_ML and 0TCTHIGH_ML are created and they reference 0TCTIOBJVAL.




In the previous version we used the DSO 0PERS_VAR, which contains the characteristics 0TCTLOW and 0TCTHIGH in its key part. Since this DSO cannot store characteristic values longer than 60 characters, BW 7.4 came with a new DSO 0PERS_VR1; the data part of this DSO contains the two characteristics 0TCTLOW and 0TCTHIGH.

The program RSD_XPRA_REPAIR_0TCTIOBJVL_740 activates the new DSO and copies the contents of the previous DSO 0PERS_VAR to the new DSO 0PERS_VR1. Then the personalization works as usual.

The programs that run during the upgrade activate new objects and, if necessary, copy data from old objects to new objects. However, they do not delete obsolete objects. The DSO 0PERS_VAR for storing the personalized variable values is no longer used.


The database tables RSECVAL and RSECHIE that store analytical authorization objects are no longer required.

Database table RSECVAL


Database table RSECHIE


The program RSD_XPRA_REPAIR_RSCHAVL_740 copies their contents to the tables RSECVAL_STRING and RSECHIE_STRING. Once it has been executed successfully, we can delete the contents of the tables RSECVAL and RSECHIE.


The tables RSRNEWSIDS and RSRHINTAB_OLAP are also no longer required. The program RSD_XPRA_REPAIR_RSCHAVL_740 also copies the contents of these tables to the new tables RSRNEWSIDS_740 and RSRHINTAB_OLAP_S respectively. Once it has been executed successfully, we can also delete the contents of these tables.









ABAP Program:-

In SAP BW 7.4 the domain RSCHAVL was changed from CHAR 60 to SSTRING 1333. As a result, data elements that use the domain RSCHAVL are "deep" types in the ABAP context. Therefore some ABAP language constructs are no longer possible; they produce syntax errors or cause runtime errors in customer-specific programs.


Texts with a length of up to 1,333 characters are possible for characteristic values. For this, the structure RSTXTSMXL was created, which is a "deep" type in the ABAP context. In the internal method interfaces and function module interfaces that handle the texts of characteristic values, the type RSTXTSML was replaced with RSTXTSMXL. However, the structure RSTXTSML itself remains unchanged and is still required for the description of metadata.





The change should have little effect on our programs. We must expect problems where we operate on characteristic values with a generic type, e.g. in variable exits, or where we call SAP-internal functions or methods whose interfaces were changed by SAP.


Most of the problems are syntax errors that result in a program termination. We can use the Code Inspector tool to systematically detect and resolve them. We can run a Code Inspector check both before and after the upgrade; it will show the things that need to change for 7.4.




The include ZXRSRU01 and the function module EXIT_SAPLRRS0_001 are not analyzed by the Code Inspector. This include must be fixed by an ABAPer from SE38. The point to remember is to use the keyword TYPE instead of LIKE: after the enhancement, RRRANGEEXIT is a deep structure, so we should use TYPE instead of LIKE.
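As a minimal sketch of this change in a variable exit (the variable name l_s_range and the value '1000' are examples, not from the original exit code):

```abap
" before 7.4 the range work area was often declared with LIKE:
"   DATA: l_s_range LIKE rrrangeexit.
" since RRRANGEEXIT is now a deep structure, declare it with TYPE:
DATA: l_s_range TYPE rrrangeexit.

l_s_range-sign = 'I'.
l_s_range-opt  = 'EQ'.
l_s_range-low  = '1000'.
APPEND l_s_range TO e_t_range.
```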




Before the upgrade, the structure looks like this:



Post upgrade:-


It is similar with the function module:-


These are the ABAP code changes that should be done by an ABAPer after the BW 7.4 upgrade.


I have seen a lot of problems recently related to the transport of transformations, with dumps in program CL_RSO_TLOGO_PERSISTENCY, and I thought it would be a good idea to give a brief overview of this issue and how the dump can be avoided. Furthermore, I would like to give you some hints on what should be checked to analyze the problem, and finally, in the last part of this blog entry, summarize the release-specific properties.









Versioning of the BW metadata was implemented in the TLOGO framework with BW 7.30. If transformations contain orphaned metadata, this dump occurs as a result. In many cases the issue occurs because there are inconsistent entries for the affected transformations in the tables RSTRANSTEPROUT and RSTRANROUTMAP.

Running the report RSTRAN_ROUT_RSFO_CHECK (which checks formulas and routines with regard to completeness and consistency of metadata) returns the list of affected TRANIDs and their related errors.



Effects of note 1369395



You should check only the post-implementation manual steps from this Note, where the instructions for report RSTRAN_ROUT_RSFO_CHECK are described.

This report checks the formulas and routines with regard to completeness and consistency of metadata.


So you need to run this report in "Check" mode to identify the inconsistencies, and then in "Repair" mode to correct them.


Caution: you have to perform this manual activity separately in each system into which you transport the Note implementation.



This action cannot be transported and must be executed in all target systems of the transport landscape.

Another very relevant point is that you must transport the cleaned-up transformation using a NEW transport request. The old transport contains incorrect objects and can no longer be used; you should delete the old transport from all import buffers.


I have seen several times that the code correction from the note can be missing on your system:

When you create a NEW transport request, make sure that all the necessary routines are included; to do this, select the option "All necessary objects".



List of referenced notes and documents:


#1801309 - Dump in CL_RSO_TLOGO_PERSISTENCY during the transport of transformations in BW 7.30

#1760444 - Transport failure: The object name is not allowed to be empty (R7105)

#1880563 - Manual task for DUMP in CL_RSO_TLOGO_PERSISTENCY



I'd like to share some knowledge I recently stumbled upon while creating a transformation and having to debug it, since it did not work as expected.

Since I did not see any hints about this on SCN or in the help, I thought I'd share it:


When I use custom routines in transformations, I like to include proper exception handling and write monitor entries, since you can never be sure what kind of data is being input by the users, and you don't want to waste too much time searching for the document that actually caused the error.

The help specifies the exceptions that can be used like this:


Exception handling by means of exception classes is used to control what is written to the target:

  •   CX_RSROUT_SKIP_RECORD: If a RAISE EXCEPTION TYPE cx_rsrout_skip_record is triggered in the routine, the system stops processing the current row and continues with the next data record.
  •   CX_RSROUT_SKIP_VAL: If an exception of type cx_rsrout_skip_val is triggered in the routine, the target field is deleted.
  •   CX_RSROUT_ABORT: If a RAISE EXCEPTION TYPE cx_rsrout_abort is triggered in the routine, the system terminates the entire load process. The request is highlighted in the extraction monitor as having been Terminated. The system stops processing the current data package. This can be useful with serious errors.


(See Routine Parameters for Key Figures or Characteristics - Modeling - SAP Library)

What the help does not say is that there is one obstacle if you try to use it not with a key figure, but with a characteristic:


The thinking behind this seems to be that if the record is not valid (i.e. you raise the exception in a characteristic routine), the whole record is skipped.


So instead of the CLEAR that happens when you raise the exception for a key figure (example):


You get a RAISE EXCEPTION TYPE cx_rsrout_skip_record:


If you search for it, you can easily find it in the generated program, though.
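To make this concrete, here is a minimal sketch of a characteristic routine that skips the whole record explicitly and writes a monitor entry first. The source field name, the use of message class 00 with message 398, and the message text are illustrative assumptions, not taken from the original post.

```abap
*$*$ begin of routine - insert your code only below this line        *-*
    " Illustrative example: for a characteristic, raising
    " cx_rsrout_skip_val would skip the whole record anyway, so make
    " the intent explicit and raise cx_rsrout_skip_record yourself.
    IF source_fields-doc_number IS INITIAL.   " assumed source field
      " write a monitor entry so the offending record can be traced
      monitor_rec-msgid = '00'.    " generic message class (assumption)
      monitor_rec-msgty = 'W'.
      monitor_rec-msgno = '398'.   " message 00 398 prints msgv1-msgv4
      monitor_rec-msgv1 = 'Record skipped: empty document number'.
      APPEND monitor_rec TO monitor.
      RAISE EXCEPTION TYPE cx_rsrout_skip_record.
    ENDIF.
    result = source_fields-doc_number.
*$*$ end of routine - insert your code only before this line         *-*
```

The monitor entry is what later shows up in the DTP monitor, which is exactly what helps you find the document that caused the problem.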

Hope it helps with the proper usage of the routines and the exceptions.

Hi Everyone,


This blog gives you a brief idea about the 'General Services' option under Process Chains and a few useful links/docs related to it (especially for beginners).



It includes various options like


  • Start Process
  • Interrupt Process
  • AND(Last)
  • OR(Each)
  • EXOR(First)
  • ABAP Program
  • OS Command
  • Local Process Chain
  • Remote Process Chain
  • Workflow (Remote also)
  • Decision Between Multiple Alternatives
  • Is the Previous run in the chain still Active?
  • Start job in SAP Business Objects Data Services



Start Process :


As we all know, this process type lets us start the process chain.

  • Every Process Chain starts with this process type.
  • It is Mandatory in all the Process Chains.
  • It can be scheduled according to the client's needs.
  • It triggers the Process Chain.



Interrupt Process :



  • This process type interrupts the currently running process chain and checks the condition mentioned in it.

Once you drag this process type onto the work area, you have to create a new variant for it. When you are creating the variant, you will see the screen below.



  • If it's 'Immediate', then the next process connected to the 'Interrupt Process' is carried out instantly.
  • If it's 'Date & Time', then it waits till the mentioned date/time is reached, and then the process chain resumes.
  • If it's 'Event/Job', then the particular job/event will be triggered. After it completes, the process chain will continue to run.



Useful Link:











AND - When we use the AND operator, it checks whether all the preceding processes completed successfully. Only if ALL of them are successful will it proceed to the following process connected to the AND process type.



If both '1' and '2' in the above snapshot are completed, then the 'Program' connected to the AND process will be executed.



OR - When we use the OR operator, it checks whether any of the preceding processes completed successfully. If EITHER of them succeeds, it proceeds to the following process connected to the OR process type.



If either '1' OR '2' in the above snapshot completes successfully, then the 'Program' connected to the OR process will be executed.



EXOR - When we use the EXOR operator, it checks whether exactly one of the preceding processes completed successfully. If EITHER of them succeeds, it proceeds to the following process, whereas if both of them succeed or both fail, EXOR will not proceed further.




Only if either '1' OR '2' in the above snapshot completes successfully will the 'Program' connected to the EXOR process be executed.

If both '1' & '2' in the above snapshot are completed successfully, then the 'Program' which is connected to EXOR process will not be executed.

If both '1' & '2' in the above snapshot fail, then the 'Program' connected to the EXOR process will not be executed.



ABAP Program :


  • This lets us include both standard and custom ABAP programs from various destinations, such as the local workstation or other source systems connected to our BW system through RFC connections.
  • You can also trigger the ABAP program through the events that you have created for it.
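Anything that can run as a background job works with this process type. As a minimal illustration, a housekeeping report like the sketch below could be attached to the 'ABAP Program' step; the custom log table ZBW_LOADLOG and the 30-day retention are made-up examples.

```abap
" Illustrative housekeeping report for the 'ABAP Program' process
" type. ZBW_LOADLOG is a hypothetical custom log table.
REPORT zbw_pc_cleanup.

PARAMETERS p_days TYPE i DEFAULT 30.   " retention in days (assumption)

START-OF-SELECTION.
  DATA lv_cutoff TYPE d.
  lv_cutoff = sy-datum - p_days.

  DELETE FROM zbw_loadlog WHERE log_date < lv_cutoff.
  COMMIT WORK.

  " A success message ends up in the job log / process chain log
  MESSAGE s398(00) WITH 'Deleted log entries older than' lv_cutoff.
```

Because the message is written to the job log, the result of the step is visible directly in the process chain log.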


Useful Links:







OS Command:


  • If we want to execute certain scripts or OS commands, this process type helps us do so.
  • Commands are defined in transaction SM69 and can be executed/tested in SM49.


Useful Link :





Local Process Chain :


  • This lets us add a local process chain that is present in our current BW system.
  • When control moves to this process, the referenced process chain (the local process chain) is executed in the background; if it is successful, control returns to the current process chain.


When you drag the 'Local Process chain', you will get the following screen.



  • Click on the input help, which will show all the process chains in your BW system.
  • Select the one you want to add to the new process chain.
  • Once you have selected your process chain, you will get a screen similar to the one below.




  • Then you can connect it to other process types in the process chain, as in the snapshot below.




Remote Process Chain :


  • This is similar to the Local Process Chain, except that the referenced process chain is in a different system, connected to our current BW system through RFC connectivity.



Useful Link:






Decision Between Multiple Alternatives :


  • This process can be considered the decision maker of the process chain. While creating this process, we can write our 'If..else' condition in it.
  • The subsequent process is then carried out according to the condition fulfilled.


You can refer to the useful document below on 'Decision Between Multiple Alternatives'.


Useful Links :








Is the Previous run in the chain still Active? :

  • It's a new process type added in BW 7.0.
  • This process type checks whether the previous run of the process chain was successful or not. If it was successful, it continues with the subsequent processes.


Sample Scenario :


Suppose the process chain runs every Friday. When it runs on the second Friday of the current month, you need to check whether the chain ran successfully on the previous Friday. Only if it did should it run on the second Friday. To implement this logic, we use this process type.


There are many scenarios in which it can be used effectively.



Useful Link :





Start job in SAP Business Objects Data Services :


  • This process helps us run BODS jobs from the BW system.
  • BODS (BusinessObjects Data Services) is an SAP tool that can extract data from both SAP and non-SAP data sources.
  • The jobs extract data from various data sources through Data Services and pull them into BW.



Useful Link  :







I hope that this blog is helpful for you.


Thanks a lot for reading this Blog



Gokulkumar RD

The purpose of this document is to help people who have the BCS component installed in their landscape along with BW during a BW upgrade. As I had to do thorough research before I came across the solution, I thought of sharing it in a blog so that, in the future, anyone who faces a similar problem can refer to it. So, let's look at the problem that my team faced during the upgrade and what we did to resolve it.

Recently, we did an upgrade of our BW environment from 7.0 to 7.4. During our BW post-upgrade activities, we found an error in one of our BEx queries, as shown below:



This BEx query was built on top of a BCS cube. BCS is a SEM component based on BW. Strategic Enterprise Management (SEM) is an SAP product that provides integrated software with comprehensive functionality that allows a company to significantly streamline the entire strategic management process. The BCS component is the part of SEM that provides complete functionality for legally required and management consolidation by company.

Upon choosing 'Individual display' from the above error message, we got the screen below. This is the Data Model Synchronizer screen, which highlights the differences in the data models between the BCS and BW applications. Here, for field MSEHI, we can see that a difference exists between the BCS and BW landscapes, as shown below, which is the root cause of this issue.


To resolve this issue, we first tried to follow the details in the message, as shown in the window below, but with that, every object (cube, DSO, etc.) built on top of BCS got regenerated in BW, and all the existing modifications made to these objects by the BW team were lost.


So, in order to avoid that, we used program 'UGMD_BATCH_SYNC'. This program synchronizes the BW and BCS applications without regenerating anything. The details of this program can be found at the link below:

Manual Data Synchronization - Business Consolidation (SEM-BCS) - SAP Library.

When executing this program, we need to specify the following:

  • Application
  • Application Area
  • Field name
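If you need to run the synchronization for several fields, the report can also be submitted from a small wrapper program. Note that the selection-screen parameter names below are assumptions; verify them on the report's actual selection screen in SE38 before using this.

```abap
" Hypothetical wrapper around UGMD_BATCH_SYNC. The parameter names
" p_appl / p_area / p_field are assumptions -- check the report's
" real selection screen in SE38 first.
REPORT zrun_ugmd_sync.

SUBMIT ugmd_batch_sync
  WITH p_appl  = 'BCS'      " application, as shown in the synchronizer
  WITH p_area  = 'FIN'      " application area (system specific; assumption)
  WITH p_field = 'MSEHI'    " field with the data model difference
  AND RETURN.
```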


Application and application area can be found out as highlighted below:


We executed this program with the selections shown below, and both applications got synchronized without the regeneration of any BW objects.

Sooo, have you thought about buying HANA? Ha-ha, just kidding! No, folks, this is not another sales pitch for HANA or some expensive “solution”, but a simple customer experience story about how we at Undisclosed Company were able to break free from the BI (*) system with no external cost while keeping our users report-happy. Mind you, this adventure is certainly not for every SAP customer (more on that below), so YMMV. The blame, err… credit for this blog goes to Julien Delvat who carelessly suggested that there might be some interest in the SAP community for this kind of information.


It might be time to part with your BI system if…


… every time you innocently suggest to the users “have you checked if this information is available in BI?” their eyes roll, faces turn red and/or they mumble incoherently what sounds like an ancient curse.

… you suspect very few users actually use the BI system.

… you have a huge pile of tickets demanding an explanation of why report so-and-so in BI doesn’t match report so-and-so in SAP.

… your whole BI team quit.

… the bill for BI maintenance from your hosting provider is due and you can think of at least 10 better things to do with that money.


What went into our strategic decision


  • Tangible cost to run BI. Considering the number of active users and the value delivered, we were not getting our money’s worth.
  • Relatively small database size. The Undisclosed Company is by no means a small mom-and-pop shop but due to the nature of our business we are fortunate to have not as many records as, say, a big retail company might have.
  • Reports already available in SAP. For example, it just happened that a few months before the “BI talk” even started, our financial team had already made a plea for just one revenue report in SAP that they could actually rely on. Fortunately, we were able to give them all the information in (gasp!) one SQ01 query.
  • No emotional attachment to BI. As far as change management goes, we had the work cut out for us (see the eye rolling and curse-mumbling observation above). The users already hated BI and SAP team didn’t want anything to do with it either.


We're doing it!


Personally, I suggested that we simply shut down BI and see who screams, but for some reason this wasn’t received by management with as much excitement as I was expecting.


Instead we took a list of the users who had logged into BI in the past few months (it turned out to be a rather small group), and our heroic Service Delivery manager approached all of them to find out what reports they were actually using in BI and how they felt about them. Very soon we had an interesting matrix of users and reporting requirements, which our SAP team began to analyze. Surprisingly, out of the vast BI universe, the users actually cared about fewer than 15 reports.


For every item we identified a potential replacement option: an existing report in SAP (either custom or standard), a new query (little time to develop), or a new custom ABAP report (more time to develop). With this we were able to come up with a projected date for when we could have those replacements ready in SAP and therefore could begin the BI shutdown. It was an important step because having a specific cut-off date puts fear into the users’ minds. Otherwise, if you come asking them for input or testing with no specific due date, we all know it’s going to drag on forever (there always seems to be an “end of month” somewhere!).


Drum roll, please


So what did 15 BI reports come down to in ECC? We actually ended up with just 2 custom ABAP reports and 2 new queries; everything else was covered by standard SAP reports and a couple of existing custom reports. Interestingly, we discovered that there were sometimes 3 different versions of essentially the same report delivered as 3 different reports in BI. In those cases we combined all the data into one report/query and trained the users on how to use ALV layouts.


The affected functional areas were Sales, Finances and QM (some manufacturing reports were and are provided by our external MES system). There was very little moaning and groaning from the user side - it was definitely the easiest migration project I’ve ever worked on. Breaking free from BI felt like a breeze of fresh air.


Are you thinking what I’m thinking?


If you’ve already had doubts in the BI value for your organization or this blog just got you thinking “hmm”, here are some of our “lessons learned” and just random related observations and suggestions. (Note – HANA would likely make many of these points obsolete but we have yet to get there.)

  • If you feel you don’t get adequate value from your BI system it is likely because you didn’t really need it in the first place.
  • If you are already experiencing performance issues in the “core” SAP system, you might want to hold on to your BI for a bit longer (unless it’s BI extraction that is causing the issues). Adding more reporting workload to the already strained system is not a good idea.
  • Find the right words. If we had just told our business folks that we were shutting down BI, all hell would have broken loose (“evil IT is taking away our reports!!!”). But when you start the conversation with “how would you like to get a better report directly from SAP?” it’s a different story. And don’t forget to mention that they will still be able to download reports into Excel. Everybody loves Excel!
  • Always think ahead about your reporting needs. I can’t stress this point enough. For example, in our scenario one of the reporting key figures is originally located in the sales order variant configuration. If you’ve never dealt with VC, let me tell you – good luck pulling this data into a custom report. (The same problem with the texts, by the way – makes me shiver when some “expert” suggests on SCN to store any important data there). So our key VC value was simply copied to a custom sales order (VBAP table) field in a user exit. Just a few lines of code, but now we can easily do any kinds of sales reports with it. It only took a couple of hours of effort but if you don’t do it in the beginning, down the line you’ll end up with tons of data that you cannot report on easily.
  • Know your SAP tables. Many times custom SAP reports get a bad rep because they are simply not using the best data sources. E.g. going after the accounting documents in BSEG is unnecessary when you can use index tables like BSAD/BSID and in SD you can cut down on the amount of data significantly if you use status tables (VBUK/VBUP) and index tables like VAKMA/VAKPA. I’m sure there are many examples like that in every module – search for them on SCN and ask around!
  • Queries (SQ01) are awesome! (And we have heaps of material on SCN for them - see below.) If you have not been using them much, I’d strongly encourage you to check out this functionality. You can do authorization checks in them and even some custom code. And building the query itself takes just a few button clicks with no nail-biting decisions whether to use procedural or OO development. SAP does everything for you – finally!
  • Logistics Info System (LIS) – not so much. Even though I wouldn’t completely discount it as an option for reporting (yet), it is usually plagued by the same problems as BI: inconsistent updates and the “why is this report different from that report” wild goose chase.
  • When it comes to reports, “think medium”. You’ve probably noticed that in our case the number of reports was reduced greatly in SAP compared to BI. Why was that? It turned out that we had many reports that essentially used the same data but presented it slightly differently. There is no need to break up reports when display customization can be easily achieved using ALV layouts, for example. And on the other side of the spectrum are the “jumbo reports” that include data from 4 different modules because someone requested the report 10 years ago and thought it was good, so he/she told other users about it, and the other users liked it too, BUT they needed to add “just these two fields” to make it perfect; then more and more users joined this “circle” and everyone kept asking for “just these two fields”, but nothing was ever removed because the first guy left the company years ago and now no one even remembers what the original requirement was. You end up with an ugly Leviathan of a report that eventually has to be sent to the farm upstate. Try to avoid those.
  • Be creative. If a “jumbo report” cannot be avoided (ugh!), you might want to consider creating a “micro data warehouse” in a custom table that can be populated by a background job daily (or more frequently, if needed). Such reports usually do not require up-to-the-second information, and we get the best of both worlds: minimize the impact on performance by pre-processing the data and allow the users to run the reports on their own. Another tip: if a report is used by different groups of users and includes certain fields that are more time-consuming than others, you can add an option to the selection screen to exclude those fields when they’re not needed. Also, simply training the users on ALV functionality can be very helpful. For example, we noticed that some users ran a report for one customer, then went back to the selection screen and ran it for another. But running the report for two customers and then using an ALV filter would actually be more efficient.
  • Don’t let the big picture intimidate you. The big goal of taking down a large (as we thought!) productive system seemed pretty scary in the beginning, but, as you could see, we broke it down into pieces and just got it down one by one. And this was done by the team of just 5 people in 3.5 months while supporting two productive SAP systems and handling other small projects as well. If we did it, so can you!


Useful links


Next Generation ABAP Runtime Analysis (SAT) – How to analyze performance - great blog series on the SAT tool. My weapon of choice is still good old ST05, but in some cases it might be not enough.

There are many SCN posts regarding ABAP performance tuning, although the quality of the posts varies greatly. This wiki page could be a good start; use Google to find more. Look for the newer/updated posts from reputable SCN members. (Hint: I follow them!)

Some tips on ABAP query - this is Query 101 with step by step guide, great for the beginners.

10 Useful Tips for Infoset Queries - good collection of miscellaneous tips and tricks

Query Report Tips Part 2 - Mandatory Selection Field And Authorization Check - great tip on adding a simple authority check to a query.


(*) or BW? – I’m utterly confused at this point but had the picture already drawn so let’s just stick with BI

