
SAP Business Warehouse


Imagine you've created an (acquisition) DSO whose content you'd like to browse. Imagine this DSO contains General Ledger data and is called MC_O005.



Of course "Display data" or SE16 can come in handy, but slicing and dicing on this data isn't easy. Wouldn't it be nice if there was a (hidden gem) feature which enables you to browse the content of this infoprovider via transaction RSRT (the query monitor transaction), WITHOUT an actual query being created on top of this infoprovider?



Execute transaction RSRT and enter the following query name: <infoprovider name>/!!A<infoprovider name>. In our example, this leads to the query name MC_O005/!!AMC_O005 (as shown in the screengrab above). Pressing Enter generates the following success message at the bottom left of the screen.



When the above success message is shown, a temporary query has been generated, which can be executed by pressing the Execute button. The result of this temporary query is similar to that of a regular query.


Isn't this neat ;-)


(This blog has also been cross posted on http://www.thesventor.com)

Today I'm writing this blog post to reiterate the importance of choosing the right component when opening an incident, and also to explain how to make the best possible search before opening it.

SAP is working a lot on providing notes, KBAs and other documentation about known errors, bugs and frequent customer questions, in order to improve customer satisfaction and response time.


To identify the documentation relevant to your inquiry, it is very important to know how to search effectively. The note below can help you:

2081285 - How to get best results from an SAP search?


Also, as you might know, the component chosen when opening your incident determines the expert team that will process your inquiry, as well as the result of your document search. Choosing the wrong component can cause unnecessary delays until your incident reaches the correct team, and the relevant notes and documentation will not be identified.


As an information source on how SAP components are defined, I'd like to recommend the following SAP Wiki, in which you'll find relevant information on how to determine the component of your message and, therefore, obtain a faster response from our experts.







This information is valid for all SAP Support component areas, and I suggest you review it to make sure that you are opening your incidents under the correct expertise area, avoiding delays in the resolution of your future issues.




Greetings to All,

Hope you are all having a wonderful time. In this blog I would like to explain the step-by-step procedure for loading a flat file into an InfoCube in a BW system. I am in fact thrilled to write my first blog in this space.


Loading a flat file into a BI system is one of the basic requirements that every BI consultant should know; however, because the process is long, we may at times skip a step or two, which results in a failed data load. In this blog I would like to walk through all the steps in detail along with screenshots. I hope you find it interesting.

The following steps need to be carried out to complete our task:


  1. Create an InfoObject Catalog for both characteristics and key figures.
  2. Create InfoObjects for both of the InfoObject Catalogs created in step 1.
  3. Create a Flat file Source System.
  4. Create an Application Component for the created Source System.
  5. Create a data source in the application component.
  6. Load data into PSA(Persistent Staging Area).
  7. Create an Info Package.
  8. Create an InfoCube by properly assigning InfoObjects to the fact and dimension tables.
  9. Load data into Infocube through transformation and DTP(Data Transfer Process).

One needs to follow all the above steps for a proper data load into the InfoCube. Before we get into the steps, let me brief you on the requirement. Consider a local file with five columns:

  1. Eid(Employee ID)
  2. Did(Department ID)
  3. Pid(Project ID)
  4. NOH(No Of Hours Worked)
  5. Sal(Salary)

Now we have to load the values in this file into the BI system. For simplification, let us consider only five entries in the file as shown below. Please note that the file should be in CSV format.
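
Since the original screenshot is not reproduced here, a hypothetical sample file (header row plus five data rows, values invented purely for illustration) could look like this:

Eid,Did,Pid,NOH,Sal
E01,D01,P01,40,1000
E02,D01,P02,35,1200
E03,D02,P01,45,1500
E04,D02,P03,38,1100
E05,D03,P02,42,1300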


Out of these five columns, the first three (Eid, Did, Pid) will be categorized as characteristics since they are not subject to change.

The columns NOH and Sal will be grouped as key figures since their values may change in the future.

Now let us see each step in detail.

Step 1 - Create an Info Area:

After logging into the system, execute the T-Code RSA1. You will land on the screen below. Go to the InfoProvider tab, right-click on InfoProvider and create an InfoArea as shown below.


Once the above step is done you will get a pop-up; give the name of the InfoArea and its description as shown below and click Continue.


Once the InfoArea is created, you can see it in the InfoProvider list as marked below.


Step 2- Create an Info Objects Catalog for both Characteristics and Key figures.

Now go to the InfoObjects tab and navigate to the InfoArea that was just created; if you don't find it, refresh the objects using the refresh button (marked in green below).

Right-click on the InfoArea and create an InfoObject Catalog for characteristics as shown below.


In the pop-up, fill in all the details as shown below. Please do not forget to select the Characteristics radio button for the object type, since this catalog is created for characteristics. Then click the Create icon and then activate (icon marked in green).


Now you will find the catalog created as shown below,


Now proceed to create another catalog for key figures, with the only difference being that you have to select the Key Figure object type as marked below.


Once created, activate it and you will find both catalogs as shown below.


Now right-click on each catalog and create the respective InfoObjects as shown below.


Now you will get a pop-up as shown below; here we are creating the InfoObject for the employee ID. Fill in the required details and click Continue.


In the next screen provide the details as shown below. In this case, since we are creating the InfoObject for the employee ID, we select the character string data type with a length of 3 and then click the Activate button.


Now you will find the created employee ID InfoObject under the characteristics catalog as shown below.


Similarly, for the department ID and project ID fields we select the same data type and length, since the data is similar. We have chosen such data just for simplification; in real scenarios you may get more complex data with more complex data types. Once all three are created, we will find all the InfoObjects in the Test_IOC catalog as shown below.


Now right-click on the InfoObject catalog for key figures and create two InfoObjects, for salary and number of hours, in the same way as above.

Since these are key figures, the data types to select come from a different set, as shown below. Here we select Number for convenience.


Once both the key figures are created we will get a display as shown below.


So far we have created two InfoObject catalogs, one for characteristics and the other for key figures, and we have also created their respective InfoObjects, five in our case.

Step 3- Create a Flat file Source System.

Go to the Source Systems tab, right-click on the source systems node and click Create as shown below.


In the next pop-up, select the flat file radio button as shown below and click Continue.


Give a name for the flat file source system as below and continue. Please note that this step can take some time, so please be patient.


Once the flat file is created you can see it as shown below.


Step 4- Create an Application Component for the created Source System.

Double-click on the flat file source system created in the step above; it will lead you to the DataSources tab. Now right-click on the top node and create an Application Component as shown below.


In the next pop-up, enter the details of the application component and click Continue.


Step 5-Create a data source in the application component.

Once the application component has been created, you may have to scroll all the way down to see it. Right-click on it and create a DataSource as shown below.


In the next pop-up, enter the DataSource details and select the DataSource type; in our case we are uploading transaction data, so we select that and click Continue as shown below.


Step 6- Load data into PSA(Persistent Staging Area).


Now we move on to loading data into the PSA. As soon as you click Continue in the above step, you will see the general information tab below; enter the descriptions and go to the next tab.


The next tab is named Extraction; this is the most important tab. Fill in all the values as shown in the screenshot below. For reference, the important fields are described here:


Delta Process: Since we are doing a full load, the option is selected accordingly.


Adapter: Since the file is loaded from the local workstation, the option is selected accordingly.


File Name: Browse for the file on the local system and place the path in this field.


Header Rows to be Ignored: Since the file has one header row, we enter 1.


Data Format: Since it is a CSV file, we select the option accordingly.


Data Separator: We provide "," as the separator.

All the above mentioned details can be seen in the below screen shot.


In the next tab, click the Load Sample Data button (marked in red below); you will be able to see the sample data with the comma separator as shown below.


In the next tab you need not perform any operation; just check that the fields and data types have been derived correctly as shown below.


In the next tab, click the Read Preview Data button as marked below; you will get an activation pop-up, so proceed to activate.


On loading the data successfully you will get the screen below with the data.


So far we have created and activated a DataSource; the file data is now ready to be loaded into the PSA.


Step 7 - Create an Info Package.

Now right-click on the DataSource and create an InfoPackage as shown below.


In the next screen, name the InfoPackage, continue as shown below and then save.


Now you will get the screen below; proceed to carry out a check with the option given below (marked in red).


Once the check is carried out, look through all the tabs to confirm that the values filled in earlier are correct. Go to the final Schedule tab, select the Start Immediately radio button and click Start as shown below.

On successful execution you will get a confirmation that the data was requested, as shown below.


If you want to ensure that the data has been loaded into the system properly you can do the below steps:

  1. Double click on the data source.
  2. Select GOTO from menu bar, and select technical attributes.
  3. You will get a pop-up as below; click on the table as marked below and check the entries in the table. You should find the same data as in the file here in this table.




If all are fine till this step then proceed.

Step 8-Create an Info Cube:

Go to the InfoProvider tab, right-click on the InfoArea that was created and create an InfoCube as shown below.



In the next screen, name the InfoCube and create it as shown below.


In the next screen, select the InfoObject Catalog icon as marked below.


Now you will get a pop-up with both InfoObject catalogs created in Step 2. Double-click the first catalog, created for characteristics. The pop-up screen will look as below.


In the next screen you will see all the InfoObjects created under that catalog; just drag all three InfoObjects and drop them into the Dimensions folder on the right (marked in red below).

Remember, you just have to drag and drop each InfoObject from the characteristics folder (in green below) to the Dimension node (in red below).


Now click on the InfoObject catalog icon again as before, select the catalog created for key figures, and similarly drag and drop the key figures from the key figures folder (marked in green) to the Key Figures folder on the right (marked in red) as shown below. Then click the Activate icon.


Step 9-Creating Transformation:

On activation you will see the InfoCube icon as shown below. Right-click on the icon and select Create Transformation as shown below.


In the next screen, select DataSource as the source object type and enter the details of the DataSource we created, as shown below. Then click Continue.


In the next screen, map the fields of the DataSource to the InfoObjects of the InfoCube. For mapping, start from a field in the left table and drag it to the corresponding field in the right table. After proper mapping, activate (ignore warnings, if any) and you will get a screen as below.


Step 10-Create DTP (Data Transfer Process):

On successful completion of the above steps you will see an icon for the DTP as below; right-click on it and select Create Data Transfer Process as shown below.




In the next screen you will get a pop-up as shown below; just proceed to continue without making any changes.


Now you will get a screen as below; under the Extraction tab, select the extraction mode Full as shown below.


In the next tab, Update, select the error handling method as shown below and proceed to the next tab.


When you reach the final Execute tab, first activate the DTP (ignore warnings, if any), after which the Execute button becomes available (before activation it is greyed out).


Upon clicking the Execute button you will get the pop-up below; select Yes and continue.


In the next screen you will get a monitor page with all steps in green status (if the process is successful) as shown below. If the status is yellow, the process is still running; in that case keep refreshing until you get green (successful) or red (failed).


This completes the load of the flat file data into the InfoCube. To check the data in the InfoCube, right-click on it and select Display Data as shown below.


In the next screen, select the 'Fields for Selection' button near the Execute button as shown below.


Select the 'Select All' button near the Execute button as shown below.


Now click the Execute button to see the values in the InfoCube as shown below.


Thus we have successfully loaded data from the flat file into the InfoCube. I hope this blog helps you understand the concept clearly.


I would like to thank you all for patiently reading such a long blog; I hope it serves you well. Please do share your reviews and feedback, which will serve as encouragement for my future blogs.


Thanks and Regards,

Satish Kumar Balasubramanian

In the past I created a blog post describing the Infoobjects Level authorizations:


SAP BW Authorization - InfoObjects level authorization

Now I will focus on creating and assigning authorization to BW:


Creating authorization

To create analysis authorization perform the following steps:

1. Use TCode RSECADMIN, go to the Authorizations tab.

2. Press the Maint. button, enter a name (e.g., Z_USR_A1) and press Create.

3. Fill in the required Short Text field.

4. Insert the special characteristics 0TCAACTVT, 0TCAIPROV, and 0TCAVALID by pressing the Insert Special Characteristics button.


5. Insert authorization-relevant characteristics and navigational attributes (Insert Row -> press F4 -> choose item). I described how to set these in my previous blog SAP BW Authorization - InfoObjects level authorization.

6. Press Details button to restrict values and hierarchy authorization of inserted items.

7. Save the authorization.


You must include special characteristics: 0TCAACTVT (activity), 0TCAIPROV (InfoProvider), and 0TCAVALID (validity) in at least one authorization for a user. They are used for:

  • 0TCAACTVT - to restrict the authorization to activities, default value: Display;
  • 0TCAIPROV - to restrict the authorization to InfoProviders, default value: all (*);
  • 0TCAVALID - to restrict the validity of the authorization, default value: always valid (*).

If you want to authorize access to key figures, add the 0TCAKYFNM characteristic to the authorization. It is important to know that if this characteristic is authorization-relevant, it will always be checked during query execution.
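
As a rough illustration, a minimal authorization (here called Z_USR_A1, as created above) for display-only access could contain values along these lines; the characteristic 0COMP_CODE and the value 1000 are purely hypothetical examples of an authorization-relevant characteristic and are not taken from the original screenshots:

  0TCAACTVT = 03 (Display)
  0TCAIPROV = * (all InfoProviders)
  0TCAVALID = * (always valid)
  0COMP_CODE = 1000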



0BI_ALL authorization

The 0BI_ALL authorization includes all authorization-relevant characteristics. It is updated automatically whenever an InfoObject is made authorization-relevant. Use this authorization for users who are allowed to execute all queries.

Assigning authorization to a user

You may assign an authorization directly to a user or to a role. To assign an authorization directly, use TCode RSECADMIN, go to the User tab and press Assign. Now enter the user name, press Change and select the authorization. To assign an authorization to a role, use TCode PFCG, enter the role name and press Change. On the Authorizations tab, change the authorization data by adding the S_RS_AUTH object, which includes analysis authorizations in roles. Enter there the authorization that you previously created.


I encourage you to collect all requirements related to BW security, the structure of the organization and the authorization needs before starting authorization preparation. I have learned that it can save a lot of time. The organization's hierarchy can facilitate your work by providing structures and levels of authorization. Indirect authorization assignment can also save time because it is more flexible and easier to maintain.

Author: Subhash Matta


Company: NTT DATA Global Delivery Services Limited

Author Bio :

Subhash Matta is a Senior Consultant at NTT DATA from the SAP Analytics Practice.


This SCN post can be useful in some special cases where a cube compression activity fails because the index size has become larger than the table itself. Compression is nothing but deleting the request IDs and moving the data from the F fact table to the E fact table, which enables improved query performance. If the compression activity fails, query performance will suffer.

Take a scenario, where the cube compression fails.

The first thing you can do is to take only a single request (or a few requests) into the compression and try compressing again.

If this attempt fails with an error such as "Failed compression: SQL-ERROR: 942 ORA-00xxx: table or view does not exist", this could be a result of database adjustments or a table/index size issue.

This can be resolved as below.





Go to transaction SE14 (ABAP Dictionary: Database Utility). Enter the table name of the E fact table of the InfoCube for which the error is showing.




Press Enter and you will get the screen below.




Click on "Activate and adjust database" and please make sure that the option "Save data" is selected, otherwise you can lose the data in the cube. Please be careful: as this is generally done in production, we cannot afford to lose data.










After the completion of the "Activate and adjust" step, go to Extras -> Force Conversion.





After this activity completes successfully, the message shown above pops up. Now try the compression again.

If it fails again, please check whether the table is active; if not, activate the table and try to compress again.

This should be successful.


NOTE: The suggested solution is applicable to BW version 7.23 and higher. If you don't have the mentioned version, please raise a request with SAP. Please perform this activity in the background so that the job can be monitored.

Please note that this activity will take longer if there is more data in the table.


This document explains the approach and steps to convert a local CompositeProvider or MultiProvider to a CompositeProvider of type HCPR.



The backend BW version has to be BW 7.4 SPS9 or above to perform the conversion.

If the version is lower than SPS11, the prerequisite is to apply OSS note 2115198 (HCPR: Corrections for conversion).

SAP suggests using CompositeProviders of type HCPR, as HCPR can be developed using the Studio while the COPR cannot. All future developments/enhancements happen on the type HCPR.

Steps Involved:

Step 1: Check the system version and consider the action below if the system is lower than SPS11.

If the system is lower than SPS11, apply OSS note 2115198, which brings the required corrections into the system.

Download the OSS Note in SNOTE:


Implement the note:

While implementing it, you are prompted to implement the note 2115198 corrections and to create the new program RS_UDO_NOTE_2115198 in the system, which has to be executed after implementing note 2115198.


Once the note is implemented, execute the report via SE38:


Execute with the sequence as per the instructions below:


Test run gives the below message:


Proceed to Step 2:


This will prompt for modifications to be stored in the transport. 

Execute the activity as a batch job.


The below message appears for the job.


Come back to SE38 and re-run the program in UPDATE mode.


This results in the log below:


If everything is green, the note corrections have been applied. If there are any errors, perform the execution one more time.

Go to the next step:


This lists the overall log in green.


Once this is done, confirm in SNOTE that the corrections are complete.


Confirm the manual actions and then mark them as done. With this, the note is completely implemented.

Step 2: Execute the conversion program in BW system to convert the existing COPR/MPRO to HCPR.

SAP has delivered the program RSO_CONVERT_IPRO_TO_HCPR to convert an existing MultiProvider (MPRO) or local CompositeProvider (COPR) to type HCPR.

Execute the program in SE38, which brings up the view below:


The options represent the following:


Note: Several additional options are available in BW 7.5 which allow taking a backup of the queries or restoring them.

Update the fields with the source and target provider data and run the execution in simulation mode.


Note: In the local CompositeProvider, navigational attributes are marked with the prefix @3, which is replaced with the prefix 4 to differentiate the HCPR type from the COPR.

Physical Conversion: Select the mode as "Convert Infoprovider"


The log displays the results and the kind of changes made.

Go to RSA1 and check for the new CompositeProvider:


Double-click on the CompositeProvider:


Check the contents to validate the data further and reconcile the results against the COPR type.

Conversion using the same namespace:

A new entry with the same namespace is created and available in RSA1.


Option 3:

To convert the below composite provider and its associated BEX query.


Execute the program with the below options:


The above selection will create the HCPR with the same name, but it is created as a copy; overwriting is not possible.

After execution, you are prompted to select the list of BEx queries to be copied:


Proceeding further allows us to change the new query name:


Renaming can be done, for example, as below or into the customer namespace.


Once this is done, the program lists the log for all the changes that are being performed.

In the database there is now an entry for the new BEx query:


Observation: The old CompositeProvider is in the namespace with the prefix @3, and the new CompositeProvider of type HCPR is referenced with the same namespace as it appears in RSA1.

You can see the Old and new Composite.


Differences at the BEx level between the old and the new:

The HCPR has additional dimensions that list the InfoProviders separately, whereas in the COPR this is present in the characteristics catalog itself. The navigational attribute namespace is changed as below:


Validate that the output results are the same.


The following limitations apply:


Note: Compare the data output before and after conversion and make sure that the results are the same. Depending on the results, check whether the new HCPR can be adopted and the old provider made obsolete.

In WebI, the old BEx query has to be replaced with the new one if the converted HCPR is used going forward and if BusinessObjects is used on top of BEx.

Good Luck.

References: help.sap.com

Dear All,



In recent months I have identified several incidents reporting syntax errors in BW objects (Message no. RG102).

The ones that appear most frequently are related to DSO and transformation activation:


1. Syntax error in GP_ERR_RSODSO_ACTIVATE, row xxx (-> long text)

Message no. RG102



Message no. RG102



For these errors the following notes were created:


Syntax error 1:


2100403 - Syntax error in GP_ERR_RSODSO_ACTIVATE during activation of DSO



Syntax error 2:


2152631 - 730SP14: Syntax error during activation of Transformations

2124482 - SAP BW 7.40(SP11) Activation failed for Transformation

1946031 - Syntax error GP_ERR_RSTRAN_MASTER_TMPL during activation of transformation

1933651 - Syntax error in GP_ERR_RSTRAN_MASTER_TMPL for rule type "Time Distribution"

1919235 - "Syntax error in routine" of a migrated transformation/during migration

1889969 - 730SP11:Syntax error in GP_ERR_RSTRAN_MASTER_TMPL for RECORDMODE

1816350 - 731SP8:Syntax errors in routines or Assertion failed during activation of transformation

1762252 - Syntax error in GP_ERR_RSTRAN_MASTER_TMPL







Please help me to extract data from an SAP transaction (front end) automatically via Excel VBA code.


SAP "Script Recording and Playback" is disabled on the server.



There is an out-of-the-box solution for modelling a drill-down using Analysis Items in WAD. What it takes is to pass the selection from the parent Analysis Item to the child one. But this solution has two major problems:

  • Bad performance (since there is no initial selection in the parent Analysis Item, it takes a long time to load the detailed data of the child Analysis Item);
  • An unintuitive interface (since there is no initial selection in the parent Analysis Item, it is not clear that the parent Analysis Item should limit the data of the child one).

In this blog I will explain how to model a drill-down with an initial selection to make the analysis application both responsive and intuitive (some JavaScript knowledge will be required).

Once my analysis application is refreshed, it looks like this:



This is what is required to make initial selection work:

Let's see each step in detail.


Initially hide child Analysis Item



Find first Product from parent Analysis Item

Add Data Provider Info Item for DP_1 (used by 1st Analysis Item)


Define JavaScript function to read first Product.


function Get_Product() {

  // Read the data provider information XML of DP_1 from the Data Provider Info Item
  var xml = document.getElementById('DATA_PROVIDER_INFO_ITEM_1').innerHTML;

  // Parse the XML (Internet Explorer syntax, as in the original example)
  var xmlDoc = new ActiveXObject("Microsoft.XMLDOM");
  xmlDoc.async = false;
  xmlDoc.loadXML(xml);

  // The first MEMBER of the first AXIS holds the first Product of the parent Analysis Item
  var Product = xmlDoc.getElementsByTagName("AXIS")[0].getElementsByTagName("MEMBER")[0].getAttribute("text");

  return Product;
}




Select first row in parent Analysis Item

Define JavaScript function to select first row in 1st Analysis Item


function Select_Row() {
  var tableModel;
  var element = document.getElementById('ANALYSIS_ITEM_1_ia_pt_a');
  if (typeof(element) != 'undefined' && element != null) {
    // BW 7.3
    tableModel = ur_Table_create('ANALYSIS_ITEM_1_ia_pt_a');
  } else {
    // BW 7.0
    tableModel = ur_Table_create('ANALYSIS_ITEM_1_interactive_pivot_a');
  }

  // Select the first data row (row index 2 skips the header rows)
  var oRow = tableModel.rows[ 2 ];
  sapbi_acUniGrid_selectRowCellsInternal( tableModel, oRow, true, null);
}



Limit child Analysis Item data to the first Product in the parent Analysis Item and unhide the child Analysis Item

Define a JavaScript function that executes a command sequence of two commands:


function Filter_N_Unhide( Product ) {

  // Note: information can be extracted using the parameters 'currentState' and
  // 'defaultCommandSequence'. In either case create your own object of type
  // 'sapbi_CommandSequence' that will be sent to the server.
  // To extract specific values of parameters refer to the following snippet:
  //   var key = currentState.getParameter( PARAM_KEY ).getValue();
  //   alert( "Selected key: " + key );
  // ('PARAM_KEY' refers to any parameter's name.)

  // Create a new object of type sapbi_CommandSequence
  var commandSequence = new sapbi_CommandSequence();

  // Create a new object of type sapbi_Command with the command named "SET_SELECTION_STATE_SIMPLE".
  // (The addParameter/setChildList wiring below follows the usual pattern generated by the WAD command wizard.)
  var commandSET_SELECTION_STATE_SIMPLE_1 = new sapbi_Command( "SET_SELECTION_STATE_SIMPLE" );

  /* Create parameter TARGET_DATA_PROVIDER_REF_LIST */
  var paramTARGET_DATA_PROVIDER_REF_LIST = new sapbi_Parameter( "TARGET_DATA_PROVIDER_REF_LIST", "" );
  var paramListTARGET_DATA_PROVIDER_REF_LIST = new sapbi_ParameterList();
  // Create parameter TARGET_DATA_PROVIDER_REF - the data provider of the child Analysis Item
  var paramTARGET_DATA_PROVIDER_REF1 = new sapbi_Parameter( "TARGET_DATA_PROVIDER_REF", "DP_2" );
  paramListTARGET_DATA_PROVIDER_REF_LIST.addParameter( paramTARGET_DATA_PROVIDER_REF1 );
  paramTARGET_DATA_PROVIDER_REF_LIST.setChildList( paramListTARGET_DATA_PROVIDER_REF_LIST );
  commandSET_SELECTION_STATE_SIMPLE_1.addParameter( paramTARGET_DATA_PROVIDER_REF_LIST );
  /* End parameter TARGET_DATA_PROVIDER_REF_LIST */

  /* Create parameter RANGE_SELECTION_OPERATOR */
  var paramRANGE_SELECTION_OPERATOR = new sapbi_Parameter( "RANGE_SELECTION_OPERATOR", "EQUAL_SELECTION" );
  var paramListRANGE_SELECTION_OPERATOR = new sapbi_ParameterList();
  // Create parameter EQUAL_SELECTION
  var paramEQUAL_SELECTION = new sapbi_Parameter( "EQUAL_SELECTION", "MEMBER_NAME" );
  var paramListEQUAL_SELECTION = new sapbi_ParameterList();
  // Create parameter MEMBER_NAME with the first Product read from the parent Analysis Item
  var paramMEMBER_NAME = new sapbi_Parameter( "MEMBER_NAME", Product );
  paramListEQUAL_SELECTION.addParameter( paramMEMBER_NAME );
  paramEQUAL_SELECTION.setChildList( paramListEQUAL_SELECTION );
  paramListRANGE_SELECTION_OPERATOR.addParameter( paramEQUAL_SELECTION );
  paramRANGE_SELECTION_OPERATOR.setChildList( paramListRANGE_SELECTION_OPERATOR );
  commandSET_SELECTION_STATE_SIMPLE_1.addParameter( paramRANGE_SELECTION_OPERATOR );
  /* End parameter RANGE_SELECTION_OPERATOR */

  /* Create parameter CHARACTERISTIC - the characteristic to be filtered */
  var paramCHARACTERISTIC = new sapbi_Parameter( "CHARACTERISTIC", "D_NW_PRID" );
  commandSET_SELECTION_STATE_SIMPLE_1.addParameter( paramCHARACTERISTIC );
  /* End parameter CHARACTERISTIC */

  // Add the command to the command sequence
  commandSequence.addCommand( commandSET_SELECTION_STATE_SIMPLE_1 );
  // End command commandSET_SELECTION_STATE_SIMPLE_1

  // Create a new object of type sapbi_Command with the command named "SET_ITEM_PARAMETERS"
  var commandSET_ITEM_PARAMETERS_2 = new sapbi_Command( "SET_ITEM_PARAMETERS" );

  /* Create parameter ITEM_TYPE */
  var paramITEM_TYPE = new sapbi_Parameter( "ITEM_TYPE", "ANALYSIS_ITEM" );
  commandSET_ITEM_PARAMETERS_2.addParameter( paramITEM_TYPE );
  /* End parameter ITEM_TYPE */

  /* Create parameter INIT_PARAMETERS */
  var paramINIT_PARAMETERS = new sapbi_Parameter( "INIT_PARAMETERS", "" );
  var paramListINIT_PARAMETERS = new sapbi_ParameterList();
  commandSET_ITEM_PARAMETERS_2.addParameter( paramINIT_PARAMETERS );
  // Create parameter VISIBILITY - unhide the child Analysis Item
  var paramVISIBILITY = new sapbi_Parameter( "VISIBILITY", "VISIBLE" );
  paramListINIT_PARAMETERS.addParameter( paramVISIBILITY );
  paramINIT_PARAMETERS.setChildList( paramListINIT_PARAMETERS );
  /* End parameter INIT_PARAMETERS */

  /* Create parameter TARGET_ITEM_REF - the child Analysis Item */
  var paramTARGET_ITEM_REF = new sapbi_Parameter( "TARGET_ITEM_REF", "ANALYSIS_ITEM_2" );
  commandSET_ITEM_PARAMETERS_2.addParameter( paramTARGET_ITEM_REF );
  /* End parameter TARGET_ITEM_REF */

  // Add the command to the command sequence
  commandSequence.addCommand( commandSET_ITEM_PARAMETERS_2 );
  // End command commandSET_ITEM_PARAMETERS_2

  // Send the command sequence to the server
  return sapbi_page.sendCommand( commandSequence );
}




Call all onload JavaScript functions

Define a JavaScript function that calls all of the above and attach it to the BODY onload event:

function initial_selection() {
  // Read the first Product from the parent Analysis Item, select its first row
  // and filter/unhide the child Analysis Item
  var Product = Get_Product();
  Select_Row();
  Filter_N_Unhide( Product );
}





        <body onload="initial_selection();" >

            <bi:QUERY_VIEW_DATA_PROVIDER name="DP_1" >



See the attached EPM_DEMO Web Application template for complete implementation details (rename it to EPM_DEMO.bisp before uploading to WAD).

Sometimes you face issues in SAP BW which may drive you crazy, and this deadlock issue is one of them. I recently resolved this infamous dump, so I decided to share my experience with you all. Without further delay, let me give you the system and database details of my system.


Database System: MSSQL
Kernel Release: 741
Sup. Pkg lvl.: 230


Let me first explain what a deadlock is.

A database deadlock occurs when two processes lock each other's resources and are therefore unable to proceed.  This problem can only be solved by terminating one of the two transactions.  The database more or less at random terminates one of the transactions.


Process 1 locks resource A.

Process 2 locks resource B.

Process 1 requests resource B exclusively (-> lock) and waits for process 2 to end its transaction.

Process 2 requests resource A exclusively (-> lock) and waits for process 1 to end its transaction.

For example, resources are table records, which are locked by a modification or a Select-for-Update operation.

The following dump can be expected when you upload master data attributes.


Sometimes you might encounter this dump too.




In order to avoid this issue, please make sure that your DTP does not have semantic grouping switched on and that its processing mode is "Serially in the Background Process". To be on the safe side, I would recommend creating a new DTP with these settings.



Please let me know if you find this blog helpful or not.


P.S. This was related to time-dependent master data.

Hello Guys,



I would just like to share with you the BW-WHM* notes released in the last 7 days:



Component | SAP Note | Title
BW-WHM-MTD-SRCH | 2152359 | BW search/input help for InfoObjects returns no results
BW-WHM-MTD-INST | 2142826 | Method INSTALL_SELECTION of class CL_RSO_BC_INSTALL uses selection subset of pr
BW-WHM-MTD-HMOD | 2211315 | External SAP HANA view: navigation attribute returns no values
BW-WHM-MTD-HMOD | 2217796 | External SAP HANA View with Nearline Storage: column store error: fail to creat
BW-WHM-MTD-CTS | 2204227 | Transport: Error RSTRAN 401 in RS_TRFN_AFTER_IMPORT due to obsolete TRCS instan
BW-WHM-DST-UPD | 2213337 | Update rule activation ends with dump MESSAGE_TYPE_X
BW-WHM-DST-TRF | 2216264 | 730SP15: Transformation not deactivated if the InfoObject/DSO used in lookup ru
BW-WHM-DST-TRF | 2212917 | SAP BW 7.40 (SP14) Rule type READ ADSO doesn't work as expected
BW-WHM-DST-TRF | 2214542 | SAP HANA Processing: BW 7.40 SP8 - SP13: HANA Analysis Processes and HANA Trans
BW-WHM-DST-TRF | 2215940 | SP35: Time Derivation in Transformation is incorrect
BW-WHM-DST-TRF | 2003029 | NW BW 7.40 (SP08) error messages when copying data flows
BW-WHM-DST-TRF | 2217533 | DBSQL_DUPLICATE_KEY_ERROR when transporting transformation
BW-WHM-DST-TRF | 2192329 | SAP HANA Processing: BW 7.50 SP00 - SP01: HANA Analysis Processes and HANA Tran
BW-WHM-DST-SRC | 2185710 | Delta DTP from ODP Source System into Advanced DataStore Object
BW-WHM-DST-SDL | 2126800 | P14; SDL; BAPI: Excel IPAK changes with BAPI_IPAK_CHANGE
BW-WHM-DST-PSA | 2196780 | Access to PSA / Error stack maintenance screen takes long time or dumps
BW-WHM-DST-PSA | 2217701 | PSA: Error in the report RSAR_PSA_CLEANUP_DIRECTORY_MS when run in the 'repair
BW-WHM-DST-PC | 2216236 | RSPCM scheduling issue due to missing variant
BW-WHM-DST-DTP | 2185072 | DTP on ODP source system: error during extraction
BW-WHM-DST-DTP | 2214682 | P35:PC:DTP: Monitor display dumps for skipped DTP
BW-WHM-DST-DS | 1923709 | Transport of BW source system dependent objects and transaction SM59
BW-WHM-DST-DS | 2038066 | Consulting: TSV_TNEW_PAGE_ALLOC_FAILED dump when loading from file
BW-WHM-DST-DS | 2154850 | Transfer structure is inactive after upgrade. Error message: mass generation: n
BW-WHM-DST-DS | 2218111 | ODP DataSource: Data type short string (SSTR)
BW-WHM-DST-DFG | 2216492 | Data flow editor appears in the BW Modeling Tools instead of in the SAP GUI
BW-WHM-DST-ARC | 2155151 | Archiving request in Deletion phase / Selective Deletion fails due existing sh
BW-WHM-DST-ARC | 2214688 | Short dump while NLS Archiving object activation
BW-WHM-DST-ARC | 2214892 | BW HANA SDA: Process Type for creating Statistics for Virtual Tables
BW-WHM-DST | 1839792 | Consolidated note on check and repair report for the request administration in
BW-WHM-DST | 2170302 | Proactive Advanced Support - PAS
BW-WHM-DST | 2075259 | P34: BATCH: Inactive servers are used - DUMP
BW-WHM-DST | 2176213 | Important SAP notes and KBAs for BW System Copy
BW-WHM-DST | 1933471 | Infopackage requests hanging in SAPLSENA or in SAPLRSSM / MESSAGE_TYPE_X or TIM
BW-WHM-DST | 2049519 | Problems during data load due to reduced requests
BW-WHM-DBA-SPO | 2197343 | Performance: SPO transport/activation: *_I, *_O, transformation only regenerate
BW-WHM-DBA-ODS | 1772242 | Error message "BRAIN290" Error while writing master record "xy" of characteris
BW-WHM-DBA-ODS | 2215989 | RSODSACTUPDTYPE - Deleting unnecessary entries following DSO activation
BW-WHM-DBA-ODS | 2209990 | SAP HANA: Optimization of SID processes for DataStore objects (classic)
BW-WHM-DBA-ODS | 2214876 | Performance optimization for DataStore objects (classic) that are supplied thro
BW-WHM-DBA-ODS | 2218170 | DSO SID activation error log displays a limit of 10 characteristic values
BW-WHM-DBA-ODS | 2217170 | 740SP14: 'ASSIGN_TYPE_CONFLICT' in Transformation during load of non-cumulative
BW-WHM-DBA-MPRO | 2218861 | 730SP15: Short dump 'RAISE_EXCEPTION' during creation of Transformation with so
BW-WHM-DBA-MD | 2172189 | Dump MESSAGE_TYPE_X in X_MESSAGE during master data load
BW-WHM-DBA-MD | 2216630 | InfoObject Master Data Maintenance - collective corrections for 7.50 SP 0
BW-WHM-DBA-MD | 2218379 | MDM InfoObject - maintain text despite read class
BW-WHM-DBA-IOBJ | 2215347 | A system dump occurs when viewing the database table status of a characteristic
BW-WHM-DBA-IOBJ | 2217990 | Message "InfoObject &1: &2 &3 is not active; activating InfoObject now" (R7030)
BW-WHM-DBA-IOBJ | 2213527 | Search help for units not available
BW-WHM-DBA-ICUB | 1896841 | Function: InfoCube metadata missing in interfaces
BW-WHM-DBA-ICUB | 2000325 | UDO - report about SAP Note function: InfoCube metadata missing in interfaces (
BW-WHM-DBA-HIER | 2211256 | Locks not getting released in RRHI_HIERARCHY_ACTIVATE
BW-WHM-DBA-HIER | 2215380 | Error message RH608 when loading hierarchy by DTP
BW-WHM-DBA-HIER | 2216696 | Enhancements to the internal API for hierarchies in BPC
BW-WHM-DBA-HCPR | 2210601 | HCPR transfer: Error for MetaInfoObjects during copy of queries
BW-WHM-DBA-COPR | 2080851 | Conversion of MultiProvider to CompositeProvider
BW-WHM-DBA-ADSO | 2215201 | ADSO: Incorrect mapping of RECORDTP in HCPR
BW-WHM-DBA-ADSO | 2215947 | How to Set Navigation Attributes for an ADSO or HCPR
BW-WHM-DBA-ADSO | 2218045 | ADSO partitioning not possible for single RANGE values
BW-WHM-DBA | 2218453 | 730SP15: Transaction RSRVALT is obsolete
BW-WHM | 1955592 | Minimum required information in an internal/handover memo




This may sound very basic but can be useful to someone who doesn't know it yet. Others, please ignore.

You may have a situation where event triggers are used in process chains and you find it difficult to identify which process exactly triggered a particular event. The figures below illustrate an example scenario and a method of finding it by digging into the related tables.


You have 2 chains,

1) Chain that raises an event trigger

2) Chain that receives the event





If you need to find out the parent chain that raised the event "EVENT1" (in this case), you can use the tables below to get the information.



2) Input LOW = "EVENT1",TYPE = "ABAP" (basically its the event parameter you want to search for)

3) Copy the value from field VARIANTE


5) Input VARIANTE = value copied from step (3)

6) CHAIN_ID field will give you the technical id of the process chain that raised this event.
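
The same lookup can also be sketched in ABAP. The table names below are an assumption (the standard process chain tables RSPCVARIANT for the process variant values and RSPCCHAIN for the chain definition); please verify them against the table names shown in the screenshots above before relying on this:

" Hedged sketch: find the chain(s) whose ABAP process variant raises EVENT1
DATA: lt_variants TYPE STANDARD TABLE OF rspcvariant,
      lt_chains   TYPE STANDARD TABLE OF rspcchain.

" Steps 2/3: variant(s) of the ABAP process with LOW = 'EVENT1'
SELECT * FROM rspcvariant INTO TABLE lt_variants
  WHERE type = 'ABAP'
    AND low  = 'EVENT1'.

" Steps 5/6: chains containing these variants; field CHAIN_ID holds the chain ID
IF lt_variants IS NOT INITIAL.
  SELECT * FROM rspcchain INTO TABLE lt_chains
    FOR ALL ENTRIES IN lt_variants
    WHERE variante = lt_variants-variante.
ENDIF.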



In the past I had to analyse PSA tables and, to be more specific, I had to find out the distinct values of a table column in order to know which specific values had been extracted from the source system. This requirement cannot be solved with the SE16 transaction directly. As a workaround I exported the table data to Excel and then used the option "Remove duplicates". This worked in the beginning, but with large PSA tables that workaround wasn't practicable anymore.

For SAP BW InfoProviders this requirement can be handled with the transaction LISTCUBE, but from my point of view it is too complicated and time consuming.


So I developed a solution for this requirement in SAPGUI which was inspired by the "Distinct values" option in SAP HANA Studio.





A user-friendly tool to analyse the distinct values of an SE16 table column and of any SAP BW InfoProvider.



Solution & Features

The attached report ZDS_DISTINCT_VALUES has two parameters to pass the specific table and column name. The parameter values are checked and analysed. If the table parameter is an SAP BW InfoProvider, the function "RSDRI_INFOPROV_READ" is used to extract the data. Otherwise a generic ABAP SQL call is executed to get the distinct values. If the column parameter is empty or cannot be found for this table / InfoProvider, the list of possible columns for this table is returned.


The output returns a table with the distinct values and the number of occurrences. If it is possible to get text values (InfoObject master data or domains), these text values will also be returned. For SAP BW master data the function "RSDDG_X_BI_MD_GET" is used and for domains "DDIF_DOMA_GET".
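
The attached report itself is not reproduced in this post. As a minimal sketch, the generic SQL branch for a plain transparent table (for example a PSA table) could be implemented roughly like the following; the report name, parameter names and output handling are illustrative assumptions, and the real report additionally handles InfoProviders via RSDRI_INFOPROV_READ and adds the text lookups:

REPORT zds_distinct_values_sketch.

" Purely illustrative sketch of the generic SQL branch described above
" (not the attached ZDS_DISTINCT_VALUES report itself).
PARAMETERS: p_tab TYPE tabname   OBLIGATORY,   " table, e.g. a PSA table
            p_col TYPE fieldname OBLIGATORY.   " column to analyse

TYPES: BEGIN OF ty_result,
         value TYPE c LENGTH 120,
         count TYPE i,
       END OF ty_result.

DATA: lt_result TYPE STANDARD TABLE OF ty_result,
      ls_result TYPE ty_result,
      lv_fields TYPE string.

" Select list: the column itself plus the number of occurrences
CONCATENATE p_col 'COUNT( * )' INTO lv_fields SEPARATED BY space.

SELECT (lv_fields)
  FROM (p_tab)
  INTO (ls_result-value, ls_result-count)
  GROUP BY (p_col).
  APPEND ls_result TO lt_result.
ENDSELECT.

SORT lt_result BY count DESCENDING.

LOOP AT lt_result INTO ls_result.
  WRITE: / ls_result-value, ls_result-count.
ENDLOOP.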









Feel free to use and extend the tool. Contact me for any questions, etc. Attention: MultiProviders are not supported.

Recently I have seen a lot of problems and SCN discussions in the BW area about the use of error handling and semantic groups in DTPs.

So I thought it would be a good idea to give a brief overview of the use of these features in DTPs.

The goal of this blog post is to provide generic information about the influence of START/END routines in a transformation on the processing mode of a Data Transfer Process (DTP) when loading a DataStore Object (DSO), and the technical reason behind it. In all cases it is assumed that a START routine, an END routine, or both are used in the transformation connecting the source and the DSO target. The cases are broadly described below:

  • A1: Semantic Group is not defined in the DTP, the Parallel Extraction flag is checked and error handling is switched off, i.e. either 'Deactivated' or set to 'No Update, No Reporting': the processing mode of the DTP is 'Parallel Extraction and Processing'.

  • A2: Semantic Group is not defined in the DTP, the Parallel Extraction flag is not checked and error handling is switched off, i.e. either 'Deactivated' or set to 'No Update, No Reporting': the processing mode of the DTP is 'Serial Extraction, Immediate Parallel Processing'.

  • A3: Semantic Group is not defined in the DTP and error handling is switched on, i.e. either 'Valid Records Update, No Reporting (Request Red)' or 'Valid Records Update, Reporting Possible (Request Green)': the processing mode of the DTP is 'Serial Extraction and Processing of Source Package'. The system also prompts the message 'Use of Semantic Grouping'.

  • B1: Semantic Group is defined in the DTP and error handling is switched off, i.e. either 'Deactivated' or set to 'No Update, No Reporting': the processing mode of the DTP is 'Serial Extraction, Immediate Parallel Processing'. The system also prompts the message 'If possible don't use semantic grouping'.

  • B2: Semantic Group is defined in the DTP and error handling is switched on, i.e. either 'Valid Records Update, No Reporting (Request Red)' or 'Valid Records Update, Reporting Possible (Request Green)': the processing mode of the DTP is 'Serial Extraction, Immediate Parallel Processing'.

In a DSO we allow the aggregation 'OVERWRITE' along with 'MAX', 'MIN' and 'SUM', which is non-cumulative. So it is very important that the chronological sequence of the records stays intact for the update, because the principle of 'last record wins' needs to be maintained. Therefore, if error handling is switched on and there are errors in the update, the erroneous records that are filtered out and written to the error stack must be in chronological sequence.

The solution for the cases described above are:

  • In cases A1 and A2, error handling is switched off, so if there is a single error the load terminates and the erroneous records are not stored anywhere. Therefore, depending on whether the 'Parallel Extraction' flag is checked or not, the processing mode of the DTP is 'Parallel Extraction and Processing' or 'Serial Extraction, Immediate Parallel Processing' respectively.

  • In case B1 you have defined a Semantic Group, which ensures that records with the same keys defined in the semantic group end up in one package. But since error handling is switched off, this contradicts the semantic group setting, as you do not want erroneous records to be written to the error stack, and so the processing mode of the DTP is 'Serial Extraction, Immediate Parallel Processing'. The system also prompts you to remove the Semantic Group, because otherwise it serves no purpose.

  • In case A3 the semantic group is not defined but error handling is switched on, so erroneous records need to be written to the error stack and the chronological sequence needs to be maintained. But as no keys are defined in a semantic group, it cannot be ensured that records with the same keys are in the same package. So the processing mode of the DTP is 'Serial Extraction and Processing of the Source Package'. The system also prompts you to use semantic groups so that records with the same keys defined in the semantic group are guaranteed to be in the same package.

  • In case B2 a semantic group is defined and error handling is switched on. This means records with the same keys defined in the semantic group are guaranteed to be in one package, and if errors occur the chronological sequence is maintained while writing the erroneous records to the error stack after sorting them by the keys. So the processing mode of the DTP is 'Serial Extraction, Immediate Parallel Processing'.

I hope this blog helps you with further questions about the use of error handling and semantic groups in DTPs.



1. Business Scenario



As a system performance improvement measure, the requirement is to send an email to the team with a list of ABAP Short Dumps that occur in the system during the day.

The email needs to be sent at 12:00 AM, and should contain a list of all the short dumps that have occurred in the system during the previous day.



2. Create a variant for the ABAP Runtime Error program RSSHOWRABAX


  1. Go to SE38 and enter the program name RSSHOWRABAX. Select the Variants Radio button and click display.

        In the next screen, enter the Variant Name and create.




     2. This takes you to the Parameters screen, where we need to add the parameters that we want our variant to contain.




     3. Click on Attributes. Enter the description.




     4. Since our requirement is to execute the variant for the previous day, we will select the following options for ‘Date’ in the ‘Objects for Selection Screen’ section

                  - Selection Variable = ‘X’ (X: Dynamic Date Calculation (System Date))


                    - Name of Variable: For the variable name ‘Current date +/- ??? days’ the I/E indicator should be set to ‘I’ and the option to ‘EQ’



                  - Upon clicking ‘OK’, the next screen allows you to enter the value for the Date Calculation Parameters.

                    Enter ‘-1’ here, since we need the previous day’s data.




                    - The final screen will be as follows




     5. Upon saving this, you will be redirected to the Parameters screen, where the Date field will be auto-populated with the previous day's value.




3. Define a Job to schedule the above report output as an email


     1. Go to System -> Services -> Jobs -> Define Job




     2. Enter the Job Name and Job Class




     3. Go to Step. Here, enter the program name RSSHOWRABAX and the variant created above ZSHORT_DUMPS.

          In the user field, you can enter the User ID with which you want the email to be triggered.




          In our case, we needed it to be executed with ALEREMOTE. Click on Save.




     4. This step will send a mail to the SAP Business Workplace. In order to forward this mail to external email addresses, we will use the program RSCONN01 (SAPconnect: Start Send Process) with the variant SAP&CONNECTINT.




     5. Upon clicking Save, you can see both the steps in the overview.




     6. Next, enter the recipient details using the ‘Spool List Recipient’ Button. You can select from Internal User, Distribution lists and External addresses.




     7. Next, select your Start Condition to trigger this job. In our case, we have defined the same to trigger at the 1st second of the day daily.




4. Final Output


An email will be received daily at 12:00 AM, from ALEREMOTE. The Subject of the email will be as follows:

      Job <Job Name>, Step 1



The attachment will display the Runtime Errors information as shown below. This is the same information that we get in ST22.

      The information below was obtained in the mail triggered at 12:00 AM on 8/12/2015. Hence, it lists all the ABAP short dumps that occurred on 8/11/2015.



