
SAP Business Warehouse


This document will be useful for beginners who are working on BW data modeling.








5. Source System

6. Open Hub

7. Planning Sequence

8. Process Chains


Let's begin with InfoObjects.


I will explain the step-by-step procedure for creating InfoObjects.


Step 1: Go to transaction code RSA1.

            RSA1 opens the Data Warehousing Workbench (Modeling view).



Step 2: You can now see the Modeling view.





Step 3: Go to InfoObjects




Step 4: To create InfoObjects, you must first have an InfoArea and an InfoObject Catalog.

          Create the InfoArea first.





Step 5: I am creating a sample here, but you should create yours according to your requirements.

  1.     The object name must be alphanumeric.
  2.     Standard InfoObjects usually start with "0".
  3.     Customized InfoObjects (i.e. those created by users) should start with "Z" or "Y".
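As an aside, these naming conventions (together with the 3-9 character length rule that applies to object names) can be captured in a quick illustrative check. The function name and sample values below are made up:

```python
import re

def valid_custom_infoobject(name: str) -> bool:
    """Custom InfoObject name check: alphanumeric, 3-9 characters,
    starting with Z or Y (the customer namespace)."""
    return re.fullmatch(r"[ZY][A-Z0-9]{2,8}", name.upper()) is not None

print(valid_custom_infoobject("ZCUSTNO"))    # True
print(valid_custom_infoobject("0CUSTOMER"))  # False: '0' prefix marks SAP standard objects
```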









Step 6: Then right-click the InfoArea

            and create an InfoObject Catalog.



Step 7: There are two types of InfoObjects:

          characteristics and key figures.




Step 8: Depending on your requirements, you create the InfoObject as a characteristic or a key figure.

          Here I am going to create an InfoObject Catalog for characteristics.







Step 9: Once you have created the InfoObject Catalog (characteristic), the next screen appears.

Here you can use this option to include a standard InfoObject or another customized object (i.e. one already created for other modules).

Otherwise, if you want to create a new InfoObject, first change the "Object Status" to "ACTIVE".

To do that, just click the Activate icon or press Ctrl+F3.




Step 10: This takes you to the Object Directory Entry dialog.

For now I am saving it as a "Local Object"; we will look at packages later.





Step 11: Now we are going to create the InfoObject for the characteristic.

The object name must be between 3 and 9 characters long.








Step 12: Once you have created the InfoObject, the next screen appears.

An InfoObject characteristic has six tabs:

1. General

2. Business Explorer

3. Master Data/Texts

4. Hierarchy

5. Attribute

6. Compounding






Step 13: Here I use Customer Number with a sample data type and length.

There are four data types available for a characteristic (CHAR, NUMC, DATS, TIMS).

As per my requirement, the data type is CHAR.

Also as per my requirement, "Customer Number" is master data.

Go to the Master Data/Texts tab

and check "With Master Data".




Step 14: Another point: check your design document. As per my design document,

              Customer Number has both master data and texts.

              So check both "With Master Data" and "With Texts".





Step 15: Then save, check, and activate.



Step 16: Finally, one InfoObject characteristic has been created.





We will look at the remaining objects one by one.

There is an out-of-the-box solution for modeling drill down using Analysis Items in WAD: pass the selection from the parent Analysis Item to the child one. But this solution has two major problems:

  • Bad performance (since the parent Analysis Item has no initial selection, it takes a long time to load the detailed data of the child Analysis Item);
  • An unintuitive interface (since the parent Analysis Item has no initial selection, it is not clear that the parent Analysis Item should limit the data of the child one).

In my blog I will explain how to model drill down with an initial selection to make the analysis application both responsive and intuitive (some JavaScript knowledge will be required).

    Once my analysis application is refreshed, it looks like this:


Analysis Application.jpg

This is what is required to make initial selection work:

Let's see each step in detail.


Initially hide child Analysis Item



Find first Product from parent Analysis Item

Add a Data Provider Info Item for DP_1 (used by the 1st Analysis Item).


Define a JavaScript function to read the first Product:


function Get_Product() {
  // Read the XML rendered into the page by the Data Provider Info Item for DP_1
  var xml = document.getElementById('DATA_PROVIDER_INFO_ITEM_1').innerHTML;
  var xmlDoc = new ActiveXObject("Microsoft.XMLDOM");
  xmlDoc.async = false;
  xmlDoc.loadXML(xml); // parse the XML string

  // The first MEMBER on the first AXIS is the first Product of the parent item
  var Product = xmlDoc.getElementsByTagName("AXIS")[0].getElementsByTagName("MEMBER")[0].getAttribute("text");
  return Product;
}




Select first row in parent Analysis Item

Define JavaScript function to select first row in 1st Analysis Item


function Select_Row() {
  var tableModel;
  var element = document.getElementById('ANALYSIS_ITEM_1_ia_pt_a');
  if (typeof(element) != 'undefined' && element != null) {
    // BW 7.3
    tableModel = ur_Table_create('ANALYSIS_ITEM_1_ia_pt_a');
  } else {
    // BW 7.0
    tableModel = ur_Table_create('ANALYSIS_ITEM_1_interactive_pivot_a');
  }
  var oRow = tableModel.rows[ 2 ];
  sapbi_acUniGrid_selectRowCellsInternal( tableModel, oRow, true, null );
}



Limit child Analysis Item data to the first Product in the parent Analysis Item and unhide the child Analysis Item

Define a JavaScript function that executes a command sequence of two commands:


function Filter_N_Unhide( Product ) {
  // Create a new object of type sapbi_CommandSequence that will be sent to the server
  var commandSequence = new sapbi_CommandSequence();

  /* Command 1: SET_SELECTION_STATE_SIMPLE - limit DP_2 to the given Product */
  var commandSET_SELECTION_STATE_SIMPLE_1 = new sapbi_Command( "SET_SELECTION_STATE_SIMPLE" );

  /* Create parameter TARGET_DATA_PROVIDER_REF_LIST */
  var paramTARGET_DATA_PROVIDER_REF_LIST = new sapbi_Parameter( "TARGET_DATA_PROVIDER_REF_LIST" );
  var paramListTARGET_DATA_PROVIDER_REF_LIST = new sapbi_ParameterList();
  var paramTARGET_DATA_PROVIDER_REF1 = new sapbi_Parameter( "TARGET_DATA_PROVIDER_REF", "DP_2" );
  paramListTARGET_DATA_PROVIDER_REF_LIST.addParameter( paramTARGET_DATA_PROVIDER_REF1 );
  paramTARGET_DATA_PROVIDER_REF_LIST.setChildList( paramListTARGET_DATA_PROVIDER_REF_LIST );
  commandSET_SELECTION_STATE_SIMPLE_1.addParameter( paramTARGET_DATA_PROVIDER_REF_LIST );

  /* Create parameter RANGE_SELECTION_OPERATOR with EQUAL_SELECTION on MEMBER_NAME */
  var paramRANGE_SELECTION_OPERATOR = new sapbi_Parameter( "RANGE_SELECTION_OPERATOR" );
  var paramListRANGE_SELECTION_OPERATOR = new sapbi_ParameterList();
  var paramEQUAL_SELECTION = new sapbi_Parameter( "EQUAL_SELECTION", "MEMBER_NAME" );
  var paramListEQUAL_SELECTION = new sapbi_ParameterList();
  var paramMEMBER_NAME = new sapbi_Parameter( "MEMBER_NAME", Product );
  paramListEQUAL_SELECTION.addParameter( paramMEMBER_NAME );
  paramEQUAL_SELECTION.setChildList( paramListEQUAL_SELECTION );
  paramListRANGE_SELECTION_OPERATOR.addParameter( paramEQUAL_SELECTION );
  paramRANGE_SELECTION_OPERATOR.setChildList( paramListRANGE_SELECTION_OPERATOR );
  commandSET_SELECTION_STATE_SIMPLE_1.addParameter( paramRANGE_SELECTION_OPERATOR );

  /* Create parameter CHARACTERISTIC - the characteristic to filter on */
  var paramCHARACTERISTIC = new sapbi_Parameter( "CHARACTERISTIC", "D_NW_PRID" );
  commandSET_SELECTION_STATE_SIMPLE_1.addParameter( paramCHARACTERISTIC );

  // Add the command to the command sequence
  commandSequence.addCommand( commandSET_SELECTION_STATE_SIMPLE_1 );

  /* Command 2: SET_ITEM_PARAMETERS - unhide the child Analysis Item */
  var commandSET_ITEM_PARAMETERS_2 = new sapbi_Command( "SET_ITEM_PARAMETERS" );

  /* Create parameter ITEM_TYPE */
  var paramITEM_TYPE = new sapbi_Parameter( "ITEM_TYPE", "ANALYSIS_ITEM" );
  commandSET_ITEM_PARAMETERS_2.addParameter( paramITEM_TYPE );

  /* Create parameter INIT_PARAMETERS with VISIBILITY = VISIBLE */
  var paramINIT_PARAMETERS = new sapbi_Parameter( "INIT_PARAMETERS" );
  var paramListINIT_PARAMETERS = new sapbi_ParameterList();
  var paramVISIBILITY = new sapbi_Parameter( "VISIBILITY", "VISIBLE" );
  paramListINIT_PARAMETERS.addParameter( paramVISIBILITY );
  paramINIT_PARAMETERS.setChildList( paramListINIT_PARAMETERS );
  commandSET_ITEM_PARAMETERS_2.addParameter( paramINIT_PARAMETERS );

  /* Create parameter TARGET_ITEM_REF */
  var paramTARGET_ITEM_REF = new sapbi_Parameter( "TARGET_ITEM_REF", "ANALYSIS_ITEM_2" );
  commandSET_ITEM_PARAMETERS_2.addParameter( paramTARGET_ITEM_REF );

  // Add the command to the command sequence
  commandSequence.addCommand( commandSET_ITEM_PARAMETERS_2 );

  // Send the command sequence to the server
  return sapbi_page.sendCommand( commandSequence );
}




Code the call of all onload JavaScript

Define a JavaScript function that calls all of the above (in the order of the steps) and attach it to the BODY onload event:

function initial_selection() {
  Select_Row();                      // select the first row in the parent Analysis Item
  Filter_N_Unhide( Get_Product() ); // limit the child item to the first Product and unhide it
}

        <body onload="initial_selection();" >

            <bi:QUERY_VIEW_DATA_PROVIDER name="DP_1" >



See the attached EPM_DEMO Web Application template for complete implementation details (rename it to EPM_DEMO.bisp before uploading to WAD).

Sometimes you face issues in SAP BW that may drive you crazy, and this deadlock issue is one of them. I recently resolved this infamous dump, so I decided to share my experience with you all. Before any further delay, here are the system and database details of my system:


Database System: MSSQL
Kernel Release: 741
Sup.Pkg lvl.: 230


Let me first explain what a deadlock is.

A database deadlock occurs when two processes lock each other's resources and are therefore unable to proceed. This problem can only be solved by terminating one of the two transactions. The database terminates one of the transactions more or less at random.


Process 1 locks resource A.

Process 2 locks resource B.

Process 1 requests resource B exclusively (-> lock) and waits for process 2 to end its transaction.

Process 2 requests resource A exclusively (-> lock) and waits for process 1 to end its transaction.

For example, the resources are table records, which are locked by a modification or a select-for-update operation.
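The circular wait described above is exactly a cycle in a wait-for graph; here is a minimal illustration of the concept (plain Python, not SAP code — process names are made up):

```python
def has_deadlock(wait_for: dict) -> bool:
    """Detect a cycle in a wait-for graph.

    wait_for maps each waiting process to the process it waits on;
    a cycle means no process in it can ever proceed."""
    for start in wait_for:
        seen = set()
        node = start
        while node in wait_for:
            if node in seen:   # walked back onto our own path: circular wait
                return True
            seen.add(node)
            node = wait_for[node]
    return False

# Process 1 holds A and waits for Process 2 (holder of B), and vice versa.
print(has_deadlock({"P1": "P2", "P2": "P1"}))  # True
print(has_deadlock({"P1": "P2"}))              # False
```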

The following dump can occur when you upload master data attributes.


Sometimes you might encounter this dump too.




In order to avoid this issue, please make sure that your DTP does not have semantic grouping switched on and that its processing mode is "Serially in the Background Process". To be on the safe side, I would recommend creating a new DTP with these settings.



Please let me know if you find this blog helpful or not.


P.S. This was related to time-dependent master data.

Hello guys,



I would just like to share with you the BW-WHM* notes released during the last 7 days:



BW-WHM-MTD-SRCH  2152359  BW search/input help for InfoObjects returns no results
BW-WHM-MTD-INST  2142826  Method INSTALL_SELECTION of class CL_RSO_BC_INSTALL uses selection subset of pr
BW-WHM-MTD-HMOD  2211315  External SAP HANA view: navigation attribute returns no values
BW-WHM-MTD-HMOD  2217796  External SAP HANA View with Nearline Storage: column store error: fail to creat
BW-WHM-MTD-CTS  2204227  Transport: Error RSTRAN 401 in RS_TRFN_AFTER_IMPORT due to obsolete TRCS instan
BW-WHM-DST-UPD  2213337  Update rule activation ends with dump MESSAGE_TYPE_X
BW-WHM-DST-TRF  2216264  730SP15: Transformation not deactivated if the InfoObject/DSO used in lookup ru
BW-WHM-DST-TRF  2212917  SAP BW 7.40 (SP14) Rule type READ ADSO doesn't work as expected
BW-WHM-DST-TRF  2214542  SAP HANA Processing: BW 7.40 SP8 - SP13: HANA Analysis Processes and HANA Trans
BW-WHM-DST-TRF  2215940  SP35: Time Derivation in Transformation is incorrect
BW-WHM-DST-TRF  2003029  NW BW 7.40 (SP08) error messages when copying data flows
BW-WHM-DST-TRF  2217533  DBSQL_DUPLICATE_KEY_ERROR when transporting transformation
BW-WHM-DST-TRF  2192329  SAP HANA Processing: BW 7.50 SP00 - SP01: HANA Analysis Processes and HANA Tran
BW-WHM-DST-SRC  2185710  Delta DTP from ODP Source System into Advanced DataStore Object
BW-WHM-DST-SDL  2126800  P14; SDL; BAPI: Excel IPAK changes with BAPI_IPAK_CHANGE
BW-WHM-DST-PSA  2196780  Access to PSA / Error stack maintenance screen takes long time or dumps
BW-WHM-DST-PSA  2217701  PSA: Error in the report RSAR_PSA_CLEANUP_DIRECTORY_MS when run in the 'repair
BW-WHM-DST-PC  2216236  RSPCM scheduling issue due to missing variant
BW-WHM-DST-DTP  2185072  DTP on ODP source system: error during extraction
BW-WHM-DST-DTP  2214682  P35: PC: DTP: Monitor display dumps for skipped DTP
BW-WHM-DST-DS  1923709  Transport of BW source system dependent objects and transaction SM59
BW-WHM-DST-DS  2038066  Consulting: TSV_TNEW_PAGE_ALLOC_FAILED dump when loading from file
BW-WHM-DST-DS  2154850  Transfer structure is inactive after upgrade. Error message: mass generation: n
BW-WHM-DST-DS  2218111  ODP DataSource: Data type short string (SSTR)
BW-WHM-DST-DFG  2216492  Data flow editor appears in the BW Modeling Tools instead of in the SAP GUI
BW-WHM-DST-ARC  2155151  Archiving request in Deletion phase / Selective Deletion fails due existing sh
BW-WHM-DST-ARC  2214688  Short dump while NLS Archiving object activation
BW-WHM-DST-ARC  2214892  BW HANA SDA: Process Type for creating Statistics for Virtual Tables
BW-WHM-DST  1839792  Consolidated note on check and repair report for the request administration in
BW-WHM-DST  2170302  Proactive Advanced Support - PAS
BW-WHM-DST  2075259  P34: BATCH: Inactive servers are used - DUMP
BW-WHM-DST  2176213  Important SAP notes and KBAs for BW System Copy
BW-WHM-DST  1933471  Infopackage requests hanging in SAPLSENA or in SAPLRSSM / MESSAGE_TYPE_X or TIM
BW-WHM-DST  2049519  Problems during data load due to reduced requests
BW-WHM-DBA-SPO  2197343  Performance: SPO transport/activation: *_I, *_O, transformation only regenerate
BW-WHM-DBA-ODS  1772242  Error message "BRAIN290" Error while writing master record "xy" of characteris
BW-WHM-DBA-ODS  2215989  RSODSACTUPDTYPE - Deleting unnecessary entries following DSO activation
BW-WHM-DBA-ODS  2209990  SAP HANA: Optimization of SID processes for DataStore objects (classic)
BW-WHM-DBA-ODS  2214876  Performance optimization for DataStore objects (classic) that are supplied thro
BW-WHM-DBA-ODS  2218170  DSO SID activation error log displays a limit of 10 characteristic values
BW-WHM-DBA-ODS  2217170  740SP14: 'ASSIGN_TYPE_CONFLICT' in Transformation during load of non-cumulative
BW-WHM-DBA-MPRO  2218861  730SP15: Short dump 'RAISE_EXCEPTION' during creation of Transformation with so
BW-WHM-DBA-MD  2172189  Dump MESSAGE_TYPE_X in X_MESSAGE during master data load
BW-WHM-DBA-MD  2216630  InfoObject Master Data Maintenance - collective corrections for 7.50 SP 0
BW-WHM-DBA-MD  2218379  MDM InfoObject - maintain text despite read class
BW-WHM-DBA-IOBJ  2215347  A system dump occurs when viewing the database table status of a characteristic
BW-WHM-DBA-IOBJ  2217990  Message "InfoObject &1: &2 &3 is not active; activating InfoObject now" (R7030)
BW-WHM-DBA-IOBJ  2213527  Search help for units not available
BW-WHM-DBA-ICUB  1896841  Function: InfoCube metadata missing in interfaces
BW-WHM-DBA-ICUB  2000325  UDO - report about SAP Note function: InfoCube metadata missing in interfaces (
BW-WHM-DBA-HIER  2211256  Locks not getting released in RRHI_HIERARCHY_ACTIVATE
BW-WHM-DBA-HIER  2215380  Error message RH608 when loading hierarchy by DTP
BW-WHM-DBA-HIER  2216696  Enhancements to the internal API for hierarchies in BPC
BW-WHM-DBA-HCPR  2210601  HCPR transfer: Error for meta InfoObjects during copy of queries
BW-WHM-DBA-COPR  2080851  Conversion of MultiProvider to CompositeProvider
BW-WHM-DBA-ADSO  2215201  ADSO: Incorrect mapping of RECORDTP in HCPR
BW-WHM-DBA-ADSO  2215947  How to Set Navigation Attributes for an ADSO or HCPR
BW-WHM-DBA-ADSO  2218045  ADSO partitioning not possible for single RANGE values
BW-WHM-DBA  2218453  730SP15: Transaction RSRVALT is obsolete
BW-WHM  1955592  Minimum required information in an internal/handover memo




This may sound very basic, but it can be useful to someone who doesn't know it yet. Others, please ignore.

You may have a situation where event triggers are used in process chains and you are confused or find it difficult to identify which process raised a particular event. The figures below illustrate an example scenario and a method of finding it using the standard way of digging into the related tables.


You have 2 chains,

1) Chain that raises an event trigger

2) Chain that receives the event





If you need to find out the parent chain that raised the event "EVENT1" (in this case), you can use the tables below to get the information.



2) Enter LOW = "EVENT1", TYPE = "ABAP" (basically, the event parameter you want to search for)

3) Copy the value from the field VARIANTE


5) Enter VARIANTE = the value copied in step (3)

6) The CHAIN_ID field will give you the technical ID of the process chain that raised this event.



In the past I had to analyse PSA tables and, to be more specific, I had to find out the distinct values of a table column in order to know which specific values had been extracted from the source system. This requirement cannot be solved with transaction SE16 directly. As a workaround I exported the table data to Excel and then used the option "Remove duplicates". This worked in the beginning, but with large PSA tables that workaround was no longer practicable.

For SAP BW InfoProviders this requirement can be handled with transaction LISTCUBE, but from my point of view it is too complicated and time consuming.


So I developed a solution for this requirement in SAPGUI which was inspired by the "Distinct values" option in SAP HANA Studio.





A user-friendly tool to analyse the distinct values of an SE16 table column and of any SAP BW InfoProvider.



Solution & Features

The attached report ZDS_DISTINCT_VALUES has two parameters for the specific table and column name. The parameter values are checked and analysed. If the table parameter is a SAP BW InfoProvider, the function "RSDRI_INFOPROV_READ" is used to extract the data; otherwise a generic ABAP SQL call is executed to get the distinct values. If the column parameter is empty or cannot be found for this table / InfoProvider, the list of possible columns for the table is returned.
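At its core the report performs a grouped aggregation. A minimal stand-in for that idea using Python and SQLite (the table and column names here are invented for illustration; the actual report runs ABAP SQL):

```python
import sqlite3

# Hypothetical sample table standing in for a PSA table
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE psa_data (doc_no TEXT, plant TEXT)")
con.executemany("INSERT INTO psa_data VALUES (?, ?)",
                [("1", "DE01"), ("2", "DE01"), ("3", "US10")])

# Distinct values of a column plus their number of occurrences
rows = con.execute(
    "SELECT plant, COUNT(*) FROM psa_data GROUP BY plant ORDER BY plant"
).fetchall()
print(rows)  # [('DE01', 2), ('US10', 1)]
```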


The output is a table with the distinct values and their number of occurrences. Where text values are available (InfoObject master data or domains), they are returned as well. For SAP BW master data the function "RSDDG_X_BI_MD_GET" is used, and for domains "DDIF_DOMA_GET".









Feel free to use and extend the tool. Contact me for any questions, etc. Attention: MultiProviders are not supported.

I have recently seen a lot of problems and SCN discussions about the use of error handling and semantic groups on DTPs.

So I thought it would be a good idea to give a brief overview of the use of these features on DTPs.

The goal of this blog post is to provide generic information about the influence of start/end routines in a transformation on the processing mode of a Data Transfer Process (DTP) loading a DataStore Object (DSO), and the technical reason behind it. In all cases it is assumed that a start routine, an end routine, or both are used in the transformation connecting the source and the DSO which is the target. The cases are broadly described below:

  • A1: Semantic group is not defined in the DTP, the Parallel Extraction flag is checked, and error handling is switched off, i.e. either 'Deactivated' or set to 'No Update, No Reporting': the processing mode of the DTP is 'Parallel Extraction and Processing'.

  • A2: Semantic group is not defined in the DTP, the Parallel Extraction flag is not checked, and error handling is switched off, i.e. either 'Deactivated' or set to 'No Update, No Reporting': the processing mode of the DTP is 'Serial Extraction, Immediate Parallel Processing'.

  • A3: Semantic group is not defined in the DTP and error handling is switched on, i.e. either 'Valid Records Update, No Reporting (Request Red)' or 'Valid Records Update, Reporting Possible (Request Green)': the processing mode of the DTP is 'Serial Extraction and Processing of Source Package'. The system also prompts the message 'Use of Semantic Grouping'.

  • B1: Semantic group is defined in the DTP and error handling is switched off, i.e. either 'Deactivated' or set to 'No Update, No Reporting': the processing mode of the DTP is 'Serial Extraction, Immediate Parallel Processing'. The system also prompts the message 'If possible don't use semantic grouping'.

  • B2: Semantic group is defined in the DTP and error handling is switched on, i.e. either 'Valid Records Update, No Reporting (Request Red)' or 'Valid Records Update, Reporting Possible (Request Green)': the processing mode of the DTP is 'Serial Extraction, Immediate Parallel Processing'.

In any DSO we allow the aggregation 'OVERWRITE' along with 'MAX', 'MIN' and 'SUM'; OVERWRITE is non-cumulative. So it is very important that the chronological sequence of the records stays intact during the update, because the 'last one wins' principle must be maintained. Therefore, if error handling is switched on and there are errors in the update, the erroneous records that are filtered out and written to the error stack must be in chronological sequence.

The solution for the cases described above are:

  • In cases A1 and A2 error handling is switched off, so a single error terminates the load and erroneous records are not stored anywhere. Therefore, depending on whether 'Parallel Extraction' is checked, the processing mode of the DTP is 'Parallel Extraction and Processing' or 'Serial Extraction, Immediate Parallel Processing' respectively.

  • In case B1 you have defined a semantic group, which ensures that records with the same keys (as defined in the semantic group) end up in one package. But since error handling is switched off, this contradicts the semantic group setting, as no erroneous records will be written to the error stack. So the processing mode of the DTP is 'Serial Extraction, Immediate Parallel Processing', and the system prompts you to remove the semantic group, since it serves no purpose here.

  • In case A3 no semantic group is defined but error handling is switched on, so erroneous records must be written to the error stack and the chronological sequence must be maintained. Since no semantic-group keys are defined, it cannot be ensured that records with the same keys are in the same package. So the processing mode of the DTP is 'Serial Extraction and Processing of the Source Package', and the system prompts you to use a semantic group so that records with the same keys are guaranteed to be in the same package.

  • In case B2 a semantic group is defined and error handling is switched on. Records with the same keys defined in the semantic group are guaranteed to be in one package, and if errors occur the chronological sequence is maintained when the erroneous records, sorted by their keys, are written to the error stack. So the processing mode of the DTP is 'Serial Extraction, Immediate Parallel Processing'.
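The five cases above can be summarized as a small decision function (a paraphrase of this blog's rules for quick reference, not SAP code):

```python
def dtp_processing_mode(semantic_group: bool, parallel_extraction: bool,
                        error_handling: bool) -> str:
    """Derive the DTP processing mode from the three settings (cases A1-B2)."""
    if error_handling and not semantic_group:
        # Case A3: no grouping keys, so chronological order can only be
        # guaranteed by fully serial processing
        return "Serial Extraction and Processing of Source Package"
    if not semantic_group and not error_handling and parallel_extraction:
        # Case A1
        return "Parallel Extraction and Processing"
    # Cases A2, B1 and B2
    return "Serial Extraction, Immediate Parallel Processing"

print(dtp_processing_mode(semantic_group=False, parallel_extraction=True,
                          error_handling=False))  # Parallel Extraction and Processing
```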

I hope this blog helps you with further questions about the use of error handling and semantic groups on DTPs.



1. Business Scenario



As a system performance improvement measure, the requirement is to send an email to the team with a list of ABAP Short Dumps that occur in the system during the day.

The email needs to be sent at 12:00 AM, and should contain a list of all the short dumps that have occurred in the system during the previous day.



2. Create a variant for the ABAP Runtime Error program RSSHOWRABAX


  1. Go to SE38 and enter the program name RSSHOWRABAX. Select the Variants Radio button and click display.

        In the next screen, enter the Variant Name and create.




     2. This takes you to the Parameters screen, where we need to add the parameters that we want our variant to contain.




     3. Click on Attributes. Enter the description.




     4. Since our requirement is to execute the variant for the previous day, we will select the following options for ‘Date’ in the ‘Objects for Selection Screen’ section

                  - Selection Variable = ‘X’ (X: Dynamic Date Calculation (System Date))


                    - Name of Variable: For the Variable name ‘Current date +/- ??? days’ the Indicator for I/E should be selected as ‘I’ and option as ‘EQ’



                 - Upon clicking 'OK', the next screen allows you to enter the value for the Date Calculation Parameters.

                    Enter '-1' here, since we need the previous day's data.




                    - The final screen will be as follows




     5. Upon saving this, you will be re-directed to the Parameters screen, where the Date field will be auto populated with the previous day value
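The dynamic date variable configured above simply resolves to "system date minus one day"; in plain terms (illustrative Python, not SAP code):

```python
from datetime import date, timedelta

def previous_day(today: date) -> date:
    """Equivalent of the variant's 'Current date +/- ??? days' with value -1."""
    return today - timedelta(days=1)

# A mail triggered on 2015-08-12 would cover dumps from the previous day
print(previous_day(date(2015, 8, 12)))  # 2015-08-11
```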




3. Define a Job to schedule the above report output as an email


     1. Go to System → Services → Jobs → Define Job




     2. Enter the Job Name and Job Class




     3. Go to Step. Here, enter the program name RSSHOWRABAX and the variant created above ZSHORT_DUMPS.

          In the user field, you can enter the User ID with which you want the email to be triggered.




          In our case, we needed it to be executed with ALEREMOTE. Click on Save.




     4. This step will send a mail to the SAP Business Workplace. In order to forward this mail to external email addresses, we will use the program RSCONN01 (SAPconnect: Start Send Process) with the variant SAP&CONNECTINT.




     5. Upon clicking Save, you can see both the steps in the overview.




     6. Next, enter the recipient details using the ‘Spool List Recipient’ Button. You can select from Internal User, Distribution lists and External addresses.




     7. Next, select your Start Condition to trigger this job. In our case, we have defined the same to trigger at the 1st second of the day daily.




5. Final Output


An email will be received daily at 12:00 AM, from ALEREMOTE. The Subject of the email will be as follows:

      Job <Job Name>, Step 1



The attachment will display the Runtime Errors information as shown below. This is the same information that we get in ST22.

      The below information is obtained in the mail triggered at 12:00 AM on 8/12/2015. Hence, it gives all the ABAP short dumps occurred on 8/11/2015.




Write-optimized DSOs were first introduced in SAP BI 7.0 and are generally used in the staging layer of an Enterprise Data Warehouse, as data loads to them are quite fast. This is because they do not have three different tables but only one, the active table. Data loaded to a write-optimized DSO goes straight to the active table, saving the activation time. These DSOs save further time by skipping the SID generation step.

However, write-optimized DSOs have one shortcoming. During data loads all data packages are processed serially, not in parallel, even if parallel processing is defined in the Batch Manager settings of the DTP. This results in cumbersomely long loading times when loading large numbers of records (typical in full dump-and-reload scenarios).

The goal of this paper is to demonstrate how to enable parallel processing of data packages while loading to write-optimized DSOs thereby optimizing load time.

<< UPDATE >> This is applicable to SAP BI 7.0 only. In SAP BI 7.3 packages process in parallel by default.

Step By Step Solution

Parallel processing of data packages while loading to a write-optimized DSO can be enabled by defining the semantic key in the Semantic Groups of the DTP.

Open the DTP of the write-optimized DSO and in the Extraction tab click on Semantic Groups button.


In the pop-up screen select the fields which form the semantic key of the DSO.


Make sure that parallel processing is enabled by going to the menu Goto > Settings for Batch Manager and defining the number of processes for parallel processing.


Now if you run this DTP you will notice that the data packages are processed in parallel and there is a significant improvement in the data load timings. Please note that the improvement will be conspicuous in loads involving large data sets.
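Conceptually, the semantic key partitions the load so that records sharing the same key stay together while different keys can be processed in parallel. A toy sketch of that idea (the records and the processing step are made up; this is not SAP internals):

```python
from concurrent.futures import ThreadPoolExecutor
from collections import defaultdict

# Toy records: (semantic key, value)
records = [("DOC1", 10), ("DOC2", 20), ("DOC1", 5), ("DOC3", 7)]

# Group records by the semantic key so that all records sharing a key
# land in the same package and keep their relative order
packages = defaultdict(list)
for key, value in records:
    packages[key].append(value)

def process(package):
    return sum(package)  # stand-in for updating the DSO

# Packages with disjoint keys can now safely run in parallel
with ThreadPoolExecutor(max_workers=3) as pool:
    totals = dict(zip(packages, pool.map(process, packages.values())))
print(totals)  # {'DOC1': 15, 'DOC2': 20, 'DOC3': 7}
```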

Load Time Comparison

First screenshot below shows that it took around 17 hours to load about 23.5 million records in a write-optimized DSO. During this load semantic key was NOT defined in the DTP.


The next screenshot shows that it took just a little over one and a half hours to load the same number of records into the write-optimized DSO (11 times faster!). The difference: this time the semantic key was defined in the DTP.


Further Reading

1. Write Optimized DSO


2. SAP Note 1007769 : Parallel updating in write-optimized DSO

In the first part of the blog, Simulation Workbench: Part 1 - Transformation Rules, I introduced the Simulation Workbench and demonstrated how it helps simulate transformation rules. Now I will explain how it can be used to simulate transfer rules and update rules. To begin, start the Simulation Workbench by calling trx. ZSWB, then click on the BW 3.x button.


Simulation Workbench 7.x Navigation to 3.x.jpg

It will take you to another screen for Transfer Rule / Update Rule simulation.

Simulation Workbench Navigation to 3.x.jpg


Multiple Request Simulation with Additional Selection

Select the Source System, DataSource and Target, and leave the Request field empty. Again, the Simulation Workbench assists you with value helps every step of the way. Press F8 (Execute) to continue.

Simulation Workbench Navigation to 3.x - F8.jpg

On next popup provide selection to limit PSA data

Simulation Workbench Navigation to 3.x - PSA Selection Criteria.jpg

As you can see on next screen data HT-1000 material sales from multiple requests was pre-selected. Select all records

PSA Data Preselection - Select all.jpg

Then click on Transfer (F5) to proceed with simulation

PSA Data Preselection - Transfer.jpg

On next screen Transfer Structure data records are displayed, click on Communication Structure (Shift+F4) button to simulate Transfer Rules

Transfer Structure Data Records.jpg

On next screen Communication Structure records are displayed, click on Data Target (Shift+F6) button to simulate Update Rules


Data Target data records.jpg


Navigation to Transfer Rules

On the Simulation Workbench selection screen, click the Transfer Rules (Ctrl+F2) button to navigate to the transfer rules, for example, to set a breakpoint on a specific transfer rule.

Simulation Workbench Navigation to 3.x - Navigate to Transfer Rules.jpg

On next screen from Menu Choose Extras -> Display Program

Simulation Workbench Navigation to 3.x - Navigate to Transfer Rules - Display Program.jpg

Choose Transfer Program on popup

Simulation Workbench Navigation to 3.x - Navigate to Transfer Rules - Choose Program.jpg

Set break-point on Role Transfer Rule

Simulation Workbench Navigation to 3.x - Navigate to Transfer Rules - Break Point.jpg


Navigate to Update Rules

On the Simulation Workbench selection screen, click the Update Rules (Ctrl+F1) button to navigate to the update rules, for example, to set a breakpoint on a specific update rule.

Simulation Workbench Navigation to 3.x - Navigate to Update Rules.jpg

On the next screen, choose the program display option from the menu.

Simulation Workbench Navigation to 3.x - Navigate to Update Rules - Display Activate Program.jpg

On next screen set break-point on Created at Update Rule

Simulation Workbench Navigation to 3.x - Navigate to Update Rules - Set Break Point.jpg


Master Data Transfer Rules / Update Rules Simulation

Similarly to Transactional Data Simulation Workbench can simulate Master Data

Simulation Workbench 7.x Navigation to 3.x - Master Data Simulation.jpg


Texts Transfer Rules / Update Rules Simulation

Similarly to Transactional Data Simulation Workbench can simulate Texts

Simulation Workbench 7.x Navigation to 3.x - Texts Simulation.jpg


SAP Standard Output Format

The Simulation Workbench also supports the SAP standard output format for simulation result comparison (if in doubt).

Simulation Workbench - Introduction

Data loaded into BW may go through complex transformations, and in many cases it is necessary to debug them. BW provides standard simulation functionality; the Simulation Workbench improves on it with a better interface. The major benefits of the Simulation Workbench are:

  • Simplified data selection;
  • Improved data presentation;
  • One stop shop for all simulations;
  • Simple navigation to transformations and targets;
  • Variants creation.

The first part of the blog explains how to use Simulation Workbench with Transformation Rules (BW 7.x data staging), and the second part covers Transfer Rules and Update Rules (BW 3.x data staging).



Simulation Workbench - Installation

Import the attached ZSWB SAPLink nugget. Activate the Z_SIMULATION_WORKBENCH_BW_3X and Z_SIMULATION_WORKBENCH_BW_7X programs along with their report texts, screens and statuses.



Transactional Data Transfer Rule Simulation

Launch Simulation Workbench using transaction ZSWB. It will take you to the following screen.

Simulation Workbench BW 7.x.jpg

Keep the defaults and select the Target, Source and DTP. Simulation Workbench assists you every step of the way.

Transformation Target - F4.jpg

Transformation Target - Value Help.jpg

Transformation Source - F4.jpg

Transformation Source - Value Help.jpg

Transformation DTP - F4.jpg

Transformation DTP - Value Help.jpg

To limit the simulation to a specific PSA request, select it from the drop-down.

Request - F4.jpg

Select the request from the popup

Transformation DTP - Value Help.jpg

Press F8 (Execute) on the next screen

Simulation - F8.jpg

On the next screen, the request selection can be refined with additional selection criteria.

Debug Request - F8.jpg

Let's skip the additional selection for now and just press F8 (Execute and Display of Log) to run the simulation.

The transformation rules are simulated, and both the After Extraction and After Transformation temporary storages are displayed one underneath the other.

Temporary Storage - Descriptions.jpg

The Temporary Storage field headers can be switched between Descriptions and Technical Names (in contrast with the SAP standard functionality) to help you identify the required field.

Temporary Storage - Technical Names.jpg


Navigation to Transformation

From the initial screen you can navigate to the Transformation, for example, to set a break-point on a specific transformation rule.

Navigation - Transformation.jpg

Select the Created on transformation rule

Transformation Rule - Created on (EPM Demo).jpg

Copy the ABAP code line

Transformation Rule - Created on (EPM Demo) Display Rule ABAP code.jpg

Open the transformation's generated program.

Transformation Rule - Generated Program.jpg

Search for the copied ABAP code and set a break-point.

Transformation Rule - Created on (EPM Demo) Display Rule Breakpoint.jpg

Navigate all the way back to the Simulation Workbench selection screen, press F8 (Execute), and then press F8 (Execute and Display of Log) on the Debug Request popup.

Voilà, the simulation stops at the desired transformation rule.

Transformation - Debugger Session.jpg



Navigation to Data Target

From the Simulation Workbench selection screen you can also navigate to the simulation target, for example, to find a request for simulation.

Navigation - Target.jpg

Manage InfoProvider - Monitor.jpg

Copy the Request ID.

Monitor - DTP.jpg



Simulation Across All PSA Requests

If the Request field is left empty on the Simulation Workbench selection screen, all requests are selected for simulation. Use this option with caution, because even well-maintained PSA tables can contain a lot of records.

Debug Request - Multiple Request Selection.jpg

Debug Request - Multiple Request Selection Popup.jpg


Simulation with Performance Optimized Request Selection

The Optimize Request Selection option on the Simulation Workbench selection screen can improve simulation performance. Check the Optimize Request Selection check-box and press F8 (Execute).

Simulation Workbench BW 7.x - Optimize Request Selection.jpg

Provide an additional selection on the Debug Request screen and press F8 (Execute and Display Log).

Debug Request - Optimize Request Selection.jpg

Simulation Workbench will then limit the request selection based on the additional selection criteria provided.



Master Data Simulation

Transformation Rules for master data can also be simulated. Select the Target, Source, DTP and Request, and uncheck the Expert Mode check-box to skip the Debug Request popup.

Simulation Workbench BW 7.x - Master Data Selection.jpg

Press the F8 (Execute) button to simulate

Simulation Workbench BW 7.x - Master Data Temporary Storage.jpg


Texts Transformation Rules Simulation

Transformation Rules for texts can also be simulated. Select the Target, Source, DTP and Request, and uncheck the Expert Mode check-box to skip the Debug Request popup.

Simulation Workbench BW 7.x - Texts Selection.jpg

Press the F8 (Execute) button to simulate

Simulation Workbench BW 7.x - Texts Temporary Storage.jpg


SAP Standard Output Format

Simulation Workbench also supports the SAP standard output format, so simulation results can be compared with the standard tool if in doubt.



The second part of the blog: Simulation Workbench: Part 2 - Transfer Rules and Update Rules

Applies to:


SAP BW NW 7.x. For more information, visit the Business Intelligence home page for data warehouse management.


Author:          MP Reddy

Company:      NTT DATA Global Delivery Services Private Limited

Created On:   4th August 2015

Author Bio  


Pitchireddy Mettu is a Principal Consultant at NTT DATA Global Delivery Services Private Limited from the SAP Analytics Practice.


In a BI system, the volume of data increases constantly. Constant changes to business and legal requirements mean that this data must be available for longer. Since keeping a large volume of data in the system affects performance and increases administration effort, we recommend that BI administrators apply data archiving where needed.

Using archive administration for the archiving object BWREQARCH, we execute an archiving program. This program writes the administration data for the selected requests to an archive file. After archiving, we execute a deletion program that deletes the administration data from the database.

By archiving request administration data we make sure that the request administration data does not impair the performance of our system.


This guide gives a step-by-step demonstration of how to archive BW requests using BWREQARCH and how to reload archived requests when needed.




In our case the settings are already maintained for the object that we are archiving on the SAP BW system.


The objects that are archived are:

  • IDOC


The checklist for archiving new objects is as follows.

Process Flow

Before using an archiving object for the first time


  • Check archiving object-specific Customizing settings:
  • Is the file name correctly assigned?
  • Are the deletion program variants maintained? (Note that the variants are client-specific.)
  • Is the maximum archive file size correctly set?
  • Should the deletion program run automatically?


File locations

File locations must be set in order to write the archive files. Currently the files are located in /local/data/storage/<SYSID>/archiving.

The section below explains the setup of the files. Unless specified by the support lead, DO NOT CHANGE ANY OF THE SETTINGS.


This can be accessed via transaction AL11.



File Names


The file names are generated automatically by the archiving tool. The current setup is as follows:

For BWREQARCH the customizing shows that ARCHIVE_DATA_FILE is used.



and uses the logical path ARCHIVE_GLOBAL_PATH

PARAM_1 = Type of system

PARAM_2 = Sequence number

PARAM_3 = Archiving Object











The ARCHIVE_GLOBAL_PATH is set to /local/data/storage/<SYSID>/archiving.


In AL11 you can view the files created.
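The customizing above can be sketched in a few lines. Note this is only an illustration: the separator and sequence-number format are assumptions, since the real file name is composed by the SAP logical file name (ARCHIVE_DATA_FILE over logical path ARCHIVE_GLOBAL_PATH) from the three parameters.

```python
# Hypothetical sketch of how the archive file path could resolve.
# PARAM_1 = type of system, PARAM_2 = sequence number, PARAM_3 = archiving object.
def build_archive_path(sysid, system_type, seq_no, archiving_object):
    # ARCHIVE_GLOBAL_PATH as described above
    global_path = f"/local/data/storage/{sysid}/archiving"
    # Assumed combination of the three parameters into the file name
    file_name = f"{system_type}_{seq_no:06d}_{archiving_object}"
    return f"{global_path}/{file_name}"

# Example: build_archive_path("BWP", "P", 42, "BWREQARCH")
# -> /local/data/storage/BWP/archiving/P_000042_BWREQARCH
```

The actual separators and numbering are maintained in transaction FILE; this sketch only shows how the three parameters feed into one path.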


Archive BW Requests

When archiving requests there are two steps to perform.


  1. Write the archive files
  2. Delete the data from the tables


Write the archive files

Start transaction SARA and fill in the archiving object BWREQARCH.



First we need to write the archive files.
In order to do this, we need to create a variant specifying what needs to be archived.

When scheduling a regular job, we need to make the timing relative to the current date. By default we archive requests older than 4 months.


For a test run, set the processing option to Test Mode; for actual archiving, set it to Production Mode. Keep With DTP Requests turned on, as we also archive the DTP request information. Min. Number Requests stays at 1000, which means archiving only actually starts if there are more than 1000 requests to archive.
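The variant logic just described can be sketched as follows; the 4-month cutoff (approximated in days) and threshold handling are assumptions for illustration only.

```python
from datetime import date, timedelta

# Sketch of the archiving variant described above (assumed details):
# requests older than roughly 4 months are eligible, but archiving
# only starts when more than MIN_REQUESTS of them exist.
MIN_REQUESTS = 1000
AGE = timedelta(days=4 * 30)  # "older than 4 months", approximated

def requests_to_archive(request_dates, today):
    cutoff = today - AGE
    eligible = [d for d in request_dates if d < cutoff]
    # Min. Number Requests: below the threshold, nothing is archived
    return eligible if len(eligible) > MIN_REQUESTS else []
```

With 1001 requests from January and a run date in August, all 1001 would be archived; with only 1000 eligible requests, the job writes nothing.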



Manual writing with SARA


When running the manual jobs, make sure that the user has the correct authorization to run archive jobs.

Create the archive file once all the settings (spool parameters and date) have been maintained.


When the write is executed, you can find the jobs running/finished.


There will be two jobs: one with SUB in the name, which schedules the various write job(s), and another with WRI in the name.



Delete the data from the tables


Once the requests have been written to the archive files, the data can be deleted from the tables.

The next chapters will provide details on how to delete the BW-Request data once the archive files have been written.


Manual deletion with SARA

  Return to SARA and select Delete.


  Click on Archive selection to select the file for deleting the actual entries from the table



When the file is selected, enter the start date for the deletion of the entries from the table. Periodic scheduling only makes sense when the write job is dynamically selecting the requests. As we are deleting once a month on a 4-month basis, we can also schedule the deletion periodically.

Be aware that the deletion should not run during the write job; leave enough time between the two activities.

When all settings (spool parameters and scheduled date) are maintained, you can run the deletion job.


In the job overview you should see a deletion job running/finished.


Reloading Archived Requests

Even after the requests are archived, they remain accessible when needed.

There are three ways of reloading the requests:


  1. Reload the individual request from the DTP monitoring screen or InfoPackage monitoring screen
  2. Reload a complete archiving job (T-Code SARA)
  3. Reload multiple requests (T-Code RSREQARCH)


For reloading complete archive jobs or multiple requests, look further down in this document. The following section shows how individual requests can be reloaded from the archive.


Reload the individual request


Before retrieving a request from the archive, ask yourself whether the detailed data is really needed. The header information of the request is still visible; only the detailed messages are archived.


InfoPackage Requests


When displaying the request in the monitoring screen, a popup will inform the user that the request is archived and ask whether to retrieve the details from the archive. By default, do not reload the details when looking at an archived request unless it is really necessary.


  An archived request looks like this:


  When a request is reloaded, the data becomes visible again.



DTP Requests


When displaying the request, there will not be a popup, unlike with InfoPackage requests. On the DTP overview screen you will see that the DTP request is archived.



By default, the details are not shown.



On the menu you can reload the request from the archive.



When the data is reloaded the details become visible again.



Reload a complete archiving job:

When you want to reload a complete archiving job, you have to do so within transaction SARA.


Run SARA and put in the archiving object. In the menu the Reload function becomes available.


  Select a variant. There will be only two necessary variants, as the selection screen only gives you the option of Test or Production mode.



Select an Archive file.



Maintain the start date and spool parameters as in the previous sections, and run the reload activity.



The job log will show if the reload has finished.


A job with REL in the name will run.



Related Content







For more information, visit the Business Intelligence home page for data warehouse management.

Master data deletion is not a straightforward deletion like what we normally do for an InfoCube or DSO. Master data may have dependencies on transaction data, and in that case deleting it is not easy. We must first delete the related master data from the transactional data providers (InfoCube, DSO or InfoObject) and only then delete the master data itself. In this blog I would like to share the procedure we follow for master data deletion in our project.



  1. Identify the master data we would like to delete. Here I would like to delete the data for one employee in the master data.



2.  Select all three records and delete. After pressing the delete button, click Save; a popup will appear asking whether to delete the data Without SIDs or With SIDs. Always select With SIDs and save.



3. If the master data is used somewhere in the transactional data providers (InfoCube, DSO or InfoObject), it will not be deleted. A message will pop up saying "No Master Data was deleted".


4. This means that the master data is used somewhere. To check where, go to transaction SLG1 and pass the parameters as shown in the screenshot below.



After passing the above parameters, execute; you can now see the details of where the master data is still used.



5. The above step shows that the master data is used in one of the InfoCubes. Usually it shows an InfoCube, DSO or InfoObject. If the master data is used in an InfoCube, do a selective deletion of the master data in that InfoCube. This deletes only the fact table data; to delete the data in the dimension table, go to transaction RSRV, pass the required parameters and execute the test. This deletes the dimension table data as well.



If the master data is used in a DSO, do a selective deletion of the master data in that DSO; this deletes the data in the active data table and the change log table.


If the master data is used in an InfoObject, do a selective deletion of the master data in that InfoObject and repeat steps 1 to 5 in case the master data of that object is in turn used somewhere in the target providers (InfoCube, DSO or InfoObject).
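The procedure above amounts to a loop: check where the value is still used, selectively delete it there, and only then delete the master data itself with its SIDs. A minimal sketch, where the where-used check and the two deletion calls are hypothetical stand-ins for the SLG1 lookup and the GUI deletion steps:

```python
# Hedged sketch of the overall master data deletion procedure.
# where_used, selective_delete and delete_with_sids are hypothetical
# callables standing in for the manual SLG1 / selective-deletion steps.
def delete_master_data(value, where_used, selective_delete, delete_with_sids):
    # Repeat until no transactional provider references the value any more
    # (assumes each selective deletion actually removes the reference)
    while True:
        providers = where_used(value)      # step 4: check usage via SLG1
        if not providers:
            break
        for p in providers:                # step 5: selective deletion
            selective_delete(p, value)
    delete_with_sids(value)                # steps 1-2: final deletion with SIDs
```

The loop mirrors the "repeat steps 1 to 5" instruction: dependencies in target providers may themselves block deletion, so the check has to be rerun after every round of selective deletions.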

This is the second part of the interview with Juergen Haupt by Sjoerd van Middelkoop. The first part of the interview, covering LSA++, native development and S/4HANA topics, is available here >>

This blog is also available on my company website. It is cross-posted here to reach the SCN audience as well.


Q: BW is now more open to non-SAP sources than it was before. Is the main development focus now on supporting any data model and source in BW modeling, or is the focus more on hybrid scenarios?

We are continuously improving and extending BW’s possibilities with respect to supporting non-SAP data. That means we no longer force the use of InfoObjects, but enable straightforward modeling of persistencies using fields, defining data warehouse semantics with Open ODS Views on top of them. This allows customers to respond faster to business requirements. Next to that, we also support landscapes where customers use SAP HANA as a kind of data hub or landing pad, replicating data from any source into HANA and modeling natively on that data. From an LSA++ perspective these areas are like an externally managed extension of the Open ODS Layer.


When it comes to data warehousing, the customer can integrate this data virtually with BW data or stage it to BW via generated data flows to apply more sophisticated services.

Q: How did BW on HANA and LSA++ change the way you see BW development?

BW on HANA now provides the option to work a lot more with a bottom-up approach. It means that you can improve your models and your data in an evolutionary way, starting for example with fields that define Advanced DSOs in the Open ODS Layer and ending up with Advanced DSOs that also leverage InfoObjects to provide advanced consistency and query services. These Advanced DSOs are shielded by virtual Open ODS Views, allowing a smooth transition between these stages, if a transition is necessary at all. This flexibility is highly important for integrating non-SAP data in a step-by-step manner. I think this complements the proven but slow top-down approach in BW projects as we have seen them in the past.

Q: Talking about development in the current landscape: customers that migrated to HANA a while ago and are remodeling their current LSA structures find it hard to keep up with developments in BW and the new functionality rapidly becoming available. How can customers develop and remodel without investing in objects that will soon become obsolete?

This is a real challenge. Not a technology challenge, but more of an architectural and functional challenge. How will my landscape of the future look? What are the functions and features that provide the most value for my business users? I would advise customers to think of their EDW strategy from a holistic point of view. That means, for example, that you can’t look at BW on HANA without considering SAP’s operational analytics strategy. Overall, BW is no longer an island; BW is now more tightly connected than ever to other systems. So we have to think about the future role of all of our systems and what services they should provide.

So when customers think about going to BW on HANA, normally the first question is “Do we go greenfield or are we going to migrate?” This is a very understandable question, but I fear that it does not go far enough.

Q: Most customers, when on the decision point to migrate or greenfield, consider their current investments and make sure these investments will not be undone.

Yes. Very often, but not always. Lately we have seen a steady increase in customers choosing a greenfield approach. They see that introducing BW on HANA is more than just a new version that you upgrade to. They are aware that BW on HANA means running and developing solutions on a really new platform, and they do not want to bring their ‘old’ style solutions onto this new platform. So these customers go for a greenfield approach. This approach does not, of course, prevent you from transporting in some of your existing content that you want to keep and may have invested heavily in.

Q: This point of view is quite opposite of SAP’s ‘non-disruptive’ marketing strategy

What does non-disruptive mean? It is non-disruptive when it comes to migrating existing systems, yes. But does a ‘non-disruptive’ strategy really change the world into a better one? If you look at BW on HANA just as a new, better version, a non-disruptive migration would be your choice. But if you have the idea that BW on HANA is something really new, something that allows you to create value you never could offer before and that enables you to rethink the services you want your BW data warehouse to provide, bringing it to a new level, then you cannot be non-disruptive.

It’s like driving into the Netherlands from Germany: I only notice it by chance because the road signs are different; the border has disappeared, at least for car drivers. Compared to the EDW, I would say that the border we used to have between the EDW and its sources has always been a very strict one. These borders between systems are more and more disappearing, and this has a lot of influence on all systems and the solutions we build in the future. And this relates again to disruption: I can continue to work like I did ten years ago, still stopping at borders that have disappeared in the meantime.

Q: With the Business Suite on HANA and S/4HANA, embedded BW is seen by many as a viable option to use instead of a standalone BW system. In what cases should customers opt for an embedded scenario?

The question here is a matter of your approach. Let’s assume you start with S/4HANA Analytics or HANA Live; you can do everything with these virtual data models as long as business requirements and SLAs are met. Then the question is what to do when we need data warehousing services. Why not use the embedded BW? Yes, especially for smaller-sized companies, this will be an option. There are limitations, of course. I think the rule of thumb here is that an embedded BW system should not exceed 20% of the OLTP data volume. With the HANA platform it is a matter of managing workload.

But there is also a certain danger with this approach, and it does not derive just from the amount of BW data you should not exceed. The bigger the company is, the more likely you will have more than a single source. In that case you should start thinking about an EDW strategy from the very beginning; otherwise you will sooner or later start to move data back and forth between these embedded BWs. So the most important thing when making decisions about using the embedded BW is to have a long-term vision of the future DWH landscape. In this context it is important to mention that with SAP HANA SPS9 we have the multi-tenant DB feature that allows us to run multiple databases on the same appliance. So sooner or later we will see BW on HANA and S/4HANA running on different HANA DBs but on the same appliance, meaning that there will then no longer be a boundary between BW on HANA and S/4HANA. Thus you can share data and models between them directly. This would offer the benefits of the embedded BW, but with higher flexibility and scalability.

Q: So what you are saying is that embedded BW is an option for now in some cases, but with HANA multi-tenant DB in the near future and multi-source requirements, stand-alone BW is the better option?

That depends on your situation and what you are developing. For smaller clients and simple landscapes, I can imagine embedded scenarios functioning very well, even in the future. For most other scenarios, yes, I think stand-alone BW with multi-tenant DB is the better option.

Thank you very much for this interview!

You are most welcome!


This concludes my two-part blog of the interview I conducted with Juergen Haupt. I would like to thank Mr. Haupt for his time and cooperation, SAP for their cooperation in getting this published, and the VNSG for getting Mr. Haupt to Eindhoven.

Applies to:       SAP BW 7.X



This document gives a clear picture of how to handle (calculate) Before Aggregation (an option that was available in BW 3.x) at BEx query level, since the option is obsolete in BW 7.x.

Author:           Ravikumar Kypa

Company:       NTT DATA Global Delivery Services Limited

Created On:    24th July 2015

Author Bio  

Ravikumar is a Principal Consultant at NTT DATA from the SAP Analytics Practice.




In some reporting scenarios we need to get the number of records from the InfoCube and use that counter in calculations. We can easily achieve this in a BW 3.x system, as there is a ready-made option provided by SAP (Before Aggregation in the Enhance tab of a Calculated Key Figure) at BEx query level.


But this option is obsolete in a BW 7.x system and can no longer be used. SAP has, however, provided a different mechanism to achieve the same result at BEx level.


The illustration below explains this scenario:



The user wants to see the price of each material in the report; the format of the report is shown below:



Price / Material
30 USD
40 USD
25 USD



If we execute the report in BEx, it gives the result below:



But the expected output is:

30 USD
40 USD
25 USD


We have to calculate this using a counter at BEx query level. In BW 3.x we can achieve this by using the option ‘Before Aggregation’ in the Enhance tab of the Calculated Key Figure (Counter).


Steps to achieve this in a BW 3.x system:


The formula to calculate the price of each material is Price / Counter.
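A small numeric sketch of why the counter must be evaluated before aggregation. The sample records are an assumption for illustration: the material price is repeated on every record, so a plain sum doubles it.

```python
# Assumed sample data: the price of one material (30 USD) appears on
# two records, so summing the price column alone yields 60.
records = [
    {"material": "ABC", "price": 30},
    {"material": "ABC", "price": 30},
]

aggregated_price = sum(r["price"] for r in records)          # 60, not the real price
# "Before Aggregation": the counter is evaluated per record and then summed
counter = sum(1 for _ in records)                            # 2 records
price_per_material = aggregated_price / counter              # back to 30

# Without Before Aggregation the counter is evaluated only after the
# aggregation and collapses to the constant 1, so the report would show 60.
```

This is exactly the difference between the two outputs shown in the screenshots: Price / Counter only works if the counter counts records, not aggregated rows.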


Create a new Calculated Key Figure (ZCOUNTER1) and give it the value 1.




In the properties of the Calculated Key Figure, click on the Enhance tab:




Keep the Time of Calculation as Before Aggregation, as shown in the screenshot below:



If we don’t select the above option, the counter value will be 1 and it gives the output below:



So we have to calculate the price of each material with the Before Aggregation property (now the counter value will be 2):


Now the output of the query will be like this:


Now we can hide the columns ‘Price’ and ‘Counter (Before Aggr)’ and deliver this report to the customer as per the requirement.


This option is obsolete in BW 7.x (see the screenshot below):


Create a Calculated Key Figure as shown below (give it the value 1):


In the Aggregation tab, unselect the check-box ‘After Aggregation’.



You will get the below message:


Info: Calculated Key Figure Counter (Before Aggr) uses the obsolete setting ‘Calculation Before Aggregation’.


Steps to achieve this in a BW 7.x system:


Create a Calculated Key Figure as shown below (give it the value 1):




If we use this counter directly in the calculation, it gives the output below:



We can achieve the ‘Before Aggregation’ behavior in a BW 7.x system with the following steps:


Create Counter1 with fixed value 1:




In Aggregation Tab select the below options:


          Exception Aggregation: Counter for All Detailed Values

          Characteristic: 0MAT_DOC (because we have different material documents (23457, 23458) for the material ABC):



Now the output of the query gives the correct value for the material ABC, but the other two are not correct, as they have the same material documents (refer to the sample data):




Now create Counter2:



Aggregation Tab:


Exception Aggregation: Summation

Ref. Characteristic: 0MAT_ITEM (because we have different material items (1, 2) for the material XYZ).



Now the output shows correct values for the materials ABC and XYZ, but we still get wrong values for the material DEF, as it has the same material documents and material items:




Now create Counter3:




    Exception Aggregation: Summation

    Ref. Characteristic: 0PLANT (because we have different plants (3000 and 4000) for the material DEF).




Now create New Formula: Price of Each Material


Price of Each Material  = Price / Counter3



Now the output is:




Now we can hide the columns ‘Price’ and ‘Counter3’ and show the price of each material in the output:



Likewise, we have to analyze the data in the InfoCube, identify the characteristics over which the aggregation happens at BEx query level, and use them as the reference characteristics in the Calculated Key Figure; in this way we obtain the counter (the number of records aggregated).
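The net effect of the nested exception aggregations above can be sketched as follows: each step adds one reference characteristic, so the final Counter3 amounts to counting the distinct (material document, item, plant) combinations per material. The sample records below are assumptions mirroring the examples in the text (ABC differs by document, XYZ by item, DEF by plant).

```python
# Assumed sample records mirroring the examples above.
records = [
    {"material": "ABC", "mat_doc": 23457, "mat_item": 1, "plant": 3000},
    {"material": "ABC", "mat_doc": 23458, "mat_item": 1, "plant": 3000},
    {"material": "XYZ", "mat_doc": 11111, "mat_item": 1, "plant": 3000},
    {"material": "XYZ", "mat_doc": 11111, "mat_item": 2, "plant": 3000},
    {"material": "DEF", "mat_doc": 22222, "mat_item": 1, "plant": 3000},
    {"material": "DEF", "mat_doc": 22222, "mat_item": 1, "plant": 4000},
]

def record_counter(material):
    # Counter3 equivalent: distinct combinations of the three
    # reference characteristics (0MAT_DOC, 0MAT_ITEM, 0PLANT)
    combos = {(r["mat_doc"], r["mat_item"], r["plant"])
              for r in records if r["material"] == material}
    return len(combos)
```

With this data, each material gets a counter of 2, which is what the final query output in the document delivers — each counter stage fixes exactly one material's result.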

