
Data Warehousing


Dear Followers



Over the past weeks we have seen many incidents opened by BW customers who are receiving a syntax error in GP_ERR_RSTRAN_MASTER_TMPL_M when trying to activate or transport transformations.




Message no. RG102



There are many causes for this error; it can occur from transformation routines, inconsistencies in field mapping, or program errors.

With this error in mind, I decided to create this blog to provide some notes that can be checked before opening an incident with SAP.


1725511 - 730SP8: Metadata of Hierarchy transformations corrupted

1762252 - Syntax error in GP_ERR_RSTRAN_MASTER_TMPL

1809029 - Automatic repair for incorr field in functional TRFN group

1818702 - 730SP10: Syntax error in GP_ERR_RSTRAN_MASTER_TMPL_M during activation of transformation

1889969 - 730SP11:Syntax error in GP_ERR_RSTRAN_MASTER_TMPL for RECORDMODE

1919139 - NetWeaver BW 7.30 (Support Package 11) - hierarchy error in transformation

1919235 - "Syntax error in routine" of a migrated transformation/during migration

1933651 - Syntax error in GP_ERR_RSTRAN_MASTER_TMPL for rule type "Time Distribution"

1946031 - Syntax error GP_ERR_RSTRAN_MASTER_TMPL during activation of transformation

2038924 - 730SP13:Transformation with target as outbound InfoSource of an SPO cannot be activated

2124482 - SAP BW 7.40(SP11) Activation failed for Transformation

2128157 - SAP BW 7.40(SP11) advanced DSO read in Transformation

2152631 - 730SP14: Syntax error during activation of Transformations



Janaina Steffens

In my last blog-post I covered Package Size & Impacts of Package Size in DTP. Continuing the same trend of DTP, I am about to write on a topic discussed on the SCN forum on & off. I intended to write this blog-post because I myself struggled to understand exactly how data is processed by the allocated processes inside a DTP & which is the best available mode. After gaining a deep understanding, I thought of writing a blog-post on this very interesting topic. Let's start.

To start with, there are 3 modes of data processing available in DTP, listed below:


1. Parallel Extraction & Processing

2. Serial Extraction & Immediate Parallel Processing

3. Serial Extraction & Serial Processing


Let us get into details one-by-one.

1. Parallel Extraction & Processing :

As the mode-name suggests, it extracts & processes the data in parallel, i.e. simultaneously. In this mode, all available processes extract data in parallel & as soon as the extraction of a packet is completed, its processing starts. Here data processing means updating the data to the target. In this mode, one process might be updating its data packet to the target while others are still extracting. Hence, the work is done in parallel.


SAP also refers to this processing mode as P1.


Example: If we assign 6 parallel processes, then each process extracts one data packet at a time & updates the same data packet to the target. Hence, in this case 6 data packets will be processed simultaneously & updated to the target accordingly. This continues until all available packets are processed.


2. Serial Extraction & Immediate Parallel Processing :

When we select this mode for data processing, one process extracts the data from the source & after the extraction of a packet is complete, that data packet is processed by another process. In the meantime the 1st process starts extracting another package, and the cycle continues.


Now, if the 1st process has not yet finished extracting the 1st package, then a new process will start extracting, depending upon the number of parallel processes assigned/allocated. Hence, as the name suggests, the data is extracted serially from the source & as soon as the extraction of a packet is finished, it is processed & updated to the target.


SAP also refers to this processing mode as P2.

Example: If we have 6 processes assigned for this mode, then one process will extract the packets one by one & the other processes will do the processing & updating to the target in parallel. As explained above, if the 1st process is still extracting, then the 2nd process will jump in to extract the next packet. This procedure is serial extraction followed by parallel processing.


3. Serial Extraction & Serial Processing :

In this mode a single process first extracts a data packet from the source & then the same process updates the data packet to the target, without the involvement of any other process. Once the packet has been extracted from the source & updated to the target, the process moves to the next data packet.

This processing mode is rarely used because of its poor performance & is generally not recommended.

SAP also refers to this processing mode as P3.

Example: As we cannot allocate or define a maximum number of background processes for this mode, a single process does both the extraction & the update while the other work processes remain free for other jobs.
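The three modes can be illustrated with a small, purely conceptual simulation. This is plain Python, not SAP code; the packet count, the worker count, and the extract/process stand-ins are all invented for illustration, and the P2 sketch simplifies away the "2nd process jumps in to extract" detail:

```python
from concurrent.futures import ThreadPoolExecutor

PACKETS = list(range(1, 7))  # six data packets in the source

def extract(packet):
    return f"data{packet}"        # stands in for reading a packet from the source

def process(data):
    return f"{data}->target"      # stands in for updating the packet to the target

# P1 -- Parallel Extraction & Processing: each worker extracts a packet
# and immediately updates it to the target itself.
def p1(workers=6):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda p: process(extract(p)), PACKETS))

# P2 -- Serial Extraction & Immediate Parallel Processing: packets are
# extracted one by one in the main loop, but each extracted packet is
# handed to the pool for processing while extraction continues.
def p2(workers=6):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(process, extract(p)) for p in PACKETS]
        return [f.result() for f in futures]

# P3 -- Serial Extraction & Serial Processing: one process does everything.
def p3():
    return [process(extract(p)) for p in PACKETS]

print(p1() == p2() == p3())  # all modes produce the same result
```

All three deliver the same data to the target; they differ only in how much of the work overlaps in time, which is exactly the performance difference discussed in the summary.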

Summary & Performance Comparison:

Although the selection of the processing mode in a DTP depends upon the combination of data source, transformations/mapping & data target, the mode should be chosen smartly to optimize the performance of the DTP. Based upon the above information, I can state that P1 generally performs better than P2 & P3, whereas P3 has the lowest performance of the three.

Hope this blog helps you understand how data is extracted from the source, processed, and updated to the data target by the different DTP processing modes.

Author: Shubham Vyas

Company: Mindtree Ltd. (Bangalore/India)

Created on: 27 June 2015

Many BW consultants have raised queries on SCN about the performance issues they face while running DTPs. To eliminate the issue of slow or long-running DTPs, it is essential to understand the concept of package size & its impact on the DTP. Once we are clear on it, we should be able to eliminate long-running DTPs.


To start with, understanding of Package & Package Size in DTP is important. Let's do that first.

What is a Package?

A Package is a bundle of data. It contains records in groups.


What do you mean by Package Size?

It is the number of records a package can hold. By default, the package size of a DTP is 50000. You can increase or decrease the package size in the DTP depending upon the processing type, transformations, routines & data size.


Do notice that there are two types of loading in SAP BI:

1. Infopackage loading:

    Default Package Size is 20000

2. DTP loading:

    Default Package Size is 50000
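These defaults translate directly into a package count: the total record count divided by the package size, rounded up. A quick sketch (plain Python; the load volumes are made-up numbers for illustration):

```python
import math

def package_count(total_records, package_size):
    """How many data packages a load of total_records will be split into."""
    return math.ceil(total_records / package_size)

# A 1.2 million record load with the DTP default of 50,000 records per package:
print(package_count(1_200_000, 50_000))   # 24 packages
# The same load through an InfoPackage default of 20,000:
print(package_count(1_200_000, 20_000))   # 60 packages
```

Since routines and look-ups run once per package, this count, together with the per-package processing time, is what the tuning advice below is really trading off.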

In this blog I will specifically talk about DTP packages. The DTP package size plays an important role in loading data to info-providers. A good consultant should consider package size as a major factor while designing a project skeleton, for the following reasons.

Impacts of Package Size in DTP:

  •      When to keep Package Size less than 50000:

1. If we are dealing with lots of look-ups in transformations, then keeping the package size smaller helps routines execute faster, since the cost of a look-up in a transformation is directly proportional to the number of records.

    As routines run at package level, the bigger the package size, the longer the look-up takes to complete. Simply: MORE SIZE = MORE TIME.

2. If we have a large volume of daily loads, then reducing the package size is a good option. Data with high volume requires more time for the transfer process; if the size is reduced, the processing time per package falls considerably, eventually speeding up the DTP.

3. If few parallel processes are allocated in the DTP & the data volume is huge, it is again better to reduce the package size; since fewer parallel processes are allocated, the processing time for each package can be reduced this way.


  • When to keep Package Size greater than 50000:

1. Obviously, if the data volume is low, then a larger package size won't affect the performance of the DTP.

2. Sometimes there are no look-ups/routines in a transformation and the mapping is direct. In this situation, increasing the data package size will not hamper the DTP.

Hope this blog helps you understand the concept of package size in DTP along with its impact.



Author: Shubham Vyas  

Company: Mindtree Ltd. (Bangalore/India)

Created on: 19 June 2015



Posted by Courtney Driscoll May 1, 2015

With the annual SAPPHIRE NOW conference, taking place May 5-7 in Orlando, just days away, I want to share some fantastic data warehouse sessions that should be on your agenda. Here are some of the top sessions you shouldn't miss.



  • Simplify Your Data Warehouse While Meeting Tough Requirements

When: May 5, 2015 2:30PM- 2:50PM  Theater 5

Customer: AmerisourceBergen


Explore ways to meet the new demands expected from a data warehouse, including handling greater data volumes with lower latency. See how drug wholesaler AmerisourceBergen consolidated business warehouses and accelerators into one 14 TB instance of the SAP BW powered by SAP HANA application with near-line storage and achieved 100% performance gains.


  • SAP BW on SAP HANA migration – A Story of Failed Start to Amazing & Successful Ending

When: Tuesday, May 5, 2015 12:30PM -1:30PM S330G

Customer: Under Armour


Under Armour started its SAP Business Warehouse (BW) on SAP HANA migration journey in 2013 but encountered significant issues and failures that resulted in shelving and cancelling the initiative. In 2014, with a fresh approach and migration strategy, the team executed a plan that achieved amazing results: zero impact to the business, improved speed, and no post-go-live issues.


  • Panel: Get Real-World Insights from Our SAP HANA Innovation Award Winners

When: Wednesday, May 6, 2015 5:30PM- 6:00PM Theater 5

Customers: The SAP Innovation Award winners will be selected from the award finalists


Envision the possibilities for transforming work and everyday life with in-memory computing. Meet the winners of the SAP HANA Innovation Award, and learn how they are successfully using the SAP HANA platform to drive innovation and positively impact their workforce, partners, customers, and society.


  • Modernize Business with an In-Memory Data Warehouse Architecture

When: Wednesday, May 6, 2015 11:00AM - 11:20AM PT410


Recognize quantifiable, observable financial benefits with an in-memory data warehouse. Learn how the SAP Business Warehouse powered by SAP HANA application provides an approach to establishing a business case for modernizing your organization.


  • Manage Your Data by Value with Dynamic Tiering

When: Wednesday, May 6, 2015 5:00PM - 5:45PM PT415


This discussion looks at how dynamic tiering can transform businesses by managing all data costs effectively while maintaining performance. See how SAP HANA dynamic tiering lets business warehouse users create powerful analytic applications that can operate on large data sets while significantly reducing the in-memory footprint and lowering costs.


Of course, these are only a small selection of the many great HANA Platform and Data Warehouse sessions that will be available to you at SAPPHIRE NOW. More information for planning your agenda can be found here:


The SAP HANA experience at SAPPHIRE includes hundreds of sessions packed with insights from customers, partners and HANA experts; demonstrations showcasing the latest innovations; Special Interest Activities (SIAs) for networking and fun, and much more.



But, if I don't see you in Orlando, you can still be part of the action by watching the online broadcast live or on-demand.

Business Scenario


An SAP BW system may need optimization / performance improvements so that the whole BW system stays intact and efficient enough to fulfill the business needs.


The following high-level summary points may serve as a starting point: a general approach to assess the system and move forward with optimization techniques. There are a number of performance tuning techniques available in the BW area. We can incorporate all, any, or some of them after getting familiar with these key points.

This may help a newcomer on-boarding to a project, or an existing consultant. These key points may help to define an optimization project for an existing system, and they are independent of the BW version.

SAP - BW Optimization Approach at a High Level

  1. Understand the system landscape.
  2. Get familiar with the BW architecture and the reporting environment.
  3. Understand thoroughly the business areas where the analytics are performed.
  4. Identify the KPIs and key reports (management & analytical).
  5. Collect and gather information, and understand the roadblocks and business pain points.
  6. Analyze the optimization / performance tuning steps taken so far and understand the changes made in the system landscape.
  7. Gather information on the highly impacted areas where the business needs the system to be optimized.
  8. Prioritize the areas where the optimization process should start.
  9. Segregate the improvement areas into Data Extraction (including the source side), Transformation/Staging, and Analytical/Reporting.
  10. Take an inventory of the objects pertinent to each area defined above.
  11. Identify the key areas where the system can be improved/optimized by removing or enhancing the changes done so far.
  12. Analyze the impact of changing the existing optimization / performance tuning steps taken so far.
  13. Identify the best performance tuning techniques which can be implemented in those areas (point 9).
  14. Come up with a solution suggestion where the existing changes can be removed or enhanced for the areas defined in point 9.
  15. Analyze and identify how to keep the system live, up and running while performing the optimization, making sure the business is not impacted at any point in time.
  16. Draw up an optimization/project plan based on the prioritized business areas and get the object list from the inventory, segregated by improvement area.
  17. Come up with a plan identifying the deliverables and timelines (project management work will be involved at this point).
  18. Collaborate and explain the plan internally, then go for approval from the business stakeholders.
  19. Create a unit test plan for Dev and a regression test plan for QA to accommodate the finalized plan.
  20. Start implementing the optimization as planned, with extra care, and make sure to test multiple times from both the BW perspective and the business perspective so that nothing breaks.
  21. Continue testing in Dev; meet with the internal team and prove the new performance/optimization against the optimization done earlier.
  22. Finalize and confirm that the optimization plan works as expected, then move on to the next level in the system landscape.


Hope this gives beginners and entry-level consultants an idea of how to approach an optimization/performance tuning project.



I decided to write this blog after I realized that it's not so easy to check which queries use a given aggregate. There was a need to optimize aggregates which were created on different occasions, but to do this I had to know which queries work with them.


Checking whether an aggregate was used by any query is possible in RSA1 via the “Maintain aggregates” option for a given cube. There we can see how many times an aggregate has been used since its last activation.

1. RSA maintain aggregates.png


But we don’t see which exact query used an aggregate.


To check that, two tables should be used: RSDDSTAT_DM and RSDDSTAT_OLAP.




We have to use both tables because neither of them contains all the fields that we need, i.e. an aggregate number and a query’s technical code.

In RSDDSTAT_DM we have the aggregate number and the STEPUID field, which we use to join this table with RSDDSTAT_OLAP. However, RSDDSTAT_DM does not contain the query's technical code.

A query's technical code can be found in RSDDSTAT_OLAP, together with STEPUID.
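Conceptually, the join we are about to build in SQVI does the following. This is a toy sketch in plain Python with invented rows and values; the field names other than STEPUID are placeholders, not necessarily the exact column names of the statistics tables:

```python
# Toy rows standing in for the two statistics tables (values invented).
rsddstat_dm = [                      # has the aggregate number + STEPUID
    {"STEPUID": "S1", "AGGREGATE": "100123"},
    {"STEPUID": "S2", "AGGREGATE": "100456"},
]
rsddstat_olap = [                    # has the query's technical code + STEPUID
    {"STEPUID": "S1", "OBJNAME": "ZSALES_Q001"},
    {"STEPUID": "S2", "OBJNAME": "ZSALES_Q002"},
]

# Inner join on STEPUID, which is what the SQVI table join produces:
# one row per statistics step, carrying both the aggregate number and
# the query's technical code.
by_step = {row["STEPUID"]: row for row in rsddstat_olap}
joined = [
    {"AGGREGATE": dm["AGGREGATE"], "QUERY": by_step[dm["STEPUID"]]["OBJNAME"]}
    for dm in rsddstat_dm if dm["STEPUID"] in by_step
]
for row in joined:
    print(row)
```

The SQVI steps below build exactly this join, only graphically and without any coding.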



The simplest possible way (no ABAP, no need to define a BW data source) to combine the data mentioned above is to use the SAP QuickViewer functionality (TCode: SQVI).


All we need to do now is to define and run a query joining the data.

1. Give a query code name

2. Start creating the query by pressing the "Create a query" button

4.SQVI create query.png


3. Write a query title

4. Define Data source as "Table join"

5.SQVI table join.png


5. Add both tables to the project


6.SQVI add table.png


6. Join the tables on STEPUID only.


7.SQVI connecting tables.png

Press F3 to exit the window after all is done.


7. Define which fields should be displayed in the query and which should be used as filters. Use the appropriate checkboxes to do it.

7.SQVI list of fields.png

What was chosen can be seen here

8.SQVI list of fields ddd.png


And here:

9.SQVI list of fields ddd.png


8. Set Export as MS Excel

10.SQVI to excel.png

9. Save changes

11.SQVI to excel save.png


10. Run query


It’s important to restrict “Internal type of query …” to OLAP.


11.SQVI ograniczenie.png


11. After the data is displayed in MS Excel, the remove-duplicates functionality may be used.

11.SQVI wyswietlenie.png


Thank you for reading this blog.

I would be very grateful for any comment on it.


I'd like to give special thanks to Tomasz Piwowarski for helping me create the QuickViewer query.


Regards, Leszek



I tried to make use of the function module RSD_IOBJ_USAGE to create a where-used list for InfoObjects that are relevant for analysis authorization.


It could be useful for some of you.


I'm thankful for any hint concerning my ABAP.




REPORT z_iobj_where_used.

TYPES: BEGIN OF ls_infoobj,
         rsdiobjnm TYPE rsdiobjnm,
       END OF ls_infoobj,
       tt_infoobj TYPE STANDARD TABLE OF ls_infoobj WITH DEFAULT KEY,
       BEGIN OF ls_report,
         infoobj TYPE rsdiobjnm,
         name    TYPE rsdiobjnm,
         typ     TYPE string,
         counter TYPE n,
       END OF ls_report.

DATA: lt_infoobj TYPE tt_infoobj,
      wa_infoobj TYPE ls_infoobj,
      lt_cube    TYPE TABLE OF rs_s_used_by,
      lt_multi   TYPE TABLE OF rs_s_used_by,
      lt_iobj    TYPE TABLE OF rs_s_used_by,
      wa_used_by TYPE rs_s_used_by,
      lt_odso    TYPE rso_t_tlogo_asc,
      lt_iset    TYPE rso_t_tlogo_asc,
      wa_asc     TYPE rso_s_tlogo_asc,
      lt_viobj   TYPE rsd_t_viobj,
      wa_viobj   TYPE rsd_s_viobj,
      lt_report  TYPE TABLE OF ls_report,
      wa_report  TYPE ls_report.

* 1. Part: Collection of relevant elements
* (The name of the function module called here was lost; it fills
*  lt_viobj with the candidate InfoObjects.)
* CALL FUNCTION '...'
*   EXPORTING
*     i_infoprov = ...
*     i_iobjnm   = ...
*   IMPORTING
*     e_t_iobj   = lt_viobj
*   EXCEPTIONS
*     OTHERS     = 2.
* Implement suitable error handling here

* Keep everything except technical content (0TC*) InfoObjects
LOOP AT lt_viobj INTO wa_viobj WHERE iobjnm NS '0TC'.
  wa_infoobj-rsdiobjnm = wa_viobj-iobjnm.
  APPEND wa_infoobj TO lt_infoobj.
ENDLOOP.

* 2. Usage of relevant elements
LOOP AT lt_infoobj INTO wa_infoobj.
  CLEAR: lt_cube, lt_multi, lt_iobj, lt_odso, lt_iset.

  CALL FUNCTION 'RSD_IOBJ_USAGE'
    EXPORTING
      i_iobjnm      = wa_infoobj-rsdiobjnm
      i_objvers     = 'A'
    IMPORTING
      e_t_cube      = lt_cube
      e_t_mpro_iobj = lt_multi
      e_t_iobj      = lt_iobj
      e_t_odso      = lt_odso
      e_t_iset      = lt_iset
    EXCEPTIONS
      illegal_input = 1
      OTHERS        = 2.
* Implement suitable error handling here
* (further importing parameters, e.g. E_T_TRFN, E_T_QUERY, E_T_DTP,
*  are available but not needed in this example)

* Used in InfoCube
  LOOP AT lt_cube INTO wa_used_by.
    wa_report-name    = wa_used_by-tobjnm.
    wa_report-typ     = 'Info Cube'.
    wa_report-infoobj = wa_infoobj-rsdiobjnm.
    APPEND wa_report TO lt_report.
  ENDLOOP.

* Used in MultiProvider
  LOOP AT lt_multi INTO wa_used_by.
    wa_report-name    = wa_used_by-tobjnm.
    wa_report-typ     = 'Multi Provider'.
    wa_report-infoobj = wa_infoobj-rsdiobjnm.
    APPEND wa_report TO lt_report.
  ENDLOOP.

* Used in InfoObject
  LOOP AT lt_iobj INTO wa_used_by.
    wa_report-name    = wa_used_by-tobjnm.
    wa_report-typ     = 'Info Object'.
    wa_report-infoobj = wa_infoobj-rsdiobjnm.
    APPEND wa_report TO lt_report.
  ENDLOOP.

* Used in DSO
  LOOP AT lt_odso INTO wa_asc.
    wa_report-typ     = 'DSO'.
    wa_report-name    = wa_asc-objnm.
    wa_report-infoobj = wa_infoobj-rsdiobjnm.
    APPEND wa_report TO lt_report.
  ENDLOOP.

* Used in InfoSet
  LOOP AT lt_iset INTO wa_asc.
    wa_report-typ     = 'InfoSet'.
    wa_report-name    = wa_asc-objnm.
    wa_report-infoobj = wa_infoobj-rsdiobjnm.
    APPEND wa_report TO lt_report.
  ENDLOOP.
ENDLOOP.

* 3. Part: Report
WRITE: 'Where-used list:'.
LOOP AT lt_infoobj INTO wa_infoobj.
  WRITE: / wa_infoobj-rsdiobjnm.
ENDLOOP.
WRITE: / '',
       /(30) 'Info Objects', (30) 'Info Provider', (30) 'Typ'.
LOOP AT lt_report INTO wa_report.
  WRITE: / wa_report-infoobj, wa_report-name, wa_report-typ.
ENDLOOP.

Going around various organizations, I often hear complaints about how SAP BI/BW is "slow", "expensive", "rigid", and "carries long development cycles".  In the era of HANA, the prediction of BW's immediate demise resonates seemingly everywhere.


Yet, for all the years I have been working with SAP BW, I find it is a tremendous tool for both developers and self-servicing business users.  First coming to mind: It has a model driven GUI, and is database agnostic.  You don't need a DBA every time you want to touch a data model.  Even data staging, governance, and DB statistics come packaged within the graphical system administration tools and transactions.


Do you want to create a master data object with attributes and texts? Go ahead, open Tx RSD1, and fill in a few "forms" as in wizards. The underlying tables and joins are created for you in the database. You don't need specific database or SQL knowledge. Even in the wonderful world of HANA today, you need to bring a couple of tables into a join, define the join type as Text Join, and remember to add the language column.


What about the "Long Development cycles" ? BW data model, even under the best practice, is a little extra compared to working directly with tables.  We see SAP trying hard to bring the "agile" and "fast" into BW:  Open ODS, ODP, Transient providers are a few.....

That still doesn't sound fast enough ? Well, have you visited a brick-and-mortar warehouse lately ? If you haven't, I encourage you to do so whenever you get a chance.  Large warehouses run by the likes of Fedex or UPS are a thing of beauty, if you enjoy watching well-run processes.


When I stopped by a warehouse the first time, I noticed one significant overhead, or "waste of time": incoming well-packed pallets were unpacked and their items individually scanned. New pallets were built from scratch before being put onto the shelves. It would obviously have been a lot quicker to put the inbound pallets directly onto the warehouse shelves, right? For that matter, shipping directly out of the manufacturing plant and skipping the warehouse step would be even faster? Yet, unless you are a mom-and-pop store-front operation, warehouses and warehouse processes are the necessary evil that brings out the efficiency, speed, and savings in supply chain management.


In many regards, a data warehouse as represented by SAP BW is the equivalent of a physical distribution center. When organizations accumulate large amounts of information, a well-designed and well-run BW enables consolidation, self-service, and a 360-degree view. This "overhead" of data warehouse modeling, in turn, provides streamlined and consolidated information delivery and discovery.


Digging a little deeper into organizations where SAP BW's very existence is challenged, I realized that their BW system is often run in the shadow of SAP ERP. For example, despite BW's elaborate data-level security and task access which can distinguish owners, BW authorizations are set up based on the transaction-code model of ECC. BW developers are starved of many useful BW transactions and capabilities, let alone the business community.


On the opposite end of the trajectory, the organizations that fell in love with BW are those where the BW system is "open" and "inviting". There, business users write and navigate almost all of their own queries, which have become extensive analytic workbooks/Web reports.


Not to drag the story out: these bits and pieces came to mind during a recent conversation about how to start on the right track towards BI success. Let me share them here, perhaps to invoke some open discussion.


Let BI thrive under its own merits.


BI has many unique aspects that make it a poor fit for some traditional IT principles:


1) BI is access to ever-changing information and uncovering the dynamic truth underneath, beyond long-lasting static formatted reports.


2) BI is a decision support system.  BI is not necessarily a "must-have" to run a business.  The tagline is: ERP runs the business, BI manages the business.  Leadership and senior management sponsorship and adoption can go a long way to jump-start BI success.


3) Adoption: The ONLY success criteria for BI is business adoption.


ERP projects are measured by delivering on target, per functional requirements and process definitions. While BI projects can be benchmarked against initial requirements, time, and budget, the true measure of BI success is whether the user communities adopt the BI deliverables as their own.


A few years ago, there were studies showing half of the dashboards delivered to users went ignored after a few months.


Two conclusions we can derive from the study:


      a) A dashboard's lifetime may not necessarily be long... The initial design can have some short-term decisions built in, as the outputs may well need to be updated soon after use.


       b) Even when those BI projects came in on time and on budget, unless the outputs are truly adopted and truly reflect users' REAL needs, we cannot claim success.


Coming to the execution, a couple of quick sparks:


1) BI, more than ERP, is about the relationship between IT and business. We should encourage the business to have a sense of ownership in the BI process, not make them feel "us vs. them". Consequently, the BI project methodology often works better when we allow joint decisions, flexibility, and quick turn-around: Agile.


Many modern BI tools, BPC for example, are designed to allow the business to take center stage and ownership. That is also why we have seen a mushrooming of tools like SAP Lumira, Tableau, and QlikView. One of the three pillars of the SAP BI tool-set is agility.


The BI team needs to connect directly to the business we are supporting. We need to understand the personalities of the business process and the people.


Open dialogues, direct contacts, and regular workshops are key to putting BI into users' hands. Many significant BI requirements come from informal or private inquiries/conversations, and from observing business operations.


If we only allow a formal SDLC requirement process, we may discourage the business from coming forward with "immature" thoughts or ideas. The maturity of decisions often comes after uncovering the truth through business intelligence, not before.


In practice, the emphasis on formal SDLC and IT ownership has often forced the business to grab raw data and open their own departmental data marts or BI shops...


Open is key.....


2) Following the Open theme: BI access and security authorizations should attract people in, not keep people out.


BW's analytic authorization model is a different concept from the ECC's Transaction Code model.


BW, BOBJ, and BPC have elaborate and extensive data and task level securities, which can be utilized effectively to protect the information.


In the meantime, BW transactions and BI tools should be open at the "maximum" level to BI developers.  Power users should be encouraged to report on their own.


Many of the finest improvements in BW came in the form of new transaction codes. BW developers should be encouraged to explore those functionalities to take advantage of the investments.


In principle, BW developers should be treated as DBAs and ABAP developers in the BW clients, not simply as report developers. This is especially true when we want to emphasize leanness and creativity in the BI groups.



Rounding back to the top of the post: BI's success is directly tied to the BI group's creativity, user experience, and ultimately user adoption. The more organizations open up and foster those, the better off we will be in the BI journey.


More to come.....

When you set up the customizing of the Aggregation task in the /POSDW/IMG transaction, you have to take into account the "Processing and Performance" configuration of this task. For that, you should know how this parameter works, so I am going to explain it.


The aggregation task has two steps: one to generate the aggregation itself and another to send or work with the aggregated data. In both of these steps you can configure the parameter.




  • Aggregation Task (STEP 1)


At this first step, the parameter represents the number of transactions that are taken into account for each LUW.







If you don't know what a LUW is, you can use the F1 help on this field:





The BAdI /POSDW/BADI_AGGR_TRANSACTION (the BAdI where the aggregation is done) works with only one transaction per iteration; it receives one transaction and aggregates its information (retail information, for example). When the BAdI finishes processing all the relevant transactions, the result is an internal table containing the aggregated data of all processed transactions, and this information has to be updated in the aggregation database.

At this point you might think that the parameter has no effect on this task, but when the aggregated data is updated in the database the parameter becomes very important: the method COPY_AGGREGATED_DATA_REC (of the /POSDW/CL_AGGREGATED_DATA class) receives the entries of the aggregation internal table and, depending on the value of the "Number of items" parameter, is executed one or more times.

The entries of the internal table are processed for updating the aggregation database in packages. Each package contains a number of entries which are the result of aggregating some transactions. How many entries will each package have? It depends on the value of "Number of items" chosen in customizing, but this parameter does not represent the number of aggregation internal table entries to take. Instead, it represents the number of transactions taken into account to produce those entries, so each package contains however many entries were produced by that number of transactions. A transaction might produce one or more aggregation internal table entries.

An example of this:


  1. We configure the parameter in Customizing as 5 (5 transactions).

  2. We execute the aggregation task for 10 transactions (they produce an aggregation internal table with 50 entries).

  3. When the information is updated in the database, it is split into packages. The first package has 30 entries, produced by the first block of 5 transactions; these entries are processed, their information is updated in the database, and the task status of those 5 transactions becomes "PROCESSED". The remaining 20 entries, produced by the last 5 transactions, form another package; they are processed and updated in the database, and the task status of those transactions changes to "PROCESSED" as well.

If the "Number of items" parameter is configured as '0' or INITIAL, it produces that all transactions have to be processed at the same LUW and the package will contain all the entries of the aggregation internal table.


  • Aggregation Task (STEP 2)

The second step is responsible for processing the information in the aggregation database (the OUTBOUND task); for example, we can send it via an ABAP proxy or do whatever we want with it. In this second step we configure the parameter just like in the first step:

As you can notice, the value of this parameter can differ from the value in the first step, because in this second step we no longer work with transactions but with entries of the aggregation database. The aggregation database holds the entries produced by the aggregation task, so when we execute the OUTBOUND task to process them, the package size is very important: each LUW processes a specific number of database entries. For example, if we configure the parameter as 20 and we have 50 entries in the database, 3 packages will be created: the first with 20 entries, the second with 20 entries, and the last with 10 entries.

The BAdI for the OUTBOUND task is /POSDW/BADI_PROCESS_PACKAGE, and the database entries are selected using the method GET_PACKAGE_DATA. For the example with 50 entries, the method would be called 3 times.
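The entry-based packaging of the OUTBOUND task is plain chunking; a small Python sketch of the 50-entries, package-size-20 example (illustrative only, not the actual GET_PACKAGE_DATA code):

```python
def get_package_sizes(total_entries, package_size):
    """How many aggregation-database entries each LUW would process.

    package_size = 0 (INITIAL) means all entries in one LUW.
    """
    if package_size == 0:
        return [total_entries]
    full, rest = divmod(total_entries, package_size)
    return [package_size] * full + ([rest] if rest else [])

print(get_package_sizes(50, 20))  # [20, 20, 10] -> 3 calls of GET_PACKAGE_DATA
```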



As in the first step, if we configure the parameter as '0' or leave it INITIAL, all entries are processed in the same LUW.








In summary, the package size is very important for processing and performance, so choose the value that best fits your purpose.

EasyQuery, What is it?


To put it simply, it is an automated process by which the SAP system generates all the necessary backend objects so that the data output of a normal BW query can be consumed programmatically, from a local or remote, SAP or non-SAP system, by calling an RFC-enabled autogenerated function module.


How to?


Just tick the EasyQuery checkbox in the properties of a normal query in Query Designer and execute the query; all backend function modules and structures are generated automatically. All you need to do is declare a variable of the autogenerated structure type and pass it to the autogenerated FM.


Transport issues


Here we reach the crux of this post. The standard way of transporting an EasyQuery is simply to collect the original query, transport it, tick the EasyQuery checkbox again in the target client, and execute the query again. The issue is that there is no guarantee the autogenerated objects will have the same names in the target client as they had in the source client. This creates problems in the consuming code, especially when it is consumed from SAP systems: a piece of code that addressed the autogenerated FM correctly in one client will obviously fail after transport if the autogenerated FM name in the new client is different.


The following fixes are possible.


1. One option is to open each client and correct the code so that it matches the autogenerated objects. This is not recommended at all, as it would mean opening the production client too.


DATA: t_grid_data TYPE /bic/ne_4.   "the number 4 in this type will have to be
                                    "edited according to what number got
                                    "generated after transport in the target

CALL FUNCTION '/BIC/NF_4'           "the same number will have to be
                                    "updated in the function name too
    "(parameter names as generated by the system)
    = wa_l_r_asondate
    = t_grid_data
    = t_col_desc
    = t_row_desc
    = t_message_log.


2. The second option is to use the FM RSEQ_GET_EQRFC_NAME to fetch the autogenerated FM name for a given EasyQuery name. You still have to provide the custom structure as input to the autogenerated FM, which can be accomplished with a CASE statement that checks the nomenclature of the returned FM name and declares the custom structure accordingly. This works because the autogenerated FM and structure use the same number as differentiator.


lv_eq_name = 'IHRPA_C01_Q0013'.

CALL FUNCTION 'RSEQ_GET_EQRFC_NAME'
  EXPORTING
    = lv_eq_name
    = lv_fm_name.   "get the substring of this variable after the
                    "first 3 characters and use it in a CASE

* Example - CASE lv_substring.
*             WHEN '4'.
*               t_grid_data type /bic/ne_4
*             WHEN '5'.
*               t_grid_data type /bic/ne_5

CALL FUNCTION lv_fm_name
    = wa_l_r_asondate
    = t_grid_data
    = t_col_desc
    = t_row_desc
    = t_message_log.
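The idea behind the CASE logic is just string handling: the generated FM name and the generated structure share a numeric suffix. A hypothetical Python sketch (the /BIC/NF_&lt;n&gt; and /BIC/NE_&lt;n&gt; pattern is taken from the examples above; verify the exact naming in your own system):

```python
def structure_for_fm(fm_name):
    """Derive the matching structure name from the generated FM name.

    Assumes the generated objects differ only in their shared numeric
    suffix, e.g. /BIC/NF_4 (FM) and /BIC/NE_4 (structure).
    """
    suffix = fm_name.rsplit("_", 1)[-1]   # trailing number after the last '_'
    return f"/BIC/NE_{suffix}"

print(structure_for_fm("/BIC/NF_4"))  # /BIC/NE_4
```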



3. The third, and seemingly both the most logical and the cleanest, method is to take the autogenerated objects as templates, create custom Z objects, and use those in the consuming code. We can have a single custom FM for all EasyQueries created; this custom FM replicates the functionality of the autogenerated FM minus some security checks (for example, a date check that ensures the autogenerated FM matches the latest changes to the query). The custom FM can be made to accept the EasyQuery name, the custom structure (which has to be created separately for each EasyQuery, taking the autogenerated structure as a sample), and the ASONDATE.


DATA: t_grid_data TYPE z_Query_Name_Struct.   "created using /bic/ne_4 as template;
                                              "has to be repeated for each separate
                                              "EasyQuery and for each regenerated EasyQuery

CALL FUNCTION z_Query_Name_Function   "created using /BIC/NF_4 as template,
                                      "but can be used for any EasyQuery going forward
    i_s_var_02ix_asondat = wa_l_r_asondate
    i_s_var_eq_name      = 'IHRPA_C01_Q0013'
    = t_grid_data
    = t_col_desc
    = t_row_desc
    = t_message_log.


I will be back with the implementation code for z_Query_Name_Function and the steps for z_Query_Name_Struct.

Hope the information above is useful. Please suggest further improvements on the subject as you see fit.



Darshan Suku

Ramya huleppa

Update from PSA Issue.

Posted by Ramya huleppa Dec 19, 2013



A few days back I searched SDN for the issue described below and found many solutions related to it, but none of them helped me resolve it.


Hope this helps someone.



The process chain failed due to an "update from PSA" issue.


When I checked the error message:

Details tab: "Last delta upload not yet completed. Cancel"

In the monitor screen the load stayed in yellow status for longer than expected and ended with a short dump; the extracted records were, for example, 0 from 52 records.

In the source system I could see that the job had finished.


Checks done:

  • 1. Set the request status to red and tried reloading, but it failed again with the same error.
  • 2. Checked for IDocs in BD87, but no IDocs were stuck.
  • 3. Checked in SM58 whether any tRFCs were stuck, and found the following error status:

SQL error 14400 when accessing table /BIC/B00000XX000




When checked in ST22, I found the following error:




What I tried:


I deleted the PSA table data and tried loading again, but the process chain failed with the same error. However, when I changed the InfoPackage settings from "only PSA" to "data targets", the load was successful. This showed that there was an issue with the PSA table, as indicated in the short dump.


Exact solution: since the short dump said "Inserted partition key does not map to any partition", as trial and error I activated the transfer structure using the program RS_TRANSTRU_ACTIVATE_ALL, which worked, and the loads ran fine as normal again.

On 04.Dec.2013 I received my invoice from Amazon. (Please note this can vary a lot depending on your usage, country, etc.)


This is the bill for 2 hours of usage (currency: USD):





I was looking for a trial version of SAP Lumira to play around with on my iPad, and I found this page:




Here's a list of a lot of products SAP offers as free or time trials.


Well, I found the Lumira free version there, but I also found:


SAP NetWeaver Business Warehouse, powered by HANA


The temptation was big: I've used 7.3, but never on top of HANA. I've seen posts and documents, but I wanted to experience it myself.


So why not? Immediately after signing up, you will be prompted for your Amazon EC2 account.


NOTE: You (yes, you, and from your own wallet) will have to pay for Amazon EC2 hosting. Basically, EC2 is a service for renting virtual machines, so you will need to rent virtual machines to run your BW on HANA. Amazon's invoicing model is pretty complex, but the bottom line is: while your virtual machine is turned on (running), you are spending money. I used my BW 7.3 on HANA for 2 hours, and I was loading data and playing around the whole time.


So far, in my Amazon EC2 account I don't see any charges, but they will come; I will update this blog when I receive them.


Now I will just add some images and links from my experience and ask you to comment about yours.




The longest wait is the 10 to 20 minutes while your Amazon virtual machines are being created/activated:




And after some time (I was refreshing the web page every now and then) you see that it's done:




Then you can click the Connect button and an RDP file with this name will download: "Instance of SAP NetWeaver BW 7.3 on SAP HANA SP06 with SAP BusinessObjects BI 4.0 SP07.rdp"


That's the great part: your Remote Desktop client will open and you will be connected to your virtual machine server.

(Default passwords here)



On your remote desktop all your tools will be there installed and ready to use.



This is an actual screen shot:


Then just double-click your SAP GUI and voilà, you are logged into your own BW 7.3 powered by HANA:




Now just have fun, experiment, and remember: if you log out and leave your server running, you will keep spending money, so please follow the instructions for shutting down your instance whenever you are not using it.


Also, share your experiences running the BW 7.3 on HANA trial on Amazon!

Below is an example for a better understanding of the upper limit and lower limit for generic delta in a generic DataSource:
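For a numeric delta field such as a timestamp, the effect of the two safety intervals can be sketched as follows (a simplified Python illustration; the real pointer handling is done by the extractor and the delta queue):

```python
def delta_selection(last_pointer, current_max, lower_limit, upper_limit):
    """Selection interval of the next delta run (simplified).

    lower_limit: re-reads an overlap below the last delta pointer, so
                 records changed around the pointer are not lost (the
                 target should then handle duplicates, e.g. a DSO with
                 overwrite).
    upper_limit: keeps a safety gap below the current maximum so that
                 records still being posted are left for the next delta.
    """
    return last_pointer - lower_limit, current_max - upper_limit

# last delta read up to 1000, the source is now at 1200,
# lower safety interval 10, upper safety interval 20
print(delta_selection(1000, 1200, 10, 20))  # (990, 1180)
```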







Anil Kumar Thalada

Performance Tuning

Posted by Anil Kumar Thalada Aug 26, 2013

Performance Tuning

  • When a query runs slowly, how do we improve it? → Query performance

  • When we extract data from the source system into BI, the load might run slowly. → Loading performance

  • Query performance, query execution process: whenever we execute a query it triggers the OLAP processor. The processor first checks whether the data is available in the OLAP cache; if not, it identifies the InfoProvider on which the BEx report should be executed, triggers the InfoProvider, selects the records, aggregates them based on characteristic values in the OLAP processor, and transfers them to the frontend (BEx), where the records are formatted.

  • Frontend time: the time spent in BEx to execute the query

  • OLAP time: the time spent in the OLAP processor

  • DB time: the time spent in the database to retrieve the data for the processor

  • Total time taken to execute the query = frontend time + OLAP time + DB time

  • Aggregation ratio: number of records selected from the database into the OLAP processor / number of records transferred to BEx

  • 1. How to collect the statistics: RSA1 → Tools → Settings for BI Statistics (transaction RSDDSTAT); the tables RSDDSTAT_DM and RSDDSTAT_OLAP collect the statistics → if the tables already contain data, delete their contents using the "Delete statistical data" button (it asks for the period) → Delete → observe the job in SM37 → now select the InfoProvider and the query for which you want to maintain statistics → make the necessary settings

  • Save → from now on, whenever anyone executes the query, the statistics are written to the statistical tables

  • How to analyse the collected statistics: 1) by looking at the contents of the tables RSDDSTAT_DM and RSDDSTAT_OLAP

  • Another way: by using transaction ST03N

  • Another way: by implementing the BI statistics content

  • Go to the statistical tables → Contents → Settings → List Format → Choose Fields → deselect all → select INFOCUBE, QUERY ID (name of the query), QDBSEL (number of records selected from the database), QDBTRANS (number of records transferred to BEx), QTIMEOLAP (time spent in OLAP), QTIMEDB (DB time) and QTIMECLIENT (frontend time) → Transfer → observe the statistics

  • Another way: ST03 → Expert Mode → double-click BI Workload → select the aggregation dropdown → select Query → filter your query → go to the All Data tab → observe the statistical information

  • More flexibility comes from installing the BI statistics content (RSTCC_INST_BIAC) → instead of looking at the table data, SAP delivers ready-made queries, InfoCubes, transformations and MultiProviders; install them and load the data into these cubes → ready-made BEx queries then provide the analysis of the reports

  • 0TCT_C01 (Front-End and OLAP Statistics (Aggregated))

  • 0TCT_C02 (BI Front-End and OLAP Statistics (Details))

  • 0TCT_C03 (Data Manager Statistics (Details))

  • 0BWTC_C04 (BW Statistics - Aggregates)

  • 0BWTC_C05 (BW Statistics - WHM)

  • 0BWTC_C09 (Condensing InfoCubes)

  • 0BWTC_C11 (Data Deletion from InfoCube)

  • 0TCT_MC02 (MultiProvider - Front-End and OLAP Statistics (Details))

  • 0TCT_MC01 (MultiProvider - Front-End and OLAP Statistics (Aggregated))

  • 0BWTC_C10 (MultiProvider - BW Statistics)

  • Most of the system maintenance reports come from this content, such as administration reports and reports on how many users ran which queries

  • Steps: install the Business Content DataSources → RSA5 → expand the application component Business Information Warehouse → expand the application component TCT → install the DataSources (6 in total) → replicate the DataSources using the Myself connection → RSA13 → select the Myself connection → DataSource overview → expand BW DataSources → expand Business Information Warehouse → Technical Content → context menu → Replicate

  • Install all the other content, such as InfoCubes, reports, MultiProviders, InfoPackages, transformations and DTPs

  • RSOR → expand MultiProvider → double-click Select Objects → find 0BWTC_C10 → select it in the data flow before and afterwards → install in the background → wait until the installation is done

  • Load the data into all the cubes by scheduling the InfoPackages and DTPs

  • Two queries are mainly used for report analysis: OLAP usage per query (0BWTC_C10_Q012) and OLAP usage per InfoCube (0BWTC_C10_Q013)

  • Open query Q012 in the Analyzer → execute → specify the cube name and query name → execute → observe the statistics

  • Different aspects we can work on to improve query performance:

  • If the DB time is high: 1. modelling aspects, 2. query design, 3. compression, 4. aggregates, 5. partitioning, 6. read mode of the query, 7. precalculated web templates, 8. line-item dimensions, 9. indexes
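The timing and ratio definitions above combine as simple arithmetic; here is a small Python illustration with invented numbers (not taken from any real statistics table):

```python
def query_stats(frontend_ms, olap_ms, db_ms, selected, transferred):
    """Total query runtime and aggregation ratio as defined above."""
    total = frontend_ms + olap_ms + db_ms          # frontend + OLAP + DB time
    aggregation_ratio = selected / transferred     # QDBSEL / QDBTRANS
    return total, aggregation_ratio

total, ratio = query_stats(300, 500, 1200, 100000, 2000)
print(total, ratio)  # 2000 50.0
```

A high aggregation ratio like this (many records read, few transferred) is the classic sign that aggregates or better modelling would reduce the DB time.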


Types of Attributes:

  1. Display attributes
  2. Exclusive attributes
  3. Navigational attributes
  4. Time-dependent attributes
  5. Time-dependent navigational attributes
  6. Compounding attributes
  7. Transitive attributes

1. Display attribute: any InfoObject with the "Attribute Only" checkbox selected becomes a display attribute

  • It is stored in the attribute table (/P)

  • It shows the present truth in reporting

  • It depends completely on the main characteristic

2. Navigational attribute: used whenever an attribute should act as a characteristic at query level

  • It shows the present truth

  • It is stored in the /X table

  • The navigational attribute has to be switched on, with a description

  • Whatever we can do with a normal characteristic in a query, we can also do with a navigational attribute in reporting

  • Naming convention of a navigational attribute → main characteristic name _ attribute name

  • The navigational attribute has to be included in the InfoProvider (e.g. DSO, cube, MultiProvider) and switched on there so that it is enabled for reporting; otherwise it is not available for reporting

3. Exclusive attribute: the "Attribute Only" checkbox is deselected

4. Time-dependent attribute (display attribute + time-dependent property): select the "Time dependent" checkbox → when the value of a characteristic changes over time, we model the InfoObject as a time-dependent attribute, which allows maintaining different values with respect to two additional fields (Date From and Date To)

  • Time-dependent attributes are stored in the /Q table

  • The key date defines which value is read from the time-dependent attribute table
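The key-date lookup against the time-dependent (/Q) table can be illustrated like this (a Python sketch with made-up data; the real table also carries SIDs and different technical field names):

```python
from datetime import date

# simplified image of a time-dependent attribute (/Q) table
q_table = [
    {"costcenter": "CC01", "date_from": date(2013, 1, 1),
     "date_to": date(2013, 6, 30), "manager": "SMITH"},
    {"costcenter": "CC01", "date_from": date(2013, 7, 1),
     "date_to": date(9999, 12, 31), "manager": "JONES"},
]

def attribute_on_key_date(table, key, key_date):
    """Return the attribute value whose validity interval covers the key date."""
    for row in table:
        if row["costcenter"] == key and row["date_from"] <= key_date <= row["date_to"]:
            return row["manager"]
    return None

print(attribute_on_key_date(q_table, "CC01", date(2013, 3, 15)))  # SMITH
print(attribute_on_key_date(q_table, "CC01", date(2014, 1, 1)))   # JONES
```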

5. Time-dependent navigational attribute:

  • It is both time-dependent and navigational

  • It is stored in the /Y table


6. Compounding attribute:

  • A superior-level attribute

  • Used when the value of one InfoObject depends on the value of another InfoObject

  • Example: there are two plants, PLANT 1 with materials M1, M2, M3 and PLANT 2 with materials M1, M2, M3

  • Here MATERIAL is compounded to PLANT

  • The compounding attribute acts as part of the primary key of all the attribute, text and SID tables

  • Compounding attributes degrade loading performance
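The plant/material example can be pictured as a composite key (a hypothetical Python sketch): because MATERIAL is compounded to PLANT, every attribute, text and SID record is keyed by the pair, not by the material alone:

```python
# MATERIAL compounded to PLANT: "M1" alone is ambiguous, so the key of
# every attribute/text/SID record is the pair (PLANT, MATERIAL)
material_text = {
    ("PLANT1", "M1"): "Raw steel, plant 1 grade",
    ("PLANT2", "M1"): "Raw steel, plant 2 grade",
}

print(material_text[("PLANT1", "M1")])  # Raw steel, plant 1 grade
print(material_text[("PLANT2", "M1")])  # Raw steel, plant 2 grade
```

The longer key is also why compounding slows down loading: every lookup and SID determination has to work on the full compound key.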

7. Transitive attribute:

  • A second-level navigational attribute

  • A navigational attribute that itself has a navigational attribute

  • How do you find the delta process of a DataSource?

  • The detailed delta information is in table RODELTAM

  • Early delta initialization (enabled for LO): in the InfoPackage's Update tab there is a radio button "Early Delta Initialization". If you execute an early delta initialization, updates in the source system can continue and data can be written to the delta queue while the initialization request is being processed.






Anil Kumar Thalada




