
SAP Business Warehouse


It has been a while since I worked on an archiving solution: during 2008-2009, on BW 3.5, I used SARA, the standard archiving solution provided by SAP. Now I have had the chance to work on archiving for SAP BW on HANA. You will find plenty of how-to documents on the NLS archiving solution.

As usual, SAP has improved the solution a lot: compared to SARA with tape drives, NLS uses SAP Sybase IQ, a columnar database. Anyway, this blog is not meant to compare the two; I would like to share my learnings and tips & tricks for NLS archiving on SAP BW on HANA.

Before proceeding to the blog, I would like to thank my onsite coordinator for all the knowledge transfer.

Tips & Tricks:

  • Before starting with NLS archiving, list all the InfoProviders that need to be archived. Prioritize them based on business requirements, users' priorities, and the volume and size of each InfoProvider.
  • Get the volume, size, and years of data stored in each InfoProvider from the Basis team, so that you can quickly decide what has to be archived and what is not needed.
  • Archive in phases: for example, run up to Step 50 (Verification Phase), and later schedule Step 70 (Deletion Phase, i.e., deletion from BW).
  • If InfoProviders are loaded with historical values, i.e., if the daily delta brings changed data for previous years, then such InfoProviders cannot be archived: once data is archived to NLS, it is locked against any change, and the system will not allow old data to be loaded or changed.
  • For large-volume InfoProviders, such as Sales cubes, find a characteristic other than Time to divide the data for archiving. Otherwise, archive jobs may take a long time and a lot of memory (remember it is main memory in HANA, which is expensive). The system can easily run out of allocated memory if archive jobs run in parallel.
  • Schedule/take a regular backup of both NLS and the existing SAP BW on HANA before deleting data, so that in the worst case (if either system crashes) you can recover from the backups.
  • Keep a tracker with the list of InfoProviders, their current status, and the steps completed; a sample is below, though it may vary per person/project requirement.


  • Most important: schedule the archive jobs instead of executing them manually. This way you save time and effort and use the system effectively.

There are two ways to do it.


I. Using Process Chains


  1. Go to transaction RSPC and create a new process chain.
  2. Expand 'Data Target Administration' and drag the 'Archive Data from an InfoProvider' process into the chain.
  3. Create a process variant in the next prompt, enter the required details, and find the desired InfoProvider.
  4. On the next screen, enter the time slice or any other characteristic and select the step (here, up to Step 50).
  5. The final process chain looks like the one below.


Note: For request-based archiving, use the option "Continue Open Archiving Request(s)". Explore this further if needed; in my case archiving is always based on a time slice.



Advantages:

  1. Process chains are useful when archiving is done regularly, or when there is a requirement to archive data for a specific time/year to NLS.
  2. If the number of InfoProviders and time slices is limited, a process chain can be created and scheduled for each InfoProvider.

Disadvantages:

  1. If archiving is time-based and a one-time activity, it is tedious to create a chain for each InfoProvider and its time slices.


II. Using the Job Scheduling Technique

Process chains were not well suited to our requirement, hence we went for the job scheduling technique: schedule the jobs one after the other, using the previous job's completion as the start condition.



Advantages:

  1. It is instant, quick, and easy. Most importantly, each job can be planned for a time when the system is free, and the number of jobs can be increased or decreased based on the available time. For example, just 6 months of data can be scheduled for archiving, or a complete 12 months, or 8 months.
  2. It is always flexible to change the archiving steps or the time slice, and there is no maintenance as with process chains.

Disadvantages:

  1. In scenarios such as a memory-full situation, a job might be cancelled while the subsequent job still starts afterwards, again trying to fill memory; this can shut down/crash the system or slow down other processes.
  2. Once scheduled, you may not be able to change the jobs due to limited authorization in production.
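The chaining idea (each job starting only when its predecessor has finished) can be sketched generically. This is only an illustration of the scheduling pattern, not SAP's background job API, and the job names are made up:

```python
# Generic sketch of the chaining pattern: each job's "start condition" is
# the completion of its predecessor, mirroring how the released BW job
# name/ID is used as the start condition for the next scheduled job.
def run_chain(jobs):
    """Run jobs strictly one after the other and collect their results."""
    results = []
    for name, work in jobs:
        # The previous job has fully completed (and released its memory)
        # before the next one starts - the point of the start condition.
        results.append((name, work()))
    return results

jobs = [
    ("BI_ARCHIVE_2013", lambda: "step 50 done"),
    ("BI_ARCHIVE_2014", lambda: "step 50 done"),
]
print(run_chain(jobs))
```

Running the jobs strictly in sequence trades total runtime for a bounded memory footprint, which is the whole point on a HANA system where main memory is the scarce resource.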


Let’s look at how to schedule the jobs:

  1. Go to RSA1 – InfoProvider, search for the desired InfoProvider, and go to its Manage screen.
  2. If a DAP (Data Archiving Process) has already been created, the Manage screen will have a new tab named ‘Archiving’.
  3. Click the ‘Archiving Request’ button underneath; a pop-up appears where you enter the archiving conditions.
  4. First, select Step 50 as shown, then go to ‘Further Restriction’ and enter the criteria.
  5. For the very first job, select a date and time at least half an hour ahead of the current time, so that you have time to schedule the remaining jobs. Check and save, then click ‘In the Background’ to schedule.
  6. Click the ‘Job Overview’ button (next to ‘Archiving Request’) on the Archiving tab of the Manage screen to see the released job and its details.


7. To schedule the next job, enter the released job's name as the start condition; just copy the job name when you schedule the next job. This step is simple because there is only one job in 'Released' status, and that job is taken as the start condition.


8. For subsequent job schedules, we need the job ID of the released job. We can get this ID in two ways: either from the Job Overview (go to the released job, double-click it, and click on Job Details to see the job ID), or by watching the footer (bottom left of the screen) for the job ID the moment you schedule the job.


    • The reason we need this ID is that there will now be two jobs in 'Released' status. For the third job, we need to specify the start condition as completion of the second; the trick is that if you specify only the job name, the system will pick the old/first job. Hence, for the third job, just select the date and time and click the Check button.
    • Then select the start condition 'after job'; a pop-up appears in which you select the correct job as the start condition. In this screen, select the job whose job ID you noted for the second job.


   9. Once all jobs are scheduled, the released job status looks like below.


  10. Check the jobs once in a while via SM37; please note that the jobs start one after the other.



Start archive jobs in parallel only after considering available system memory, and work efficiently while these jobs are in progress. Again, these are my own experiences; feel free to correct me if you find a more efficient way of achieving the task, or to add any steps if required.


Feel free to share your feedback, and thanks for reading the blog.


Related Content:


1. BEx query discontinued features:-


If we run the program SAP_QUERY_CHECKER_740, we get a list of all queries that are not supported in BW 7.4.

Run the report SAP_QUERY_CHECKER_740, preferably in the background, as it might run for a very long time. The spool output shows the queries that will no longer run correctly in 7.40. We will get output somewhat like this…




The following features are discontinued in 740:


  • Constant Selection with Append (CSA) is no longer supported and can no longer be modeled in the 7.00 BEx Query Designer. Some business scenarios use this feature to model an outer join.
  • The business requirement can be met by modeling an InfoSet instead. Using an InfoSet is highly recommended, especially if a selection condition that was evaluated via virtual InfoObjects in the CSA approach can be handed down directly to the SQL statement.
  • In a BW InfoSet, the join operation LEFT OUTER JOIN was not permitted with a BW InfoCube up to now, because in this case SQL statements with poor performance may be created.
  • Formulas calculated before aggregation are no longer supported. The report SAP_QUERY_CHECKER_740 analyzes both the calculated key figure definitions and the query definitions that use the calculated key figures.

Exception Aggregation:-

Exception Aggregation can be defined for a basic key figure, calculated key figures and formulas in the Query Designer. It determines how the key figure is aggregated in the query in relation to the 'exception' characteristic. There is always only one Reference Characteristic for a certain Exception Aggregation.


If we use Exception Aggregation, the following two points are important to know:

The Reference Characteristic is added to the 'Drilldown Characteristics' and aggregation is carried out by the OLAP processor for these 'Drilldown Characteristics'.
The Exception Aggregation is always carried out last after all other necessary aggregations.

  • If the calculation operation commutes with the aggregation, the flag "Calculate After Aggregation" can be set. The option 'Calculate Before Aggregation' is obsolete now and should no longer be used: calculating before aggregation results in poor performance, because the database reads the data at the most detailed level and the formula is calculated for every record.
  • Aggregation and calculation occur at different points in time. By default, the data is first aggregated to the display level, and the formulas are calculated afterwards.
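The difference between the two options can be shown with a small numeric example (illustrative Python with made-up data, not BW code), using a formula such as Price * Quantity:

```python
# Why "calculate before aggregation" and "calculate after aggregation"
# generally give different results for a formula like Price * Quantity.
records = [
    {"material": "A", "price": 10.0, "qty": 2},
    {"material": "A", "price": 12.0, "qty": 3},
]

def calc_before_aggregation(rows):
    # Formula evaluated for every record, results aggregated afterwards:
    # the database must read the most detailed level (slow in practice).
    return sum(r["price"] * r["qty"] for r in rows)

def calc_after_aggregation(rows):
    # Operands aggregated first, formula applied once to the totals.
    return sum(r["price"] for r in rows) * sum(r["qty"] for r in rows)

print(calc_before_aggregation(records))  # 10*2 + 12*3 = 56.0
print(calc_after_aggregation(records))   # 22.0 * 5 = 110.0
```

Because multiplication does not commute with summation here, the two options differ; exception aggregation lets you control at which granularity the formula is evaluated instead of paying the record-level cost everywhere.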



The Exception Aggregation setting allows the formula to be calculated before aggregation over a chosen Reference Characteristic. The remaining aggregation is then executed using the defined exception aggregation, for example 'average' or 'last value'.

Calculate after Aggregation: This field is only displayed for Calculated Key Figures; it is not displayed for formulas.


If the operation has to be calculated at a certain granularity, use formula exception aggregation to specify the granularity at which the formula is calculated.

You can also create calculated key figures using a formula that itself uses exception aggregation (nested exception aggregation).

2. Call Function not found post Upgrade:-


Process chain steps that load hierarchies will fail. After upgrading to 7.40, when we load hierarchies using InfoPackages, the loads fail when the InfoObject does not have a conversion exit defined. This is due to a program error.

To resolve the issue, we need to implement SAP Note 1912874 - CALL_FUNCTION_NOT_FOUND.

On further analysis, it shows ABAP dumps.




Activation of SICF services in  BW:-


During the upgrade, the Software Update Manager disables services of the Internet Communication Framework (ICF) for security reasons. After upgrading to SAP BW 7.4, these ICF services are therefore inactive. Services should be activated on an application-related basis only; this can be done manually (right-click, then Activate) by following the URL given in the error screen in transaction SICF.


In transaction SICF, most of the services that need to be activated after the BW 7.4 upgrade can be found under default_host/sap/public once the tree is opened.


If, for example, you want to activate services for the /SAP/public/icman URL, you have to activate the "default host" service tree in transaction SICF. After that, you must activate the individual "sap", "public", and "icman" services.


You can activate an ICF service as follows:

1. Select the ICF service in the ICF tree in transaction SICF.

2. You can then activate the service in one of the following ways:

a) Choose "Service/Virt. Host" -> "Activate" from the menu.

b) Right-click to open the context menu and choose "Activate service".



If the "default host" node is inactive in transaction SICF, the HTTP request produces a "RAISE_EXCEPTION" ABAP runtime error stating that the HOST_INACTIVE exception condition was triggered. If a service is inactive in the SICF transaction, the error text "Forbidden" is displayed when you access this service.


Some services must be activated in the system, depending on the operational scenario:

Support for the Internet protocols (HTTP, HTTPS, and SMTP) in SAP Web Application Server requires /default_host/sap/public/icman.


After you have installed SAP Web Application Server, you must ensure that this service is activated in transaction SICF.


Otherwise, we will face issues in the Metadata Repository, master data maintenance, etc. For this, we need to activate the services in BW. For example:




Pre upgrade



Post upgrade BW 7.4:-


After the upgrade, we will find many services in an inactive state; we need to activate them.




In case of Metadata Repository we need to activate the services.







Some important services need to be activated as part of the post-upgrade checks.


With the message server






With the Web Dispatcher





Using Business Server Pages (BSP)






Analysis Authorization:-


If we were using the reporting authorization concept and upgraded to SAP NetWeaver 7.3, we have to migrate these authorizations to the new analysis authorization concept or redefine the authorizations from scratch.

In SAP BW 7.3, analysis authorizations are optional because reporting authorizations still work. In 7.4, however, reporting authorizations no longer exist; analysis authorizations are mandatory, and all BW roles should be migrated to them.

The authorization objects S_RS_ICUBE, S_RS_MPRO, S_RS_ISET, and S_RS_ODSO were checked under reporting authorizations, but these objects are no longer checked during query processing in BW 7.4. Instead, the check is performed using the special characteristics 0TCAIPROV (Authorizations for InfoProvider), 0TCAACTVT (Activity in Analysis Authorizations), and 0TCAVALID (Validity of an Authorization). These are standard InfoObjects in BW.

These authorization objects are offered as a migration option during migration configuration. If you select them, authorizations for these special characteristics are generated according to the entries in the Activity field and the associated field for the corresponding InfoProvider, and are then assigned to the users.


Without these authorizations, we will not be able to access any query output; the system shows "You do not have sufficient authorization for the InfoProvider". Unless we assign the 0BI_ALL object, we cannot access any query output, but per security policy 0BI_ALL will not be given to any user. So we need to implement analysis authorizations to get the output of the queries.


The InfoObjects which are authorization-relevant:



When we check the Authorization Value Status table, in the older version we have the 0BI_ALL authorization in the 'Name of an Authorization' field.



But in the upgraded SAP BW 7.4 version, we have:



0BI_ALL assigns all analysis authorizations to a user, which is equivalent to SAP_ALL in BI. It can be assigned directly via RSU01.


We can check the characteristic catalog table RSDCHA for the InfoObjects flagged as authorization-relevant. Whenever these InfoObjects are used in a query, a user who is not authorized will not get the output.


The Custom Authorization objects can be created and assigned to users.


Exceptions are validity (0TCAVALID), Info Provider (0INFOPROV) and Activity (0TCAACTVT), which cannot be removed and always have to be authorization relevant.


Some of the authorization issues faced after the upgrade concern semantically partitioned objects and writing ABAP routines.

The users' roles should be extended with the object S_RS_LPOA to grant the required access. This is the authorization object for working with semantically partitioned objects and their sub-objects.





In the case of writing routines, we will not be authorized:






Authorization Objects for Working with Data Warehousing Workbench


We should have authorization for the authorization object ABAP Workbench (S_DEVELOP) with the following field assignments:

  • DEVCLASS: one of the following program classes, depending on the routine type:
    • "BWROUT_UPDR": routines for update rules
    • "BWROUT_ISTS": routines for transfer rules
    • "BWROUT_IOBJ": routines for InfoObjects
    • "BWROUT_TRFN": routines for transformations
    • "BWROUT_ISIP": routines for InfoPackages
    • "BWROUT_DTPA": routines for DTPs
  • OBJNAME: "GP*"
  • ACTIVITY: "23"


Fiscal Period Description (Text) Showing Wrong Values





Fiscal Period (0FISCPER) shows an incorrect description at the InfoObject and report level.


For illustration purposes, here is an example.


Incorrect fiscal period description at report level:



Expected correct description:


Fiscal Year/Period    Description
001.2015              July 2014
002.2015              August 2014
003.2015              September 2014
004.2015              October 2014
005.2015              November 2014
006.2015              December 2014
007.2015              January 2015
008.2015              February 2015
009.2015              March 2015
010.2015              April 2015
011.2015              May 2015
012.2015              June 2015





Go to the path below in SBIW and change the 'Text Fiscal Year / Period' value to "2 Calendar Year", as shown in the screenshot below.






After modifying the setting, the fiscal period description shows the correct values.


Please find below screenshot.



I hope this document helps you!

SAP MRS (Multi Resource Scheduling) - A Ready Reference (Part 1)






Being a beginner in SAP MRS is a challenge. As a BW techie, when I started in the MRS module, I found a lot of scattered information, but no article or blog with consolidated information, even on the basic terminology of SAP MRS.


This blog is my attempt to give a novice an insight into the basic aspects and terminology of SAP MRS.






Introduction to MRS




SAP Multiresource Scheduling enables you to find suitable resources for demands and assign them to those demands. Demands are units of work from areas such as Plant Maintenance or Project System for which a resource is to be planned.


It is an end-to-end scheduling process.






Salient features of MRS --




  • Effectively manage high volumes of resources and demands for your service, plant maintenance, or project business.


  • Get a real-time view of resources and project assignments with a user-friendly planning board.


  • “Drag and drop” to update project details.


  • Boost productivity and reduce operational downtime.


  • With the ability to view, analyse, and interpret order data, you can easily match technician skills to assignments – for better repair work and improved service quality.


  • We can integrate  MRS with  all SAP modules.





MRS runs fully integrated in the ERP  system.




PS Integration


HR Integration


PM Integration


DBM Integration


C Projects Integration



PS Integration --




The diagram below explains the process from the creation of projects through to approval.






The diagram below explains resource assignment in project planning, from creation of the resource request through to task completion.






Each network activity is converted to a demand. The screenshots below show how resources are assigned to activities and how resource planning is done.








MRS Planning board




The diagram below shows the planning board. We can see network activities on the left-hand side, with the number of resources next to each network activity (here it is 1).




The planning board is the main work area in MRS; here we can assign resources on the respective days.


We can create assignments for resources and split an assignment across multiple days.






We can do many other things, such as time allocations, leaves, and color configurations (to identify task status).







Some important T-codes of MRS (PS- and PM-related) --




OPUU - Maintain Network Profile (Project Systems - Network and Activity)


CJ2B - Change Project Planning Board (Project Systems - Project Planning Board)


OPT7 - Project Planning Board Profile (Project Systems - Project Planning Board)


/MRSS/PLBOORGSRV - Planning Board (General) (PM - Maintenance Orders)


PAM03 - Graphical Operational Planning (PAM) (PM - Maintenance Notifications)


CJ2C - Display Project Planning Board (Project Systems - Project Planning Board)



Below are important tables related to MRS; the comments explain the table contents in layman's terms.









Type G Capacity Graphs: Basic Availability w/o On-Call Times - Resource Assigned Hours

Capacity Graph Type H: Basic Availability - Resource Available Hours

Type B Capacity Graphs: W/o Cap. Assgmnts, w/ Reservations - Resource Adhoc Hours

Type A Capacity Graphs: W/ Cap. Assignments & Reservations - Remaining Hours

Time allocations for resources - Time Allocation

MRS Basis Assignments - Booked Hours

Informative Fields for Demand Items - Resource Utilization Hours

Data required for planning-relevant items - Demand Hours




In my next blog, I will explain the BW MRS reports built on top of these tables, and the integration of other modules with MRS.



Other reference for MRS understanding:



Note: I originally published the following post on my company's blog on software quality. Since it might be interesting for developers in the SAP BW realm, I am republishing it here (slightly adapted).

As my colleague Fabian Streitel explained in another post, a combination of change detection and execution logging can substantially increase transparency regarding which recent changes of a software system have actually been covered by the testing process. I will not repeat all the details of the Test Gap Analysis approach here, but instead just summarize the core idea: Untested new or changed code is much more likely to contain bugs than other parts of a software system. Therefore it makes sense to use information about code changes and code execution during testing in order to identify those changed but untested areas.


Several times we heard from our customers that they like the idea, but they are not sure about its applicability in their specific project. In the majority of these cases the argument was that the project mainly deals with generated artifacts rather than code, ranging from Python snippets generated from UML-like models and stored in a proprietary database schema to SAP BW applications containing a variety of artifact types beyond ABAP code. Even under these circumstances Test Gap Analysis is a valuable tool and may provide insight into what would otherwise be hidden from the team. In the following I explain how we applied Test Gap Analysis in an SAP BW environment.


The Starting Point

As you all know, in SAP BW a large amount of development is performed graphically in the BW Workbench. In cases where custom behavior is required, routines can be attached at well-defined points. As a consequence, it is very hard to track changes in a BW application. Of course, there is metadata attached to every element containing the relevant information, but seeing all changes that have occurred since a given point in time (e.g., the current production release) is not a trivial task. The same holds for execution information. Since we were already using Test Gap Analysis for transactional ABAP systems, we reached out to a team developing in BW and showed them some results for their own custom ABAP code.


Figure 1: Test Gaps in Manually Maintained ABAP code only


The picture shows all ABAP code manually maintained by the development team. Each rectangle having white borders corresponds to a package, and the smaller rectangles within correspond to processing blocks, i.e., methods, form routines, function modules, or the like. As explained in Fabian’s post, grey means the block was unchanged, while the colors denote changed blocks. Out of these, the green ones have been executed after the most recent change, while the orange ones have untested modifications with regard to the baseline, and the red ones are new and untested. Of course, tooltips are provided when hovering over a rectangle containing all the information necessary to identify which code block it represents - I just did not include it in the screenshot for confidentiality reasons.
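The classification described above can be sketched as a small function. This is illustrative only; the names are invented and not part of any Teamscale or SAP API:

```python
def classify(block, changed_blocks, executed_after_change):
    """Classify a processing block for Test Gap Analysis.

    changed_blocks: blocks modified since the baseline (e.g. the current
    production release). executed_after_change: changed blocks that were
    executed after their most recent modification, i.e. covered by testing.
    """
    if block not in changed_blocks:
        return "unchanged"        # grey rectangle
    if block in executed_after_change:
        return "tested change"    # green rectangle
    return "test gap"             # orange (modified) or red (new), untested

changed = {"METHOD_1", "METHOD_2"}
tested = {"METHOD_2"}
print([classify(b, changed, tested) for b in ("METHOD_0", "METHOD_1", "METHOD_2")])
# ['unchanged', 'test gap', 'tested change']
```

The "test gaps" are exactly the blocks in `changed_blocks` but not in `executed_after_change`, which is what the treemap highlights in orange and red.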


Moving Forward

As expected, the feedback was that most of the development effort was not represented in the picture, since development mainly happened in Queries, Transformations and DTPs rather than plain ABAP code. Another insight, however, was that all these artifacts are transformed into executable ABAP code. Therefore, we analyzed the code generated from them, keeping track of the original objects’ names. The result was (of course) much larger.


Figure 2: Code generated from DTPs (left), Transformations (middle top), Queries (middle bottom), and manually maintained code (far right)


Including all the programs generated out of BW objects, the whole content of the first picture shrinks down to what you can see in the right column now, meaning that it only makes up a fraction of the analyzed code. Therefore, we have two main observations: First, ABAP programs generated from BW objects tend to get quite large and contain a lot of methods. Second, not every generated method is executed when the respective BW object is executed. In order to make the output more comprehensible, we decided to draw only one rectangle per BW object and mark it as changed (or executed) if at least one of the generated methods has been changed (or executed). This way, the granularity of the result is much closer to what the developer expects. In addition, we shrink the rectangles representing these generated programs by a configurable factor. Since the absolute size of these programs is not comparable to that of manually maintained code anyway, the scaling factor can be adjusted to achieve an easier to navigate visual representation.
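The per-object aggregation described above can be sketched as follows (a minimal illustration; the BW object and method names are invented):

```python
from collections import defaultdict

# Each generated ABAP method belongs to exactly one BW object
# (a DTP, transformation, or query).
methods = [
    {"bw_object": "DTP_SALES", "changed": True,  "executed": False},
    {"bw_object": "DTP_SALES", "changed": False, "executed": True},
    {"bw_object": "TRFN_COST", "changed": False, "executed": False},
]

# A BW object is marked as changed (or executed) if at least one of its
# generated methods has been changed (or executed) - one rectangle per object.
objects = defaultdict(lambda: {"changed": False, "executed": False})
for m in methods:
    objects[m["bw_object"]]["changed"] |= m["changed"]
    objects[m["bw_object"]]["executed"] |= m["executed"]

print(dict(objects))
# DTP_SALES is both changed and executed; TRFN_COST is neither.
```

Collapsing many generated methods into one flag pair per BW object is what brings the visualization back to the granularity a BW developer actually thinks in.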


Figure 3: Aggregated and scaled view for generated code (left to middle) and manually maintained code (right)

The Result

With this visualization at hand, the teams can now see directly which parts of the application have changed since the last release, focus their test efforts accordingly, and monitor test coverage over time. This helps increase transparency and provides timely feedback on the effectiveness of the test suite in terms of change coverage.

A recap ...


Since NetWeaver Release 7.0, the SAP Java (J2EE) stack has been a component of the SAP BW reporting architecture, and it also formed the foundation for BW Integrated Planning (BW-IP). I also spent a lot of time creating documents and providing the main input for the BI-JAVA CTC template, which is described for 7.0x here - SAP NetWeaver 7.0 - Setting up BEx Web - Short ... | SCN - and for 7.3x/7.40 here - New Installation of SAP BI JAVA 7.30 - Options, Connectivity and Security.


Then, not really noticed by the audience (not even by me ... ;-), an unspectacular SAP Note was released - Note 1562004 - Option: Issuing assertion tickets without logon tickets - which introduced an extension to the parameter login/create_sso2_ticket. See also the SAP Help background on that topic.


By the way: the activation of SSL together with the SAPHostAgent can be found in this SFG document - SAP First Guidance - SAP-NLS Solution with SAP IQ | SCN.


After understanding the impact, I updated the document - SAP NetWeaver BW Installation/Configuration (also on HANA) - and didn't give it much attention. The major difference in my active implementations for SAP BW 7.3x/7.40 and SAP Solution Manager 7.1 was: no SSO errors at all, and no more connection problems between the ABAP and Java stacks.

Unfortunately, countless SAP Notes, SAP online help pages, and SAP tools still refer to the old value login/create_sso2_ticket = 2, but since 7.40 the "correct" value is the system default: login/create_sso2_ticket = 3.
By the way: did you know that this parameter can be changed dynamically in tx. RZ11? This allows you to switch the parameter while using the BI-JAVA CTC template and continue the configuration successfully.



Finding out the BI-JAVA system status

The easiest way is to call the SAP NetWeaver Administrator at http://server.domain.ext:<5<nr>00>/nwa and proceed to the "System Information" page.



Details of the Line "Version":





2015 …


Main Version





Note 1961111 - BW ABAP/JAVA SPS dependencies for different NetWeaver releases; relevant for BI Java patch updating


Note 1512355 - SAP NW 7.30/7.31/7.40 : Schedule for BI Java Patch Delivery




Running the BI-JAVA CTC template


Call the wizard directly with the following URL - http://server.domain.ext:<5<nr>00>/nwa/cfg-wizard



Checking the result

Now that this hurdle is cleared, we have to check the BI-JAVA configuration with the BI diagnostic tool, or directly in the system landscape of the EP.


And the Result in the BI Diagnostic tool (version 0.427)

Note 937697 - Usage of SAP NetWeaver BI Diagnostics & Support Desk Tool


To get to this final state, you additionally have to check/correct the following settings in the NetWeaver Administrator for evaluate_assertion_ticket and ticket, according to SAP Note 945055 (the note has not been updated since 2007).


[trustedsys1=HBW, 000]

[trusteddn1=CN=HBW, OU=SSL Server, O=SAP-AG, C=DE]

[trustediss1=EMAIL=xxx, CN=SAPNetCA, OU=SAPNet, O=SAP-AG, C=DE]

[trustedsys2=HBW, 001]

[trusteddn2=CN=HBW, OU=SSL Server, O=SAP-AG, C=DE]

[trustediss2=EMAIL=  , CN=SAPNetCA, OU=SAPNet, O=SAP-AG, C=DE]

As we are now using assertion tickets instead of logon tickets, the RFC connection from ABAP to JAVA looks a bit different:


Cross-check as well the entry for the default portal in the ABAP backend (tx. SM30 => RSPOR_T_PORTAL).

Note 2164596 - BEx Web 7.x: Field "Default Font" missing in RSPOR_T_PORTAL table maintenance dialog


Check additional settings in the BI-JAVA configuration via the findings from this SAP Note (even though it is SolMan-related at this time):

Note 2013578 - SMDAgent cannot connect to the SolMan using certificate based method - SolMan 7.10 SP11/SP12/SP13

These notes deal with the missing P4/P4S entries in the configuration:

Note 2012760 - Ports in Solution Manager for Diagnostics Agent registration
Note 1898685 - Connect the Diagnostics Agent to Solution Manager using SSL

Activating the BEx Web templates

OK, this is now solved as well. Now that the BI-JAVA connection technically works, we can check whether the standard BEx Web template 0ANALYSIS_PATTERN works correctly. Please remember that you have to activate the necessary web templates from the SAP BW Business Content at least once; otherwise follow the SAP Note:

Note 1706282 - Error while loading Web template "0ANALYSIS_PATTERN" (return value "4")


Now you can call the report RS_TEMPLATE_MAINTAIN_70 via tx. SE38 and choose 0ANALYSIS_PATTERN as the template ID.


Running BEx Web from RSRT

OK. This only proves that the BEx Web template can be called directly. But what happens when you call the web template, or any query, from tx. RSRT/RSRT2? You will find (as I did) that this is a completely different story. Tx. RSRT has three different options for showing the result of a query in a web-based format, and we are interested in the "Java Web" based output.


The recently added "WD Grid" output is nice to use together with the new BW-MT and BW-aDSO capabilities of SAP BW 7.40 on HANA.

But what we see is this:


Hmm? We checked the BI-JAVA connection and the standard BEx Web Template and still there is an error? Is there a problem with RSRT? Is the parameter wrong?

No. Recently (again, not really noticed by the audience), another SAP Note was released which also impacts SAP EP 7.3x/7.40:

Note 2021994 - Malfunctioning of Portal due to omission of post parameters

in this context the following SAP Note is also important to consider:

Note 2151385 - connection with proxy

After applying the necessary corrections to the SAP BI-JAVA EP instance, tx. RSRT finally shows the correct output as well:



If you encounter the following error:


"DATAPROVIDER" of type "QUERY_VIEW_DATA_PROVIDER" could not be generated

Cannot load query "MEDAL_FLAG_QUERY" (data provider "DP_1": {2})



This is solved by the SAP Note (to apply in the ABAP Backend) - Note 2153270 - Setup Selection Object with different handl id

If the following error ("classic RSBOLAP018") occurs, it can have different causes, e.g. outdated BI-JAVA SCAs, a user problem, or a connectivity problem.

"RSBOLAP018 java system error An unknown error occurred during the portal communication"

Some of them can be solved directly with the SAP Notes below:


Note 1573365 - Sporadic communication error in BEx Web 7.X

Note 1899396 - Patch Level 0 for BI Java Installation - Detailed Information

Note 2002823 - User-Specific Broadcaster setting cancels with SSO error.

Note 2065418 - Errors in SupportDeskTool

More Solutions can be found here - Changes after Upgrade to SAP NetWeaver BW 7.3x


Finally ...

On this journey of finding the real connection, I also found some helpful KBAs which I added to the existing SCN document - Changes after Upgrade to SAP NetWeaver BW 7.3x - in the 7.3x JAVA section. The good news here: in the end the documents are very nice for background knowledge, but hardly needed at all if you stick to the automated configuration.

So, of course the BI-JAVA - EP configuration is a hell of a beast, but you can tame it ... ;-)

I hope this brings a bit of light into the successful BI-JAVA configuration

Best Regards

Roland Kramer, PM BW/In-Memory

"the happy ones are always curious"

Business data is often viewed as the critical resource of the 21st century. The more current the business data is, the more valuable it is considered. However, historic data is not utterly worthless either. To offer the best possible - meaning the most performant, consistent and correct - access to data given a fixed budget, we need to know: who consumes which slice of our business data at what point in time? This blog is about how to find valid answers to this question from the perspective of a BW administrator.

Access to the data is granted via SAP BW's analytic engine: SAP BW users access the data via a BEx Query, and the analytic engine in turn requests the data from the persistency services. BW (on HANA) offers a multi-temperature data lifecycle concept: data stored in-memory in columnar format, the non-active data concept, HANA Extended Storage (aka Dynamic Tiering), the Nearline Storage options, archiving and, of course, deletion of the data.

Now given our fixed budget, how should we find out how to distribute the data across the different storage layers?

SAP BW on HANA SP 8 comes equipped with the "Selection Statistics", a tool designed to track data access and assist in finding a proper data distribution. With the selection statistics you can record all data access requests of the analytic engine on your business data. The selection statistics can be enabled per InfoProvider. If enabled, then for each data access request the minimal and maximal selection date on the time dimension, the name of the InfoProvider, the name of the accessing user and the access time are stored.

One of the major use cases for the "Selection Statistics" is the "Data Aging" functionality in the Administrator Workbench (Administrative Tools -> Housekeeping -> Data Aging), which proposes time slices for shifting data to the Nearline Storage. Technically, the "Data Aging" tool assists in creating:

  • Data Archiving Processes
  • Parametrization (variants) of Data Archiving Processes, containing the proposed time slice
  • Process Chains that schedule the Data Archiving Processes

The recording of selection statistics is currently limited to time slices only. This limitation was introduced to

a)      keep the amount of recorded data under control;

b)      minimize the impact on the query runtime due to the calculation of the data slices;

c)      emphasize time filters, which are usually provided in all queries and are the most important criteria when it comes to data retention and lifecycle considerations.

If you agree with this - fine; otherwise feel free to post a comment and share your view.


Here are some screenshots that demonstrate the use of the tools:

1.)    Customizing the selection statistics (transaction SPRO)



2.)    Analyzing the selection statistics




3.)    Using selection statistics for Data Aging


Scenario :


In our project we are using a statistical method to calculate the number of products left at a customer location, considering their past (cumulative) sales and natural retirement over time. To predict the retirement over time we use a statistical density function. To get the current product base we subtract the predicted retirement from total sales over time.


Now, as this prediction will not give 100% correct values (in fact it never will), business wants to update the "Current Product Base" in case that information is available via field intelligence, i.e. from the sales representative.




For example, in row 1 our model predicts the "Current Product Base" for customer C1 as of April 2015 for product P1 as 50. However, my sales representative knows it is exactly 60, so he/she updated this value to 60 manually. We used the Integrated Planning functionality in BW to achieve that. Now we want to capture who changed the values and when the changes were made.
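To make the calculation concrete, here is a small illustrative ABAP sketch (not the project's actual model): each sales vintage is weighted by a hypothetical survival factor derived from the density function, and the weighted sum is the predicted current product base.

```abap
REPORT z_product_base_sketch.

* Toy survival curve and sales figures - purely illustrative.
TYPES: BEGIN OF ty_sales,
         age TYPE i,              " years since the units were sold
         qty TYPE i,              " units sold in that vintage
       END OF ty_sales,
       ty_t_sales TYPE STANDARD TABLE OF ty_sales WITH EMPTY KEY,
       ty_surv    TYPE p LENGTH 5 DECIMALS 2,
       ty_t_surv  TYPE STANDARD TABLE OF ty_surv WITH EMPTY KEY.

* Discretized survival curve: fraction of units still in use after
* 0, 1, 2, 3 years (in reality derived from the density function).
DATA(lt_survival) = VALUE ty_t_surv( ( '1.00' ) ( '0.90' ) ( '0.70' ) ( '0.40' ) ).

DATA(lt_sales) = VALUE ty_t_sales( ( age = 0 qty = 20 )
                                   ( age = 1 qty = 30 )
                                   ( age = 2 qty = 50 ) ).

DATA: lv_base   TYPE p LENGTH 8 DECIMALS 2,
      lv_factor TYPE ty_surv.

LOOP AT lt_sales INTO DATA(ls_sales).
* Only the surviving fraction of each vintage counts today
  DATA(lv_idx) = ls_sales-age + 1.
  READ TABLE lt_survival INDEX lv_idx INTO lv_factor.
  IF sy-subrc <> 0.
    CLEAR lv_factor.             " older than the curve: fully retired
  ENDIF.
  lv_base = lv_base + ls_sales-qty * lv_factor.
ENDLOOP.

* 20*1.00 + 30*0.90 + 50*0.70 = 82 units in this toy example
WRITE: / 'Predicted current product base:', lv_base.
```

It is this predicted figure that the sales representative may then overwrite manually via the planning workbook.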


Step By Step Procedure :

1.  Create a Direct Update DSO to log the changes:

We logged the changes in a Direct Update DSO. So first we need to create some characteristics relevant for logging and then create the Direct Update DSO.

We used 0DATE, 0TIME, ZUSERNM (to hold the user information) and ZSAVEID to log the changes, and created a DSO with 0DATE, 0TIME, ZUSERNM and ZSAVEID as key fields, together with other characteristics relevant for the business.


        InfoObject Settings:


Now we will create a DSO and change the Type of DataStore Object to "Direct Update" in the settings. We use all our business keys and the above-mentioned 4 characteristics as the key of the DSO.



In the data fields of the DSO you can include all the key figures which are supposed to be manually updated. In our scenario it is the actual value of the product base.



2. Create Enhancement Spot Implementation to log the changes in DSO :

Now we shall implement an Enhancement Spot which will do the job of logging the manual updates. Every time a user updates a value in the real-time cube, the system will generate a Save ID and push it to our DSO along with user name, date and time.


Go to transaction SE18 and choose Enhancement Spot RSPLS_LOGGING_ON_SAVE. Choose the tab Enhancement Implementation and click on Implement Enhancement Spot (highlighted).


Enter the name of your implementing class and a description, then choose OK. Select a suitable package, fill the next screen with the BAdI name and class name, and choose the BAdI definition.





    Now we have to work on two things: 1) the implementation class and 2) the filter.


    Let us work with the implementation class first. The class has methods which do the actual work for us; we have to put our code into those methods.


    Double-click on the implementation class of the BAdI definition.


  This brings up the screen below, where you can see the methods of the implementation class. We have to put our code inside these methods. Please check the attachment for the code with comments; you need only minimal adjustments to adapt it to your scenario.



Here we need to define for which real-time cube logging is activated: compare the cube name against the i_infocube_name parameter. Additionally I added a check on my user name, so that for now only changes made by my user ID are logged. Later on we shall comment that second statement out.
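As an illustration, this first method could look like the sketch below. The method and parameter names (log_defined, i_infocube_name, e_logging_active) follow the screenshots and the interface of Enhancement Spot RSPLS_LOGGING_ON_SAVE as I remember it - verify them in SE18 for your release; the cube name and user are placeholders.

```abap
* Sketch: activate logging only for our real-time cube
* (cube name ZRT_PBASE and user MYUSER are illustrative).
METHOD if_rspls_logging_on_save~log_defined.
  IF i_infocube_name = 'ZRT_PBASE'.
*   Temporary restriction during testing: log my own changes only.
*   Comment this check out before go-live.
    IF sy-uname = 'MYUSER'.
      e_logging_active = abap_true.
    ENDIF.
  ENDIF.
ENDMETHOD.
```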





This method gives us the structure of the data which will be logged. In our case it provides the structure of the DSO where we store the log. Please check the appendix for the code adjustments, with all relevant comments for understanding.




This method actually writes the data to the Direct Update DSO in the structure defined in method 2.

Here we need to specify for which real-time cube we want to log the changes and where (in our case the Direct Update DSO). It could also be a DB table.



You can use this method to write the log to a database table if you are using HANA as the DB.



The same applies to this method: it writes the log to a database table if you are using HANA as the DB.

In our case we are tracking the changes in the DSO, so we did not use method 4 or 5. Still, we activated these two methods (d and e), because otherwise the BAdI activation was throwing an error.


**** Please check attached document for complete code

Once we have put all our code into the respective methods, we need to fill the filter for this BAdI implementation. Double-click on the filter area and enter your real-time cube name.




3. Login to Planning workbook and Update Values :

Now we need to log in to our planning workbook, manually adjust the number of the product base and then save it in the real-time cube.



Note that we have changed the Actual Product Base for the first 4 rows and saved them in the planning cube.


We will check our Direct Update DSO to see if our BAdI has logged all those changes and the user ID who changed them.




As we can see, it logged my user ID together with date, time and Save ID for the change I made. If you want to pass only the last change time and the changing user on to some other target, you can read just the latest record by sorting by time.
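A minimal sketch of that read, assuming the log rows were already selected into an internal table lt_log with fields date0 and time (the generated field names for 0DATE and 0TIME may differ in your system):

```abap
* Keep only the most recent log entry: newest date/time first.
SORT lt_log BY date0 DESCENDING time DESCENDING.
READ TABLE lt_log INDEX 1 INTO DATA(ls_latest).
IF sy-subrc = 0.
* ls_latest now holds the last change (user, date, time, Save ID)
* and can be passed on to the follow-on target.
ENDIF.
```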


Please find the complete code in the (Dropbox) link below; you just need to adjust the highlighted portions.

Dropbox - Class Methods.pdf



Debug tip: if you face any problems, set external breakpoints inside the methods one by one and debug.



For some more detail, please check How to... Log Changes in Plan Data when using the SAP BW Planning Applications Kit






Hi all,


Despite having authorizations for the InfoProvider, an error is produced on a DTP (XXX -> YYY) while executing a process chain. The message displayed is:






But the user does have authorization for InfoProvider XXX (and for YYY):




If we generate a trace in transaction ST01 we can see that the error (RC=4) is related to the authorization object S_BTCH_ADM:







To avoid this error the following authorization (authorization object S_BTCH_ADM) should be granted to the user:




This is because the DTP is running serial extraction; in this case the user needs authorization to manage background processing. If you do not want to grant this authorization to the user, you can check the "Parallel Extraction" mode instead and the authorization problem will be solved:




Best Regards,




Hello everyone, I recently completed a BEx upgrade project and found that the information below should be helpful for folks who work on similar projects.

Business Scenario


Consider a master data InfoObject that has a large number of attributes, and business wants to display only a selected number of attributes, or set the sequence in which the attributes appear in the F4 help in the report output.




In the example below I have considered Employee master data, to have Home Sub-LoS 5 to 4 appear in the F4 help window in sequence, instead of all the other attributes that exist. When I execute a report, the sequence maintained in the F4 help/filter is the same as on the Attribute tab.


Transaction RSD1 -> select the appropriate InfoObject -> Attribute tab






SAP Notes for Reference


1080863 - FAQ: Input helps in Netweaver BI




Abhishek Shanbhogue



Suppose you have the requirement of updating a DataSource in the source system, for example changing the extract structure or changing the extractor from a view to a function module. You do not have authorization for the RSA2 transaction and cannot wait for the SP release or upgrade the SP level.


In standard BI, you can only change the extractor fields of the DataSource in the RSA6 transaction.


In this case, you can write a Z report to achieve the desired results.


I am providing a sample report below which changes the extract structure of a DataSource without accessing the RSA2 transaction.









  DATA: lv_oltpsource TYPE roosource-oltpsource VALUE '0GT_HKPSTP_TEXT',
        lv_objvers_d  TYPE ROOBJVERS  VALUE 'D',
        lv_objvers_a  TYPE ROOBJVERS  VALUE 'A'.

  DATA: ls_roosource_old TYPE ROOSOURCE,

        ls_roosource_new TYPE ROOSOURCE.

  DATA: ls_roosfield_old TYPE ROOSFIELD,

        ls_roosfield_new TYPE ROOSFIELD.

  DATA: txt(24) TYPE C.
















*                                                                     *


*                                                                     *




  TEXT1 = 'Dear Customer.'.

  TEXT2 = 'You are just running a report, which will change '

        & 'structure of DataSource '.

  TEXT3 = '0GT_HKPSTP_TEXT on your database.'.

  TEXT4 = 'In case of doubt please contact SAP.'.



*                                                                     *


*                                                                     *





  IF test IS INITIAL.
     WRITE: / 'Mode...: Update-Run'.
     txt = '  <-successfully updated'.
  ELSE.
     WRITE: / 'Mode...: Test-Run'.
     txt = '                      '.
  ENDIF.









* 1.1 get current values for protocol
  SELECT SINGLE * FROM roosource INTO ls_roosource_old
    WHERE oltpsource = lv_oltpsource
    AND   objvers    = lv_objvers_d.
  IF sy-subrc IS INITIAL.
* ..1.2 build workarea for update
    ls_roosource_new = ls_roosource_old.
    ls_roosource_new-EXSTRUCT = 'WB2_TEXTSTR1'.
  ELSE.
    WRITE: / 'DataSource "0GT_HKPSTP_TEXT" not found in version "D".',
           / 'Nothing to do ... bye.'.
  ENDIF.






  SELECT SINGLE * FROM roosource INTO ls_roosource_old
    WHERE oltpsource = lv_oltpsource
    AND   objvers    = lv_objvers_a.
  IF sy-subrc IS INITIAL.
* ..1.2 build workarea for update
    ls_roosource_new = ls_roosource_old.
    ls_roosource_new-EXSTRUCT = 'WB2_TEXTSTR1'.
  ELSE.
    WRITE: / 'DataSource "0GT_HKPSTP_TEXT" not found in version "A".',
           / 'Nothing to do ... bye.'.
  ENDIF.





* Step 3: Update tables ROOSOURCE, ROOSFIELD


* ..3.1 Update ROOSOURCE
    UPDATE roosource FROM ls_roosource_new.
    IF sy-subrc IS INITIAL.
*     ..OK, table has been updated successfully
    ELSE.
      WRITE: / 'Error on update table ROOSOURCE.'.
    ENDIF.





    SELECT SINGLE * FROM roosfield INTO ls_roosfield_old

      WHERE oltpsource = lv_oltpsource

      AND   objvers    = lv_objvers_a

      AND   field      = 'SPRAS'.

    IF sy-subrc IS INITIAL.
* ..1.2 build workarea for update
      ls_roosfield_new = ls_roosfield_old.
      ls_roosfield_new-selection = 'X'.
      UPDATE roosfield FROM ls_roosfield_new.
    ELSE.
      WRITE: / 'DataSource "0GT_HKPSTP_TEXT" not found in version "A".',
             / 'Nothing to do ... bye.'.
    ENDIF.





    SELECT SINGLE * FROM roosfield INTO ls_roosfield_old

      WHERE oltpsource = lv_oltpsource

      AND   objvers    = lv_objvers_d

      AND   field      = 'SPRAS'.

    IF sy-subrc IS INITIAL.
* ..1.2 build workarea for update
      ls_roosfield_new = ls_roosfield_old.
      ls_roosfield_new-selection = 'X'.
      UPDATE roosfield FROM ls_roosfield_new.
    ELSE.
      WRITE: / 'DataSource "0GT_HKPSTP_TEXT" not found in version "D".',
             / 'Nothing to do ... bye.'.
    ENDIF.






* Step 4: Protocol

* 4.1 HEADER for ROOSOURCE protocol



  WRITE: / 'Table ROOSOURCE:'.



  WRITE: '1 Field to update: EXTRACTOR STRUCTURE'.








* 4.2 Protocol for ROOSOURCE





  WRITE: / ls_roosource_new-oltpsource,

    AT 50 ls_roosource_old-EXSTRUCT, AT 80 ls_roosource_new-EXSTRUCT.

*                                                                     *


*                                                                     *




* Set test flag on the initial screen

  test = 'X'.



* End of report  Z_0GT_HKPSTP_TEXT.

In SAP NetWeaver BW release 7.3 a new Analysis Authorizations BAdI was introduced: BAdI RSEC_VIRTUAL_AUTH_BADI as part of Enhancement Spot RSEC_VIRTUAL_AUTH. The authorized values or hierarchy nodes can be determined dynamically during query runtime. It does not require any Analysis Authorization objects and PFCG Roles. Virtual Authorizations can be used to enhance any existing "classic" authorization model. That is, you do not have to make an exclusive choice for one or the other; classic and virtual authorizations can be used simultaneously and complement each other.

I would like to share my implementation experience with virtual Profit Center and Cost Center authorizations. For an introduction please read my blog Virtual Analysis Authorizations - Part 1: Introduction. In this blog we will discuss the use case and chosen approach, the solution overview, the control tables and default hierarchies. All implementation details you can find in my document Implementing Virtual Analysis Authorizations.


As already mentioned in my previous blog, our use case was Profit Center and Cost Center authorizations. We had to deal with hierarchy authorizations as well as value authorizations. There existed multiple hierarchies which had to be authorized on many hierarchy nodes. We urgently needed a more dynamic and flexible approach.

We implemented Virtual Authorizations for Profit Center and Cost Center next to the classic model for all other Analysis Authorizations. We mitigated the "compliance issue" by introducing a Profit Center Basic and a Cost Center Basic authorization object with only : (aggregation) and # (unassigned) authorization. These objects are checked by the BAdI, and the Profit Center and Cost Center authorization is only processed if the respective "basic" object is assigned to the user - in our case via a role-based assignment. This way we enhanced the virtual model:


  • An additional access key is required to get authorized;
  • It will improve the traceability and auditability;
  • It will increase the compliance with security standards.

Solution Overview

Virtual authorizations can be realized by implementing BAdI RSEC_VIRTUAL_AUTH_BADI as part of Enhancement Spot RSEC_VIRTUAL_AUTH. The Analysis Authorizations are determined dynamically, i.e. during query runtime. Both value and hierarchy authorizations are supported.

Authorizations per user have to be maintained using two central control tables:


  • Value authorizations;
  • Hierarchy authorizations.


Both control tables can be maintained using their own table maintenance dialog. It is recommended to maintain the control tables in every system separately (i.e. no transports) to remain as flexible as possible. An initial mass upload could be facilitated by LSMW (Legacy System Migration Workbench).

These control tables only have to be maintained once for the respective basis Characteristic, i.e. Profit Center and Cost Center. The authorization for Display Attributes and Navigational Attributes is automatically derived and processed by the BAdI.
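To illustrate, the BAdI's read of the value control table could look like the sketch below. The field names mirror the RSECVAL-like layout described above, and the BAdI parameter names (i_iobjnm, c_t_val) are assumptions - check the method signature of RSEC_VIRTUAL_AUTH_BADI in your system before reusing this.

```abap
* Sketch: inside the BAdI method, read the value-authorization
* control table for the current user and the requested
* Characteristic, and append the rows to the changing table.
SELECT * FROM zbw_virtauth_val
  INTO TABLE @DATA(lt_val)
  WHERE uname  = @sy-uname
    AND iobjnm = @i_iobjnm.

LOOP AT lt_val INTO DATA(ls_val).
* Map the control-table row to the authorization interval
* (SIGN/OPT/LOW/HIGH as in RSECVAL).
  APPEND VALUE #( iobjnm = ls_val-iobjnm
                  sign   = ls_val-sign
                  opt    = ls_val-opt
                  low    = ls_val-low
                  high   = ls_val-high ) TO c_t_val.
ENDLOOP.
```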

Control Tables

The hierarchy authorizations are maintained in control table ZBW_VIRTAUTH_HIE that looks almost equal to table RSECHIE. Here we can enter a Profit Center or Cost Center hierarchy authorization for a particular user.



Figure 1: Control Table - Hierarchy Authorization


The value authorizations are maintained in control table ZBW_VIRTAUTH_VAL that looks almost equal to table RSECVAL. Here we can enter a Profit Center or Cost Center value authorization for a particular user.



Figure 2: Control Table - Value Authorization

Default Hierarchies

Another requirement was to be able to generate hierarchy authorization based on value authorization. The rationale behind it is that the majority of reports are based on "default hierarchies". Particular roles like Cost Center responsible did not get any hierarchy authorization and as a consequence were not able to run those reports. At the same time, we wanted to prevent double maintenance.

The solution was to define a third control table for Default Hierarchies: ZBW_VIRTAUTH_DEF. Here you can enter one or more default hierarchies for a Characteristic. The BAdI will then generate the hierarchy authorization for the default hierarchy restricted to the authorized values as leaves in the hierarchy.



Figure 3: Control Table - Default Hierarchy


In the example above we have defined the (standard) hierarchy 1000KP1000 as default hierarchy for Cost Center.


In this blog we discussed the use case and chosen approach, the solution overview, the control tables and default hierarchies. All implementation details you can find in my document Implementing Virtual Analysis Authorizations.

In SAP NetWeaver BW release 7.3 a new Analysis Authorizations BAdI was introduced: BAdI RSEC_VIRTUAL_AUTH_BADI as part of Enhancement Spot RSEC_VIRTUAL_AUTH. The authorized values or hierarchy nodes can be determined dynamically during query runtime. It does not require any Analysis Authorization objects and PFCG Roles. Virtual Authorizations can be used to enhance any existing "classic" authorization model. That is, you do not have to make an exclusive choice for one or the other; classic and virtual authorizations can be used simultaneously and complement each other.

I would like to share my implementation experience with virtual Profit Center and Cost Center authorizations. This introductory blog will discuss the rationale, a comparison between classic and virtual authorizations, and the different call scenarios for which the BAdI is processed. For the solution details please read my blog Virtual Analysis Authorizations - Part 2: Solution Details. All implementation details you can find in my document Implementing Virtual Analysis Authorizations.


The main problem with a classic authorization concept is that it is less flexible in situations with a big user population, many authorization objects/roles and frequent changes. E.g. organizational changes impacting large parts of the organization and ongoing roll-outs with big increments in the user population.

Classic use cases for a more flexible and dynamic approach are Profit Center and Cost Center authorizations. Often we have to deal with hierarchy authorizations as well as value authorizations. There might exist multiple hierarchies which have to be authorized on many hierarchy nodes. The number of required authorization objects and roles is likely to become high.

As a consequence, you can expect TCD (Total Cost of Development) as well as TCO (Total Cost of Ownership) becoming too high.

Classic versus Virtual Authorizations

Before diving into the Virtual Authorizations, let's try to compare the classic model with the virtual model.



Figure 1: Evaluation Matrix


The biggest drawback of the classic model pops up in efficiency with a big user population in combination with many authorization objects and roles. Here the virtual model shows its added value.

On the other hand, the virtual model is less transparent and clear compared to the classic model. Also in the area of compliance we do not have the out-of-the-box functionality compared to the classic model.

Different Call Scenarios

During query run-time the BAdI is called multiple times. This might be a bit confusing in the beginning when you start working with the BAdI. There are 3 call scenarios:


  • Call scenario 1: InfoProvider-independent or cross-InfoProvider authorizations;
  • Call scenario 2: InfoProvider-specific authorizations;
  • Call scenario 3: Documents protected with authorizations.


Call scenario 1: InfoProvider-independent or cross-InfoProvider authorizations

Scenario 1 can be called multiple times. Importing Parameter I_IOBJNM is not initial and Importing Parameter I_INFOPROV is initial. Importing Parameter I_T_ATR might be filled with authorization-relevant Attributes of the respective Characteristic, if any.

In this call scenario the following authorization is processed:


  • Authorization-relevant InfoObjects; e.g. I_IOBJNM = '0PROFIT_CTR';
  • Authorization-relevant Attributes; e.g. I_IOBJNM = '0WBS_ELEMT' and I_T_ATR with ATTRINM = '0PROFIT_CTR' *);
  • Authorization-relevant Navigational Attributes; e.g. I_IOBJNM = '0WBS_ELEMT__0PROFIT_CTR'.


*) Display Attributes need full authorization; see also SAP Note 1951019 - Navigation Attribute and Display Attribute for BW Analysis Authorization.


Call scenario 2: InfoProvider-specific authorizations

Scenario 2 will be called once only. Importing Parameter I_IOBJNM is initial and Importing Parameter I_INFOPROV is not initial. You can determine the authorization-relevant InfoObjects using Function Module RSEC_GET_AUTHREL_INFOOBJECTS.

In this call scenario the following authorization is processed:


  • Authorization-relevant InfoObjects; e.g. I_IOBJNM = '0PROFIT_CTR';
  • Authorization-relevant Navigational Attributes; e.g. I_IOBJNM = '0WBS_ELEMT__0PROFIT_CTR'.


Call scenario 3: Documents protected with authorizations

I did not experiment with scenario 3 yet. It can be called in the context of documents which are protected with authorizations. In this case, both Importing Parameter I_IOBJNM and Importing Parameter I_INFOPROV are initial.


In this introductory blog we discussed the rationale of virtual authorizations, a comparison between classic and virtual authorizations, and the different call scenarios for which the BAdI is processed. In my blog Virtual Analysis Authorizations - Part 2: Solution Details we will discuss the solution details. All implementation details you can find in my document Implementing Virtual Analysis Authorizations.



Hello everyone, I have been working on multiple BW landscapes and operations support for quite some time. From my experience, batch processing has high visibility among (business) leadership, and it is always challenging to refine the existing batch processes and bring down the overall runtimes as part of continuous process improvement.


I have been fortunate to successfully optimize batch processing in multiple instances, and in this blog I intend to share a handful of easy tips to optimize batch processing.

1. DTP - Data Transfer Processes


I have often seen that when people create DTPs they never consider the optimization aspects and how these will impact runtimes. There are simple techniques you can use to reduce the runtimes for data loading: with a combination of parallel processing and data packet optimization there can be a dramatic reduction in runtimes.


Increase Parallel Processing: There is a provision in the DTP to increase the number of parallel processes; if you have available work processes then feel free to increase this number and change the job priority. By default this is set to 3 and the job class is set to "C".



Another way of parallel processing is to split the data from the source into smaller chunks (in case of a full load from source to target) and run the loads in parallel with filters applied.


Example: If you have to load Business Partner master data from a CRM/SRM system, you can split it into chunks depending on the value range for Source/Territory/Type and run the DTPs in parallel.


Data Packet Size: In DTPs you can vary the data packet size, which directly influences the loading runtime: the smaller the data packet, the shorter the processing time per packet, and vice versa. The default value is 50k records, but it can be changed in edit mode.


Note: At times, even after changing the data packet size, the number of records in a packet won't change; in such cases you will have to change the size of the source package.



2. Info Package

For full InfoPackages too, we can use parallel processing to split the data from the source into smaller chunks and run them in parallel with filters applied; there is also a provision to change the data packet size.



In the Scheduler there are other options ("Timeout Time" and "Treat Warnings") as well, which are not for runtime optimization but are helpful in case you encounter timeout errors or want warnings to be ignored.

3. DSO


DSO activation can be slow if the batch tables are large, as these are scanned during object activation. You can ask the BASIS team to clean such tables with report RSBTCDEL2 or transaction SM65.


The BASIS/SQL team should also consider updating the statistics for the DSO and reorganizing/defragmenting the tables if required. This can also be a routine activity based on your requirements and needs.


There is a provision to create secondary indexes for DSO tables, either by the SQL DBA team or in the BW system via transaction SE11, to optimize the runtimes.


If you are not reporting on the DSO, the activation of SIDs is not required (it takes up considerable time during activation). Often the logs show that the activation job spends almost all of its time scheduling RSBATCH_EXECUTE_PROZESS as job BIBCTL_*; RSBATCH_EXECUTE_PROZESS schedules and executes the SID-generation process. If you don't need the relevant DSO for reporting and you don't have queries on it, you can remove the reporting flag in the DSO maintenance - a good way to speed this process up significantly. Check under 'Settings' in the DSO maintenance whether you have flagged the option "SID Generation upon Activation".

Helpful SAP Notes & Documents

SAP Note 1392715: DSO req. activation: collective perf. Problem note

SAP Note 1118205: RSODSO_SETTINGS Maintain runtime parameter of DSO

SDN Document: http://scn.sap.com/docs/DOC-45290



Abhishek Shanbhogue


In this blog I want to share a few tips related to BW transports that might help in collecting & validating objects efficiently.




Though it is a personal choice, some settings are recommended while others are chosen based on personal comfort level.

The following are the settings I prefer for a clearer one-shot view:




It is the most obvious setting, but recommending it implies that we should collect different objects by type only. That is, instead of dragging in InfoProviders with "Data Flow Before" to collect a transformation, we should preferably go and collect the specific transformation from the corresponding object type. For example:





The following setting can be chosen once the required objects are dragged in for collection:


Using this setting in conjunction with the "Necessary Objects" setting makes it very clear what is to be selected and what is not, even for BEx Queries or transformations. For example:


Here we can right-click on the required object type & click "Transport All Below". Similarly, the following is a sample for BEx Query collection:




Grouping of BW Objects in Transports

I think there is no fixed rule for this, but the objective is a complete transport without errors & an import within a reasonable amount of time.

Following can be two strategies for grouping:

1) If we are sending our development for the first time & we have a large number of data models & reports, then this strategy is recommended:

Separate Transport Requests based on following groups-

a) Infoobjects, Infoobjects Catalogs & Infoareas

b) Infoproviders

c) Datasources & Infopackages

d) Master Data Transformations & DTP

e) Transaction Data [First Level] Transformations & DTPs - transformations which are between datasource & infoprovider

f) Transaction Data [Second Level & Upwards] Transformations & DTPs - transformations which are between infoprovider & infoprovider

g) Process Chains

h) BEx Queries

i) Customer Exit Codes

If the number of objects in any group is very high, that group can be divided into parts, because importing a transport with too many objects can sometimes become a nightmare.

This is a very generic sequence, but the important thing is to take care of dependencies, i.e. dependent objects should go in a second step once the main objects are moved.

While releasing transports, the system itself checks dependent objects and gives warnings or errors accordingly.


A possible question in this section can be: "Why did we collect 2 different transports for different levels of transaction data transformations?"

This is required only if we have multiple clients of ECC QA for testing but a single client of BW. In this case BW will have two source systems connected; hence we will need to transport all TRs with a) first-level transformations (between datasource & infoprovider) and b) datasources two times, each time with the correct destination client in "Conversion of Logical System Names":




2) This strategy can be used when we are making ad-hoc transports. For example, if we want to transport only one simple data model & one query, then all objects can be transported together in the same transport request. This approach is not recommended when transporting a complex data model where the total number of objects to be moved is very high.



Some Tips for Quicker Collection

This tip is mainly for collecting a large number of transformations. Suppose we have a list of 36 [random number] master data models to be collected (36 InfoObjects - some ATTR, some TEXT & some with all three: HIER, ATTR, TEXT), and we decide to collect them in 3 separate transport requests of 12 InfoObject data flows each, to avoid long import times. [We need to decide whether to move all 36 together in one TR or break them across multiple TRs, based on the complexity of the objects & the total number of objects in one transport request.] For the sake of simplicity, suppose all master data transformations are between a DataSource & an InfoObject.

In Excel we can use the CONCATENATE formula to generate a list of the following pattern:



This list can be used as shown in the screenshot below:


This trick may seem overkill for collecting 2 or 3 transformations, but when collecting a large number (15-20 or more) of transformations from an even larger group (100 or more), it comes in handy.

This trick will reduce the number of objects shown in the "Select Objects" pop-up to only the most relevant ones. Now the required transformations can be quickly selected and the selection transferred:




Note that by changing the CONCATENATE formula a little, we can achieve the following results as well:

a) Restricting the result set to a specific source system - "RSDS*<SOURCE SYSTEM NAME>*<INFO...*"

b) Restricting to only those transformations which are between BW objects - "TRCS*<INFOOBJECT>*"

This technique of applying filters based on wildcard characters can be used for collecting almost all object types (except BEx Queries).
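As an aside, the CONCATENATE trick above can be sketched in a few lines of Python instead of Excel. This is only an illustration; the InfoObject and source system names below are hypothetical examples, not taken from any real system:

```python
# Build wildcard filter strings for the "Select Objects" pop-up from a
# list of InfoObjects - the Python equivalent of the Excel CONCATENATE
# formula described above. All names here are hypothetical examples.
infoobjects = ["ZCUSTOMER", "ZMATERIAL", "ZVENDOR"]
source_system = "ECDCLNT100"  # hypothetical source system name

# a) restrict to transformations from DataSources of one source system
rsds_patterns = [f"RSDS*{source_system}*{io}*" for io in infoobjects]

# b) restrict to transformations between BW objects
trcs_patterns = [f"TRCS*{io}*" for io in infoobjects]

for pattern in rsds_patterns + trcs_patterns:
    print(pattern)
```

Paste the generated strings into the multiple-selection dialog of the object-type filter, just as with the Excel-generated list.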

If we maintain a very robust development tracker, we can directly take the UIDs of transformations (or DTPs) for filtering [listed in the select options of the object type, as shown in the two screenshots above] using table RSTRAN (or RSBKDTP), making a selection on the source & target object (or on other selection fields based on readily available information, such as source type or target type).
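The RSTRAN lookup can be sketched as follows, assuming the table has been exported (e.g. an SE16 download) into rows of field/value pairs. The field names (TRANID, SOURCENAME, TARGETNAME) follow the usual RSTRAN layout, and the object names are hypothetical - verify both against your own system before relying on this:

```python
# Sketch: pick transformation UIDs from an exported RSTRAN table by
# filtering on source & target object. Rows and names are hypothetical.
rstran_rows = [
    {"TRANID": "A1B2C3D4E5F6G7H8I9J0K1L2M", "SOURCENAME": "ZSALES_DS", "TARGETNAME": "ZSALES_C01"},
    {"TRANID": "N3O4P5Q6R7S8T9U0V1W2X3Y4Z", "SOURCENAME": "ZCOST_DS",  "TARGETNAME": "ZCOST_C01"},
]

def transformation_uids(rows, source, target):
    """Return UIDs of transformations between the given source & target."""
    return [row["TRANID"] for row in rows
            if row["SOURCENAME"] == source and row["TARGETNAME"] == target]

print(transformation_uids(rstran_rows, "ZSALES_DS", "ZSALES_C01"))
```

The resulting UID list can then be pasted straight into the select options of the object type, as shown in the screenshots above.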

Another point worth noting: if we are collecting DTPs & transformations in the same transport request, we can use the same wildcard technique for the DTPs as well. We just need to drag in all the required DTPs, and in one shot ["Necessary Objects"] we can collect both the DTPs & the corresponding transformations (plus dependent routines & formulas).


This trick is easier to apply if we maintain a development tracker listing developed/changed objects by object type - InfoObject, InfoProvider, Transformation, BEx, Chain, etc.



Validation of Transports

When multiple people are working in a team, it is a good idea to validate transports before releasing them.

The following two tables are a starting point:

1) E070 - this table gives the list of subtasks in a transport request

2) E071 - this table gives all objects captured in the subtasks, by object type


We can use the following link to check the system tables for different object types:



This link might not list all the system tables, but by making use of the wildcard character "*" we can find many more.

Some tables for reference:

1) RSZCOMPDIR for verifying BEx Query technical names

2) RSZELTDIR for checking different Query elements

3) RSTRAN for verifying Transformations

4) RSTRANROUTMAP & RSTRANSTEPROUT can help in identifying the routines of a transformation, based on table RSTRAN

5) RSBKDTP for verifying DTPs


Using the tables listed above, we can make quick & basic validations (using Excel & VLOOKUP), for example:

a) whether all relevant routines are captured for the transformations collected in a transport request

b) whether the different objects (by object type) listed in the development tracker are captured in the transport request
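The VLOOKUP-style check in b) boils down to a set comparison, which can be sketched in a few lines of Python. The object names below are hypothetical examples; in practice the two sets would come from the development tracker and an E071 export for the transport request:

```python
# Compare the development tracker against the objects captured in the
# transport request (exported from table E071). Names are hypothetical.
tracker_objects = {"ZCUSTOMER", "ZMATERIAL", "ZSALES_C01"}
e071_objects = {"ZCUSTOMER", "ZSALES_C01"}  # from an E071 export

missing = tracker_objects - e071_objects  # in the tracker but not in the TR
extra = e071_objects - tracker_objects    # in the TR but not in the tracker

print("Missing from transport:", sorted(missing))
print("Not in tracker:", sorted(extra))
```

Anything reported as missing should be collected into the transport (or consciously excluded) before the request is released.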



References for transport related blogs

How to manage Error Free Transports



Note to SCN Members: Please feel free to add more references related to the topic.

