
SAP Business Warehouse


SAP BW 7.4 - Analysis & Issues:-

In SAP BW Release 7.4 SPS2, the domain RSCHAVL was converted from CHAR 60 to SSTRING 1333. In the 7.4 versions, the characteristic 0TCTIOBJVAL references the data element RSCHAVL_MAXLEN and has the type CHAR 250. Since the maximum permitted length of characteristic values is 250 characters, the values of all characteristics can be stored in 0TCTIOBJVAL.

Compared with previous versions, where the characteristic 0TCTIOBJVAL directly referenced the data element RSCHAVL, which uses the domain RSCHAVL, the characteristic 0TCTIOBJVAL and other objects that reference it have to be changed in the 7.4 versions.

Data Element in SAP BW 7.4 Version:-




Data Element in SAP BW previous Version:-



The characteristics 0TCTLOW and 0TCTHIGH used to reference the characteristic 0TCTIOBJVAL. Since both characteristics are frequently used together in the key of a DSO, they can no longer reference 0TCTIOBJVAL: together they would be 500 characters long, while the total key length of an ABAP Dictionary table in an SAP system must be shorter than that. Therefore, a new characteristic 0TCTIVAL60 of type CHAR 60 was introduced, and the characteristics 0TCTLOW and 0TCTHIGH now both reference 0TCTIVAL60.


They previously had the type CHAR 60 and still have the same type after the upgrade. As a result, all objects that use these two characteristics together are still executable. However, they work only for applications whose characteristic values are no longer than 60 characters.

During the SUM tool run, the XPRA program RSD_XPRA_REPAIR_0TCTIOBJVL_740 copies the contents of the SID table of 0TCTIOBJVAL (/BI0/STCTIOBJVAL) to the SID table of 0TCTIVAL60 (/BI0/STCTIVAL60).


The characteristics 0TCTLOW_ML and 0TCTHIGH_ML are created and they reference 0TCTIOBJVAL.




In previous versions, we used the DSO 0PERS_VAR, which contains the characteristics 0TCTLOW and 0TCTHIGH in its key part. Since this DSO could not store characteristic values longer than 60 characters, BW 7.4 came up with a new DSO 0PERS_VR1. The data part of this DSO contains the two characteristics 0TCTLOW and 0TCTHIGH.

The program RSD_XPRA_REPAIR_0TCTIOBJVL_740 activates the new DSO and copies the contents of the previous DSO 0PERS_VAR to the new DSO 0PERS_VR1. Then the personalization works as usual.

The programs that run during the upgrade activate new objects and, if necessary, copy data from old objects to new objects. However, they do not delete obsolete objects. The DSO 0PERS_VAR for storing the personalized variable values is no longer used.


The database tables RSECVAL and RSECHIE that store analytical authorization objects are no longer required.

Database table RSECVAL


Database table RSECHIE


The program RSD_XPRA_REPAIR_RSCHAVL_740 copies the contents of these tables to the new tables RSECVAL_STRING and RSECHIE_STRING respectively. If it has been executed successfully, we can delete the contents of the tables RSECVAL and RSECHIE.


The tables RSRNEWSIDS and RSRHINTAB_OLAP are also no longer required. The program RSD_XPRA_REPAIR_RSCHAVL_740 also copies the contents of these tables to the new tables RSRNEWSIDS_740 and RSRHINTAB_OLAP_S respectively. Once that has been executed successfully, we can delete the contents of these tables as well.









ABAP Program:-

In SAP BW 7.4, the domain RSCHAVL was changed from CHAR 60 to SSTRING 1333. As a result, data elements that use the domain RSCHAVL are "deep" types in the ABAP context. Therefore, some ABAP language constructs are no longer possible; they now produce syntax errors or runtime errors in customer-specific programs.


Texts with a length of up to 1,333 characters are possible for characteristic values. For this, the structure RSTXTSMXL was created, which is a "deep" type in the ABAP context. In the internal method interfaces and function module interfaces that handle the texts of characteristic values, the type RSTXTSML was replaced with RSTXTSMXL. However, the RSTXTSML structure itself remains unchanged and is still required for the description of metadata.





The change should have little effect on our own programs. We must expect problems wherever we operate on characteristic values that are typed generically (for example, in variable exits) or wherever we call SAP-internal functions or methods whose interfaces were changed by SAP.


Most of the problems are syntax errors that result in a program termination. We can use the Code Inspector tool to systematically detect and resolve them, running it as a check both before and after the upgrade. It shows us the things that need to be changed for the 7.4 versions.




The include ZXRSRU01 and the function module EXIT_SAPLRRS0_001 are not analyzed by the Code Inspector. This include must be fixed by an ABAPer in SE38. The point to remember is that we should use the keyword TYPE instead of LIKE: after the enhancement, RRRANGEEXIT is a complex (deep) structure, so LIKE declarations against it no longer compile.
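As an illustration (the variable name is hypothetical; only RRRANGEEXIT is from the standard exit), the typical correction in ZXRSRU01 looks like this:

```abap
* Before BW 7.4 this obsolete declaration compiled, because
* RRRANGEEXIT was still a flat structure:
*   DATA: l_s_range LIKE rrrangeexit.

* After the enhancement RRRANGEEXIT is a deep structure, so LIKE
* against the DDIC type causes a syntax error. Use TYPE instead:
DATA: l_s_range TYPE rrrangeexit.
```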




The pre-upgrade structure is shown below.



Post upgrade:-


The same applies to function modules:-


These are the ABAP code changes that should be made by an ABAPer after the BW 7.4 upgrade.




I'd like to share some knowledge I recently stumbled upon while creating a transformation and having to debug it, since it did not work as expected.

Since I did not see any hints on this on SCN or in the help, I thought I'd share it:


When I use custom routines in transformations, I like to include proper exception handling and monitor entries, since you can never be sure what kind of data the users will input, and you don't want to waste too much time searching for the document that actually caused the error.

The help specifies the exceptions that can be used like this:


Exception handling by means of exception classes is used to control what is written to the target:

  •   CX_RSROUT_SKIP_RECORD: If a RAISE EXCEPTION TYPE cx_rsrout_skip_record is triggered in the routine, the system stops processing the current row and continues with the next data record.
  •   CX_RSROUT_SKIP_VAL: If an exception of type cx_rsrout_skip_val is triggered in the routine, the target field is deleted.
  •   CX_RSROUT_ABORT: If a RAISE EXCEPTION TYPE cx_rsrout_abort is triggered in the routine, the system terminates the entire load process. The request is highlighted in the extraction monitor as Terminated. The system stops processing the current data package. This can be useful with serious errors.


(See Routine Parameters for Key Figures or Characteristics - Modeling - SAP Library)

What the help does not say is that there is one obstacle if you try to use it not with a key figure, but with a characteristic:


The thought behind it seems to be that if it is not a valid record (i.e. you raise the exception in a characteristic routine), the whole record is skipped.


So instead of the "clear" that happens when you raise the exception with a key figure (example):


you get a RAISE EXCEPTION TYPE cx_rsrout_skip_record:


If you search for it, you can find it easily in the generated program though.
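As a minimal sketch of what this looks like inside a characteristic routine (the message class ZBW, message number and source field name are hypothetical; MONITOR is the table provided by the generated routine frame):

```abap
* Write a monitor entry first, then skip the whole record - with a
* characteristic routine, cx_rsrout_skip_record is what gets raised.
DATA: monitor_rec LIKE LINE OF monitor.

IF source_fields-doc_number IS INITIAL.
  monitor_rec-msgid = 'ZBW'.    " hypothetical message class
  monitor_rec-msgty = 'W'.
  monitor_rec-msgno = '001'.
  monitor_rec-msgv1 = source_fields-doc_number.
  APPEND monitor_rec TO monitor.
  RAISE EXCEPTION TYPE cx_rsrout_skip_record.
ENDIF.
```

This way the skipped document is at least visible in the load monitor instead of silently disappearing.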

Hope it helps with the proper usage of the routines and the exceptions.

Hi Everyone,


This blog gives you a brief idea about the 'General Services' option under Process Chains and a few useful links/docs related to it (esp. for beginners).



It includes various options like


  • Start Process
  • Interrupt Process
  • AND(Last)
  • OR(Each)
  • EXOR(First)
  • ABAP Program
  • OS Command
  • Local Process Chain
  • Remote Process Chain
  • Workflow (Remote also)
  • Decision Between Multiple Alternatives
  • Is the Previous run in the chain still Active?
  • Start job in SAP Business Objects Data Services



Start Process :


As we all know, this process type lets us start the Process Chain.

  • Every Process Chain starts with this process type.
  • It is mandatory in all Process Chains.
  • It can be scheduled according to the client's needs.
  • It triggers the Process Chain.



Interrupt Process :



  • This process type interrupts the currently running Process Chain and checks the condition specified in it.

Once you drag this process type onto the work area, you have to create a new variant for it. When you are creating a variant, you will see the screen below.



  • If it's 'Immediate', the next process connected to the 'Interrupt Process' is carried out instantly.
  • If it's 'Date & Time', it waits until the specified date/time is reached, and then the Process Chain resumes.
  • If it's 'Event/Job', the chain waits for the particular job/event. After it has completed, the Process Chain continues to run.



Useful Link:











AND - When we use the AND operator, it checks whether all the preceding processes completed successfully. Only if ALL of them are successful does it proceed to the following process connected to the AND process type.

If both '1' and '2' in the above snapshot are completed, then the 'Program' connected to the AND process will be executed.



OR - When we use the OR operator, it checks whether any of the preceding processes completed successfully. If EITHER of them is successful, it proceeds to the following process connected to the OR process type.

If either '1' OR '2' in the above snapshot completed successfully, then the 'Program' connected to the OR process will be executed.



EXOR - When we use the EXOR operator, it checks whether exactly one of the preceding processes completed successfully. If EITHER of them is successful, it proceeds to the following process, whereas if both of them succeeded or both failed, EXOR does not proceed further.

Only if either '1' OR '2' in the above snapshot completed successfully will the 'Program' connected to the EXOR process be executed.

If both '1' and '2' in the above snapshot completed successfully, the 'Program' connected to the EXOR process will not be executed.

If both '1' and '2' in the above snapshot failed, the 'Program' connected to the EXOR process will not be executed.
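The three collectors can be summarized in a small sketch, with ok1 and ok2 standing for the success flags of the two predecessor processes (an assumed simplification of the actual scheduler logic, not how BW implements it internally):

```abap
DATA: ok1 TYPE abap_bool,
      ok2 TYPE abap_bool.

* AND: continue only when both predecessors succeeded
DATA(proceed_and)  = xsdbool( ok1 = abap_true AND ok2 = abap_true ).

* OR: continue when at least one predecessor succeeded
DATA(proceed_or)   = xsdbool( ok1 = abap_true OR  ok2 = abap_true ).

* EXOR: continue only when exactly one predecessor succeeded
DATA(proceed_exor) = xsdbool( ok1 <> ok2 ).
```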



ABAP Program :


  • This process type lets us include both standard and custom ABAP programs from various destinations, such as the local workstation or other source systems connected to our BW system through RFC connections.
  • You can also trigger the ABAP program through events that you have created for it.
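For the event-based variant, an ABAP program can raise a background event that a process chain reacts to; a minimal sketch (the event name Z_BW_TRIGGER is hypothetical and must exist in SM62):

```abap
* Raise a background event; a process chain whose start condition
* (or interrupt process) waits on this event will then continue.
CALL FUNCTION 'BP_EVENT_RAISE'
  EXPORTING
    eventid                = 'Z_BW_TRIGGER'
  EXCEPTIONS
    bad_eventid            = 1
    eventid_does_not_exist = 2
    eventid_missing        = 3
    raise_failed           = 4
    OTHERS                 = 5.
IF sy-subrc <> 0.
  MESSAGE 'Event could not be raised' TYPE 'E'.
ENDIF.
```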


Useful Links:







OS Command:


  • If we want to execute certain scripts or OS instructions, this process type helps us do so.
  • The external commands are maintained in SM69 and can be tested and executed in SM49.


Useful Link :





Local Process Chain :


  • This lets us add a local Process Chain that is present in our current BW system.
  • When the control moves to this process, the referenced process chain (the local Process Chain) is executed in the background; if it is successful, control returns to the current process chain.


When you drag the 'Local Process chain', you will get the following screen.



  • Click on the prompt window, which shows all the Process Chains in your BW system.
  • Select the one you want to add to the new process chain.
  • Once you have selected your process chain, you will see a screen similar to the one below.




  • Then you can connect it to other process types in the Process Chain, as in the snapshot below.




Remote Process Chain :


  • This is similar to the Local Process Chain, except that the referenced process chain is in a different system, connected to our current BW system through RFC connectivity.



Useful Link:






Decision Between Multiple Alternatives :


  • This process can be considered the decision maker of the Process Chain. While creating this process, we can write our 'If..else' condition in it.
  • The subsequent process is then carried out according to the condition that is fulfilled.


You can refer to the useful documents below on 'Decision between Multiple Alternatives'.


Useful Links :








Is the Previous run in the chain still Active? :

  • It's a new process type added in BW 7.0.
  • This process type checks whether the previous run of the Process Chain is still active or has finished. Only once the previous run has finished does the chain continue with the subsequent processes.


Sample Scenario :


Suppose the Process Chain runs every Friday of the week. When it starts on the second Friday of the current month, you need to check whether the previous Friday's run has completed. Only then should the second Friday's run proceed. To implement this logic, we use this process type.


There are many scenarios in which it can be used effectively.



Useful Link :





Start job in SAP Business Objects Data Services :


  • This process helps us run BODS jobs from the BW system.
  • BODS (BusinessObjects Data Services) is an SAP product that can extract data from both SAP and non-SAP data sources.
  • The jobs extract data from various data sources through Data Services and load it into BW.



Useful Link  :







I hope that this blog is helpful for you.


Thanks a lot for reading this Blog



Gokulkumar RD

The purpose of this document is to help people who have the BCS component installed in their landscape along with BW during a BW upgrade. As I had to do thorough research before I came across the solution, I thought of sharing it in a blog so that, in the future, anyone who faces a similar problem can refer to it. So, let's look at the problem that my team faced during the upgrade and what we did to resolve it.

Recently, we did an upgrade of our BW environment from 7.0 to 7.4. During our BW post-upgrade activities, we found an error in one of our BEx queries, as shown below:



This BEx query was built on top of a BCS cube. BCS is an SEM component based on BW. Strategic Enterprise Management (SEM) is an SAP product that provides integrated software with comprehensive functionality, allowing a company to significantly streamline the entire strategic management process. The BCS component is the part of SEM that provides complete functionality for legally required and management consolidation by company.

Upon choosing 'Individual display' from the above error message, we got the screen below. This is the Data Model Synchronizer screen, which highlights the differences in the data models between the BCS and BW applications. Here, in the field MSEHI, we can see the difference between the BCS and BW landscapes, which is the root cause of this issue.


To resolve this issue, we first tried to follow the details in the message, as shown in the window below. But with that, every object (cube, DSO, etc.) built on top of BCS got regenerated in BW, and as a result, all the existing modifications made to these objects by the BW team were lost.


So, in order to avoid that, we used the program UGMD_BATCH_SYNC. This program synchronizes the BW and BCS applications without regenerating anything. The details of this program can be found at the link below:

Manual Data Synchronization - Business Consolidation (SEM-BCS) - SAP Library.

When executing this program, we need to specify the following:

  • Application
  • Application Area
  • Field name


The application and application area can be found as highlighted below:


We executed this program with the selections shown below, and both applications got synchronized without the regeneration of any of the BW objects.

Sooo, have you thought about buying HANA? Ha-ha, just kidding! No, folks, this is not another sales pitch for HANA or some expensive “solution”, but a simple customer experience story about how we at Undisclosed Company were able to break free from the BI (*) system with no external cost while keeping our users report-happy. Mind you, this adventure is certainly not for every SAP customer (more on that below), so YMMV. The blame, err… credit for this blog goes to Julien Delvat, who carelessly suggested that there might be some interest in the SAP community for this kind of information.


It might be time to part with your BI system if…


… every time you innocently suggest to the users “have you checked if this information is available in BI?” their eyes roll, faces turn red and/or they mumble incoherently what sounds like an ancient curse.

… you suspect very few users actually use the BI system.

… you have a huge pile of tickets demanding an explanation why report so-and-so in BI doesn’t match report so-and-so in SAP.

… your whole BI team quit.

… the bill for BI maintenance from your hosting provider is due and you can think of at least 10 better things to do with that money.


What went into our strategic decision


  • Tangible cost to run BI. Considering number of active users and value, we were not getting our money’s worth.
  • Relatively small database size. The Undisclosed Company is by no means a small mom-and-pop shop but due to the nature of our business we are fortunate to have not as many records as, say, a big retail company might have.
  • Reports already available in SAP. For example, it just so happened that a few months before the “BI talk” even started, our financial team had already made a plea for just one revenue report in SAP that they could actually rely on. Fortunately, we were able to give them all the information in (gasp!) one SQ01 query.
  • No emotional attachment to BI. As far as change management goes, we had the work cut out for us (see the eye rolling and curse-mumbling observation above). The users already hated BI and SAP team didn’t want anything to do with it either.


We're doing it!


Personally, I had suggested that we simply shut down BI and see who screams, but for some reason this wasn’t taken by the management with as much excitement as I was expecting.


Instead, we took a list of the users who had logged into BI in the past few months (it turned out to be a rather small group), and our heroic Service Delivery manager approached all of them to find out what reports they were actually using in BI and how they felt about it. Very soon we had an interesting matrix of the users and reporting requirements, which our SAP team began to analyze. Surprisingly, out of the vast BI universe the users actually cared about fewer than 15 reports.


For every item we identified a potential replacement option: an existing report in SAP (either custom or standard), a new query (little time to develop), or a new custom ABAP report (more time to develop). With this we were able to come up with a projected date for when we could have those replacements ready in SAP and therefore could begin the BI shutdown. It was an important step, because having a specific cut-off date puts fear into the users’ minds. Otherwise, if you come asking them for input or testing with no specific due date, we all know it’s going to drag on forever (there always seems to be an “end of month” somewhere!).


Drum roll, please


So what did 15 BI reports come down to in ECC? We actually ended up with just 2 custom ABAP reports and 2 new queries, everything else was covered by standard SAP reports and just a couple of existing custom reports. Interestingly, we discovered that there were sometimes 3 different versions of essentially the same report delivered as 3 different reports in BI. In those cases we combined all the data into one report/query and trained the users on how to use the ALV layouts.


The affected functional areas were Sales, Finances and QM (some manufacturing reports were and are provided by our external MES system). There was very little moaning and groaning from the user side - it was definitely the easiest migration project I’ve ever worked on. Breaking free from BI felt like a breeze of fresh air.


Are you thinking what I’m thinking?


If you’ve already had doubts in the BI value for your organization or this blog just got you thinking “hmm”, here are some of our “lessons learned” and just random related observations and suggestions. (Note – HANA would likely make many of these points obsolete but we have yet to get there.)

  • If you feel you don’t get adequate value from your BI system it is likely because you didn’t really need it in the first place.
  • If you are already experiencing performance issues in the “core” SAP system, you might want to hold on to your BI for a bit longer (unless it’s BI extraction that is causing the issues). Adding more reporting workload to the already strained system is not a good idea.
  • Find the right words. If we had just told our business folks that we were shutting down BI, all hell would have broken loose (“evil IT is taking away our reports!!!”). But when you start the conversation with “how would you like to get a better report directly from SAP?” it’s a different story. And don’t forget to mention that they will still be able to download reports into Excel. Everybody loves Excel!
  • Always think ahead about your reporting needs. I can’t stress this point enough. For example, in our scenario one of the reporting key figures is originally located in the sales order variant configuration. If you’ve never dealt with VC, let me tell you – good luck pulling this data into a custom report. (The same problem with the texts, by the way – makes me shiver when some “expert” suggests on SCN to store any important data there). So our key VC value was simply copied to a custom sales order (VBAP table) field in a user exit. Just a few lines of code, but now we can easily do any kinds of sales reports with it. It only took a couple of hours of effort but if you don’t do it in the beginning, down the line you’ll end up with tons of data that you cannot report on easily.
  • Know your SAP tables. Many times custom SAP reports get a bad rep because they are simply not using the best data sources. E.g. going after the accounting documents in BSEG is unnecessary when you can use index tables like BSAD/BSID and in SD you can cut down on the amount of data significantly if you use status tables (VBUK/VBUP) and index tables like VAKMA/VAKPA. I’m sure there are many examples like that in every module – search for them on SCN and ask around!
  • Queries (SQ01) are awesome! (And we have heaps of material on SCN for them - see below.) If you have not been using them much, I’d strongly encourage you to check out this functionality. You can do authorization checks in them and even some custom code. And building the query itself takes just a few button clicks with no nail-biting decisions whether to use procedural or OO development. SAP does everything for you – finally!
  • Logistics Info System (LIS) – not so much. Even though I wouldn’t completely discount it as an option for reporting (yet), it is usually plagued by the same problems as BI – inconsistent updates and the “why is this report different from that report” wild goose chase.
  • When it comes to reports – “think medium”. You’ve probably noticed that in our case number of reports reduced greatly in SAP compared to BI. Why was that? Turned out that we had many reports that essentially used the same data but were presenting it slightly differently. There is no need to break up the reports when display customization can be easily achieved by using the ALV layouts, for example. And on the other side of the spectrum are the “jumbo reports” that include data from 4 different modules because someone requested the report 10 years ago and thought it was good, so he/she told other users about it and other users liked it too BUT they needed to add “just these two fields” to make it perfect, then more and more users joined this “circle” and everyone kept asking for “just these two fields” but nothing was getting removed because the first guy left the company years ago and now no one even remembers what the original requirement was. So you end up with the ugly Leviathan of a report that has to be sent to the farm upstate eventually. Try to avoid those.
  • Be creative. If a “jumbo report” cannot be avoided (ugh!), you might want to consider creating a “micro data warehouse” in a custom table that can be populated in a background job daily (or more frequently, if needed). Such reports usually do not require up-to-the-second information, and we can get the best of both worlds – minimize the impact on performance by pre-processing the data and allow the users to run the reports on their own. Another tip – if a report is used by different groups of users and includes certain fields that are more time-consuming than others, you can add an option to the selection screen to exclude those fields when they’re not needed. Also, simply training the users on ALV functionality can be very helpful. For example, we noticed that some users ran a report for one customer, then went back to the selection screen and ran it for another. But running the report for two customers and then using an ALV filter would actually be more efficient.
  • Don’t let the big picture intimidate you. The big goal of taking down a large (as we thought!) productive system seemed pretty scary in the beginning, but, as you could see, we broke it down into pieces and just got it down one by one. And this was done by the team of just 5 people in 3.5 months while supporting two productive SAP systems and handling other small projects as well. If we did it, so can you!
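To illustrate the “know your SAP tables” point above, here is a hedged sketch of reading open customer items from the index table BSID instead of scanning BSEG (the company code and customer values are illustrative):

```abap
* BSID holds open customer items and is keyed by company code and
* customer, so this selection avoids scanning the much larger BSEG.
DATA: lt_open_items TYPE STANDARD TABLE OF bsid.

SELECT * FROM bsid
  INTO TABLE lt_open_items
  WHERE bukrs = '1000'          " company code - illustrative value
    AND kunnr = '0000100001'.   " customer    - illustrative value
```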


Useful links


Next Generation ABAP Runtime Analysis (SAT) – How to analyze performance - a great blog series on the SAT tool. My weapon of choice is still good old ST05, but in some cases it might not be enough.

There are many SCN posts regarding ABAP performance tuning, although quality of the posts varies greatly. This wiki page could be a good start, use Google to find more. Look for the newer/updated posts from the reputable SCN members. (Hint – I follow them! )

Some tips on ABAP query - this is Query 101 with step by step guide, great for the beginners.

10 Useful Tips for Infoset Queries - good collection of miscellaneous tips and tricks

Query Report Tips Part 2 - Mandatory Selection Field And Authorization Check - great tip on adding a simple authority check to a query.


(*) or BW? – I’m utterly confused at this point but had the picture already drawn so let’s just stick with BI

This blog clarifies how to resolve an issue with the calculation of a KPI in a high-level view versus a detailed view,

where the detailed view is based on a drill-down of two different InfoObjects and this combination is the unique key:-


For Example:

we have a division calculation for one KPI: (Unit charge / Consumption * 100).


High level view:


(table: Determination ID | Unit charge | Consumption | Unit charge/Consumption)


Detailed view :

If we drill down on Installation and date output is :

NOTE: Installation 6000359409 is repeated, and the date 03/31/2014 is also repeated for a different installation.


(table: Determination ID | To Date | Unit charge | Consumption | Unit charge/Consumption)

However, if we look at the sum of the last column (Unit charge/Consumption), the total does not match the high-level view.


If we use exception aggregation alone, it won't work. For example, if we aggregate on Installation, the output will be like below:-


(table: Determination ID | To Date | Unit charge | Consumption | Unit charge/Consumption)

Sum = 42.87079519


How to achieve this in BW:-

1. Concatenation:

To achieve this, we have to create a new InfoObject whose length equals the sum of the lengths of both InfoObjects, and fill it with the concatenated value:-

2. Exception aggregation:

Now apply exception aggregation on the Unit charge/Consumption calculation, based on this new InfoObject.
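The concatenation step can be sketched as a transformation field routine filling the new helper InfoObject (the source field names are hypothetical):

```abap
* Concatenate installation and date into the new helper InfoObject;
* the exception aggregation reference characteristic is then set to
* this object, making each (installation, date) pair a unique value.
CONCATENATE source_fields-installation
            source_fields-todate
       INTO result.
```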


Output will be displayed like below :-


High level view:

(table: Determination ID | Number and Date | Unit charge | Consumption | Unit charge/Consumption)


Detailed view :

(table: Determination ID | Unit charge | Consumption | Unit charge/Consumption)

Now sum is matching.

Dear All,


The objective of this post is to understand what happens when you transport a flat file transformation from the development server to the quality server, or from the quality server to production. I struggled many times: after moving the flat file transformation successfully, the transformation was still not visible in the quality or production server.


Whenever you create a transformation in the development server, two versions will be created in the RSTRAN table.
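You can verify this directly with a simple check (an illustrative snippet, not an official procedure; supply your own transformation ID):

```abap
* List the versions of a transformation in table RSTRAN.
* After a successful import you should see the 'A' (active) and
* 'M' (modified) versions, not just 'T'.
PARAMETERS p_tranid TYPE rstran-tranid.

SELECT tranid, objvers FROM rstran
  INTO TABLE @DATA(lt_versions)
  WHERE tranid = @p_tranid.
```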

ScreenHunter_04 Jun. 04 10.52.gif


After moving the flat file transformation to the quality server, you should have similar entries for the transformation, but sometimes you will get only the "T" version.


Trying all kinds of transport sequences (moving the data marts and DataSource first and then the transformation, or moving all the data marts, DataSources and transformations together) didn't help make the transformation visible in the quality server.


The reason behind the issue is not maintaining the logical system conversion for the flat file source system, as we do for the ECC system.

Go to RSA1 and select the Tools menu.


Select 'Conversion of Logical System Names' and maintain the source system for the flat file.



Next assign the source system ID to the flat file source system.



Finally, by re-importing the transport request, the transformations became visible in the quality system. Please check the above-mentioned steps when you are moving transports to other landscapes. Now you can see that the "A" & "M" versions are visible in the RSTRAN table in the quality server.


Hope this post is helpful.

Today I had to debug a data load between a 2LIS DataSource for billing documents and a DSO-based SPO. I executed the DTP in debug mode and wondered why it didn't stop in the debugger. It took me quite a while to find out the reason. I want to give you some tips on what you should pay attention to.


  1. Identify the records in the PSA and note down the billing document numbers.
  2. Identify the target part DSO into which the identified billing documents should have been written. In my case the SPO is partitioned by country, so I had to find out into which part DSO the records would be written. The first billing documents were for Great Britain, others for Poland and some others for Germany. For the erroneous billing documents, the Great Britain part DSO was the right one for me.
  3. Create a new DTP between the DataSource and the target part DSO (e.g. the DSO for Great Britain).
  4. Run a test load of this DTP to see how many data packages will be created and whether data gets updated into the identified part DSO.
  5. If you have more than 200 data packages from your test load, you have to assign the correct data package number in the DTP settings. In my case the erroneous data came in package number 314, and all other data packages were empty. I started the DTP in debug mode, but the DTP didn't stop. The reason was that, by default, the data package numbers in debug mode are restricted to 1 through 200. I deleted all the keys for the data package number and entered number 314 only. Then I started debugging again, and there was finally the debugger!


I hope this blog is helpful for you and prevents you from running into the same problems when debugging a DTP load into an SPO.

Multiple SAP BW Landscape Consolidation! Perhaps I heard of it for the first time, and it sounded like a really rare and 'not so common' scenario. The first question that came to my mind was why one would want to do it: bring multiple BW systems onto a single database, say, for example, SAP HANA. Well, the reasons are multifold:


  1. To simplify the landscape
  2. To enable easier maintenance
  3. Comparatively less investment on hardware and software
  4. Software installations/updates, patch updates etc., is just done once
  5. Take advantage of SAP HANA (if you are consolidating on SAP HANA)


There could be more, better reasons. Typically this is a consolidation of regional systems, normally spread geographically, into a single landscape. Technically, it's quite complex, since BW objects like InfoObjects, DSOs, InfoCubes, queries, etc., when brought together from multiple systems, can have overlapping names, which has knock-on effects of its own. For example, the InfoCube 0SD_C03 from one BW system can cause issues when the 0SD_C03 InfoCube from a different BW system is to be moved to the consolidated BW system, especially when both objects have different characteristics and key figures. The same applies to other BW objects as well.


In simple terms, the concept is clear: the technical names of the BW objects need to be unique so that they can be seamlessly consolidated into a single system. But how? Given the volume of BW objects and the intricacies of each one, the whole project would truly be massive.


From the approach point of view, there can be two ways


  1. Superset Approach
  2. Unique Object Renaming Approach


The concept of the superset approach is quite simple: create a single object that includes the attributes used in all other BW systems, so that one object covers all attributes. For example, say the master data object 0MATERIAL from the BW1 system has attributes attr1, attr2 and attr3, while 0MATERIAL from the BW2 system has attributes attr4, attr5 and attr6. Following the superset approach, the consolidated BW system will have 0MATERIAL with attributes attr1, attr2, attr3, attr4, attr5 and attr6. Though this is clear from the object metadata point of view, it brings up complexity from the data point of view. How? 0MATERIAL from two different BW systems may carry the same values representing different materials. This basically implies two things.
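The superset idea for this 0MATERIAL example can be sketched as a simple attribute-set union (attr1 through attr6 are placeholder names from the example above, not real InfoObject attributes):

```python
# Superset approach: the consolidated InfoObject carries the union of the
# attribute lists maintained in each regional BW system.
bw1_attrs = ["attr1", "attr2", "attr3"]  # 0MATERIAL attributes in BW1
bw2_attrs = ["attr4", "attr5", "attr6"]  # 0MATERIAL attributes in BW2

# dict.fromkeys preserves order while dropping duplicates across systems
superset = list(dict.fromkeys(bw1_attrs + bw2_attrs))
print(superset)  # ['attr1', 'attr2', 'attr3', 'attr4', 'attr5', 'attr6']
```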


  1. We can follow the superset approach provided the data is harmonized across the regional BW systems. If this data governance has been followed correctly, the approach works fine.
  2. However, if data harmonization has not been taken care of, development effort may be needed, such as compounding the objects to 0LOGSYS to differentiate the data coming from different BW systems. Again, think of the massive effort involved in making this change.
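The collision problem, and how compounding to the source system solves it, can be sketched as follows (the material numbers, descriptions and logical system names are hypothetical):

```python
# Material "1000" exists in both regional systems but denotes different things.
bw1_rows = [{"MATERIAL": "1000", "DESC": "Pump"}]
bw2_rows = [{"MATERIAL": "1000", "DESC": "Valve"}]

# Without compounding, both rows share the key "1000": one silently
# overwrites the other in the consolidated master data.
flat = {r["MATERIAL"]: r["DESC"] for r in bw1_rows + bw2_rows}

# Compounding to the source system (0LOGSYS) makes the keys unique again.
compounded = {}
for logsys, rows in [("BW1CLNT100", bw1_rows), ("BW2CLNT100", bw2_rows)]:
    for r in rows:
        compounded[(logsys, r["MATERIAL"])] = r["DESC"]

print(len(flat))        # 1 -- a silent collision
print(len(compounded))  # 2 -- both materials survive
```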


With the unique renaming approach, there is more clarity in terms of the approach, since it is a straightforward renaming of all objects so that objects from different BW systems, when consolidated, remain free of conflicts because they stay unique and exclusive. But think of the manual effort of renaming the entire set of objects (InfoObjects, DSOs, InfoCubes, MultiProviders, transformations, routines, queries, process chains etc.). This is quite cumbersome: the manual effort is massive, definitely inefficient, and more prone to errors.


Either way, it's definitely not a project that's regular in nature; it requires extreme clarity regarding the complexity of the activities involved, and adequate planning and expertise are required to ensure that the consolidated BW system works as before.



Hi All,


This blog will help you understand the relation between DTP loads and ODS request activation from a technical perspective.


ODS request activation is similar to a delta DTP load: the delta requests (requests which are not yet activated) available in the source (New table) are processed to the targets (Active and Change Log tables) based on the request ID.


Source and Target : DSO activation is just like DTP load processing, where the New table acts as the "source" and the Active table and Change Log table act as the "targets".

Data Package : Just like the package size in your DTP settings, there is also a package size in the DSO activation settings (refer to T-code RSODSO_SETTINGS), which is used to group records into data packages for processing.

Parallel Processing : Parallel processing in a DTP is used to process the data packages in parallel. Parallel processing in DSO activation works similarly.

The figure below (taken from RSODSO_SETTINGS) illustrates Data Package and Parallel Processing:

  • Package Size Activation determines the number of records sent in a single package (from the New table to the Active and Change Log tables).
  • Number of Processes determines the number of packages processed at a time.

For example, if your New table has 1,000,000 records in total, then with the settings Package Size Activation = 50,000 and Number of Processes = 4, a total of 20 packages (= 1,000,000 / 50,000) will be created, and 4 packages will be processed in parallel at a time.
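The package arithmetic from this example can be sketched as follows (the variable names are illustrative; the two settings correspond to the RSODSO_SETTINGS parameters described above):

```python
import math

# Example values from the text above
total_records = 1_000_000
package_size = 50_000   # "Package Size Activation" in RSODSO_SETTINGS
processes = 4           # "Number of Processes"

# Number of activation packages, and how many rounds of parallel
# processing are needed when 4 packages run at a time.
packages = math.ceil(total_records / package_size)
waves = math.ceil(packages / processes)

print(packages)  # 20
print(waves)     # 5
```

So the 20 packages are worked off in 5 waves of 4 parallel processes each.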


Request by Request in delta DTP loads vs. "Do not condense requests into one request" in DSO activation : When you select "get delta request by request" in the DTP settings, the delta requests from the source are processed one after another, and each run creates a separate request ID. Similarly, when you select "do not condense requests into one request when activation takes place" in the DSO activation settings, the multiple requests waiting for activation are activated one after another, and each request activation generates a new request ID.

To know more about how ODS activation works, please navigate to the wiki page.

Any further comparisons that are missed above are much appreciated!


Bharath S

Just recently, I got dragged, yet again, into a debate on whether data warehousing is outdated or not. I tried to boil it down to one among many problems that data warehousing solves. As that helped direct the discussion into a constructive and less ideological debate, I've put it into this short blog.

The problem is trivial and very old: since you need data from multiple sources, why not access the data directly in those sources whenever needed? That guarantees real-time. Let's assume that the sources are powerful, network bandwidths are state of the art, and overall query performance is excellent. So: why not? In fact, this is absolutely valid, but there is one more thing to consider, namely that all sources to be accessed need to be available. What is the mathematical probability of that? Even small analytic systems (aka data marts) access 30, 40, 50 data sources. For bigger data warehouses this goes into the hundreds. That does not mean that every query accesses all those sources, but naturally a significantly smaller subset. However, from an admin perspective it is clearly not viable to continuously translate source availability into query availability. One must assume that end users want to access all sources at any time.

Figure 1 pictures 3 graphs that show the probability of all sources (= all data) being available, depending on the average availability of a source. For the latter 99%, 98% and 95% were considered to cater for planned and unplanned downtimes, network and other infrastructure failures. Even if a service-level agreement (SLA) of 80% availability (see dotted line) is assumed, it becomes obvious that such an SLA can be achieved only for a modest number of sources. N.b. that this applies even when data is synchronously replicated into an RDBMS because replication will obviously fail if the source is down or not accessible.


Fig. 1: Probability that all (data) sources are available given an average availability for a single source.
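The curves in Figure 1 follow directly from treating the sources as independent, each with average availability p: the probability that all n are up is p to the power n. A minimal sketch reproducing the numbers behind the figure:

```python
import math

def all_sources_available(p: float, n: int) -> float:
    """Probability that ALL n independent sources are up,
    given an average per-source availability p."""
    return p ** n

# Even at 99% per-source availability, with 50 sources the chance
# that all data is reachable drops to roughly 60%.
print(round(all_sources_available(0.99, 50), 3))  # 0.605

# Largest number of sources that still meets an 80% "all data" SLA
# (the dotted line in Figure 1), for the three availabilities shown:
for p in (0.99, 0.98, 0.95):
    n_max = math.floor(math.log(0.8) / math.log(p))
    print(p, n_max)  # 0.99 -> 22, 0.98 -> 11, 0.95 -> 4
```

This is why such an SLA can be achieved only for a modest number of directly federated sources.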


In a data warehouse (DW), this problem is addressed by regularly and asynchronously copying (extracting) data from the sources into the DW. This is a controlled, managed and monitored process that can be made transparent to the admin of a source system, who can then cater for downtimes or any other non-availability of his system. As such, one big problem for one admin, i.e. the availability of all sources, is broken down into smaller chunks that can be managed in a simpler, decentralized way. Once the data is in the DW, it is available independently of planned or unplanned downtimes or network failures of the source systems.

Please do not read this blog as a counter-argument to federation. I simply intend to create awareness of a problem that a data warehouse solves and that must not be under-estimated or neglected.

This blog has been cross-published here. You can follow me on Twitter under @tfxz.

The following are a few design considerations for reporting when the New General Ledger Accounting module is implemented in SAP BW. They apply to the following data sources.


0FI_GL_10           General Ledger: Balances, Leading Ledger

3FI_GL_xx_TT      General Ledger (New): Balances from Any Ledgers (Generated)

0FI_GL_14           General Ledger Accounting (New): Line Items of the Leading Ledger

3FI_GL_XX_SI      General Ledger Accounting (New): Line Items of Any Ledger (Generated)



Factors / Delta Methods

The three generated delta methods are compared below. Following the notes underneath the table, the first column is referred to as AIED, the second as ADD, and the third aggregates data only within a logical unit of work (LUW).

Factor | AIED | ADD | LUW-aggregated method
(factor label lost in conversion) | Not required | Not required |
Update type for key figures | Overwrite | Addition (if DSO is used) | Addition (if DSO is used)
Data load performance | Considerable performance problems if a large number of totals records are added or changed between two delta transfers; in particular, performance can drop dramatically when the balance carry-forward or mass postings are executed during year-end closing | Good performance with large data volumes | Very good performance with large data volumes
Data volume | Relatively high data volumes are transferred to BW | Relatively low data volume is transferred to BW | Since the data is aggregated only within a logical unit of work (LUW) for totals records, the data volume transferred to BW is usually greater than with the ADD method
ECC downtime during delta initialization | Not required | A posting-free period must be ensured in ECC | A posting-free period must be ensured in ECC
Planning data in delta mode | Planning data can be extracted in delta mode | No planning data can be extracted in delta mode | No planning data can be extracted in delta mode (however, this can be enabled via a modification with an SAP note)
Key figure 0BALANCE | Data is available by default | Has to be calculated in a start routine in BW; the prerequisite for this method is that line items are written in period 0 | Has to be calculated in a start routine in BW
Data availability in BW | Data in BW is current | One-hour latency: with an upper safety interval of an hour, only line items MORE THAN ONE HOUR OLD are transferred to BW; this safety interval must NOT be reduced, as posting records could otherwise be lost during extraction | Data in BW is current
Additional index in ECC | An additional secondary index on the field TIMESTAMP is required for the totals table | An additional index on the TIMESTAMP field in the line item table is required | No additional index is required on the totals table or the line item table

Notes on the individual methods:

AIED: This delta method can transfer only about 1,000 totals records per minute to BW, so it is recommended only if a relatively low number of totals records are added or changed between two delta transfers. For example, almost two hours are required for 100,000 extracted totals records.

ADD: With large data volumes, this method is faster than the AIED method described above. It is particularly efficient if a large number of the SAME characteristic combinations are posted between two delta transfers (for example, numerous postings to the value-added tax account with the same profit center), because in this case the selected line items are transferred to BW in aggregated form.

LUW-aggregated method: Because of its performance advantages over the two other methods (AIED & ADD), this method is the best alternative in most cases with large data volumes. Since the data is aggregated only within a LUW, it is most efficient if relatively few DIFFERENT characteristic combinations are posted between two delta transfers (for example, all postings are made to different profit centers).



                 Anything in this world requires general and critical maintenance to extend its life. Let me take two classic examples to make clear what I am going to talk about in this blog: the human body and a vehicle. Both require regular check-ups and maintenance to have a long life. Similarly, all our SAP systems require regular check-ups and general/critical maintenance to keep them healthy. Below you will find the interesting sections an EWA report can produce. I am going to show only the most important sections, not everything. Let's jump into the EWA now.


  • An SAP EWA can be produced by SolMan, which is nothing but Solution Manager
  • It can be generated on a weekly basis to keep an eye on the overall status of the production system
  • I am using the same terms SAP uses in the report; you will find explanations for some of them in brackets
  • I am highlighting the column headers; imagine your own configuration details in those tables
  • My intention is to show you which parameters (column headers) SAP EWA considers
  • There will be multiple sections describing your hardware details
  • The report starts with all hardware and software information, mainly for BASIS administrators (as well as for BW folks, for knowledge's sake)
  • The lower part of the report is for BI developers/support, covering topics such as the largest aggregates


Your report heading would be like Early Watch Alert - BI_SYSTEM_LANDSCAPE


It gives a complete high level overview of your BW system with ratings on various parameters

Topic | Sub Topic | Rating
Performance Overview | Performance Evaluation | e.g. green tick mark
SAP System Operating | Program Errors (ABAP Dumps) |
 | Update Errors |
 | Hardware Capacity |
Database Performance | Missing Indexes |
BW Checks | BW Administration & Design |
 | BW Reporting & Planning |
 | BW Warehouse Management |
Security | SAP Security Notes: ABAP and Kernel Software Corrections |
 | Users with Critical Authorizations |
JAVA System Data | Java Workload Overview |
 | Java Application Performance |


Service Summary


Performance Indicators for Production BW (System Name)


Area | Indicators | Value | Trend
System Performance | Active Users (>400 steps)
 | Avg. Availability per Week
 | Avg. Response Time in Dialog Task
 | Max. Dialog Steps per Hour
 | Avg. Response Time at Peak Dialog Hour
 | Avg. Response Time in RFC Task
 | Max. Number of RFCs per Hour
 | Avg. RFC Response Time at Peak Hour
Hardware Capacity | Max. CPU Utilization on Appl. Server
Database Performance | Avg. DB Request Time in Dialog Task
 | Avg. DB Request Time for RFC



1) Products and Components in current Landscape


SID | SAP Product | SAP Product Version
 | SAP NetWeaver | e.g. 3.5 / 7.0 / 7.3


Main Instances (ABAP or Java based)

SID | Main Instance
Eg: Application Server JAVA
Eg: Enterprise Portal



SID | Database System | Database Version
 | e.g. Oracle, MS-SQL, HANA etc. |

2) Servers in current Landscape

SAP Application Servers (If you have multiple servers, those will be listed down below)

SID | Host | Instance Name | Logical Host | ABAP | JAVA


DB Servers

SID | Host | Logical Host (SAPDBHOST)



Related SID | Component | Host | Instance Name | Logical Host


3) Hardware Configuration


Host Overview

Host | CPU Type | Operating System | No. of CPUs | Memory in MB


ST-PI and ST-A/PI plug-ins. This section indicates whether you should update to the latest levels.


Rating | Plug-In | Release | Patch Level | Release Rec | Patch Level Rec


Software Configuration For your Production System

SAP Product Version | End of Mainstream Maintenance | Status
e.g. SAP NetWeaver 7.0 | 31.12.2017 |


Support Package Maintenance - ABAP.


This information can be found in your BW system via System --> Status, except for the latest available patch level. The table below indicates whether an update to the latest patches is needed.

Software Component | Version | Patch Level | Latest Avail. Patch Level | Support Package | Component Description


Support Package Maintenance - JAVA

Component | Version | SP | Latest Available SP

Database - Maintenance Phases

Database System | Database Version | End of Standard Vendor Support* | Comment | End of Extended Vendor Support* | Comment | Status | SAP Note


A similar table appears for your operating system.


SAP Kernel Release. You can find this via System-->Status

Instance(s) | SAP Kernel Release | Patch Level | Age in Months | OS Family


The report indicates whether to update to the latest Support Package Stack for your kernel release, if required.


Overview System (your SID)



This analysis basically shows the workload during the peak working hours (9-11, 13) and is based on the hourly averages.



If the average CPU load exceeds 75%, temporary CPU bottlenecks are likely to occur. An average CPU load of more than 90% is a strong indicator of a CPU bottleneck.



If your hardware cannot handle the maximum memory consumption, this causes a memory bottleneck in your SAP system that can impair performance.


Workload Overview (Your SID)


  • Workload By Users
  • Workload By Task Types Eg: RFC, HTTP(S)

The above information is presented by SAP in excellent graphical form, from which you can draw good conclusions.


BW Checks for (Your SID)


BW - KPIs : Some BW KPIs exceed their reference values. This indicates either that there are critical problems or that performance, data volumes, or administration can be optimized.

KPI | Description | Observed | Reference | Rating | Relevant for Overall Service Rating
Nr. of aggregates recommended to delete | | e.g. 25 | e.g. 13 | Yellow |


This indicates that all aggregates with zero calls can be deleted.


Program Errors (ABAP Dumps)


This section shows the ABAP dumps (ST22) that occurred in the last week. The report suggests monitoring them on a regular basis and determining the possible causes as soon as possible. E.g.: CX_SY_OPEN_SQL_DB


Users with Critical Authorizations


This section suggests reviewing all our authorization roles and profiles on a regular basis. For additional information, see SAP Note 863362.


Missing Indexes


This section indicates whether primary indexes exist on the tables in the database; missing indexes can lead to severe performance issues.


Data Distribution


Largest InfoCubes : We should make sure we do Compression on a regular basis.

InfoCube Name | # Records


Largest Aggregates : Large aggregates cause high run times for roll-ups and attribute change runs, so we should check them periodically and modify them at least on a quarterly basis.

InfoCube | Aggregate Name | # Records


Analysis of InfoProviders : This table basically shows the Counts


Info Providers | Basis Cubes | Multi Providers | Aggregates | Virtual Cubes | Remote Cubes | Transactional Cubes | DSO Objects | Info Objects | Info Sets


DSO Objects : This table basically shows the Counts


# DSO Objects | # DSO Objects with BEx flag | # DSO Objects with unique flag | # Transactional DSO Objects

InfoCube Design of Dimensions : You can check this by running SAP_INFOCUBE_DESIGNS in SE38


InfoCube | # Rows | Max % entries in DIMs compared to F-table


Aggregates Overview : We can take a call to delete unused Aggregates by observing this table


# Aggregates | # Aggregates to consider for deletion | # Aggregates with 0 calls | # Basis Aggregates


Aggregates to be considered for deletion (Most important Section to take a quick action)



Cube Name | Aggr. Cube | # Entries | Avg. Reduce Factor | # Calls | Created At | Last Call | # Nav. Attr. | # Hier.

DTP Error Handling : You can deactivate error stack if you don't expect errors often. It's better to use "No Update No Reporting" option.


# DTPs with error handling | # Total DTPs | % of DTPs with error handling

BW Statistics


All your BI Admin Cockpit information is provided in detailed tabular form covering OLAP times, run times and more. There are many tables for every aspect, which I cannot show in this blog as it is already quite long.




"SAP Early Watch Alert" gives us a complete picture of our BW System in all aspects. This is a fantastic service gives by SAP to keep us alert before any damage happens to the system. I have tried to show you almost all important things in this blog.

Process Chains triggering through Macro in Excel

I recently got a requirement to work on Excel-based process chains. Every day new records are added to the Excel sheets, and we need to load all these files into the BW server automatically through a process chain; all the Excel files reside on the desktop.

I found two solutions:

1. Save the files in AL11 (SAP Directories) by using the below function modules.



If we use the above FM, we have to develop an ABAP program in SE38.

2. Writing the logic in an Excel macro (no ABAP program needed)

In Excel, go to the View menu -> Macros -> View Macros, create a new macro, give it a name and select it.

The following screen will appear.



Sub SaveSheetsAsCSV()

    Dim i As Integer
    Dim fName As String

    ' Suppress the "file already exists" prompts so the daily run overwrites silently
    Application.DisplayAlerts = False

    For i = 1 To Worksheets.Count
        ' Note: no space before the sheet index, otherwise the file name starts with a blank
        fName = "D:\usr\sap\DEV\DVEBMGS03\work\STAFFING_PROJECT\" & i
        ActiveWorkbook.Worksheets(i).SaveAs Filename:=fName, FileFormat:=xlCSV
    Next i

    Application.DisplayAlerts = True

End Sub

The code above converts the XLS sheets to CSV format automatically whenever a user runs the macro.

Users update the Excel sheets and run the macro every day; existing files are overwritten automatically on each run.

The CSV files are saved to the AL11 SAP directory.

Check the data source path – AL11 Directories


Note: I am not explaining how to create the DSO, cube or process chains.

Please look at the process chain's daily scheduling.

The process chain is triggered once a day and performs a full load every time. I used the Delete PSA Request step and deleted the data target contents from the cube before loading to the cube.

Check the daily scheduling: in the start process I set the time to 6:00 AM every morning.



Hope it will help.





The changes to the Financial Accounting line items are stored in the table BWFI_AEDAT, which enables the BW system to pull delta data using the time stamp procedure. As a standard SAP recommendation, extraction from the Financial Accounting line items is limited to once a day, and hence ad-hoc data loads from the following extractors will bring zero records:


  • 0FI_GL_4  - General ledger: Line Items
  • 0FI_AP_4  - Accounts payable: Line Items
  • 0FI_AR_4  - Accounts receivable: Line Items


This document describes the procedure to activate the ad-hoc data loads which will bring the delta data from the line items from the listed data sources into BW.




The FI line item delta data sources can identify new and changed data only down to the day, because the source tables contain only the CPU date, not the time, as the change time characteristic. This results in a safety interval of at least one day.


The standard behavior can be changed. For more information, see SAP Note 485958.


Settings to activate Ad-hoc loads:


Frequent data loads from the line items into BW allow each extraction to be done more efficiently, and also reduce the risk of data load failures due to the huge volume of data in the system.


The following manual changes have to be performed in the source system in the table BWOM_SETTINGS:

BWFINEXT = 'X' (activates the new extractor logic)

BWFINSAF = 3600 (for hourly extraction)


With the above parameters changed, the safety interval now depends on the flag BWFINSAF, which defaults to 3600 seconds (1 hour). This value can be changed depending on the requirement.
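The effect of BWFINSAF can be sketched as a simple time-stamp cutoff. This is a hypothetical illustration of the safety-interval idea, not the actual extractor code:

```python
from datetime import datetime, timedelta

BWFINSAF = 3600  # safety interval in seconds (table BWOM_SETTINGS, default 1 hour)

def delta_upper_bound(now: datetime) -> datetime:
    # Only documents changed BEFORE this point in time are selected by a
    # delta run; younger records wait for the next extraction.
    return now - timedelta(seconds=BWFINSAF)

print(delta_upper_bound(datetime(2014, 6, 1, 12, 0, 0)))  # 2014-06-01 11:00:00
```

With hourly process chain scheduling, each run thus picks up roughly the previous hour's postings.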

Changing BWFINSAF and BWFINEXT in the table causes the other flags, such as BWFIOVERLA, BWFISAFETY and BWFITIMBOR, to be ignored.


The flags BWFILOWLIM and DELTIMEST work as before.






Once the changes are updated in the table, delta extraction can be initiated in BW, which will bring in the data loads as scheduled in the process chain or via ad-hoc manual data loads.




With the new extractor logic implemented, you can change back to the standard logic at any time by switching the flag BWFINEXT from 'X' back to ' ' and extracting as before. But ensure that no extraction is running (for any of the 0FI_*_4 extractors/data sources) while switching.


For version validity and more information, please refer to SAP Note 991429.


Side Effects:


There are no side effects in current ECC versions, but if the source system is on an older version (SAP_APPL between 600 and 605, or 2004_1_46C to 2004_1_500, at various patch levels), please refer to SAP Note 1152755; there, data extraction of the following data sources will fail:


  • 0FI_AA_11
  • 0FI_AA_12

This is because the Asset Accounting data sources and the FI line item data sources use the same function modules to fetch and update the time stamps for extraction.


The corrections specified in SAP Note 1152755 have to be incorporated in order to resolve the issue with the AA data sources.









