
Users often complain that queries take a long time to execute, and we are left clueless about how to proceed. Transaction RSRT can help us analyze exactly where the issue lies, so that we can then work on improvements.

Query runtime mainly consists of two components.

  1. Data Manager (database) time: the time the query spends reading data from the database layer.
  2. OLAP time: the time the query spends calculating CKFs, RKFs, exception aggregation, and all the other formulas and conditions applied at the query level.
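To make the split concrete, here is a minimal sketch in plain Python (not ABAP or any SAP tool), using made-up numbers, of how the total runtime decomposes and which layer deserves attention first:

```python
# Illustrative only: hypothetical runtimes (in seconds) for one query execution.
db_time = 38.0     # time spent reading data from the database layer
olap_time = 6.5    # time spent on CKFs, RKFs, formulas, exception aggregation
other_time = 1.5   # frontend rendering and other smaller components

total = db_time + olap_time + other_time
print(f"Total runtime:  {total:.1f}s")
print(f"Database share: {db_time / total:.0%}")    # ~83% -> tune the database side first
print(f"OLAP share:     {olap_time / total:.0%}")  # ~14%
```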

How do we find out where exactly the query spends its time?

Go to transaction RSRT, enter the query name, and click Execute + Debug.


 
After that, the following screen appears.


Click Display Statistics Data, since we want to see exactly where the query is spending its time.


Also select Do Not Use Cache to get the exact time; otherwise, data is fetched from the cache and you will not get the correct runtime. If you are using a BW Accelerator, also select the Do Not Use BIA Index option. Then click the tick (confirm) button at the bottom.

You will get the selection screen. Enter the input parameters and run the query.


After you get the result, click the green back button at the top. This takes you to the query statistics.

In the Frontend/Calculation Layer section of the statistics, we can see the time taken by the query at its various processing stages.



We can identify the time taken in each layer with the help of the event IDs.

Data Manager time: The time the query spends reading data from the database layer is recorded under event IDs starting from 9000 (9000, 9001, 9002, 9010, 9011, etc.). Summing up the times of all event IDs in the 9000 to 9100 range gives the database time.

OLAP time: This is the calculation-layer time the query spends on CKFs, RKFs, formula variables, text variables, exception aggregation, and so on. The OLAP time is recorded under event IDs such as 3000, 3010, 3100, 3110, etc., all in the 3000 series. Summing up the times of all event IDs in the 3000 range gives the OLAP time.
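This event-ID arithmetic is easy to reproduce once you have the event list in front of you. Below is a minimal sketch in plain Python; the events and times are invented sample data, and in practice you would copy them out of the RSRT statistics display (or, assuming your system records them, from the BW statistics table RSDDSTAT_OLAP):

```python
# Minimal sketch: classify statistics events by event-ID range and sum the times.
events = [
    # (event_id, duration_in_seconds) -- hypothetical sample data
    (9000, 21.4), (9001, 3.2), (9010, 5.0),   # Data Manager / database reads
    (3010, 2.1), (3110, 4.3), (3200, 0.4),    # OLAP: CKFs, RKFs, exception aggregation, ...
    (2500, 0.8),                              # other, smaller components
]

db_time   = sum(t for eid, t in events if 9000 <= eid <= 9100)  # Data Manager time
olap_time = sum(t for eid, t in events if 3000 <= eid < 4000)   # OLAP time

print(f"Data Manager time: {db_time:.1f}s")    # 29.6s
print(f"OLAP time:         {olap_time:.1f}s")  # 6.8s
```

Whichever of the two sums dominates tells you which of the improvement lists below to start with.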

There are some other, smaller components as well, but the majority of the time is usually spent either at the database level or at the OLAP level.

Once you have identified where the problem lies, you can propose solutions to improve the query runtime.

If the query is taking a lot of time at the database level, possible steps are:

  1. Check whether you have aggregates. If aggregates already exist, check their summarization ratio and valuation; the valuation should be on the higher side. Otherwise, take an aggregate proposal from the system based on query execution times (a toy calculation of the ratio follows after this list).
  2. You can go for a BW Accelerator, which greatly improves the database time, but the customer may not want to pay for it, as it is expensive.
  3. Check whether the cube has logical and physical partitioning.
  4. Always build queries on top of a MultiProvider, as it provides parallel processing.
  5. Keep the DB statistics of the cube up to date, as this improves the database read time.
  6. Make sure you do not have any high-cardinality dimension in the cube, as it affects performance. The sorting technique changes when a dimension is flagged as high cardinality, and performance suffers whenever there is a drill-down on the characteristics in that dimension.
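For point 1, the ratio of an aggregate can be pictured as a simple division. The following toy Python calculation uses invented numbers purely to illustrate the idea; in the system, the summarization ratio and valuation are shown in the aggregate maintenance screen:

```python
# Toy illustration of an aggregate's summarization ratio (all numbers made up).
rows_in_infocube  = 50_000_000   # records in the InfoCube fact table
rows_in_aggregate = 250_000      # records in the aggregate fact table

summarization_ratio = rows_in_infocube / rows_in_aggregate
print(f"Summarization ratio: {summarization_ratio:.0f} : 1")  # 200 : 1

# A high ratio (many cube rows condensed into one aggregate row) is what makes the
# aggregate worth its maintenance cost; a ratio close to 1 means it barely helps.
```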


       

If the query is taking a lot of time at the OLAP layer, possible steps are:



  1. Check all the CKFs, RKFs, formula variables, and text variables. Try to use as few of them as possible.
  2. Try not to use exception aggregation, as it takes a large amount of time for large data volumes; every time you drill down or add a characteristic, it consumes a lot of time. If the data volume is small, the effect on performance is minor (a toy example follows after this list).
  3. Do not go for a customer exit variable if an SAP exit variable already exists.
  4. You can pre-fill the cache through information broadcasting if the data volume is very large.
  5. Try to use as few cells as possible.
  6. You can also consider the RRI (report-to-report interface) concept with two layers: the first-layer query containing only the key figures read directly from the cube, and the second layer containing the local CKFs, RKFs, and key figures with exception aggregation and other calculations.
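For point 2, the cost of exception aggregation comes from the fact that it cannot be computed from pre-aggregated subtotals. The toy Python example below (invented data, not SAP code) contrasts a normal SUM with a count-distinct style exception aggregation:

```python
# Why exception aggregation is expensive: "number of distinct customers" cannot be
# derived from subtotals, so the detail records must be revisited for every drill-down.
records = [
    # (region, customer, amount) -- made-up detail data
    ("North", "C1", 100), ("North", "C2", 50), ("North", "C1", 75),
    ("South", "C2", 20),  ("South", "C3", 30),
]

# Normal aggregation (SUM) can be built up from subtotals at any level.
total_amount = sum(amount for _, _, amount in records)

# Exception aggregation (here: count of distinct customers) must scan the detail
# rows again for each navigation state, e.g. per region after a drill-down:
regions = {region for region, _, _ in records}
distinct_customers = {
    region: len({cust for r, cust, _ in records if r == region}) for region in regions
}

print(total_amount)        # 275
print(distinct_customers)  # e.g. {'North': 2, 'South': 2}
```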


        

The event IDs and their meanings can be found at the following link:


http://help.sap.com/saphelp_erp60_sp/helpdata/en/45/f0488a1aa03115e10000000a1553f7/content.htm

I look forward to your inputs on further improving this document.

Thanks,

Shyam
