Technology Blogs by Members
Explore a vibrant mix of technical expertise, industry insights, and tech buzz in member blogs covering SAP products, technology, and events. Get in the mix!
Former Member

This paper describes how to use proper modeling techniques to design process chains so that data load runtime is reduced when data comes from different source systems. The techniques include back-end data model optimization, efficient use of work processes, parallelization instead of serialization, appropriate design settings, and overall process chain design.


The following procedures are used to reduce data load runtimes when loading data through process chains.

  • Always load master data first, then transaction data. This reduces the transaction data load runtime because the SIDs already exist and do not have to be generated during the load.

  • Execute objects in parallel rather than serially. Note that this technique requires more work processes.

  • Avoid complex program logic in the BW data model wherever possible by using formulas, the read-master-data rule type, etc.

  • Maximize parallel processing of data packages during DTP execution by adjusting the setting at the DTP level. This technique also requires more work processes.

  • Use DataSources with a delta mechanism wherever applicable, so that only new or changed records are transferred.

  • Avoid creating secondary indexes unless they are genuinely required, because maintaining them consumes significant time during loads.

  • If no reporting is done on a DSO (DataStore Object), uncheck the SID generation flag in the DSO settings to reduce the data load runtime.

  • Provide sufficient table space to achieve good data load performance.

  • Using the maximum number of work processes in parallel has a positive effect on the data load runtime (the runtime decreases).

  • InfoCube index deletion: delete the cube indexes before loading data and rebuild them after the load completes. This reduces the data load runtime.

  • Create separate master data process chains (attributes and texts) where data changes are infrequent, and schedule them weekly or monthly, depending on customer requirements, to capture these changes.
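The "master data first" point above comes down to SID (surrogate ID) handling: if the SIDs already exist when transaction data arrives, the load only performs lookups instead of creating SIDs on the fly. A minimal conceptual sketch in Python (the SID table, key names, and records are hypothetical, not SAP APIs):

```python
# Hypothetical SID table: maps a master data key to a numeric surrogate ID.
sid_table = {}

def get_or_create_sid(key):
    """Return the SID for a key, creating one if it does not exist.
    Creating SIDs on the fly is the expensive path during a transaction load."""
    if key not in sid_table:
        sid_table[key] = len(sid_table) + 1
    return sid_table[key]

def load_master_data(keys):
    """Loading master data first pre-fills the SID table."""
    for key in keys:
        get_or_create_sid(key)

def load_transactions(records):
    """With master data already loaded, each record needs only a cheap SID lookup."""
    return [(get_or_create_sid(customer), amount) for customer, amount in records]

load_master_data(["C100", "C200", "C300"])   # master data first
facts = load_transactions([("C100", 50.0), ("C200", 75.0)])
print(facts)  # [(1, 50.0), (2, 75.0)]
```

In the real system this ordering is enforced by sequencing the master data InfoPackages/DTPs before the transaction data steps in the process chain.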
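The effect of parallel versus serial execution described above can be illustrated outside SAP with a small Python sketch, where each worker thread stands in for an extra work process and `load_object` is a hypothetical stand-in for a single load step:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def load_object(name, seconds=0.2):
    """Simulate one data load step (stand-in for an InfoPackage/DTP run)."""
    time.sleep(seconds)
    return name

objects = ["MASTER_A", "MASTER_B", "MASTER_C", "MASTER_D"]

# Serial execution: total runtime is roughly the sum of the individual loads.
start = time.perf_counter()
serial_results = [load_object(o) for o in objects]
serial_time = time.perf_counter() - start

# Parallel execution: with enough workers, total runtime approaches
# the duration of the longest single load.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel_results = list(pool.map(load_object, objects))
parallel_time = time.perf_counter() - start

print(f"serial: {serial_time:.2f}s, parallel: {parallel_time:.2f}s")
```

The same trade-off the bullet notes applies here: the parallel variant finishes sooner only because four workers (work processes) are available at once.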
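The delta mechanism mentioned above can be sketched generically as keeping a "last loaded" pointer so that each run transfers only new or changed records instead of the full table. The table layout and `DeltaQueue` class below are illustrative assumptions, not the actual SAP delta queue implementation:

```python
# Hypothetical source table: (record_id, changed_at, payload)
source = [
    (1, "2024-01-01", "A"),
    (2, "2024-01-02", "B"),
    (3, "2024-01-03", "C"),
]

class DeltaQueue:
    """Track a last-loaded pointer so each run extracts only new/changed rows."""
    def __init__(self):
        self.last_loaded = ""  # empty pointer => the first run is a full load

    def extract(self, table):
        delta = [row for row in table if row[1] > self.last_loaded]
        if delta:
            self.last_loaded = max(row[1] for row in delta)
        return delta

queue = DeltaQueue()
first_run = queue.extract(source)        # full load: all 3 records
source.append((4, "2024-01-04", "D"))    # a new record arrives in the source
second_run = queue.extract(source)       # delta load: only the new record
print(len(first_run), len(second_run))   # 3 1
```

This is why delta-enabled DataSources shorten the load: after the initialization, the volume per run shrinks to just the changes.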
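The index-deletion step above follows a pattern common to most databases: dropping an index before a mass insert avoids maintaining it row by row, and rebuilding it once afterwards is cheaper. A minimal sketch using SQLite (the table and index names are made up for illustration; in BW this is done via the process chain's index deletion/generation steps, not SQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cube_fact (doc_no INTEGER, amount REAL)")
conn.execute("CREATE INDEX idx_doc ON cube_fact (doc_no)")

# Step 1: drop the index before the mass load, so the database does not
# have to update it for every inserted row.
conn.execute("DROP INDEX idx_doc")

# Step 2: bulk load the data.
rows = [(i, float(i)) for i in range(10000)]
conn.executemany("INSERT INTO cube_fact VALUES (?, ?)", rows)

# Step 3: rebuild the index once, after the load completes.
conn.execute("CREATE INDEX idx_doc ON cube_fact (doc_no)")
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM cube_fact").fetchone()[0]
print(count)  # 10000
```

Rebuilding after the load also leaves the index in a compact, freshly built state for subsequent reporting queries.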