*Update 03.02.2011:* You can find this blog as an article to download in SDN.

In BI 7.x you have three different kinds of DataStore objects (DSO): standard, write-optimized, and direct update. A standard DSO consists of a new data table, an active data table, and a change log table, which records the changes. Write-optimized DSOs and DSOs for direct update consist of an active table only. In BI 7.x the background process that activates data in standard DataStore objects has changed compared to BW 3.5 and earlier releases. In this blog I will explain the DSO activation job log and the settings/parameters of transaction RSODSO_SETTINGS, and I will describe how the parameters you can set in this transaction influence DSO activation performance. I will not describe the different activation types.

h1. 2.    Manual activation of a request

When you have loaded a new request into your standard DSO with a Data Transfer Process (DTP), the data is written to the new data table. You can activate the request manually or within a process chain. If you activate requests manually, you get the following popup screen:

+Picture 1: Manual DSO Activation+

The button "Activate in Parallel" opens the settings for parallel activation. In this popup you select either dialog or background processing; for background processing you also select the job class and server. In both cases you define the number of jobs for parallel processing, which is set to '3' by default. This means you have two jobs that can be scheduled in parallel to activate your data, the BIBCTL* jobs. The third job is needed for controlling the activation process and scheduling the other processes: that is the BI_ODSA* job.

h1. 3.    BI_ODSA* and BIBCTL* Jobs

The main job for activating your data is "*BI_ODSAxxxxxxxxxxxxxxxxxxxxxxxxx*", with a unique 25-character GUID at the end. Let's have a look at the job log in SM37.

Picture 2: Job log for the BI_ODSA* job

Activating data is done in three steps. First, the job checks the status of the request in the DSO to see whether it can be activated (marked green in the log). If there is another yellow or red request before this request in the DSO, activation terminates. In a second step the data is checked against archived data (marked blue in the log). In the third step the actual activation of the data takes place (marked red in the log).

During step 3 a number of sub-jobs "*BIBCTL_xxxxxxxxxxxxxxxxxxxxxxxx*", again with a unique 25-character GUID at the end, are scheduled. This is done to achieve higher parallelism and therefore better performance. But how is the data split up into the BIBCTL* jobs? How does the system know how many jobs should be scheduled? I will answer these questions in the next chapter.

Often, however, the opposite seems to happen: you set a high degree of parallelism and start the activation, but even the activation of a few data records takes a long time. In the DSO activation collection note you will find some general hints for DataStore performance. In chapter 4 I will show you which settings can be the reason for these long-running activations. After the last "BIBCTL*" job has been executed, SID activation is started. Unfortunately, this is not written to the job log for each generation step, but only at the end when the last of your SID generation jobs has finished.

Let's look at the details and performance settings and how they influence DSO activation, so that you may be able to reduce your DSO activation time.
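To make the job split described in the last two chapters concrete, here is a minimal sketch in plain Python (not ABAP). Only the job roles BI_ODSA* and BIBCTL* and the default of three processes come from the description above; the function itself is illustrative arithmetic, not the actual SAP implementation:

{code}
# Illustrative sketch: how the "number of processes" setting maps to activation jobs.
# Assumption (from the description above): one process is taken by the controlling
# BI_ODSA* job, the remaining processes are available for parallel BIBCTL* jobs.
def job_split(parallel_processes: int) -> dict:
    workers = max(parallel_processes - 1, 0)
    return {"bi_odsa_controller_jobs": 1, "parallel_bibctl_jobs": workers}

print(job_split(3))
# {'bi_odsa_controller_jobs': 1, 'parallel_bibctl_jobs': 2}
{code}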
h1. 4.    Transaction for DSO settings

You can view and change these DSO settings via "Goto -> Customizing DataStore" in the manage view of your DSO. You are then in transaction RSODSO_SETTINGS. In [help.sap.com | http://help.sap.com/saphelp_nw70ehp2/helpdata/en/e6/bb01580c9411d5b2df0050da4c74dc/content.htm] you can find some general hints on the runtime parameters of a DSO.

Picture 3: RSODSO_SETTINGS

As you can see, you can make cross-DataStore settings or DataStore-specific settings. I choose cross-DataStore and press "Change" for now. A new window opens which is divided into three sections:

Picture 4: Parameters for cross-DataStore settings

Section 1 is for activation, section 2 for SID generation, and section 3 for rollback. Let's have a look at the sections one after another.

h1. 5.    Settings for activation

In the first section you can set the data package size for activation and the maximum wait time, and change the process parameters. If you click on the button "Change process params" in the part "Parameter for activation", a new popup window opens:

Picture 5: Job parameters

The parameters described here are your default '3' processes provided for parallel processing. By default, background activation is also chosen. You can now save and transport these settings to your QA or productive system. But be careful: these settings are valid for every DSO in your system. As you can see from picture 4, the number of data records per data package is set to 20,000 and the wait time for a process is set to 300; this may differ in your system. What does this mean? It simply means that all the records that have to be activated are split into smaller packages with a maximum of 20,000 records per package. A new job "BIBCTL*" is scheduled for each data package, and the main activation job calculates the number of "BIBCTL*" jobs to be scheduled from the total number of records and this package size.

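As a quick worked example of this calculation, assuming the default package size of 20,000 records shown above (plain Python again and purely illustrative; the real activation job uses its own internal logic):

{code}
import math

# Illustrative: a request of 75,000 records with the default package size of
# 20,000 is split into ceil(75000 / 20000) = 4 data packages, i.e. 4 BIBCTL* jobs.
def number_of_bibctl_jobs(total_records: int, package_size: int = 20000) -> int:
    return math.ceil(total_records / package_size)

print(number_of_bibctl_jobs(75000))  # -> 4
print(number_of_bibctl_jobs(19000))  # -> 1 (a small request still gets one job)
{code}

With the default of three processes from chapter 2, two of these four jobs can run at the same time, so the packages are processed in roughly two waves.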
*One important point: You can change the parameters for data package size and maximum wait time for a process only if there is no request in your DSO. If you have already loaded a request and you change the parameters, the next request will still be loaded with the previous parameter settings. You first have to delete the data in your DSO, change the parameter settings, and then restart the loading.*

h1. 6.    Settings for SID generation

In the section for SID generation you can set the parameters for maximum package size and wait time for processes, too. The button "Change process params" opens the popup described in picture 5. In this popup you define how many processes will be used in parallel for SID generation; it is again your default value. The minimum package size describes the minimum number of records that are bundled into one package for SID generation.

With the SAP Layered Scalable Architecture (LSA) in mind, you need SID generation for your DSOs only if you want to report on them and have queries built on them. Even if you have queries built on top of a DSO without SID generation, missing SIDs will be generated at query execution time, which slows down query execution. For more information on LSA you can watch a really good webinar from the Webinars page. Unfortunately, SID generation is switched on by default when you create your DSO. My recommendation is: +Switch off SID generation for any DSO+! If you use the DataStore object as the consolidation level, SAP recommends that you use the write-optimized DataStore object instead. This makes it possible to provide data in the Data Warehouse layer 2 to 2.5 times faster than with a standard DataStore object with unique data records and without SID generation! In the [performance tips for DataStore objects on help.sap.com | http://help.sap.com/saphelp_nw70ehp2/helpdata/en/48/146cb408461161e10000000a421937/content.htm] you can also find the following table showing how the flags "Generation of SIDs During Activation" and "Unique Data Records" influence DSO activation runtime:

| Generation of SIDs During Activation | Unique Data Records | Saving in Runtime |
| x | x | approx. 25% |
|   |   | approx. 35% |
|   | x | approx. 45% |

The saving in runtime is influenced primarily by the SID determination. Other factors that have a favorable influence on the runtime are a low number of characteristics and a low number of disjointed characteristic attributes.

h1. 7.    Settings for Rollback

The last section describes the rollback. Here you set the maximum wait time for rollback processes, and with the button "Change process params" you set the number of processes available for rollback. If anything goes wrong during activation, e.g. your database runs out of table space or an error occurs during SID generation, a rollback is started and your data is reset to the state before activation. The most important parameter is the maximum wait time for rollback: if this time is exceeded, the rollback job is canceled, which could leave your DSO in an unstable state. My recommendation is to set this parameter to a high value. If you have a large amount of data to activate, you should allow at least double the maximum wait time for activation as the wait time for rollback. Give your database enough time to execute the rollback and to reset your DSO to the state before activation started.
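To put the rollback recommendation into a number: a small sketch, assuming the wait times from picture 4 are given in seconds and using the factor of two suggested in this chapter (this factor is the rule of thumb from this blog, not an SAP-defined formula):

{code}
# Illustrative rule of thumb: allow the rollback at least twice the maximum
# wait time that is configured for activation.
def suggested_rollback_wait(activation_wait: int, factor: int = 2) -> int:
    return activation_wait * factor

print(suggested_rollback_wait(300))  # default activation wait of 300 -> at least 600 for rollback
{code}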
Button "Save" saves all your cross-datastore settings. h1. 8.    DataStore-specific settings For a DataStore-specific setting you enter your DSO in the input field as you can see from picture 3. With this DSO local setting you overwrite the global DSO settings for this selected DSO. Especially if you expect to have very large DSOs with lot of records you can change your parameters here. If you press button "Change process params" the same popup opens as under global settings, see picture 5. h1. 9.    Activation in Process chains I explained the settings for manual activation of requests in a standard DSO. For process chains you have to create a variant for DSO activation as step in your chain, see picture 6. In this variant you can set the number of parallel jobs for activation accordingly with button "Parallel Processing".
