Attribute change run (ACR) is the process of adjusting aggregates whenever the master data used in those aggregates changes. In the BW data load process, the attribute change run plays a vital role after any master data attribute or hierarchy load, ensuring that reports show the correct data.


Many times during batch loads, we encounter a situation where the ACR gets stuck for a long time without any progress. As a result, all the other processes that use the same cube (e.g. delete overlapping requests, create/delete indexes) running in other process chains start failing due to the lock created by the ACR on that cube.

As a workaround, we need to follow the steps below to correct the failure.

  1. Identify and kill all the jobs and sub-jobs of that attribute change run. This can be done through SM37 or by using the program RSDDS_CHANGERUN_MONITOR in SE38.
  2. Manually deactivate the aggregates of the InfoCube for which the job was stuck, from RSA1.
  3. Repeat the attribute change run either through the process chain or manually via RSA1 -> Tools -> Apply Hierarchy/Attribute Changes -> Monitor and Start Terminated Change Runs.
  4. Wait for the change run job to finish (it should finish soon, as the aggregates are now deactivated).
  5. Repeat the other failed processes.
  6. Reactivate the deactivated aggregates (only after checking that no other processes dependent on that cube are still pending and that a suitable time slot is available, as rebuilding the aggregates can take a lot of time).


But the real question is why the ACR jobs get stuck for so long, and how we can avoid these failures and workarounds.


To adapt the aggregates to the changes, the change run works on the basis of a few strategies and parameters.


Strategies to adapt Aggregates:

There are 3 different strategies used to adapt aggregates during a change run.

  1. Rebuild the aggregate (Adapt by Reconstruction)
  2. Delta Mode (Adapt by Delta)
  3. Rollup from previously adapted aggregate


Note: InfoCubes with key figures that use MIN/MAX aggregation can adapt their aggregates only by rebuilding them during the change run.

BW: Parameter for Aggregates

Parameters for aggregates can be set through the path below:

SPRO >> SAP Reference IMG >> SAP Customizing Implementation Guide >> SAP NetWeaver >> Business Intelligence >> Performance Settings >> Parameters for Aggregates. (Tcode : RSCUSTV8)

The parameters defined for the aggregates determine the adaptation strategy to be used during the change run. Based on the threshold value and the percentage of master data changed, either the reconstruction or the delta strategy is chosen.

  • Limit with Delta: Threshold Value (0-99): Delta -> Reconstruct Aggregates

The value defined here determines the aggregate adaptation strategy to be used for the change run. If the percentage of master data changed is greater than the threshold value, the Adapt by Reconstruction strategy is used, which rebuilds the aggregates; otherwise the Delta mode is used, where the old records are posted negatively and the new records positively.
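
As a rough illustration of this decision, here is a minimal Python sketch (not the actual BW implementation; the function name and inputs are hypothetical):

```python
def pick_adaptation_strategy(changed_records, total_records, limit_with_delta):
    """Sketch of the change run's choice between Delta and Reconstruction.

    changed_records / total_records: hypothetical counts of changed and total
    master data records for a characteristic.
    limit_with_delta: the threshold (0-99) maintained in RSCUSTV8.
    """
    percent_changed = changed_records / total_records * 100

    if percent_changed > limit_with_delta:
        # Too many changes: rebuilding the aggregate is cheaper than
        # posting a delta for every changed record.
        return "Adapt by Reconstruction"
    # Few changes: post the old records negatively and the new records
    # positively (Adapt by Delta).
    return "Adapt by Delta"


print(pick_adaptation_strategy(1_500, 10_000, limit_with_delta=20))
# -> Adapt by Delta (15% changed, below the 20% threshold)
```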

  • Block Size

If the E or F table of the source for the aggregate structure is larger than the BLOCKSIZE parameter in table RSADMINC, the source is not all read at once, but is divided into blocks. This prevents an overflow of the temporary table space PSAPTEMP. A characteristic, with a value range divided into intervals, is used to divide the source into blocks. Only data from this type of interval is read from the source and written to the aggregate.

If no value is maintained for the BLOCKSIZE parameter in Customizing, or if the value is 0, the default value of 100,000,000 is used (exception: DB6 = 10,000,000).
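
To make the effect of this parameter concrete, here is a minimal sketch (Python, with a hypothetical helper and row counts, not an SAP API) of how the number of blocks read from the source could be estimated:

```python
import math

def estimate_blocks(source_rows, blocksize=0, db6=False):
    """Estimate how many blocks the aggregate source would be split into.

    source_rows: hypothetical row count of the E/F fact table of the source.
    blocksize:   value of the BLOCKSIZE parameter; 0 (or unset) falls back to
                 the default of 100,000,000 rows (10,000,000 on DB6).
    """
    if not blocksize:
        blocksize = 10_000_000 if db6 else 100_000_000
    return max(1, math.ceil(source_rows / blocksize))


# A 450-million-row source with the default block size is read in 5 blocks,
# keeping each block small enough to avoid overflowing PSAPTEMP.
print(estimate_blocks(450_000_000))  # -> 5
```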

  • Wait Time for Change Run Lock (in Minutes)

The waiting period (in minutes) specifies how long a process is to wait when it encounters a lock created by other parallel processes, such as for loading hierarchies or master data, another change run, or rolling up of aggregates.

If the system does not find a relevant lock, the change run waits the length of time specified here without creating its own lock.

As an example, the screenshot below from the change run monitor shows the changed and total records for the master data.

Based on this, the percentage change in master data is calculated, which comes to 11.98 and 11.73 percent respectively.

This percentage is compared with the threshold value defined in the “Limit with Delta” parameter for aggregates (10 in this case).

As the "Limit with Delta" parameter set here is less than the percent of master data changes, cube X and Y uses Rebuild (Adapt by Reconstruction) strategy for adapting its aggregates.


The standard value of “Limit with Delta” is 20. However, setting this value depends entirely on the volume of changes that occur in the master data. It is recommended to keep the threshold value above the maximum percentage change expected in the master data, as rebuilding aggregates can take an enormous amount of time, leading to the ACR running into a deadlock or getting stuck.

Many of you might already be aware of these concepts, but for those who run into errors and data load delays due to such issues, I hope this helps.

Regards,

Nikhil
