SAP BW on HANA Data Classification

This blog covers data classification (multi-temperature data) for SAP BW with HANA as the database. The details below show how to move data from hot to warm and vice versa.

Data classification becomes important when there are bottlenecks in main memory: it is preferably non-active data that is removed (unloaded) from main memory to disk. When non-active (warm) data that is not in memory needs to be accessed, the system loads the smallest possible amount of data into memory, based on the columns of the relevant partition. The example below covers only HOT/WARM.
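The load state of column-store tables can be inspected on the database with the monitoring view M_CS_TABLES. Below is a minimal sketch, assuming a placeholder BW schema name 'SAPBW'; it lists which tables are currently held in main memory and how much memory they occupy.

```sql
-- Sketch: load state and memory footprint of BW column-store tables.
-- Schema name 'SAPBW' is a placeholder for the actual BW schema.
SELECT schema_name,
       table_name,
       part_id,
       loaded,                                           -- TRUE / PARTIALLY / FALSE
       ROUND(memory_size_in_total / 1024 / 1024, 2) AS memory_mb
FROM   m_cs_tables
WHERE  schema_name = 'SAPBW'
ORDER  BY memory_size_in_total DESC;
```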

HOT data:

Data that is accessed very often, for example, for reporting or for processes in data warehouse management (queries on InfoCubes and DataStore objects).

WARM data:

Data that is rarely or no longer accessed (write-optimized DataStore objects of the corporate memory, Persistent Staging Areas, or write-optimized DataStore objects of the acquisition layer).

COLD data:

Data of a BW system that is no longer required and that can be, or already was, archived using near-line storage.

Optimization of the data stores with regard to non-active data

In a BW system, most data classified as WARM is stored in the Persistent Staging Areas of the DataSources and in write-optimized DataStore objects of the acquisition layer and the corporate memory. These objects often contain data that is no longer used; however, new data is loaded into them on a daily basis, so the period since the last usage is normally no longer than 24 hours. Despite this, it is preferable that such objects are displaced from memory rather than objects that, for example, are used for reporting. You should also avoid loading data that is no longer used into main memory when new data is loaded.

Persistent Staging Areas and write-optimized DataStore objects have therefore been optimized with regard to non-active data by means of the Early Unload setting and displacement on the SAP HANA DB.


What is Displacement?

Displacement of the columns of a partition is carried out if a bottleneck occurs in main memory, that is, if the main memory used by a database process exceeds a threshold value. The SAP HANA DB uses a least-recently-used (LRU) algorithm for displacing table columns: the columns of a table partition whose data has not been accessed for the longest period of time are removed from main memory first.
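Such displacements can be observed afterwards in the monitoring view M_CS_UNLOADS; the sketch below filters on unloads triggered by memory pressure (as opposed to explicit UNLOAD statements).

```sql
-- Sketch: recent column unloads and their cause.
-- REASON = 'LOW MEMORY' marks LRU displacement under memory pressure,
-- REASON = 'EXPLICIT' marks a manual UNLOAD statement.
SELECT unload_time,
       schema_name,
       table_name,
       column_name,
       reason
FROM   m_cs_unloads
WHERE  reason = 'LOW MEMORY'
ORDER  BY unload_time DESC
LIMIT  50;
```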


Setting EARLY UNLOAD of a table in the SAP HANA DB

For some BW objects, you can make the EARLY UNLOAD setting. If a bottleneck occurs in main memory, the data of an object flagged in this way is prioritized for displacement from main memory. As a consequence, such objects are displaced sooner than objects that have not been accessed for a long time but do not have this setting.
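On the database level, the EARLY UNLOAD flag corresponds to the unload priority of the table. As a sketch of what the setting looks like in plain HANA SQL (the table name is a placeholder; in a BW system the setting is normally maintained through the BW tools described below rather than directly on the database):

```sql
-- Sketch: raise the unload priority so the table's columns are displaced
-- earlier under memory pressure (7 = earliest unload).
-- "/BIC/B0000123000" is a placeholder PSA table name.
ALTER TABLE "/BIC/B0000123000" UNLOAD PRIORITY 7;

-- Reset to the default priority (hot):
ALTER TABLE "/BIC/B0000123000" UNLOAD PRIORITY 5;
```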

Implementation of non-active data in BW:

As of Support Package 08 and SAP HANA Support Package 05, the "non-active data" concept is introduced in the BW system through the following settings, which are implemented in the system automatically.

Persistent Staging Area tables, change log tables, and write-optimized DataStore objects are flagged as EARLY UNLOAD by default. This means that these objects are displaced from memory before other BW objects (such as InfoCubes and standard DataStore objects).

As part of the RS_BW_POST_MIGRATION program, select step 14 and execute it so that all PSA tables are set to unload priority 7.
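The result can be verified on the database by reading the unload priority from the SYS.TABLES system view. A sketch, assuming the usual /BIC/B* naming convention for PSA tables and the placeholder schema 'SAPBW':

```sql
-- Sketch: check the unload priority of PSA tables (5 = default, 7 = earliest unload).
-- Schema name and table-name pattern are assumptions.
SELECT table_name,
       unload_priority
FROM   sys.tables
WHERE  schema_name = 'SAPBW'
  AND  table_name LIKE '/BIC/B%'
ORDER  BY table_name;
```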

Persistent Staging Areas, change logs, and write-optimized DataStore objects are also partitioned by request. Partitions that have been displaced once are not loaded again, because new data is loaded only into the newest partition and older data is normally no longer accessed.

However, if old data does need to be accessed, it is loaded into main memory. If only certain columns of the displaced objects are accessed, only those columns are loaded into main memory and the other columns remain on disk (for example, lookups in transformations that select only certain columns).
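The column-wise load behavior can be made visible with the per-column monitoring view M_CS_ALL_COLUMNS and the explicit LOAD/UNLOAD statements; the sketch below uses placeholder table and column names.

```sql
-- Sketch: which columns of a displaced table are currently in memory?
SELECT column_name,
       loaded,
       ROUND(memory_size_in_total / 1024, 2) AS memory_kb
FROM   m_cs_all_columns
WHERE  table_name = '/BIC/B0000123000';      -- placeholder table name

-- Explicitly load only the columns a lookup would touch (placeholder columns) ...
LOAD "/BIC/B0000123000" ("REQUEST", "MATERIAL");

-- ... or push the whole table back to disk, e.g. for testing.
UNLOAD "/BIC/B0000123000";
```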

This concept improves main memory resource management automatically, which also affects sizing: if Persistent Staging Areas, change logs, and write-optimized DataStore objects contain large amounts of non-active data, that data remains on disk and main memory can be sized correspondingly smaller.

EARLY UNLOAD for other BW objects

You can also flag InfoCubes and other DataStore objects as EARLY UNLOAD. Since these objects are not partitioned by request, either the complete object is loaded into main memory, or it is never displaced because it is accessed too often (for example, by a daily loading process). This can even lead to counterproductive behavior if such objects are displaced due to the EARLY UNLOAD flag but then reloaded into main memory a short time later for a loading process or a query. As a consequence, you should use this setting only very restrictively for such objects, for example, if an InfoCube exists for every year but you only report on the current year: for InfoCubes from past years, you could make this setting because they are no longer connected to loading processes and no reporting is carried out on them.

Note: Persistent Staging Areas and write-optimized DataStore objects are set to EARLY UNLOAD by default. You can use transaction RSHDBMON to reset this behavior or to set it for other BW objects. The setting is made directly on the database, which means that you can, and must, change it for a BW object directly in a production system. The setting is not transported, but it is also not affected by transports.

Unload Priorities:

There are three unload priorities: 0, 5, and 7.

The example below shows how to change the Early Unload flag in BW on HANA (changing the priority from 5 to 7, which is the earliest unload).

Check the unload priority for the F fact table in SE14 -> Storage Parameters.

Unload Priority is 5

Now go to transaction RSHDBMON.

After clicking on Details, select the object type and object.

Execute

Check Activate Early Unload

Early Unload is now checked, as marked above.

Note: Checking Deactivate Early Unload for an object where Early Unload is activated means setting the data from warm back to hot (the Early Unload priority is changed from 7 back to 5).

Now go to SE14 -> Storage Parameters and check the Unload Priority

The unload priority has changed to 7, i.e., earliest unload to disk.
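The same check can also be made directly on the database; a sketch with a placeholder F fact table name:

```sql
-- Sketch: confirm the new unload priority and the current load state of the
-- InfoCube F fact table ("/BIC/FZSALES" is a placeholder name).
-- M_CS_TABLES returns one row per partition.
SELECT t.table_name,
       t.unload_priority,
       c.part_id,
       c.loaded
FROM   sys.tables  AS t
JOIN   m_cs_tables AS c
  ON   c.schema_name = t.schema_name
 AND   c.table_name  = t.table_name
WHERE  t.table_name = '/BIC/FZSALES';
```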


Thanks..!!!

