In the first part of this documentation I explained how to install and set up HANA MDC with dynamic tiering, including the deployment of DWF on the tenant database. In this second part I will explain how to configure DWF (the DLM part) to create external storage destinations and move tables from HANA to external storage.

Create external storage

With DWF installed, I am now able to move tables to external destinations, but before doing so I need to create the destinations in DLM.

Note: When creating a storage destination, DLM provides a default schema for the generated objects; this schema can be overridden.

Dynamic Tiering

IQ 16.0

Note: the parameters to use must match the information of the SDA connection.
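As a quick sanity check (not part of the DLM setup itself), the existing SDA remote sources and their adapters can be listed from the SQL console to make sure the destination parameters match the connection:

-- List the existing SDA remote sources; the destination parameters in DLM must match one of them
SELECT REMOTE_SOURCE_NAME, ADAPTER_NAME FROM SYS.REMOTE_SOURCES;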

SPARK

Note: for Spark, the schema of the source persistence object is used for the generated objects.

Before creating the destination, I have to tell the index server that my Spark remote source will be used for data aging.

I run the following SQL statements from the studio:

ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
SET ('data_aging', 'spark_remote_source') = 'SPARK_LAB' WITH RECONFIGURE;

ALTER SYSTEM ALTER CONFIGURATION ('xsengine.ini', 'SYSTEM')
SET ('data_aging', 'spark_remote_source') = 'SPARK_LAB' WITH RECONFIGURE;
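Optionally, the applied values can be verified afterwards from the M_INIFILE_CONTENTS monitoring view:

-- Check that both services now point to the Spark remote source
SELECT FILE_NAME, SECTION, KEY, VALUE
FROM M_INIFILE_CONTENTS
WHERE SECTION = 'data_aging' AND KEY = 'spark_remote_source';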

Also, on the Spark Controller side, the hanaes-site.xml file needs to be edited in order to set up the extended storage.

My three external storage destinations are now created, but as we can see they are inactive; to activate them, hit “Activate”.

Once activated

Move tables to external storage

With my external storage destinations added to DLM, I need to create a lifecycle profile for each of them in order to move tables into them.

The profile allows me to specify whether I want to move a group of tables or only specific tables, and the way I want to move them: trigger-based or manual.

Note: When using SAP IQ as the storage destination type, you need to manually create the target tables in IQ. (use the help menu to generate the DDL)
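As a purely illustrative sketch (the actual DDL comes from the help menu and depends on the source table), the statement to run on the IQ side is a plain CREATE TABLE matching the source structure, for example:

-- Hypothetical target table on IQ for a source table named "Crime" (schema and columns are examples)
CREATE TABLE "DLM_TARGET"."Crime" (
"ID" INTEGER NOT NULL,
"CATEGORY" VARCHAR(100),
"REPORT_DATE" DATE
);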

From the destination attribute options you can specify the relocation direction of the table transfer and the packet size to be transferred:

Note: Spark doesn’t support the packet size option.

Depending on the option chosen above, a clash strategy can be defined in order to handle unique key constraint violations.

Note: Spark doesn’t support clash strategies. This means that unique key constraint violations are ignored and records with a unique key might be relocated multiple times, which can result in incorrect data in the storage.

Once the destination attributes are defined, you will need to set up the relocation rule in order to identify the relevant records in the source persistence to be relocated to the target persistence.
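For instance, assuming a hypothetical date column, a rule could relocate every record older than a given cut-off; a simulation essentially evaluates a condition like the following against the source table:

-- Illustrative rule condition: records older than 2015 are candidates for relocation (schema, column and value are examples)
SELECT COUNT(*) FROM "SOURCE_SCHEMA"."Crime" WHERE "REPORT_DATE" < '2015-01-01';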

When satisfied, save and activate your configuration, and optionally run a simulation to test it.

When the configuration is saved and activated for IQ and DT, the generated object (aka the generated procedure) is created.
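Assuming the destination schema proposed by DLM is kept, the generated procedure can be found in the catalog, for example:

-- List the generated objects created by DLM in the destination schema (schema name is an example)
SELECT SCHEMA_NAME, PROCEDURE_NAME FROM SYS.PROCEDURES WHERE SCHEMA_NAME = 'DLM_TARGET';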

For the purpose of this document, I’ll trigger all my data movements manually.

When the triggered job is running, the record count should match what is defined in the relocation rule. For each external destination the log can be checked.
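Besides the log, a simple way to double-check the result is to compare the record counts on both sides (schema and table names are examples from my scenario):

-- Records remaining in the hot store vs. records relocated to the external destination
SELECT COUNT(*) FROM "SOURCE_SCHEMA"."Crime";
SELECT COUNT(*) FROM "DLM_TARGET"."Crime";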

Query tables from the external source

In order to query the data from the external storage now that the tables have been moved, I first need to check the generated objects in the destination schema.

I can see the two tables moved: one in dynamic tiering (“Insurance”) and the other one as a virtual table for IQ (“Crime”).

One additional table, “PRUNING”, shows the scenario and the criteria defined in the rule editor for the table.
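Once the generated objects are in place, the relocated data can be queried through them like any other table (the schema name is an example, the table names are the ones from my scenario):

-- Query the relocated data through the generated objects
SELECT TOP 10 * FROM "DLM_TARGET"."Insurance"; -- extended table in dynamic tiering
SELECT TOP 10 * FROM "DLM_TARGET"."Crime"; -- virtual table pointing to IQ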

For Spark, the schema of the source persistence object is used for the generated objects.

My configuration is now complete for dynamic tiering on a HANA multitenant database with DLM.
