
In this documentation I'll explain how to install and configure SAP HANA MDC with Dynamic Tiering and deploy SAP HANA Data Warehousing Foundation 1.0, in order to support data management and distribution within my landscape, which includes Hadoop (Spark) and Sybase IQ.

For my setup I'll use my own lab on VMware vSphere 6.0, running SAP HANA revision 112.02, Sybase IQ 16.0, and the Hadoop HDFS stack 2.7.2.

I'll create the new environment using the VM template explained in my previous documentation.

Disclaimer: my deployment is for test purposes only; I keep the security simple from a network perspective in order to realize this configuration, and I use open source software.

Order of execution


  • Install HANA in MDC mode
  • Connect the tenant database to IQ and Hadoop over SDA
  • Install Dynamic Tiering
  • Set up Dynamic Tiering for the tenant database
  • Install SAP HANA Data Warehousing Foundation
  • Create external storage
  • Move tables to the external source
  • Query tables from the external source

Guides used

SAP HANA Multitenant Database Containers

SAP HANA Dynamic Tiering: Administration Guide

SAP HANA Data Warehousing Foundation Installation Guide

SAP HANA Data Warehousing Foundation 1.0 Planning PAM

Notes used

2225582 - SAP HANA Dynamic Tiering SPS 11 Release Note

2092669 - Release Note SAP HANA Data Warehousing Foundation

2290350 - Spark Controller Compatibility Matrix

2183717 - Data Type Support for Extended Tables

2290922 - Unsupported Features and Datatypes for a Spark Destination

Links used

SAP HANA Help

High-level architecture overview

From a high-level architecture point of view, I'll deploy 4 VMs, all registered in my internal DNS:

  • vmhana01 – master HANA node, multi-tenant
  • vmhana07 – Dynamic Tiering worker node
  • vmiq01 – Sybase IQ 16.0
  • hadoop – Hortonworks Hadoop HDFS stack 2.7.2

Detailed overview

From a detailed point of view, my HANA MDC database will be set up with one tenant database (TN1), connected over SDA to Sybase IQ and to Hadoop via the Spark controller.

The TN1 database will have DWF 1.0 deployed on it and will be configured with the DT host as a dedicated service.

The Dynamic Tiering host shares the /hana/shared file system with the vmhana01 host so that the HANA Dynamic Tiering database can be installed on it.

Install HANA in MDC mode

In my previous documentation I have already explained how to install and configure HANA in MDC mode via the command line and SQL statements.

This time I'll explain how to do it using the graphical tool (hdblcmgui), and how to set up the tenant database from the HANA cockpit.

With my media downloaded, I'm ready to start.

Note: I'll only capture the important screens.

Note: I install my system as a single-host system, because I'll add my DT host later in the process.

Note: Dynamic Tiering doesn’t support high tenant isolation.

My system is now up and running

With the system ready, I'll create my tenant database from the cockpit.
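For those who prefer SQL over the cockpit, the same tenant can be created with a statement on the SYSTEMDB; a minimal sketch, where the password is a placeholder for your own value:

-- Run on SYSTEMDB: create the tenant database TN1
-- NotMyRealPassword1 is a placeholder for the tenant SYSTEM user password
CREATE DATABASE TN1 SYSTEM USER PASSWORD NotMyRealPassword1;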

My tenant is now up and running

Now, from a network perspective, if I want to access the cockpit of my tenant database, some changes need to be made at the system database layer.

From the configuration panel, filter on "xsengine.ini" and open the "public_urls" parameter.

Double-click on http_url or https_url to set up the URL (alias) used to access the tenant database.

Once done, you can see that the URL to access the tenant TN1 database is set up.
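The same change can also be made with SQL from the SYSTEMDB; a sketch, assuming the alias tn1.homelab.local and instance number 00 (both placeholders for your own values):

-- Run on SYSTEMDB: publish the tenant XS http URL under an alias
ALTER SYSTEM ALTER CONFIGURATION ('xsengine.ini', 'database', 'TN1')
SET ('public_urls', 'http_url') = 'http://tn1.homelab.local:8000'
WITH RECONFIGURE;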

Note: if you are working with a DNS, make sure the alias is registered; if you are not using a DNS, add the entry to the /etc/hosts of the HANA host.

With my alias added, I can access the cockpit.

Connect tenant database to Sybase IQ and Hadoop over SDA

My tenant database is now running; I need to connect it to remote sources to store my aging data. Let's start with my IQ database: before creating the connection in SDA, install and configure the IQ ODBC client library on the HANA server.

To create my connection I will use the following statement:

CREATE REMOTE SOURCE IQHOMELAB ADAPTER iqodbc
CONFIGURATION 'Driver=libdbodbc16_r.so;ServerName=IQLAB;CommLinks=tcpip(host=vmiq01:1113)'
WITH CREDENTIAL TYPE 'PASSWORD' USING 'user=iqhomelab;password=xxxxx';

My IQ connection is working, so I can add the other one, to Hadoop via the Spark controller.
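As a sketch of that second connection: a remote source for the Spark controller uses the sparksql adapter and, by default, the controller's port 7860; the host name hadoop and the hanaes user below are assumptions from my lab, so replace them with your own values:

-- Run on the TN1 tenant: remote source pointing to the Spark controller on the Hadoop node
CREATE REMOTE SOURCE "SPARKLAB" ADAPTER "sparksql"
CONFIGURATION 'port=7860;ssl_mode=disabled;server=hadoop'
WITH CREDENTIAL TYPE 'PASSWORD' USING 'user=hanaes;password=xxxxx';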

Install Dynamic Tiering

Installing Dynamic Tiering is done in two parts: you first install the add-on component, and you then add the host which will execute the queries; both can be done in one step.

Note: before starting the installation, make sure the necessary folders and file systems are created.

And that the /hana/shared file system is mounted on the Dynamic Tiering host.

The installation can be done from the graphical interface, the command line, or the web interface; for this documentation I'll use the second option (command line), since last time I used the web interface.

Once the installation is completed, we can see that Dynamic Tiering is installed but not configured yet.

From a service perspective, DT appears as a "utility" in the SYSTEMDB hosts tab and is not visible to the tenant database.
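The same state can be checked in SQL; a small sketch run on the SYSTEMDB, where the esserver service should be listed but not yet assigned to any tenant:

-- Run on SYSTEMDB: list the services per host
SELECT HOST, PORT, SERVICE_NAME, ACTIVE_STATUS FROM SYS.M_SERVICES;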

Set up Dynamic Tiering for the tenant database

Setting up DT for the tenant database consists of making the DT service (esserver) visible to the tenant database. Keep in mind that a DT host and a tenant database work 1:1.

The first step is to modify properties in the global.ini file to prepare resources on each tenant database to support SAP HANA dynamic tiering.

On the SYSTEM database run the following SQL to enable the tenant database to use DT functionalities:

ALTER SYSTEM ALTER CONFIGURATION ( 'global.ini', 'SYSTEM' ) SET( 'customizable_functionalities', 'dynamic_tiering' ) = 'true'

And check that the parameter is set to "true" in the global.ini.
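The check can also be done with a query instead of the configuration panel; a minimal sketch:

-- Verify the customizable_functionalities entry in global.ini
SELECT FILE_NAME, LAYER_NAME, SECTION, KEY, VALUE
FROM SYS.M_INIFILE_CONTENTS
WHERE FILE_NAME = 'global.ini' AND KEY = 'dynamic_tiering';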

The next step is to isolate the "log" and the "data" of the tenant database for DT. To do so, I will first create at the OS layer two dedicated directories which belong to my tenant DB "TN1".

And run the two following SQL statements to make it active:

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'DATABASE', 'TN1')
SET ('persistence', 'basepath_datavolumes_es') = '/hana/data_es/TN1' WITH RECONFIGURE;

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'DATABASE', 'TN1')
SET ('persistence', 'basepath_logvolumes_es') = '/hana/log_es/TN1' WITH RECONFIGURE;

Then check in the global.ini

With the preparation completed, I can now provision the DT service to my tenant DB by running the following SQL command at the SYSTEMDB layer:

ALTER DATABASE TN1 ADD 'esserver'

TN1 service before the DT provisioning

After the provisioning we can see that DT is now available to TN1.
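To confirm the assignment from the SYSTEMDB side, the cross-database monitoring views can be queried; a minimal sketch:

-- Run on SYSTEMDB: the esserver service should now be listed under TN1
SELECT DATABASE_NAME, HOST, SERVICE_NAME, ACTIVE_STATUS
FROM SYS_DATABASES.M_SERVICES
WHERE DATABASE_NAME = 'TN1';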

Note: once the service (esserver) is assigned to the tenant database, it's no longer visible to the SYSTEMDB.

With the configuration ready, I need to deploy the Dynamic Tiering delivery units into TN1 in order to administrate it. From the modeler perspective, select your tenant DB, then select the HANA_TIERING.tgz and HDC_TIERING.tgz files from the server to be imported.

Once the DUs are imported to the tenant, I assign the necessary roles to my user.

That done, I can access the cockpit and finish the configuration.

Once successfully created, we can check at the OS layer whether the data is written to the correct place.
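A quick way to generate extended storage data for this check is to create a small extended table in TN1; TEST_ES is a throwaway table name of my own, not from any guide:

-- Run on TN1: a table stored in Dynamic Tiering extended storage
CREATE TABLE "TEST_ES" ("ID" INTEGER, "PAYLOAD" VARCHAR(100)) USING EXTENDED STORAGE;
INSERT INTO "TEST_ES" VALUES (1, 'hello dynamic tiering');

After the insert, new files should appear under /hana/data_es/TN1.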

Dynamic Tiering on the tenant database is completed, so I can start the deployment of DWF.

Install SAP HANA Data Warehousing Foundation

SAP DWF content is delivered in software components; each software component contains a functional delivery unit (the delivery units are independent of one another) and a language delivery unit.

  • Functional delivery units provide the core services and the SAP HANA Data Warehousing Foundation applications
  • Language delivery units provide the documentation for the applications

Once the DWF zip file is downloaded, store it but do not decompress it; from the tenant database cockpit, load the zip file in order to install the new software.

Run the installation

With the component installed, some parameters now need to be added to the xsengine.ini of the tenant database in order to configure SAP HANA Data Warehousing Foundation.

From the SYSTEMDB, expand the xsengine.ini and add the following parameters and values.
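The parameters themselves come from the DWF installation guide; the general pattern for adding them by SQL is below, where <section>, <parameter>, and <value> are placeholders for the entries the guide specifies:

-- Run on SYSTEMDB: add a DWF parameter to the tenant's xsengine.ini
-- <section>, <parameter>, <value> are placeholders; see the DWF installation guide
ALTER SYSTEM ALTER CONFIGURATION ('xsengine.ini', 'DATABASE', 'TN1')
SET ('<section>', '<parameter>') = '<value>' WITH RECONFIGURE;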

Data Distribution Optimizer and Data Lifecycle Manager use this mechanism to execute SQL statements from their server-side JavaScript applications, for example when generating and executing redistribution plans in Data Distribution Optimizer.

To enable this functionality, I need to activate it in the XS Artifact Administration of my tenant database.

Locate the two components sap.hdm.ddo.sudo and sap.hdm.core.sudo and activate them.

With them activated, I can grant the necessary roles to my user so I can administrate DWF from the cockpit.
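Roles from imported delivery units are repository roles, so they are granted through the _SYS_REPO procedure rather than a plain GRANT; a sketch, where the role name is a placeholder for the DWF roles listed in the installation guide and MYUSER is my lab user:

-- Grant an activated (repository) role to a database user
-- '<sap.hdm... role name>' is a placeholder; see the DWF installation guide for the role list
CALL "_SYS_REPO"."GRANT_ACTIVATED_ROLE"('<sap.hdm... role name>', 'MYUSER');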

Note: I gave my account all the admin roles, but in the real world that won't happen 😉

And now from the cockpit I can see them at the following URLs:

http://<tenant>:<port>/sap/hdm/dlm/index.html

http://<tenant>:<port>/sap/hdm/ddo/index.html

Finally, generate the default schema for generated objects and the roles needed for Data Lifecycle Manager with the following statement:

call "SAP_HDM_DLM"."sap.hdm.dlm.core.db::PREPARE_BEFORE_USING"();

With the components in place, I can now start the configuration to move tables to external storage.

See Part II
