
SAP HANA and In-Memory Computing


If you try to work with SAP HANA Studio on a HiDPI (high-resolution) display, like an Apple Retina or Microsoft Surface, you will see that there is a problem with the size of the icons:


On a Surface 4 at 2736x1824, the icons are tiny and unusable, as you can see in the screenshot (compare them with the size of the fonts):




As HANA Studio is based on Eclipse, I tried some of the recommendations that I found in https://bugs.eclipse.org/bugs/show_bug.cgi?id=421383#c60, with good results:




Windows instructions


Create a new registry key with REGEDIT


Navigate to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\SideBySide\

And create a new entry (DWORD VALUE)


Name: PreferExternalManifest

Value: 1
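If you prefer, the same entry can be created by importing a .reg file (this is just the key and DWORD value described above in .reg form):

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\SideBySide]
"PreferExternalManifest"=dword:00000001
```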


Create a Manifest file


Open hdbstudio.exe location (by default C:\Program Files\sap\hdbstudio)

Create a new file: hdbstudio.exe.manifest (or use the attached file and remove the .xml extension)

with this content:



<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0" xmlns:asmv3="urn:schemas-microsoft-com:asm.v3">
    <trustInfo xmlns="urn:schemas-microsoft-com:asm.v2">
        <security>
            <requestedPrivileges>
                <requestedExecutionLevel xmlns:ms_asmv3="urn:schemas-microsoft-com:asm.v3"
                                         level="asInvoker" ms_asmv3:uiAccess="false"/>
            </requestedPrivileges>
        </security>
    </trustInfo>
    <asmv3:application>
        <asmv3:windowsSettings xmlns="http://schemas.microsoft.com/SMI/2005/WindowsSettings">
            <ms_windowsSettings:dpiAware xmlns:ms_windowsSettings="http://schemas.microsoft.com/SMI/2005/WindowsSettings">false</ms_windowsSettings:dpiAware>
        </asmv3:windowsSettings>
    </asmv3:application>
</assembly>

Now, you can open your HANA Studio with "normal" icons :-)



Disclaimer: Note that modifying the registry can cause serious problems that may require you to reinstall your operating system.

Far-fetched...

A colleague asked me over a year ago (2015 and SPS 9 ... sounds ancient now, I know) whether it is possible to leverage information models in a different SAP HANA instance via SDA (Smart Data Access - look it up in the documentation if you didn't know this yet).

The scenario in mind here was a SAP BW on HANA system reading data from a Suite on HANA system and using the SAP HANA live content (http://scn.sap.com/docs/DOC-59928, http://help.sap.com/hba) installed there.

The Open ODS feature of SAP BW on HANA was to be used here as it allows reading from tables and views exposed via SDA in the local SAP HANA instance.


Now this idea sounds splendid.

Instead of having to manually build an extractor or a data export database view (both of which can be extensive development efforts), why not simply reuse the ready-made content of SAP HANA Live for this?

As usual, the proof of the pudding is in the eating, and as soon as it was tried out, a severe shortcoming was identified:


select * from "LARS"."IMACCESS_LBPB/SCV_USERS"
    ('PLACEHOLDER' = ('$$userNameFilter$$', 'USER_NAME= LARS'))

Could not execute 'select * from "LARS"."IMACCESS_LBPB/SCV_USERS"('PLACEHOLDER' = ('$$userNameFilter$$', 'USER_NAME= ...'
SAP DBTech JDBC: [7]: feature not supported:
Cannot use parameters on row table: IMACCESS_LBPB/SCV_USERS: line 1 col 22 (at pos 21)


I had just created an Information Model similar to the ones provided with the SAP HANA Live content, including the heavily used Input Parameters that make a model flexible and reusable (and also allow filter push-down), but SAP HANA tells me:

"Nope, I'm not doing this, because the PLACEHOLDER syntax only works for information views and not for 'row tables'."


This 'row table' part of the error message stems from the fact that SAP HANA SPS 9 exposed SDA tables as row store tables. This also means that all data read from the SDA source gets temporarily stored in SAP HANA row store tables before being further processed in the query.

One reason for doing that was probably that the mapping from ODBC row format to column store format (especially the data type mapping from other vendors' DBMSs) was easier to manage with the SAP HANA row store.

Having said that, when accessing another SAP HANA system, such format mapping surely should be no problem, right?


And in fact there is an option to change this: the parameter "virtual_table_format" in the "smart_data_access" section of the indexserver.ini:


= Configuration

Name                      | Default
  indexserver.ini         |
    smart_data_access     |
      virtual_table_format| auto


This parameter can be set to ROW, COLUMN or AUTO (the SPS 11 default value, automatically using the right format depending on the SDA adapter capabilities).

For more on how "capabilities" influence the SDA adapter behavior, check the documentation.
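Assuming the standard ALTER SYSTEM ALTER CONFIGURATION syntax, the parameter could be changed with a statement like the sketch below (the section and parameter names are the ones shown above; verify the behaviour on your revision):

```sql
-- sketch: switch the SDA virtual table format system-wide (ROW, COLUMN or AUTO)
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
    SET ('smart_data_access', 'virtual_table_format') = 'COLUMN'
    WITH RECONFIGURE;
```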


Back last year I wasn't aware of this parameter, so I couldn't try whether the query would have worked after changing it.

Anyhow, like all good problems the question just popped up again and I had an opportunity to look into this topic once more.


"Smarter" at last...

And lo and behold, with SAP HANA SPS 11 the PLACEHOLDER syntax works like a charm even for virtual tables.


SELECT -- local execution ---
     "D10_VAL",
     "D100_VAL",
     sum("KF1") AS "KF1",
     sum("KF2") AS "KF2",
     sum("CC_KF1_FACTORED") AS "CC_KF1_FACTORED"
FROM "_SYS_BIC"."devTest/stupidFactView"
    ('PLACEHOLDER' = ('$$IP_FACTOR$$','34'))
WHERE "D10_VAL" = 'DimValue9'
AND "D100_VAL" = 'DimValue55'
GROUP BY "D10_VAL", "D100_VAL"



D10_VAL     D100_VAL    KF1         KF2         CC_KF1_FACTORED

DimValue9   DimValue55  -1320141.70 525307979   -44884817     



successfully executed in 352 ms 417 µs  (server processing time: 7 ms 385 µs)

successfully executed in 356 ms 581 µs  (server processing time: 8 ms 437 µs)

successfully executed in 350 ms 832 µs  (server processing time: 8 ms 88 µs)



OPERATOR_NAME       OPERATOR_DETAILS                                         EXECUTION_ENGINE
COLUMN SEARCH       'DimValue9', ...,
                    TO_BIGINT(TO_DECIMAL(SUM(FACT.KF1), 21, 2) * '34'),
                    SUM(FACT.KF2)                                            COLUMN
  JOIN              JOIN CONDITION:
                    (INNER) FACT.DIM100 = DIM1000.ID,
                    (INNER) FACT.DIM10 = DIM10.ID                            COLUMN
    COLUMN TABLE                                                             COLUMN
    COLUMN TABLE    FILTER CONDITION: DIM1000.VAL = n'DimValue55'            COLUMN
    COLUMN TABLE    FILTER CONDITION: DIM10.VAL = n'DimValue9'               COLUMN

See how the SPS 11 SQL optimisation is visible in the EXPLAIN PLAN: since the tables involved are rather small and only two dimensions are actually referenced, the OLAP engine (usually responsible for STAR SCHEMA queries) didn't kick in, but the execution was completely done in the Join Engine.


Also notable: the calculated key figure was reformulated internally into a SQL expression AFTER the parameter value (34) was supplied.

This is a nice example for how SAP HANA does a lot of the query optimisation upon query execution.
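A quick sanity check of that factored key figure in plain Python (just to illustrate the arithmetic; the input values are taken from the result set above):

```python
from decimal import Decimal

kf1_sum = Decimal("-1320141.70")  # SUM(FACT.KF1) from the result set above
ip_factor = Decimal("34")         # value supplied for $$IP_FACTOR$$

# Mirrors TO_BIGINT(TO_DECIMAL(SUM(FACT.KF1), 21, 2) * '34');
# int() drops the fractional part (truncation toward zero), matching the result.
cc_kf1_factored = int(kf1_sum * ip_factor)
print(cc_kf1_factored)  # -44884817, matching CC_KF1_FACTORED above
```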

If I had used a placeholder (question mark - ?) for the value instead, this whole statement would still work, but it would not have been optimised by the SQL optimizer and instead the calculation view would've been executed "as-is".
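For illustration, the bind variable variant would look like the sketch below; the value for IP_FACTOR is supplied only when the prepared statement is executed, so the SQL optimizer cannot inline it:

```sql
-- same query with a bind variable instead of a literal parameter value
SELECT sum("KF1") AS "KF1", sum("KF2") AS "KF2"
FROM "_SYS_BIC"."devTest/stupidFactView"
    ('PLACEHOLDER' = ('$$IP_FACTOR$$', ?))
WHERE "D10_VAL" = 'DimValue9'
  AND "D100_VAL" = 'DimValue55'
```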


Now the same statement accessing the "remote" view:


SELECT -- SDA access ---
     "D10_VAL",
     "D100_VAL",
     sum("KF1") AS "KF1",
     sum("KF2") AS "KF2",
     sum("CC_KF1_FACTORED") AS "CC_KF1_FACTORED"
FROM "DEVDUDE"."self_stupidFactView"
    ('PLACEHOLDER' = ('$$IP_FACTOR$$','34'))
WHERE "D10_VAL" = 'DimValue9'
AND "D100_VAL" = 'DimValue55'
GROUP BY "D10_VAL", "D100_VAL"


D10_VAL     D100_VAL    KF1         KF2         CC_KF1_FACTORED

DimValue9   DimValue55  -1320141.70 525307979   -44884817     


successfully executed in 351 ms 430 µs  (server processing time: 12 ms 417 µs)

successfully executed in 360 ms 272 µs  (server processing time: 11 ms 15 µs)

successfully executed in 359 ms 371 µs  (server processing time: 11 ms 914 µs)


OPERATOR_NAME           OPERATOR_DETAILS                                                       EXECUTION_ENGINE
COLUMN SEARCH           'DimValue9', self_stupidFactView.D100_VAL, ...
  COLUMN SEARCH         SUM(self_stupidFactView.KF1), ...,
                        (ENUM_BY: REMOTE_COLUMN_SCAN)                                          ROW
    REMOTE COLUMN SCAN  SELECT SUM("self_stupidFactView"."KF1"), ...
                        FROM "_SYS_BIC"."devTest/stupidFactView"
                            ( PLACEHOLDER."$$IP_FACTOR$$" => '34' )  "self_stupidFactView"
                        WHERE "self_stupidFactView"."D10_VAL" = 'DimValue9'
                        AND "self_stupidFactView"."D100_VAL" = 'DimValue55'
                        GROUP BY "self_stupidFactView"."D100_VAL"                              EXTERNAL



Because of the parameter setting mentioned above, SAP HANA can now create a statement that can be sent to the "remote" database to produce the desired output.

Note how the statement in the REMOTE COLUMN SCAN is not exactly the statement we used: the aggregated columns now come first, and the parameter syntax is the new "arrow"-style syntax (PLACEHOLDER."$$<name>$$" => '<value>'). This nicely reveals how SDA rewrites the statement in order to get the best outcome depending on the source system's capabilities.


For a better overview of what happens in both scenarios, please look at this piece of ASCII art in awe:


|[ ]| = system boundaries


local statement execution

|[SQL statement ->    Information view -> Tables +]|


|[       RESULT < -------------------------------+]|



SDA statement execution

|[SQL Statement -> Virtual Table -> SDA connection ->]| --- ODBC transport --> |[ Information view -> Tables +]|


|[       RESULT < -----------------------------------]| <-- ODBC transport --- |[--<  RESULT <---------------+]|


For more on SDA, BW on HANA and how both work together have a look here:


And while there, don't miss out on the other "new in SPS 11" stuff (if you're not already familiar with it anyhow).


The Web, Stars and the importance of trying things out


For the question discussed above I of course needed to have a test setup ready.

Creating the SDA remote source was the easiest part here, as I just created a "self" source system (BW veterans will remember this approach) that simply pointed to the very same SAP HANA instance.


In order to emulate a proper SAP HANA live view I needed to create an Information model with Input Parameters, so I thought: easy, let's just quickly build one in the Web based development workbench.


So far I've done most of the modelling in SAP HANA studio, so I took this opportunity to get a bit more familiar with the new generation of tools.

I wanted to build a classic Star-Schema-Query model, so that I could use the Star Join function.

From SAP HANA Studio I knew that this required calculation views of FACT and DIMENSION type to work.


Not a problem at all to create those.


A CUBE type view for the fact table


One of the Dimension type views


I then went on and created a new calculation view of data type CUBE and checked the WITH STAR JOIN check box.



Next I tried to add all my FACT and DIMENSION views to the join, but boy was I wrong...


Clicking on the button should allow adding the views.



But there was no option to add the fact view into the STAR JOIN node - while adding dimensions worked just fine:


Now I had all my dimensions in place but no way to join them with the fact table:



After some trial and error (and no, I didn't read the documentation and I should have. But on the other hand, a little more guidance in the UI wouldn't hurt either) I figured out that one has to manually add a projection or aggregation node that feeds into the Star Join:


Once this is done, the columns that should be visible in the Star join need to be mapped:

And NOW we can drag and drop the join lines between the different boxes in the Star Join editor.


Be careful not to overlook that the fact table that just got added might not be within the current window portion. In that case, either zoom out with the [-] button or move the view around via mouse dragging or the arrow icons.



After the joins are all defined (classic star schema, left outer join n:1, remember?), the mapping of the output columns again needs to be done.


Here, map only the key figures, since the dimension columns are already available in the view output anyhow as "shared columns".


For my test I further went on and added a calculated key figure that takes an Input Parameter to multiply one of the original key figures. So, nothing crazy about that, which is why I'll spare you the screenshot battle for this bit.


And that's it again for today.

Two bits of new knowledge in one blog post, tons of screenshots and even ASCII art - not too bad for a Monday I'd say.


There you go, now you know!




SAP HANA Vora is a 'Big Data' in-memory reporting engine sitting on top of a Hadoop cluster.

Data can be loaded into the Hadoop cluster's memory from multiple sources, e.g. HANA, the Hadoop Distributed File System (HDFS), or remote file systems like AWS S3.


With the release of SAP HANA Vora 1.2 it's now possible to graphically model views (e.g. joining multiple datasets), similar to a HANA calculation view.

The following link has all the details to get you started with Vora: SAP HANA Vora - Troubleshooting


This blog contains a very basic introductory example of using the new graphical modelling tool.

The steps are:

  1. Create 2 example datasets in HDFS, using scala and spark
  2. Create Vora tables, linked to these files
  3. Model a view joining these tables, and filtering on key elements


Firstly, the following two datasets need to be created for transactional data and master data (reporting attributes).


Transactional Data

Company  AccountGroup  Amount_USD
GB01     Revenue       1000.00
US01     Revenue       5000.00
US02     Revenue       700.00
AU01     Revenue       300.00
Master Data

Company  Description                 Country
AU01     Australia 1                 AU
GB01     United Kingdom 1            UK
US01     United States of America 1  US
US02     United States of America 2  US


In the following steps, the open-source Zeppelin notebook is used to interact with Vora, Spark and HDFS.


Open Zeppelin and create a new notebook.


Next create the sample data using Spark and Scala.

Create sample Company Data and save to HDFS

fs.delete(new Path("/user/vora/zeptest/companyData"), true)

val companyDataDF = Seq(
    ("GB01","Revenue", 1000.00),
    ("US01","Revenue", 5000.00),
    ("US02","Revenue", 700.00),
    ("AU01","Revenue", 300.00)).toDF("Company","AccountGroup","Amount_USD")

companyDataDF.repartition(1).save("/user/vora/zeptest/companyData", "parquet")


Create sample Company Master Data and save to HDFS

fs.delete(new Path("/user/vora/zeptest/companyAttr"), true)

val companyAttrDF = Seq(

    ("GB01","United Kingdom 1", "UK"),

    ("US01","United States of America 1", "US"),

    ("US02","United States of America 2", "US"),

    ("AU01","Australia 1", "AU")).toDF("Company","Description", "Country")

companyAttrDF.repartition(1).save("/user/vora/zeptest/companyAttr", "parquet")



Let's now check in HDFS that the directories/files have been created.

Directory listing in HDFS

import org.apache.hadoop.fs.FileSystem

import org.apache.hadoop.fs.Path

val fs = FileSystem.get(sc.hadoopConfiguration)

var status = fs.listStatus(new Path("/user/vora/zeptest"))

status.foreach(x=> println(x.getPath))



Next, use the %vora interpreter in Zeppelin to create the Vora tables.


Create the Vora Tables

%vora CREATE TABLE COMPANYDATA (
    COMPANY string,
    ACCOUNTGROUP string,
    AMOUNT_USD double)
USING com.sap.spark.vora
OPTIONS (
    tableName "COMPANYDATA",
    paths "/user/vora/zeptest/companyData/*",
    format "parquet"
)

%vora CREATE TABLE COMPANYATTR (
    COMPANY string,
    DESCRIPTION string,
    COUNTRY string)
USING com.sap.spark.vora
OPTIONS (
    tableName "COMPANYATTR",
    paths "/user/vora/zeptest/companyAttr/*",
    format "parquet"
)



Next, use the %vora option in Zeppelin to check that the tables have been created correctly.

Check the Vora Tables

%vora show tables



Now with the tables created, we are ready to use the modelling tool.

Launch the Vora tools (running on port 9225 on the Developer Edition).

Vora tables created in Zeppelin or other instances of the Spark context may not yet be visible in the Data Browser.

To make them visible, use the SQL Editor and register the previously created tables with the following statement.

REGISTER ALL TABLES USING com.sap.spark.vora OPTIONS (eagerLoad "false") ignoring conflicts

The tables are now visible for data preview via the 'Data Browser'.

Now the 'Modeler' can be used to create the view VIEW_COMPANY_US_REVENUE.

In this example the modelling tool is used to:

  • Join the transactional data and master data on COMPANY
  • Filter by COUNTRY = 'US' and ACCOUNTGROUP = 'Revenue'
  • Summarise the AMOUNT_USD results by COUNTRY


The generated SQL of the view can be previewed
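As a plain-Python sanity check of what the modelled view computes (illustration only, not Vora SQL; the data is the same as created above):

```python
# Toy re-creation of the view logic: join on Company, filter on
# COUNTRY = 'US' and ACCOUNTGROUP = 'Revenue', sum AMOUNT_USD by country.
company_data = [  # (Company, AccountGroup, Amount_USD) -- transactional dataset
    ("GB01", "Revenue", 1000.00),
    ("US01", "Revenue", 5000.00),
    ("US02", "Revenue", 700.00),
    ("AU01", "Revenue", 300.00),
]
company_attr = {  # Company -> (Description, Country) -- master dataset
    "GB01": ("United Kingdom 1", "UK"),
    "US01": ("United States of America 1", "US"),
    "US02": ("United States of America 2", "US"),
    "AU01": ("Australia 1", "AU"),
}

totals = {}
for company, group, amount in company_data:
    country = company_attr[company][1]          # join on Company
    if country == "US" and group == "Revenue":  # filter
        totals[country] = totals.get(country, 0.0) + amount  # aggregate

print(totals)  # {'US': 5700.0}
```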

Once saved the new view VIEW_COMPANY_US_REVENUE can be previewed via the 'Data Browser'.

The new view will be accessible via external reporting tools, Zeppelin and other Spark contexts.

I hope this helps get you started exploring the capabilities of Vora.



In the upcoming weeks we will be posting new videos to the SAP HANA Academy to show new features and functionality introduced with SAP HANA Support Package Stack (SPS) 12.


The topic of this blog is backup and recovery.


For the complete list of blogs see: What's New with SAP HANA SPS 12 - by the SAP HANA Academy


For the SPS 11 version, see SAP HANA SPS 11 What's New: Backup and Recovery - by the SAP HANA Academy



Tutorial Video


SAP HANA Academy - SAP HANA SPS 12: What's New? - Backup and Recovery - YouTube




What's New?


Schedule Data Backups (SAP HANA Cockpit)


You can now schedule complete data backups or delta backups to run at specific intervals using the Backup tile of the SAP HANA cockpit. Backup scheduling relies on the XS Job Scheduler and requires the SAP HANA database to be up and running.


For each schedule, you define the backup type, destination type, prefix and destination.


The schedule requires a name, a start time, and a recurrence: daily, weekly, or monthly, at a given time.



The schedules listing shows a pause (||) button for each schedule. Once created, schedules cannot be modified, only deleted.



Estimated Backup Size (SAP HANA Cockpit)


When you create a backup, SAP HANA cockpit now also displays the estimated backup size. This feature was previously available in SAP HANA studio.


By toggling between the backup types, you can easily compare the estimated backup sizes of complete, incremental and differential backups.




You can view the backup prefix in the Backup Overview page.



Resuming an Interrupted Recovery


As of SPS 12, it is possible to resume an interrupted recovery instead of repeating the entire recovery from the beginning. For this you need a full data backup, optionally delta backups, and log backups.


During a recovery, SAP HANA automatically defines fallback points, which mark the point after which it is possible to resume a recovery. The fallback points are recorded in backup.log, which indicates whether it is possible to resume a recovery.


The Log Replay Interval is configurable (global.ini: log_recovery_resume_point_interval, range 0 - 18000; default = 1800 s).


Note that it is normally only necessary to resume a recovery in exceptional circumstances.



Recovery Enhancements


As of SPS 12, it is now possible to

  • recover an SAP HANA database using a combination of a storage snapshot and delta backups (incremental and differential backups)
  • reconstruct the SAP HANA backup catalog using file-based delta data backups
  • identify a specific data backup by specifying backup destination, prefix, and SID (when using Backint, in case the backup catalog is not available)





For more information see:


SAP Help Portal



SAP Notes




SCN Blogs




Thank you for watching


You can view more free online videos and hands-on use cases to help you answer the What, How and Why questions about SAP HANA and the SAP HANA Cloud Platform on the SAP HANA Academy at youtube.com/saphanaacademy.


Follow us on Twitter @saphanaacademy.


Connect with us on http://linkedin.com/in/saphanaacademy.



This blog provides an overview of all SAP HANA What's New playlists and SCN blogs published by the  SAP HANA Academy together with other related information.




SCN Blogs - by the SAP HANA Academy




SAP HANA Academy playlist on YouTube





What's New blogs on blogs.saphana.com




Introducing SAP HANA on hana.sap.com




SAP Help Portal




SAP Notes




Product Availability Matrix (PAM)



Thank you for watching


You can view more free online videos and hands-on use cases to help you answer the What, How and Why questions about SAP HANA and the SAP HANA Cloud Platform on the SAP HANA Academy at youtube.com/saphanaacademy.


Follow us on Twitter @saphanaacademy


Connect with us on LinkedIn

Join the SAP HANA Distinguished Engineer (HDE) Webinar (part of SAP HANA iFG Community Calls) to learn about SAP HANA on-premise deployment options.

Title: Overview of SAP HANA On-Premise Deployment Options

Speaker: Tomas KROJZL, SAP HANA Distinguished Engineer, SAP HANA Specialist, SAP Mentor, IBM

Moderator: Scott Feldman

Date: June 2nd, 2016  Time: 8:00 - 9:30 AM Pacific, 11:00 - 12:30 PM Eastern (USA), 5:00 PM CET (Germany)


See all SAP HANA Distinguished Engineer (HDE) webinars here.


SAP HANA can be deployed on-premise in many different ways: single-node or scale-out, bare metal or virtualized, appliance or TDI. With these infrastructure options there are multiple ways to share one environment between multiple applications. We provide a basic orientation between individual deployment options and share best practice experience on what are good combinations and which choices should be avoided.

Join the session to get an overview of SAP HANA on-premise deployment options.

To join the meeting: https://sap.na.pgiconnect.com/i800545

Participant Passcode: 110 891 4496

Germany: 0800 588 9331 tel:08005889331,,,1108914496#

UK: 0800 368 0635 tel:08003680635,,,1108914496#

US and Canada: 1-866-312-7353 tel:+18663127353,,,1108914496#

For all other countries, see the attached meeting request.


About Tomas:

SAP HANA Specialist (SAP Mentor, SAP HANA Distinguished Engineer), Certified SAP HANA Specialist/Architect focused on SAP HANA data centric architecture (infrastructure, High Availability, Disaster Recovery, etc.), integration (Monitoring, Backups, etc.), deployment (implementation projects) and operation.

Background: SAP HANA Distinguished Engineers are the best of the best, hand-picked by the HDE Council, who are not only knowledgeable in implementing SAP HANA but also committed to sharing their knowledge with the community.


As part of the effort to share experiences made by HDEs, we started this HDE webinar series.


This webinar series is part of SAP HANA International Focus Group (iFG).

Join SAP HANA International Focus Group (iFG) to gain exclusive access to webinars, access to experts, SAP HANA product feedback, and customer best practices, education, peer-to-peer insights as well as virtual and on-site programs.

You can see the upcoming SAP HANA iFG session details here.


Note: If you get an "Access Denied" error while accessing the SAP HANA iFG webinar series/sessions, you need to first join the community to gain access.


Follow HDEs on Twitter @SAPHDE

Follow me on Twitter @rvenumbaka

Just to share some tips on converting a single container system with HANA system replication configured to MDC (multitenant database containers).


An MDC system can only be replicated as a whole system; that means the system database and all tenant databases are part of system replication. A takeover happens for the whole HANA database (system database + all tenant databases), and it is not possible to take over just a particular container.


In our scenario, we have system replication set up for single container systems running on revision 112.02, and we decided to convert them to MDC. As we know, primary and secondary must be identical (N+N nodes (except standby) and services) in a system replication setup, and there is no exception for MDC.


Hence, I don't see any other way than breaking the system replication between primary and secondary, converting them to MDC individually, and reconfiguring the system replication.


Steps performed as below:

1) Stop Secondary

# HDB stop


2) On secondary, clean up replication config

# hdbnsutil -sr_cleanup --force


3) Start up secondary. The secondary now starts up as an active database

# HDB start


4) On primary, clear system replication config.

# hdbnsutil -sr_disable --force


Once done, you can check with the command # hdbnsutil -sr_state --sapcontrol=1


It is critical to clear the system replication config to avoid hitting the error below during the MDC conversion:

/hana/shared/SID/exe/linuxx86_64/hdb/python_support> python convertMDC.py

Stop System

Convert Topology to MDC

Set database Isolation low

Export Topology

Reinit SYSTEMDB persistence




error: 'hdbnsutil -initTopology' is not allowed on system replication sites.





'hdbnsutil failed!'


I believe the above error is due to SAP Note 2281734 (Re-Initialize secondary site in HANA system replication), where hdbnsutil -initTopology is prohibited on system replication primary and secondary sites to avoid data loss.


If you hit the above error, you can't redo the MDC conversion, as the topology has already been converted to multidb. The workaround is to bring up the nameserver and reset the SYSTEM user password manually. Refer to the administration guide, section on resetting the SYSTEM user password in MDC.


5) Convert both primary and secondary to MDC by running python convertMDC.py at the same time.


6) MDC conversion completed and the systems were started:


shutdown is completed.

Start System

Conversion done

Please reinstall your licenses and delivery units into the SYSTEMDB.

Tenant SID can now be started by execution:



7) Go to the primary and start up the tenant.



8) On the primary, reconfigure system replication by running the command below to enable system replication:

# hdbnsutil -sr_enable --name=UrName


9) Stop the secondary and perform the replication setup:

# hdbnsutil -sr_register --remoteHost=PrimaryHost --remoteInstance=## --replicationMode=syncmem --operationMode=delta_datashipping --name=UrName


10) In Studio -> Primary -> Landscape -> System Replication, you will notice that full data replication is needed.


11) Once full data shipping has completed, your replication should be active with MDC.



On secondary you'll see:



12) Redeploy the delivery units by running the command below on the primary:

# /hana/shared/SID/global/hdb/install/bin> ./hdbupdrep


Now, your MDC conversion with system replication setup is completed.



Also, I've tested the scenarios below:


a) On the primary, convert single container to MDC whilst system replication is running; encountered the error below:

error: 'hdbnsutil -initTopology' is not allowed on system replication sites.



b) On the primary, convert single container to MDC with the system replication config on but the secondary shut down; encountered the same error:

error: 'hdbnsutil -initTopology' is not allowed on system replication sites.



c) Converted only the primary to MDC. Tried to start up the secondary to resume replication, but the secondary refused to start up because the replication port is different: 4XX00 is used instead of 3XX00 for SAP HANA system replication with MDC.


Hopefully in a future revision, MDC conversion on an existing system replication setup will be much easier, without the need to break the replication and synchronize again with full data shipping.


Please share if there's an alternate way of doing this, for whoever has done the MDC conversion with HANA system replication configured. Would be interested to know ;-)


Hope it helps and enjoy!



Nicholas Chang



In the upcoming weeks we will be posting new videos to the SAP HANA Academy to show new features and functionality introduced with SAP HANA Support Package Stack (SPS) 12.


For the complete list of blogs see: What's New with SAP HANA SPS 12 - by the SAP HANA Academy


The topic of this blog is SAP HANA Platform Lifecycle Management new features.


For the related installation and update topics, see SAP HANA SPS 12 What's New: Installation and Update - by the SAP HANA Academy



Tutorial Video


SAP HANA Academy - SAP HANA SPS 12: What's New? - Platform Lifecycle Management - YouTube




What's New?


Converting an SAP HANA System to Support Multitenant Database Containers


SAP HANA SPS 9 introduced the multitenant database container concept, where a single SAP HANA system contains one or more SAP HANA tenant databases. This allows for an efficient usage of shared resources, both hardware and database management.


At install time, you select the SAP HANA database mode: single container or multiple containers. Should you want to change the mode after installation, you have to perform a conversion. In earlier revisions, this task was performed on the command line with the tool hdbnsutil, as you can see in the following tutorial video:



As of SPS 12, an SAP HANA system can now be converted to support multitenant database containers using the SAP HANA database lifecycle manager (HDBLCM) resident program. With every installation of SAP HANA, hdblcm is included and enables you to perform common post-installation and configuration tasks. The tool is hosted by the SAP host agent and not, like the SAP HANA cockpit, by the SAP HANA database.


The Convert to Multitenant Database Containers task is available for all interfaces: web, Windows, and command line, but the web interface additionally allows you to set advanced parameters:

  • Import delivery units into the system database (default = Y)
  • Do not start instance after reconfiguration
  • Do not start tenant database after reconfiguration
  • Set instance startup and shutdown timeout
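On the command line, the conversion is started via the resident hdblcm program; the path and the action name in this sketch are assumptions based on the documentation, so verify them for your installation:

```sh
# run as root on the HANA host; "HDB" is an example SID
cd /hana/shared/HDB/hdblcm
./hdblcm --action=convert_to_multidb
```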


During the conversion, the original system database is configured as a tenant and a new system database is created. This operation is quick, as we only need to shut down the SAP HANA database, update a few settings and restart the instance. Importing the standard HANA content (Web IDE, SAPUI5, cockpit, etc.) takes the most time and can optionally be postponed or skipped altogether.


Note that the conversion is permanent.





The web UI allows you to set advanced parameters.



Adding and Removing Host Roles


It is now possible to add and remove host roles after installation in a single-host or multiple-host SAP HANA system using the SAP HANA database lifecycle manager (HDBLCM) resident program.


As of SPS 10, you have the option to install SAP HANA systems with multiple host roles - including database server roles and SAP HANA option host roles - on one host, or give an existing SAP HANA host additional roles during system update. This enables you to share hardware between the SAP HANA server and SAP HANA options. This concerns the MCOS deployment type: Multiple Components One System.


Typical roles are worker and standby, and they exist for the SAP HANA database, dynamic tiering, the accelerator for SAP ASE, and the XS advanced runtime. Additional roles are available for smart data streaming and remote data sync.


Database worker is the default role. In distributed systems with multiple SAP HANA hosts, hosts can be assigned the standby role for high availability purposes.




System Verification Tool


You can check the installation of an SAP HANA system using the SAP HANA database lifecycle manager (HDBLCM) resident program in the command-line interface. The check tool outputs basic information about the configuration of the file system, system settings, permission settings, and network configuration, and you can use the generated log files as a reference when troubleshooting.






A new guide is available that documents how to configure, manage, and monitor an SAP HANA system that supports SAP HANA multitenant database containers.







For more information see:


SAP Help Portal



SAP Notes




SCN Blogs




Thank you for watching


You can view more free online videos and hands-on use cases to help you answer the What, How and Why questions about SAP HANA and the SAP HANA Cloud Platform on the SAP HANA Academy at youtube.com/saphanaacademy.


Follow us on Twitter @saphanaacademy.


Connect with us on http://linkedin.com/in/saphanaacademy.



One of the new SPS 12 features for monitoring and managing performance in SAP HANA is the ability to capture and replay workloads. This feature enables you to take a performance snapshot of your current system -- a captured workload -- and then execute the same workload again on the system (or another system from backup) after some major hardware or software configuration change has been made. This will help you evaluate potential impacts on performance or stability after, for example, a revision upgrade, parameter modifications, table partition or index changes, or even whole landscape reorganisations.


In this blog, I will describe the required preparation and the operational procedures.




Import Delivery Unit


To capture, replay and analyze workloads you use the three new apps in the equally new SAP HANA Performance Monitoring tile catalog of the SAP HANA cockpit.




The Analyze Workload app is included with a standard installation of SAP HANA, but the Capture Workload and Replay Workload apps are not. For these, you need to import the delivery unit (DU) HANA_REPLAY.


To import delivery units, you can use the SAP HANA Application Lifecycle Management (ALM) tool, which is part of SAP HANA cockpit; the ALM command-line tool (hdbalm); or SAP HANA studio (File > Import > SAP HANA Content > Delivery Unit).
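With hdbalm, the import could look roughly like this. The host, XS HTTP port, user, and archive file name are placeholders, and the option letters are as I recall them; check `hdbalm help` before use.

```shell
# Dry-run sketch: print the command instead of executing it.
run() { echo "+ $*"; }   # drop 'run' to execute for real

# Host, port (typically 80<instance_number> for XS classic HTTP), user, and
# the archive file name are placeholders for your landscape.
run hdbalm -h hana01 -p 8000 -u SYSTEM import ./HANA_REPLAY.tgz
```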




Grant Roles


The following roles are available for capturing and replaying workloads:

  • sap.hana.replay.roles::Capture
  • sap.hana.replay.roles::Replay
  • sap.hana.workloadanalyzer.roles::Administrator
  • sap.hana.workloadanalyzer.roles::Operator


Typically, you would grant the Capture and Replay roles to a user with system administration privileges. This could be the same user or two different users.


The workloadanalyzer roles are granted to users who need to perform the analysis on the target system. Operators have read-only access to the workload analysis tool.
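Since these are repository (activated) roles, they are granted via the _SYS_REPOS.GRANT_ACTIVATED_ROLE procedure rather than a plain GRANT. A sketch using hdbsql; ADMINKEY is an assumed hdbuserstore key and WLADMIN/WLANALYST are example user names:

```shell
# Dry-run sketch: print the commands instead of executing them.
run() { echo "+ $*"; }   # drop 'run' to execute for real

# ADMINKEY, WLADMIN, and WLANALYST are placeholders for your landscape.
run hdbsql -U ADMINKEY "CALL _SYS_REPOS.GRANT_ACTIVATED_ROLE('sap.hana.replay.roles::Capture','WLADMIN')"
run hdbsql -U ADMINKEY "CALL _SYS_REPOS.GRANT_ACTIVATED_ROLE('sap.hana.replay.roles::Replay','WLADMIN')"
run hdbsql -U ADMINKEY "CALL _SYS_REPOS.GRANT_ACTIVATED_ROLE('sap.hana.workloadanalyzer.roles::Operator','WLANALYST')"
```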



Configure SAP HANA cockpit


The Analyze Workload app is added automatically to the SAP HANA cockpit if you have either of the two workloadanalyzer roles. The Capture Workload and Replay Workload apps need to be added manually from the tile catalog.



Configure Replayer Service


On the target system you need to configure and start the replayer service before you can replay a workload.


For this, you need access to the SAP HANA host as the operating system administrator and must create the file wlreplayer.ini in the directory $SAP_RETRIEVAL_PATH, typically /usr/sap/<SID>/HDB<instance_number>/<hostname>.


This file needs to contain the following lines


listeninterface = .global



filename = wlreplayer

alertfilename = wlreplay_alert


Next, start the replayer service with the hdbwlreplayer command:

hdbwlreplayer -controlhost hana01 -controlinstnum 00 -controladminkey SYSADMIN,HDBKEY -port 12345


Use the following values for the parameters:


  • controlhost - database host name
  • controlinstnum - database instance number
  • controladminkey - user name and secure store key, separated by a comma
  • port - an available port
  • controldbname - optionally, the database name in the case of a multitenant database container system
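Putting the two preparation steps together, a sketch might look like the following. The fallback path, host name, instance number, key name, and port are all placeholder values for an assumed SID HDB / instance 00 system:

```shell
# Dry-run sketch: print privileged commands instead of executing them.
run() { echo "+ $*"; }   # drop 'run' to execute for real

# $SAP_RETRIEVAL_PATH is set in the <sid>adm environment; the fallback here
# is only an example.
INI_DIR="${SAP_RETRIEVAL_PATH:-/usr/sap/HDB/HDB00/hana01}"
cat > /tmp/wlreplayer.ini <<'EOF'
listeninterface = .global
filename = wlreplayer
alertfilename = wlreplay_alert
EOF
run cp /tmp/wlreplayer.ini "$INI_DIR/wlreplayer.ini"

# Start the replayer service; all values are examples.
run hdbwlreplayer -controlhost hana01 -controlinstnum 00 \
    -controladminkey SYSADMIN,HDBKEY -port 12345
```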


Secure Store Key


In case you are not familiar with secure store keys, or need a refresher, see SAP HANA database interactive terminal (hdbsql) - by the SAP HANA Academy or the video SAP HANA Academy - SAP HANA Administration: Secure User Store - hdbuserstore [SPS 11] - YouTube
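For quick reference, creating a key with hdbuserstore follows the pattern below; the key name, host, port (3<instance_number>15 for the indexserver SQL port), user, and password are all placeholders:

```shell
# Dry-run sketch: print the commands instead of executing them.
run() { echo "+ $*"; }   # drop 'run' to execute for real

# All values are placeholders; run as the OS user that will call hdbwlreplayer.
run hdbuserstore SET HDBKEY "hana01:30015" SYSADMIN MySecretPw
run hdbuserstore LIST HDBKEY   # shows the environment and user, never the password
```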






Once you have performed the preparation steps, the procedure is simple.


1. Capture Workload


Connect with SAP HANA cockpit to the system, open the Capture Workload app, and click Start New Capture in the Capture Management display area. Provide a name and an optional description, and use the ON/OFF switches to collect an explain plan or performance details. The capture can be started on demand or scheduled. Optionally, filters can be set on application name, database user, schema user, application user, client, or statement type (DML, DDL, procedure, transaction, session, system). A threshold duration and the passport trace level can also be set.




When done, click Stop Capture.




Optionally, you can set the capture destination, trace buffer size and trace file size for all captures with Configure Capture.




2. Replay Workload: Preprocess


Once one or more captures have been taken, open the Replay Workload app from the SAP HANA cockpit to preprocess the capture. The captured workloads are listed in the Replay Management display area. Click Edit and then click Start Preprocessing on the bottom right.




3. Replay Workload


Once the capture has been preprocessed, you can start the replay from the same Replay Workload app.


First select the (preprocessed) replay candidate that you want to replay, then select Configure Replay.



In the Replay Configuration window, you need to provide:

  • Host, instance number and database mode (Multiple for a multitenant database container system) of the HANA system
  • Replay Admin user (with role sap.hana.replay.roles::Replay) with either password or secure store key
  • Replay speed: 1x, 2x, 4x, 8x, 16x
  • Collect Explain plan
  • Replayer Service
  • User authentication from the session contained in the workload



When the Replay has finished, you can select Go to Report to view replay statistics.





4. Analyze Workload


The fourth and final step is to analyze the workload. For this, start the Analyze Workload app from the SAP HANA cockpit. You can analyze along different dimensions such as Service, DB User, Application Name, etc.





Video Tutorial


In the video tutorial below, I show you the whole process, both preparation and procedures, in less than 10 minutes.


SAP HANA Academy - SAP HANA SPS 12: What's New? - Capture and Replay Workloads - YouTube




More Information


SAP HANA Academy Playlists (YouTube)


SAP HANA Administration - YouTube


Product documentation


Capturing and Replaying Workloads - SAP HANA Administration Guide - SAP Library






In the upcoming weeks we will be posting new videos to the SAP HANA Academy to show new features and functionality introduced with SAP HANA Support Package Stack (SPS) 12.


For the complete list of blogs see: What's New with SAP HANA SPS 12 - by the SAP HANA Academy


The topic of this blog is performance monitoring.


For the SPS 11 version, see SAP HANA SPS 11 What's New: Performance Monitoring - by the SAP HANA Academy.



What's New?


SAP HANA Performance Monitoring Apps


SAP HANA Performance Monitoring is a new tile catalog available in SAP HANA cockpit with the deployment of the new Workload Replay delivery unit.


In this tile catalog, three new apps have been included:

  • Capture Workload
  • Replay Workload
  • Analyze Workload




Capturing and replaying workloads from an SAP HANA system can help you evaluate potential impacts on performance or stability after a change in hardware or software configuration.


Possible use cases are:

  • Hardware change
  • SAP HANA revision upgrade
  • SAP HANA INI parameter change
  • Table partitioning change
  • Index change
  • Landscape reorganization for SAP HANA scale-out systems


For a complete overview, see Capturing and Replaying Workloads - by the SAP HANA Academy


Tutorial Video


SAP HANA Academy - SAP HANA SPS 12: What's New? - Capture and Replay Workloads - YouTube





SAP HANA Administration Apps


Several apps in the SAP HANA Database Administration catalog of the SAP HANA cockpit have been enhanced for performance monitoring features.




Performance Monitor


Select Export All in the footer bar of the Performance Monitor app to export KPI data as a single data set. The resulting ZIP file can be imported into the new Support app (Support Tools tile). You can also save your own set of KPIs using the new Custom variant, and select Show Jobs to display jobs above the load graph, showing which jobs affected your system performance.




Import and Export of Performance Data for Support Process


To analyze and diagnose database problems, you can now import performance monitor data from a ZIP file into SAP HANA cockpit. You can export data using the Performance Monitor app.






You can now monitor for long-running threads and quickly analyze any blocking situations using the Threads app. The tile indicates the number of currently active and blocked threads.




Statements Monitor


The Monitor Statements tile indicates the number of long-running statements and blocking situations. The app displays information about the memory consumption of statements. New for SPS 12 is that Memory Tracking can be enabled or disabled in the footer bar. Memory tracking is required for Workload Management.
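Besides the footer-bar switch, memory tracking can also be enabled via SQL. A sketch using hdbsql; ADMINKEY is an assumed hdbuserstore key, and the parameters live in the resource_tracking section of global.ini:

```shell
# Dry-run sketch: print the command instead of executing it.
run() { echo "+ $*"; }   # drop 'run' to execute for real

# ADMINKEY is a placeholder hdbuserstore key for a user with INIFILE ADMIN.
run hdbsql -U ADMINKEY "ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM') SET ('resource_tracking','enable_tracking') = 'on', ('resource_tracking','memory_tracking') = 'on' WITH RECONFIGURE"
```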




Workload Management Configuration


Manage all workload classes using the new Workload Classes app. Workload classes and workload class mappings can be created to configure workload management for the SAP HANA database. Memory tracking needs to be enabled for workload management.
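The same configuration can be created in SQL. A sketch, where the class name, limits, and mapping values are examples: a workload class caps statement thread and memory use, and a mapping binds sessions to the class by session properties such as application name.

```shell
# Dry-run sketch: print the commands instead of executing them.
run() { echo "+ $*"; }   # drop 'run' to execute for real

# ADMINKEY, class/mapping names, limits, and the application name are examples.
run hdbsql -U ADMINKEY "CREATE WORKLOAD CLASS \"REPORTING\" SET 'STATEMENT THREAD LIMIT' = '10', 'STATEMENT MEMORY LIMIT' = '20'"
run hdbsql -U ADMINKEY "CREATE WORKLOAD MAPPING \"REPORTING_MAP\" WORKLOAD CLASS \"REPORTING\" SET 'APPLICATION NAME' = 'BI_REPORTS'"
```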






A new section has been added to the SAP HANA Troubleshooting and Performance Guide about network performance and connectivity problems.


The following topics are addressed:

  • Network Performance Analysis on Transactional Level
  • Stress test with SAP's NIPING tool to confirm the high network latency (or bandwidth exhaustion)
  • Application and Database Connectivity Analysis
  • SAP HANA System Replication Communication Problems
  • Analysis steps to resolve SAP HANA inter-node communication issues




Additional Information


Help Portal: SAP HANA Platform Core SPS 12



SAP Notes



SCN Blogs



Capturing and Replaying Workloads - by the SAP HANA Academy




A number of predefined operating system and database users are required for installing, upgrading, and operating SAP HANA. Further users may exist depending on additionally installed components. A brief overview is available here: Predefined Users in SAP HANA

The usability of the database trace for authorization issues has been improved with SPS 12. Read about it here: Enhanced database trace information for authorization issues in SAP HANA SPS 12

SUSE Linux Enterprise Server for SAP Applications is now available, on demand, via Amazon Web Services (AWS) and the AWS Marketplace. There is no minimum fee; you pay only for the compute hours used.



This solution also includes high availability resource agents for deployments of the SAP HANA platform. These agents allow SAP HANA instances to fail over between AWS Availability Zones and were jointly engineered by SAP, SUSE, and Amazon to run on the AWS infrastructure.


And, of course, using SUSE’s “bring-your-own-subscription” program, you can use your existing SUSE Linux Enterprise for SAP Applications subscription to build and test SAP workloads on AWS. See aws.amazon.com/suse for more details on that program.

Swing by the SUSE (655) or Protera (473) booths at SAPPHIRE NOW 2016 in Orlando to apply for a free proof of concept to help plan your SAP HANA deployment on AWS.

AWS will also be speaking at the SUSE booth at 1:50pm on May 17 - come and listen!


“AWS provides the on-demand, highly reliable and scalable cloud computing services that meet the evolving needs of our customers,” said Naji Almahmoud, head of global business development for SUSE. “Expanding the availability of SUSE Linux Enterprise on AWS gives them more flexibility to take advantage of the leading Linux platform for SAP solutions.”

Dave McCann, vice president, AWS Marketplace and Catalog Services, Amazon Web Services, Inc., said, “SUSE is a pioneer in managing their customers’ complexity, reducing cost and delivering mission-critical cloud-based services, offering an innovative approach that brings business and IT together to innovate. We are excited to see the expanded availability of SUSE Linux Enterprise Server for SAP Applications on AWS through AWS Marketplace. The access will ensure SAP users experience the advantages provided by the on-demand, highly reliable and scalable cloud computing services of AWS.”


For technical papers and more information see: Amazon Web Services and SUSE

Microsoft CEO Satya Nadella and SAP CEO Bill McDermott together announced on stage at SAPPHIRE NOW 2016 joint plans to deliver broad support for the SAP HANA® platform deployed on Microsoft Azure.

SUSE and Microsoft today announce that SAP HANA is coming to Microsoft Azure running on SUSE Linux Enterprise Server.


So if you're looking to spin up SAP HANA instances on Azure you’ll now be able to (using the same pricing as for SUSE Linux Enterprise Server for SAP Applications).

“We’re excited that our partnership with SAP is delivering powerful, new options for SAP HANA deployments on Azure – including support for SUSE Linux Enterprise Server.” – Madhan Arumugam Ramakrishnan, Principal Manager, Microsoft Azure.

“SUSE, SAP, and Microsoft Azure continue to develop and deliver solutions for cloud data centers — with a focus on the enterprise,” said Kristin Kinan, Director of Public Cloud Alliances at SUSE.  “Bringing SAP HANA to Azure, running on SUSE Linux Enterprise Server, is the latest step in enabling maximum power and flexibility to our enterprise customers.”

Microsoft Azure will speak at the SUSE booth (655) at SAPPHIRE NOW 2016 on Wednesday, May 18 at 11:10 - please join.

SUSE was the development and first Linux platform for SAP HANA, and SAP and SUSE have shared a long-standing partnership in the SAP Linux Labs in Walldorf, Germany.

If you're looking for technical guides and tech-casts on security hardening, scale-out and system-replication fail-over, please see here. For more info on SUSE and Azure - see here SUSE + Microsoft Azure | SUSE

And watch this video to see why SUSE Linux Enterprise Server for SAP Applications is the Leading Linux Platform for SAP!




In the upcoming weeks we will be posting new videos to the SAP HANA Academy to show new features and functionality introduced with SAP HANA Support Package Stack (SPS) 12.


For the complete list of blogs see: What's New with SAP HANA SPS 12 - by the SAP HANA Academy


The topic of this blog is installation and update.


For the SPS 11 version, see SAP HANA SPS 11 What's New: Installation and Update - by the SAP HANA Academy



Tutorial Video


SAP HANA Academy - SAP HANA SPS 12: What's New? - Installation and Update - YouTube




What's New?


Supported Operating Systems


For SAP HANA Platform SPS 12 on Intel-based hardware platforms the minimum operating system versions are:


For SAP HANA Platform SPS 12 on IBM Power Systems the minimum operating system version is:

  • SLES 11 SP4 for IBM Power


Download Components


Components and component updates can be downloaded from the SAP Support Portal using the Download Components tile in the SAP HANA Platform Lifecycle Management (HDBLCM) web tool.




In previous editions, SAP HANA studio was used to perform this task. Here, as elsewhere, we see the gradual move of functionality from the full Windows client to the web interface.



Note also that the new Download Components tile very much resembles the new SAP ONE Support Launchpad, section System Operations and Maintenance > Software Downloads (on premise).



Extract Components


Component archives which were downloaded from the SAP Support Portal can be prepared for the update using the extract_components action of the SAP HANA HDBLCM resident program in the command-line interface.


In previous editions, you had to perform this task using the SAPCAR tool (and not forget the signature validation described in Note 2178665). The extract_components action now performs these tasks for you, so only a single tool, hdblcm, is needed for all tasks.
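A sketch of the invocation; the path is an assumption for an example SID, and the resident program prompts interactively for the downloaded archives:

```shell
# Dry-run sketch: print the command instead of executing it.
run() { echo "+ $*"; }   # drop 'run' to execute for real

HDBLCM=/hana/shared/HDB/hdblcm/hdblcm       # assumed resident program path (SID: HDB)
run "$HDBLCM" --action=extract_components   # prompts for archives, verifies signatures
```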




Usability Improvements


The interactive modes of the SAP HANA database lifecycle manager (HDBLCM) have been optimized to deliver improved user experience. If a list contains only a single option, it is selected as the default value.



SAP HANA XS Advanced Runtime


The database lifecycle manager tool (hdblcm) now supports the installation and update of the SAP HANA XS Advanced Runtime.


The following XS Advanced Runtime parameters are available:

  • xs_components_cfg - Specifies the path to the directory containing MTA extension descriptors (*.mtaext)
  • xs_customer_space_isolation - Run applications in customer space with a separate OS user
  • xs_customer_space_user_id - OS user ID used for running XS Advanced applications in customer space
  • xs_domain_name - Specifies the domain name of an xs_worker host
  • xs_routing_mode - Specifies the routing mode to be used for XS advanced runtime installations
  • xs_sap_space_isolation - Run applications in SAP space with a separate OS user
  • xs_sap_space_user_id - OS user ID used for running XS advanced runtime applications in SAP space
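The parameter names above can be passed on the hdblcm command line. A sketch; the routing mode and domain values are illustrative assumptions and must match your landscape:

```shell
# Dry-run sketch: print the command instead of executing it.
run() { echo "+ $*"; }   # drop 'run' to execute for real

# Parameter names are from the list above; the values are illustrative only.
run hdblcm --action=install \
    --xs_routing_mode=ports \
    --xs_domain_name=xs.example.com
```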




The SAP HANA Installation and Update Guide has been updated for the above-mentioned topics and has also been extended with a section on SAP HANA and virtualisation. For the latest support status, see SAP Note 1788665 - SAP HANA Support for virtualized / partitioned (multi-tenant) environments.


Additionally, all database lifecycle manager parameter references are now part of the Parameter Reference chapter in the SAP HANA Server Installation and Update Guide. For the resident lifecycle manager tool, these were previously documented in the Administration Guide only.





For more information see:



