
SAP HANA and In-Memory Computing


Multitenancy in HANA

Posted by Martin Maruskin Jan 29, 2015

Motivation: I have heard a lot about multitenancy recently. In this blog I want to sort out what multitenancy basically is, especially with regard to HANA. I am eager to hear from you on the topic, and even to be corrected!

Release SPS09 of SAP HANA 1.0 came into the spotlight in October of last year. One of the advertised headline features of this release is multitenancy. The term tenant is very important in today's very hot cloud computing. In software architecture terminology, multitenancy means that a single instance of software runs on a server and serves multiple tenants. A tenant is a group of users sharing the same view of the software they use. Software designed according to the multitenant architecture provides every tenant a dedicated share of the instance: its data, configuration, user management and tenant-specific functionality.

To put it simply, imagine multiple customers running the same application on the same server. The infrastructure is shared among them, and a software layer keeps their data separated even though the data is stored in the same database.

It can be said even more simply by comparing it with the classic NetWeaver ABAP stack: the tenants are the clients in an SAP system. All data is separated by client, but the system configuration and metadata (e.g. the ABAP Data Dictionary) are shared. An ABAP system is therefore a multitenant system: by default, all SQL statements such as SELECTs only read data from the particular client in which they run.
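As an illustration (a sketch, not taken from any specific system; client '100' and the order number are placeholders): an Open SQL statement written in ABAP without any client handling effectively reaches the database with an implicit predicate on the client field MANDT:

```sql
-- Written in ABAP Open SQL: SELECT * FROM vbak WHERE vbeln = '0000012345'.
-- What effectively runs on the database for a user logged on to client 100:
SELECT *
  FROM vbak
 WHERE mandt = '100'          -- client predicate added automatically
   AND vbeln = '0000012345';
```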

In contrast, there is another approach called multi-instance architecture, in which several separate software instances operate on behalf of different tenants. Note: some of the definitions above are quoted from Wikipedia.

When it is said that HANA is capable of multitenancy, it means there is one single HANA system with its system database. This database stores configuration, meaning system-wide landscape information that allows configuration and monitoring of the overall system. In addition, there can be N tenant databases in that HANA system. These store application data and user management. Application and user data are strictly isolated from each other; the tenants share only hardware resources, as they all run in the same instance. Users of one tenant database cannot access the application data of another tenant. From a backup and recovery perspective, every tenant database is handled independently of the others. HANA's multitenancy feature is called Multitenant Database Containers (MDC).

MDC means that there is one HANA system, represented by one SID, which supports multiple applications in different databases and schemas.
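In practice, a tenant database is created from the system database. The sketch below assumes an MDC installation; the tenant name and password are placeholders:

```sql
-- Run while connected to the system database (SYSTEMDB):
CREATE DATABASE TENANT1 SYSTEM USER PASSWORD Init1234;

-- The new tenant gets its own services and ports and can be
-- started and stopped from the system database:
ALTER SYSTEM START DATABASE TENANT1;
```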

Note: these are quotes from SAP Note 2096000 - SAP HANA multitenant database containers.

The following deployment options are available in HANA SPS09 with regard to multitenancy:

Standard deployment

Multiple Components One Database (MCOD)

Multiple Components One System (MCOS): >1 (one DB per system)

Multitenant Database Containers (MDC): >1 (one DB per system)





Further information:

2096000 - SAP HANA multitenant database containers - Additional Information

2075266 - SAP HANA Platform SPS 09 Release Note

1826100 - Multiple applications SAP Business Suite powered by SAP HANA

1661202 - Support for multiple applications on SAP HANA

Hi, my name is Naomy and I'm an intern at SAP. Recently I have used the TPC-H benchmark (2.17.1) to run some tests on the HANA database. Although all of the queries are written in SQL-92, some of them still need to be rectified before they can be executed on HANA. These syntax differences do not necessarily make SAP HANA SQL-92 incompatible, as the SQL standard leaves many aspects to the implementation.

The fourth query, for example, uses 'interval' to calculate a period of time.


It could not be executed on HANA; it seems that HANA SQL does not support 'interval' as a reserved word.


So I used the ADD_MONTHS function to replace 'Interval'.


Likewise, when I met queries using 'interval n day' or 'interval n year', I used the ADD_DAYS or ADD_YEARS function to replace that part of the query.
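A sketch of such a rewrite (the date literal and predicate are illustrative, in the style of TPC-H Q4, not the exact benchmark text):

```sql
-- SQL-92 (TPC-H style), rejected by HANA:
--   o_orderdate <  DATE '1993-07-01' + INTERVAL '3' MONTH
-- HANA equivalent using ADD_MONTHS:
SELECT COUNT(*)
  FROM orders
 WHERE o_orderdate >= DATE '1993-07-01'
   AND o_orderdate <  ADD_MONTHS(DATE '1993-07-01', 3);
```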

What's more, SAP HANA doesn't support 'AS' when setting an alias for a subquery in parentheses.

The 13th query, for instance, couldn't be executed on HANA.


So I rectified the alias syntax and the query works.
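The alias fix can be sketched as follows (a simplified form of Q13, not the full benchmark text): HANA accepts a derived-table alias only without the AS keyword:

```sql
-- SQL-92 derived table with AS, rejected by HANA:
--   FROM (SELECT ...) AS c_orders (c_custkey, c_count)
-- HANA rewrite: drop AS and name the columns inside the subquery:
SELECT c_count, COUNT(*) AS custdist
  FROM (SELECT c_custkey, COUNT(o_orderkey) AS c_count
          FROM customer LEFT OUTER JOIN orders
            ON c_custkey = o_custkey
         GROUP BY c_custkey) c_orders
 GROUP BY c_count
 ORDER BY custdist DESC, c_count DESC;
```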




The above is what I learned when I used the TPC-H benchmark queries in my HANA project. I hope it helps fellows who meet the same problem.

Daniel Gray


Posted by Daniel Gray Jan 27, 2015

Hi everyone,


My name is Daniel Gray and I’m a Product Support Engineer. I work in SAP’s Vancouver office and I support the growing database technology known as HANA.


The goal of this blog is to provide an overview of various SSL configurations in HANA. There are already a number of guides that detail how to configure SSL on HANA; I hope these posts will help you understand why each step is necessary. This is a series I will be adding to in the future; I’m currently planning on covering single and multi-node HANA systems with and without certificate authorities, and internal communication in multi-node systems.



As these posts are about SSL and HANA, I won't give an in-depth explanation of the mechanics behind SSL. If you are new to SSL, the following resource helped me greatly when I started learning:




SSL protocols provide methods for establishing encrypted connections and verifying the identity of an entity (client, server, etc). Verifying the identity of a communication partner, however, isn’t mandatory. Many clients will allow you to establish connections with untrusted parties. For the following posts I will assume that our clients will reject untrusted servers.


Configuring clients to reject untrusted connections depends on the client itself. For HANA Studio, this option is found in Systems view -> right click the system connection <SID> (<DBUSER>) -> Properties -> Database User Logon -> Additional Properties tab -> Validate the SSL certificate checkbox.


Additionally, in the following examples I’ll assume that clients already trust certificates (i.e. the trust store contains the root certificate) signed by common CAs such as Verisign and DigiCert.


Should you encounter a term you’re not familiar with, please refer to the glossary at the bottom of this page.

  • SSL and single-node HANA systems
  • SSL and distributed (multi-node) HANA systems <under construction>
  • SSL and internal communication in distributed systems <under construction>



  • Certificate Authority (CA): An entity, such as DigiCert, that verifies the identity of another entity, such as Facebook.
  • Public key: The key used to encrypt messages/decrypt signatures in asymmetric cryptography.
  • Public key certificate: A digital certificate that contains, and identifies the owner of, a public key; this is distributed publicly.
  • Private key: The key used to decrypt messages and sign objects in asymmetric cryptography; this is kept private.
  • Root certificate: A public key certificate that identifies the root CA. Root certificates from common CAs are generally distributed with clients (e.g. web browsers).
  • Certificate Signing Request (CSR): Contains the information required to generate a signed certificate.
  • Common Name (CN): Contained in public key certificates and identifies the host the certificate belongs to. The CN of a certificate must match the FQDN the client is connecting to.
  • Fully Qualified Domain Name (FQDN): A name that uniquely identifies a host on the internet.
  • Key store: A file that contains the information necessary for an entity to authenticate itself to others. Contains the server’s private key, signed server certificate, and intermediate certificates if necessary.
  • Trust store: A file that contains the information of trusted entities. Generally contains root certificates of CAs.

Cross Tenant Database Recovery



Source Tenant DB = TS1 (Multi Container Installation on Host A)

Target Tenant DB = TS3 (Converted to Multi Container DB from Single DB on Host B)


I encountered the error below during multitenant database recovery because the volume ID differs between the source and the target DB.


2015-01-22T09:44:15+08:00  P002495 14b0f4fae01 INFO    RECOVERY RECOVER DATA started

2015-01-22T09:44:15+08:00  P002495 14b0f4fae01 INFO    RECOVERY command: RECOVER DATA FOR TS

2015-01-22T09:44:15+08:00  P002495 14b0f4fae01 INFO    RECOVERY state of service: indexserver, hostb:30540, volume: 0, RecoveryExecuteCatalogRecoveryInProgress

2015-01-22T09:44:15+08:00  P002495 14b0f4fae01 INFO    RECOVERY state of service: indexserver, hostb:30540, volume: 0, RecoveryError

2015-01-22T09:44:15+08:00  P002495 14b0f4fae01 INFO    RECOVERY state of service: indexserver, hostb:30540, volume: 3, RecoveryExecuteTopologyRecoveryInProgress

2015-01-22T09:44:16+08:00  P002495 14b0f4fae01 INFO    RECOVERY state of service: indexserver, hostb:30540, volume: 3, RecoveryExecuteTopologyRecoveryFinished

2015-01-22T09:44:16+08:00  P002495 14b0f4fae01 INFO    RECOVERY state of service: indexserver, hostb:30540, volume: 3, RecoveryPrepared

2015-01-22T09:44:16+08:00  P002495 14b0f4fae01 INFO    RECOVERY start of progress monitoring, volumes: 1, bytes: 0

2015-01-22T09:44:16+08:00  P002495 14b0f4fae01 INFO    RECOVERY state of service: indexserver, hostb:30540, volume: 3, RecoveryExecuteDataRecoveryInProgress

2015-01-22T09:44:16+08:00  P002495 14b0f4fae01 ERROR   RECOVERY RECOVER DATA finished with error: [448] recovery could not be completed, volume 3, reached log position 0, [2000004] Cannot open file ""<root>/COMPLETE_DATA_BACKUP_databackup_3_1" ((mode= R, access= rw-r-----, flags= DIRECT|MUST_EXIST|MULTI_WRITERS|UNALIGNED_SIZE), factory= (root= "/mnt/hostb/data/DB_TS3/" (access= rw-r-----, flags= <none>, usage= DATA_BACKUP, fs= nfs, config= (async_write_submit_activ ues=1,size_kernel_io_queue=512,max_parallel_io_requests=64))", rc=2: No such file or directory



By referring to SAP Note 2101737 - Recovery of a Multitenant Database Container fails, we need to map the volume ID from the source DB to the target DB to ensure a successful recovery.


Important: before applying the steps below, please ensure a complete SYSTEMDB backup and backups of all tenant DBs (if any). This safeguards your HDB in case the system topology gets corrupted.


1) On the source DB: the TS1 tenant indexserver volume ID is 2 (look at the data area with hdb00002).

On the target DB: check the view SYS_DATABASES.M_VOLUMES and take note of the VOLUME_ID and SUBPATH.

2) On the source DB: confirm the volume ID by executing "hdbbackupdiag" on the data backup.


hdbbackupdiag -v -d /mnt/hosta/data/DB_TS1 -b COMPLETE_DATA_BACKUP | grep "ServiceName\|VolumeId"

        ServiceName: indexserver

VolumeId: 2

        ServiceName: indexserver

VolumeId: 2


3) Determine the volumes and services of the target database. From the query below, we know that the VOLUME_ID for the indexserver on the target DB is 3.
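The check can be reproduced with something like the following, run in the system database (a sketch; exact column sets may differ by revision):

```sql
-- Volumes and services of all tenant databases, seen from SYSTEMDB:
SELECT database_name, host, port, service_name, volume_id, subpath
  FROM sys_databases.m_volumes
 ORDER BY database_name, volume_id;
```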






4) Determine the configuration values for every volume of the target database by running the query below.










        '@')) PATH,


        '@') DB_VOLUME_ID,


        '@') NAME,



WHERE PATH = '/volumes/*+|@'



        '@') DB_ID

              FROM M_TOPOLOGY_TREE

              WHERE PATH='/databases/*+|@'

              and NAME like '%name'

              and value = 'TS3'),





*Run the query again for each available tenant DB to avoid any data volume being overwritten because the same volume_id is used by another service. If that happens, append or change the subpath, e.g. hdb00002.0000X (X being a higher integer).

5) Bring down your target tenant DB if it is not already down.



6) Change the volume IDs of the services of the target database to match the volume_id of the source DB. In our case, the source DB volume_id is 2.


ALTER SYSTEM ALTER CONFIGURATION ('topology.ini', 'system') SET ('/host/hostb/indexserver/30540', 'volume')= '2'

7) Delete the configuration values for every volume from the target database:

ALTER SYSTEM ALTER CONFIGURATION ('topology.ini', 'system') UNSET ('/volumes', '4:3');

Once the statement has finished, rerunning the query from step 4 returns an empty result.


8) Insert new configuration values for every volume into the target database.


  • Here, you change the key from the earlier source value '/volumes/4:3' to '/volumes/4:2', using the source volume_id.
  • Pay attention to 'path', where you just need to change hdb0000X to the new volume ID, in our case 2. Please use 'mnt00001/hdb00002.00004' instead of 'mnt00001/hdb00002:00004' as instructed in note 2101737.


ALTER SYSTEM ALTER CONFIGURATION ('topology.ini', 'system') SET ('/volumes/4:2', 'active')= 'yes';

ALTER SYSTEM ALTER CONFIGURATION ('topology.ini', 'system') SET ('/volumes/4:2', 'catalog')= 'yes';

ALTER SYSTEM ALTER CONFIGURATION ('topology.ini', 'system') SET ('/volumes/4:2', 'database')= '3';

ALTER SYSTEM ALTER CONFIGURATION ('topology.ini', 'system') SET ('/volumes/4:2', 'location')= 'hostb:30540';

ALTER SYSTEM ALTER CONFIGURATION ('topology.ini', 'system') SET ('/volumes/4:2', 'path')= 'mnt00001/hdb00002.00004';

ALTER SYSTEM ALTER CONFIGURATION ('topology.ini', 'system') SET ('/volumes/4:2', 'servicetype')= 'indexserver';

ALTER SYSTEM ALTER CONFIGURATION ('topology.ini', 'system') SET ('/volumes/4:2', 'tenant')= '-';

9) Proceed with the recovery.




10) Once the recovery has completed successfully, the target tenant will be up and running with the changed volume_id.



Hope it helps,



Nicholas Chagnb

After my blog introducing "Grouping Sets", I would like to introduce the WorkdaysBetween and AddWorkdays functions. This blog provides a basic idea of these functions and how to write simple SQL queries using them.


In order to work with these functions, the table TFACS (factory calendar) from ECC should be replicated to SAP HANA. The TFACS table stores days in a binary format, e.g. 010101000010100101, where 0 indicates a non-working day and 1 a working day.


Below is a screenshot of the output of TFACS table:



The WorkdaysBetween function:


As the name suggests, the WorkdaysBetween function computes the number of working days between two dates for a particular country. The inputs for this function are:

  • Country
  • from date
  • to date
  • Schema name, in which the table TFACS has already been replicated.


The syntax of the WorkdaysBetween function is:

workdays_between(<Country name>, <from date>, <to date>, <schema name>)


For example, let us compute the number of workdays between 01-01-2014 and 01-01-2015 for the US. The SQL for this would be:


The table TFACS has been replicated to ‘HANA_SP7_01’ schema in the above example.
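The statement behind that example would look roughly like this (a sketch using the function syntax given above; the schema name 'HANA_SP7_01' is taken from the example):

```sql
-- Number of US working days between the two dates, based on the
-- factory calendar replicated into schema HANA_SP7_01:
SELECT workdays_between('US', '2014-01-01', '2015-01-01', 'HANA_SP7_01')
  FROM dummy;
```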



The AddWorkdays function:


The AddWorkdays function computes the working date reached by adding a specific number of working days to a particular date for a particular country. The inputs to the AddWorkdays function are:

  • Country
  • from date
  • number of days
  • schema name, in which table TFACS has been replicated.


The syntax for AddWorkdays function is:

add_workdays(<Country name>, <from date>, <number of days>, <schema name>)


For example, let us compute the 16th working day after 01-01-2014 for the US. The SQL for this would be:


As mentioned earlier, the table TFACS had already been replicated to ‘HANA_SP7_01’ schema in the above example.
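Again as a sketch, using the syntax given above and the same schema:

```sql
-- The 16th US working day after 01-01-2014:
SELECT add_workdays('US', '2014-01-01', 16, 'HANA_SP7_01')
  FROM dummy;
```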


I hope this blog aided in a better understanding of the WorkdaysBetween and AddWorkdays functions, which can be useful for various computations involving dates. I would appreciate your feedback on the usefulness of this blog.











In the first part of this blog series I gave a high-level introduction to Birst. In this second blog I would like to show you how to connect to HANA via Birst and how to use HANA as a live source for your models. In a later blog I will also dive into the possibilities of connecting to a BW system and to the DSOs which reside in BW, a likely scenario when you've implemented the LSA++ concept, where you would no longer report on the reporting layer but on the underlying DSOs. Additionally, Birst can also connect to a BEx query via an MDX connection.



Birst provides data extraction and connectivity options for a wide variety of databases, flat and structured files, as well as popular cloud and on-premises applications. You can query on-premises data sources in real time with Birst Live Access, directly from the Birst Business Model (semantic layer), without the need to extract the data to the cloud. For extraction scenarios, BirstConnect is the tool which makes the connection to the required data sources.


Before being able to extract data into the Birst cloud, you have to make some settings, mainly creating a "Birst space". A Birst space makes it possible to group data sources, models and reports which logically belong together.




In our example, where I connect to HANA as a live data source, I choose the option "Discovery", followed by the Admin option, to get data from one of my HANA sources.




In the "Define sources" tab, I need to create a new configuration, which I've named HANA. If you press the launch button, this starts BirstConnect, but a copy which cannot be used directly. The HANA connection in BirstConnect requires a file (ngdbc.jar) which comes with the HANA client tools. Due to legal restrictions this cannot be part of the Birst installation, therefore you have to download a local copy of BirstConnect and copy the required file into it.




Place the file in the “lib” directory of BirstConnect:




The second step needed to run the downloaded BirstConnect is to make the settings which point to your Birst environment. You do that by downloading the so-called "jnlp" file from Birst. After pressing launch in the Birst admin console (in my case on the created "HANA" connection), you download the jnlp file and place it in the root directory of BirstConnect:




We're almost there. The last step is to change the batch file which will be fired off to make the connection for us. In the commandline folder of BirstConnect you will find a file called cmdUI.bat which needs to be changed:


set JAVA_HOME=C:\Program Files\Java\jdk1.7.0

set BirstConnect_Home=C:\BirstConnect

"%JAVA_HOME%\bin\java" -cp "%BirstConnect_Home%\dist\*;%BirstConnect_Home%\dist\lib\*" -Djnlp.file="%BirstConnect_Home%\ef989147-2c42-496d-bbe4-4858e58be40c.jnlp" -Xmx1024m com.birst.dataconductor.DataConductorApp


Important are:

  • Point JAVA_HOME to the correct directory where you installed Java; in my case version 7 of the JDK.
  • Point BirstConnect_Home to the BirstConnect installation; in my case it was installed in the root directory.
  • Point -Djnlp.file to the jnlp file you just downloaded. After that, we can make our connection to HANA!


After running the cmdui.bat file you will be presented with the following screen where you can make the connection to your HANA environment:




I had quite some issues when using the connection for the first time, as I used the HANA "SYSTEM" user. As our HANA system has over 50,000 tables and Birst at the time of this blog has no possibility to limit the number of tables based on the schema, I had to restrict access via user authorization. KONIJNR (what's in a name ;-) has limited access to the HANA tables and is used in the connection.


After pressing the OK and Save buttons, the config is stored on the Birst server and we are good to go to connect to HANA:





Accessing HANA tables in Birst

Go back to your Birst space and add a new data source. You will find the defined live source, which can be used to connect:




If you look at the BirstConnect console, you can follow what the connection is doing:




In my example I will use an analytic view showing "US wage statistics". Unfortunately, Birst has no way to search for a specific table or view, which means I had to plough through the list to find the source I needed. This is something which, in my opinion, needs to be improved in a future release.


I can select the table in Birst where the import of the metadata will start:





The nice part of Birst is that it is able to pull in the complete metadata of the used view or table:




However, there are some restrictions. A very notable one is that hidden measures and attributes are still pulled in, even though they are hidden in the HANA view. This may confuse end users. The good part is that Birst is able to remove those columns in the modify data source view in order to circumvent this. A second restriction is that the field labels are not taken over, only the technical names. In real life that means column names will have to be renamed into semantically friendly names:




An interesting option in Birst is that it can use caching to make result retrieval even faster. I did some checks, and ticking the box will indeed instantly refresh reports on the same selections. To do a small performance test, I will switch it off:




Once the SAP HANA object is in Birst and adjustments have been made, the measures and attributes can be interacted with via the Designer, Visualizer and Dashboard modules.


Using the visualization option and selecting weekly earnings per state name shows the results instantaneously. Not bad for a select on over 2 million records without using the cache!




The query fired off to HANA shows the same instant results:




In the next blog my colleague Gerard Bot and I will deep-dive into the visualization capabilities of Birst, and we'll also do some advanced modelling using HANA as a source.


Thank you for reading this blog!



To the point:

Log in to SAP, generate sample data for the SNWD* tables using transaction SEPM_DG, then import the tables and data into HANA.

No need to read more.


Long story


As many of you know, there are no tables from the HA300 examples in a fresh HANA installation. You can import the SHINE model, though.

I recommend using SHINE even for the HA300 examples. It will take some time going through the examples, but that time will be well spent thinking in "modelling mode".

Every table from the HA300 examples has a similar table that can be used instead. There are no NODE_KEY and PARENT_KEY fields, but that is not so important.

If you must use the HA300 data model, here are the steps:

1. Log in to SAP and check the tables of interest, SNWD* in SE16N or SE11. They are probably empty (for instance table SNWD_BPA_CONTACT).

2. Use the SEPM_DG transaction in SAP to generate sample data.

Check out: http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/d0a7f673-59da-2f10-6e8b-bc7fe5dd4d0f?QuickLink=index&…

3. Re-check the table in SE16N.

4. Use the data provisioning method you prefer to load the tables and data into HANA.

You can use SAP Data Services; SAP LT Replication Server would be overkill.

IMPORT flat file

The flat-file import scenario is simple and you don't need extra tools.

For each table you want to import into Hana:

1. Download the table to the file system: in SE16N, download the table to xls (change the header names if necessary).

2. In SAP HANA Studio: File -> Import -> SAP HANA Content -> "Data from Local File" (change data types if necessary).


An alternative way to download a table:

You can download a table to the file system with function module SAP_CONVERT_TO_XLS_FORMAT, or write an ABAP program (you can tweak something I used for csv download to write your own code).

Hope it helps,

    Dubravko Katulic.

SAP HANA DB update paths



An upgrade from maintenance revision 69.01, 69.02, 69.03, 69.04, 69.05, 69.06 or 69.07 to SP revision 70.00 (SPS 07) is not possible due to the incompatibility of these versions.

An upgrade from maintenance revision 69.04 to SP revision 70.00, 71.00, 72.00 or 73.00 is not possible due to the incompatibility of these versions.

An upgrade from maintenance revision 74.01, 74.02 or 74.03 to SP revision 80.00 (SPS 08) is not possible due to the incompatibility of these versions.

An upgrade from maintenance revision 74.03 to SP revision 81.00 (SPS 08) is not possible due to the incompatibility of these versions.





Current maintenance revision               Possible target SP or maintenance revisions


Revision 69.00                                    Upgrade to revision 70.00 (or any higher SP or maintenance revision)

Revision 69.01                                    Upgrade to revision 71.00 (or any higher SP or maintenance revision)

Revision 69.02                                    Upgrade to revision 71.00 (or any higher SP or maintenance revision)

Revision 69.03                                    Upgrade to revision 71.00 (or any higher SP or maintenance revision)

Revision 69.04                                    Upgrade to revision 74.00 (or any higher SP or maintenance revision)

Revision 69.05                                    Upgrade to revision 74.01 (or any higher SP or maintenance revision)

Revision 69.06                                    Upgrade to revision 74.01 (or any higher SP or maintenance revision)

Revision 69.07                                    Upgrade to revision 74.01 (or any higher SP or maintenance revision)

Revision 74.01                                    Upgrade to revision 74.02 (or any higher SP or maintenance revision)

Revision 74.02                                    Upgrade to revision 74.03 (or any higher maintenance revision) and revision 81 (or any higher SP revision)

Revision 74.03                                    Upgrade to revision 82.00 (or any higher SP or maintenance revision)

Revision 74.04                                    Upgrade to revision 82.00 (or any higher SP revision)


Please note: The revision numbers 75 to 79 are skipped.


The following table lists the SAP HANA Revisions to which this applies:


SAP HANA Datacenter Service Point                SAP production system verified SAP HANA Revision

SPS 07 (released in March 2014)                       SAP HANA database Revision 73

SPS 08 (released in August 2014)                       SAP HANA database Revision 82

SPS 09 (planned for March 2015)

SPS 10 (planned for September 2015)


Please note the following -


-> If you upgrade from a Revision prior to SPS 06, first upgrade your system to the latest SPS 06 Revision (Revision 69.07) before upgrading to this Revision.



->In order to run the SAP HANA Database Revision 80 (or higher) on SLES 11, additional operating system software packages are required.


-> With SPS 09, "SAP HANA STUDIO 2" arrives.





In the latest release (SAP HANA SPS09) SAP delivered a new SAP HANA feature: Multitenant Database Containers (MDC). I am personally excited by this news and I see quite a huge potential there. But what are the real use cases for this feature? Does it make SAP HANA a true cloud product which can be used to serve different customers? Or is it a feature which helps one customer consolidate his workloads?


I get these questions quite often and honestly I have not yet formed my own opinion. Although this might be a little illogical, I decided to write a blog on this topic because it might be a good way to draw some attention to the point and maybe (hopefully) trigger some discussion.


This blog contains my own ideas and opinions; it does NOT reflect the opinion of my employer in any way.


Concept of Multitenant Database Containers (MDC)

MDC containers are nicely explained in many other materials, blogs and articles, so I will not attempt to document something that is already covered. For an explanation I can point, for example, to the SAP HANA Master Guide (page 25) available here: http://help.sap.com/hana_platform


In short: with the latest SAP HANA release you are able to create multiple separate database containers which listen on specified ports and which are independent of each other. This brings many benefits over the traditional MCOD and MCOS deployment models (see the SAP HANA Master Guide mentioned above for definitions and explanation).


I would not hesitate to say that this new option might be seen as a replacement for MCOD and MCOS, making them obsolete, and I would not expect big disagreement from the community.


Can SAP HANA be used for multiple customers?

But does this feature really replace virtualization? Can one SAP HANA installation be used by different customers? Is this concept safe enough?


Currently I would be very careful about deploying SAP HANA in this way. By saying this I do not mean it is not possible; all I am trying to say is that extra effort is required before such a deployment can be used.


What is my main concern? Typically, shared environments offer very strong separation achieved at the network level. Customers really use the same infrastructure, however this infrastructure is configured in such a way that a network packet cannot leave one tenant for another tenant, unless of course this is desired and intentionally configured, and even in such a case these packets travel across one or more firewalls verifying that this traffic is really expected.


This is very important for security, because humans are very creative and tend to find the most unbelievable ways to break into places which were (until that moment) seen as impenetrable.


Probably all hypervisors (including VMware) offer this strong separation. Individual virtual machines (VMs) have their own IP addresses, and the hypervisor ensures that packets are delivered only to the particular VM expected to receive them.


The issue with SAP HANA being used by multiple customers is that such strong separation at the network level is not possible. I have no reason not to trust SAP when they say that SAP HANA is internally multitenant. But I know for sure it is not externally multitenant at the network level; it simply cannot be on its own at this stage. It is still one environment accessible by many customers. If a customer can connect to one port (associated with his database container), then there is a chance he might be able to connect to another port which is associated with the database container of a different customer, at least if no additional actions are taken to secure the setup.


What could be done to improve such a setup? After talking to different people I found that there are a number of ways to increase its security.


For example, you might encrypt communication channels and encrypt the storage to make it harder to access the data of other tenants. However, this does not block access to the other tenants; it only makes it more difficult.

Another alternative might be to put firewalls around SAP HANA to filter the traffic and ensure that each tenant is blocked from connecting to ports (representing database containers) that do not belong to him. This might be a working solution, however it increases the cost and overall complexity of the solution. It might also impact the bandwidth and latency of particular flows, spoiling performance. And last but not least, the effort to automate such a setup increases considerably.


The last area worth mentioning is the openness of SAP HANA itself. SAP HANA is not "only" a database; it is more than this: a platform in which we can develop applications. From a security perspective, however, this brings a lot of risks. I am not a SAP HANA developer, so I might be wrong here (and feel free to correct me if you think so), but I can imagine a smart developer coding an application which allows him to connect to a port belonging to another tenant's database container, a network flow which might not be controlled by any firewall because it stays on the same server.


Bottom line – I see all of the options above only as obstacles that make it more difficult for an attacker to breach database containers belonging to other tenants. And honestly, at this point I do not know what the best infrastructure architecture is for securely deploying SAP HANA shared by different customers.


And this is where I see additional effort being required. Either individual providers will have to figure this out themselves, or SAP can create a reference architecture.


Of course, such a reference architecture would boost MDC adoption among providers offering SAP HANA to their customers, while its absence will be a strong inhibitor. And since the objective of sharing workloads is to decrease costs, this will in turn impact the adoption of SAP HANA itself.


Is there any smarter solution?

I see the approach I described above as very complex and not very transparent, and I believe there is a better option – however, SAP would have to step into a new area in which they are not yet operating. This move might also have some drawbacks, which were described in the following blog: http://diginomica.com/2013/12/20/multi-tenant-multi-instance-saas-spectrum


Here I would like to stress that everything below is my own speculation about what could be done to make SAP HANA truly multitenant. This is NOT something SAP has suggested they plan to do, nor anything that is on their roadmap.


In my opinion, a major improvement would be for SAP HANA to adopt some principles from virtualization – in particular the Software Defined Networking (SDN) approach. There is no need to virtualize complete VMs – it would be sufficient for SAP HANA to associate each database container with its own IP address and then ensure routing of the network packets to the right destination. In short, it would provide a network service similar to what the VMware hypervisor provides to individual VMs.


On top of this, SAP HANA would need options similar to VMware's for defining internal routing, so that it is clear which database containers inside one SAP HANA installation belong to the same customer and are allowed to see each other, and which are blocked at the internal network level (inside the SAP HANA instance).


Why is this critical? Because, if done properly, it pushes the separation between tenants down to the (albeit virtualized) network level, ensuring that no breach can happen at the application level – and all this without the need to build overcomplicated network architectures.


It would also enable additional features known from the virtualized world (here I will use VMware as a reference). I can imagine a feature similar to vMotion, where a database container could be moved to another node without any disruption – as the IP address would remain the same, it could be a stateful move completely transparent to external applications. Or a feature like VMware Distributed Resource Scheduler (DRS), where SAP HANA itself could relocate containers based on actual utilization while respecting preconfigured rules. Or a feature like VMware Fault Tolerance, where a container would be internally redundant, preventing any disruption should one node fail.


All this could be complemented by allowing different revisions on individual nodes – which could help ensure that updates are completely free of downtime: nodes would be upgraded one by one, and containers would be relocated without any disruption to the node with the latest revision.


In summary, such a step might open up quite a lot of options for SAP HANA, turning it into a virtualization layer of its own – completely separating the application (database container) from the underlying infrastructure (hardware, OS, instance processes).



To summarize – I believe that SAP HANA Multitenant Database Containers (MDC) are an interesting option for consolidating workloads for large customers with multiple SAP HANA landscapes, or in other similar situations where strong separation is not that critical.


On the other hand, I am not yet convinced that SAP HANA MDC containers can be used (at least not out of the box) as a shared solution for different customers on the same SAP HANA installation. It might be possible – but not without a very careful risk assessment and an infrastructure architecture that ensures full separation of the individual clients.


I do not know how ambitious SAP is with SAP HANA, or whether they really intend to turn it into fully multitenant software, but I am curious to see how things develop with the next releases of SAP HANA.

Hi All,


Just to share the results of some simple queries performed on an in-memory Column Store (CS) table versus a SAP Dynamic Tiering Extended (ES) table on disk storage.



HANA DB Version - HANA SPS09 revision 91

SAP Dynamic Tiering - SAP IQ 16 SP9



a) RLV (delta) enabled on Extended Storage

b) The number of entries in the CS table and the ES table is identical, as the data in the ES table was copied from the CS table.
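As a hedged illustration of point b), the statements to create such an Extended Storage table and copy the column-store data into it might be composed as in the sketch below. All object and column names are invented for illustration; only the "USING EXTENDED STORAGE" clause reflects the Dynamic Tiering DDL syntax as I understand it.

```python
# Hedged sketch: compose the (hypothetical) DDL and copy statement for a
# Dynamic Tiering extended table. Object names are illustrative only.

def build_statements(schema, cs_table, es_table, columns):
    """Return SQL statements creating an extended table and copying the
    column-store data into it."""
    cols = ", ".join(f"{name} {dtype}" for name, dtype in columns)
    create_es = (
        f'CREATE TABLE "{schema}"."{es_table}" ({cols}) '
        f"USING EXTENDED STORAGE"
    )
    copy = (
        f'INSERT INTO "{schema}"."{es_table}" '
        f'SELECT * FROM "{schema}"."{cs_table}"'
    )
    return [create_es, copy]

stmts = build_statements(
    "DEMO", "SALES_CS", "SALES_ES",
    [("ORDER_ID", "INTEGER"), ("AMOUNT", "DECIMAL(15,2)")],
)
for s in stmts:
    print(s)
```

The statements themselves would then be executed through any HANA SQL client.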


-> Column Store


-> Extended Storage


Simple queries performed and their results:






From the results above, we see that a complete scan (select *) is faster on Extended Storage, while queries with a selection predicate are much faster on the Column Store.


This is probably because Extended Storage is ROW-BASED, as suggested by the visualized plan below: a Remote Row Scan is used for all queries on the Extended Storage table.


I tried to figure out more about how exactly Dynamic Tiering works for its Extended Tables, especially during data insert/import. Unfortunately, I couldn't dig deeper, as there is no useful information written to the esserver trace, the indexserver trace, etc., nor to the log files in the SAP IQ directories.


What I can see is just the size growing on the file system, in the HANA admin cockpit and in the M_ES_* views.


Input for this missing technical piece on SAP Dynamic Tiering Extended Table is greatly welcomed and appreciated.


Nicholas Chang



The purpose of this document is to provide a list of the top knowledge base articles (KBAs) and SAP Notes (SNotes) from the HANA components.  I'll be including the top KBAs of the month and also the most recently added and updated KBAs/SNotes.  The goal here is to keep you up to date on the most frequently searched and the most recently added/updated HANA issues that we are seeing in Product Support.


Please leave any feedback in the comments section.


20 Most Recently Added/Updated SAP Notes and Knowledge Base Articles (Last Refresh Jan 12, 2015)


Note Number – Description
2029252 – Compatibility information for SAP HANA and SAP Operational Process
1925684 – ABAP adjustments for the new Embedded Statistic Server
1793345 – Sizing for SAP Suite on HANA
2031385 – SAP Release Note for SL Toolset 1.0 SPS 13
888210 – NW 7.**: System copy (supplementary note)
2055470 – HANA on POWER planning and installation specifics central note
1650046 – IBM SAP HANA Appliance Operations Guide
2067859 – Potential Exposure to Digital Signature Spoofing
2043039 – SAML 2.0 Authentication via HTTP Request Header
2068693 – Replacing Key Pairs in SAP NetWeaver Application Server for ABAP and SAP
2080798 – SAP Lumira Server 1.19 gives errors after upgrading SAP HANA to Revision
1924115 – DSO SAP HANA: partitioning of change log tables
1847431 – SAP NetWeaver BW ABAP Routine Analyzer
1908367 – SAP NetWeaver BW Transformation Finder
2043919 – DMIS Installation Error: RCG_BAG_RSSOURCE Missing
2015986 – HANA Indexing for the SAP BankAnalyzer
1883147 – Use of a third-party load balancer with XS
1934114 – SAP HANA DEMO MODEL - SHINE Release & Information Note
2076842 – SAP_HANA_Accelerator_SAP_ASE 1.0 Release Notes Information
1577128 – Supported clients for SAP HANA



** The hyperlinks will require access to the SAP Service Marketplace **

I happened to be in Las Vegas when Steve Lucas and Brad Peters gave an introduction to Birst running on the HCP during the TechEd (yeah, yeah && D-Code ) keynote. You can find the recording here:



An interesting combination, which came (at least to me) as a surprise. Why on earth would Steve go on stage to give a competitor "kudos", I thought? I believe it's quite simple: Birst gave SAP a big opportunity to showcase the possibilities of the HCP to run 3rd-party applications, even when that meant showing competition to SAP BW and Business Objects. Honestly though, Birst is not BW, nor is it Business Objects. It can certainly be a full BI suite running in the cloud, with a multitude of possibilities to be used as a federator over a lot of sources coming from enterprise software and beyond. To top that off, the reporting suite is also fully cloud-enabled: running and building reports on Birst's semantic layer is just a click away from the modelling and ETL possibilities. Next to running Birst in a public cloud, Birst can also be deployed in a private cloud by implementing the company's virtual appliance (a VM-based instantiation of the Birst cloud offering).







It’s difficult to compare Birst to any of the SAP solutions. It’s more sophisticated than SAP’s BI OnDemand SaaS solution, but feels less sophisticated than BW and Business Objects. I believe Birst does not want to be a competitor to those products; it’s a different solution with a different use case. I believe it can help customers that have a vast number of data sources and BI solutions and need a way to relatively easily combine those and report on top of them, all from a single solution running in the cloud.


Birst itself was founded in 2004 by Siebel veterans Brad Peters and Paul Staelin, and the Birst SaaS solution originates from 2009. As a startup, Birst raised $64 million over the last couple of years to expand its business. Its customers number in the thousands, and it was named a "Challenger" in the most recent Gartner Magic Quadrant for Business Intelligence and Analytics Platforms.


I contacted Birst after the keynote and asked them to showcase their solution at sitNL in Den Bosch which they were happy to do. You can read the details about their session here: Relive the 6th #sitNL


sitNL also gave me the opportunity to ask Birst whether I could try out their product and combine it with HANA. I’m happy to say they again agreed to help me out, and supplied me with an Amazon AMI to be able to put Birst to the test.


This blog is the start of a series in which I will deep-dive into two scenarios using Birst with HANA:


1. Using HANA as a datasource for Birst by using BirstConnect



2. Using HANA as the database for Birst





This second option is actually the same as running Birst on the HCP. In my blog I will connect Birst to the Interdobs Amazon HANA One instance and show how Birst can leverage the speed of HANA at a fraction of the cost of running an on-premise solution.


Stay tuned for part 2!



You have probably seen a few blogs on implementing Year to Date (YTD) calculations in HANA. A couple of these can be found at:


Implementation of WTD, MTD, YTD Period Reporting in HANA using Calculated Columns in Projection


How To...Calculate YTD-MTD-anyTD using Date Dimensions


These work well; however, I have created a simplified version of YTD calculations, which I used to deliver a proof of concept at a customer.  The reason I call this a simplified version is simply that it has fewer steps and is easier to implement with any data set.


As different models have different requirements, we make a few assumptions here:


  • The financial year is from 1st Jan to 31st Dec
  • The year to date calculation is shown on a monthly basis.
  • This model looks at Current YTD and Previous YTD calculations
  • This model was built in December 2014; therefore, it refers to current year as 2014.
  • Time dimension data will be created.
  • The time dimension data will be joined with the Sales table.


Creating Simple Time Dimension table:


YTD is a time-based calculation, starting from the beginning of the current year and continuing up to the present day. The following table is created to reference the months in a year via a key field called ‘FLAG’. The ‘FLAG’ column is required to filter the data according to the month. This column can be renamed to ‘Month’.


To create the Time dimension table, see SQL below:





The data for the table is attached in this blog below.


The table contains data for 12 months, each month having 12 flags. The table should look like below once the data is loaded.



If the requirement is Quarter to Date (QTD), then a quarter column can be added to the time dimension table.
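The FLAG mechanism can be made concrete with a small Python sketch. This is my reading of the table layout (a reconstruction, not the author's actual SQL, which is shown only as a screenshot): month m is stored once for every FLAG value from m to 12, so filtering on FLAG = N keeps exactly the months 1 to N.

```python
# Hedged sketch of the FLAG-based time dimension: month m is paired with
# every FLAG value from m up to 12, so an equality filter on FLAG acts
# as a "months up to N" filter after the join with the fact table.

def build_time_dimension():
    """Return (month, flag) rows for one calendar year."""
    return [(m, f) for m in range(1, 13) for f in range(m, 13)]

rows = build_time_dimension()

# Filtering on FLAG = 9 yields the months January..September:
months_for_sep = sorted({m for (m, f) in rows if f == 9})
print(months_for_sep)  # [1, 2, 3, 4, 5, 6, 7, 8, 9]
```

Joined to the sales table on MONTH, a prompt value of FLAG = 9 therefore duplicates nothing and simply restricts the result to the year-to-September rows.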


Create a calculation view:


We assume that an analytic view containing the fact table (the sales table) has already been created.


Time data generated by HANA can also be joined into the model to bring in fields such as Year, Month, Quarter, etc.


  1. In order to calculate the start of the year and the end of the year, the following calculated columns can be created in the projection node:





CY_START_OF_YEAR: This calculated column computes the start of the year. It takes the 4-digit year from the current date and then appends 01-01, i.e. 1st of January. The data type of CURRENT_DATE is date.




CY_DATE: This calculated column converts the CY_START_OF_YEAR column into the ‘Date’ data type.



PY_CURRENT_DATE: This calculated column computes the equivalent date in the previous year. The ‘addmonths’ function takes the current date and subtracts 12 months.


PY_START: This calculated column extracts the 4-digit year from PY_CURRENT_DATE to get the previous YEAR.


PY_DATE: converts PY_START to the date data type.


  2. Join the sales analytic view with the time dimension table created earlier.  Here MONTH is joined with MONTH.


  3. Create the final two calculations at the aggregation node to get the CY_YTD orders and PY_YTD orders.


CY_YTD: In the calculation below, the code looks at the DATE_SQL column, which holds the date the orders were created. If DATE_SQL falls between the start of the year and the current date, the relevant orders are included. The data type of DATE_SQL is date, and it comes from the time data generated within HANA.



PY_YTD: This calculation does the same as above, but for the previous year.
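The whole chain of calculated columns and the two conditional sums can be sketched in plain Python. This is my reconstruction of the logic described above, not the actual HANA expressions (which appear only as screenshots in the original post); note that a real addmonths also handles month-length edge cases such as leap days, which this sketch ignores.

```python
# Hedged sketch of the YTD logic: CY_DATE / PY_DATE mark the starts of
# the current and previous year, and the two sums mimic the conditional
# aggregations CY_YTD and PY_YTD.
from datetime import date

def ytd_totals(orders, current_date):
    """orders: list of (order_date, amount). Returns (CY_YTD, PY_YTD)."""
    cy_start = date(current_date.year, 1, 1)               # CY_DATE
    py_current = date(current_date.year - 1,
                      current_date.month, current_date.day)  # addmonths(-12)
    py_start = date(py_current.year, 1, 1)                 # PY_DATE
    cy = sum(a for d, a in orders if cy_start <= d <= current_date)
    py = sum(a for d, a in orders if py_start <= d <= py_current)
    return cy, py

orders = [(date(2014, 3, 15), 100), (date(2014, 11, 2), 50),
          (date(2013, 2, 10), 70), (date(2013, 12, 24), 30)]
print(ytd_totals(orders, date(2014, 12, 1)))  # (150, 70)
```

The 2013-12-24 order is excluded from PY_YTD because it falls after the previous-year cut-off (2013-12-01), which is exactly the "same period last year" behaviour the model aims for.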



  4. (Optional) Create an input parameter for entering the month as a filter. In this model, the user can either be prompted to enter the month, or the selection can be made at report level. Notice that the input parameter is on the FLAG column, as FLAG is used as the month indicator. The input parameter looks like below:





Below is a summary of the results of the YTD calculations. We want to see orders for the current year and the previous year, with the FLAG column as a filter. The results are shown in the Data Preview of HANA Studio; to enable this, click on Data Preview and then move to the 'Analysis' tab. Drag and drop the relevant dimensions and measures.


  1. No Filter – when there is no filter, the results are shown for all the months.


  2. Filter on month 9 – results are shown from January to September for the current and previous year.

  3. Filter on month 6 – results are shown from January to June. Notice that the second image shows the results with the Date field added, confirming that orders are pulled only from January to June.


This example gives you an insight into how the model solves the YTD calculation in a performance-friendly way. It also shows that by creating a few calculated columns you can avoid writing complex SQL procedures.  The model has been tested with large data sets, as all the calculations take advantage of the calculation engine inside HANA and no extra wrappers are created.


The input parameter ensures that the filter is pushed down at the right level, avoiding the transfer of large sets of data.


Please feel free to comment and provide your suggestions. If you have any questions then please do not hesitate to contact me.

As a developer on SAP HANA XS, have you come across attachments, sales orders, project templates, etc.? Then this session is for you!


In the free webinar, you will learn how to use SAP Mobile Documents to manage, share and sync your files from SAP HANA XS.


From your HANA XS application you might want to save generated documents. SAP Mobile Documents provides a JavaScript API to manage files and folders along with an extended set of properties. You can then directly manage your documents through the SAP Mobile Documents standard clients, on both desktop and mobile.


For details and registration check out http://scn.sap.com/docs/DOC-60383.

Everyone knows that testing is important, and anyone who has been involved in software development is likely to have come across a range of testing tools. When you’re building an application you’ve got plenty of unit testing frameworks to choose from (ABAP Unit, JUnit, QUnit, etc.), but what if you’re just building a piece of an application? What if you are building a set of Hana procedures which stand alone, waiting to be integrated into a number of potential applications? How do you test just your piece?


In previous projects, I have used FitNesse to test applications, and also to run procedures and SQL commands against databases, so I investigated using this framework again. On searching, I came across references to DBFit, and discovered that this was the newest recommended way to test a database using FitNesse. DBFit offered the functionality and power of FitNesse, with additional out-of-the-box database operation support without the need to code fixtures.

DBFit meets HANA


Working with HANA means a lot of things, but importantly, for the purposes of DBFit, it meant that just a little bit of work was necessary to get our out-of-the-box solution to really work. DBFit comes with a number of connector classes to support different databases, and instructions on how to go about creating your own connector if your database is not supported.

To customise DBFit to work with HANA, a HanaEnvironment Java class is necessary, which provides a mapping between HANA database types (e.g. NVARCHAR) and java.sql types, and also implements two key queries – one which yields all the column metadata of a given table or view, and one which yields all the parameter information of a given procedure.

Under the hood


Creating a new HanaEnvironment class was as simple as implementing a new class which extended the AbstractDbEnvironment class provided by DBFit. This class does the bulk of the work in preparing queries and managing results, with the HanaEnvironment class just taking care of specific syntax and requirements for our environment.

The three most basic requirements to get the connector up and running were:


Adding HANA types to the java.sql type matching



Other connectors bundled with DBFit had examples of these type lists, so it was easy to create a list mapping HANA data types (e.g. NVARCHAR, NCLOB, etc.) to their java.sql types within the HanaEnvironment class.


A query to return table metadata


In order to allow proper mapping of table columns for querying the database, a getAllColumns method must be implemented in the HanaEnvironment class. At its most basic, this method searches the TABLE_COLUMNS system table in HANA to get information about the columns belonging to the given table.
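At its most basic, such a metadata lookup might be composed like this. A hedged sketch: the real connector is Java, and the exact column list is my assumption based on the HANA TABLE_COLUMNS system view, not taken from the DBFit source.

```python
# Hedged sketch of the kind of query getAllColumns might issue against
# HANA's TABLE_COLUMNS system view (projection and ordering assumed).

def all_columns_query(schema, table):
    """Build a metadata query for the columns of one table."""
    return (
        "SELECT COLUMN_NAME, DATA_TYPE_NAME, LENGTH "
        "FROM SYS.TABLE_COLUMNS "
        f"WHERE SCHEMA_NAME = '{schema}' AND TABLE_NAME = '{table}' "
        "ORDER BY POSITION"
    )

print(all_columns_query("MY_SCHEMA", "MY_TABLE"))
```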

Extra checking had to be implemented here to account for database objects that are defined using HANA repository objects (e.g. objects which do not take the form “MY_SCHEMA”.”MY_TABLE” but are defined as “my.package.object::MY_TABLE”). The code checks for the presence of a double colon, denoting this package structure.


A query to return procedure metadata



A similar method, getAllProcedureParameters, also has to be implemented in the HanaEnvironment class. This method searches the PROCEDURE_PARAMETERS system table for information about the given procedure. Special checking for the package structure (with the double colon notation) was also implemented here. Inverted commas are stripped from copies of the names of the procedure and schema to allow for searching the system table, but maintained in the actual schema and procedure name objects in order to ensure that they are called properly when reaching the DB.
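The name handling described in the last two sections – treating the dot as a schema separator only when the double-colon repository notation is absent, and stripping the inverted commas for the system-table lookup – might be sketched like this. Python for brevity; the real connector is Java, and its exact behaviour may differ.

```python
# Hedged sketch of HANA object-name normalization for metadata lookups.

def split_for_lookup(qualified_name):
    """Return (schema, object_name) suitable for querying the system
    tables: quotes stripped, and '.' treated as a schema separator only
    when the name is not a repository object (no '::')."""
    if "::" in qualified_name:
        # Repository object such as my.package.object::MY_TABLE – the
        # dots are part of the package path, not a schema separator.
        return None, qualified_name.replace('"', "")
    if "." in qualified_name:
        schema, name = qualified_name.split(".", 1)
        return schema.replace('"', ""), name.replace('"', "")
    return None, qualified_name.replace('"', "")

print(split_for_lookup('"MY_SCHEMA"."MY_TABLE"'))
print(split_for_lookup('my.package.object::MY_TABLE'))
```

The unstripped, quoted form would still be used when actually calling the object, so that case-sensitive identifiers reach the database intact.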



Unfortunately, there is one limitation that was identified during the development and use of this connector that remains outstanding – if the output of a procedure is a table variable, the DBFit framework cannot currently process this output.

An automated regression suite, at the click of a button


Developing the DBFit connector enabled our QA colleagues to write a comprehensive test suite for our procedures, which can be run at the click of a button. The flexibility of FitNesse allows us to group these tests with common setup and teardown pages, set global variables that can be inherited through multiple tests, and see simply and quickly whether changes have broken any of our existing functionality. The wiki syntax is easily understood by everyone, and the built-in fixtures from DBFit have allowed many database operations to be performed without explicit knowledge of SQL (e.g. calling procedures, verifying their output, etc.).

I want to use DBFit


DBFit is open source, so you can get it from its GitHub repository. There is information about how to build the project, as well as how to run DBFit, on the DBFit public site. You’ll need to make sure that you have a copy of the SAP ngdbc.jar (to be placed in the custom-libs folder) too.

