
SAP HANA and In-Memory Computing


More than 100 participants from over 50 companies across Europe, all working on SAP HANA operations topics, joined the SAP HANA Operation Expert Summit on April 20-21, 2015, in Walldorf, Germany!

It was the late afternoon of a warm spring Monday in Walldorf, and the first participants to arrive were having drinks and snacks in the foyer of the event room in building 21.

They were waiting for the second SAP HANA Operation Expert Summit to be kicked off by Joerg Latza, Head of SAP HANA Platform Product Management.

The event room is dark and filled with silence as the newest SAP HANA trailer brings the audience's full attention to the screen. After Joerg heartily welcomed the attendees, the event started with three customer experience presentations from Geberit, Ferrero, and T-Systems, followed by a panel discussion to answer questions from the audience.

This kind of SAP HANA operations experience sharing was much appreciated by the participants.

The next two presentations of the early evening were given by Rudi Hois (Vice President for SAP S/4HANA), talking about how the evolution continues with SAP S/4HANA, and by Alan Priestley (Director of Strategic Marketing EMEA at Intel), giving an overview of the Intel roadmap.

After this intense, almost two-hour session, all attendees were invited to meet the SAP experts of the SAP HANA Development and Product Management organization at a networking dinner.

Tuesday started early in the morning and was completely reserved for deep-dive knowledge exchange through networking and group discussions.

With a mixture of 20-minute teaser presentations and 60-minute breakout sessions, the audience got the latest insights on Cloud, Landscape and Deployment Options, Sizing, Mission-Critical Data Center Operations, Multitenancy, Managing Large Volumes of Data, Lifecycle Management, Monitoring and Administration, and Troubleshooting.


Separated into smaller breakout groups, the customers and partners had time to provide feedback on what's good and what needs to be improved, discuss planned features, raise questions, and exchange their own experiences.

In the early evening, with still lovely sunny spring weather outdoors, the event closed with a gathering of all participants and SAP experts over soft drinks, beer, and snacks.

I know I already wrote this last year, but I can only repeat myself for the SAP HANA Operation Expert Summit 2015:

The IT experts attending this summit really impressed us at SAP!

Thanks a lot for participating, for sharing your insights and experiences and making this happen!

And if you are interested in upcoming SAP Operation Expert Summits in 2015, my counterparts around the globe are currently planning and preparing the following events:

  • Melbourne, Australia on June 10, 2015
  • Bangalore, India on June 16, 2015
  • Two additional events in Newtown Square and Palo Alto, USA, in the May/June timeframe. These events are combined with a new format called the SAP HANA Developers Expert Summit, which will take place on the second day.

Stay tuned!



The new Cisco Validated Design document is out:



It describes a holistic ACI-ready data center design based on FlexPod for SAP applications, including HANA. It also gives an introduction to Cisco and NetApp hardware. Definitely one of the most comprehensive design guides for SAP solutions today!

Generating Core Data Services files with Sybase PowerDesigner



  • What is Core Data Services?
  • The basics: Creating a project, a physical data model and the data model
  • Installing the HDBCDS Extension File (HDBCDS.xem)
  • The funny part: Generating the CDS definition file
  • Using Domains in the Data Model (CDS Basic Types)
  • Working with multiple Contexts
  • Conclusion




What is Core Data Services?

As described in the SAP HANA CDS Reference guide, Core Data Services (CDS) is an infrastructure that can be used by database developers to create the underlying (persistent) data model which the application services expose to UI clients.

In that sense, CDS is similar to SQL DDL, but the key advantage is that CDS definition files are created as design-time objects, which means they can be transported together with HANA models.

Design-time objects for the data model definition, the logic, and the UI layer are indispensable for native HANA apps.

With this PowerDesigner extension you can create CDS (.hdbdd) files from physical data models.

So, let's take the following example from the SAP HANA CDS Reference Guide:
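The example listing is embedded as an image in the original post; a minimal sketch in the same spirit (entity and column names are illustrative, not the exact guide listing) might be:

```
namespace com.acme.myapp1;

@Schema: 'MYSCHEMA'
context Books {
  entity Book {
    key BookId : Integer;
    Title      : String(100);
    AuthorId   : Integer;
  };
  entity Author {
    key AuthorId : Integer;
    Name         : String(80);
  };
};
```

On activation, each entity becomes a column table named after its full repository path, e.g. "com.acme.myapp1::Books.Book", in the schema given by the @Schema annotation.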


After activation, this code creates two tables in the schema MYSCHEMA.

The CDS guide explains in detail all the elements that compose a CDS file (.hdbdd).


The basics: Creating a project, a physical data model and the data model

Go to File -> New Project and give a name to the project



1)      Go to File -> New Model and select Physical Model under the Information category.


2)      Create the Book and Author tables



3)      Create the SCHEMA

Go to Model -> User and Roles -> Schemas



4)    Set the Table’s owner

Go to Model -> Tables


And set the Owner column to MYSCHEMA



5)      Preview the Data Model (press Ctrl+G or Right click on the Model -> Properties )



Installing the HDBCDS Extension File (HDBCDS.xem)

Before proceeding we must create a folder to store the .XEM extension file.

Go to Model -> Extensions -> Attach an Extension


Click on Path and select the folder that contains the .XEM file



After the Path is added, select the HDBCDS extension




Once the HDBCDS extension is added, a new object is added to the toolbox for creating contexts



The funny part: Generating the CDS definition file


Before generating the .hdbdd file from the PDM we must create a Package and a Context.

Using the package tool in the toolbox under SAP HANA Database 1.0 category, create a package named com.acme.myapp1




Use the context tool to create a context named Books.




Set the context Books as main context for the data model.

Go to Model Properties (right click on DemoCDS Model -> Properties)
In the CDS Tab set the Books context for the Context property



Finally, generate the CDS Definition files. Go to Tools -> Extended Generation


In the Targets tab select HDBCDS, leave the default Selection (all objects), and in Generated Files select the .hdbdd and .hdbschema files.
Set an output directory and click OK to generate the two files.
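For reference, the generated .hdbschema file is just a one-line schema declaration; assuming the MYSCHEMA schema defined earlier, it should look roughly like this:

```
schema_name="MYSCHEMA";
```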


Using Domains in the Data Model (CDS Basic Types)

CDS supports the definition of shared types, so your column definitions can reference these types instead of declaring the data type explicitly.




To use this feature, go to Model -> Domains and define a Domain, for example “TextoLargo” defined as VARCHAR(100)


Set the Domain in the columns definition, instead of setting the Data Type.

Go to Model -> Columns



And set the Domain value to "TextoLargo" for those fields of VARCHAR(100) type.

Tip: If the Domain column is not present in the Columns properties, press Ctrl+U and select "Domain".

Go to Tools -> Extended Generation and generate the model.hdbdd file once again.
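The regenerated .hdbdd file should now declare the domain as a reusable CDS type that the columns reference; a sketch, assuming the TextoLargo domain above (entity and column names are illustrative):

```
namespace com.acme.myapp1;

@Schema: 'MYSCHEMA'
context Books {
  type TextoLargo : String(100);
  entity Book {
    key BookId : Integer;
    Title      : TextoLargo;  // references the shared type instead of String(100)
  };
};
```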



Working with multiple Contexts


As described in the CDS Guide, we can define multiple contexts for grouping tables that belong to the same functional group.

For instance, let's rename the Books context to "Main", and create two more contexts named "Datamodel" and "Application".

Go to the Book table's properties, select the CDS tab, and set the Context to "Datamodel".



Create a third table named UILogic and set the "Application" context in the CDS tab




Finally, generate the hdbdd file to see the context definition.
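With the contexts assigned as above, the generated .hdbdd nests the sub-contexts inside the main context; a sketch (column definitions are illustrative):

```
namespace com.acme.myapp1;

@Schema: 'MYSCHEMA'
context Main {
  context Datamodel {
    entity Book   { key BookId : Integer; };
    entity Author { key AuthorId : Integer; };
  };
  context Application {
    entity UILogic { key LogicId : Integer; };
  };
};
```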




Conclusion

It is possible to use Sybase PowerDesigner to generate SAP HANA CDS files. Using an enterprise modeling tool such as PowerDesigner gives more transparency in the model definition and simplifies data model maintenance. Little effort is needed to generate the CDS file once the PDM is defined.

There are some limitations in this extension; for example, structured types cannot be created.

If you want to enhance this extension, feel free to modify the code.

Go to Extensions -> HDBCDS -> Properties to see all the code behind this extension




Chris Dinkel, Director of IT for Deloitte Consulting, shares how real-time general ledger has transformed their finance functions and enabled them to streamline their service delivery to clients.




We hope you enjoy hearing Deloitte’s first-hand experience with mission-critical SAP HANA.  Please let us know your feedback in the comments below.


To get more real-world customer HANA stories, subscribe to our iTunes or SoundCloud feed for weekly podcasts that will cover multiple in-production customer use case scenarios for SAP HANA.


Also, if you’ve got a killer SAP HANA scenario and would like to share it on the HANA Effect podcast, please let us know.


Transcript: SAP HANA Effect Episode 14 FINAL


Sponsored by:


SAP loves to make a point of eating its own dog food, and so lowly SAP clerks like yours truly are offered the chance to participate in multiple user communities, be it SCN or one of the many SAP JAM groups.


In one of the SAP internal groups I recently answered a question that I believe will be interesting to SCN readers, too.

So this is it:


"... a  user want to have the client display in English format.

Decimal values are displayed in german format "123,5", but they want to have it the english format "123.5" - 'decimal point is point'.


I set the properties for the system in HANA Studio (or Eclipse) with Properties --> Additional Properties --> Locale "English".

I also tried to set the User Parameter 'LOCALE' for the user to "EN" or "EN-EN", also without effect.

Has anybody an idea, which client setting has to be chosen to get decimal values displayed in english format? ... "


To answer this question there are a couple of things to be aware of.


Pieces of the puzzle


Piece 1: SAP HANA Studio has a preference setting to switch result data formatting on or off.

Piece 2: SAP HANA Studio is an Eclipse-based product.

Piece 3: Eclipse is written in Java.

Piece 4: Java provides a rich set of API functions to format data, exposed via Formatter objects.

Piece 5: The Java Formatter objects use so-called Locales, which are objects that bundle localization-specific settings.


Putting the pieces together

Whether or not the SAP HANA studio formats the data in the SQL result set grid depends on the setting SAP HANA -> Runtime -> Result -> Format values.

studio result setting.png

Now, having activated the formatting, we can see that e.g. numeric data with decimal places gets formatted.


When I am logged on with a German Windows language setting, I will see the number 1234.56 printed as '1.234,56', as the thousands separator for Germany is the dot '.' and the decimal separator is the comma ','.

With English language settings this would have been the other way round.
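This behavior is easy to reproduce in plain Java; a minimal sketch using java.text.NumberFormat (the Studio may use different formatter classes internally, but the locale-dependent behavior is the same):

```java
import java.text.NumberFormat;
import java.util.Locale;

public class LocaleFormatDemo {
    public static void main(String[] args) {
        double value = 1234.56;

        // German conventions: '.' groups thousands, ',' separates decimals
        NumberFormat german = NumberFormat.getNumberInstance(Locale.GERMANY);
        System.out.println(german.format(value)); // prints 1.234,56

        // US English conventions: the other way round
        NumberFormat english = NumberFormat.getNumberInstance(Locale.US);
        System.out.println(english.format(value)); // prints 1,234.56
    }
}
```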


If you ask yourself "Why would I ever switch this off?", then it's probably good to know that formatting lots of data is not a trivial task and can take considerable time when printing your results. That might slow down your development cycle of re-running queries considerably...


"Stop talking, show us how it looks like!"


... I hear you say, so here you go:

Formatting Enabled, German locale settings

german form.png

Behold and compare to the output with formatting switched off:


Formatting disabled, output as raw as possible in SAP HANA Studio

default form.png

SAP HANA Studio provides a place where one can specify the session language for a user connection and it is even called 'Locale':

studio result language.png

Irritatingly, this setting only affects SAP HANA behavior within a SAP HANA DB session. That is, text joins and language filters use it to return language-dependent data.


An important part of understanding why this setting doesn't help here is realizing that the whole business of printing data to the screen is done by the client application, not by the database server. This includes formatting the output.


Technically speaking, to the database, numbers don't have a format.

In the database, numbers have a scale and a number of significant fractional digits.

If and how those are printed out is a different matter; just like your good old TI-30 would calculate with 11 significant digits internally, while displaying at most 8 of them.


Having said that, I would agree with the notion that when we have a setting called LOCALE, it should either change the 'whole experience' or there should be more specific setting options, something like an 'output locale'... (like here: API: Internationalization).


Anyhow, the point is: the LOCALE setting with the connection doesn't fix our formatting requirement.


Fiddling with the invisible pieces

So, we know that the data gets formatted via a Java Formatter, but apparently the LOCALE setting doesn't set this thing up for us.

Checking the Java SDK documentation (Locale and Formatter again), the avid reader figures out:


If the application doesn't specify the Locale for the Formatter, it uses the Locale that is currently active in the Java VM.


As it turns out, the default JVM behavior is to try to figure out the locale setting of the operating-system user running the JVM.

But there are options to override this, as we find here.


Working the puzzle from the edges

So, we can specify command line arguments when starting a JVM to set locale parameters:

-Duser.language=EN , -Duser.country=UK and -Duser.variant=EN


This is all great, but how do we start SAP HANA Studio with such parameters?


Putting in the final pieces

This last piece is actually easy.

Eclipse uses a parameter file called eclipse.ini, located in the folder where you find the Eclipse executable.

SAP HANA Studio simply uses a renamed version of those files, so the parameter file we're looking for is called hdbstudio.ini, just as the executable is called hdbstudio.exe.



Looking into this file we find that it contains a part that starts with -vmargs.

This section allows you to specify parameters for the JVM used by Eclipse, or in our case SAP HANA Studio.


Putting in the desired locale parameters like this


will provide the setting we are looking for.
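Since the screenshot is not reproduced here: the end of hdbstudio.ini, with the locale parameters appended after -vmargs, would look something like this (the existing entries in your file may differ; only the two -Duser lines are added):

```
-vmargs
-Xmx1024m
-Duser.language=en
-Duser.country=US
```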

Be aware that in order to save this file on MS Windows, you will need elevated privileges.


The final picture

To activate the new settings, a restart of SAP HANA Studio is required.

We can see that the formatting option now uses the specified locale.


Formatting Enabled, English locale settings

english format.png

Unfortunately the setting is active for all connections used by this SAP HANA Studio.

So we don't have a proper user-specific setting for the data formatting; a workaround at best.


This clearly shows that SAP HANA Studio is not a data consumer/end-user tool.

It's an administration and development tool, just as it has been positioned since day one.


Having the picture completed, we can take a last look at our puzzle and pack it away again.




There you go - now you know.






An alternative approach would have been to set the JVM arguments via environment variables.

This approach would allow you to easily create several different startup scripts that first set the parameters and then start SAP HANA Studio.

If you got the gist of this post, you shouldn't have any issues putting this together.
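A sketch of such a startup script for Windows, assuming a default installation path (the path is illustrative; JAVA_TOOL_OPTIONS is a standard environment variable that the JVM picks up at startup):

```
@echo off
rem Set the JVM locale options for this session only
set JAVA_TOOL_OPTIONS=-Duser.language=en -Duser.country=US
rem Start SAP HANA Studio with those options in effect
start "" "C:\Program Files\sap\hdbstudio\hdbstudio.exe"
```

A second script with different -Duser values then gives you a per-locale launcher without touching hdbstudio.ini at all.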

A few days ago I passed the C_HANATEC_142 certification.

I'll quote myself from http://scn.sap.com/blogs/HanaKahuna/2014/12/18/hanasup-certification-experience

"Best way to test this studying method and long time remembering is to take a HANATEC exam (cca 80% of material is the same as for HANASUP)."

C_HANASUP was in December 2014, C_HANATEC in April 2015: four months to forget what I had learned, without a current HANA project

(my daily job is ABAP and WD4A, with HANA development for fun when I have time).


In between I was not completely inactive in HANA: I prepared for and got certified as SAP HANA Development Professional (P_HANAIMP_142),

so I did have to reread and learn the Administration and Support material for that exam.

That was in February, so anyway two months is more than enough to forget the little details.


What did I do to prevent that?

Whenever I can, I read questions people ask on SCN. Real problems from real projects (more or less).

When I can, I engage, even if it's just posting where one can find the material to work out an answer.


I actually did not plan to take HANATEC, but there was an opportunity, so why not. As I said, 80% of the material is the same as for HANASUP.



There was only about one week to prepare for this exam. So I did what I thought was best, in light of the methods I described in my first blog post.

I read HA200 one more time.

I read the HANA installation guide one more time.


Time to test SQ4R

I used Re-Read and Review on my old Cornell notes for C_HANASUP.

I did not go through the HANA system doing the HA200 examples, because I know that basic stuff from working on a development system, setting things up for development or test, and tweaking things when needed.


Actual Exam

The classic 80 questions, 180 minutes.


Much harder for me than I anticipated (I'm no administrator). I will never again take an exam on such short notice.

I realize now that you just need time to get into an administrator's frame of mind to anticipate questions when reading the material.

There are just too many things that are SAP BC baseline knowledge and do not come naturally to me.



SQ4R is OK, better than cramming for sure. Long-term retention is much better. Rereading and reviewing are needed.


Is it worth it for a developer to study this?

Absolutely! You have to know your way around administration.

When I was an MS SQL developer, it was almost a rule that a developer is actually 90% an administrator, too. Nobody is going to review your security permissions for you while you wait.

So if you have the chance, learn all you can, with or without certification. Although having the certificate is not unimportant, it is more important to know things and have fun with your job.

Eight hours is too long if you do what you don't like.


Have fun!

SAP HANA system copy procedure: below is the HANA system copy (using SWPM) with the backup/recovery method from PRD to QAS.


--- The SID of HANA production is PRD, so the schema will be SAPPRD

--- The SID of HANA quality is QAS, so the schema will be SAPQAS


1. Take a backup of the HANA PRD system.

2. Copy/move the backup from the PRD host to the QAS host.

3. Ensure that QAS has enough space for the backup.

4. Download the latest SWPM.

5. Download the kernel.

6. Download the SAP HANA license for QAS.


Start SWPM and select Database refresh or DB Move




Profile Directory Of App server



Provide default Master Password for all users



Provide passwords for SIDADM & SAPServiceSID


Provide path of kernel Directory


Select Homogeneous System Copy method



Provide your DBSID as QAS; the DatabaseHost will be the QAS host name, and provide the instance number of QAS



SWPM will take the below schemas



Here the schema name should be SAPPRD



Provide SIDADM Password



Provide the location of the backup path as per point 2. The backup name should be the prefix of the complete data backup. I have used the default backup name (COMPLETE_DATA_BACKUP).


Provide the location of the SAP HANA QAS license as per point 6.



Provide DDIC password



Select the local client directory as the HANA client software path. Alternatively, you can select the central directory, depending on the requirement.


After clicking "Next" you will be asked to check the final list of parameters, and after confirming, the actual HANA refresh will start.



After successful completion, you need to update the hdbuserstore on the SAP application server; this connects the SAP application to the HANA DB.
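A sketch of the hdbuserstore update, run as the <sid>adm user on the application server (key name, host, port, and user follow the example SIDs in this post and are illustrative; 30015 assumes instance number 00):

```
hdbuserstore SET DEFAULT qashost:30015 SAPPRD <password>
hdbuserstore LIST
```

The DEFAULT key is the one the ABAP work processes use to connect; hdbuserstore LIST lets you verify the entry without exposing the password.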

Run report RS_BW_POST_MIGRATION to adjust the SAP HANA calculation views.


P.S.: I am not including the post-refresh activity steps; these are common to other refresh activities.


Reference Notes:


1844468 - Homogeneous system copy on SAP HANA

1709838 - BW 7.3 on HANA: System copy using backup and recovery



Pavan Gunda

Michael Harding, SAP Enterprise Architect at EMC, talks about their very ambitious and very fast HANA evolution, from sidecar reporting, to CRM on HANA, to Sales & Operations Planning. Make sure to follow @putitonhana to keep up with all their awesome HANA learnings.


We hope you enjoy hearing EMC’s first-hand experience with mission-critical SAP HANA.  Please let us know your feedback in the comments below.


To get more real-world customer HANA stories, subscribe to our iTunes or SoundCloud feed for weekly podcasts that will cover multiple in-production customer use case scenarios for SAP HANA.

Also, if you’ve got a killer SAP HANA scenario and would like to share it on the HANA Effect podcast, please let us know.


Transcript: SAP HANA Effect Episode 13

Sponsored by:


Join the "Application Development Based on SAP NetWeaver Application Server for ABAP and SAP HANA" by Thomas Jung and Rich Heilman

Monday, May 4, 1:00 p.m. – 5:00 p.m.




This session will provide an overview on how to leverage SAP HANA from SAP NetWeaver AS for ABAP applications that integrate with the SAP Business Suite. Speakers will explore concrete examples and best practices for customers and partners based on SAP NetWeaver AS for ABAP 7.4. This includes the following aspects: the impact of SAP HANA on existing customer-specific developments, advanced view building capabilities, and easy access to database procedures in the application server for ABAP; usage of advanced SAP HANA capabilities like text search or predictive analysis from the application server for ABAP; and best practices for an end-to-end application design on SAP HANA. Finally, with SAP NetWeaver 7.4, SAP has reached a new milestone in evolving the application server for ABAP programming language to a modern expression-oriented programming language. The new SAP NetWeaver Application Server for ABAP features covered in this session will include inline declarations, constructor expressions, table expressions, table comprehensions, and the new deep move corresponding.

Rita Lefler, Global BI Director at Tom's Shoes, walks us through their amazing BW on HANA migration. Hear how Tom's Shoes uses real-time analytics to maximize product availability and profitability.




We hope you enjoy hearing Tom's Shoes first-hand experience with mission-critical SAP HANA.  Please let us know your feedback in the comments below.


To get more real-world customer HANA stories, subscribe to our iTunes or SoundCloud feed for weekly podcasts that will cover multiple in-production customer use case scenarios for SAP HANA.


Also, if you’ve got a killer SAP HANA scenario and would like to share it on the HANA Effect podcast, please let us know.


Transcript: SAP HANA Effect Episode 12 TOMS_FINAL


Sponsored by:


In a series of 14 videos, the SAP HANA Academy's Tahir Hussain "Bob" Babar shows how to perform the most common SAP HANA Cloud Integration for Data Services tasks. SAP HANA Cloud Integration (HCI) is SAP's strategic integration platform for SAP Cloud customers. HCI provides out-of-the-box connectivity across cloud and on-premise solutions. Besides the real-time process integration capabilities, HCI also contains a data integration part that allows efficient and secure use of extract, transform, and load (ETL) tasks to move data between on-premise systems and the cloud.

Bob's 14 tutorial videos are linked below with accompanying synopses. Please watch the videos here or on the SAP HANA Academy's HCI-DS playlist.


HCI Data Services: Overview

Screen Shot 2015-03-25 at 10.48.40 AM.png

To open the series, Bob provides a quick chalkboard overview of how data services work within SAP HANA Cloud Integration. This series will show you how to install HCI-DS agents and how to build various datastores to connect to an SAP ERP or BW system, flat files, an OData provider, AWS, and a weather service in WSDL format. In addition, the series will teach you how to build tasks that extract data from these various sources, transform it using HCI-DS functions, and load it into a SAP HANA schema in the SAP HANA Cloud Platform.

Screen Shot 2015-04-08 at 11.40.58 AM.png

Bob finishes this introductory video by walking through the basics of the seven HCI-DS tabs.

HCI Data Services: Agent Install

Screen Shot 2015-03-25 at 10.50.16 AM.png

The second video in the series shows how to download, install and configure the data services agent on a Windows machine.


This video assumes you’ve already activated your HCI-DS account. Once logged in to HCI-DS, click the link to download agent package in the agents tab and download the 64 bit for Windows version from the service market place.


While following along with the simple installation, check the box next to "specify port numbers" to view the four port numbers that will be used for inter-process communication. For the agent to function we only need the HCP port open for outbound traffic. Leave all of the defaults while finishing the installation of the data services agent.


To configure the connectivity to HCI first log in with the administrator user. This is usually the SCN ID of your HCP account. You can allocate the administrator role to your HCP user from the administration tab of HCI-DS.


Next you will need an agent configure file. To find this go to the HCI-DS agents tab and select new agent. Name the agent and allocate it to an existing or new group. Download the configuration text file on the next screen and save it to your machine. Copy the configuration file and then create and save a new file that can be linked to the HCI-DS agent configuration page. 


Once you’ve uploaded the file select yes in the wizard to successfully start the agent. Refreshing the agents tab will display a green box that confirms the HCI-DS agent has been started and is properly connected.

Screen Shot 2015-04-10 at 9.54.48 AM.png

HCI Data Services: SAP HCP HANA Datastore

Screen Shot 2015-03-25 at 11.11.58 AM.png

Continuing the series, Bob shows how to create a datastore that connects to a SAP HANA schema in HCP. Access to the schema is secured through an access token generated via the HCP client console.


To create a HCP connection navigate to the HCI-DS datastores tab, click on the configuration option and enter the HCP administrator account name and schema ID. To get the access token, you must connect to the agent machine that stores your HCP client. In that machine open a command prompt window and set the proper proxies. Bob outlines the commands to enter that will eventually generate the access token. Paste the token into the datastores tab and save. Clicking connect will successfully connect to the HCP datastore.


Access to the various objects that will be created must be granted to the role we are using with HCP. After logging into the SAP Web-based Development Workbench, open a new SQL console and execute the line of code displayed below. This will generate a role name.

Screen Shot 2015-04-08 at 12.45.10 PM.png

Next, execute the SQL statement below using that role name. This will allow the user to insert, update, and delete on the schema. Now no security rights issues will occur when using HCI-DS to load data into the SAP HANA schema in HCP.

Screen Shot 2015-04-08 at 12.51.11 PM.png

HCI Data Services: OData Datastore

Screen Shot 2015-03-25 at 11.13.22 AM.png

Bob’s next video will take data from an OData provider and store it in a SAP HANA Schema in HCP. To build a new datastore click the add button in the datastores tab. Choose adapter as the type and OData as the adapter type. Set the depth to 2 so all the relationships can be expanded one level deep.


In a browser go to services.odata.org/ and click on the Read-Only Northwind Service link and change V3 in the URL to V2 as currently only OData version 2 is supported. 


Note you may need to change the proxy for the OData adaptor on your agent machine. If so, in the Start menu go to SAP Data Services Agent and then click on configure agent. Choose to configure adaptors and set the adaptor type to OData. Bob's agent is internal, so he adds wdf.sap.corp to his host, http, and https proxy in the additional Java Launcher text box.


Copy the modified Northwind Service URL and go into the datastores tab of HCI-DS. Build a new Datastore named OData with an OData adaptor type. Paste in the Northwind URL as the endpoint URL and click save to create the adaptor.

Screen Shot 2015-04-10 at 9.56.59 AM.png

Click the test the connection button to confirm the connection is working. The ultimate test is to see if you can import an object. So click on the import objects option, choose Alphabetical_list_of_products and click import.  Now you have created an OData datastore, set the proxies to successfully connect and imported OData services from the provider.


HCI Data Services: OData to HCP Task

Screen Shot 2015-03-25 at 11.14.30 AM.png

This tutorial video shows how to take data from an OData datastore and load it into a SAP HANA Schema in HCP.


First create a table in the SAP HANA Schema in HCP. Next build a project with a task that contains a data flow. This task's data flow will take data from the OData provider and store it in SAP HANA. Bob shows how to create an Extract, Transform, and Load dataflow that joins the data source to the target query.

Screen Shot 2015-04-10 at 9.59.32 AM.png

Mapping the OData input to the SAP HANA output is done by dragging input column names to their corresponding output column names. A green box will appear if the OData to HCP task has been executed successfully. The view history provides a trace log, monitor log and error log that displays what was processed.


HCI Data Services: Flat File Datastore

Screen Shot 2015-03-25 at 11.16.24 AM.png

The sixth video of the HCI-DS series shows how to configure the DS agent to extract data from a file that sits on a Windows client. First Bob configures a change to the directories of the SAP DS agent by adding a simple CSV file that contains employee names. After that, Bob builds a new datastore with a file format group as the type and a specific root directory. Now that a link to the file directory has been established, the file formats need to be built.


In this tutorial Bob elects to create the file format from scratch. Bob chooses comma as his column delimiter, default as his newline style, none for his text qualifier, and marks so that the first row contains column names. After Bob specifies a pair of columns as integer and varchar(50) he has finished setting up the file format.

Screen Shot 2015-04-10 at 10.00.57 AM.png

HCI Data Services: Flat File to HCP Task

Screen Shot 2015-03-25 at 11.17.30 AM.png

Continuing from the previous video this tutorial shows how to use HCI-DS to load data from a flat file (CSV) on a Windows client to a SAP HANA schema on HCP. In HCI-DS create a new task in the projects tab with the previously loaded flat file as the data source.


Next Bob builds a new target in the SAP HANA Web-based Development Workbench using a script he has already written that creates a column table named FILE_EMPLOYEES. After selecting HANA_ON_HCP as the target datastore, Bob saves the task and defines the data flow.


Bob imports the newly created table as the data flow's target object. Bob uses EMPLOYEES.csv as the source file and then joins it to the target query. After auto-mapping the columns by name, the data flow is created, verified, and executed.

Screen Shot 2015-04-10 at 10.03.14 AM.png

HCI Data Services: MySQL to HCP Task

Screen Shot 2015-03-25 at 11.18.40 AM.png

Bob shows how to use the HCI-DS platform to load data from a MySQL database on a Windows client into a SAP HANA Schema within HCP. Bob has built an ODBC user and connects to his MySQL schema. Bob creates a new datastore that has a MySQL database type and ODBC as the data source.


After importing the table, Bob creates a new task in the Projects tab. Bob selects his source and target and then builds a target table in the SAP Web-based Development Workbench. Bob imports the table as his target object in the data flow, adds a source table, and then joins it to the target query. Once mapping is completed and validation is successful, Bob executes the task.


Bob further verifies the connection’s success by inserting a new value into one of the rows of his MySQL table. After re-executing the task, Bob sees the new value displayed in the SAP Web-based Development Workbench.


HCI Data Services: WSDL Datastore


In the next HCI-DS tutorial video Bob demonstrates how to use HCI-DS to create a datastore for a WSDL web service that provides current weather data based on an entered zip code.


The XML for the WSDL can be viewed here: wsf.cdyne.com/WeatherWS/Weather.asmx?WSDL.


The input parameter will be a zip code. The output will contain the zip code's forecast, temperature, city, wind, pressure, etc.


Bob builds a new WSDL datastore with a SOAP Web Service as the type and the URL listed above as the path. Then he imports GetCityWeatherByZIP as an object. Instead of columns, WSDL objects have a request schema and a reply schema. For this WSDL the zip code is the request schema and the corresponding weather information is the reply schema.


HCI Data Services: WSDL to HCP Task


Continuing from the previous video, Bob shows how the weather data from the WSDL is output into a SAP HANA schema in HCP.


First Bob creates a new column table in the SAP Web-based Development Workbench called WSDL Weather by executing a SQL statement.
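The exact SQL statement is not shown in the recap, but a hypothetical reply-schema table (column names are assumptions based on the WSDL’s reply fields) could look like:

```sql
-- Hypothetical DDL sketch; the actual table and column names may differ
CREATE COLUMN TABLE "WSDL_WEATHER" (
    CITY        VARCHAR(50),
    STATE       VARCHAR(2),
    TEMPERATURE VARCHAR(10),
    WIND        VARCHAR(50),
    PRESSURE    VARCHAR(50)
);
```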


At the time of this tutorial’s creation HCI-DS doesn’t yet have a row generation transform that outputs a single row. So Bob needs to force HCI-DS to output one row by creating a table with just a single row. On his Windows client Bob creates a new flat file that contains 1 as the value in its first and only row. Now Bob creates a new datastore for the single-row table and adds an integer column named ID.


Bob creates a new task with file format as the source and HANA_ON_HCP as the target. Then Bob starts building the data flow by importing the WSDL weather table and adding it as the target object.


Bob then imports the One_Row.csv table and joins it to the query transform. Within the transform Bob selects the GetCityWeatherByZIP WSDL as the output and enters a New York City zip code in the mapping.


Next Bob adds a Web Service transform to the data flow and joins it to the first transform. After choosing GetCityWeatherByZIP as the output in the transform, Bob maps the two schemas together, joining the source table that contains the zip code input parameter to the WSDL. Continuing on, Bob joins the WSDL function call to an XML map and joins the XML map to the target query.


In the XML map Bob adds all of the reply columns (city, wind, temperature, etc.) from the input to the output. Finally in the target query Bob elects to auto map by name. Then Bob saves, validates and executes the task.


Bob verifies the connection by running a select on the WSDL table in the SAP Web-based Development Workbench to see the current weather data for the SAP office in New York City.


HCI Data Services: ERP to HCP Task


In this tutorial video Bob shows how to use HCI-DS to extract data from an ERP provider, filter the data and then load it into a SAP HANA Schema in HCP.


First in the SAP HANA Web-based Development Workbench Bob creates a column table called ERP_Customers via a SQL statement. In HCI-DS Bob creates a new datastore with SAP Business Suite Applications as the type and uses an agent that exists on the same SAP system. After entering his personal credentials, client number and system number, Bob saves and then tests the connection of his new datastore.
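The statement is not shown in the recap; a minimal sketch of such a table (the CUSTOMER_ID, NAME and REGION columns are assumptions, not the actual ERP structure) might be:

```sql
-- Hypothetical DDL; the real columns come from the ERP customer table
CREATE COLUMN TABLE ERP_CUSTOMERS (
    CUSTOMER_ID VARCHAR(10),
    NAME        VARCHAR(80),
    REGION      VARCHAR(3)
);
```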


Next Bob imports the table before creating a new task with an ERP source and a HANA_ON_HCP target. Then Bob imports the ERP_Customers table as the target object. When working with an ERP source you have three additional ABAP transformation options. Bob connects the source to the target query, maps the columns together and then adds a filter where region = 'NY'. After validating and executing the data flow, Bob runs a select statement in the SAP HANA schema and sees all of the customers in New York.
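The verification at the end might look like this (a hedged sketch — the table name comes from the walkthrough, but the exact REGION column name is an assumption):

```sql
-- Confirm only New York customers were loaded
SELECT * FROM ERP_CUSTOMERS WHERE REGION = 'NY';
```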


HCI Data Services: Using Functions


In this video Bob highlights how to use some of the available functions within the query transform in a HCI-DS dataflow.


Bob replicates and renames the ERP task he built in the previous video. Bob wants to remove all of the leading zeros in front of each customer number for all US-based customers. So in the target query of the replicated ERP task’s data flow, Bob selects the Customer_ID in the output and navigates to the mapping transform details tab below. Bob elects to run an ltrim string function on the Customer_ID column with 0 as the trim_string.
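The effect of ltrim with a remove set can be tried directly in SQL (a quick illustration, not the mapping expression from the video):

```sql
-- LTRIM with '0' as the remove set strips leading zeros only
SELECT LTRIM('000012345', '0') AS "CUSTOMER_ID" FROM dummy;  -- '12345'
```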


Now after closing, validating and executing the modified replicated task, Bob confirms that the zeros have been removed from the Customer_ID by running a select statement on his ERP table in the SAP Web-based Development Workbench.


HCI Data Services: Sandbox to Production


In the series’ next video Bob demonstrates how to promote a task from the sandbox environment to the production environment so it can be scheduled. To promote a task, first select it and then choose Promote Task under More Actions.


After executing the task in the production environment Bob now has the option to schedule it. When building a new schedule you can determine the starting time and the frequency with which it will run.


The administration tab allows admins to create additional users with a developer, operator or administrator role. Notifications can be set so an email is sent when a task is executed successfully or fails.


HCI Data Services: Loading Data From AWS


Bob’s final video in the HCI-DS series shows how to load data from an Amazon Web Services file into the SAP HANA Cloud Platform with HCI-DS.


First Bob opens a specific port, 8080, on his AWS instance. Next Bob creates a new HCI-DS agent and points it to the folder on his Windows machine that contains his AWS data. Bob lists this same folder as the root directory in his new datastore to connect to the file.


After creating a new column table in the SAP Web-based Development Workbench, Bob begins to build a new task in the projects tab. In the target query of the task’s data flow Bob maps the source columns from the AWS text file to the corresponding columns in the HCP target table before executing the task.


Bob is able to verify that this task has successfully connected the AWS file to HCP by adding an additional row to his AWS text file on the Windows machine. After re-executing the task and re-running the select statement in the SAP Web-based Development Workbench, Bob’s AWS table in HCP now shows that additional row.

The SAP HANA Academy offers over 900 free tutorial videos on using SAP HANA and the SAP HANA Cloud Platform.

Follow @saphanaacademy

Fast lane: just download the attached files; they are self-explanatory.



Easter Sunday, kids asleep, now daddy gets to play. I decided (a month ago) to port my code for calculating Easter Sunday to HANA,

for the fun of it and to test CTEs and UDFs on HANA.


It was not as easy as I thought it would be. My HANA is SPS09 rev93.


I had to abandon the idea of using CTEs, and with them my old MS SQL code.

CTEs were said to work, although not officially yet, but I was not able to stumble upon syntax that made them work.

When I gave up, the kids were asleep no more, so a night shift was ahead.

I had my diskette with Clipper Summer '87 Easter calc code, but no floppy drive in the laptop.

So I googled and found a nice, simple algorithm; not elegant, but by then I did not care anymore.


The logical place to do the calculation was a scalar UDF.

Then I learned that it does not support SQL statements.



Should I use a procedure? Honestly, I would rather not do it at all than use a procedure for such a thing.


Finally I just made it work. It was too late to post it (not Easter Sunday anymore), so I postponed posting it and added a function for Orthodox Easter calculation using the Meeus Julian algorithm.
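The actual code is in the attached files; purely as a hedged sketch of what such a scalar UDF can look like, here is the well-known anonymous Gregorian computus (Meeus/Jones/Butcher) in SQLScript — the function name matches the calls below, but this body is an illustration, not the attachment:

```sql
-- Sketch of a scalar UDF for Gregorian Easter (anonymous Gregorian computus).
-- Integer division is forced with FLOOR, since 3/2 is 1.5 in HANA.
CREATE FUNCTION get_easter_for_year (yr INTEGER)
RETURNS easter DATE LANGUAGE SQLSCRIPT AS
BEGIN
    DECLARE a, b, c, d, e, f, g, h, i, k, l, m, mon, dy INTEGER;
    a   := MOD(:yr, 19);
    b   := TO_INTEGER(FLOOR(:yr / 100));
    c   := MOD(:yr, 100);
    d   := TO_INTEGER(FLOOR(:b / 4));
    e   := MOD(:b, 4);
    f   := TO_INTEGER(FLOOR((:b + 8) / 25));
    g   := TO_INTEGER(FLOOR((:b - :f + 1) / 3));
    h   := MOD(19 * :a + :b - :d - :g + 15, 30);
    i   := TO_INTEGER(FLOOR(:c / 4));
    k   := MOD(:c, 4);
    l   := MOD(32 + 2 * :e + 2 * :i - :h - :k, 7);
    m   := TO_INTEGER(FLOOR((:a + 11 * :h + 22 * :l) / 451));
    mon := TO_INTEGER(FLOOR((:h + :l - 7 * :m + 114) / 31));
    dy  := MOD(:h + :l - 7 * :m + 114, 31) + 1;
    easter := TO_DATE(TO_VARCHAR(:yr) || '-' ||
                      LPAD(TO_VARCHAR(:mon), 2, '0') || '-' ||
                      LPAD(TO_VARCHAR(:dy), 2, '0'), 'YYYY-MM-DD');
END;
```

For 2015 this algorithm yields April 5, which was indeed Easter Sunday: `select get_easter_for_year(2015) from dummy;`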



It could be written differently, optimized, parts of the code merged, with more variables or fewer… and certainly better documented, but here it is as it is; the code is also attached in the files.


Gregorian Catholic Easter Calculation


Gregorian Orthodox Easter Calculation


Good Friday

is then a piece of cake:

select ADD_DAYS(get_easter_for_year(2015), - 2) AS "GoodFriday" from dummy ;


Easter Monday

in Croatia and some other countries is non working day so:

select ADD_DAYS(get_easter_for_year(2015), 1) AS "EasterMonday" from dummy ;

I also learned that if I now select code in the SQL editor and execute, only the selection runs (just like in MS SQL SSMS).

Before, as I recall, that was not the case and the entire script got executed. I had missed that. Great then.



I've also learned that in HANA integer division does not work like in MS SQL, so


select 3/2 "myInt" from dummy;


is not 1 but 1.5,

as opposed to


select 3/2 "myInt"   in MS SQL, which returns 1.


To take care of that I had to use FLOOR or CAST as INT.

I picked CAST.
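The difference can be seen side by side (FLOOR is shown for comparison; the attached code uses CAST):

```sql
-- In HANA, / performs decimal division
SELECT 3/2 AS "decimal" FROM dummy;       -- 1.5

-- Truncating to get MS SQL-style integer division
SELECT FLOOR(3/2)       AS "floored",     -- 1
       CAST(3/2 AS INT) AS "casted"       -- what the attached code uses
FROM dummy;
```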


So there you go, the fun is over.


Any corrections are welcome.


Happy Easter !

First off, I have to recognize that this conversation was started (as always) in some blog comments with Tim Korba, who provided an approach that I tweaked and used in a current project. That's the best part about this community: a majority of my "aha" moments have come from collaborating with others or taking an interesting idea and building on it.


Creating Quantity Unit of Measure Conversion in HANA


Problem Statement: A given set of data needs to be converted dynamically according to user input, for example: "I want to see all quantities in CS (cases)." The existing UOM conversions provided with HANA only cover the most basic conversions like length, weight and volume. From an SAP data perspective, we are always interested in converting back and forth between the different UOMs that a material may exist in, which could be EA, CS, PL, BG, BX, and so on.


Functional overview

  1. Get a QTY that is in the BUOM and that you would like to convert to another UOM. If your transaction data is not in BUOM, convert it before starting the following steps.
  2. Determine if a conversion factor exists for the specific material and the target UOM.
  3. If a conversion exists, use the conversion factor to calculate the correct quantity and associate the target UOM; else use the existing quantity and associate the BUOM.
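These steps can be sketched in plain SQL (a hedged illustration with invented table and column names — TRANS, MAT and CONV stand in for the transaction data, the material master and the pre-filtered UOM conversions; this is not the actual calc view definition):

```sql
-- CONV is assumed to be already filtered to the target UOM and to carry
-- the derived FACTOR column (step 3 of the functional overview).
SELECT t.MATNR,
       CASE WHEN c.MEINH IS NULL
            THEN t.QTY                   -- no conversion: keep original qty
            ELSE t.QTY * c.FACTOR
       END                       AS FINAL_QTY,
       COALESCE(c.MEINH, m.BUOM) AS FINAL_UOM   -- fall back to BUOM
FROM   TRANS t
       INNER JOIN MAT  m ON m.MATNR = t.MATNR       -- every material has a BUOM
       LEFT OUTER JOIN CONV c ON c.MATNR = t.MATNR; -- conversion may be missing
```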


HANA Implementation Overview

I chose to show this in a graphical calculation view since that's where I spend most of my time. Of course this can be applied to any other view with the same logic. Tim Korba had started with an attribute view on an analytic view, which is the same train of thought but implemented with a different artifact.


1. Create the base transaction data set. In this case, it is a union created from a number of calculation views, but this could be any dataset really. This data is from APO and is already in BUOM. The aggregation node simply rolls up the result of the union before any joins.


Sample transaction data at this step would look like this.



2. The first join serves to retrieve the BUOM using the material master (MATKEY in APO, MARA in ECC). Depending on application area you might already have this on the main dataset; in this case I did not. This BUOM will be used in case we have no matching entry from the UOM conversion. Since we know all materials must have an entry in material master, we model this as an inner join.



Sample data at this step would look like this.



3. Retrieve all the conversions that are relevant for your target UOM. We use an input parameter to apply a filter on MEINH (alternative unit of measure) and also at the same time calculate a conversion factor we’ll use later on.



FACTOR formula


Input Parameter details - you could also use the MEINH column as a list of values or another table. I just made this direct to keep it simple.


MEINH Filter


An example of results from this branch of the calc view would look like this, assuming that the input parameter value was “CS”.




4. Using a Left Outer Join from the transaction data to the conversion data, we see if there is a conversion available. It’s possible there is no conversion for the specific material, so a LOJ is appropriate - and we’ll deal with cases where those are not found in the next steps.


At this point the data will look like this, assuming we have a match for the requested target UOM



5. Create two measures; one that covers the final UOM and one that covers the final qty. The Final UOM will be the target UOM if a conversion is found, or else the original BUOM. The Final Qty will be qty x factor if a conversion is found, or else the original quantity.



If we don’t have a target conversion that the user was asking for (via input parameter), then we have to use the BUOM that we started with. So the result of this column will either be “EA” or the target of “CS” for example.






This column will convert into the target unit if there was a target unit found, else we’ll revert to the original qty, the same logic used in deriving the UOM in the other calculated column. The key here is to make sure on the semantic tab we are associating this QTY with the UOM we derived in the previous calculated column.
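In the expression editor of a graphical calculation view, the two calculated columns might be written roughly like this (a sketch only — "MEINH", "FACTOR", "BUOM" and "QTY" are assumed column names, and the exact function spelling can vary by revision):

```sql
-- FINAL_UOM: target UOM if the left outer join found a conversion, else BUOM
if(isnull("MEINH"), "BUOM", "MEINH")

-- FINAL_QTY: qty x factor if a conversion was found, else the original qty
if(isnull("FACTOR"), "QTY", "QTY" * "FACTOR")
```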





Sample data here would look like this; we can see that the previous 16,000,000 EA was successfully converted to 160,000 CS based on a conversion rate of 100 EA / 1 CS.



Testing the opposite scenario, where there is no target found (using “XYZ” as the target UOM), the following would be the result at the final aggregation node. Since there was no target found, we have to revert back to the BUOM we already knew.



As seen through a client tool like Analysis for Office, anytime we are mixing units, the aggregation will show an asterisk (*) since two unlike units can’t be aggregated. When you look at the material level however, you can see the qty and the unit associated with it.



Happy HANA,


Dileep Moturi, IT Project Manager, shares how Cisco has used HANA to provide real-time sales reporting and drive top- and bottom-line value.  Cisco estimates the HANA system has saved them over 7,000 hours of senior executive data manipulation efforts.


We hope you enjoy hearing Cisco’s first-hand experience with mission-critical SAP HANA.  Please let us know your feedback in the comments below.


To get more real-world customer HANA stories, subscribe to our iTunes or SoundCloud feed for weekly podcasts that will cover multiple in-production customer use case scenarios for SAP HANA.

Also, if you’ve got a killer SAP HANA scenario and would like to share it on the HANA Effect podcast, please let us know.


Transcript: SAP HANA Effect Episode 11 - Cisco

Sponsored by:


The SAP HANA SQL and System Views Reference was updated to reflect the new licensing options available for SAP HANA. Altogether it documents 137 SQL statements, 155 SQL functions and 308 system views.


To make it easier to locate the information relating to options the structure of the guide was changed to include separate sections for SQL Reference for Options and System Views Reference for Options.



Some of the highlights include:



A new type of multi-level partitioning called range-range allows you to use a year as the first-level partition specification and create a number of range partitions for each year, for example all records from 1 to 20,000 and records greater than 20,000. Additional data types supported for range partitioning include BIGINT and DECIMAL.
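A range-range definition along those lines might look like this (a sketch with invented table and column names, not an example from the guide):

```sql
-- First level partitions by year, second level by a range on the ID column
CREATE COLUMN TABLE SALES (ID INT, SYEAR INT, AMOUNT DECIMAL(15,2))
  PARTITION BY RANGE (SYEAR)
    (PARTITION VALUE = 2014, PARTITION VALUE = 2015, PARTITION OTHERS),
  RANGE (ID)
    (PARTITION 1 <= VALUES < 20000, PARTITION OTHERS);
```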


Table re-distribution

Table re-distribution now allows you to assign tables or partitions to a particular node in a distributed SAP HANA system.
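In SQL this is done with ALTER TABLE ... MOVE; a sketch (SALES and the host placeholder are illustrative, not from the guide):

```sql
-- Move a whole table to a specific index server in a scale-out landscape
ALTER TABLE SALES MOVE TO '<hostname>:<port>';

-- Move a single partition instead
ALTER TABLE SALES MOVE PARTITION 2 TO '<hostname>:<port>';
```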


Regular Expressions

SAP HANA SPS 09 supports regular expression operators in SQL statements. The search pattern grammar is based on Perl Compatible Regular Expression (PCRE).
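For example, with the LIKE_REGEXPR predicate and the SUBSTR_REGEXPR function (the EMPLOYEES table and its columns are invented for illustration):

```sql
-- Rows whose NAME starts with 'Mc' or 'Mac' followed by a capital letter
SELECT * FROM EMPLOYEES WHERE NAME LIKE_REGEXPR '^Ma?c[A-Z]';

-- Extract the domain part of an email address (capture group 1)
SELECT SUBSTR_REGEXPR('@(.+)$' IN EMAIL GROUP 1) FROM EMPLOYEES;
```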


Table Sampling

The TABLESAMPLE operator allows queries to be executed over ad-hoc random samples of tables.

Samples are computed uniformly over data items that are qualified by a columnar base table.

For example, to compute an approximate average of the salary field for managers over 1% of the employee (emp) table you could run the following query:


SELECT count(*), avg(salary) FROM emp TABLESAMPLE SYSTEM (1) WHERE type = 'manager'


Note that sampling is currently limited to column base tables and repeatability is not supported.


Number Functions
Number functions take numeric values, or strings with numeric characters, as inputs and return numeric values.

BITCOUNT – Counts the number of set bits in the given integer or VARBINARY value.


BITXOR – Performs an XOR operation on the bits of the given non-negative integer or VARBINARY values.


BITOR – Performs an OR operation on the bits of the given non-negative integer or VARBINARY values.


BITNOT – Performs a bitwise NOT operation on the bits of the given integer value.
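As a quick illustration of the bit functions (BITNOT's result depends on the integer width, so no result is claimed for it here):

```sql
-- 10 is binary 1010, so two bits are set
SELECT BITCOUNT(10) AS "count",   -- 2
       BITOR(1, 2)  AS "or",      -- 3
       BITXOR(5, 3) AS "xor",     -- 6
       BITNOT(0)    AS "not"      -- bitwise complement of 0
FROM dummy;
```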


More information on all of these features can be found in the SAP HANA SQL and System Views Reference on the SAP Help Portal.


Additional Resources


The HANA Academy also has a number of videos on these new SQL features and many more topics. Be sure to check them out:

SAP HANA Academy - SQL Functions: String_Agg



SAP HANA Academy - SQL Functions: BITCOUNT Bitwise Operation

SAP HANA Academy - SQL Functions: BITOR Bitwise Operation

SAP HANA Academy - SQL Functions: BITXOR Bitwise Operation


Additional SQL guides include:


SAP HANA Search Developer Guide

SAP HANA Spatial Reference

Backup and Recovery commands in the SAP HANA Administration Guide

