A long time ago when I first started blogging on SDN, I used to write frequently in the style of a developer journal. I was working for a customer and therefore able to just share my experiences as I worked on projects and learned new techniques. My goal with this series of blog postings is to return to that style but with a new focus on a journey to explore the new and exciting world of SAP HANA.

At the beginning of the year, I moved to the SAP HANA Product Management team and I am responsible for the developer persona for SAP HANA.  In particular I focus on tools and techniques developers will need for the upcoming wave of transactional style applications for SAP HANA.

I come from an ABAP developer background, having worked primarily on ERP; therefore my first instinct is to draw correlations back to what I understand from the ABAP development environment and to analyze how development with HANA changes so many of the assumptions and approaches that ABAP developers hold.

Transition Closer to the Database

My first thought after a few days working with SAP HANA is that I needed to seriously brush up on my SQL skills. Of course I have plenty of experience with SQL, but as ABAP developers we tend to shy away from the deeper aspects of SQL in favor of processing the data on the application server in ABAP. For the ABAP developers reading this: when was the last time you used a sub-query, or even a join, in ABAP? Or even a SELECT SUM? As ABAP developers, we are taught from early on to abstract the database as much as possible, and we tend to trust the processing on the application server, where we have total control, instead of the "black box" of the DBMS. This situation has only been compounded in recent years as we have a growing number of tools in ABAP which will generate the SQL for us.

This approach has served ABAP developers well for many years. Let's take the typical situation of loading supporting details from a foreign key table. In this case we want to load all flight details from SFLIGHT and also load the carrier details from SCARR. In ABAP we could of course write an inner join:
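(The original ABAP listing was a screenshot that has not survived. As a stand-in, here is the same join expressed in plain SQL, run through Python's sqlite3 on a toy SFLIGHT/SCARR pair; the point is the pattern of joining in the database, not HANA- or ABAP-specific syntax.)

```python
import sqlite3

# Toy stand-ins for SFLIGHT (flights) and SCARR (carrier master data).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE scarr  (carrid TEXT PRIMARY KEY, carrname TEXT);
    CREATE TABLE sflight(carrid TEXT, connid TEXT, fldate TEXT, price REAL);
    INSERT INTO scarr   VALUES ('AA', 'American Airlines'), ('LH', 'Lufthansa');
    INSERT INTO sflight VALUES ('AA', '0017', '2012-01-05', 422.94),
                               ('LH', '0400', '2012-01-06', 666.00);
""")

# The join happens in the database: one statement returns flight rows
# already enriched with the carrier name.
rows = conn.execute("""
    SELECT f.carrid, f.connid, f.fldate, f.price, c.carrname
      FROM sflight AS f
     INNER JOIN scarr AS c ON c.carrid = f.carrid
""").fetchall()

for row in rows:
    print(row)
```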


However many ABAP developers would take an alternative approach where they perform the join in memory on the application server via internal tables:
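(Again the original screenshot is lost; sketched here in Python rather than ABAP, the pattern is: read both tables separately, build a keyed lookup over the small carrier table, and join row by row on the application server, roughly what an ABAP loop over internal tables with a keyed READ does.)

```python
# Both tables are read separately, and the join happens in
# application-server memory -- the pattern many ABAP developers use
# with internal tables instead of a database join.
sflight = [
    {"carrid": "AA", "connid": "0017", "price": 422.94},
    {"carrid": "LH", "connid": "0400", "price": 666.00},
]
scarr = [
    {"carrid": "AA", "carrname": "American Airlines"},
    {"carrid": "LH", "carrname": "Lufthansa"},
]

# Keyed lookup over the small foreign-key table (like a sorted or
# hashed internal table in ABAP).
carr_by_id = {c["carrid"]: c["carrname"] for c in scarr}

# Sequential loop: each flight row is enriched one record at a time.
result = [dict(f, carrname=carr_by_id[f["carrid"]]) for f in sflight]

for row in result:
    print(row)
```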


This approach can be especially beneficial when combined with the concept of ABAP table buffering. Keep in mind that I'm comparing developer design patterns here, not the actual technical merits of my specific examples. On my system the datasets weren't actually large enough to show any statistically relevant performance difference between these two approaches.

Now if we put SAP HANA into the mixture, how would the developer approach change? In HANA the developer should strive to push more of the processing into the database, but the question might be why?


Much of the focus on HANA is that it is an in-memory database. I think it's pretty easy for almost any developer to see the advantage of all your data being in fast memory as opposed to relatively slow disk-based storage. However, if this were the only advantage, we wouldn't see a huge difference compared with processing in ABAP. After all, ABAP has full table buffering. Ignoring the cost of updates, if we were to buffer both SFLIGHT and SCARR, our ABAP table-loop join would be pretty fast, but it still wouldn't be as fast as HANA.

The other key points of HANA's architecture are that, in addition to being in-memory, it is also designed for columnar storage and for parallel processing. In an ABAP table loop, each record in the table has to be processed sequentially, one record at a time. The current versions of ABAP statements such as these just aren't designed for parallel processing. Instead, ABAP leverages multiple cores/CPUs by running different user sessions in separate work processes. HANA, on the other hand, has the potential to parallelize blocks of data within a single request. The fact that the data is all in memory further supports this parallelization by making access from multiple CPUs more useful, since data can be "fed" to the CPUs that much faster. After all, parallelization isn't useful if the CPUs spend most of their cycles waiting on data to process.
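(The idea of parallelizing within one request can be caricatured in a few lines of Python: split the data into blocks, let several workers scan their block at the same time, then merge the partial results. HANA does something conceptually similar, just inside the database engine; this is an illustration of the principle, not of HANA internals.)

```python
from concurrent.futures import ThreadPoolExecutor

data = list(range(1_000_000))  # one "request's" worth of data


def scan_block(block):
    # Each worker scans only its own block of the data.
    return sum(v for v in block if v % 2 == 0)


# Split into 4 blocks, scan them concurrently, then merge the partials.
n = len(data) // 4
blocks = [data[i * n:(i + 1) * n] for i in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(scan_block, blocks))

# Same answer as a single sequential scan over all the data.
assert total == sum(v for v in data if v % 2 == 0)
print(total)
```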

The other technical aspect at play is the columnar architecture of SAP HANA. When a table is stored columnar, all data for a single column is stored together in memory. Row storage (which is how even ABAP internal tables are processed) places data in memory a row at a time.

This means that, for the join condition, the CARRID column in each table can be scanned faster because of the arrangement of data. Scans over unneeded data in memory don't have nearly the cost of performing the same operation on disk (because of the need to wait for platter rotation), but there is a cost all the same. Storing the data columnar reduces that cost when performing operations which scan one or more columns, and it also helps the compression routines.
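(A toy illustration of the difference, in Python: the same table held row-wise and column-wise. Scanning CARRID in the column layout walks one contiguous list; in the row layout every full row must be visited and the field picked out.)

```python
# The same table in two layouts.
row_store = [
    ("AA", "0017", 422.94),
    ("LH", "0400", 666.00),
    ("AA", "0064", 369.00),
]
column_store = {
    "carrid": ["AA", "LH", "AA"],
    "connid": ["0017", "0400", "0064"],
    "price":  [422.94, 666.00, 369.00],
}

# Row store: scanning CARRID drags every full row through memory.
hits_rows = [i for i, row in enumerate(row_store) if row[0] == "AA"]

# Column store: the scan walks one contiguous column and never touches
# connid or price at all.
hits_cols = [i for i, v in enumerate(column_store["carrid"]) if v == "AA"]

assert hits_rows == hits_cols  # same answer, very different data touched
print(hits_cols)
```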

For these reasons, developers (and especially ABAP developers) will need to begin to re-think their application designs. Although SAP has made statements about having SAP HANA running as the database system for the ERP, to extract the maximum benefit of HANA we will also need to push more of the processing from ABAP down into the database. This will mean ABAP developers writing more SQL and interacting more often with the underlying database. The database will no longer be a "bit bucket" to be minimized and abstracted, but instead another tool in the developers' toolset to be fully leveraged. Even the developer tools for HANA and ABAP will move closer together (but that's a topic for another day).

With that change in direction in mind, I started reading some books on SQL this week. I want to grow my SQL skills beyond what is required in the typical ABAP environment, as well as refresh my memory on things that can be done in SQL but that I perhaps haven't touched in a number of years. Right now I'm working through the O'Reilly book Learning SQL, 2nd Edition by Alan Beaulieu. I've found that I can study the SQL specification of HANA all day, but recreating exercises forces me to really use and think through the SQL. The book lists all of its SQL examples formatted for MySQL. One of the more interesting aspects of this exercise has been adjusting these examples to run within SAP HANA and, more importantly, changing some of them to be better optimized for columnar and in-memory execution. I think I'm actually learning more by tweaking examples and seeing what happens than from any other aspect.

What's Next

There are actually lots of aspects of HANA exploration that I can't talk about yet. While learning the basics and mapping ABAP development aspects onto a future that includes HANA, I also get to work with functionality which is still in early stages of development. That said, I will try to share as much as I can via this blog over time. In the next installment I would like to focus on my next task for exploration - SQLScript.

Hello SCN,


Firstly, I thank Blag for his wonderful blogs on HANA & R, like HANA meets R and R meets HANA, which introduced me to this amazing language called “R”.


In this blog I will discuss how ODBC helps HANA connect with different tools like Crystal Reports 2011, R, etc. We will also discuss creating a procedure in HANA and calling it to fill a table in the SAP HANA database. We will use ‘R’ to read the data from HANA and to plot a graph on that table. Then we will look at the different problems faced while trying to plot a graph on top of tables in the SAP HANA database, and at the future road map of HANA & R.


1) ODBC and HANA:


Data Services supports several ODBC data sources natively, including:

  • MySQL
  • Neoview
  • Netezza
  • Teradata


Configuring HANA ODBC:


The following are the necessary credentials for configuring ODBC driver for HANA:


SERVER = <server_name>:3<xx>15
USER = <user_name>
Password = <password>
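(As a small aid: the SERVER value follows the pattern 3&lt;xx&gt;15, where &lt;xx&gt; is the two-digit instance number of your HANA system. Here is a tiny helper, sketched in Python, that builds it; the host name and instance numbers are of course just example values.)

```python
def hana_odbc_server(host: str, instance: int) -> str:
    """Build the SERVER value for the HANA ODBC driver: <host>:3<xx>15,
    where <xx> is the two-digit instance number."""
    return f"{host}:3{instance:02d}15"


# Hypothetical host "hanahost" with instance 00 and instance 07.
print(hana_odbc_server("hanahost", 0))   # hanahost:30015
print(hana_odbc_server("hanahost", 7))   # hanahost:30715
```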


In my case it is:




Steps for Configuring Data sources for ODBC driver of HANA:


Go to Control Panel -> Data sources (ODBC)


The following screen will appear.


Now press “Add” to add a new DSN based on the ODBC driver “HDBODBC32”, which is an ODBC driver for HANA.


The following screen will appear, where you will have to enter the DSN name along with its description and the server name.


If you are still facing issues with the “Server:Port” number, you can find the number in the properties tab of your system node in HANA Studio, as in the below screen.


With this we have created our new data source for the ODBC driver of HANA. We can test our connection here by pressing “Connect” in the above screen, which will navigate us to the below screen.


On pressing “OK”, we will get the message “Connect successful” as in the below screen.


Press “OK” to continue. With this we have successfully created a DSN for ODBC driver. We can now use this DSN to connect from R to SAP
HANA Database and read the tables.


We can also connect to Crystal reports 2011 with the help of this ODBC connection.

2)  Installing R and R STUDIO (GUI):


To use “R” (similar to S), we first have to install the R language and then install the GUI (Windows/Unix) version, R STUDIO.


Using RODBC package:


Now we have to install the package “RODBC” to use the ODBC driver and connect to the SAP HANA database. Download the RODBC package and install it as shown below.



Now we are all ready to use our ODBC Driver and read the tables in SAP HANA Database from RSTUDIO and display
them in different plots or graphs.


Connection statement for SAP HANA Database:


library("RODBC")


Here ch is used to store the connection handle, created with the necessary DSN name along with the user ID and password, to connect to SAP HANA.

3) Talking with SAP HANA Database using R:


In this case I would like to create a procedure on the SBOOK table in SFLIGHT, which shows the “Revenue per Agency”. We will use this procedure to fill the table “FLIGHT”, connect to this table from R STUDIO, and display the result in a plot.


Creating a procedure:








(The full CREATE PROCEDURE statement was shown as a screenshot, and only fragments of it survive: an input parameter list that includes, among others, a parameter typed NVARCHAR(5), NAME NVARCHAR(25), COUNTRY NVARCHAR(3) and CURRENCY NVARCHAR, and a body that ends with an insert of the aggregated revenue per agency into FLIGHT.)
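(Since the procedure listing itself was a screenshot, here is what it logically does, sketched with Python's sqlite3 on toy data: aggregate the revenue per travel agency from a bookings table like SBOOK and insert the result into FLIGHT. The table and column names are simplified stand-ins, not the exact HANA objects.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Toy stand-in for SBOOK: one row per booking.
    CREATE TABLE sbook (agencynum TEXT, loccuram REAL);
    INSERT INTO sbook VALUES ('109', 100.0), ('109', 250.0), ('110', 80.0);

    -- Target table, like the FLIGHT table filled by the procedure.
    CREATE TABLE flight (agencynum TEXT, revenue REAL);

    -- The body of the procedure boils down to an aggregate-and-insert:
    INSERT INTO flight
    SELECT agencynum, SUM(loccuram) AS revenue
      FROM sbook
     GROUP BY agencynum;
""")

for row in conn.execute("SELECT * FROM flight ORDER BY agencynum"):
    print(row)
```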


Create a table “FLIGHT” as shown below.


Now call the procedure to load the “FLIGHT” table:




CALL S0008208595.STOC ('300','000299', '123321','US', 'FLY','US','USD');


Now you can see the data in FLIGHT table.


Now connecting to the SAP HANA Database from R STUDIO:










You can see in the above screen in console how it is getting executed.

In the next experiment I tried the same on “BIG” data, but it threw me the error “finite 'xlim'”, which means this bar plot doesn’t support BIG data plots.


My observations in this experiment:


  1. I cannot use the views or procedures I have created using the wizard in the HANA Modeler.
  2. If there is more data, the graphical representation becomes clumsy.
  3. I cannot use a bar plot on BIG data, as there is a possible limit, as shown below, when I tried to get the output on 900,000 records.
  4. I am able to communicate only with tables in the SAP HANA database.
  5. The R language helps me represent data mining techniques efficiently, with the help of its rich library of packages for different statistical formulas.


There are many tutorials on R available for free on the net, as R is open source. With SAP planning to tighten the integration between HANA and R, I hope this blog encourages you all to understand R and play with it on top of HANA.

Hello SCN,

We have another tool from SAP for “Business Users” on HANA to explore "BIG" data in an easier way.

Firstly, I thank Tammy Powlas for a wonderful blog on SAP HANA Information Composer – for the Non-Technical User?  and the HELP GUIDE from SAP.

I will share my experiences and views on using "Information Composer" and also explain how to use “joins".


We have to Log on to our "IC" using URL: http://localhost:8080/IC/ and give the necessary credentials in the below screen.


Then it navigates to the welcome screen, where we have 2 options as shown below.

1) Compose

2) Upload



1) Compose:

We have 5 steps in “Compose”.

Step 1: Specify source of data.


You can see that we have an option of selecting “ALL”. It means all the Analytic Views, Attribute Views, Information Views and Data Sets created on our server by different users (depending on the privileges you have) will be displayed. You have an option, as shown above, to select the specific “Source” you want.

In this example, I took the attribute view which I created on Customer table named “customer” based on resort business data.


Step 2: select Source B.


Now I need to select another source which needs the “customer” data for analysis, to know the frequency of a particular customer’s visits to the resort.

I selected an analytic view, which I created on invoice data named as “SERINV”.

Step 3: Combine.


Now I will have to combine this data using union or joins. So when I clicked I got the below message.

Wow! I thought SAP was helping me create my information view by generating the necessary union or joins. Then I got the below message.

I tried to understand where it went wrong and tried different scenarios; then I understood that this feature only helps me identify whether a “UNION” is possible.

In the below case, I used an information view created by another user named “AMEXANALYSIS” and an analytic view “STOCKS1” which has a similar structure. And now this feature worked in identifying the “UNION” relationship as shown below.

Now let us get back to our scenario.

It means I have to create the mappings now. This tool provides me with example illustrations of how unions and joins work, as shown below.

If you want to know what these 3 types of joins mean, the tool provides some illustrations which will help us, as shown below.
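(The difference between the join types can also be seen on two tiny tables, sketched here with Python's sqlite3: an inner join keeps only matching rows, while a right outer join of customer to invoice keeps every invoice. I emulate the right join as a left join with the tables swapped, which is the same thing; the table names are made-up examples.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (cust_id INTEGER, name TEXT);
    CREATE TABLE invoice  (inv_id INTEGER, cust_id INTEGER);
    INSERT INTO customer VALUES (1, 'Anna'), (2, 'Ben');
    INSERT INTO invoice  VALUES (10, 1), (11, 1), (12, 99);  -- 99 has no customer
""")

# Inner join: only customer/invoice pairs that actually match.
inner = conn.execute("""
    SELECT c.name, i.inv_id
      FROM customer c
      JOIN invoice i ON i.cust_id = c.cust_id
""").fetchall()

# "Right join" of customer with invoice = left join with the tables
# swapped: every invoice survives, even invoice 12 without a customer.
right = conn.execute("""
    SELECT c.name, i.inv_id
      FROM invoice i
      LEFT JOIN customer c ON c.cust_id = i.cust_id
""").fetchall()

print(inner)  # only matched pairs
print(right)  # all invoices, NULL name for the orphan
```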

We have some more example illustrations on Union.

So now I went back to create a right join, so that I get all the customer details with respect to my invoice data.

You can view sample data related to the field by selecting a particular field. Then you can preview as shown below.

Now I can see my customer name and details relating to the invoice ids.

In the above screenshot, you can see I have the customer ID displayed twice. So I want to hide one of them; this is where the next step, “REFINE”, helps me.

Step 4: REFINE.


You can see that here I can select which fields are to be displayed from this screen. You can also see that “cust_id_1” is “disabled”, as I used this field in my join.

So I unchecked “cust_id” to hide it.

Then I got this below screen which I could not understand. It is not allowing me to hide this field. I was unable to figure out the reason.

So I checked all the fields and I got the data.

I have an option to add a “calculated field” if I want to add any as shown in the below screen.

We can decide whether the "calculated field" added is an "attribute" or a "measure", as highlighted in the above screen.

I didn’t add any additional fields and proceeded to next step.

Step 5: FINISH.


Now I can share this view with other users and immediately start to use this new “MYVIEW1” in another “information view”.

Now let us discuss another feature of this tool: “UPLOAD”.


2. Upload:


We use “upload” option to load our “data sets” into “IC”.

This tool helps me to upload .xls, .xlsx and .csv files to my “IC”.

There are 3 steps in this wizard.

Step 1: Specify source of data.


You can “Browse” to your file on your local machine and select the required file for analyzing.

You have an option to upload the file along with its “column headers”. There is a limit of a maximum of 5 million records per upload, as shown above.

Now we can clean our data using "CLEANSE DATA" option as shown below.

It cleans data by merging items with similar meaning.

Step 2: Classify.


Now we need to classify our data to specify which fields are used for calculations as shown below.

Step 3: Finish.



Thus, I was able to load my data set.

Hope you understood the benefits of using "SAP HANA Information Composer”.


Regarding "External Data upload" using flat files to SAP HANA Information Composer, read this document by Debjit Singha.


SAP HANA Information Composer is positioned as a tool for the Non-Technical user.  Using Information Composer, the user should not have to go to IT to get their SAP HANA modeling done.  With the help and encouragement of Juergen Schmerder, I thought I would give it a try.


In this scenario, I am a "business analyst" combining two SAP HANA Analytic views: one view has NYSE data and the other view has AMEX data (courtesy of Ronald Konijnenburg).  Then I can use SAP BusinessObjects Analysis Edition for Office to analyze the data (or BusinessObjects Explorer) further.

The first step is to log in using this URL on your SAP Hana system.



Figure 1, logging on

Log on with your SAP Hana user name and password as shown in Figure 1.


Figure 2 Compose


Compose is on the left side at the top as shown in Figure 2.  Click Start + icon to get started with the Information Composer.


Figure 3, Specify Source of Data


Figure 3 shows where you select your first source of data. I am going to select the Analysis Analytic view, which contains NYSE data, courtesy of Ronald Konijnenburg.


Figure 4 – a view of the ANALYSIS Analytic View with NYSE data

After showing Figure 4, click Next to select the next source of data.


Figure 5  AMEX Analytic View


Figure 5 shows AMEX Analytic view.  Click the Next button to combine.


Figure 6 Combine


Figure 6 shows the "combine" or union was successful.  As an end user, I did not need to tell SAP Hana what the joins were, which is nice.


Figure 7 Manage Fields


Figure 7 shows how I can select fields to be included in the Information View (like SAP Hana Calculation View) for reporting.


Figure 8 Finish


Figure 8 shows the Finish line.  Now I can share this Information View with others and publish it.


Figure 9 BusinessObjects Analysis Office


I start SAP BusinessObjects Analysis Office, log on to SAP Hana, and select the Information View I just created as shown in Figure 9.



Figure 10 - Analysis Office, view of Combined AMEX/NYSE data


Figure 10 shows the combined AMEX/NYSE data, and now as a business analyst I can start analyzing the data.


Key Takeaways:

1) Was Information Composer easy to use?  Yes, this only took a few minutes.

2) This was easier than creating a union in SAP Hana using a Calculation View.

3) The business analyst still needs to know the data. Notice how I did not even look at the joins of the combined table; that could be a risk if you do not know the data.


I could see this being used in other scenarios where a business analyst may want to upload data from Excel to enrich an SAP Hana calculation view.  However, if the data is disparate, you will still need an SAP Hana expert modeler to design this as a calculation view.

Welcome back to the first instalment of HANA backtrace for 2012.
Let's look at what I've seen and heard about SAP HANA lately:

Accessing data in a SAP HANA box from ABAP requires the well-known database library (DBSL) and there are some new notes on this:

SQL hints have a long tradition with nearly all databases and SAP HANA is no exception. Which of these are supported by the DBSL, and how, is explained in SAP note:

#1622681 - DBSL hints for SAP HANA

If you want to figure out which version of the DBSL is installed on your system, you should have a look into

#578324 - Make and release information for MaxDB DBSL.

An overview of the DBSL versions can be found in SAP note

#1600066 - Available DBSL patches for NewDB

++++ ---- ++++ ---- ++++

Since the SAP HANA box should also be monitorable via Solution Manager, the saphostctrl program needs to be installed on it.

SAP note

#1625203 - saphostagent/sapdbctrl for NewDB

has the specifics for SAP HANA, and in SAP note

#1031096 - Installation of paket SAPHOSTAGENT

you'll find a nice PDF explaining how to install the agent in general.

I just installed it on my test machine and it seems to work:

/usr/sap/hostctrl/exe/saphostctrl -function GetDatabaseStatus -dbname HAN -dbtype hdb
Database Status: Running
    Component name: hdbdaemon (HDB Daemon), Status: Running (Running)
    Component name: hdbnameserver (HDB Nameserver), Status: Running (Running)
    Component name: hdbpreprocessor (HDB Preprocessor), Status: Running (Running)
    Component name: hdbindexserver (HDB Indexserver), Status: Running (Running)
    Component name: hdbstatisticsserver (HDB Statisticsserver), Status: Running (Running)
    Component name: hdbconnectivity (HDB Connectivity), Status: Running (connect possible)
    Component name: hdbalertmanager (HDB Alertmanager), Status: Running (No alerts on database.)


In the same area SAP note

#1672908 - sapdbctrl getProperties on NewDB failed

fixes a bug.

++++ ---- ++++ ---- ++++

Of course, since the last HANA backtrace several new revisions have been released and hundreds of bugs and enhancements have been made available (though not all listed in detail):

#1673965 - SAP HANA appliance: Revision 24 of SAP HANA database
#1680966 - SAP HANA Modeler - revision 24 - upgrade news and fixes.
#1664657 - SAP HANA Modeler Rev. 23: upgrade news and fixes
#1661415 - SAP HANA Studio - Rev 23: attribute order in calculation view
#1663228 - SAP-HANA-appliance: Revision 23 of SAP HANA database
#1653292 - SAP-HANA-Appliance: Revision 21 of SAP HANA database
#1654160 - SAP HANA Modeler - revision 21 - upgrade news and tips.

Due to the large number of bugs that have been fixed, make sure to install the current revision as soon as possible.

++++ ---- ++++ ---- ++++

If you're running BW on HANA, SAP notes

#1637145 - SAP BW on HANA: Sizing SAP In-Memory Database
#1660125 - SAP HANA database: Table consistency check
#1664983 - RSHDB: Consistency check (004)

will be interesting to you.

++++ ---- ++++ ---- ++++

That's so far about the SAP notes I stumbled over - the following links are some SDN blog posts I found pretty interesting. Make sure to check them out:

In his two posts Krishna explains the design of the new In-Memory versions of InfoCube and DSO:

++++ ---- ++++ ---- ++++

For the BWA user base, thinking about the future of this BW add-on is quite common nowadays, and my colleague from SAP Labs provides some insight:

SAP NetWeaver BW Accelerator is NOT Dead

++++ ---- ++++ ---- ++++

Very interesting and, as usual, nicely written are the posts from Blag about the interaction of R (a free statistics package) and HANA:

As R is about to be fully integrated into HANA in later versions, this information might become obsolete, but given the state of development today it helps a great deal.

++++ ---- ++++ ---- ++++

Even more hype-oriented and tech-nerdy is the fantastic project to have Apple's Siri interact with HANA:

Siri meets HANA

++++ ---- ++++ ---- ++++

Finally, Jeffrey is off on a long journey and he takes us along:

Starting the Journey is the first part of what will likely become a little series of blog posts about writing a book about HANA.

++++ ---- ++++ ---- ++++

That's all folks - see you next week.

Best regards,
Lars

