
SAP HANA and In-Memory Computing


In the first part of this blog, we modeled the logic of the Game of Life using a calculation view and a decision table. In this part, let us expose the decision table to a front-end UI using XSJS and create an interactive game as shown below.




Wrapper SQL Script:


With just a decision table, we cannot generate consecutive generations of cells. To do so, we have to create a wrapper SQLScript procedure over the decision table, which reads the results of the decision table and updates the source table (the CELLGAME table). As a result, each time the SQLScript procedure is executed, the next generation of cells is written to the CELLGAME table.









DROP TABLE temp_table;




In the above SQLScript procedure, we store the results of the decision table in a global temporary table and use that temporary table to update the values in the source table (the CELLGAME table). So, whenever the procedure is called, the source table is updated with the next generation of cells.
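The decision table encodes Conway's standard rules (a live cell with two or three live neighbours survives, a dead cell with exactly three live neighbours is born, everything else dies). As a plain-Python sketch of the same per-call update, independent of HANA:

```python
# A minimal sketch of the update the wrapper procedure performs: read the
# current cells, apply Conway's rules (the logic the decision table
# encodes), and produce the next generation.

def next_generation(cells):
    """cells: dict {(x, y): state} with state 1 = live, 0 = dead."""
    new_cells = {}
    for (x, y), state in cells.items():
        live_neighbors = sum(
            cells.get((x + dx, y + dy), 0)
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        # Conway's rules: a live cell survives with 2 or 3 live neighbors;
        # a dead cell becomes live with exactly 3 live neighbors.
        if state == 1:
            new_cells[(x, y)] = 1 if live_neighbors in (2, 3) else 0
        else:
            new_cells[(x, y)] = 1 if live_neighbors == 3 else 0
    return new_cells
```

A blinker (three live cells in a row) flips between horizontal and vertical on every call, which makes a handy smoke test for the rule logic.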



The following server-side JavaScript calls the procedure created in the previous step and returns the contents of the source table as JSON.


var procedureCallSql = 'CALL "HANA_GARAGE"."PLAY"',
    // sqlSelect reconstructed from the surrounding text: the select over
    // the source table used in getCellValue below.
    sqlSelect = 'SELECT * FROM "HANA_GARAGE"."CELLGAME"',
    init = $.request.parameters.get("initial");

function close(closables) {
    var closable;
    var i;
    for (i = 0; i < closables.length; i++) {
        closable = closables[i];
        if (closable) {
            closable.close();
        }
    }
}

function getCellValue() {
    var cells = [];
    var connection = $.db.getConnection();
    var statement = null;
    var resultSet = null;
    try {
        statement = connection.prepareStatement(sqlSelect);
        resultSet = statement.executeQuery();
        while (resultSet.next()) {
            var cell = {};
            cell.x = resultSet.getString(1);
            cell.y = resultSet.getString(2);
            cell.s = resultSet.getString(3);
            cells.push(cell);
        }
    } finally {
        close([resultSet, statement, connection]);
    }
    return cells;
}

function doGet() {
    try {
        // Reconstructed logic: call the procedure to advance one
        // generation unless the initial pattern was requested.
        if (init !== "true") {
            callProcedure(procedureCallSql);
        }
        $.response.contentType = "application/json";
        $.response.setBody(JSON.stringify(getCellValue()));
    } catch (err) {
        $.response.contentType = "text/plain";
        $.response.setBody("Error while executing query: [" + err.message + "]");
        $.response.returnCode = 200;
    }
}

function callProcedure(sqlQuery) {
    var connection = $.db.getConnection();
    var statement = null;
    var resultSet = null;
    try {
        statement = connection.prepareCall(sqlQuery);
        resultSet = statement.execute();
        connection.commit(); // persist the generation update
    } finally {
        close([resultSet, statement, connection]);
    }
}

doGet();









To visualize the output of the XSJS service, we create an HTML page as shown below:


<!DOCTYPE html>
<html lang="en">
<head>
    <title>Game Of Life</title>
<!--    <link href="style.css" rel="stylesheet"> -->
    <style>
    body {
      background: #a0a0a0;
      padding: 20px;
    }
    table {
      margin: 0 auto;
      border-collapse: collapse;
      background: black;
    }
    td {
      width: 40px; height: 40px;
      background: white;
      border: 1px solid black;
    }
    td.live {
      background: black;
      border: 1px solid white;
    }
    button {
      display: block;
      margin: 0 auto;
    }
    </style>
    <script src="http://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
    <script type="text/javascript">
    /* The grid size below (8x8) and the request parameters are reconstructed
       from the surrounding text; adjust them to your own cellgame table. */
    $( document ).ready(function() {

      var result = [],
          size = 8,
          tdTag = "<td",
          tdTag2 = "></td>",
          tdTagBlack = '<td class="live"></td>',
          htmlTag = "",
          trOpen = "<tr>",
          trEnd = "</tr>";

      // Mark each cell from the JSON result as live or dead.
      function drawCells(data) {
        result = data;
        for (var n = 0; n < result.length; n++) {
          var x = parseInt(result[n].x),
              y = parseInt(result[n].y),
              s = parseInt(result[n].s);
          var identifier = "td#" + x + "" + y;
          if (s === 1) {
            $(identifier).addClass("live");
          } else {
            $(identifier).removeClass("live");
          }
        }
      }

      // Build the empty grid once.
      for (var i = 0; i < size; i++) {
        htmlTag += trOpen;
        for (var j = 0; j < size; j++) {
          htmlTag += tdTag + " id='" + j + "" + i + "'" + tdTag2;
        }
        htmlTag += trEnd;
      }
      var tableTag = $("table");
      tableTag.append(htmlTag);

      // Load the initial pattern from play.xsjs.
      $.get("play.xsjs", { initial: "true" }, drawCells, "json");

      // Each click on Play fetches the next generation.
      $("button").click(function() {
        $.get("play.xsjs", drawCells, "json");
      });
    });
    </script>
</head>
<body>

<table></table>

<button type="button">Play</button>

</body>
</html>



Here we use the jQuery get function to request JSON from the play.xsjs file. The resulting JSON is rendered as cells using JavaScript and CSS. Whenever the Play button is clicked, a request is sent to play.xsjs, which calls the procedure; the next-generation cell pattern is then passed back to the UI as JSON, and the UI redraws the cells based on that result.




Whenever the Play button is clicked, the next generation of cells is generated and displayed in the UI as shown below.



Sometimes people think that because HANA is a columnar database, it doesn't run fast for simple OLTP operations. I was just looking at a performance problem with class /IWBEP/CL_MGW_ABS_MODEL, method GET_LAST_MODIFIED.


This had some screwy ABAP, which is incidentally fixed in SAP Note 2023100 (thanks to Oliver Rogers for finding the note), and it generated the following SQL:







That SQL is pretty nasty, because it does a wildcard search on a big table. On the non-HANA system it was running in 20 seconds. I did the root cause analysis in the database and found that it was searching the primary clustered index, which was 98% fragmented.


Obviously I rebuilt the index - these are the results.


CPU time = 2750 ms,  elapsed time = 2746 ms.

CPU time = 2594 ms,  elapsed time = 2605 ms.

CPU time = 2750 ms,  elapsed time = 2764 ms.


I realized at this point this was some bad coding, so I found the fix thanks to Oli and we put the change in. That fixed the performance problem.


But then I thought... what happens if you run this bad query on a HANA system? This is just what custom code looks like a lot of the time...



successfully executed in 12 ms 414 µs  (server processing time: 11 ms 613 µs)

Fetched 8 row(s) in 0 ms 68 µs (server processing time: 0 ms 0 µs)



successfully executed in 9 ms 778 µs  (server processing time: 9 ms 136 µs)

Fetched 8 row(s) in 0 ms 64 µs (server processing time: 0 ms 0 µs)



successfully executed in 12 ms 677 µs  (server processing time: 11 ms 830 µs)

Fetched 8 row(s) in 0 ms 56 µs (server processing time: 0 ms 0 µs)


So anyDB is averaging 2705ms, and HANA is averaging 10.86ms, an average speedup of 249x.


You may be saying... OK, well that's for poorly written SQL - what about when it's optimized? Sure, let's test that scenario. Here's the SQL:





"PROGNAME" IN ('CL_ABAP_COMP_CLASS============CCDEF', 'CL_ABAP_COMP_CLASS============CCIMP', 'CL_ABAP_COMP_CLASS============CCMAC', 'CL_ABAP_COMP_CLASS============CI', 'CL_ABAP_COMP_CLASS============CO', 'CL_ABAP_COMP_CLASS============CP', 'CL_ABAP_COMP_CLASS============CT', 'CL_ABAP_COMP_CLASS============CU')




So I ran it on anyDB. I couldn't get accurate results from the SQL console, so I had to use the ABAP trace to get the numbers: 5.504ms, 1.484ms and 4.605ms, for an average of 3.86ms. Let's see how HANA compares.



successfully executed in 1 ms 977 µs  (server processing time: 1 ms 156 µs)

Fetched 8 row(s) in 0 ms 63 µs (server processing time: 0 ms 0 µs)



successfully executed in 1 ms 946 µs  (server processing time: 1 ms 250 µs)

Fetched 8 row(s) in 0 ms 60 µs (server processing time: 0 ms 0 µs)



successfully executed in 2 ms 230 µs  (server processing time: 1 ms 127 µs)

Fetched 8 row(s) in 0 ms 59 µs (server processing time: 0 ms 0 µs)


With HANA then, we get an average of 1.18ms for an average speedup of 3.27x.
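As a quick sanity check on the arithmetic above (the quoted 3.27x comes from the rounded averages 3.86/1.18; the unrounded ratio is about 3.28):

```python
# Recomputing the averages and speedups from the measurements quoted above.

anydb_slow = [2746, 2605, 2764]      # anyDB elapsed ms, wildcard SQL
hana_slow = [11.613, 9.136, 11.830]  # HANA server processing ms, wildcard SQL
anydb_fast = [5.504, 1.484, 4.605]   # anyDB ms (ABAP trace), optimized SQL
hana_fast = [1.156, 1.250, 1.127]    # HANA server processing ms, optimized SQL

def avg(xs):
    return sum(xs) / len(xs)

wildcard_speedup = avg(anydb_slow) / avg(hana_slow)   # ~249x
optimized_speedup = avg(anydb_fast) / avg(hana_fast)  # ~3.3x
```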




For poorly constructed OLTP queries at the database level, we can get enormous benefits from running HANA - up to 250x or more. With optimized SQL that hits database indexes on anyDB, that drops to around 3.27x - but SAP has only ever claimed a 2-3x improvement for transactional workloads when running ERP on HANA.


And remember, if you move to the sERP suite, you'll see another 2-3x because the data structures are simpler. That's going to mean response times 5-10x faster than on anyDB.


I don't know about you, but that feels significant to me.


Yes, I know I didn't test concurrency, inserts, updates and all that jazz. This was just a quick test run in 15 minutes of spare time. I hope it is informative. It's also worth noting that, with program changes, I was in this case able to get acceptable performance on anyDB for our timesheet app. The only area where performance remains a problem is the WBS element search, which is again a wildcard search.


For those searches, HANA rocks. For customer, product - anything with a free-text search - HANA is going to kill anyDB.


P.S. This was all run on HANA Rev.81


P.P.S. Why not run this on your DB and let us know how it runs?




This blog is part of a series of troubleshooting blogs geared towards telling you the story of how an issue got resolved. I will include the entire troubleshooting process to give you a fully transparent account of what went on. I hope you find these interesting. Please leave feedback in the comments if you like the format or have suggestions for improvement.


Let's get started!



Problem Description



Trying to register the secondary site for System Replication fails with error "remoteHost does not match with any host of the source site"



Environment Details



This incident occurred on Revision 73






Running the following command:


hdbnsutil -sr_register --name=SITEB --remoteHost=<hostname primary> --remoteInstance=<inst> --mode=<sync mode>


Gives error:


adding site ..., checking for inactive nameserver ..., nameserver <hostname_secondary>:3<inst>01
not responding., collecting information ..., Error while registering new
secondary site: remoteHost does not match with any host of the source site.
please ensure that all hosts of source and target site can resolve all
hostnames of both sites correctly., See primary master nameserver tracefile for
more information at <hostname_primary>, failed. trace file nameserver_<hostname_secondary>00000.000.trc
may contain more error details.]



Studio had a similar error as well.





The error message indicates that the secondary system could not be reached when performing sr_register.


Firstly, when dealing with System Replication, it is always good to double-check that all the prerequisites have been completed. Refer to the Administration Guide for this (http://help.sap.com/hana/SAP_HANA_Administration_Guide_en.pdf).



Let’s make sure the network connectivity is fine between the primary master nodes and the secondary master nodes.



Are the servers able to ping each other?


From the O/S, type “ping <hostname>”. Perform this from the primary to secondary and secondary to primary.



In this customer’s case, ping was successful.



What about firewalls? Could the ports be blocked?



From the O/S, type "telnet <hostname> <port>". Perform this from the primary to the secondary and the secondary to the primary. The port to use is the one in the error message.



In this customer’s case, telnet was successful.
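The ping and telnet checks can be scripted too. Here is a minimal Python stand-in for "telnet <hostname> <port>"; the hostname and port are placeholders for your own landscape (e.g. the 3<inst>01 port from the error message):

```python
# A minimal stand-in for the "telnet <hostname> <port>" check: try to open
# a TCP connection to the nameserver port and report success or failure.
import socket

def port_reachable(host, port, timeout=5.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused connections, timeouts, DNS failures
        return False
```

True means the port accepted a TCP connection; False covers both refusals and timeouts, which is exactly the distinction the firewall question needs.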




Comparing the host files between the primary and secondary sites

The customer noticed that there was an error in the /etc/hosts file: the short name was not filled in correctly. They fixed this, but the problem still occurred.





Network Communication and System Replication

There is a note 1876398 - Network configuration for System Replication in HANA SP6. 




The symptoms in the note match what we are experiencing: "When using SAP HANA Support Package 6, a System Replication secondary system may not be able to establish a connection to the primary system."


The note explains: "Therefore, the listener hears only on the local network. System Replication also uses the infrastructure for internal network communication for exchanging data between the name servers of the primary and the secondary system. Therefore, the name servers of the two systems can no longer communicate with each other in this case."



It is worth noting that this is a very common cause of the issue, but in this customer's case, it was not the problem.








We performed an strace; here is some of the output.



sendto(13,"?\0\50\50\50\60\0\0\0\1\2\6,\0\0\0dr_gethdbversion"..., 86, 0, NULL,
0) = 86

recvfrom(13,0x7f1bd94549264, 8337, 0, 0, 0) = -1 EAGAIN (Resource temporarily unavailable)

poll([{fd=13,events=POLLIN|POLLPRI}], 1, -1) = 1 ([{fd=13, revents=POLLIN}])

8337, 0, NULL, NULL) = 52

recvfrom(13,0x7fff22c5277f, 1, 2, 0, 0) = -1 EAGAIN (Resource temporarily unavailable)

recvfrom(13,0x7fff22c528bf, 1, 2, 0, 0) = -1 EAGAIN (Resource temporarily unavailable)

gettid()                                = 35760

sendto(13,"?\0\32\33\45\33\0\0\0\1\2\0033\0\0\0dr_registerdatac"..., 413, 0,NULL, 0) = 413

recvfrom(13,0x7f1bd9745564, 8337, 0, 0, 0) = -1 EAGAIN (Resource temporarily unavailable)

poll([{fd=13,events=POLLIN|POLLPRI}], 1, -1



It seems like there is some sort of packet loss here.




Involving the Networking Team




We involved the customer’s networking team and found that the MTU size was set to 9000. They set the MTU size to 1500, ran the register step again, and it worked! The registration completed!



The networking team did not explain exactly what was going on, but we suspect they performed a tcpdump to see if there was packet loss.



** This may need to be changed back later for performance optimization, see 2081065 - Troubleshooting SAP HANA Network **






This blog detailed the steps that SAP and the customer worked through towards a resolution. This may not be the exact resolution for every incident with the same symptoms. If you are encountering the same issue, you can review these steps with your HANA administrator and networking team.

An SAP HANA Distinguished Engineer (HDE) is someone with significant hands-on experience with SAP HANA and a member of the SAP technology community, either as a customer, partner, or SAP employee. Find more details about the HDE program here.


Tamas Szirtes is an SAP HANA Distinguished Engineer.


Tamas will share his experience of using SAP HANA in the real world at customers as part of the HDE webinar series.

Please register to attend the free webinar.


Title: SAP HANA usage scenarios in the real-world

Date/Time: November 25th, 2014 / 5 PM - 6 PM CET / 8 AM - 9 AM PT


The goal of this session is to show the ways SAP HANA is used successfully in the real world and to share the key learnings.


In this webinar, Tamas will explain the various SAP HANA deployment options, such as using SAP HANA as a data mart, running SAP BW powered by SAP HANA, running the SAP Business Suite powered by SAP HANA, using it as a sidecar, creating and running native SAP HANA applications, cloud deployments, etc. For each deployment scenario, one or more customer cases will be explained in terms of the industry context, business challenges and the solution. For each case, the lessons learned will be explained in detail.

Amazon's annual global conference, AWS re:Invent 2014, starts on Nov 11 at The Venetian, Las Vegas. In a very short period of time, it has become the place to be for global cloud platform leaders, customers and developers to come together every year.


It's only fitting for SAP, as a key AWS partner, to be present on the show floor, giving our customers, prospects and partners an understanding of our cloud strategy and how it can help them create their own cloud strategy to achieve their business goals. We will focus on how you can leverage the power of AWS to deliver the value of SAP solutions sooner, especially with SAP HANA.





Achieve breakthroughs in speed and flexibility with SAP HANA running in the AWS cloud


  • Business is moving to the cloud – for a wide range of workloads.
  • SAP HANA is certified to run in the AWS public cloud – offering significant value and ability to scale.
  • You can deploy SAP HANA quickly and economically – for fast time to value.


Below are some of the options available to harness the power of SAP HANA with the flexibility of AWS.



Get the full set of SAP HANA features on a subscription basis.

This is our simplest entry point for SAP HANA and perfect for department scale projects, system integrators, independent software vendors, and innovative startups. Support is through an active community of HANA One users.


  • Breakthrough in-memory platform in the cloud: Real-time business with the simplicity of the cloud
  • Licensed for productive use: *64GB Memory
  • Flexible, pay as you go subscription pricing: SAP HANA license + provider infrastructure
*License does not permit direct connection to other SAP Software except SAP Lumira.


If you already own a license of SAP HANA, you may bring your own license (BYOL), leverage the full value that the AWS Cloud offers, and provision infrastructure in minutes instead of weeks or months.


Getting started:

  • The SAP HANA on the AWS Cloud Quick Start Reference Deployment Guide provides architectural considerations and configuration steps necessary for deploying SAP HANA in the Amazon Web Services (AWS) Cloud in a self-service fashion. The guide utilizes a “Bring Your Own License (BYOL)” scenario for SAP HANA software and recommended best practices for deploying SAP HANA on AWS using services such as Amazon EC2 and Amazon Virtual Private Cloud (Amazon VPC).



The guide also provides links to automated AWS CloudFormation templates that you can leverage to more quickly and easily automate your deployment or launch directly into your AWS account. Download the full SAP HANA on the AWS Cloud Quick Start Reference Deployment Guide and get started today!





Come talk to our experts at the SAP booth #200, or to set up a 1:1 meeting please write to Mike Craig: Michael.craig@sap.com




For more info please visit:


SAP HANA: www.saphana.com

SAP HANA on AWS: http://aws.amazon.com/sap/saphana

Running SAP HANA on the Amazon Web Services Cloud: Free Webcast

SAP HANA in the AWS Cloud Quick Start Deployment Guide http://aws.amazon.com/quickstart/

For the keynote at SAP TechEd/d-code 2014 Las Vegas, we built out a quarter-trillion-row model in a single scale-up HANA system. John has put together a great high-level overview of the solution in his blog Unleashing Lightening with SAP HANA, or you can watch the video.


John kicked off the in-depth explanation of how the solution was structured which you can see in more detail at Unleashing Lightening: Building a quarter trillion rows in SAP HANA


In this blog I'm going to talk briefly about the front end of the demo. I will first cover the oData layer and then touch on the technologies used to build the web UI.


The oData layer


The oData layer of the solution was pretty simple. First, we put together a stored procedure which took care of the query logic, which John covers in more detail in his blog. We then wrapped that SQLScript in a HANA calculation view. The purpose of this was to allow us to call the stored procedure via an oData service while passing in query parameters (in this case, our search terms).


The oData service, once we had the calculation view, was pretty easy to define using SAP HANA's XS engine (.xsodata file):


service {

  "wiki.data::CV_WIKI3" as "Historical"
  keys generate local "ID"
  parameters via entity;

  "wiki.data::CV_WIKIPREDICT" as "Predictive"
  keys generate local "ID"
  parameters via entity;

}

In the above example, we are creating a new oData service based on the calculation views CV_WIKI3 and CV_WIKIPREDICT, which are stored in the package "wiki" and the subpackage "data". We expose these views under the names Historical and Predictive and generate an ID column automatically based on the results returned from each view. Finally, we allow any available parameters of the calculation views to be passed in via the oData service.


Once the oData service above is active, we can explore it by loading its $metadata document - for example, the metadata for my service can be found at:



The metadata gives the structure of the parameters that are required to be passed into the calc view:




In this case, my parameters entity is imaginatively called "CalcViewParameters" and it has two fields, TITLE1 and TITLE2. So now we can construct our query URL, which is formed as follows:



The above URL will call our oData service, pass in the TITLE1 and TITLE2 parameters and return the results in the result set "Results".
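Sticking with the names from the metadata (CalcViewParameters, TITLE1, TITLE2, Results), the URL can be assembled like this. This is a sketch of the XS oData parameter-call convention rather than the exact URL from the demo; the host and service path below are placeholders:

```python
# Assemble the oData query URL: call the parameters entity with TITLE1 and
# TITLE2, then navigate to the "Results" result set. The base URL and
# service file name are hypothetical.
from urllib.parse import quote

def wiki_query_url(base, title1, title2):
    params = "CalcViewParameters(TITLE1='%s',TITLE2='%s')" % (
        quote(title1), quote(title2))
    return "%s/%s/Results?$format=json" % (base, params)

url = wiki_query_url("https://myhost:4300/wiki/services/wiki.xsodata",
                     "SAP", "HANA")
# → https://myhost:4300/wiki/services/wiki.xsodata/CalcViewParameters(TITLE1='SAP',TITLE2='HANA')/Results?$format=json
```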


OK so now that we have our data we can move onto the UI.


The UI


The UI for the solution was pretty simple and used standard web technologies.




The main surround of the UI was built using Bootstrap, a superb framework for developing responsive user interfaces. The form at the top, which we used to specify the search criteria, the type of query and the predictive variables, was created using standard HTML form elements combined with none other than OpenUI5.


Finally, the data visualisations showing the results were created using SAPUI5, which provides some amazing out-of-the-box data visualisation functions that I can highly recommend. In this case, the chart was of the type sap.viz.ui5.Column.


I won't go through the code of the UI in detail, but for anybody interested in getting started with a UI like this, please check out OpenUI5 or the hosted SAPUI5 SDK, which has great tutorials and getting-started guides.


For anybody trying this out for themselves, if you have any questions or comments I'd be more than happy to help so please do post below.


That's it from me for the front-end of Unleashing Lightning with SAP HANA. I encourage you all to try it out - it really is a lot of fun!!


Happy coding,



Companies with complex distribution networks or spare parts supply chains are continually trying to achieve the balance between not enough and too much when it comes to inventories. Too little adversely affects service capabilities, while conversely, too much may result in costly overstocking and excessive storage.


To help alleviate these issues, customers can set up materials requirements planning (MRP) algorithms in their SAP ERP application, which order parts automatically based on the requirements that are visible at a point in time. The MRP algorithms use parameters such as safety stock, maximum level, and minimum order quantity in order to deal with the unavoidable uncertainties about future demand quantities and supply lead times.


The operation of a typical supply chain involves a large variety of materials, events, interactions and tactical decisions. Different types of demand with different degrees and types of uncertainty have to be served from the same stock. Stock locations are organized on multiple tiers that supply one another, so the stock availability in one location influences the supply lead time for another. All this complexity forces the business to make limiting assumptions when looking for the best strategies and parameters, without a good way to assess the real impact of these assumptions on the accuracy of the result. Often, assumptions that are correct for one material are completely off for another. And sometimes the real cause of inventory problems is in a place so unlikely that no one is looking at it - a wrong number in a rarely used field, a mismatch between the units of measure used in different locations, and so on.


Unveil the real problems behind low service levels


Now there’s a way to have better insights into inventory control through simulation and optimization functionality developed by SAP Data Sciences and the custom development team.


This new functionality, which runs on the SAP HANA database, is an analytic “side-car” to solutions like SAP ERP and models a company’s actual supply chain, including all the interactions and events that impact inventory levels.


With this software, the expected future behaviour of the full supply chain is simulated in detail, revealing all interactions and allowing for the identification of known and unknown problem areas. Based on the outcome of the simulation, a company’s current strategies and parameters in the supply chain can be adjusted to ones that produce an ideal outcome.


Realistic simulation, real-life optimization


The simulation model includes both deterministic events (such as purchases and stock transfers) as well as stochastic events (unexpected demand and lead-time variability). The stochastic events are described by statistical models which are custom built from the historical data and enhanced using the experience of the business users.

Starting from the current state of the supply chain and including the existing MRP algorithms, the simulation computes the future inventory situation in detail.
The optimization algorithms make use of the simulation to find the ideal replenishment strategies and parameters.
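To make the idea concrete, here is a deliberately tiny, illustrative simulation - not SAP's Java engine, and every number in it is invented - of daily stochastic demand against a single-location reorder-point policy, reporting a fill-rate service level:

```python
# Toy event-level inventory simulation (illustrative only): daily random
# demand is served from stock; an MRP-style reorder is triggered whenever
# the inventory position (stock + on order) falls to the reorder point.
import random

def simulate(reorder_point, order_qty, lead_time, days=1000, seed=42):
    random.seed(seed)
    stock, pipeline, served, total = 40, [], 0, 0
    for _ in range(days):
        # Age outstanding orders by one day and receive any that arrive.
        pipeline = [(d - 1, q) for d, q in pipeline]
        stock += sum(q for d, q in pipeline if d <= 0)
        pipeline = [(d, q) for d, q in pipeline if d > 0]
        demand = random.randint(0, 10)      # stochastic daily demand (made up)
        total += demand
        served += min(stock, demand)        # unmet demand is lost
        stock = max(stock - demand, 0)
        if stock + sum(q for _, q in pipeline) <= reorder_point:
            pipeline.append((lead_time, order_qty))  # place a replenishment
    return served / total if total else 1.0  # fill-rate service level
```

Running it with a low and a high reorder point on the same demand stream shows the service-level side of the trade-off; the stocking-cost side would come from tracking average stock in the same loop.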


Empower supply chain analysts


The interface of this functionality brings together all the information about the current state, the expected future state and the optimal state of the supply chain. Supply chain analysts are given a complete view of their supply chain – they can view KPIs that reflect all materials, in all locations, over a long period of time and quickly switch to a view of the expected stock level or demand for one material, in one location, on a daily basis.

They can simulate the effects of any parameter changes and even new configurations of the network and in this way get additional insight before implementing changes in the real system.


This functionality allows companies to:

  • Visualize consolidated information for each material part across the whole supply chain
  • Predict the behaviour of the supply chain in the future, based on a data snapshot from SAP ERP
  • Simulate the effect of the existing network configuration, replenishment strategies, and parameters
  • Calculate KPIs, such as service levels or average inventory, based on the simulations
  • Optimize inventory policies to achieve the best possible trade-offs between service level and inventory stocking costs


The key: Event level simulation of the supply chain


At the heart of this new functionality is the simulation of events on the supply chain, on a daily level.


Data - such as weekly historical demand, future planned demand, MRP parameters, supplier and internal lead times - can be pulled from SAP ERP, other applications and even flat files and brought together in SAP HANA to form a coherent, detailed and easily accessible supply chain overview.
Rules – such as the quantities that MRP would order or the priority of some types of demand – are modelled by a simulation algorithm written in Java, which allows the fast simulation of the effect of a large number of individual events on the state of the supply chain.

The simulation algorithm is able to compute what will happen on the supply chain in the future and report the possible ranges for all supply chain KPIs: average stock values, order values, numbers of orders, average waiting days, etc.

The optimization algorithms are able to modify all parameters a business user is able to modify in reality, and do so automatically until the expected simulation KPIs reach their optimal values.


The results of simulations and optimization are stored in SAP HANA along with the supply chain data that was copied from ERP and other sources. This, along with the next generation of user interfaces that the SAP HANA Platform enables, makes it possible for supply chain analysts to get a clear picture of the whole supply chain and the way in which present decisions influence future KPIs.


Improved service – and significant savings


With rapid simulation and optimization, the real problems behind low service levels become apparent and inventory issues are resolved.

And based on our experience of working with customers in this area, there is potential for significant savings by reducing average stock levels or order frequencies and increasing the service levels for materials which cause the most costly delays.


If you are interested in utilizing this technology, contact me to learn how this functionality can be integrated into your distribution network.

I was inspired by Wenjun Zhou who wrote Play the game of life with SAP HANA to have a little fun and solve the Eight Queens problem using SAP HANA.


The Eight Queens is a classic problem in computer science: how can we place eight queens on a chess board, so that none of them can take each other? This is often taught to computer scientists, because it requires use of a backtracking algorithm to solve. I learnt this in a Modula-3 course back in the mid-90s. Here's a picture of the solution thanks to Eight queens puzzle - Wikipedia, the free encyclopedia



It turns out that there are exactly 92 solutions to this problem on an 8x8 board. I can still remember my Modula-3 program spitting out solutions on a Sun UNIX server. The SQL Server Pro folks wrote a blog, Use T-SQL to Solve the Classic Eight Queens Puzzle, which I then adapted to SQLScript. It's quite elegant, because it first restricts the search to placements where the queens are in distinct columns. This massively reduces the search space from n^n to n! (40320 for an 8x8 board).

It's even more fascinating if you run a PlanViz on this, because it materializes at most 1704 rows - it doesn't materialize the whole 40320-row result set before filtering. Another example of the efficiency of the HANA column store engine.

I wrote a little piece to create an array of size N to represent the rows, so that it would be generic, but I couldn't figure out a way to recurse as you can in T-SQL. Can anyone see a more elegant solution?












FOR v_n IN 1 .. 8 DO

  v_queens[:v_n] := :v_n;

END FOR;


queens = UNNEST(:v_queens) AS (n);


SELECT a.n AS a, b.n AS b, c.n AS c, d.n AS D,

       e.n AS e, f.n AS f, g.n AS g, h.n AS h

  FROM :queens AS a

  JOIN :queens AS b

    ON b.n <> a.n

   and (b.n - a.n) NOT IN (-1, +1)

  JOIN :queens AS c

    ON c.n NOT IN (a.n, b.n)

   AND (c.n - a.n) NOT IN (-2, +2)

   AND (c.n - b.n) NOT IN (-1, +1)

  JOIN :queens AS d

    ON d.n NOT IN (a.n, b.n, c.n)

   AND (d.n - a.n) NOT IN (-3, +3)

   AND (d.n - b.n) NOT IN (-2, +2)

   AND (d.n - c.n) NOT IN (-1, +1)

  JOIN :queens AS e

    ON e.n NOT IN (a.n, b.n, c.n, d.n)

   AND (e.n - a.n) NOT IN (-4, +4)

   AND (e.n - b.n) NOT IN (-3, +3)

   AND (e.n - c.n) NOT IN (-2, +2)

   AND (e.n - d.n) NOT IN (-1, +1)

  JOIN :queens AS f

    ON f.n NOT IN (a.n, b.n, c.n, d.n, e.n)

   AND (f.n - a.n) NOT IN (-5, +5)

   AND (f.n - b.n) NOT IN (-4, +4)

   AND (f.n - c.n) NOT IN (-3, +3)

   AND (f.n - d.n) NOT IN (-2, +2)

   AND (f.n - e.n) NOT IN (-1, +1)

  JOIN :queens AS g

    ON g.n NOT IN (a.n, b.n, c.n, d.n, e.n, f.n)

   AND (g.n - a.n) NOT IN (-6, +6)

   AND (g.n - b.n) NOT IN (-5, +5)

   AND (g.n - c.n) NOT IN (-4, +4)

   AND (g.n - d.n) NOT IN (-3, +3)

   AND (g.n - e.n) NOT IN (-2, +2)

   AND (g.n - f.n) NOT IN (-1, +1)

  JOIN :queens AS h

    ON h.n NOT IN (a.n, b.n, c.n, d.n, e.n, f.n, g.n)

   AND (h.n - a.n) NOT IN (-7, +7)

   AND (h.n - b.n) NOT IN (-6, +6)

   AND (h.n - c.n) NOT IN (-5, +5)

   AND (h.n - d.n) NOT IN (-4, +4)

   AND (h.n - e.n) NOT IN (-3, +3)

   AND (h.n - f.n) NOT IN (-2, +2)

   AND (h.n - g.n) NOT IN (-1, +1)

ORDER BY a, b, c, d, e, f, g;




CALL queens();

Unfortunately, there are some extremely efficient solutions to the N-Queens problem, like Jeff Somers's N Queens Solutions in C++, and the SQL solution can't compare to these for this type of problem. I tried running a 16x16 version of this and it was extremely slow.
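For comparison, the recursion the post is missing is easy to express outside SQLScript. Here is a hypothetical backtracking version in Python, which confirms the 92 solutions:

```python
# Recursive backtracking for N queens: place one queen per row, trying each
# column and pruning any column or diagonal attacked by an earlier queen.
def n_queens(n):
    solutions = []

    def place(cols):
        row = len(cols)          # next row to fill (0-based)
        if row == n:
            solutions.append(tuple(cols))
            return
        for col in range(1, n + 1):
            # Safe if no earlier queen shares the column or a diagonal.
            if all(col != c and abs(col - c) != row - r
                   for r, c in enumerate(cols)):
                cols.append(col)
                place(cols)
                cols.pop()

    place([])
    return solutions

print(len(n_queens(8)))  # → 92
```

This is the same distinct-columns idea as the SQL version (each solution is a permutation of 1..8, hence the n! = 40320 bound), but the pruning happens as soon as a partial placement fails instead of in one big join.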

Still, it was a little fun. I hope you enjoy it.

A common inquiry


Last week I got two very similar questions from colleagues in the consulting teams:


"[...] We created a user (“john”) with select/execute privileges on the “SYS_BI & SYS_BIC” schemas.

User can access data using bo tools and all looking good.

When the user logged into hana studio with his userid…

He was able to execute

“CREATE view “stest”  as select * from <sys_bic> table…..

Is there any option to block user from creating views using sql script as above.[...]"




"[...]When a user is created in SAP HANA there is a schema created for this user automatically, for which the user has all privileges.

One of our customers wants to prevent this, as they fear that users can create their own secret data collections.

How can that be prevented for users with SAP HANA Studio access?[...]"


GEIGOKAI - a fierce enemy




The bottom line of both requests is that we want a true no-privileges-included user account that we can later provide with just the privileges it should have.

Killing two birds with one stone, I decided to put out this blog post instead of adding to the Great-Email-Inbox-Graveyard-Of-Knowledge-And-Information (GEIGOKAI... sounds like a Japanese management principle, but is in fact just a sort of organisational dysfunction of communication).


So, here we go again

As of SAP HANA SPS 08 this is really easy to have.

The tool of choice here is called "restricted user".


Restricted users are explained in the documentation and we cover them, of course, in chapter 12 "User Management and Security" of the SAP HANA Administration book.


Instead of going over the theoretical parts again, let's just look at an example.


I have my application schema I028297, which contains a table SOMESALESDATA, and I have an analytic view built on this data called "I028297/AV_2013".

Now, I want a read-only user who cannot create tables or views himself, but who can read the data from my table and the analytic view.


1. I need to create the user:


Not much to say about this... no black magic required here.


2. I add the user to the SAP HANA Studio

Here I get the first "surprise".

After I changed the initial password (a favorite activity of all first-time SAP HANA users) I am confronted with this friendly message:

    "You are not authorized to execute this action; you do not have the required privileges".


So, what's going on here?

The reason for this is that the restricted users by default can only connect via HTTP/HTTPS.

Access via this channel can be controlled by the XS engine access rules.

But there is no default access to the database level for restricted users.


To allow users to access the database level via ODBC or JDBC, we need to explicitly grant the built-in system role RESTRICTED_USER_ODBC_ACCESS or RESTRICTED_USER_JDBC_ACCESS.


I want to stick with SAP HANA Studio access here (that's JDBC access), so I use the latter:

GRANT restricted_user_jdbc_access TO readonly;


Now I can logon to the system, but I cannot really do much:

SELECT current_user FROM dummy;

works but that is nearly everything.


Other simple actions like


CREATE VIEW whoami AS (SELECT current_user FROM dummy);

fail with

    SAP DBTech JDBC: [258]: insufficient privilege: Not authorized


Same with accessing the application data:

SELECT * FROM i028297.somesalesdata;

    SAP DBTech JDBC: [258]: insufficient privilege: Not authorized





FROM "_SYS_BIC"."I028297/AV_2013" GROUP BY "TXNO", "TXDATE";

   SAP DBTech JDBC: [258]: insufficient privilege: Not authorized

or (depending on which privileges are used for the analytic view)

    SAP DBTech JDBC: [2048]: column store error: search table error:  [2950] user is not authorized


3. Clearly, we need to grant the read privileges for the application table and view to our user.

So, let's do that quickly:



GRANT SELECT ON i028297.somesalesdata TO justread;

GRANT SELECT ON "_SYS_BIC"."I028297/AV_2013" TO justread;


Now, the JUSTREAD user can access the application data but cannot create tables or views of his/her own:


SELECT * FROM i028297.somesalesdata;

works, but

CREATE VIEW my_salesdata AS (SELECT * FROM i028297.somesalesdata);

again results in

    SAP DBTech JDBC: [258]: insufficient privilege: Not authorized


Same thing with the analytic view:


FROM "_SYS_BIC"."I028297/AV_2013" GROUP BY "TXNO", "TXDATE";


now works nicely, but something like




    FROM "_SYS_BIC"."I028297/AV_2013" GROUP BY "TXNO", "TXDATE");


does not work.


So, there we have it.

It is important to remember that it is not possible to change an existing normal user into a restricted user or vice versa.

This decision has to be made when the user is created.

Restricted users still have their own schema listed in the SCHEMAS view, so don't get confused here.


Also, it is not good practice to directly assign privileges to single user accounts as I did in this example.

Instead, privileges should be assigned to roles, and these roles in turn should be assigned to the users.


There you go, now you know.

Another pearl of wisdom saved from GEIGOKAI.




I stumbled upon this interesting blog written by Wenjun Zhou, in which he explains the implementation of the game using SQL script and window functions (I would recommend reading Wenjun's blog to understand the Game of Life before reading this one). This inspired me to try the Game of Life using a decision table and graphical calculation views.





1. Create the initial pattern as a table in HANA.


2. Find the number of neighboring live cells for each cell using a calculation view.


3. Implement the rules of the game in a decision table to get the next generation cells.



Initial pattern


We will start with the pattern below,




The pattern is created as a table in HANA (let's call it CELLGAME).


If a cell is alive, the value in column S will be 1; otherwise it will be 0.




Building the calculation view


The number of live neighbors can be derived from the SQL below (from Wenjun's blog). Let us try to implement this SQL in a calculation view.






To create an inner join on the table, add two projection nodes and add the CELLGAME table to each. A dummy calculated column with a default value of 1 is created in both projections.




Join both projections using a join node and rename one of the S columns to N (number of neighbors), as shown below.





With this setup, all rows of the table will be joined with all rows. To apply the join condition, create two calculated columns ABS_X and ABS_Y that calculate ABS("X"-"X_1") and ABS("Y"-"Y_1").




Apply a filter to the projection using the calculated columns created in the previous step.




To implement the GROUP BY clause and SUM function in the calculation view, link the projection node to the view's aggregation node and add the X, Y, S and N columns to the output, with N as a measure with aggregation type SUM.





The result of this calculation view gives the number of live neighbors for each cell.
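The view's semantics can be sketched in Python: a self-join of the grid restricted to |x - x1| <= 1 and |y - y1| <= 1 (excluding the cell itself), then SUM(S) grouped by cell. This is an illustration of the logic only, not HANA code:

```python
def live_neighbors(cells):
    """cells: dict mapping (x, y) -> S (1 = alive, 0 = dead).
    Returns (x, y) -> N, mirroring the view's self-join on
    ABS(X-X_1) <= 1 AND ABS(Y-Y_1) <= 1 plus SUM(S)."""
    return {
        (x, y): sum(s1 for (x1, y1), s1 in cells.items()
                    if abs(x - x1) <= 1 and abs(y - y1) <= 1
                    and (x1, y1) != (x, y))
        for (x, y) in cells
    }

# A 3x3 grid with a live middle row: the center cell has two live neighbors.
grid = {(x, y): 1 if (y == 1 and 0 <= x <= 2) else 0
        for x in range(3) for y in range(3)}
print(live_neighbors(grid)[(1, 1)])  # 2
```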




Decision table


Below are the rules of the Game of Life:


"1. Any live cell with fewer than two live neighbours dies, as if caused by under-population.

2. Any live cell with two or three live neighbours lives on to the next generation.

3. Any live cell with more than three live neighbours dies, as if by overcrowding.

4. Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction."

These rules can be converted to decision table conditions,


1. Any live cell with fewer than two live neighbours dies, as if caused by under-population --> if S = 1 and N <= 1, then Result = 0

2. Any live cell with two or three live neighbours lives on to the next generation. --> if S = 1 and (N = 2 or N = 3), then Result = 1

3. Any live cell with more than three live neighbours dies, as if by overcrowding. --> if S = 1 and N > 3, then Result = 0

4. Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction. --> if S = 0 and N = 3, then Result = 1.

Using the above conditions, create a decision table with the S and N columns as conditions and a new parameter S_NEW (result of the game) as the action.
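The four conditions map directly onto a small function (note that rule 4, reproduction, yields a live cell, i.e. Result = 1). A Python sketch of the decision table's logic, for illustration only:

```python
def next_state(s, n):
    """Decision table logic: s = current state (1 alive, 0 dead),
    n = number of live neighbors, returns the next-generation state."""
    if s == 1 and n <= 1:
        return 0          # rule 1: under-population
    if s == 1 and n in (2, 3):
        return 1          # rule 2: survives
    if s == 1 and n > 3:
        return 0          # rule 3: overcrowding
    if s == 0 and n == 3:
        return 1          # rule 4: reproduction
    return 0              # otherwise dead cells stay dead

print([next_state(1, 1), next_state(1, 2), next_state(1, 4), next_state(0, 3)])
# [0, 1, 0, 1]
```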








The result of this decision table gives the next-generation cells.




Update: The next part of the blog, which explains how to create an interactive game using the views that we have built, is available here:

Part 2 - Playing the Game of life with HANA using decision tables and Calculation views


HANA Health Check

Posted by Jenny Lee Oct 31, 2014

For this blog, I would like to focus on some basic health checks for HANA. These checks can give you a good idea of how your HANA system is performing. We will go through some SQL statements and the thresholds that determine the status of your HANA system. Knowing how the HANA system is performing allows us to plan ahead and avoid unnecessary system disasters.




System Availability:


The following query shows you how many times each service was restarted in each hour and date within the analyzed period.

select  to_dats(to_date("SNAPSHOT_ID"))AS "DATE", hour("SNAPSHOT_ID") AS "HOUR",
SNAPSHOT_ID >= add_days(now(), -14)





- RED: The name server is not running, or the name server / index server had 3 or more restarts in the analyzed period.

- YELLOW: The statistics server is not running, the name server / index server had up to 2 restarts, or the remaining servers had 2 or more restarts in the analyzed period.

- GREEN: All other cases.


The example below shows that this standalone test system got restarted 1 time on October 22nd, 2 times on October 21st at around 11pm and another 2 times at around 10pm. In total, there are 3 restarts of the indexserver and nameserver in the analyzed period. If the nameserver is currently not running, this will be rated as RED. To find out whether the database was restarted manually or for some other reason, you can check the index server and name server traces for more information. If you need further assistance, please consider opening an incident with Product Support.



Top 10 Largest Non-partitioned Column Tables (records)

The following query displays the top 10 non-partitioned column tables and how many records exist in each.


SELECT top 10 schema_name, table_name, part_id, record_count from SYS.M_CS_TABLES where schema_name not LIKE '%SYS%' and part_id = '0' order by record_count desc, schema_name, table_name




- RED: Tables with more than 1.5 billion records exist.

- YELLOW: Tables with more than 300 million records exist.

- GREEN: No table has more than 300 million records.

In the threshold chart, a column table with more than 300 million records gets a yellow rating. This is not yet critical with regard to the technical limit of 2 billion records, but you should consider partitioning those tables that are expected to grow rapidly in the future, to ensure parallelization and sufficient performance. For more information, please refer to the SAP Notes below or the SAP HANA Administration Guide.


Useful SAP Notes:

- 1650394  - SAP HANA DB: Partitioning and Distribution of Large Tables

- 1909763 - How to handle HANA Alert 17: ‘Record count of non-partitioned column-store tables’


Top 10 Largest Partitioned Column Tables (records)

This check displays the 10 largest partitioned column tables in terms of the number of records.

select top 10 schema_name, table_name, part_id, record_count

from SYS.M_CS_TABLES

where schema_name not LIKE '%SYS%' and part_id <> '0'

order by record_count desc, schema_name, table_name




- RED: Tables with more than 1.9 billion records exist.

- YELLOW: Tables with more than 1.5 billion and fewer than 1.9 billion records exist.

- GREEN: No table has more than 1.5 billion records.


The recommendation is to consider re-partitioning once a table has passed 1.5 billion records, as the technical limit is two billion records per table. If a table has more than 1.9 billion records, you should re-partition as soon as possible. For more information, please refer to the SAP Notes below or the SAP HANA Administration Guide.
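The rating logic of this check boils down to two cut-offs; a quick Python sketch (the function name is made up for illustration, this is not an SAP tool):

```python
def partitioned_table_rating(record_count):
    """Traffic-light rating for a partitioned column table,
    based on the two-billion-records technical limit."""
    if record_count > 1_900_000_000:
        return "RED"      # re-partition as soon as possible
    if record_count > 1_500_000_000:
        return "YELLOW"   # consider re-partitioning
    return "GREEN"

print(partitioned_table_rating(1_600_000_000))  # YELLOW
```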


Useful SAP Notes:

-   1650394  - SAP HANA DB: Partitioning and Distribution of Large Tables


Top 10 Largest Column Tables in Terms of Delta size (MB):

This check displays the 10 largest column tables in terms of the size of the delta and history delta stores.

select top 10 schema_name, table_name, part_id, round(memory_size_in_main /(1024*1024),2), round(memory_size_in_delta/(1024*1024),2), record_count, RAW_RECORD_COUNT_IN_DELTA from SYS.M_CS_TABLES

where schema_name not LIKE '%SYS%'

order by memory_size_in_delta desc, schema_name, table_name










The mechanism of main and delta storage allows high compression and high write performance. Write operations are performed on delta store and changes are taken over from the delta to main store asynchronously during Delta Merge. The column store performs a delta merge if one of the following events occurs:

- The number of lines in delta storage exceeds the specified limit

- The memory consumption of the delta storage exceeds the specified limit

- The delta log exceeds the defined limit
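In other words, the automatic merge decision is a simple OR over the three limits. A schematic Python sketch (the limit names are illustrative; they are not the actual ini parameters):

```python
def should_merge(delta_rows, delta_bytes, delta_log_bytes,
                 max_rows, max_bytes, max_log_bytes):
    """A delta merge is triggered when any delta-store limit is exceeded."""
    return (delta_rows > max_rows
            or delta_bytes > max_bytes
            or delta_log_bytes > max_log_bytes)

# Row-count limit exceeded, so a merge would be triggered.
print(should_merge(10_000_000, 2**30, 2**28,
                   max_rows=1_000_000, max_bytes=2**31, max_log_bytes=2**32))
```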


Ensure that delta merges for all tables are enabled either by automatic merge or by application-triggered smart merge. In critical cases trigger forced merges for the mentioned tables. For more detail, please refer to the following SAP Note or the SAP HANA Administration Guide.


Useful SAP Notes:

- 1977314 - How to handle HANA Alert 29: 'Size of delta storage of column-store tables'


CPU Usage:

To check the CPU usage in relation to the available CPU capacity, you can go to the Load Monitor from SAP HANA Studio.




- RED: Average CPU usage >= 90% of the available CPU capacity.

- YELLOW: Average CPU usage >= 75% and < 90% of the available CPU capacity.

- GREEN: Average CPU usage < 75% of the available CPU capacity.



The Load Graph and the Alert tabs can provide the information of time frame of the high CPU consumption. If you are not able to determine the time frame because the issue happened too long ago, check the following StatisticsServer table which includes historical host resource information up to 30 days:




With the time frame, you can search through the trace files of the responsible process, as they will provide indications of the threads or queries that were running at the time. If the high CPU usage is a recurrent issue due to scheduled batch jobs or data loading processes, you may want to turn on the Expensive Statements trace to record all involved statements. For recurrent background jobs like backups and delta merges, you may want to analyze the system views "SYS"."M_BACKUP_CATALOG" and "SYS"."M_DELTA_MERGE_STATISTICS" or "_SYS_STATISTICS"."HOST_DELTA_MERGE_STATISTICS".


For more information, please refer to the following SAP Note and also the SAP HANA Troubleshooting and Performance Analysis Guide.


SAP Note:

- 1909670 - How to handle HANA Alert 5: ‘Host CPU Usage'



Memory Consumption:

To check the memory consumption of tables compared to the available allocation limit, you can go to the Load Monitor from HANA Studio.







- RED: Memory consumption of tables >= 70% of the available allocation limit.

- YELLOW: Memory consumption of tables >= 50% of the available allocation limit.

- GREEN: Memory consumption of tables < 50% of the available allocation limit.


As an in-memory database, it is critical for SAP HANA to handle and track its memory consumption carefully and efficiently; therefore, the HANA database pre-allocates and manages its own memory pool. The key concepts here are physical memory, allocated memory, and used memory.

- Physical Memory: The amount of physical (system) memory available on the host.

- Allocated Memory: The memory pool reserved by SAP HANA from the operating system

- Used Memory: The amount of memory that is actually used by HANA database.


Used Memory serves several purposes:

- Program code and stack

- Working space and data tables (heap and shared memory). The heap and shared area is used for working space, temporary data, and storing all data tables (row and column store tables).


For more information, please refer to the following SAP Note and also the SAP HANA Troubleshooting and Performance Analysis Guide.


Useful SAP Note:

- 1999997 - FAQ: SAP HANA Memory


HANA Column Unloads:


Check Column Unloads on the Load Graph under the Load tab in SAP HANA Studio. This graph will give you an idea of the time frame of any high column-unload activity.

- RED: >= 100,000 column unloads.

- YELLOW: >= 1,001 and < 100,000 column unloads.

- GREEN: <= 1,000 column unloads.


Column store unloads indicate that memory requirements exceed the memory currently available in the system. In a healthy situation, it could be that the executed code requests a reasonable amount of memory and requires SAP HANA to free up rarely used resources. However, a high number of table unloads will impact performance, as the tables need to be fetched from disk again.

There are a couple of things to look for.


- If the unloads happen on the statistics server, it might be that the memory allocated for the statistics server is not sufficient; most of the time this is accompanied by out-of-memory errors. If this is the case, refer to SAP Note 1929538 (HANA Statistics Server - Out of memory). On the other hand, if the unload motivation is 'Unused resource', you should increase the parameter global.ini [memoryobjects] unused_retention_period.


- If the unloads happen on the indexserver and the reason for the unloads is low memory, it could be any of the following:

1) The system is not properly sized

2) The table distribution is not optimized

3) Temporary memory shortage due to expensive SQL or mass activity


For more detail information on this, please refer to SAP Note 1977207.

1977207 - How to handle HANA Alert 55: Columnstore unloads

License Information:

The view M_LICENSE shows the date the HANA license will expire. You can also check the HANA license information from HANA Studio: right-click the HANA system > Properties > License. If the license expires, the HANA system goes into a lockdown state; therefore, it is important to make sure the license is renewed before it expires.


select system_id, install_no, to_date(expiration_date), permanent, valid, product_name, product_limit, product_usage FROM "SYS"."M_LICENSE"


HANA database supports two kinds of license keys:

1) Temporary license key:

      - It is valid for 90 days.

      - It comes with a new SAP HANA database. During these 90 days, you should request and apply a permanent license key.

2) Permanent license key:

     - It is valid until the predefined expiration date.

     - Before a permanent license key expires, you should request and apply a new permanent license key.
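Acting on the EXPIRATION_DATE returned by M_LICENSE is then a simple date calculation; a Python sketch (the 30-day warning window is an arbitrary choice, not an SAP default):

```python
from datetime import date

def days_until_expiry(expiration_date, today=None):
    """Days left on the license key; negative means already expired."""
    today = today or date.today()
    return (expiration_date - today).days

remaining = days_until_expiry(date(2015, 1, 31), today=date(2014, 10, 31))
print(remaining)       # 92
print(remaining < 30)  # False: no renewal warning yet
```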


For more information and the steps to request a license, please refer to SAP Note 1899480:


- 1899480 - How to handle HANA Alert 31: 'License expiry'

I just reached the final credits of In-Memory Data Management (2014) - Implications on Enterprise Systems and I'd like to share my thoughts on each session.


I won't explain the content of each week (you can find a very good explanation here), but I'll give my impressions of what I liked or learned. It's totally personal; you may find other topics more interesting.


Let’s go:


Week 1

Lecture 1 - History of Enterprise Computing

When you start to hear a senior man with white hair talking about tape storage you may think, "what am I doing? I just bought my very modern smartphone and I'm wasting my time hearing this man talk about storing information on… tapes?!". That's not true in this case. I always like to hear Plattner; it's like a Jedi Master teaching. This introduction is very important to understand the motivation behind, and the birth of, the in-memory database.


Lecture 2 - Enterprise Application Characteristics

In my ABAP classes I always teach about OLAP/OLTP and the paradigm of having separate machines with different tuning for each one. Here I learned a different story.


Lecture 3 - Changes in Hardware

How cheap memory, fast networks and affordable servers enable in-memory computing.


Lecture 4 - Dictionary Encoding

Here is one of the key points of SAP HANA: column storage. Plattner explains columnar storage and starts to talk about compression.


Lecture 5 - Architecture Blueprint of SanssouciDB

A very quick explanation of an academic, experimental database.


Week 2

Lecture 1 - Compression

It's another key point of SAP HANA. You will learn about compression techniques and, yes, you will start to do some math to compare them.


Lecture 2 - Data Layout

More detail about row vs. column data storage. Pros and cons for each approach and a hybrid possibility.


Lecture 3 - Row v. Column Layout (Excursion)

Here we have more of Professor Plattner giving more information about data layout.


Lecture 4 - Partitioning

As a geek, I had only heard about partitioning when I wanted to install two operating systems on the same machine. Here I learned a very powerful technique that helps parallelism reach higher levels.


Lecture 5 - Insert

Insert command. Under the hood.


Lecture 6 - Update

Lots of things to modify, re-order and re-write.


Lecture 7 - Delete

Not delete, left behind.


Lecture 8 - Insert-Only

Worry about the future without forgetting the past.


Week 3

Lecture 1 - Select

Projection, Cartesian Product and Selectivity. All the beautiful theory about retrieving data.


Lecture 2 - Tuple Reconstruction

Retrieving a tuple in a row database: piece of cake. Retrieving a tuple in a column database: pain in the …


Lecture 3 - Scan Performance

Full table scan: row versus column layout. Show me the numbers!


Lecture 4 - Materialization Strategies

Materialization: when the attribute vector and dictionary mean something. Here you will learn two strategies for materialization during a query: early and late materialization.


Lecture 5 - Differential Buffer

A special buffer to help speed up write operations. Do you remember the insert-only paradigm? This is the "worry about the future" part.


Lecture 6 - Merge

When the differential buffer is merged into the main store. Do you remember the insert-only paradigm? This is the "without forgetting the past" part.


Lecture 7 - Join

Once you learn that retrieving a tuple in a column layout is a pain, you can imagine what doing a join is like. Here you will learn why.


Week 4

Lecture 1 - Parallel Data Processing

A very good lesson about parallel data processing. The lecture and reading material cover both hardware and software aspects of parallelism. A highlight is MapReduce; I highly recommend you dig into it.


Lecture 2 - Indices

Presenting the indices of indices: inverted indices. “Using this approach, we reduce the data volume read by a CPU from the main memory by providing a data structure that does not require the scan of the entire attribute vector.” (from the reading material, chapter 18).


Lecture 3 - Aggregate Functions

Coming from the old-school ABAP generation, aggregate functions still cause some itching in my ears. However, with the push-down concept everything changed. Can an old dog still learn new tricks?


Lecture 4 - Aggregate Cache

In the past everything was simple: storage on disk, cache in memory. Today, storage in memory and cache in… memory too!? Why do I need a cache in an in-memory database? Caching pre-chewed data: that is the aggregate cache.


Lecture 5 - Enterprise Simulations

Answering a query insanely fast is only part of the game. Now enterprise simulations are possible: change some variables and see the result. OK, it's not that simple, but it's awesome anyway!


Lecture 6 - Enterprise Simulations on Co-processors (Excursion)

The awesomeness of enterprise simulation with co-processors. Those born before the internet might remember the 387 co-processor; this is "almost" the same. In this presentation we see how co-processors can help with computation-intensive processing.


Week 5

Lecture 1 - Logging

If you think logging is just for checking what happened in the past, or for finding who changed the value that caused yesterday's biggest production incident, think twice. Logging has a very important role in the recovery process.


Lecture 2 - Recovery

The first thing everyone wonders when learning about in-memory databases is "what if the power goes down? Will all my data be wiped out?". Here you learn that it's true, but you also learn how in-memory databases overcome that.


Lecture 3 - Replication

I remember a very simplistic definition of the ACID concept: "all or nothing". In this lecture we see the "all in" concept applied to in-memory databases: how to guarantee ACID in a database stored in RAM.


Lecture 4 - Read-only Replication Demo (Excursion)

Replication in action.


Lecture 5 - Hot-Standby

It's a very hot topic (sorry… I won't do it again). Hot standby works together with replication in order to guarantee ACID. It's a good opportunity to see why we can say that SAP HANA is a very beautiful piece of engineering.


Lecture 6 - Hot-Standby Demo (Excursion)

Hot-standby in action.


Lecture 7 - Workload Management and Scheduling

SAP HANA is all about speed, including user response. Professor Plattner explains the importance of having a very responsive system. Here is a quote that summarizes it: "we have to answer the user at the same speed as Excel, otherwise the user will download the data to Excel and work there".


Lecture 8 - Implications on Application Development

What are the implications for those special people who develop applications for users? Code push-down (move business logic to the database) and stored procedures; yes, we're still talking about ABAP. These are the biggest paradigm shifts for ABAP developers.


Week 6

Lecture 1 - Database-Driven Data Aging

Carsten Meyer explains new ideas about archiving and old data.


Lecture 2 - Actual and Historical Partitions

Cold data is not about aging; it's about usage. 'Nuff said, Professor Plattner.


Lecture 3 - Genome Analysis

In-memory computing has huge implications beyond enterprise systems. Let me bring in an excerpt from the "High Performance In-Memory Genome Data Analysis" reading material that helps demystify HANA as a luxury: "Nowadays, a range of time-consuming tasks has to be accomplished before researchers and clinicians can work with analysis results, e.g., to gain new insights".


Lecture 4 - Showcase: Virtual Patient Explorer (Excursion)

Medical and patient stuff with lots and lots of information.


Lecture 5 - Showcase: Medical Research Insights (Excursion)

More medical and patient stuff with lots and lots of information.


Lecture 6 - Point-of-Sales Explorer

How the in-memory SAP HANA DB helps sales analysis: three tables and 8 billion rows. Featuring The Professor commenting on SAP HANA performance: "freaking unbelievable! People are scared!".


Lecture 7 - What’s in it for Enterprises (Excursion)

More benefits of using SAP HANA for enterprises. Decisions can be made on a real-time basis.


Lecture 8 - The Enterprise Cloud (Excursion)

Bernd Leukert, member of the Executive Board of SAP, talks about how running a business in the cloud is much more than uploading your files to Dropbox.


As I said, these were my impressions of each section. I really enjoyed this training and it's helping me a lot to understand other SAP HANA trainings.


I consider it the cornerstone for anyone who decides to work with SAP HANA.

Before my tenure at SAP I worked in the sales group of a business intelligence (BI) startup, and I used to pinch-hit for our under-staffed training department. This meant that when they were in a bind, I'd occasionally jump in to do on-site training sessions with a new customer deploying our BI software.


While I enjoyed showing the solutions without the pressure of having to close a sales deal, I always found the database section, where I did a relational database 101 overview and connected our software, to be quite tedious.


Jump forward to today’s hyper-connected world, where everything is digitized, fueling new data-driven business models and there’s a lot more to be excited about.


It’s not the individual advancements in data processing technology that I’m jazzed about… it’s what happens when you combine the data from devices, sensors and machines, creating inventive scenarios and adding unique business value that I really appreciate – especially given the expanding data challenges organizations face.


As an example, if a software provider or enterprise customer strings together a sensor with an embedded or remote database, adds real-time event processing software and a data warehouse with in-memory computing, and then tosses in predictive analytics for good measure, they have a great recipe for:
- A smart vending machine that can deliver user recommendations and transaction history, or tell the candy supplier when a refill is needed.
- Intelligent plant equipment that captures its own usage information and provides proactive repair warnings based on historical failure data.
- Real-time fleet management systems that calculate the optimal distribution of work to maximize efficiency, distributing work to fleet assets in real time.


Add cloud and mobile, and of course all that exciting data management utility and information becomes available anywhere, anytime, with lower TCO…



There seems to be an infinite number of operational situations, processes and business models where end-to-end data management creates new services and revenue streams delivering customer value.


And that’s exciting… 


If you’re interested in exploring more use cases like:
- Real-time problem solving
- Smart metering
- Real-time promotions
- Pattern and customer profitability analysis
…then feel free to navigate The Changing World of Data Solution Map,  our Data Management Solution Brief, or reach out to our OEM team. Many partners are already embedding SAP data management solutions with their offerings to reduce their time to market, differentiate their solution and open up new revenue opportunities.


Get the latest updates on SAP OEM by following @SAPOEM on Twitter. For more details on SAP OEM partnership and to learn about SAP OEM platforms and solutions, visit us at www.sap.com/partners/oem

In the statisticsserver trace below, look at the memory consumption figures for the statisticsserver. Pay attention to the PAL (process allocation limit), AB (allocated bytes) and U (used) values. When the U value is close to, equal to or bigger than the PAL value, this indicates that an out-of-memory situation occurred.
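That comparison can be scripted against the IPMM lines of such a trace; a Python sketch (the 90% threshold for "close to PAL" is an assumption, not an SAP-defined value):

```python
import re

def near_oom(ipmm_line, threshold=0.9):
    """Parse a 'PID=... (name), PAL=..., AB=..., U=...' trace line and
    report whether used memory is close to the process allocation limit."""
    fields = dict(re.findall(r"(PAL|AB|U)=(\d+)", ipmm_line))
    return int(fields["U"]) / int(fields["PAL"]) >= threshold

line = ("PID=27746 (hdbstatisticsse), PAL=10579663257, "
        "AB=10512535552, UA=0, U=9137040196, FSL=0")
print(near_oom(line, threshold=0.85))  # True: U is ~86% of PAL
```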




[27787]{-1}[-1/-1] 2014-09-25 16:10:22.205322 e Memory ReportMemoryProblems.cpp(00733) : OUT OF MEMORY occurred.

Failed to allocate 32816 byte.

Current callstack:

1: 0x00007f2d0a1c99dc in MemoryManager::PoolAllocator::allocateNoThrowImpl(unsigned long, void const*)+0x2f8 at PoolAllocator.cpp:1069 (libhdbbasis.so)

2: 0x00007f2d0a24b900 in ltt::allocator::allocateNoThrow(unsigned long)+0x20 at memory.cpp:73 (libhdbbasis.so)

3: 0x00007f2cf78060dd in __alloc_dir+0x69 (libc.so.6)

4: 0x00007f2d0a247790 in System::UX::opendir(char const*)+0x20 at SystemCallsUNIX.cpp:126 (libhdbbasis.so)

5: 0x00007f2d0a1016dc in FileAccess::DirectoryEntry::findFirst()+0x18 at SimpleFile.cpp:511 (libhdbbasis.so)

6: 0x00007f2d0a1025da in FileAccess::DirectoryEntry::DirectoryEntry(char const*)+0xf6 at SimpleFile.cpp:98 (libhdbbasis.so)

7: 0x00007f2d0a04872f in Diagnose::TraceSegmentCompressorThread::run(void*&)+0x26b at TraceSegment.cpp:150 (libhdbbasis.so)

8: 0x00007f2d0a0c0dcb in Execution::Thread::staticMainImp(void**)+0x627 at Thread.cpp:475 (libhdbbasis.so)

9: 0x00007f2d0a0c0f6d in Execution::Thread::staticMain(void*)+0x39 at Thread.cpp:543 (libhdbbasis.so)

Memory consumption information of last failing ProvideMemory, PM-INX=103393:

Memory consumption information of last failing ProvideMemory, PM-INX=103351:

IPMM short info:

GLOBAL_ALLOCATION_LIMIT (GAL) = 200257591012b (186.50gb), SHARED_MEMORY = 17511289776b (16.30gb), CODE_SIZE = 6850695168b (6.37gb)

PID=27562 (hdbnameserver), PAL=190433938636, AB=2844114944, UA=0, U=1599465786, FSL=0

PID=27674 (hdbcompileserve), PAL=190433938636, AB=752832512, UA=0, U=372699315, FSL=0

PID=27671 (hdbpreprocessor), PAL=190433938636, AB=760999936, UA=0, U=337014040, FSL=0

PID=27746 (hdbstatisticsse), PAL=10579663257, AB=10512535552, UA=0, U=9137040196, FSL=0

PID=27749 (hdbxsengine), PAL=190433938636, AB=3937583104, UA=0, U=2352228788, FSL=0

PID=27743 (hdbindexserver), PAL=190433938636, AB=155156312064, UA=0, U=125053733102, FSL=10200547328

Total allocated memory= 198326363056b (184.70gb)

Total used memory     = 163214166171b (152gb)

Sum AB                = 173964378112

Sum Used              = 138852181227

Heap memory fragmentation: 17% (this value may be high if defragmentation does not help solving the current memory request)

Top allocators (ordered descending by inclusive_size_in_use).

1: / 9137040196b (8.50gb)

2: Pool 8130722166b (7.57gb)

3: Pool/StatisticsServer 3777958248b (3.51gb)

4: Pool/StatisticsServer/ThreadManager                                     3603328480b (3.35gb)

5: Pool/StatisticsServer/ThreadManager/Stats::Thread_3                     3567170192b (3.32gb)

6: Pool/RowEngine 1504441432b (1.40gb)

7: AllocateOnlyAllocator-unlimited 887088552b (845.99mb)

8: Pool/AttributeEngine-IndexVector-Single                                 755380040b (720.38mb)

9: AllocateOnlyAllocator-unlimited/FLA-UL<3145728,1>/MemoryMapLevel2Blocks 660602880b (630mb)

10: AllocateOnlyAllocator-unlimited/FLA-UL<3145728,1>                       660602880b (630mb)

11: Pool/RowEngine/RSTempPage 609157120b (580.93mb)

12: Pool/NameIdMapping                                                      569285760b (542.91mb)

13: Pool/NameIdMapping/RoDict 569285696b (542.91mb)

14: Pool/RowEngine/LockTable 536873728b (512mb)

15: Pool/malloc                                                             429013452b (409.13mb)

16: Pool/AttributeEngine 253066781b (241.34mb)

17: Pool/RowEngine/Internal 203948032b (194.50mb)

18: Pool/malloc/libhdbcs.so 179098372b (170.80mb)

19: Pool/StatisticsServer/LastValuesHolder                                  167034760b (159.29mb)

20: Pool/AttributeEngine/Delta 157460489b (150.16mb)

Top allocators (ordered descending by exclusive_size_in_use).

1: Pool/StatisticsServer/ThreadManager/Stats::Thread_3                     3567170192b (3.32gb)

2: Pool/AttributeEngine-IndexVector-Single 755380040b (720.38mb)

3: AllocateOnlyAllocator-unlimited/FLA-UL<3145728,1>/MemoryMapLevel2Blocks 660602880b (630mb)

4: Pool/RowEngine/RSTempPage 609157120b (580.93mb)

5: Pool/NameIdMapping/RoDict 569285696b (542.91mb)

6: Pool/RowEngine/LockTable 536873728b (512mb)

7: Pool/RowEngine/Internal                                                 203948032b (194.50mb)

8: Pool/malloc/libhdbcs.so 179098372b (170.80mb)

9: Pool/StatisticsServer/LastValuesHolder                                  167034760b (159.29mb)

10: StackAllocator                                                          116301824b (110.91mb)

11: Pool/AttributeEngine/Delta/LeafNodes                                    95624552b (91.19mb)

12: Pool/malloc/libhdbexpression.so 93728264b (89.38mb)

13: Pool/AttributeEngine-IndexVector-Sp-Rle                                 89520328b (85.37mb)

14: AllocateOnlyAllocator-unlimited/ReserveForUndoAndCleanupExec            84029440b (80.13mb)

15: AllocateOnlyAllocator-unlimited/ReserveForOnlineCleanup                 84029440b (80.13mb)

16: Pool/RowEngine/CpbTree 68672000b (65.49mb)

17: Pool/RowEngine/SQLPlan 63050832b (60.12mb)

18: Pool/AttributeEngine-IndexVector-SingleIndex                            57784312b (55.10mb)

19: Pool/AttributeEngine-IndexVector-Sp-Indirect                            56010376b (53.41mb)

20: Pool/malloc/libhdbcsstore.so 55532240b (52.95mb)

[28814]{-1}[-1/-1] 2014-09-25 16:09:19.284623 e Mergedog Mergedog.cpp(00198) : catch ltt::exception in mergedog watch thread run(

): exception  1: no.1000002  (ptime/common/pcc/pcc_MonitorAlloc.h:59)

    Allocation failed

exception throw location:



You can refer to the two solutions below if the HANA system is not ready to switch to the embedded statisticsserver for any reason.

Solution A)


1) If the statistics server is down and inaccessible, kill the hdbstatisticsserver PID at OS level. The statisticsserver will be restarted immediately by the HDB daemon.


2) Check the memory consumed by the statisticsserver:
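One way to do this is a query against the M_SERVICE_MEMORY monitoring view — a sketch only; the exact column selection may vary by revision:

```sql
-- Sketch: compare allocation limit vs. used memory for the statisticsserver
SELECT HOST, SERVICE_NAME,
       ROUND(EFFECTIVE_ALLOCATION_LIMIT / 1024 / 1024 / 1024, 2) AS "PAL (GB)",
       ROUND(TOTAL_MEMORY_USED_SIZE / 1024 / 1024 / 1024, 2)     AS "Used (GB)"
  FROM SYS.M_SERVICE_MEMORY
 WHERE SERVICE_NAME = 'statisticsserver';
```

If the used value is close to the PAL value, the statisticsserver is about to hit the same out-of-memory condition shown in the trace above.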



3) Check whether the statistics server deletes the old data: go to Catalog -> _SYS_STATISTICS -> Tables, randomly check tables starting with GLOBAL* and HOST*, and sort by SNAPSHOT_ID in ascending order. Ensure the oldest date matches the retention period.


Alternatively, you can run the command: select min(snapshot_id) from _SYS_STATISTICS.<TABLE>
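For example, against the HOST_WORKLOAD table mentioned in the next step (an illustrative pick — substitute any of the history tables):

```sql
-- Sketch: oldest snapshot still kept for one statistics history table
SELECT MIN(SNAPSHOT_ID) FROM _SYS_STATISTICS.HOST_WORKLOAD;
```

The returned timestamp should be no older than the configured retention period for that table.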






4) Check the retention period of each table in Configuration -> statisticsserver -> statisticsserver_sqlcommands, for example:


30 days for HOST_WORKLOAD


5) If old data is kept for more than 30 days (or you want to delete old data by shortening the retention period), follow SAP Note 1929538 - HANA Statistics Server - Out of Memory -> Option 1:

Create the procedure using the file attached to SAP Note 1929538 and run: call set_retention_days(20);


6) Once done, you will see that data older than 20 days gets deleted:


Memory consumption of the statisticsserver is reduced.

Also, the minimum SNAPSHOT_ID gets updated and now dates back only 20 days, matching the new retention period:

7) You can reset the retention period to the default value at any time by calling call set_retention_days(30); or by restoring every SQL command to its default in statisticsserver_sqlcommands.

Solution B)

i) Follow SAP Note 1929538 - HANA Statistics Server - Out of Memory and increase the allocation limit for the statisticsserver. This can be done only when the statisticsserver is up and accessible; otherwise, you need to kill and restart it first.
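A sketch of how the allocation limit can be raised via SQL — the value is in MB and purely an example; SAP Note 1929538 describes the recommended sizing:

```sql
-- Sketch: raise the statisticsserver allocation limit (value in MB, example only)
ALTER SYSTEM ALTER CONFIGURATION ('statisticsserver.ini', 'SYSTEM')
  SET ('memorymanager', 'allocationlimit') = '15000' WITH RECONFIGURE;
```

As noted above, this treats the symptom; verify the retention cleanup first before raising the limit.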




The script HANA_Histories_RetentionTime_Rev70+ from SAP Note 1969700 - SQL statement collection for SAP HANA provides a good overview of retention times.

My two cents' worth: for any statisticsserver OOM error, always check the memory usage of the statisticsserver to ensure obsolete data gets deleted after the retention period, instead of blindly increasing the allocation limit for the statisticsserver.

Additionally, you can also refer to SAP Note 2084747 - Disabling memory intensive data collections of standalone SAP HANA statisticsserver to disable data collections that consume a lot of memory.

Hope it helps,


Nicholas Chang


This blog shows how to use the SAP Crypto library to enable SAML SSO from SAP BI4 to the SAP HANA DB. If you want to use OpenSSL instead, please check the other SCN blog for details.


Turn on SSL using SAP Crypto Library


1.     Install SAP Crypto library

SAP Crypto Library can be downloaded from the Service Marketplace. Browse to http://service.sap.com/swdc, expand Support Packages and Patches -> Browse our Download Catalog -> SAP Cryptographic Software -> SAPCRYPTOLIB -> SAPCRYPTOLIB 5.5.5 -> Linux on x86_64 64bit.


Use SAPCAR to extract sapgenpse and libsapcrypto.so to /usr/sap/<SID>/SYS/global/security/lib/

Add the directory containing the SAP Crypto libraries to your library path:

  export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/sap/<SAPSID>/SYS/global/security/lib


The new CommonCryptoLib (SAPCRYPTOLIB) Version 8.4.30 (or higher) is fully compatible with previous versions of SAPCRYPTOLIB, but adds features of SAP Single Sign-On 2.0 Secure Login Library. It can be downloaded in this location:

expand Support Packages and Patches -> Browse our Download Catalog -> Additional Components -> SAPCRYPTOLIB -> COMMONCRYPTOLIB 8



Please refer to the following SAP note for details about using CommonCryptoLib:

2084313 - Install and Verify CommonCrypto to SAP HANA


CommonCryptoLib has been supported by HANA since Revision 74. Starting with HANA SPS 09, CommonCryptoLib is delivered together with HANA.


2.     Create the SSL key pair and certificate request files

  • Copy sapgenpse to the $SECUDIR directory, then run sapgenpse to generate the sapsrv.pse and SAPSSL.req files:

  ./sapgenpse get_pse -p sapsrv.pse -r SAPSSL.req "CN=<FQDN of the host>"


  • Send the certificate request to a Certificate Authority to be signed. Browse to http://service.sap.com/trust, expand SAP Trust Center Services in Detail, click SSL Test Server Certificates, and then click the ‘Test it Now!’ button. Paste the content of the SAPSSL.req file into the text box and click Continue.
    SAP returns the signed certificate as text; copy this text and paste it into a file on the HANA server.
  • Download the SAP SSL Test Server CA Certificate from the http://service.sap.com/trust site:

  • Import the Signed Certificate using sapgenpse
    ./sapgenpse import_own_cert -c SAPSSL.cer -p sapsrv.pse -r SAPServerCA.cer
3. Check HANA settings
global.ini -> [communication] -> sslcryptoprovider = sapcrypto (change it to commoncrypto if you use CommonCryptoLib)
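Instead of editing the ini file directly, the parameter can also be set via SQL — a sketch; swap the value to commoncrypto when CommonCryptoLib is in use:

```sql
-- Sketch: set the SSL crypto provider system-wide in global.ini
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
  SET ('communication', 'sslcryptoprovider') = 'sapcrypto' WITH RECONFIGURE;
```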



4. Restart HANA, and test if SSL works from HANA Studio

Click the "Connect using SSL" option in the properties of the connection. Once done, a lock icon will appear on the connection in HANA Studio.

Create Certificate file for BO instance.


  1. Create HANA Authentication connection
    Log on to the BO CMC -> Applications -> HANA Authentication and click New. After providing the HANA hostname, port and IdP name, click the Generate button, then click OK; you will see an entry added for HANA authentication.
  2. Copy the content of the generated certificate and paste it to a file on your HANA server:

  3. Add the certification to the pse file:

./sapgenpse maintain_pk -p sapsrv.pse -a sapid.cer


4. You may need to restart HANA for the new PSE file to take effect.


SAML configuration in HANA


  1. Create SAML provider in HANA

Import the SAML identity provider from the certificate file (sapid.cer) that you created in the last step, under Security -> Open Security Console -> SAML Identity Providers. Make sure you have chosen the SAP Cryptographic Library.



2. Create a HANA user TESTUSER with SAML authentication.

Check the SAML option, click the Configure link, then add the identity provider created in the last step ('HANA_BI_PROVIDER') with the external user 'Administrator'.




Test SAML authentication


Go to the BO CMC -> Applications -> HANA Authentication, edit the entry created in the previous step, and click the "Test Connection" button.




If the connection test is not successful, please change the trace level of the following to DEBUG:

indexserver.ini - authentication, xssamlproviderconfig

The index server trace will provide more information on why the authentication failed.


You may find more information about tracing in this SAP note:

2083682  - How to Enhance Tracing for SAP HANA SSO Login Issues



How to Configure SSL for SAP HANA XSEngine using SAPCrypto

Configuring SAML with SAP HANA and SAP BusinessObjects 4.1 - Part 1

Use SAML to Enable SSO for your SAP HANA XS App

