User CRUD operations are very common in HANA XS application development.

Let's explore concurrency control and lock mechanism design for this case.


1. Assumed use case

In the assumed scenario:

The database table has four fields: DATE, PERSON_ID, NAME and PERSON_INFO.

DATE and PERSON_ID together form the primary key.


There are two pages in the application:

The first page shows the content exposed from the HANA database.

You can click on the "Edit Content" button in the header of the table.




The designed scenario is: "Multiple users cannot edit the same record during the same time period".

For example, while user1 is editing the content of PersonID 1001 on DATE '2015-02-09', user2 cannot edit PersonID 1001 on '2015-02-09', but user2 is allowed to edit PersonID 1001 on '2015-02-10' or PersonID 1002 on '2015-02-09'.


If no conflict occurs, the user is directed to this page and can then save or cancel.




If someone else is editing, the page gives an alert:





2. Business requirement analysis


Here is a very simple business requirement process flow diagram for better understanding.





There are some points to notice:

(1)     When the user clicks on the "Edit" button, the application should decide whether to direct the user to the next page or to alert that another user is editing the same record.

(2)     In case the user forgets to terminate the session, the system should have a default time-out mechanism that releases the lock.


3. Technical design


My idea is to write down all the users' operations and maintain the operation log in a transaction table.

When the user clicks on the "Edit" button, the service first checks the transaction log and then decides whether the user may enter the next page.


Lock Table design:



This table has four fields. First, it contains the same primary key columns (DATE and PERSON_ID) as the database table.

The LOCK_TIMESTAMP is used for tracking the time when the click event session starts.

The SESSION_USER is used for taking note of the operator of this session.

Because this table is used in OLTP scenarios and is always fetched or looked up by single records, row store is the better choice.
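Sketched as DDL, the lock table could look like the following. The schema and table name ("XXX"."LOG_FOR_LOCK") are taken from the service code shown later; the exact column types are my assumption, not part of the original design:

```sql
-- Sketch only: names follow the XSJS service, column types are assumed.
-- ROW table, because the table is always accessed by single records (OLTP).
CREATE ROW TABLE "XXX"."LOG_FOR_LOCK" (
    "DATE"           DATE         NOT NULL,
    "PERSON_ID"      VARCHAR(10)  NOT NULL,
    "LOCK_TIMESTAMP" TIMESTAMP,
    "SESSION_USER"   VARCHAR(256),
    PRIMARY KEY ("DATE", "PERSON_ID")
);
```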


Here is the enhanced process flow:




Whenever the user clicks on the "Edit" button, the service first looks up the transaction table.

Three situations are covered:

(1)     If no log exists for this record, add a new log for it, return yes to the user, and let him/her view the next page.

(2)     If a log exists, check the timestamp difference:

          (a)     If the timestamp difference is larger than the value we set, overwrite the log with the current timestamp and return yes.

          (b)     If the timestamp difference is smaller than the value we set, keep the log unchanged, return no together with the SESSION_USER value, and alert a message telling the user that someone else is editing the record.
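The three situations above can be sketched as a pure decision function. The names and the return shape here are illustrative only, not the actual service code:

```javascript
// Sketch of the lock decision. lockEntry is the row read from the lock
// table (or null if no log exists); timeoutSeconds is the configured
// time-out (1200 in the example service below).
function decideEdit(lockEntry, nowMillis, currentUser, timeoutSeconds) {
    // (1) no log for this record: grant the lock
    if (lockEntry === null) {
        return { allowed: true };
    }
    var ageSeconds = (nowMillis - lockEntry.lockTimestampMillis) / 1000;
    // (2a) existing log is older than the time-out: overwrite and grant
    if (ageSeconds > timeoutSeconds) {
        return { allowed: true };
    }
    // the same user may re-enter his/her own lock (avoids self-locking)
    if (lockEntry.sessionUser === currentUser) {
        return { allowed: true };
    }
    // (2b) fresh lock held by someone else: deny and report the holder
    return { allowed: false, lockedBy: lockEntry.sessionUser };
}
```

The service implementation below follows exactly this shape, only with the lock entry read from the database.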


4. Implementation and code


Let's have a look at the code.

I use an XSJS service to implement this:


function EditRecord() {

    var PersonID = $.request.parameters.get('PersonID');   // the PersonID which the user inputs
    var DateParam = $.request.parameters.get('Date');      // the Date which the user inputs
    var diffTime;     // difference between the current timestamp and the timestamp in the transaction table
    var username;     // lock user name stored in the transaction table
    var conn = $.db.getConnection();
    var pstmt;
    var query = "select seconds_between(LOCK_TIMESTAMP, CURRENT_TIMESTAMP), DATE, SESSION_USER " +
                "from \"XXX\".\"LOG_FOR_LOCK\" where PERSON_ID = ?";

    pstmt = conn.prepareStatement(query);
    pstmt.setString(1, PersonID);
    var rs = pstmt.executeQuery();

    while (rs.next()) {
        diffTime = rs.getInteger(1);
        var LockTableTime = rs.getDate(2);
        username = rs.getString(3);

        // We check whether the DATE values are the same by using the getTime() function,
        // and the time-out value is set to 1200 seconds.
        // The same user is allowed to open multiple sessions; otherwise the user could be locked by himself.
        if (new Date(DateParam).getTime() - LockTableTime.getTime() === 0 &&
            diffTime < 1200 && username !== $.session.getUsername()) {
            $.response.setBody("-1");
            return -1;
        }
    }

    // If the function has not returned yet, either there is no log for the record or the log is outdated,
    // so we insert or update the log information with the current session.
    // (UPSERT ... WITH PRIMARY KEY replaces the original string-concatenated statement.)
    var query1 = "upsert \"XXX\".\"LOG_FOR_LOCK\" values(?, ?, now(), ?) with primary key";
    pstmt = conn.prepareStatement(query1);
    pstmt.setString(1, DateParam);
    pstmt.setString(2, PersonID);
    pstmt.setString(3, $.session.getUsername());
    pstmt.executeUpdate();
    conn.commit();
    conn.close();

    $.response.setBody("1");
    return 1;
}

// When the application page calls this service, it can decide whether to jump to the next page
// according to the returned value.
// I wrote this code just for test use; there is no guarantee that it is 100% secure or accurate.



5. Summary & FAQ

Absolutely, there is no best solution for every scenario.

This lock mechanism is designed only for certain kinds of situations, and it may contain some defects.

I implemented it just for a test use case; the code may contain some flaws, and it is not an officially recommended mechanism.

Different designs call for different methods.




1.     How about the orphan lock situation?

The service checks the difference between the current timestamp and the timestamp stored in the LOG_FOR_LOCK table; a lock older than the time-out is treated as released.
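In addition to the check on read, stale entries could be removed by a periodic housekeeping job. The statement below is a hypothetical sketch (table name and 1200-second time-out taken from the service above), not something the original design requires:

```sql
-- Remove lock entries older than the 1200-second time-out.
DELETE FROM "XXX"."LOG_FOR_LOCK"
 WHERE SECONDS_BETWEEN(LOCK_TIMESTAMP, CURRENT_TIMESTAMP) > 1200;
```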


2.   What is the most significant difference between this application lock and database concurrency control?

In my understanding, this application lock is meant to detect conflicts when the user clicks on the "Edit" button.

Database concurrency control, on the other hand, is a lock mechanism that takes effect when the commit action happens.


For other methods, please refer to the links below for detailed information:



2. Optimistic Lock:

Just Google it; you will find many good results.


3. HANA MVCC control:

SAP HANA Concurrency Control


Special thanks to Thomas, Neil, Yuan and Antonie for their help.


Please feel free to point out any other flaws in this design and to discuss them with me.

Thanks for reading.

Have a nice day.



I faced a problem related to the Parent-Child hierarchy in a HANA Attribute view. I searched for the solution, of course, and the only trace on the Internet that I found was an SCN thread with no replies, which suggested that the problem was not documented so far. So I started research on my own.

Very often, I found that "classical" examples with real-life-like data (such as the tables in the SAP_HANA_DEMO schema which comes along with trial cloud solutions) overwhelmed my attention with too much information and too large data sets to analyze. I was not able to really understand what was going on until I created the most simple examples, far too simple for any real-life scenario, but "naked" to the essence of my problem. Inspired by my wife's favorite chef Jamie Oliver (who called himself the "Naked Chef" because his cooking was "naked" to its essence), I created this blog with the intention of making it a series. I will try to elaborate on the problems that I had and the simplest possible test cases which enabled me to find solutions.


So let's move on.


Create Test Data

Let's create two simple tables: COMPANIES with company names and countries, and OUTGOING_INVOICES, payments each company receives, to be analyzed later. In order to create tables and fill them with data, we run this code in SQL console. Please replace my schema name and table name prefix with something that suits your environment.

-- create tables
CREATE COLUMN TABLE "HA300_160115"."D107_COMPANIES"
  (COMP_CODE varchar(10) primary key,
   COUNTRY   varchar(10),
   NAME      varchar(30) )  ;
CREATE COLUMN TABLE "HA300_160115"."D107_OUTGOING_INVOICES"
  (DOC_ID    varchar(10) primary key,
   COMP_CODE varchar(10),
   AMOUNT    decimal(15,2)    )  ;
-- fill test data
-- companies
DELETE FROM "HA300_160115"."D107_COMPANIES";
INSERT INTO "HA300_160115"."D107_COMPANIES" VALUES ( '1', 'DE',  'SAP AG' );
INSERT INTO "HA300_160115"."D107_COMPANIES" VALUES ( '2', 'DE',  'Mercedes' );
INSERT INTO "HA300_160115"."D107_COMPANIES" VALUES ( '3', 'DE',  'Bosch' );
INSERT INTO "HA300_160115"."D107_COMPANIES" VALUES ( '4', 'CH',  'Nestle' );
INSERT INTO "HA300_160115"."D107_COMPANIES" VALUES ( '5', 'SA',  'Saudi Aramco' );
INSERT INTO "HA300_160115"."D107_COMPANIES" VALUES ( '6', 'HR',  'Erpicon' );
INSERT INTO "HA300_160115"."D107_COMPANIES" VALUES ( '7', 'HR',  'INA' );
INSERT INTO "HA300_160115"."D107_COMPANIES" VALUES ( '8', 'US',  'Boeing' );
-- outgoing invoices
INSERT INTO "HA300_160115"."D107_OUTGOING_INVOICES" VALUES ( '0', '0', '2000000' );
INSERT INTO "HA300_160115"."D107_OUTGOING_INVOICES" VALUES ( '1', '1', '2000000' );
INSERT INTO "HA300_160115"."D107_OUTGOING_INVOICES" VALUES ( '2', '1', '24352' );
INSERT INTO "HA300_160115"."D107_OUTGOING_INVOICES" VALUES ( '3', '2', '6746' );
INSERT INTO "HA300_160115"."D107_OUTGOING_INVOICES" VALUES ( '4', '2', '33333' );
INSERT INTO "HA300_160115"."D107_OUTGOING_INVOICES" VALUES ( '5', '3', '2000000' );
INSERT INTO "HA300_160115"."D107_OUTGOING_INVOICES" VALUES ( '6', '3', '24352' );
INSERT INTO "HA300_160115"."D107_OUTGOING_INVOICES" VALUES ( '7', '4', '33667' );
INSERT INTO "HA300_160115"."D107_OUTGOING_INVOICES" VALUES ( '8', '4', '34324' );
INSERT INTO "HA300_160115"."D107_OUTGOING_INVOICES" VALUES ( '9', '5', '24352' );
INSERT INTO "HA300_160115"."D107_OUTGOING_INVOICES" VALUES ( '10', '6', '6746' );
INSERT INTO "HA300_160115"."D107_OUTGOING_INVOICES" VALUES ( '11', '7', '33333' );
INSERT INTO "HA300_160115"."D107_OUTGOING_INVOICES" VALUES ( '12', '8', '2000000' );
INSERT INTO "HA300_160115"."D107_OUTGOING_INVOICES" VALUES ( '13', '5', '24352' );
INSERT INTO "HA300_160115"."D107_OUTGOING_INVOICES" VALUES ( '14', '4', '33667' );
INSERT INTO "HA300_160115"."D107_OUTGOING_INVOICES" VALUES ( '15', '8', '34324' );

Attribute View with Hierarchy

Data Foundation

Now is the time for Attribute view with parent-child hierarchy. Let's call it after the table: AT_COMPANIES. The data foundation is the table COMPANIES and we will take all the fields into output:


In Semantics, the COMP_CODE is the key.

The hierarchy has only one line: COMP_CODE is a child of COUNTRY.

Don't forget to assign the NAME field as a Label Column to COMP_CODE. This is not necessary for our example in technical terms, but it will help the output to look better.

Analytic View

The Analytic View AN_COMPANIES_OUTGOING_INVOICES has just one table as Data Foundation: OUTGOING_INVOICES. The Logical Join (Star Join) uses the attribute view which we have just created: AT_COMPANIES. There are only two output columns (from Data Foundation): attribute DOC_ID and measure AMOUNT.

Testing: Excel and MDX

Let's try to consume this view in a hierarchical way. We shall use Microsoft Excel with SAP HANA MDX Provider connection. A prerequisite for this is installing SAP HANA Client on your workstation, which you can download here (if SAP removes this link, please inform me so I can update/remove it).

After starting Excel, we need to add data From Other Sources for the first run (later, provided that you save the newly created connection, you can use Existing Connections).


Now we need to choose Data Link SAP HANA MDX Provider. If you haven't installed SAP HANA Client on your PC, you won't see it and won't be able to continue from here. Supply host name (or IP), instance number and your account credentials.


Pick a package from the dropdown box and select your Analytic View. You can save the connection file (with password) so that you don't have to go through this process next time you want to run the same Analytic View.


We are almost there, just one more step:



Like in the above-mentioned SCN thread, the engine complains: "Each predecessor (except the root node) must also appear as a successor", with country names listed. Trying this with standard real-life-like tables gave me a headache. There was just too much data to analyze and play with, and I didn't want to destroy it. But with my own nano-size custom data set I could do whatever I wanted.

Problem Analysis

The key characteristic of Parent-Child hierarchy is the fact that it is Recursive.

This means that every parent can be a child at the same time and there must be no orphans, except the roots. If I say that my company code "1" (that's SAP AG) belongs to country with code "DE", and given recursive nature of the relation (every child must have a parent), it means that the company with code "DE" must exist too. Since it does not, my company code "1" is an orphan and so are all the companies. Now, semantically this is meaningless, but technically that's how Parent-Child hierarchy works.
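A quick way to see this inconsistency in the data itself is to ask the table for its orphans. The query below is purely a diagnostic I add here for illustration; it is not part of the original modeling steps:

```sql
-- List every company whose parent value (COUNTRY) does not itself
-- exist as a COMP_CODE, i.e. every orphan in the parent-child relation.
SELECT c.COMP_CODE, c.NAME, c.COUNTRY AS MISSING_PARENT
  FROM "HA300_160115"."D107_COMPANIES" c
 WHERE c.COUNTRY IS NOT NULL
   AND NOT EXISTS (SELECT 1
                     FROM "HA300_160115"."D107_COMPANIES" p
                    WHERE p.COMP_CODE = c.COUNTRY);
```

With the initial test data this returns all eight companies; once rows making the hierarchy consistent are added, it returns nothing.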

What could I do about it?

Expected Solution

I would expect that setting the hierarchy properties would fix the problem: Setting COUNTRY as a Root Node and deciding what happens with orphans - they become Root Nodes - should do the trick. Unfortunately it didn't help.

I played with other variations of these properties. If I chose Orphan nodes: Ignore, my Excel worked, but with no data in it. Naturally, all my companies belong to countries and are therefore orphans. Since all the facts (outgoing invoices) belong to them, there were no facts to analyze.

This is where I hit the wall when it comes to view adjustments. If there is a better solution to this, please share. Meanwhile, I tried another thing which made things much clearer and helped me to understand the Parent-Child hierarchy better.

Solution (or at least explanation)

Since this failed, I tried making the data set technically consistent. Let's pretend that countries are companies at the topmost level of the recursive hierarchy and have no parents (their COUNTRY field is null and therefore they should be roots). I added the following data to my COMPANIES table:

-- countries as companies
INSERT INTO "HA300_160115"."D107_COMPANIES" VALUES ( 'DE', null, 'Germany' );
INSERT INTO "HA300_160115"."D107_COMPANIES" VALUES ( 'CH', null, 'Switzerland' );
INSERT INTO "HA300_160115"."D107_COMPANIES" VALUES ( 'HR', null, 'Croatia' );
INSERT INTO "HA300_160115"."D107_COMPANIES" VALUES ( 'US', null, 'United States' );
INSERT INTO "HA300_160115"."D107_COMPANIES" VALUES ( 'SA', null, 'Kingdom of Saudi Arabia' );

And finally, after the long battle, sunshine on the horizon:

So technically we reached a satisfactory result, but not semantically, as countries are not companies. In real life, I believe this example calls for a Level Hierarchy rather than Parent-Child.

Parent-Child recursive hierarchy would be more suitable for truly recursive relations, such as company organization with departments and job positions, or folders and files structure.


I hope that my blog helped to shed more light on the Parent-Child hierarchy.


Hello World!


I started my journey into SAPUI5 four months ago at BMB Services, as an intern completing my Master programme at ESIEE Paris, and it has been, for the most part, a consuming love affair.

For me, staying focused was important. Like a baby with modern-day gadgets, you can get your head spinning over the various interesting careers that the SAP ecosystem proposes, but remaining true to my love of the web kept me steadfast and pushed me this far.


While building a sentiment analysis application based on SAP HANA using SAPUI5, I recently had to scan through a lot of blog posts to understand how things work. The file upload was not a feature I had immediately envisaged, and I wondered why I never saw it coming before my boss added this wonderful request. In truth, I was totally lost, not knowing how SAPUI5 would implement such a simple feature. Maybe I expected a very difficult task ahead. Being pressed for time and wanting to quickly implement a working feature, I had several discussions and performed documentation reads and rereads, and guess what, I was getting nowhere with all that hurry.


I must say that learning to learn is something. Through this period, I have taken a back seat to watch and understand my learning patterns. It turns out that I could spend a heck of a lot of time looking for solutions on the Internet, circling around the same thing and at the same time circling further away from the solution of the task at hand. This is crazy; the time spent can be enormous, especially relative to the accomplished task. But according to Sir Isaac Newton, "If I have seen further than others, it is by standing upon the shoulders of giants". Hence, thanks SCN.


I think, on average, I spent more time on the FileUploader than on understanding any other feature of SAPUI5. The reason is obvious, but for me it was a lesson learnt!


So here I put together snippets of code for the end-to-end flow: upload, save to the database, then retrieve from the database to display the file on your page. In my case the files were images; I am sure this can be extended to other file types. Investigate.


This post requires that you have some prerequisite knowledge of the MVC and SplitApp approaches in SAPUI5.






<u:FileUploader
    id="FileLoader"
    width="100%"
    tooltip="Upload picture of contact"
    uploadComplete="handleUploadComplete"
    change="handleValueChange"
    typeMissmatch="handleTypeMissmatch"
    style="Emphasized"
    fileType="jpeg,jpg,png"
    placeholder="Choose a file for Upload...">
</u:FileUploader>
<Button
    text="Upload File"
    press="handleUploadPress"/>





handleUploadPress : function(oEvent) {
    var fileLoader = this.getView().byId("FileLoader"); // XML view
    var fileName = fileLoader.getValue();
    if (fileName === "") {
        sap.ui.commons.MessageBox.alert("Please choose File.",
            sap.ui.commons.MessageBox.Icon.INFORMATION, "Information");
        return;
    }
    var uploadUrl = "services/PictureFileUpload.xsjs?file_name=" + fileName;
    // the DOM id is prefixed with the view name ("UpdateContact" here)
    var formEle = document.getElementById("UpdateContact--FileLoader");
    var form = $(formEle).find("form")[0];
    var fd = new FormData(form);
    // first fetch a CSRF token with a GET, then POST the form data with it
    $.ajax({
        url: uploadUrl,
        type: "GET",
        beforeSend: function(xhr) {
            xhr.setRequestHeader("X-CSRF-Token", "Fetch");
        },
        success: function(data, textStatus, XMLHttpRequest) {
            var token = XMLHttpRequest.getResponseHeader('X-CSRF-Token');
            $.ajax({
                url: uploadUrl,
                type: "POST",
                data: fd,
                processData: false,
                contentType: false,
                beforeSend: function(xhr) {
                    xhr.setRequestHeader("X-CSRF-Token", token);
                },
                success: function(data, textStatus, XMLHttpRequest) {
                    var resptext = XMLHttpRequest.responseText;
                    if (data == "Upload successful") {
                        sap.ui.commons.MessageBox.alert("File uploaded.",
                            sap.ui.commons.MessageBox.Icon.INFORMATION, "Information");
                    } else {
                        sap.ui.commons.MessageBox.alert(resptext,
                            sap.ui.commons.MessageBox.Icon.INFORMATION, "Information");
                    }
                },
                error: function(data, textStatus, XMLHttpRequest) {
                    sap.ui.commons.MessageBox.alert("File could not be uploaded.",
                        sap.ui.commons.MessageBox.Icon.ERROR, "Error");
                }
            });
        }
    });
}





Create your schema table with a column of type BLOB

$.response.contentType = "text/html";
var ID = $.request.parameters.get("ID");
var filename = $.request.parameters.get("file_name");
var conn = $.db.getConnection();
var pstmt = conn.prepareStatement("UPDATE \"SCHEMA\".\"MYTABLE\" SET IMAGE_CONTENT = ? WHERE ID = ?");
try {
    if ($.request.entities.length > 0) {
        var file_body = $.request.entities[0].body.asArrayBuffer();
        pstmt.setBlob(1, file_body);
        pstmt.setString(2, ID);
        pstmt.executeUpdate();
        conn.commit();
        $.response.setBody("[200]:Upload successful!");
    } else {
        $.response.setBody("No Entries");
    }
} finally {
    if (pstmt !== null) { pstmt.close(); }
    if (conn !== null) { conn.close(); }
}



That saves your file into the database.





<Image id="profileImg" class="primeImg"
       src="data:image/jpeg;base64,' + hexToBase64({IMAGE_CONTENT})"
       width="{/imageWidth}">
    <layoutData>
        <FlexItemData growFactor="1" />
    </layoutData>
</Image>
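The view above references a hexToBase64 helper that is not shown in the snippets. Here is a minimal sketch of such a function, assuming the BLOB column arrives as a hex string; it is my own illustration, not the original blog's code:

```javascript
// Hypothetical helper: converts a hex string (e.g. a BLOB rendered as hex)
// into a base64 string usable inside a "data:image/jpeg;base64," URL.
function hexToBase64(hexString) {
    var b64Chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    // decode pairs of hex digits into bytes
    var bytes = [];
    for (var i = 0; i < hexString.length; i += 2) {
        bytes.push(parseInt(hexString.substr(i, 2), 16));
    }
    // encode three bytes at a time into four base64 characters, padding with '='
    var result = "";
    for (var j = 0; j < bytes.length; j += 3) {
        var b1 = bytes[j], b2 = bytes[j + 1], b3 = bytes[j + 2];
        result += b64Chars[b1 >> 2];
        result += b64Chars[((b1 & 3) << 4) | (b2 === undefined ? 0 : b2 >> 4)];
        result += (b2 === undefined) ? "=" : b64Chars[((b2 & 15) << 2) | (b3 === undefined ? 0 : b3 >> 6)];
        result += (b3 === undefined) ? "=" : b64Chars[b3 & 63];
    }
    return result;
}
```

In a browser you could instead use the built-in btoa(), as the controller code below does; this standalone version just makes the conversion explicit.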




You can choose to trigger this function any way you like. In my case, the image was to be shown when the user clicked on a master table row, which navigates to open a new contact page.

Things to note

handleLineItemPress : function (evt) {
    var context = evt.getSource().getBindingContext();
    var ID = context.getObject().ID;
    var xmlHTTP = new XMLHttpRequest();
    xmlHTTP.open('GET', 'services/GetImage.xsjs?id=' + ID, true);
    xmlHTTP.responseType = 'arraybuffer';
    xmlHTTP.onload = function(e) {
        if (this.status == 200) {
            var arr = new Uint8Array(this.response);
            var raw = String.fromCharCode.apply(null, arr);
            var b64 = btoa(raw);
            var dataURL = "data:image/jpeg;base64," + b64;
            document.getElementById("ProfileEdit--profileImg").src = dataURL;
        } else if (this.status == 404) {
            document.getElementById("ProfileEdit--profileImg").src = 'img/no_user.png';
        }
    };
    xmlHTTP.send();
    // after the image is set, navigate to the detail page
    // (the navigation helper depends on your app setup)
    this.nav.to("ProfileEdit", context);
}


var schema_name = "YOUR_SCHEMA_NAME"; // adjust the schema/table in the query below to your environment
var id = $.request.parameters.get('id');
var conn = $.db.getConnection();
try {
    var query = "SELECT IMAGE_CONTENT FROM \"SYSTEM\".\"BMBFRIENDS\" WHERE ID = ?";
    var pstmt = conn.prepareStatement(query);
    pstmt.setString(1, id);
    var rs = pstmt.executeQuery();
    if (rs.next()) {
        $.response.headers.set("Content-Disposition", "attachment; filename=filename.jpg");
        $.response.contentType = 'image/jpg';
        $.response.setBody(rs.getBlob(1));
        $.response.status = $.net.http.OK;
    } else {
        $.response.contentType = 'text/html';
        $.response.setBody("Image not found");
        $.response.status = 404;
    }
} catch (e) {
    $.response.setBody("Error while downloading : " + e);
    $.response.status = 500;
} finally {
    conn.close();
}
OK, that's it! C'est tout!

I hope I have saved you some eyelashes. Be lenient, it's my first post, but feel free to go all the way with positive criticism; we all love comments, so let's have your sentiments too.


Hot off the presses, the brand new SAP HANA Developer Edition (SPS9 Revision 91) is now available for immediate consumption!


The new version boasts all the powerful updates associated with the SPS9 release of SAP HANA, as well as updated example applications, SHINE content and more!




We've also brought back the 32GB instance as well as the 64GB.



Be sure to get your instance today on either Amazon AWS or Microsoft Azure.





To get your own system please follow this link.

For all who don't know Barry Allen: he is the Flash, a superhero in the DC Comics universe. Put in context, SAP HANA is for me the Barry Allen of the ERP world: lightning fast, with capabilities that give you superpowers over a traditional database. Operations and use cases that previously had to be compromised in certain ways no longer need to settle for less, and the possibilities are endless. Speaking of which, we recently saw this in action at the ICC World Cup.




It was the 22nd of February 2015, and I was watching the Indian team batting against South Africa in their second World Cup match. The opener Shikhar Dhawan was cruising towards his first World Cup century, and the new thing (besides his new haircut) was that so many statistics were being shown for the first time: the weak spot, Dhoni's strike rate in the last 10 overs of World Cup matches, the average of the partnership when the two batsmen had played against South Africa before. It was something very new, and I wanted to know more, as it gives you a sense of predictability in times of uncertainty.

So I jumped into the STATS section of the website (shown on television). The UX was quite simple, as I could find the stats relatively easily. But to my surprise, when I entered the stats section there was no response for the first 5 minutes. I was confused for a while and thought of opening the console, as the software developer inside me wanted to know why. What I saw would have been a nightmare for the person responsible for keeping things running.

The view was something like this :





and like this:




The server was unable to handle the load, so there was a 503 response.


So I thought of pulling some more statistics, like batting average and strike rate, and the same error was shown again.




I started thinking: is this good news or bad news? How many more cricket fans like me were checking out these cool features? In the meantime the match was at a very exciting stage, with the Indian batsmen keeping the scoreboard ticking. I decided to keep going on my journey of exploration, so I fired up Postman and started checking response times, putting in the URLs which might be hitting the HANA system. That news was not very exciting either; it was taking 30 seconds to get any result from the system:





Finally I gave up and focused on the television for a while. After the first innings was over, I went back to see how things were, and everything was working as it should.


It was the load on the server that had caused the issue, as the waiting time for a query to the HANA system had come down to 5 seconds.








The good thing was that I was able to see the results of the last 40 years of World Cup matches, with stats, in a very nice UX.









The UI and UX presented are really impressive and responsive.



For the tablet version:




and for the mobile version:





To sum up this brief case study of SAP cloud insights at the ICC World Cup 2015, my final words would be that our Barry Allen (the Flash) has arrived and is ready to take on the real challenges of the world, though some lessons along the way remain in order to become better and faster.




-Ajay Nayak





Welcome to the second post in a series showing you how to use the OAuth2 authentication mechanism in HANA XS. The mechanism enables you to use APIs from providers like Google, Facebook, Twitter or others offering this authentication. Of course, you will also learn how to authenticate against the HANA Cloud Platform and an ABAP server.


In this particular post we are going to implement step by step an end to end XS application calling a Google API.


Currently the series contains following posts:

  1. OAuth2 Authentication using HANA XS – Basics (1)  (start off with this one, if you are new to OAuth)
  2. OAuth2 Authentication using HANA XS – XS OAuth client lib calling Google's API (2) (end to end description in a step by step manner)




In order to get the whole scenario running, I am using a productive HANA cloud instance with revision 91. The instance can be reached from Google's servers; this is a necessary prerequisite for Google's web server application scenario. It is not possible to use OAuth2 in this scenario with an on-premise HANA box which cannot be reached from the Internet.


We are going to perform the following steps:

  • Create a Google project and an OAuth application via Google’s developer console
  • Configure a trust store
  • Create required XS artifacts (xshttpdest, xsoauthclientconfig, xsoauthappconfig, …)
  • Create an xsjs calling a Google API


So let’s start!


Authorization Server configuration (Google)

In order to make Google API calls later on, we first have to register our client application in Google's developer console. During the registration we receive credentials which we later use for the instances of our application. This means every user has to grant access to his own data; nevertheless, all application instances share the same basic client ID and secret.


This is achieved by:

  • Go to the Google developer console
  • Log on (you need to create a Google account if you want to use their API)
  • Create a project


  • Navigate into the project, section credentials (below APIs & auth)
  • Click the ‘Create new Client ID’ button


  • Leave the default ‘Web application’ and click continue
  • Provide your email address and a description
  • In the following popup you need to configure the connection from Google to HANA XS
    • Authorized JavaScript origins: Provide the publically available fully qualified hostname of your HANA instance
      • e.g.:
      • Authorized redirect URIS:
        • Google calls this URI to prevent attacks and to hand over control back to the calling application
        • You can implement your own handler or use SAP's delivered generic handler available at
          • <YOUR HOST>/sap/hana/xs/oAuth/lib/runtime/tokenRequest.xsjs


Your Google client configuration should now look like this:

3-5-2015 4-47-15 PM.png


Later on we need the client ID and secret. The other information is already deployed in your HANA XS box for Google as an authorization provider. The developer console provides a convenient ‘Download JSON’ button to save this information to your local machine.


Enable the Google+ API

Another thing we can do right now is the enablement of the API we want to call later on. In this post we will use the Google+ API and this is the way to enable it:

  • Navigate to the API link below ‘APIs & auth’
  • Search for ‘+’
  • Enable the API via the button in the status column



HANA XS configuration

If you want to configure OAuth authentication in XS, several configuration aspects come into play. There are well-known ones such as XS HTTP destinations and XS Trust Stores, but also some new ones, namely XS OAuth configuration packages. We are going to create everything from scratch in a package called googleAPI. The only thing being reused is the configuration for calling the Google API (more on this later).


To perform all the tasks your user needs to have several roles:

  • RuntimeConfAdministrator
  • HTTPDestAdministrator
  • oAuthAdmin
  • and some standard developer roles
    • sap.hana.ide.roles::EditorDeveloper
    • sap.hana.xs.debugger::Debugger
    • sap.hana.ide.roles::CatalogDeveloper
    • sap.hana.ide.roles::SecurityAdmin
    • sap.hana.ide.roles::TraceViewer


Setting up the Trust Store

Google only allows secure calls to their APIs (via HTTPS), so we have to set up an XS HTTP destination using a Trust Store. This Trust Store holds the server certificate, signed by a certification authority. We have to retrieve this certificate and import it into a trust store.


To retrieve the certificate

(I am using Chrome under Windows 7 in this example)

  • browse to:
  • click the green lock next to the URL
  • switch to the tab ‘Connection’
  • open the ‘certificate information’




  • switch to tab ‘certification path’
  • select ‘Geo Trust Global CA’
  • click ‘view certificate’
  • switch to ‘details’
  • click ‘copy to file’


  • keep the first option 'DER encoded binary X.509 (.CER)'


  • save the file to your local machine


To import the certificate


  • Switch over to the XS Admin at: <your host>/sap/hana/xs/admin/
  • Change to the Trust Manager


  • Add a new trust store via the 'Add' link below the list of trust stores
  • Give it a name (e.g. googleAPI)
  • Switch to the tab ‘certificate list’
  • Click 'Import Certificate'


  • Select the exported certificate and import it
  • If it looks like this, you are good to continue



Basic application setup

We want to have a folder containing everything we configure and are going to do so via the web IDE:

  • open the web IDE <your host>/sap/hana/xs/ide/editor/
  • right click on ‘content’
  • select ‘Create Application’ with the following parameters
    • Template: ‘Empty application’
    • Package: ‘googleAPI’ (or something else you prefer and are willing to remember)



Disable security checks for this application

For reasons of demonstration we make things as simple as possible and switch off authentication:

  • Open the .xsaccess file in the new application and change the authentication




"authentication": [{
    "method": "Form"
}],

becomes

"authentication": null,





XS HTTP destination

We already created the trust store and imported Google's certificate. Now we link this trust store to the HTTP destination so that we end up with an HTTPS destination.


Using the Web IDE, we do this by creating a new file in our package called googleAPIdestination.xshttpdest with the following content:

host = "";
port = 443;
pathPrefix = "";
proxyType = none;
authType = none;
useSSL = true;
timeout = 0;





To connect the http destination to the trust store, we go to the XS Admin again.

  • Open the XS Admin: <your host>/sap/hana/xs/admin/
  • Navigate to your package (googleAPI) via the right arrow next to it
  • Select the .xshttpdest file and click the ‘edit’ button




  • In the tab ‘authentication details’
    • Select ‘SSL enabled’
    • TrustStore: your configured trust store (googleAPI)
    • ‘SSL authentication type’: ‘Anonymous’




  • Save the destination for the moment – we’ll come back to it later on


OAuth Configuration Package


OAuth is an open standard. Because it was in a draft version for quite some time, different providers implemented slightly different solutions. They are all standard compliant, but use e.g. different names for parameters and the like.

HANA XS provides a mechanism to meet the requirements of this fact by providing a basic framework and provider specific configurations. These configurations consist of:

  • OAuth client configuration (.xsoauthclientconfig)

here you find: client ID, authentication type, endpoints, …

  • OAuth client flavor configuration (.xsoauthclientflavor)

here you find: protocol steps, parameter names and http request location (header, body, …)


To link the metadata (config & flavor) to an actual XS application there is a final piece in the puzzle - completing an XS OAuth configuration package:

  • OAuth application configuration (.xsoauthappconfig)

          Links metadata to runtime data


SAP delivered content

SAP delivers templates for some providers (at the time of writing: Google, HANA Cloud Platform, SAP Jam, SAP ABAP). Of course it is possible to create further OAuth client configurations, and I would strongly encourage you to do so.


Create a XS OAuth client configuration package

Use the Web IDE and create a file with the suffix .xsoauthappconfig in our application package. Details will be configured via the XS admin.


Let’s name the file googleOAconfig.xsoauthappconfig and give it the minimum content required:

{
    "clientConfig" : "sap.hana.xs.oAuth.lib.providerconfig.providermodel:googleV1"
}

Now our XS application is linked to the SAP delivered Google provider model.


  • If you are curious, you might want to check the package sap.hana.xs.oAuth.lib.providerconfig.providermodel
    • It contains SAP predelivered OAuth configurations (currently HCP, Jam, ABAP, Google) and gives you an idea of how to develop own OAuth configuration packages


To provide the missing content of the OAuth configuration we switch over to the HANA XS Admin again:

  • Open the Admin UI at <your host>/sap/hana/xs/admin/
  • Navigate into our package
  • Select the HTTP destination configuration (googleAPIdestination.xshttpdest)
  • Change to the ‘OAuth details’ tab
  • Click ‘Edit’ in the lower right corner
  • Click ‘Browse OAuth App Configs’



  • In the popup: select our fresh OAuth configuration (googleOAconfig.xsoauthappconfig)
  • Save your work


Now this particular HTTP destination will use OAuth authentication. We are still using the pre-delivered default Google configuration, which is not aware of our client ID and secret (which we set up in Google's developer console). Even though it is possible, it is not a good idea to provide this information in the default configuration, as an upgrade of the HANA revision would overwrite our details. This is where the extension mechanism comes in handy:


  • In the ‘OAuth Details’ tab of the HTTP destination click: ‘Navigate to OAuth App Config’


As we want to call the Google Plus API later on, we need to tell the app configuration which scopes (authorizations) we require, and add the scope there.


  • Save the XS OAuth application configuration
  • And click ‘Navigate to Client Config’



  • You will see the default client configuration, which we are going to extend now: click ‘Extend’



  • Choose our package and give the extension a meaningful name like: googleOAClientExtension



  • Now it is time to provide our Google client credentials and a meaningful redirect URL
    • Click edit and provide the Client ID you downloaded as JSON from Google’s developer console
    • Set the redirect URL to <your host>/sap/hana/xs/oAuth/lib/runtime/tokenRequest.xsjs
    • Save
    • Now click the ‘Store client secret’ link in the lower right corner
      • Provide the client secret from Google’s developer console


That’s it for the configuration part – a no-brainer, wasn’t it?

Let’s go ahead and use the configured HTTP destination in an xsjs.



HANA XS OAuth client


Oauth client library

HANA XS SPS9 comes with an OAuth client library. It is hidden a little bit… but maybe you want to have a look into the package sap.hana.xs.oAuth.lib, and particularly at the file oAuthClient.xsjslib?


There you will find a useful HANA XS OAuth client for the OAuth scenario. I’d suggest you briefly look at the following table and use it as a reference later on, so we can use the remainder of this post to finish our end-to-end scenario with an implementation.




// Example on how to load and initialize the library:
var oAuthClientLib = $.import("sap.hana.xs.oAuth.lib", "oAuthClient");
// The constructor requires package name and file name of the configured HTTP destination
var oaC = new oAuthClientLib.OAuthClient(<folder>, <destfilename>);




Following methods are available:


Step description / Method / Comment

Check for valid access token
    oaC.hasValidAccessToken();
    Returns true if an access token is available, not yet expired, and the scope set of the application configuration is included in the set of granted scopes.

Check for valid refresh token
    oaC.hasValidRefreshToken();
    Returns true if a refresh token is available and the scope set of the application configuration is included in the set of granted scopes.

Get URI for user authorization
    String oaC.userAuthorizeUri(<redirectURL>);
    authCode flow only. Returns the URI for user authorization on the OAuth server. The call creates a state cookie which is later used within the grant-access step. The parameter redirectURL is required as the end target after acquiring the access token.

Get token(s) from code
    oaC.userGrantAccess(<code>);
    authCode flow only. Exchanges the authorization code for a token set. This call is usually executed by a generic redirect implemented at https://<server>:<port>/sap/hana/xs/oAuth/lib/runtime/tokenRequest.xsjs.

Get access token from refresh token
    oaC.userRefreshAccess();
    authCode flow only. Uses the refresh token to acquire a new access token. Returns the HTTP status code of the connection used.

Get access token from SAML assertion
    SAML flow only. Exchanges a SAML assertion for a token set.

Data processing
    response = oaC.processData(oHttpClient, request, googleDriveDest);
    Implements a connection to the target server, enriching the request with the OAuth access token.

Revoke access tokens
    oaC.userRevokeAccess();
    Connects to the revocation endpoint and revokes the available access and refresh tokens. Even if that connection fails, the tokens are removed from the local database afterwards. Returns the HTTP status code of the executed call (200 Success etc.).


Example xsjs calling the Google API


Before we go into the implementation, remember that we chose to use the Google Plus API and enabled it in Google's developer console earlier.



Where the scope comes from

Scopes define fine-granular access permissions. Depending on the API you want to use, you need a particular scope. There are several ways to discover the right scopes for the task. I like using Google’s OAuth playground, which is also a very good resource to get a feeling for this API and the OAuth authorization flow itself. Other good resources are Google’s API explorer and of course the corresponding documentation.


Show me the code!

Okay this was a lot of setup work. Stay tuned just a little longer before we finally implement and execute the application.


For this API, you need your Google ID. To get it, open your Google+ profile and copy the long number from the profile URL.



And finally: let's code!

  • Please check the comments in the code if you are asking yourself what is going on
  • Using the Web IDE, create a file called ‘callGooglePlusAPI.xsjs’
  • Paste the following code and execute it via the green arrow or F8



Please enjoy the code – you really deserve it now:


var callGoogle = function() {
    // you need to add yourGoogleID right here (get it via -> Profile -> copy the long number)
    var yourGoogleID = "101287807282311627286";
    var response;
    // the HTTP client to send requests
    var oHttpClient = new $.net.http.Client();
    // load the library handling the OAuth protocol
    var oAuthClientLib = $.import("sap.hana.xs.oAuth.lib", "oAuthClient");
    // Google + API specific endpoints
    var suffix = "/plus/v1/people/";
    suffix += yourGoogleID;
    // where we want to go
    var request = new $.net.http.Request($.net.http.GET, suffix);
    // initialize the HTTP destination and OAuth client
    var myDestObj = $.net.http.readDestination("googleAPI", "googleAPIdestination");
    var oaC = new oAuthClientLib.OAuthClient("googleAPI", "googleAPIdestination");
    // SCOPES -> configure via XS Admin in the OAuth configuration package
    // if you want to start from scratch, just set a breakpoint here and call this method
    // oaC.userRevokeAccess();
    // initialize the OAuth authorization code flow (and trace a debug message)
    // do you know what happens if you call this URL via your Browser?
    var url = oaC.userAuthorizeUri("");
    $.trace.debug("Auth URL is: " + url);
    // if you called the URL via your browser paste the authorization code response into the 'code' variable (after uncommenting of course)
    // var code;
    // this is an alternative way to get the access tokens
    // oaC.userGrantAccess(code);
    // is our access token still valid, do we need to refresh it, or didn't we receive one at all?
    var validAT = oaC.hasValidAccessToken();
    if (validAT) {
        // call the API
        response = oaC.processData(oHttpClient, request, myDestObj);
    } else {
        var validRT = oaC.hasValidRefreshToken();
        if (validRT) {
            var refreshStatus = oaC.userRefreshAccess();
            if (refreshStatus === 200) {
                // call the API
                response = oaC.processData(oHttpClient, request, myDestObj);
            }
        } else {
            // no tokens at all: tell the user how to authorize
            $.response.contentType = "application/json";
            $.response.setBody(JSON.stringify({
                error: false,
                errorDescription: "There are no tokens (access or refresh) for this application/user context available",
                solution: "Authorize yourself via the following authorization URL",
                authorizationURL: url
            }));
        }
    }
    if (response) {
        // display Google's response
        var myBody;
        if (response.body) {
            try {
                myBody = JSON.parse(response.body.asString());
            } catch (e) {
                myBody = response.body.asString();
            }
        }
        $.response.contentType = "application/json";
        $.response.status = 200;
        $.response.setBody(JSON.stringify({
            "status": response.status,
            "body": myBody
        }));
    }
};

try {
    callGoogle();
} catch (err) {
    $.response.setBody("Failed to execute action: " + err.toString());
}








The first time you execute the application you will get something like:



The reason for this is the missing access token. We trigger the authorization flow by sending the required scopes, client id & secret and some more information to Google. Google sends us an authorization code back, which we finally exchange against access and refresh tokens. The access token will be stored in the OAuth secure store and used for upcoming API calls.

  • click the provided authorizationURL


After you have approved the client request, you will see the well-known Google OAuth logon UI.


Log on to this one and grant access to the client




You will be redirected to the very same application you came from – this time showing you the JSON returned by the API call:





There are many configuration steps to carry out before you can finally use OAuth authentication in your application. I hope you successfully completed all of them, and most of all I hope you see the potential you now have at hand.

There will be some follow-up posts dealing with the usage of the other pre-delivered configuration packages, but also some going for even more advanced topics, e.g. explaining how to create own configurations, how to debug outgoing requests and the like.

I would like to thank Michael Engler and Klaus Herter for their great support of this post!

I hope you all stay tuned and are looking forward to the next posts.
















OAuth2 is finally here  - being delivered with HANA SPS 09. Great! But what is the OAuth2 hype all about?


Welcome to a series of posts showing you how to use the OAuth2 authentication mechanism in HANA XS. The mechanism enables you to use APIs from providers like Google, Facebook, Twitter or any one of the others offering this authentication. Of course you will also learn how to authenticate against the HANA Cloud Platform or an ABAP server.


In this first post we will look at the prerequisites regarding your HANA XS knowledge and the theory behind OAuth itself. The next post will show you how to use the delivered Google OAuth configuration package as a first example. Follow-up posts become more advanced, e.g. describing how to create your own configurations, how to debug outgoing requests and the like.


Currently the series contains following posts:

  1. OAuth2 Authentication using HANA XS – Basics (1) (start off with this one, if you are new to OAuth)
  2. OAuth2 authentication using HANA XS – XS OAuth client lib calling Google's API (2) (end to end description in a step by step manner)


I will try to keep things as simple as possible, without losing advanced users with a lengthy explanation of details.



This is a rather advanced topic in the area of SAP HANA XS. So if you are totally new to it (you have never heard of .xsjs, .xsapp, .xsaccess, …), here are a couple of resources to get you started:


If you already know about these, but aren’t so sure about .xshttpdest or trust stores, you may be better off with this series in the first place:


After reading the OAuth series you will also understand the XS artifacts .xsoauthclientconfig and .xsoauthclientflavor and where they belong in the big puzzle.


So let's get started with a basic introduction to OAuth.

If you already know about OAuth, I nevertheless recommend checking out the chapter 'The theory behind OAuth 2.0 in HANA XS' before advancing to the next post.


Why OAuth?

OAuth is an open standard for authorization, but why is it so popular today?

Currently there is a strong trend away from rich-client or heavyweight architectures towards distributed systems with lightweight architectures based on plain-text (e.g. JSON) web services. Representational state transfer (REST) is the most popular flavor of this architectural style. And OAuth is a perfect match for this architecture. Why?

Usually a REST web service offers its API via the Hypertext Transfer Protocol (HTTP), using its standard methods (GET, PUT, POST, …) for the actions to perform. This protocol, and thereby REST, is supported out of the box in practically all common programming languages. This ubiquitous support, easier consumption compared to SOAP/WSDL based web services, the growing number of cloud scenarios, and REST being a performant and maintainable architecture (please see REST's implementation constraints for details) further foster this trend.


REST services often are powerful and need an advanced security concept. OAuth is a natural fit, as it

  • is specifically designed to work with HTTP
  • implements role based authorizations
  • supports frequent and easy updates to user accounts
  • is a centralized service


What is OAuth?

OAuth2 specifies a protocol enabling client/web applications to act on behalf of a user after the user has explicitly granted permission. Another web application/web service (the resource server) finally carries out the actions a client is allowed to perform. The user approves the actions by telling an authorization server that he trusts the client to do what it is asking for.


The most common way for a client to present itself to a resource server is using a bearer token, which is obtained from the authorization server and stored by the client. OAuth does not specify the token itself – only the protocol for obtaining it. To finally access a resource server, the client sends a special HTTP header in the form:


     Authorization: Bearer <TOKEN_VALUE>


The token can be decoded only by the resource server (it is opaque to the client), in order to check whether client and user have permission to access the requested resource.
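In XS the OAuth client library adds this header for you, but building it by hand is trivial; a minimal sketch in plain JavaScript, with a made-up placeholder token value:

```javascript
// Build the Authorization header for a bearer token.
// "ya29.EXAMPLE_TOKEN" is a made-up placeholder, not a real Google token.
function bearerHeader(token) {
    return { "Authorization": "Bearer " + token };
}

var headers = bearerHeader("ya29.EXAMPLE_TOKEN");
// headers.Authorization is "Bearer ya29.EXAMPLE_TOKEN"
```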


This is where scopes (a key aspect of OAuth 2.0) come into play. A scope is an access range. As opposed to unlimited access - as it would be without scopes (e.g. using basic authentication) - OAuth enables an application to call services with exactly the right portion of privileges. The scopes are derived by the resource server solely by decoding the token. Other information can be encoded in the token as well. The most important values (besides the scopes) are client ID, target resource ID and the current user ID.


Prominent examples of OAuth 2.0 authorization servers are Google, Facebook and Twitter. All of them also provide resource servers (respectively Google APIs, graph API, streaming API). You will find plenty more providers at e.g. Wikipedia. Wikipedia also serves a good explanation of OAuth.



A real life example

Let's look at a real life example.

The user accesses an application in the browser. This application runs on HANA XS and needs information from another application server, e.g. SAP JAM. To access the data, JAM allows the application to authenticate its users by OAuth. For this purpose, JAM acts as authorization server and issues access tokens for the user. These are consumed by the resource server of JAM to provide access to the required services.


As a prerequisite to accessing resources, the first check determines whether an OAuth token already exists for this user. This is not done directly by the application, but by the OAuth 2.0 client functionality of HANA XS. Assuming no token is available, the OAuth client initiates the token flow to the OAuth authorization server. The access token request contains the requested scopes and information authenticating both the calling system and the end user. The resulting access token response contains the access token and additional information regarding token lifetime and granted scopes. This data is persisted for future re-use in the token storage. Now we have a token. Hence, in a subsequent call the access token is retrieved from the token storage and added to the HTTP request, enabling direct access to the resource.



The theory behind OAuth 2.0 in HANA XS

As we now know about OAuth and HANA XS, let’s see how they come together. Before continuing with a concrete example in the next post, we'll look at the theory (which you may skip, if you want to).


To connect to a remote service, an OAuth HANA XS client application uses the destination service. In analogy to the destination service in NW AS JAVA or SM59 in NW AS ABAP, it contains all configuration information required for the remote connectivity, like the target URL.

The application uses the HTTP client which facilitates the HTTP protocol. This client is initialized with an HTTP destination. If the latter is configured to use OAuth, the OAuth client is initialized for the target system. To send the HTTP request to the resource server (e.g. JAM, Google), control is passed over to the OAuth client.


The OAuth client first checks whether an adequate access token already exists for the current combination of user and OAuth client ID, and whether this token covers the scopes desired by the application. If no token is found, the access token request is prepared and sent using a new HTTP client instance (to separate the cookie containers from the end user's client). It contains the client and user credentials, plus the requested scopes. The list of requested scopes is the collection of all scopes declared in the application configuration for this OAuth client ID. The access token response is then evaluated by extracting the access token and persisting it to the token storage, together with its validity and the list of granted scopes. When finally calling the resource server, the HTTP request is enriched with the valid access token.
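The "token covers the desired scopes" check described above boils down to a subset test; a minimal sketch in plain JavaScript (an illustration, not the XS client itself; the scope names are hypothetical):

```javascript
// Returns true if every requested scope is contained in the granted scopes,
// mirroring the check an OAuth client performs before re-using a stored token.
function scopesCovered(requestedScopes, grantedScopes) {
    return requestedScopes.every(function (scope) {
        return grantedScopes.indexOf(scope) !== -1;
    });
}

// Examples with hypothetical scope names:
scopesCovered(["plus.me"], ["plus.me", "drive.readonly"]); // true
scopesCovered(["drive"], ["plus.me"]);                     // false
```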


Obtaining client and current user's credentials or what about flows?

In HANA XS the client credentials are maintained during the configuration and persisted in the secure store. Depending on the OAuth flow, there are different ways to obtain the user's credentials.


The most important flow is the authorization code flow. It’s being used by most ID providers and we are going to look at an example in the next post. To receive an access token using this flow the following steps have to be carried out:


  1. create a client application at the authorization provider and remember the presented credentials
  2. obtain an authorization code from the authorization server using these credentials and by granting permissions in the user consent
  3. exchange this authorization code for an access token and a refresh token
  4. use the access token to make API calls as long as it is valid
  5. get a new access token via the refresh token when the lifetime of the access token is over
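The decision logic behind steps 4 and 5 can be sketched in plain JavaScript (a simulation with a stubbed client object, not the actual XS library; in XS the corresponding calls are oaC.hasValidAccessToken() and oaC.hasValidRefreshToken()):

```javascript
// Decide what the client has to do next: use the valid access token,
// refresh it, or fall back to the full authorization code flow.
function nextAction(client) {
    if (client.hasValidAccessToken()) {
        return "call API";
    }
    if (client.hasValidRefreshToken()) {
        return "refresh access token, then call API";
    }
    return "run authorization code flow";
}

// Example: the access token expired, but a refresh token is still present
var stub = {
    hasValidAccessToken: function () { return false; },
    hasValidRefreshToken: function () { return true; }
};
nextAction(stub); // "refresh access token, then call API"
```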


Another important flow, used by e.g. JAM, is the SAML flow. This flow involves the creation of a SAML assertion for a user. The OAuth client initializes the SAML issuer for JAM, which includes the JAM-specific SAML configuration. The SAML issuer then creates a SAML assertion for the current user. This assertion is added to the access token request.


Other OAuth flows, such as Implicit flow, Resource Owner Password Credentials or Client Credentials are currently not supported.




I hope this leaves you properly equipped for the upcoming practical part, and that you can start implementing straight away.




Next post



Find below SAP HANA SQL Script to get Current Quarter Beginning Date, Last Quarter Beginning Date, and Next Quarter Beginning Date.

Suppose that, based on some date column, you need to do a manipulation or calculation involving the Current Quarter Beginning Date, Last Quarter Beginning Date, or Next Quarter Beginning Date. In the example below, replace CURRENT_DATE with the date field of your actual table and DUMMY with your actual table according to your requirements; for this example I used DUMMY and CURRENT_DATE.
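To make the intended result concrete, here is the same quarter arithmetic in plain JavaScript (an illustration of the logic only, not the HANA SQL itself):

```javascript
// Returns the ISO date (YYYY-MM-DD) of the first day of the quarter that is
// `offset` quarters away from the given date (0 = current, -1 = last, 1 = next).
function quarterBeginningDate(date, offset) {
    var q = Math.floor(date.getUTCMonth() / 3) + offset;
    var year = date.getUTCFullYear() + Math.floor(q / 4);
    var month = ((q % 4) + 4) % 4;          // quarter index within the year
    return new Date(Date.UTC(year, month * 3, 1)).toISOString().slice(0, 10);
}

var d = new Date(Date.UTC(2015, 1, 9));    // 2015-02-09
quarterBeginningDate(d, 0);  // "2015-01-01" (current quarter)
quarterBeginningDate(d, -1); // "2014-10-01" (last quarter)
quarterBeginningDate(d, 1);  // "2015-04-01" (next quarter)
```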











            END AS Quarter_Begining_Date












            END AS Last_Quarter_Begining_Date












            END AS Next_Quarter_Begining_Date










  • Series Data Column Table "STOCKS_DATA_SERIES" has to be present in Schema “SERIES_DATA".



[The CSV file (STOCKS_DATA_SERIES.csv) is attached to this post. Using flat file import, create the table "STOCKS_DATA_SERIES" in schema "SERIES_DATA" in your landscape.]


Exercise 1:  Create & Compare the Series table with Column table



Examples / Screenshot

Step 1:


Creating Regular column Table and

Series Data Table.

Create column Table "SERIES_DATA".MyTab

(key int, ts timestamp, value int);


Create column Table "SERIES_DATA".MyTabSeries
(key int, ts timestamp, value int)
Series (series key(key)
period for series(ts,null)
equidistant increment by interval 60 second);

Step 2:


Inserting data into the regular column table and the series data table using the SERIES_GENERATE_TIMESTAMP function:



Insert into "SERIES_DATA"."MYTAB" select 1, GENERATED_PERIOD_START, 7 from SERIES_GENERATE_TIMESTAMP ('INTERVAL 60 SECOND', '2010-01-01', '2011-01-01', null, null, null);


Insert into "SERIES_DATA"."MYTABSERIES" select 1, GENERATED_PERIOD_START, 7 from SERIES_GENERATE_TIMESTAMP ('INTERVAL 60 SECOND', '2010-01-01', '2011-01-01', null, null, null);


Note: the number of rows in both tables should be 525,600.
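The expected row count is simply the number of 60-second intervals in the year 2010; a quick cross-check in JavaScript:

```javascript
// One row per 60-second interval between 2010-01-01 and 2011-01-01 (exclusive).
var msPerMinute = 60 * 1000;
var rows = (Date.UTC(2011, 0, 1) - Date.UTC(2010, 0, 1)) / msPerMinute;
// rows is 525600 (365 days * 24 hours * 60 minutes)
```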

Step 3:


Execute Merge Delta Operations for both the tables.



merge delta of "SERIES_DATA"."MYTAB";
update "SERIES_DATA"."MYTAB" with parameters ('optimize_compression' = 'force');

merge delta of "SERIES_DATA"."MYTABSERIES";
update "SERIES_DATA"."MYTABSERIES" with parameters ('optimize_compression' = 'force');

Step 4:


Verifying Series Table Storage and Comparing Memory size and Compression with Regular Column table.


Select table_name, column_name, memory_size_in_total,
sum(memory_size_in_total) over (partition by table_name) as tab_memory_size,
uncompressed_size,
sum(uncompressed_size) over (partition by table_name) as tab_uncompressed_size,
compression_ratio_in_percentage as ratio, compression_type, "COUNT", distinct_count
from m_cs_columns where table_name in ('MYTAB', 'MYTABSERIES');




Verify both the normal column table and the series table:

Normal column table --> TS column memory size --> 5 MB

Series table --> TS column memory size --> 10 KB

Now you can see how efficiently a series table stores time values.



Exercise 2:  Series Table Built-in and Analytic Functions:



Examples / Screenshot

Step 1:


Check the data Preview of Series Data Table "STOCKS_DATA_SERIES"




The table (“SERIES_DATA”.”STOCKS_DATA_SERIES”) contains stock market data with the columns TICKER_ID, TICKER_DESCRIPTION, DATE, OPEN, HIGH, LOW, CLOSE, VOLUME, ADJ_CLOSE and DAILY_PERCENT_CHANGE, covering all business days from 1959 to 2015 for the Deutscher Aktienindex (DAX).

The table contains 13,895 rows in total.




Step 2:


Exploring Series Data Built-in Functions:




SERIES_DISAGGREGATE (Built-in Function):


Transforming an equidistant time series with a coarser delta to one with a finer delta can be

performed using the SERIES_DISAGGREGATE function.


SQL Query:


We have data on a daily basis. We are going to disaggregate it from daily to hourly.



select * from



s.close * g.FRACTION_OF_SOURCE_PERIOD AS "By Hour Close"




'INTERVAL 1 HOUR', '2015-01-19', '2015-01-20') g



where DATE = '2015-01-19';






Step 3:


Exploring Series Data Built-in Functions:



SERIES_ROUND (Built-in Function):


Horizontal aggregation transforms an equidistant series with a narrower interval into a new series with a coarser interval. This functionality is performed using the SERIES_ROUND function.


SQL Query:


We have data on a daily basis. We are going to aggregate it from daily to monthly.


Select rounded.TICKER_ID, Month, Avg(CLOSE) as Monthly_avg




t.TICKER_ID,





As rounded

Group By rounded.TICKER_ID, Month
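The intent of the query above, rounding each timestamp down to its month and averaging the close values per bucket, can be sketched in plain JavaScript (an illustration of the logic, not the SERIES_ROUND function itself):

```javascript
// Groups rows by calendar month (the "rounding" step) and averages CLOSE,
// mirroring SERIES_ROUND plus AVG in the SQL above.
function monthlyAverages(rows) {
    var buckets = {};
    rows.forEach(function (row) {
        var month = row.ts.toISOString().slice(0, 7); // e.g. "2015-01"
        (buckets[month] = buckets[month] || []).push(row.close);
    });
    return Object.keys(buckets).sort().map(function (month) {
        var values = buckets[month];
        var sum = values.reduce(function (a, b) { return a + b; }, 0);
        return { month: month, avg: sum / values.length };
    });
}

monthlyAverages([
    { ts: new Date(Date.UTC(2015, 0, 19)), close: 100 },
    { ts: new Date(Date.UTC(2015, 0, 20)), close: 110 },
    { ts: new Date(Date.UTC(2015, 1, 2)),  close: 90 }
]);
// [{ month: "2015-01", avg: 105 }, { month: "2015-02", avg: 90 }]
```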





You have completed the exercise!

You are now able to:

1)  Create a time series table.

2)  Understand the storage of a series data table.

3)  Use the series data built-in functions.


I chose to model fire propagation across the suburban landscape. Obviously, since I’m not a subject matter expert in that, I didn’t try to make the model actually useful for real world predictions. Rather my aim was to pick a realistically looking aspect of fire propagation and use it to showcase spatial analysis capabilities of HANA.


So, leaving aside all other factors of fire propagation, let’s assume that in an event of fire the wind may carry away burning particles, which, when falling to the ground, may start new fires.


Spatial data used

I used three sources of data, all available for free from Australian Bureau of Statistics:

  • boundaries of postcodes in New South Wales,
  • boundaries of Australian Statistical Geography Standard (ASGS) mesh blocks,
  • census data associated with ASGS mesh blocks: population, number of dwellings, type of land use.


I imported all that data into HANA using the IMPORT statement and SRS 1000004326, which allows measuring distances – more about that in the next section.


Flat Earth, Round Earth

The Earth has an ellipsoidal shape, hence one geographical degree corresponds to different distances at the equator and in Sydney. That makes degrees difficult to use for measuring distances. HANA provides a few pre-set Spatial Reference Systems (SRS), falling into either a “flat Earth” or a “round Earth” category. Measuring distances only works in the “flat Earth” ones.


In a fully-fledged GIS system, there would be some sort of transformation available to convert a shape from linear to polar units and back. HANA lacks that functionality: the ST_TRANSFORM() function serves another purpose (and only exists in SPS10), and the ST_TRANSLATE function has only just been added to the feature list. One is left with the option to either implement Vincenty’s formulae, or assume that 1 polar degree has the same value within a local map. For example,

SELECT new st_point(151, -31, 1000004326).st_distance(new st_point(151.001, -31, 1000004326), 'meter') as v_dx
FROM dummy

should give a metric value of the longitude polar unit in the near proximity of the point -31 lat 151 long.
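Assuming a spherical Earth of radius 6371 km, a haversine cross-check of that number lands at roughly 95 meters (a sketch for sanity-checking, not what HANA does internally):

```javascript
// Great-circle distance via the haversine formula (spherical Earth assumption).
function haversineMeters(lat1, lon1, lat2, lon2) {
    var R = 6371000; // mean Earth radius in meters
    var toRad = Math.PI / 180;
    var dLat = (lat2 - lat1) * toRad;
    var dLon = (lon2 - lon1) * toRad;
    var a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
            Math.cos(lat1 * toRad) * Math.cos(lat2 * toRad) *
            Math.sin(dLon / 2) * Math.sin(dLon / 2);
    return 2 * R * Math.asin(Math.sqrt(a));
}

// 0.001 degree of longitude near -31 lat, 151 long: roughly 95 meters
haversineMeters(-31, 151, -31, 151.001);
```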

My understanding of the example above is that behind the scenes, HANA converts geographic degrees into some internal “flat Earth” coordinates (of SRS 1000004326) and calculates the distance. The values of those internal coordinates are not made available to the user.

Using one of these options, one can use HANA math functions to build a shape in a linear system, translate it to geographic degrees and save it to the database, or just pass it back to the SAPUI5 application as GeoJSON.


I picked Google Maps API, as it seemed to have the easiest learning curve for me. Other choices would be probably Nokia Here maps and SAP Visual Business. All these APIs can be integrated into SAPUI5, and SAP Visual Business, in fact, has its own controls in SAPUI5 framework.


To make the application portable between map providers, I moved all calculation and measuring logic to database procedures and used the Google Maps API only for geospatial data visualisation and for user input capture.


Google Maps visualises javascript objects of class google.maps.Data.Feature, where a Feature has the ID, some properties and, lastly, its geometry represented as GeoJSON. Conversion of HANA internally stored spatial objects into GeoJSON has to happen in database procedures, and Features can be built either on the server (XSJS) or on the client (SAPUI5) side.


For my app, I implemented visualization of the postcode containing a clicked point, attached to the Click event on the map:



Simple spatial functions

I built a simple tool to measure distances on the map. The google.maps API captures two consecutive Click events, then HANA uses ST_DISTANCE() spatial function to measure the distance between those two points.


Similarly, using distance measurement functions within an arctangent expression, I calculated the simulated wind direction from two consecutive Click events.
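One standard way to express that arctangent computation is the forward-azimuth formula; a sketch in plain JavaScript (my own formulation, not the SQL used in the app):

```javascript
// Bearing from point 1 to point 2 in degrees: 0 = north, 90 = east.
function bearingDegrees(lat1, lon1, lat2, lon2) {
    var toRad = Math.PI / 180;
    var dLon = (lon2 - lon1) * toRad;
    var y = Math.sin(dLon) * Math.cos(lat2 * toRad);
    var x = Math.cos(lat1 * toRad) * Math.sin(lat2 * toRad) -
            Math.sin(lat1 * toRad) * Math.cos(lat2 * toRad) * Math.cos(dLon);
    return (Math.atan2(y, x) * 180 / Math.PI + 360) % 360;
}

bearingDegrees(0, 0, 1, 0); // 0  (due north)
bearingDegrees(0, 0, 0, 1); // 90 (due east)
```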



Modelling smoke dispersal

For a particle to be carried X units away from the fire along the wind direction, there is some probability associated with that event. That probability has a normal distribution along X. Then, there is some probability for a particle to diffuse Y units across the wind direction; that probability also has a normal distribution, but obviously with different parameters. In fact, there are equations describing that along/across dispersion.
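The idea can be sketched as follows (all sigma and distance values are made up for illustration; a real plume model would derive them from atmospheric conditions):

```javascript
// Relative concentration at (x, y): x meters downwind, y meters across wind,
// assuming independent normal distributions along and across the wind.
// muX, sigmaX, sigmaY are made-up illustration parameters.
function relativeConcentration(x, y, muX, sigmaX, sigmaY) {
    var along = Math.exp(-Math.pow(x - muX, 2) / (2 * sigmaX * sigmaX));
    var across = Math.exp(-Math.pow(y, 2) / (2 * sigmaY * sigmaY));
    return along * across; // 1.0 at the plume center, falling off outward
}

var atCenter = relativeConcentration(500, 0, 500, 200, 50); // plume center
var offAxis = relativeConcentration(500, 100, 500, 200, 50); // 100 m across wind, lower
```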


For the purpose of further work, I changed those equations to outline a patch on the surface, that encompasses all probabilities below some threshold (say, 1-2 sigmas) and created a HANA stored procedure to calculate an ellipsoid-like shape of that patch.


In this scenario, an area inside the red patch is under significant risk of fire (blue arrow indicates the wind direction).



Spatial join

I wanted to find out a) what ASGS mesh blocks would fall into the risk area, and get some statistics about them, and b) to what postcodes those mesh blocks belong. Both tasks require joining tables not by the usual alpha/numeric fields, but using a spatial relationship between the shapes in the records. Examples would be “A is within X meters from B”, “A overlaps/touches B”, “A is covered by B”, etc.


A simple way to use spatial joins is in a Calculation View, and there is an SAP tutorial for that. An SQLScript example of an “intersection join” would look like this:

SELECT t1.mb_code11 as mesh_code,
       t2.POA_2006 as postcode
from "SPATIAL"."T_SHP_MESH" as t1
inner join "SPATIAL"."T_SHP_POSTAL" as t2
    on t1.shape.st_intersects(t2.shape) = 1


Here, I added a SAPUI5 Table that interactively selects the row with the data about the mesh block the user clicks on:


Performance considerations

HANA is undeniably fast, but my experience with spatial analysis in HANA so far indicates that there is some amount of optimisation to be done. I may be pushing HANA too far with the amount of data I loaded, but since it’s just one Australian state I doubt that.


So, performance degrades dramatically with increased complexity of the spatial features being analysed, both in terms of the number of features and the number of vertices per feature. One should be careful, for example, with ST_BUFFER(), as it produces a rather finely shaped circle polygon whose number of vertices can totally choke the database. It would be good of SAP to provide functionality to reduce the number of vertices in a shape; I remember having that in ESRI's Arc/INFO.
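Until such functionality exists, vertices could be thinned before loading; a naive sketch (keeping every n-th vertex; a proper implementation would use a tolerance-based algorithm like Douglas-Peucker):

```javascript
// Naive polygon ring simplification: keep every n-th vertex, but always
// keep the closing vertex so the ring stays closed.
// Vertices are [x, y] arrays; the first and last vertex of a ring are equal.
function thinRing(ring, n) {
    var out = [];
    for (var i = 0; i < ring.length - 1; i++) {
        if (i % n === 0) {
            out.push(ring[i]);
        }
    }
    out.push(ring[ring.length - 1]); // closing vertex
    return out;
}

var ring = [[0,0],[1,0],[2,0],[2,1],[2,2],[1,2],[0,2],[0,1],[0,0]];
var thinned = thinRing(ring, 2); // keeps indices 0, 2, 4, 6 plus the closing vertex
```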


Another idea that proved useful was to build a “spatial index” of the loaded shapes, for example by creating a bounding rectangle for each shape:

insert into "SPATIAL"."T_SHP_MESH_INDEX"
( select mb_code11, mb_cat11,
         new st_polygon('Polygon ((' ||
               shape.st_xmin() || ' ' || shape.st_ymin() || ', ' ||
               shape.st_xmin() || ' ' || shape.st_ymax() || ', ' ||
               shape.st_xmax() || ' ' || shape.st_ymax() || ', ' ||
               shape.st_xmax() || ' ' || shape.st_ymin() || ', ' ||
               shape.st_xmin() || ' ' || shape.st_ymin() || '))')
               as shape
  from "SPATIAL"."T_SHP_MESH")

A rough and slightly redundant selection of features can be made using that “spatial index”, and the fine selection with the real shapes is then performed only on that subset. In my case, this trick reduced selection time from eternity to a few seconds.
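The rough-then-fine idea itself is generic; here is a sketch outside the database, with bounding boxes as [xmin, ymin, xmax, ymax] arrays and the fine ST_INTERSECTS() pass on real shapes left out:

```javascript
// Phase 1: cheap bounding-box overlap test over the "spatial index".
function bboxOverlap(a, b) { // boxes as [xmin, ymin, xmax, ymax]
    return a[0] <= b[2] && b[0] <= a[2] && a[1] <= b[3] && b[1] <= a[3];
}

// Collect candidate features; the expensive fine check with the real
// geometries (e.g. ST_INTERSECTS in the database) then runs only on these.
function candidates(queryBox, indexEntries) {
    return indexEntries.filter(function(entry) {
        return bboxOverlap(queryBox, entry.box);
    });
}

var idx = [
    { id: "MB1", box: [0, 0, 10, 10] },
    { id: "MB2", box: [50, 50, 60, 60] }
];
var hits = candidates([5, 5, 20, 20], idx); // only MB1 overlaps the query box
```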


Possible extension

This model has some potential for further extension for predictive analysis.


The ASGS mesh blocks already provide land use type and population density, which may give a clue on how fire-prone a mesh block is: say, a high-rise block is less prone to catch a fire than a bushland piece. Further, some historical data on detected fires, with their coordinates, time of detection, wind parameters etc. could be used to derive spatial relationships (clustering? distances?) between historical fires and build a more thorough predictive model.

In SPS 09 SAP introduced the unit test framework XSUnit, which is based on the open-source framework Jasmine. With this tool, server-side JavaScript unit tests can be created for XS applications. In addition, the mocking framework Mockstar was introduced, which makes it possible to mock database objects or to substitute tables/views with stubs. Together, the two tools make it possible to write unit tests for complex database objects like attribute views, analytic views, calculation views and procedures.

This blog guides you through the initial difficulties of drafting your first unit tests. Furthermore, it examines advanced aspects like measuring code coverage and debugging unit tests.


All relevant test tools/frameworks have been bundled by SAP in the delivery unit HANA_TEST_TOOLS, which is not deployed automatically but is available as non-automatic content. For us it worked to download HANATESTTOOLS.tgz and deploy it using HANA Studio (the REPO.IMPORT privilege is required for this activity). According to the SAP development documentation it is also possible to deploy the delivery unit via Application Lifecycle Management.


Note: After installation the test tools are available in package sap.hana.testtools. In HANA Studio the package content can be inspected in the Repository Browser or the Project Explorer. The Systems view, in contrast, treats them as “hidden objects”, since it only displays objects of type Analytic View, Attribute View, Calculation View, Analytic Privileges, Decision Tables, and Procedures.


The following privileges and roles are required to create and run unit tests:

  • User must be a standard user (since stub tables are always created within the user's schema).
  • Select permission must be granted on the tables/views under test (in the schema where the original tables/views reside).
  • (optional) For viewing and updating unit tests the following privileges must be granted on the package where the unit tests reside:


    Additionally, the execute privilege on SYS.REPOSITORY_REST must be granted to access the HANA repository.
  • (optional) Roles sap.hana.xs.ide.roles::EditorDeveloper and sap.hana.xs.debugger::Debugger are required to use the web-based development workbench and to debug unit tests at runtime.
  • (optional) Role sap.hana.testtools.common::TestExecute is required for using the code coverage facility. (Even though this role is designed for another purpose, it is the only role which grants SELECT on the SAP_HANA_TEST schema.)


Writing Unit Tests

Unit tests are created as XS JavaScript Library files (suffix .xsjslib) in HANA Studio. The basic syntax is explained in detail on the Jasmine project page. The simplest example:


/*global jasmine, describe, beforeOnce, it, expect*/
describe("test suite", function() {
       beforeOnce(function() {
              // called before running the test suite
       });

       it("unit test", function() {
              // one unit test; assertions are made with expect()
       });
});

describe() implements a test suite and it() implements one unit test within that suite. The comment in the first line is specific to HANA: it must be added to each unit test file and tells HANA Studio to consider the listed functions as available. expect() brings along many matcher functions to implement the unit test assertions.


A great variety of unit test examples is provided with the test tools demos, which can be found in package sap.hana.testtools.demos. This blog therefore sticks to a pretty simple unit test example. The object under test is an attribute view (AT_EMPLOYEE) which joins the tables EMPLOYEE and DEPARTMENT with an outer join:


Note: Both the attribute view and the unit test file must be located in a package which belongs to an XS application.


When it comes to tests on views, it's necessary to manipulate the content of the referenced tables in order to compare the view's results with expected values afterwards. Certainly, that contradicts the idea of unit tests, which are supposed to test objects in isolation. To solve that issue, the Mockstar framework can be used to create a temporary copy of the view under test and replace the originally referenced tables with stub tables. These stub tables can be manipulated without worries. The following code would be used to isolate AT_EMPLOYEE:


var testEnv = mockstarEnvironment.defineAndCreate({
    targetPackage : 'UnitTesting.temp.' + mockstarEnvironment.userSchema,
    schema : 'UNIT_TESTING', // original schema
    model : { // object under test
        schema : '_SYS_BIC',
        name : 'UnitTesting/AT_EMPLOYEE'
    },
    substituteTables : { // tables to be substituted
        "empl" : 'EMPLOYEE',
        "dept" : 'DEPARTMENT'
    }
});
It's common practice (within SAP's demo unit tests) to do that isolation within the beforeOnce() function. When the unit test runs for the first time, the object under test is copied into the specified target package. Its source tables are substituted by identical copies of the original tables, which are located in the schema of the user who ran the test.


Note: It's not possible to configure the target schema. The substitute tables will always be created in the schema of the user who runs the test. If the original tables are located in that same schema, the unit test execution will cause the original tables to be deleted!

Note: Objects under test, in our example the attribute view, are taken from schema “<definition.schema>_SYS_BIC”, since that schema contains all activated objects.


A first, simple unit test could be one which truncates the employee table and checks that the view runs and returns an empty result set:


// truncate the employee table (substitute)
testEnv.clearTestTables([ 'empl' ]);

// check if the view's result set is empty
var sqlExecutor = new SqlExecutor(jasmine.dbConnection);
var actual = sqlExecutor.execQuery('select * from ' + testEnv.getTestModelName());
expect(actual).toMatchData({}, [ "ID" ]);



Complete Unit Test Example

After discussing all aspects of creating unit tests in the previous chapter, I would like to show a complete unit test example. Let's extend the previous example and create a second calculation view (CA_DEPARTMENT_STATS) which simply aggregates the results of the first calculation view (CA_EMPLOYEE; for this example I re-built AT_EMPLOYEE as a calculation view with identical functionality):




That view counts the number of employees per department. Two simple unit tests which prove the view’s functionality would be:

  1. Is the number of employees per department calculated correctly?
  2. Are employees without an assignment to a department counted correctly as well?


The following unit test suite implements both unit tests for view CA_DEPARTMENT_STATS:


/*global jasmine, describe, beforeOnce, beforeEach, it, xit, expect*/
var SqlExecutor = $.import("sap.hana.testtools.unit.util", "sqlExecutor").SqlExecutor;
var mockstarEnvironment = $.import("sap.hana.testtools.mockstar", "mockstarEnvironment");

/**
 * Test suite for testing CA_DEPARTMENT_STATS calculation view
 */
describe("CA_DEPARTMENT_STATS", function() {
    var sqlExecutor = null;
    var testEnv = null;

    /**
     * Setup unit test environment
     */
    beforeOnce(function() {
        testEnv = mockstarEnvironment.defineAndCreate({
            targetPackage : "UnitTesting.temp." + mockstarEnvironment.userSchema,
            schema : "_SYS_BIC",
            model : { // object under test
                name : "UnitTesting/CA_DEPARTMENT_STATS"
            },
            substituteViews : { // views to be substituted
                "empl" : "UnitTesting/CA_EMPLOYEE"
            }
        });
    });

    /**
     * Before each unit test: Reset test data in stubs
     */
    beforeEach(function() {
        sqlExecutor = new SqlExecutor(jasmine.dbConnection);
        testEnv.fillTestTable("empl", [{
            ID : 1,
            NAME : "John Doe",
            DEPARTMENT : "IT"
        }, {
            ID : 2,
            NAME : "Jane Doe",
            DEPARTMENT : "IT"
        }, {
            ID : 3,
            NAME : "Peter Carrot",
            DEPARTMENT : "Accounting"
        }]);
    });

    /**
     * Check if aggregation by department works
     */
    it("should return correct number of employees per department", function() {
        var expected = [{
            DEPARTMENT : "IT",
            NUM_EMPLOYEES : 2
        }, {
            DEPARTMENT : "Accounting",
            NUM_EMPLOYEES : 1
        }];
        var actual = sqlExecutor.execQuery("select * from " + testEnv.getTestModelName());
        expect(actual).toMatchData(expected, [ "DEPARTMENT" ]);
    });

    /**
     * Check if employees without assignment to a department are shown as well
     */
    it("should aggregate employees without assignment to a department", function() {
        testEnv.fillTestTable("empl", [{
            ID : 4,
            NAME : "John Dolittle"
        }, {
            ID : 5,
            NAME : "Jane Auster"
        }]);
        var expected = [{
            DEPARTMENT : null,
            NUM_EMPLOYEES : 2
        }];
        var actual = sqlExecutor.execQuery("select * from " + testEnv.getTestModelName() + " where department is null");
        expect(actual).toMatchData(expected, [ "DEPARTMENT" ]);
    });
});

Running Unit Tests

Because XS applications run on the HANA server, unit tests can't be started from within HANA Studio. Before a unit test can be run it must be activated; it is then run on the HANA server using HANA's web frontend, which in a default setup is accessible at http://<hostname>:80<instance>.


On the one hand, the test runner tool on HANA's web frontend can be used. It is contained in the HANA test tools and accepts parameters which define the target unit tests: it searches <package> for unit tests with names matching <pattern>:


On the other hand, unit tests can be run from HANA’s Web-based Development Workbench:


Note: Microsoft Internet Explorer 10+, Mozilla Firefox, and Google Chrome are supported.

Test Data

The XSUnit framework brings along two mechanisms to populate substitute tables with test data. First, the test tables can be populated with single test records. With regard to the unit test example above, the following snippet demonstrates how to insert a single record:

testEnv.fillTestTable('empl', {
       ID : 123,
       NAME : 'John Doe',
       DEPARTMENT_ID : 1
});

On the other hand, substitute tables can be populated from CSV files which are available in the HANA repository. For this, the CSV properties and the source package must be defined within the Mockstar environment definition; the table load itself is done by a separate command which can be placed anywhere in the unit test suite:

testEnv = mockstarEnvironment.defineAndCreate({ // only a supplement to the definition above!
     csvPackage : "UnitTesting.object",
     csvProperties: {
         separator: ";",
         headers: true,
         decSeparator: ","
     }
});

testEnv.fillTestTableFromCsv("empl", "employee.csv");
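To illustrate what those csvProperties mean, here is a sketch of how one data line would be interpreted under separator ";", headers and decimal separator "," (the actual parsing is done by Mockstar, not by your test code; the column names are made up):

```javascript
// Interpret one CSV data line according to separator ";" and decSeparator ","
// (headers: true means the first file line holds the column names).
function parseCsvLine(headerLine, dataLine) {
    var names = headerLine.split(";");
    var values = dataLine.split(";");
    var record = {};
    for (var i = 0; i < names.length; i++) {
        // turn "2500,50" into the number 2500.5; leave non-numbers as strings
        var v = values[i].replace(",", ".");
        record[names[i]] = (isNaN(v) || v === "") ? values[i] : parseFloat(v);
    }
    return record;
}

var rec = parseCsvLine("ID;NAME;SALARY", "1;John Doe;2500,50");
```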

Expected values can be matched against the actual view output as illustrated by the following example:

var expected = {
     ID : [ 123 ],
     NAME : [ 'John Doe' ]
};
var actual = sqlExecutor.execQuery('select * from ' + testEnv.getTestModelName());
expect(actual).toMatchData(expected, [ "ID" ]);

Truncating substitute tables works for a single table as well as for all dependent tables:

testEnv.clearTestTables([ 'dept' ]);

Code Coverage

Since SPS 09 the open-source code coverage library BlanketJS has been integrated into the HANA Test Tools. It measures to which extent JavaScript code is covered by unit tests. Unfortunately, this approach can't be applied to unit tests on model views (attribute views, analytic views, calculation views and procedures).

Debug Unit Tests

When running unit tests for the first time you might be missing privileges or have semantic failures in your code. If that occurs you will be provided with stack traces and error messages. If that information is not sufficient to understand the issue, there are options to debug server-side JavaScript code. Both your unit test code and the HANA test tool libraries can be debugged (to a certain extent).


To enable the debugging functionality, follow the respective instructions, which will guide you (1) to deploy the delivery unit HANA_XS_BASE and (2) to add a "debugger" section to "xsengine.ini". Afterwards, debugging can be initiated from HANA Studio or from the Developer Workbench.


Regarding the example unit test above, a possible scenario for receiving a meaningless error message is missing privileges to run the attribute view. In that case the error "258 - insufficient privilege" will be returned. When running the unit test from the Developer Workbench, the following output will be shown:


To initiate debugging, (1) make sure that the target session for debugging is THIS_SESSION in the editor settings, (2) set a breakpoint on the code line where you want to start debugging by clicking on the line number, and (3) click on the link "Create test report" which is located below the test results. The link will re-run the unit test in a new browser tab. Because you set a breakpoint, the execution will stop at the specified code line and the debug controls will appear in the Workbench:


Use the Step In (F3) and Step Over (F4) buttons to navigate through the code.


Note: When it comes to debugging, the Development Workbench behaves differently depending on whether the file is a standard JavaScript file or a unit test file. Standard JavaScript files can be run immediately in debug mode by setting breakpoints and running the script/library. Unit test files must be run in another browser tab (via TestRunner.xsjs) to debug them in the Workbench.

Hi all,


I think this is a simple idea, as many of us get confused while working with transactions. For example, while working with smartforms we may get confused about bar code creation, uploading images in smartforms, etc.


My suggestion: show suggestions while the user is typing in the user command field, depending on the starting letters of the TCODE.


For example: in Google, when we search for anything we get autocomplete suggestions below the search box. In the same way, it would be useful if SAP provided suggestions for transaction codes, as it is a little bit confusing between some TCODEs.


If this idea sounds good to you then you can click on the below link to vote for it:


Provide TCODE Suggestions to users when user types TCODE in user command : View Idea







Sneha Jadhav



I have posted an idea to refer multiple members of a page axis dimension to other sheets in a workbook. Please help me to improve this idea with your valuable feedback and suggestions.



Please go through the below link for more information -

Refer multiple members of a Page Axis dimension : View Idea



Thank you..



I posted an idea a few days back to have the ability to customize the Refresh and Save buttons in the EPM Add-in, as many of us have faced the requirement to customize these two buttons based on some condition.

Please go through the below link for more information.

Ability to customize Refresh and Save button in EPM Add-in : View Idea


Help me to improve this idea with your valuable feedback and suggestions.



Thank you..

