Smart Data Streaming received some good coverage at TechEd last week, including a demo in Bernd Leukert's keynote on Tuesday morning. If you want to get a better idea of the capabilities of Smart Data Streaming and how it contributes to real-time event analysis and processing as part of an integrated HANA system, then check out this recording of the "DMM104: Event Stream Integration and Analysis for SAP HANA and the IoT" session:



To start learning how to build Smart Data Streaming projects, you can work through our hands-on Freezer Monitoring tutorial in the Smart Data Streaming Developer Center.


If you need a HANA system with Smart Data Streaming already installed and configured, then the HANA Developer Edition running on either Amazon AWS or Microsoft Azure already includes Smart Data Streaming. The blog post Using HANA Smart Data Streaming with the HANA Developer Edition  provides detailed instructions to help get you started.

Last week I had the privilege to attend my first SAP Tech Ed.


I had heard about some of the possible new features and was eager to learn more about them. One that caught my attention, given my main work stream, is the addition of Node.js to the XS world, initially in SP11.


While I was waiting for a hands-on workshop on the new Extended Application Services, I took the chance to learn what Node.js was all about.


As I understood from Thomas Jung and other colleagues in the room, Node.js is a runtime that allows us to do web development locally on our developer machines, instead of having to keep everything in the SAP HANA repository.


There may be more advanced things that could be done as well; however, this is just to showcase how simple it is.

It took me less than 30 minutes to set this up.



Three things I really liked about Node.js:


1) A simple and easy way to run a web app in a dev environment

2) An easy way to separate web apps, web services, etc. into their own Node processes; smaller, focused processes run faster

3) Node.js has many modules available to everyone, and they are very easy to use



I look forward to learning more and more about Node.js and being able to contribute more.



1) You would create a simple Node.js app that writes text to the screen.



2) The same exercise as above, but serving a sample index.html file, which could be your SPA.



Before being able to access this app, you would need to run the Node.js JavaScript file from above from your command line,




and you would be able to see the node process in your Windows Task Manager.





Similarly, you could run a RESTful web service which returns JSON: following the same steps as above, it returns a JSON response from another Node.js process (notice the different port number on this service).






Happy coding!

Hi Folks,


A couple of weeks ago I successfully passed the SAP Certified Development Specialist - ABAP for SAP HANA exam with a great score, so I decided to share how I prepared and a few tips.

To apply for the exam there is a prerequisite certification in ABAP, so I am going to assume that ABAP is not a problem.

In order to prepare not only for this exam but to truly understand the SAP HANA platform as a developer, you will need to read these three books:

·         SAP HANA An Introduction (SHI)

·         ABAP Development For SAP HANA (AFH)

·         SAP HANA Administration (SHA)

Below I list all the areas and sections that are relevant to study. Even though each topic can be found in just one book, I consider it mandatory to read the topic in all the listed books, because one book goes deeper than another does. So, to have a real 360° point of view, please read all of them.


Ø  CORE COMPONENTS – AFH – 1.1.0 / SHA 1.1.0


Ø  COLUMN STORE – AFH – 1.2.2 / SHI 1.2.3 / SHA – 8.3.1


Ø  ENCODING – AFH - 1.2.2

Ø  PARTITIONING – AFH – 1.2.2 / SHA 9.3.0 / SHI - 13.2.8

Ø  TEXT-SEARCH - AFH – 9.0.0







Ø  ATTRIBUTE VIEW - AFH – 4.1.0 / SHI – 10.3.0 / SHI 11.2.0

Ø  ANALYTIC VIEW - AFH – 4.2.0 / SHI – 10.4.0

Ø  CALCULATION VIEW – AFH – 4.3.0 / SHI – 10.5.0


Ø  PROCEDURES – AFH 5.2.0 / AFH 5.3.1 / AFH - 5.3.2

Ø  PLAN OPERATORS(POP) – AFH – 5.2.4 / SHI -10.6.0




Ø  DATA TYPES - AFH - 3.1.3

Ø  NATIVE SQL – AFH -  3.2.4 / AFH - 4.5

Ø  TRANSPORT – AFH – 6.2.1


Ø  ERROR ANALYSIS (ST22)  - AFH - 7.2.2

Ø  SQL TRACE (ST05) – AFH – 7.2.3 / AFH - 7.4.3






Ø  EXPLAIN PLAN – AFH 7.4.5 / SHA -15.4.0

Ø  PLANVIZ – AFH 7.4.6 / SHA -15.5.0

Ø  DBACOCKPIT (DBACOCKPIT) – AFH – 7.5.1 / SHA 4.2.0 / SHI 13.8.0





Ø  ROBUST PROG – AFH – 14.3.2


Ø  AMDP – Complementary material, not available in the books

§  Refer to []

§  Refer to []

Ø  ADBC - Complementary material, not available in the books

§  Refer to []

§  Refer to []

Ø  CDS - Complementary material, not available in the books

§  Refer to []




Ø  ECLIPSE PLATFORM - AFH - 2.1.0 / SHA 4.1.0


·         PS: After each topic is the book, chapter and section where you can find all the information needed. For example, SHI 1.4.3 = SAP HANA An Introduction (SHI), chapter 1, section 4, item 3.

·         When no section or item is given, I recommend reading the whole chapter or section.


·         Not all chapters in these three books are needed for this exam, so I recommend sticking to the plan.

·         Reading all those topics gives you 95% of all content on the exam.

·         Put some extra effort into implementing examples of AMDP, CDS and ADBC, since they are not fully covered in the books.

·         Use and understand all the runtime and analysis tools.

·         Know how many replication methods there are, which one is real-time and which one is ETL-based.

·         Understand the differences between all join types, and unions too.

·         Fully understand code pushdown.



GOOD LUCK! I hope I could help you all.









I had the privilege of being invited to join the Bloggers Program and SAP Mentor Program at SAP TechEd 2015 in Las Vegas this week.


I also got to spend a little time with Thomas Jung. Tom is always willing to help me understand just what SAP is doing in either the ABAP or HANA world. While we had a brief earlier discussion at SAPPHIRE in June about the HANA roadmap it was not until this week that I fully appreciated how some of the new architecture that is coming with HANA SP11 will change native HANA development.


Up until now native HANA development has mainly been done using something called Extended Application Services (XS). The XS engine is a lightweight application server inside HANA. Developers built scripting applications using JavaScript to deliver native HANA applications, web services, etc.


SAP had always said HANA would support other languages, but to do so they have had to re-architect the XS runtime. My very amateur diagram of this new architecture looks like this.



When I checked with the official SAP architecture slide I wasn’t that far away - I might even argue my diagram is better.



Essentially SAP have built an XS runtime platform that can support different types of other runtimes within it. I know what you’re thinking: that’s Cloud Foundry, right? Actually you are close, but think more Cloud Foundry Lite.


So now SAP can pick and choose which runtimes, and therefore which languages, they want to support on HANA. In SP11 they will deliver Node.js for JavaScript development, Tomcat (TomEE) for Java developers and FastCGI for C++ developers. In theory others might follow. In theory you could build your own.


As you see on the right-hand side of the architecture slide, SAP will also still deliver the existing XS JavaScript runtime for backwards compatibility. Be warned though that the JavaScript runtime engine has been changed from SpiderMonkey to Google V8, so your existing XSJS code might need minor tweaks. Some of the APIs have been rewritten as well.


But for me the really interesting part of this was the realisation that SAP have really committed to Node. The Node community have been trying to gain “enterprise” acceptance for some time now. Sure there are lots of examples of large implementations, including some very high profile ones, but it seemed to me their argument was not fully won.


Now, in placing Node right at the centre of their strategic HANA platform, SAP have made a huge statement of confidence. I am a bit surprised the Node advocates aren’t shouting it from the rooftops. Maybe they haven’t figured it out yet?


I know there are countless blogs out there on how to set up a trial HANA instance in the cloud, but I could not find one place that showed me all the parts together.  Here is my attempt to do that...


I recently took the Open SAP course Software Development on SAP HANA (Delta SPS 09, Repeat) by Thomas Jung and Rich Heilman.  It was a great and very informative class.  In order to do the assignments you need to build your own HANA instance.

SAP allows you to do this with a trial version and they have partnered with Amazon Web Services (AWS) for the hosting.  SAP does not charge for the HANA instance but AWS does charge you for the cloud space. 

1.)  Register at SAP SCN to obtain your SCN Account number.  You can find your account number by going to your SCN profile and clicking on the expand triangle next to your name.


2.)   Set up an account with Amazon Web Services.  Click on "Sign In to the Console" and sign in with the existing credentials you shop with, or create a new account.


From the Console go to the IAM Section - Identity and Access Management


Inside the IAM you will need to create your user and obtain the Access Key ID and Secret Access Key that will be needed for your CAL account.


This user will have the authorizations to manage your AWS settings.  You can manage the user's authorizations and add the user to authorization groups if you would like.  However, all we need from this user is the Access Key ID and Secret Access Key.  Make sure you download the credentials, because you will not have another opportunity to view them.  This user, through its credentials, will link your AWS cloud to your CAL HANA instance.


You will want to create a password if this user will need to access AWS.

The User must be assigned several policies in AWS so that your CAL account can be created with the proper Authorizations:

      • AmazonEC2FullAccess
      • AmazonVPCFullAccess
      • ReadOnlyAccess
      • AWSAccountUsageReportAccess


         If you have any questions on setting up your users in AWS, see: CAL User Setup for AWS FAQ

3.)  Install Eclipse on your PC.  Eclipse is one of the tools used for HANA development.  Download and install Eclipse (the Luna version) – also free (see: Instructions for downloading Eclipse).  Note that Luna is not the most current version of Eclipse, but it is the one you will need.  If you follow the link to download Eclipse you will get a message that Luna is not the most recent version; ignore this and click on Download.



Select the following Eclipse IDE:


Eclipse will continue to try and redirect you to a newer version.   Make sure you select Luna each time you have the opportunity.


On the right hand side of the screen select your machine type for the download.




4)      You will need to add SAP’s tool kit for Eclipse so that you can have the correct perspectives that you will need. Detailed Tutorial for setting up Eclipse for SAP


From the Eclipse menu, choose Help > Install New Software.  You will need to enter the URL  in the section "Work with".  You will then need to select the components that you want to install.



5)      Set up your SAP’s Cloud Appliance Library (CAL) Account; you will have to enter the AWS Access Key and Secret Key and your SAP SCN Account Number.



Go to the Accounts Tab and then create a new Account.   Select Amazon as your cloud provider and enter your AWS Access Key ID and Secret Access Key.  This is how your CAL account will be linked to your AWS account.



6)    Build your HANA instance.  Once you have your AWS account you can go to SAP’s Cloud Appliance Library and find the solution below.  You will need to click on Try Now.  Because I've already created my instance, "Try Now" is no longer shown for me; you will need to select the instance that is indicated.



Name your instance.  I just selected the version of HANA as 01 and the support pack stack of 09. 

Select all the defaults including region being us-east-1.


Make sure that you select the Static IP Address during the setup.  It costs pennies extra, and if you don’t, your Eclipse workspace will cause you problems.

Next you will be asked to provide the access points that are allowed to access your IP address.  I just used the defaults and clicked Next; if you need to change this later, that can be done from AWS.


Next you just pick a password.    Note, you will log onto HANA using the user name SYSTEM and this password.




7)      Activate your HANA instance--Go to the "Instances" tab and then click on "Activate".


After you click on Activate, you will get this popup; this is so that your instance will be automatically suspended after 8 hours.  This is very important in case you forget to shut down your instance, because you are charged by the hour for the AWS cloud space while your instance is active.


It takes about 45 minutes for the instance to be built.  Each subsequent time you activate it, it will only take about 10 minutes to boot up.

After about 5 minutes you can click on the instance as below to obtain your HANA IP address:



You will be given information about your instance.    Note the Static Public IP address check mark. 


Very important to shut down your instance when you’re finished so the hourly AWS charge will end!



8)      Viewing your instance in AWS.  This is not really necessary but you might be interested in viewing your instance here.



9)      But how much does the AWS cloud space cost?  Click on the expand triangle beside your name and select Billing & Cost Management to see your current invoices.


Below is my month-to-date October invoice that shows the hours I’ve been logged on and the cost.  Note that the instance you created is "High-Memory Extra Large" (r3.xlarge); my bill shows pricing for an additional instance that is different and more expensive.



10)  Now follow this link to the assignments for the OpenSAP class Software Development on SAP HANA.  This is a 284 page document, very well done that has tons of screen shots that will teach you how to develop in HANA.  The first section of the document will lead you through setting up Eclipse to work with your brand new HANA Instance.

Current Situation (at least <= SPS10)

In many projects there is a need to create catalog objects which cannot be transported by the lifecycle management tools the way repository objects can. Such catalog objects are for example:

  • Virtual Tables
  • Triggers
  • Indices and Full Text Indices on Catalog Tables
  • ...


Because of the lack of corresponding repository objects or a transport mechanism, by default the objects have to be created manually on each target system. In many project setups the development team is not allowed to create the objects on their own on test, quality and production systems, so other parties, e.g. application operations, have to perform the manual steps. But in big landscapes with automatic deployments and several system copies/setups, doing things manually is not really an option, because it is too error-prone and time-consuming.


Why did I write "at least <= SPS10" in the header line of this paragraph? With SPS11 the new HANA Deployment Infrastructure (HDI) will be introduced. It is a new service layer of the HANA database that simplifies the deployment of artifacts. This new approach is planned to support many artifacts (like triggers and virtual tables) which are not supported in the "old" world. With SPS11, HDI will be shipped in a beta state; it is expected to be generally available with SPS12.


Solution Approach

In this space, the question of how to deliver such objects automatically has already been discussed several times, so I want to share how I solved the issue in our projects for almost all situations (there are still some gaps regarding special object dependency situations which require some manual effort, but 90% run automatically now). Consider that the following approach is just one of several possible ones.


The idea behind the "transport" of catalog objects like triggers is to deliver the create statements for the objects in transportable repository objects like procedures (.hdbprocedure). So in a first step I created repository procedures containing the create statements for the required objects. That was already an improvement, because the "complexity" of the create statements was encapsulated in a procedure which just had to be executed by the responsible team on the target systems. But because the procedures still had to be executed manually on the target systems, I searched for an option to automate the execution. The answer was something HANA XS provides out of the box: an XSJOB. So I implemented an XSJOB which calls an XSJS function; the XSJS function in turn calls the procedures which create the catalog objects. On the target systems the XSJOB can then be scheduled in the required time intervals (in my case hourly, right after the hourly deployment of new builds). All objects (the XSJOB definition, the XSJS file and the procedures) are repository objects which can be transported by the standard lifecycle management tools (e.g. transport of changes via CTS+).

Maybe someone asks why the XSJS function is necessary, since an XSJOB can call a procedure directly. The answer is that XSJS provides better options for logic orchestration, better error handling, and SMTP usage for mail notifications in case of errors.


The following picture gives a brief overview of the objects/services used:


One point which has to be considered for the procedure implementation is that each procedure has to check whether the object it creates already exists. That is necessary because of the scheduling: each time the procedure runs after the object has been created, it would otherwise fail because the object already exists. There are two options to react to that situation. The first option is to skip the create statement; that makes sense when no changes are expected in the created objects. The second option is to drop the object and create it anew, so that changes are also reflected by the re-creation on target systems. For that option, dependencies to other objects should be analyzed/considered upfront.



With a simple example I want to describe the necessary steps and how they could look. In this example a trigger is created which inserts a log entry into a log table after each insert into a specific table.


So we have two tables: eTest01, for which the insert trigger should be created, and eTest01Log, into which the trigger should insert an entry after each insert on eTest01 (the id value of eTest01 and a timestamp).
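Assuming a working schema (I call it "MYSCHEMA" here) and minimal columns, the two tables could look roughly like this; the post only states that eTest01 has an id and that the log table stores that id plus a timestamp:

```sql
CREATE COLUMN TABLE "MYSCHEMA"."eTest01" (
  "id" INTEGER PRIMARY KEY
);

CREATE COLUMN TABLE "MYSCHEMA"."eTest01Log" (
  "id"        INTEGER,   -- id value copied from eTest01
  "changedAt" TIMESTAMP  -- time stamp of the insert
);
```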



First we define the procedure which creates the insert trigger for table eTest01. It is a very simple case: the procedure checks whether the trigger already exists, and if not, the trigger is created.
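A sketch of how such a procedure could look (schema, trigger and procedure names are made up; the existence check queries the TRIGGERS system view, and the DDL runs via dynamic SQL because CREATE TRIGGER cannot be embedded directly in a SQLScript body):

```sql
-- mypackage::createTriggerETest01.hdbprocedure (hypothetical name)
PROCEDURE "MYSCHEMA"."mypackage::createTriggerETest01" ()
  LANGUAGE SQLSCRIPT AS
BEGIN
  DECLARE lv_count INTEGER;

  -- Only create the trigger if it does not exist yet,
  -- otherwise the scheduled re-execution would fail
  SELECT COUNT(*) INTO lv_count
    FROM "TRIGGERS"
   WHERE "SCHEMA_NAME" = 'MYSCHEMA'
     AND "TRIGGER_NAME" = 'ETEST01_INSERT_LOG';

  IF :lv_count = 0 THEN
    EXEC 'CREATE TRIGGER "MYSCHEMA"."ETEST01_INSERT_LOG"
            AFTER INSERT ON "MYSCHEMA"."eTest01"
            REFERENCING NEW ROW newrow FOR EACH ROW
          BEGIN
            INSERT INTO "MYSCHEMA"."eTest01Log"
              VALUES (:newrow."id", CURRENT_TIMESTAMP);
          END';
  END IF;
END;
```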



Next, the XSJS function which executes the procedures: the schema, package and procedure names are defined in a JSON object which can be extended with further procedures, so no dedicated call has to be implemented for each new procedure.
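A sketch of how the XSJS function could look, using the $.hdb interface (names are made up; the error handling is reduced to collecting messages, as described above):

```javascript
// mypackage:createCatalogObjects.xsjs (hypothetical name)

// Procedures to execute; extend this array for each new create procedure
var procedures = [
  { schema: "MYSCHEMA", package: "mypackage", name: "createTriggerETest01" }
];

function executeProcedures() {
  var errors = [];
  procedures.forEach(function (proc) {
    var conn = $.hdb.getConnection();
    try {
      var fn = conn.loadProcedure(proc.schema, proc.package + "::" + proc.name);
      fn();
      conn.commit();
    } catch (e) {
      // Collect errors; they could e.g. be mailed via the SMTP XSJS library
      errors.push(proc.name + ": " + e.message);
    } finally {
      conn.close();
    }
  });
  return errors;
}

executeProcedures();
```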



After the XSJS function is defined, an XSJOB definition is created which calls the XSJS function.
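The XSJOB definition is a small JSON file; it could look like this (package, file and function names are made up; the xscron expression runs the job hourly on the hour):

```json
{
    "description": "Create catalog objects like triggers",
    "action": "mypackage:createCatalogObjects.xsjs::executeProcedures",
    "schedules": [
        {
            "description": "Run hourly, right after the hourly deployment of new builds",
            "xscron": "* * * * * 0 0"
        }
    ]
}
```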



The XSJOB then has to be activated in the XSJOB scheduler. The XSJOB scheduler is available in the HANA XS Admin tool, which can be reached via the URL http(s)://<host>:<port>/sap/hana/xs/admin.

For the activation the following information has to be supplied:

  • User name (and password) used to execute the job
  • Language
  • Start/end time interval in which the job will be executed
  • The active flag has to be set to true



If everything worked fine, a schedule entry and entries for the executed runs should appear in the job log.



Finally, the working trigger can be found in the catalog.




If you want to try out the approach yourself, here are a few hints:

  • The user used in the XSJOB scheduler for the job execution must have the privileges to create (and if necessary drop) the objects in the defined schemas. Consider also that this user becomes the owner of the created objects, so please use a dedicated user which is never removed from the system; otherwise your objects will be lost too. I would also not recommend using DEFINER MODE procedures executed by _SYS_REPO.
  • In the XSJS file I added a comment that exception handling has to be done in case the execution of a procedure raises an exception. In my case I implemented logic which collects the errors and then sends them to a defined pool mail address using the SMTP XSJS library.
  • In case it is necessary to define system-specific things for the execution of a job (e.g. the ABAP schema name, which in most cases differs per system following the template SAP<system id>), this can be done via parameters. Parameters can be entered directly in the XSJOB file, but also in the XSJOB scheduler to define them per system.
  • To activate the XSJOB scheduler itself, the role sap.hana.xs.admin.roles::JobSchedulerAdministrator is necessary.
  • To schedule an XSJOB, the role sap.hana.xs.admin.roles::JobAdministrator is necessary.
  • The XSDS procedures library is available as of HANA SPS09. In earlier releases you can call the procedures via the legacy DB interface $.db.



Hello colleagues,


I recently shared an XS Engine library on GitHub that may be useful for those working with the XS Engine and the DataTables plugin, a popular, free and lightweight jQuery plugin for creating tables in web applications, with a wide range of features available.


With a small amount of data, the plugin itself can handle almost every data manipulation case on the client side; however, as the data grows (real-life scenarios), it becomes more efficient to have server-side logic retrieve just the desired part of the data at each moment, according to the pagination, filtering and sorting of the displayed table.


The plugin's developer team has an example on their site of how this server-side processing would work with PHP. Basically, I developed an xsjslib that does all the work their PHP code does, expecting to receive the input parameters defined in that reference. On top of that, I implemented some additional options to handle different scenarios that the project I am currently working on demands. These additional options are described in the GitHub repo (hopefully, clearly described).
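For context, on the client side wiring DataTables to such a server-side endpoint is essentially one options object; the endpoint path and column names below are hypothetical, not taken from the library:

```javascript
// Hypothetical DataTables configuration for server-side processing
var dtOptions = {
  serverSide: true,   // delegate paging, filtering and sorting to the server
  processing: true,   // show the "processing" indicator while a request runs
  ajax: '/my/package/dataService.xsjs',  // hypothetical XSJS service path
  columns: [
    { data: 'ID' },
    { data: 'NAME' },
    { data: 'CREATED_AT' }
  ]
};
// In the browser this would be applied with: $('#myTable').DataTable(dtOptions);
```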


In summary, this library pushes the following to the server-side:

  • Pagination;
  • Filtering (some additional options extend this feature, enabling different operators than the default);
  • Sorting;
  • Format parsing (additional feature).


which enables data from huge tables (tested with ~1B rows) to be quickly displayed in a web interface.


I included some XSJS code as an example of how this library can easily be used.


The Github repository can be found here.


The library still uses the methods of the XS API $.db, so it works with almost every SAP HANA revision (tested on revisions 85, 97 and 102).


Hope this is useful for someone.

Let me know your thoughts about it, and if you find any bug, please let me know.



Andre Rodrigues

Dear community members based in Italy: a storm of HCP-based ideas is going to be generated soon. Don't miss the opportunity to add your own!



Sapstorming is an envisioning workshop, a path of inspiration dedicated to people who believe in innovation as a driver of development.



The program is aimed at young entrepreneurs and developers who want to learn about the new opportunities offered by SAP. The workshop is also open to graduates and students who are looking for expertise and new ideas, and is run in collaboration with Tag Innovation School.



October the 29th



In Milan, during the Italian SAP FORUM at Fieramilanocity



The program includes four hours of training, work, discussion and learning: a full immersion that alternates a lecture on SAP HANA Cloud Platform with collaborative brainstorming and networking.


9h00 – 9h20 Welcome

The trainer will give an explanation of the learning objectives.


9h20 – 10h15 Lecture

During the first part of the workshop the trainer will present SAP HANA Cloud Platform, an open platform-as-a-service providing unique in-memory database and business application services. The aim of this Lecture is to offer a range of opportunities that will allow participants to imagine new solutions.


10h15 – 10h45 Brainstorming First question:

Divided in groups of 8 people, the participants will need to answer a question through creative brainstorming: “How can a startup working in the sharing economy sector scale thanks to SAP HANA Cloud Platform?”


10h45 – 11h15 Brainstorming Second question:

Divided in groups of 8 people, the participants will need to answer questions through creative brainstorming.


11h15 – 11h30 Break


11h30 – 12h00 Working

The groups will try to develop the ideas they discussed during the brainstorm session, preparing a 4-minute pitch.


12h00 – 12h30 Pitch


12h30 – 13h00 Feedback and Q&A

At the end of the presentations, participants will receive feedback from SAP’s trainers and TAG Innovation School staff.


Curious to know more? Get registered soon!

What's New?


We have just started recording a number of video tutorials for the SAP HANA Academy specifically targeted to Oracle developers.


In this series, we assume that you are familiar with the Oracle database but that you might need some guidance for SAP HANA.


Where can you find documentation? Any free training? How does the SAP HANA in-memory technology relate to Oracle's?


This will be an ongoing project, so please feel free to leave your comments below about topics that you wish to have addressed.


  • PL/SQL vs SQLScript?
  • Column storage?
  • What tool to use?
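As a small taste of the PL/SQL vs SQLScript question: both offer procedural blocks, but the dialects differ. An illustrative pair of anonymous blocks (table and variable names are made up):

```sql
-- Oracle PL/SQL anonymous block
DECLARE
  v_count NUMBER;
BEGIN
  SELECT COUNT(*) INTO v_count FROM employees;
  DBMS_OUTPUT.PUT_LINE('Employees: ' || v_count);
END;
/

-- SAP HANA SQLScript anonymous block
DO BEGIN
  DECLARE v_count INTEGER;
  SELECT COUNT(*) INTO v_count FROM "EMPLOYEES";
  SELECT :v_count AS "EMPLOYEE_COUNT" FROM DUMMY;
END;
```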


Complete playlist: SAP HANA for Oracle Developers - YouTube





In the first video, the SAP Community Network, the SAP Help Portal, the SAP Support Portal and the Product Availability Matrix are introduced. Knowing where to find information is critical for success.




Deployment Options


The next video discusses deployment options for SAP HANA with some background about the start of HANA as an appliance and the Tailored Data Center Integration option introduced with SPS 06.


The hosted options are presented:



Developer Edition(s)


The next video discusses the SAP HANA developer editions hosted by Amazon Web Services and Microsoft Azure.


See also the great Step-by-step guide for deployment of SAP HANA Developer Edition on Microsoft Azure | Microsoft Azure Blog and AWS | Getting Started with SAP.



Connecting to the Developer Edition


The next video shows how to connect to the developer edition and shows the web interface to manage your SAP HANA instance, how to connect using SAP HANA studio and also how to connect on the command line using SSH.





Now that we can connect to the SAP HANA database, we take a look at the SAP HANA architecture as compared to Oracle's.


Connectivity options are discussed: ODBC/JDBC, ODBO, XML/A, OData.


The different system services: indexserver, name server, compile server, etc.





The next video discusses SAP HANA in-memory: What's the difference between a traditional database and an In-Memory Platform?



Thank you for watching


You can view more free online videos and hands-on use cases to help you answer the What, How and Why questions about SAP HANA and the SAP HANA Cloud Platform on the SAP HANA Academy, follow us on Twitter @saphanaacademy, or connect with us on LinkedIn.

Analyze the following items and re-create them after moving to HANA if needed.


The first one will be captured while running the BW checklist tool

1. BWA snapshot indexes.


2. Explorer InfoSpaces on top of BWA indexes – if you have Explorer on blades, negotiate with SAP to see if you can get BWA credit toward HANA licenses.


3. Check with your BWA hardware vendor to see what you can do for your existing BWA boxes


4. Check the contracts with the vendor who is providing BWA maintenance

Readjust documents for monthly outages, emergency outages, upgrades, backup-restore, on-call etc. to remove BWA steps (and replace them with HANA where necessary).


5. Prep your data center if you’re not removing BWA machine(s) immediately


6. Inform your network team since BWA has dedicated network


7. Check your DR/HA procedures


8. Run program RS_BW_POST_MIGRATION after the DB migration to remove BWA metadata


9. When you’re testing HANA, make sure all BWA indexes have been deleted and the BWA machine is unplugged.




Shekhar Chauhan

Many articles are available on performance tuning in HANA models, and many on best practices too.


This document deals with the importance of building blocks (base views): how to build a model with base views, and how they reduce the effort when there are major or minor changes to that model.


For example,


We have built a calculation view using 4 to 5 projections and joins. On top of the model, we have built variables which act as prompts in reporting tools like Web Intelligence, Crystal Reports, Explorer, etc.


In the future, if there is a change in the applied logic, such as modifying tables or adding new tables, it will take more time to adjust the model to accommodate the new logic if the existing model was built plainly, without any building blocks.


Consider the model below, where we need to add a new table after Join_2.


  1. We don’t have any building blocks in this model; all the projections get data straight from the tables.
  2. It has the VBAP, MARA and MAKT tables in separate projections, joined as per the join conditions.
  3. All the renaming (technical names to business names) takes place in the aggregation layer.




Suppose we get a new requirement from the client:


  1. Add table KNA1 to get the customer name in the above model.


The following changes are required:


  1. Remove the join line between Join_2 and the Aggregation, which will erase the renamings and variable mappings.
  2. Add the KNA1 table; after that, the renamings have to be applied again as before.
  3. Remap the variables.


To avoid the above steps, we can have this model with building blocks.


For the above tables I have created two building blocks; a building block is nothing but a base view.

  1. View 1 – MARA & MAKT
  2. View 2 – VBAP & VBAK


View 1 is used in Projection_1 and View 2 in Projection_2.


If we get a new requirement to add table KNA1, we add it to base view 2 and select only the needed fields in the main model.


It will not affect the naming conventions or variable mappings.


Always do the renaming at the base level itself.



Thanks, and let me know if any further explanation is needed on this.

As a “Data Architect” at one of the big utility companies in Australia, I was wondering whether we, as a company, should be considering VORA or not. Certainly we could get SAP to do a little presentation for us, but I thought of doing a bit of digging myself. I would like to share my thoughts and observations with you and hope this will assist you or your organization in some shape or form.

As VORA is a new product and knowledge/information in the market is not yet widely available, there are quite a few questions for which finding an answer is tricky. I tried to consolidate all this information in one place, from an analyst, architect and BI manager perspective.

“Let’s start with basics “

What does VORA mean?

VORA comes from the Latin root of “voracious”, or in other words “big”. As VORA can consume large amounts of data, it was given this name, according to an SAP spokesman.


What does it do?
VORA is an in-memory query engine that plugs into the Apache Hadoop framework to provide interactive analysis. VORA uses the Spark SQL library and the HANA compute engine.

How does it do it?

HANA VORA is a combination of Hadoop/YARN (resource allocation), Spark (in-memory query engine) and HANA push-down query delegation capabilities. VORA handles OLAP analysis and hierarchical queries very well, as it layers in a few enhancements to Spark SQL. VORA can exist standalone on one of the Hadoop nodes but can also integrate with classic HANA. Classic HANA integration will of course incur infrastructure cost, but the Hadoop integration should cost next to nothing in terms of infrastructure.

"We're taking the lessons learned with what we've done with HANA, the real-time, interactive experiences which you can do in the enterprise cloud and applying this to Hadoop," Tsai said. "But it's not just making Hadoop interactive .... a lot of people are working on that;  but how you also provide those real-time, interactive experiences and that business semantic understanding in Hadoop, and I think that's the biggest thing that SAP has put in."

What are the specific features of VORA vs Apache Spark?

VORA is an extension to the Hadoop platform and includes the following features in its first version:

  • Accelerated In-Memory processing
  • Compiled Queries
  • Support for Scala, Python and Java
  • HANA and Hadoop mash-ups
  • Hierarchies
  • Support for HDFS, Parquet and ORC
  • NUMA awareness

Is VORA based on SAP HANA?

No, VORA is a completely new code base, but the engineering team is the same group as the HANA engineering team, so many concepts and ideas have been borrowed from SAP HANA, as you can see from the feature list. VORA and SAP HANA can exist separately.

Who will benefit by using SAP HANA VORA?

SAP HANA VORA will deliver the most value to people in the following positions:

Business analysts can perform root cause analysis using interactive queries across both business and Hadoop data to better understand business context.

Data scientists can discover patterns by trying new modelling techniques on a combination of business and Hadoop data, all without creating duplicate copies of the data within data lakes.

Software developers can deploy a query engine within applications that can span enterprise and Hadoop systems using familiar programming tools.

What type of licenses are there and how much will it cost (just the application)?

What are the challenges which SAP is trying to address using VORA?

Currently, the “batch process” based tools in the Hadoop landscape do not provide a fast drill-down mechanism to slice and dice the data. VORA will complement the stack of tools that Hadoop-enabled enterprises already have.

When is SAP releasing VORA to the market?

SAP VORA will be released on 18th September 2015. As per the SAP roadmap and strategic direction, it will be available in the cloud first. I expect all license types to be available from 18th September, but if there is a delay, it should only affect the on-premise version.

Integration to Hadoop

As you can guess from the screenshot below, SAP HANA VORA will be available as a configurable tool within the Hadoop landscape. The question that now arises is around the Hadoop enterprise distributions, e.g. HORTONWORKS and CLOUDERA: when are they going to accept and release this into their landscapes?

Steve Lucas, president of SAP’s Platform Products Group, mentioned in his conversation with “Fortune” that VORA is meant to augment and speed up data queries of unstructured data, not to displace Apache Spark.



What are the high level differences between SAP-HANA, VORA and Apache SPARK?


In my opinion, SAP VORA will be a good addition for companies that are already on SAP platforms. Such companies can integrate their transactional, data lake and other data sources into one VORA instance and create mash-up queries for deep-dive, interactive analysis. For others, I recommend exploring the options for a tool within the big data space; they can certainly also consider buying VORA, which is a commercial product offered separately from HANA.

Any questions, feel free to reach out to me.



SAP HANA VORA & HADOOP | Amandeep Modgil | LinkedIn

& SAP Product guide

When writing HANA adapters supporting realtime, one of the parameters to be sent along with each row is its opcode: Insert/Update/Delete/... HANA then receives such a row and inserts/updates/deletes the record. Not much magic there, is there?


For normal source databases and simple 1:1 replication of tables, indeed not much more is needed. But there are two areas that require more than just that.




The RSS adapter is a first example where more than that is needed. An RSS feed is nothing other than a web page, read frequently, on which the web server lists all recent changes in a standardized (XML) format. Each page has a unique URL - which could be seen as the primary key - and therefore the adapter knows what the 10(?) most recently added web pages were. If each of these 10 rows got an opcode of Insert, the first refresh would work, but the second would certainly get a primary key violation for at least some rows. It would not even help to remember the last call and the pages fetched then, as the adapter might have been restarted.

Hence one solution would be for the adapter to compare whether such a row already exists and whether it has the same contents. But what if there are transformations applied to the data? Then such a comparison does not make sense.

So there is another opcode to be used for those sources where the adapter has no knowledge of whether the changed row was processed already, is brand new, or got changed: the opcode Upsert. For such rows the source table requires a primary key.
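The Upsert semantics can be sketched in a few lines of Python. This is an illustrative simulation, not the actual adapter API: the target is modelled as a dict keyed by the primary key, so an Upsert row inserts when the key is new and updates when it already exists, and no key violation can occur.

```python
# Illustrative sketch of Upsert semantics (hypothetical helper, not the
# real HANA adapter API): the target is keyed by the primary key value.

def apply_upsert(target, pk_column, row):
    """Insert the row if its key is new, otherwise overwrite the old row."""
    target[row[pk_column]] = dict(row)

target = {}
# First refresh of the RSS feed: both pages are new.
apply_upsert(target, "url", {"url": "/page1", "title": "v1"})
apply_upsert(target, "url", {"url": "/page2", "title": "v1"})
# Second refresh: /page1 appears again with changed content - no key violation.
apply_upsert(target, "url", {"url": "/page1", "title": "v2"})
```

With plain Inserts, the second refresh would have failed on /page1; with Upsert it simply overwrites the existing row.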




Speaking of primary keys: an update is actually two rows produced by the adapter, the row to be updated (before-image values) and its new values (after-image values). Only with both pieces of information does the update work correctly.

Example: In the source there is a customer record with customer id = 1234 and name = Jack, and this row gets updated to customer id = 9999 and name = Jil. Without the information that it is the row with customer id = 1234 that is to be updated, the applier cannot create the correct update statement.

update table set customer_id=9999, name='Jil' where customer_id=1234 and name = 'Jack';

Which again leads to the question, what if the old values are unknown?

Most sources do not provide access to both the previous values and the new values. As long as the source table has a primary key, it is no problem to send only the Update row and no Before row.




The Delete requires the old values as well, in order to delete the old row, as in

delete table where customer_id=1234 and name = 'Jack';

If the table has a primary key and only the key of the row to be deleted is known, an eXterminate row should be sent instead.




As said, the Exterminate is a delete with only the PK columns being set; all other column values are null. The reason this is a separate opcode will become clear shortly, when talking about transformations.




Sometimes it is required to delete an entire set of rows with a single command. An obvious example would be a truncate-table command in the source. In that case the adapter does not know the individual rows to be deleted, only the range - in this example, all rows. Hence the adapter would send a row with opcode Truncate and all columns being null.

Another example would be a source command like alter table drop/truncate partition, e.g. for the partition with the column YEAR = 2010. In order to replicate such a source change, the adapter has to send a Truncate row with all columns being null except the YEAR column, which has the value 2010.




Another delta strategy could be to truncate a range of records and then insert the same rows again. As there is a dependency between the Truncate and the Insert rows, the adapter should send a Truncate row to remove the contents and then send Replace rows with the new values.

Such a delta approach is especially useful if the adapter knows that something changed but not exactly what - in particular, not the deletes.

Example: The source tells us that there was a change in the sales order ORDER=1234. One line item got added (ITEM=4), another got updated (ITEM=3), and a third got deleted (ITEM=1). But all of this is unknown to the adapter. Hence the adapter would send a Truncate row with ORDER=1234 and then send all rows currently found in the source, i.e. ITEM = 2, 3, 4.
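The sales order example can be simulated in Python. Again this is only a sketch of the semantics, not adapter code: a Truncate row deletes every target row whose values match the Truncate row's non-null columns (its "scope"), and the Replace rows then re-insert the current source state.

```python
# Illustrative simulation of the Truncate + Replace delta (not real adapter
# code). The Truncate row's non-null columns define the scope: every target
# row matching all of them is deleted; other rows are untouched.

def apply_truncate(rows, truncate_row):
    scope = {k: v for k, v in truncate_row.items() if v is not None}
    return [r for r in rows if any(r[k] != v for k, v in scope.items())]

target = [
    {"ORDER": 1234, "ITEM": 1}, {"ORDER": 1234, "ITEM": 2},
    {"ORDER": 1234, "ITEM": 3}, {"ORDER": 5678, "ITEM": 1},
]
# Truncate row: ORDER=1234 set, all other columns null.
target = apply_truncate(target, {"ORDER": 1234, "ITEM": None})
# Replace rows: the items currently found in the source (2, 3, 4).
for item in (2, 3, 4):
    target.append({"ORDER": 1234, "ITEM": item})
```

After this, order 1234 holds exactly items 2, 3 and 4 - the add, the update and the delete were all replicated without the adapter knowing which was which - while order 5678 is untouched.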



Opcodes and Transforms


The main issue we faced during development of the realtime transforms was how to process the change rows and their opcodes properly.

Example: The source is a customer master table with the primary key CUSTOMER_ID. This source data goes through a filter transform that takes only the US customers, feeds them through an Address Cleanse transform, and loads a target table that has a surrogate key as its primary key.

Inserts are simple: they flow through all transforms, a surrogate key value is assigned from a sequence, and the row is inserted into the target.

Updates and Deletes also flow through all transforms. The only requirement for them is that the source primary key columns exist in the target as fields, as those are used in the where clause of the update/delete.

Exterminate and Truncate rows are a problem, as these rows do not have values in all columns; hence the transformations could produce totally different results. The filter transform has the condition REGION='US', but for those rows REGION is always null, so the rows would not make it through the transforms and would never be applied to the target, which would be wrong. Therefore these rows are processed out-of-band. The first thing the flowgraph does is scan the input for these rows. All of them are then applied to the target first; for example, the Exterminate row for CUSTOMER_ID=4567 causes a delete where CUSTOMER_ID=4567 in the target table. For these opcodes, the target table has to have the source primary key columns loaded unchanged, as well as all the columns used as truncate scopes.
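The out-of-band handling can be sketched as follows. This is a hypothetical Python simulation with a dict-based target, not the real flowgraph engine: Exterminate rows are pulled out of the batch first and applied directly to the target via the preserved source key column, while all other rows go through the filter transform.

```python
# Illustrative sketch (not the real flowgraph engine): Exterminate rows ("X")
# bypass the transforms entirely and are applied first, keyed by the source
# primary key column kept unchanged in the target; regular rows go through
# the REGION='US' filter transform.

def process(batch, target):
    out_of_band = [r for r in batch if r["opcode"] == "X"]
    regular = [r for r in batch if r["opcode"] != "X"]
    for row in out_of_band:           # applied first, directly against the target
        target.pop(row["CUSTOMER_ID"], None)
    for row in regular:
        if row.get("REGION") == "US": # filter transform condition
            target[row["CUSTOMER_ID"]] = row
    return target

target = {4567: {"opcode": "I", "CUSTOMER_ID": 4567, "REGION": "US"}}
batch = [
    {"opcode": "X", "CUSTOMER_ID": 4567, "REGION": None},  # non-PK columns null
    {"opcode": "I", "CUSTOMER_ID": 8888, "REGION": "US"},
]
process(batch, target)
```

Had the Exterminate row gone through the filter like a normal row, its null REGION would have discarded it and customer 4567 would never have been deleted from the target.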

Replace rows are treated like Insert rows.


Opcode and TableComparison


If the flowgraph has a TableComparison transform, the above out-of-band processing happens in the TC transform rather than in the loader itself. The same applies to Exterminate rows.

The task of the TC transform is simply to take the input and create a dataset with Insert/Update/Delete opcodes only. So even a Truncate+Replace is turned into a dataset of Inserts, Updates and Deletes, with some rows being discarded in case they did not change at all.

Author: Saurabh Raheja, Infosys Ltd., Chandigarh


Hello All,


I wanted to share my SAP HANA certification experience, and I thought of SCN as the best platform, where we can learn, share and connect with SAP professionals.


My journey "To Be SAP HANA Certified" began when I got to know that my organisation was holding an SAP Certification Drive in collaboration with SAP. The moment I heard this news, I was just praying and hoping to get myself onto the list of selected nominations. I had had a very strong inclination towards SAP HANA from the very first day. Being SAP HANA certified was my dream, and it was a dream come true when I got a mail from one of the senior folks of the organisation saying that I had been nominated to appear for the SAP HANA certification. That day I felt very happy that my organisation had nominated me for the SAP HANA certification.


There are various Support Packages of SAP HANA available now. The latest one as of now is SP9, for which there is a different certification code.

The certification code C_HANAIMP141 is for SAP HANA SP7.


To register for the certification, one has to submit their profile/resume at

The profile is verified by the SAP certification team within 4-5 working days. If the profile is approved, we receive a consent form, where we fill in the payment details. The certification date and venue are shared by SAP only 4 days before the certification date.


This certification drive was held at various locations - Bangalore, Pune, Chennai and Noida - and candidates could choose their preferred location to appear for the certification. As Noida is near Chandigarh, and Chandigarh is not an SAP certification center, I opted for Noida as my certification center. I had approximately one month for the preparation. I used the materials available on the Learning Hub (e-learning, handbooks, etc.). The e-learning on the Learning Hub helped me a lot to prepare through its online videos, simulations and demos. The handbooks used for the preparation were mainly HA100, HA300 and HA350. One should go through these books at least twice before appearing for the certification. HA100 and HA300 are the most important and cover most of the topics of the certification.


The cut-off score to pass this certification was 59 percent. There were 80 questions in total, and there was no negative marking.

However, there were questions with a single correct answer as well as questions with multiple correct answers. There was no partial marking for questions with multiple correct answers. We had a total of 3 hours to complete the exam, and the computer screen has a timer on it. The questions were mostly scenario based, and many had choices which were very closely related; therefore, one must have a clear understanding of all the concepts.


For information on C_HANAIMP141, check the link


A major percentage of the questions were from Data Provisioning, so I would suggest going through this topic very thoroughly and understanding the concepts.

In addition, one should practice the creation of attribute views, analytic views and calculation views for SAP HANA modeling in SAP HANA Studio in detail.


Finally the certification day came, when I had to travel to the Noida center. Some candidates had the exam in the morning from 10 to 1 and some in the afternoon from 2 to 5.

Luckily I got the morning slot, which is what I had wanted. The exam started a bit later than the scheduled time, but that did not matter much, as the exam has its own timer of 3 hours. I was a bit nervous and tense too, as my organisation had nominated me trusting that I would clear the exam, so I could not take it lightly. The exam started and the proctor gave me the credentials with which we had to log in. We had to cross-verify the exam code and the passing marks at that time. Then the time came to click the start button and begin the exam.


Once the exam started, questions started flashing on the screen. There are a few buttons one should be very careful with. There is no save button; our answers are saved automatically each time we select an option, and we can change our answers any number of times. There are previous and next buttons with which we can navigate between the questions sequentially. There is also an assessment navigator button which opens a small window listing all the question numbers, so that we can jump to any question directly. We can also flag a question we have doubts about, so that we can return to it later. On completion of the exam there is no need to unflag the flagged questions, as that would simply waste time.


Only when we have completed the exam do we click the submit button. The moment we click the submit button, the result is immediately flashed on the screen.


I called the proctor and clicked the Submit button, confident enough that I was not going to fail this exam; it was just a check of how well I would score. I got 90 percent and can now tag myself as SAP HANA certified. I am very happy, as I am now globally recognized as SAP HANA certified.


After completion, we have to note the participation ID for future reference. We receive the hard copy of the certificate within 6 weeks of the certification exam.


I hope this blog provides you some insight into the SAP HANA certification (2014 edition).

All the best for those preparing and appearing for the certification exam.


Do not forget to like, share and comment on the blog, as it will help me improve my next blog posts.


Your feedback is valuable.


Thanks a lot !!


Signing off for now.


Warm Regards,

Saurabh Raheja

Introduction:

To consume HANA data in a UI we usually develop XSJS or XSODATA services, depending on the situation. As you know, with every release of HANA, SAP is adding XSODATA features. In this blog I will give you a basic idea of how you can use an XSJSLIB-based modification exit to update and return a value in the XSODATA response.


Scenario:

Let's consider a scenario: we've created an employee table which contains empid, first name, department, email, etc. For performing CRUD operations I've created an XSODATA service. We can pass every piece of information from the UI except the employee id, because it should be generated on the server side with the help of a sequence / custom logic, and it should be returned to the UI after a successful creation. I've seen many threads in which native HANA developers faced issues getting the newly created value in XSODATA.


Objects:




XSODATA : EMS.xsodata

service {
    "EMS"."EMS.Employee.HANATABLE::EmpPersInfo" as "Pinfo"
        create events ( before "EMS.Employee.XSJSLIB:emp_oprtn.xsjslib::usersCreate" ); // this exit is triggered before the create operation
}

SEQUENCE: EMS.hdbsequence

schema= "EMS";
increment_by = 1;      //  -1 for descending
start_with = 100;
maxvalue= 99999999;
cycles= false;         // when reaching max/min value
depends_on_table = "EMS.Employee.HANATABLE::EmpPersInfo";

XSJSLIB: emp_oprtn.xsjslib

function usersCreate(param) {
    $.trace.debug('entered function');
    let after = param.afterTableName; // temporary table holding the rows about to be created
    // update the employee id via the sequence before the create operation
    let pStmt = param.connection.prepareStatement('update "' + after + '" set EID = "EMS.Employee.SEQUENCE::EMS".NEXTVAL');
    pStmt.executeUpdate();
    pStmt.close();
}

UI5 code -

var oModel = new sap.ui.model.odata.ODataModel('/EMS/Employee/XSODATA/EMS.xsodata', false);
var inputData = {};
inputData.EID = ''; // dummy value; the real EID is generated server-side
oModel.create('/Pinfo', inputData, null, function(odata, oResponse){
    alert("Creation successful");
});

Testing: Everything is ready, so we can check it now.

See the screenshot of the debugger below: we sent just a dummy EID = 1, but in the response our newly created employee id is available.




