This blog entry mirrors first experiences with XSJS in SPS 9. XSJS is the server-side JavaScript used to create powerful services in the SAP HANA backend. The use cases shown below focus on database communication, service calls via AJAX, and some useful hints for beginners. The service will be consumed by an OpenUI5 application.

For this tutorial I’ll be using the SAP HANA Web-based Development Workbench (v1.91.1). The payload of the requests will be delivered in JSON format. A more formal introduction to software development on SAP HANA can be found in the official documentation.


First steps


Once you have created a database model and inserted some data, for instance with an OData service (see Thomas Jung's useful introduction to OData create/update/delete requests and Ranjit Rao's tutorial on how to create an OData UI5 application for help on that),


you may want to do something OData won't provide, such as creating an e-mail out of the modified data or manipulating the data in some other way. That's when XSJS becomes useful. Let's say we have a button in our app that triggers an XSJS call, which inserts the data provided with the service call into our database. Based on that, it requests some other data in order to create a mail with data-specific content.

The first thing you have to do is create a new file with the .xsjs suffix. This will do the trick so that it's interpreted as a server-side JavaScript file.


Calling a service from the UI5 controller

Our model's data will be sent in JSON format. A local "model.json" file stores all the data, including the specific object we want to send (in this case a repository object with attributes such as a name, a description, a license type, and a creation date). The object can easily be accessed through the model we are using, so all we need to do is create an AJAX request.
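A minimal sketch of such a request might look as follows; the service path and the payload values here are illustrative assumptions, not the original ones:

```javascript
// In the app this object would come from the JSONModel,
// e.g. oModel.getProperty("/repository"); values are made up here.
var repository = {
  Name: "MyRepo",
  LicenseType: "MIT"
};

var options = {
  url: "/services/createRepository.xsjs",  // hypothetical service path
  method: "POST",
  contentType: "application/json",         // tell the backend the body is JSON
  data: JSON.stringify(repository)         // serialize the model object
};

// In the browser, jQuery's $ is available and performs the request;
// .done()/.fail() attach the success and error callbacks.
if (typeof $ !== "undefined") {
  $.ajax(options)
    .done(function (odata) { /* what shall be done on success */ })
    .fail(function (err)   { /* error handling */ });
}
```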


The "$" identifier invokes jQuery. An AJAX call gives us the opportunity to call any service we want, with more settings available than you'll ever need (see the jQuery.ajax() documentation for details).


All you need to know for the beginning is that you need the URL of the service (which ends with ".xsjs"), the data to be delivered, and the contentType "application/json" to make sure the data is transmitted in the right manner. The data is accessed through the JSONModel which links to the "localModel.json" file and is then stringified with the predefined JSON.stringify() method. If you need the application to do something after the request has finished successfully, you can add a callback with ".done(function(odata){ /* what shall be done */ })", and there is a corresponding ".fail()" callback for error handling.

Now that you know how to call the service, let's have a look at what it actually does:


Creating the service logic

Since it's basically just JavaScript we are going to write, there's not much to say about specific syntax. Of course it makes sense to wrap a lot of our coding into functions that we then simply invoke.

The first function will get the data from the body we sent with the request and call an HDB procedure which inserts the new repository into the database.
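Such a function might look like the following sketch. It runs only in the XS engine (the server-side $ API), and the schema, package, and procedure names are illustrative assumptions:

```javascript
function insertRepository() {
    // parse the JSON body that was sent with the AJAX request
    var repository = JSON.parse($.request.body.asString());

    var conn = $.db.getConnection();
    // hypothetical schema/package/procedure names
    var cstmt = conn.prepareCall(
        'CALL "MYSCHEMA"."mypackage.procedures::createRepository"(?, ?, ?, ?, ?)');
    cstmt.setString(1, repository.Name);
    cstmt.setString(2, repository.Description);
    cstmt.setString(3, repository.LicenseType);
    cstmt.setString(4, repository.CreationDate);
    cstmt.execute();

    // fetch the output parameter (an integer in this example)
    var newId = cstmt.getInteger(5);
    cstmt.close();
    conn.commit();   // keep the connection open: getMailData() still needs it
    return newId;
}
```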


Again, the XSJS $ API gives us some nice features (despite the identifier, this server-side "$" is not jQuery). The XSJS documentation contains all the useful classes and methods you'll probably need. Keep in mind that two different versions of the database API exist

($.hdb and $.db).

As the second API ($.db) is older and lacks some useful classes and methods which the new one ($.hdb) provides, you should probably go for the newer one.

The first line initializes the function just as you know it from JavaScript. After that, the body of our request is taken as a string and parsed into a JSON object via "JSON.parse($.request.body.asString())". The next line gets us a connection to the database. After that, a procedure call is created which will insert the new object into the database; the procedure itself is not part of this blog. Pay attention to the syntax of the schema and procedure description, because it's easy to get irritated at the beginning. The question marks at the end are the input parameters which will be filled with our JSON data.

Unfortunately, with the old API it's not possible to hand a complete JSON object to a procedure as a row and receive single values as output at the same time; this might simply not have been implemented so far. As a workaround, splitting the JSON object and giving the procedure multiple inputs with simple data types did the trick. After the statement is executed, it's possible to fetch the output parameters (in this case an integer). Next, the procedure call is closed and the changes are committed on the connection. The connection is not closed yet, because there is still some work left for it to do. The "getMailData()" function then selects all the values connected to the repository object using prepared select statements, which are also covered in the documentation.
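A prepared select inside "getMailData()" might be sketched like this; the table and column names are assumptions:

```javascript
function getMailData(conn, repositoryId) {
    // prepared statement against a hypothetical repository table
    var pstmt = conn.prepareStatement(
        'SELECT "NAME", "LICENSETYPE" FROM "MYSCHEMA"."REPOSITORIES" WHERE "ID" = ?');
    pstmt.setInteger(1, repositoryId);
    var rs = pstmt.executeQuery();
    var rows = [];
    while (rs.next()) {
        rows.push({ name: rs.getString(1), licenseType: rs.getString(2) });
    }
    rs.close();
    pstmt.close();
    return rows;
}
```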


The "sendMail()" function, which is invoked after the mail data has been collected, takes several JSON objects as input parameters and creates a new mail. Fortunately, it is fairly easy to create a mail in XSJS: we just create a mail object and fill in its settings. An interesting security gap here is that you can enter any address as the sender, and the received mail will display exactly that sender. The neat thing is that the content of the mail is made up of "parts". As we want to create a pretty HTML mail, we'll use the contentType "text/html". After that, the mail's first part's text is filled with all the data we want to be shown in the mail. You can also improve the look of the mail by using inline CSS. Finally, the mail is sent via "mail.send()". The resulting mail looks something like the following:
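Sketched with the XSJS mail API, "sendMail()" could look roughly like this; the addresses and HTML content are placeholders:

```javascript
function sendMail(mailData) {
    var mail = new $.net.Mail({
        sender:  { address: "noreply@example.com" },   // any address is accepted here
        to:      [{ address: "recipient@example.com" }],
        subject: "New repository created",
        parts:   [ new $.net.Mail.Part({
            type: $.net.Mail.Part.TYPE_TEXT,
            contentType: "text/html",                  // one big HTML part
            text: "<html><body>" +
                  "<h2 style=\"color:#003366\">" + mailData.name + "</h2>" +
                  "</body></html>"
        }) ]
    });
    mail.send();
}
```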



Issues and Conclusion


XSJS services are easy to use if you know how to code JavaScript, and the functions regarding the db connection become clear very fast. Just keep in mind that for simple use cases OData services might be more efficient, because you don't need to define any service logic for them. If you need to modify data in some way before it reaches the database level, XSJS can be very useful, because it gives you all the opportunities of JavaScript to modify JSON objects and arrays and to invoke functions. Furthermore, it lets you send mails and helps you keep as much logic in the backend as possible, so you do not have to worry about API keys or credentials within frontend controllers. Dealing with authentication (which many applications need) is a lot easier with server-side XSJS than within the frontend.

One issue I faced was the lack of support for multiple HTML parts in one mail: the mail would not be rendered correctly, and there was no workaround except for creating one big HTML mail part. The procedure which creates the new repository entry also had to be modified a lot in order to work correctly; the procedure call in XSJS didn't allow passing a complete row as an input parameter, whereas via OData this was always possible. The documentation is still pretty helpful, even if it is short and needs to grow and include more classes in the future.

How was your first experience with XSJS? Which problems did you face? Feel free to express your thoughts in the comments section!

Core data services (CDS) is an infrastructure for defining and consuming semantically rich data models in SAP HANA. Using a data definition language (DDL), a query language (QL), and an expression language (EL), CDS is envisioned to encompass write operations, transaction semantics, constraints, and more.


A first step toward this ultimate vision for CDS was the introduction of the hdbdd development object in SPS 06. This new development object utilizes the Data Definition Language of CDS to define tables and structures, and can therefore be considered an alternative to hdbtable and hdbstructure.


In SPS 10 we continue to develop CDS with a focus on expanding the SQL feature coverage and improving complex join operations on views.


SQL Functions


In SPS 10, CDS is expanded to support almost all of the HANA SQL Functions. This greatly expands the kinds of functionality that you can build into views by formatting, calculating, or otherwise manipulating data with these functions. The following functions are the only ones not yet supported:

  • Fulltext functions
  • Window functions
  • the functions GROUPING, GROUPING_ID, and MAP in the Miscellaneous functions section


Geo Spatial Types and Functions


In SPS 09, CDS first offered support for the usage of the Geo Spatial types in entity definitions. In SPS 10 we expand this support for Geo Spatial in CDS with the addition of GIS functions. This example shows how you can use the function ST_DISTANCE to calculate the distance between two geometry values. Specifically in this example we are taking the address of a business partner which is stored in the database and calculating the distance between it and Building 3 on the SAP Walldorf campus.


define view BPAddrExt as select from MD.BusinessPartner {
  // ... other fields omitted; the address point is assumed to come from the
  // ADDRESSES association of the BusinessPartner entity
  round( ADDRESSES.POINT.ST_DISTANCE(
                NEW ST_POINT(8.644072, 49.292910), 'meter')/1000, 1) as distFromWDF03
};

Foreign Keys of Managed Associations in Other Associations

In the past, using a managed association in a "circular" relationship (where the key of an entity is used in the association to another entity, which in turn uses its key back to the parent) would simply have resulted in an activation error. In SPS 10, the compiler now recognizes such relationships. When it sees that the referenced field is actually part of the base entity and can thus be obtained without following the association, it allows activation and doesn't generate any additional columns in the underlying database tables.


The following is a common example of just such a Header/Item relationship:

entity Header {
  key id : Integer;
  toItems : Association[*] to Item on toItems.head.id = id;
};

entity Item {
  key id : Integer;
  head : Association[1] to Header { id };
};

Unlike a normal managed association, no additional column is generated for the association in the underlying database table. So in this case it acts very much like an unmanaged association.



Filter Conditions

Another new feature in SPS 10 is the addition of filter conditions. When following an association, it is now possible to apply a filter condition which is mixed into the ON condition of the resulting JOIN. This adds more power and flexibility to the views you can build via CDS, while also following the idea of CDS to make the definition more human-readable and maintainable than the corresponding pure SQL.


In this first example we apply a simple, single filter on LIFECYCLESTATUS to the BusinessPartner -> SalesOrder join.



view BPOrdersView as select from BusinessPartner {
  PARTNERID,
  ORDERS[LIFECYCLESTATUS = 'N'].SALESORDERID as orderId
};

The resulting generated view is:


Associations with filters are never combined. Therefore, in order to tell the compiler that there actually is only one association, you have to use the new prefix notation. In this example we want the LIFECYCLESTATUS filter to apply to both the SALESORDERID and the GROSSAMOUNT retrieval via the association.


view BPOrders2View as select from BusinessPartner {
  PARTNERID,
  ORDERS[LIFECYCLESTATUS = 'N'].{ SALESORDERID as orderId,
                                  GROSSAMOUNT  as grossAmt }
};

The resulting generated view is:


But we also see that, by using the prefix notation, such filters can be nested. This example expands on the earlier one: it still filters business partners who only have orders with LIFECYCLESTATUS = 'N', but now also only selects those which have items with a NETAMOUNT greater than 200.


view BPOrders3View as select from BusinessPartner {
  PARTNERID,
  ORDERS[LIFECYCLESTATUS = 'N'].{ SALESORDERID as orderId,
                                  GROSSAMOUNT  as grossAmt,
                                  ITEMS[NETAMOUNT > 200].{ PRODUCT.PRODUCTID,
                                                           NETAMOUNT } }
};

The resulting generated view is:




Series Data

The final new feature in CDS to discuss today is series data. Series data allows the measurement of data over time, where the time points are commonly equidistant; it allows you to detect and forecast trends in the data. You can read more about the general functionality of series data in SAP HANA in the official documentation.


The major addition from the CDS side is that you can define series data within CDS entities. Here is a small example of the use of the series keyword (the entity name is illustrative, since it was not part of the original snippet):

entity MySeriesEntity {
  key setId : Integer;
  key t : UTCTimestamp;
  value : Decimal(10,4);
  series (
    series key (setId)
    period for series (t)
    equidistant increment by interval 0.1 second
  )
};


With the recent release of HANA SPS 10, it's time once again to take a quick look at the highlights of some of the new features. This blog will focus on the new development tool features in HANA SPS 10. I will say up front that the amount and scope of additions in SPS 10 for the developer topic isn't as large as what we saw in SPS 09. That isn't to say we aren't investing; in fact we have some really big things in store for the future, and it just so happens that most of our development teams were already working on SPS 11 and beyond. Therefore you will mostly see catch-up features and usability improvements in SPS 10 for the development topic area.


SAP HANA Web-based Development Workbench


Calculation View Editor

The first area I want to touch on is the calculation view editor. The calculation view editor was first introduced to the SAP HANA Web-based Development Workbench in SPS 09, but it wasn't feature complete. In SPS 10, we've spent considerable effort rounding out all the missing features. I won't go into details of all the new modeler features here, as that topic is covered separately by other colleagues. However, I still wanted to point out that you should now be able to create and maintain almost any calculation view from the web tooling, making complete end-to-end development in the SAP HANA Web-based Development Workbench a possibility.


Auto Save

One of the architectural differences between a local/client tool and a web-based one is fundamentally how they react when they get disconnected from the server or encounter some other unforeseen technical issue. In the SAP HANA Studio, a disconnect or crash usually still meant that your coding was safe, since it is first persisted on your local file system. IDEs in the web browser, however, need to take other measures to ensure your content isn't lost. With SPS 10, we introduce the option to auto-save your editor content in the local browser cache.


This is a configurable option which isn't enabled by default, since some organizations may have security concerns about the fact that the content is stored unencrypted in the browser cache. However, if you enable this option and the browser crashes, you accidentally close the browser tab, or you lose the connection to the server, your edited content isn't lost. Instead, the "local version" is visible in the version management tool and can be compared to the active server version or restored over the top of it.




GitHub Integration

Another major new feature for the SAP HANA Web-based Development Workbench in SPS 10 is GitHub integration. Although you can't replace the local repository with Git or GitHub (yet), this functionality does allow you to commit or fetch entire package structures to and from a public GitHub repository.



It's easy to use because it's so nicely integrated into the SAP HANA Web-based Development Workbench. Just choose the parent package from the Content hierarchy and then choose Synchronize with GitHub. You can then choose the GitHub repository and branch you want to either commit to or fetch from. Personally, I've already used this feature to share a few of the demo/educational projects which we use for the openSAP courses. You can also do version management from the SAP HANA Web-based Development Workbench between your local versions and the version of the object on GitHub (the GitHub version is the one with the G prefix):



Quick Fix

Most developers have a love/hate relationship with ESLint and similar suggestions and warnings. While we like the idea that these suggestions improve our code, we don't like the little red flags hanging around telling us that we have yet more work to do. This is where the new quick fix option in the SAP HANA Web-based Development Workbench is so nice. You can select multiple lines in a JavaScript file and choose Quick Fix. The system will then apply the fixes it thinks are necessary to remove the ESLint markers. For many small, stylistic warnings, this can be a great way to clean up your code in one fast action.




JSDoc

JSDoc is a standard for formatting comments within JavaScript which can be used to generate documentation; it is how we generate the JavaScript API documentation we publish. Now we integrate the generation of JSDoc directly into the SAP HANA Web-based Development Workbench.

It works for XSJS, XSJSLIB, and client side JS files. The JavaScript editor has a new option to help with the generation of JSDoc compliant function comments. There is also an option to generate a JSDoc HTML file for all the files within a package.



SQLScript Editor and Debugger

There are several enhancements to the SQLScript editor and debugger in the SAP HANA Web-based Development Workbench in SPS 10. You can now set breakpoints and debug from the editor without having to switch to the catalog tool. We also get full semantic code completion in the SQLScript editor. For more details on these enhancements, please have a look at Rich Heilman's blog: New SQLScript Features in SAP HANA 1.0 SPS 10.


Data Preview

The data preview tool in the SAP HANA Web-based Development Workbench has a couple of new usability features. First, there is the option to allow editing or creation of data directly from the data preview. This probably isn't a tool that you would want to give to end users to maintain business data, but for developers and admins it is a great new way to quickly enter test data or correct an emergency problem.



The data preview also introduces advanced filtering options to put it closer to the content preview features of the SAP HANA Studio.




SAP HANA Studio

As has been apparent for a few Support Package Stacks, most of our investment has been going into the web tooling and not the SAP HANA Studio. SPS 10 is no exception, but we still see a few usability improvements in the area of the Repository browser tab.

We wanted to streamline the start-up process, so every system connection automatically shows up in the Repository browser. In order to edit files, you no longer have to create a local repository workspace first; in SPS 10 you just start editing and you will be prompted to create the local workspace.


We also bring over the folder groupings for systems from the Systems tab.


We also added new options for filtering, grouping, and searching files in the Repository browser.


Recently I found myself needing to expose one of my HCP (HANA Cloud Platform) applications to the outside world without any authentication. While this is probably not the most common scenario, it can happen, and it of course brings a whole load of questions, like how to actually expose the UI in a freely accessible way and how to give limited access to your data.


So here we go - the scenario is this: I have a split app on HCP (not the trial version) with data residing on my HANA server.


Step 1 - Roles & Privileges


We need a standard .hdbrole and .analyticprivilege file. The first should be of the standard form, giving perhaps "SELECT" access to a schema or set of tables. It should also include your analytic privilege (which covers any attribute, analytic, or calculation views).



Figure 1

Sample .hdbrole file giving access to a schema and including an analytic privilege

* Note that normally I would never give UPDATE/INSERT/DELETE privileges to an anonymous user unless I had a good reason.

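Since only the figure caption survives here, the following is a minimal sketch of such a role file; the package, schema, and privilege names are placeholders:

```
role acme.data::AnonUser {
    // read-only access to the data schema (placeholder name)
    catalog schema "ACME_DATA": SELECT;
    // include the design-time analytic privilege (placeholder name)
    analytic privilege: acme.models::AP_Sales;
}
```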

Figure 2

Sample .analyticprivilege file giving access to an Analytic view I created




Step 2 - Create basic restricted user


In order to be certain that the connecting user only has access to what we want them to have access to, create a new user and only assign the following permissions:

  • Assign the role created in step 1 to the user
  • Assign "SELECT" access to the schema "_SYS_BIC"


Step 3 - Create a SQL connection for your app


Now we need to create an XS SQL connection configuration (.xssqlcc) file, which will be the object we use to connect our anonymous user to our project. This file simply contains one line: a description of the connection configuration.


Figure 3

Sample .xssqlcc file contents simply giving a description of the SQL connection configuration.

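The file contents are just a description, for example (the description text is a placeholder):

```
{
    "description" : "Anonymous SQL connection for the ABC application"
}
```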



Step 4 - Assign your restricted user to the SQL connection


Activating the .xssqlcc file from step 3 creates an entry in the system table "SQL_CONNECTIONS" in the schema "_SYS_XS". Performing a select on that table where the "NAME" field equals your file's package path will retrieve that entry; i.e. if your project is called "ABC", it is in the top-level package "XYZ", and your file is called myConfig.xssqlcc, then you search for the name "XYZ.ABC::myConfig".

Once you have verified the entry is in the table, you will see that the field called "USERNAME" defaults to blank. This is where we need to specify our restricted user. Do this by running a command like the following in a standard SQL console on the HANA server:

Figure 4

SQL statement to update the SQL Configuration of your app to run as your restricted user.

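Using the example names from above, the statement would look something like this:

```
UPDATE "_SYS_XS"."SQL_CONNECTIONS"
   SET USERNAME = 'DEMO_ANON'
 WHERE NAME = 'XYZ.ABC::myConfig';
```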

In this case my restricted user is called DEMO_ANON.



Step 5 - Make your app use the SQL connection for all access attempts

Finally, we set up our app to use this connection for anybody who attempts to connect to it. In the .xsaccess file we set our authentication method to null and point anonymous_connection at our .xssqlcc configuration.


Figure 5

Updated .xsaccess file to use anonymous authentication via our XSSQLCC file.

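The relevant part of the .xsaccess file then looks roughly like this (using the example configuration name from step 4):

```
{
    "exposed" : true,
    "authentication" : null,
    "anonymous_connection" : "XYZ.ABC::myConfig"
}
```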


Once all this is complete, you should be good to go for anonymous access to your XS application. Some of this configuration is also available via SAP-provided configuration apps (such as the XS admin console at /sap/hana/xs/admin on your server), but this is the workflow that works for me :-)


Any questions/comments please feel free to shout

Enhancements to SQLScript Editor & Debugger in the SAP Web-Based Development Workbench


In SPS 10, the SQLScript editor and debugger in the SAP Web-based Development Workbench have been enhanced in several ways to help the developer be more productive. We introduced the editor in SPS 9 with basic keyword hints, but in SPS 10 we've expanded this to include code snippets and semantic code completion, very similar to what we introduced in the SAP HANA Studio in SPS 9. Basically, if you want to construct a SELECT statement, you simply type the word SELECT and hit CTRL+SPACE. You will then get a list of possible code snippets to choose from.


Select the snippet you wish to insert and hit ENTER; the code snippet is inserted into the procedure. You can then adjust it as needed.


Another feature that we’ve added to the SQLScript editor in the web-based development workbench is semantic code completion.  For example, if you need to call a procedure, you can simply type the word CALL and hit CTRL+SPACE, and you will get a drop down list of procedures. Simply double click on the object you want to insert.  This is context sensitive, so it works quite well in other statements as well.



With SPS 9, we introduced the ability to debug procedures within the web-based development workbench, but only from the catalog. As of SPS 10, you can now debug design-time artifacts (.hdbprocedure files) as well. You simply open the .hdbprocedure file and set your breakpoints. You can then right-click and choose "Invoke Procedure" to run it from the SQL console. The debugging pane is shown and execution stops at your breakpoint. You can then, of course, single-step through the code and evaluate values.





COMMIT & ROLLBACK

One of the many stored procedure language features that a developer expects in any database is the concept of COMMIT and ROLLBACK. Up until now we did not support COMMIT/ROLLBACK in SQLScript. As of SPS 10, we now support the use of COMMIT/ROLLBACK within procedures only, not in scalar or table user-defined functions (UDFs). The COMMIT statement commits the current transaction and all changes before the COMMIT statement. The ROLLBACK statement rolls back the current transaction and undoes all changes since the last COMMIT. The transaction boundary is not tied to the procedure block, so if there are nested procedures that contain COMMIT/ROLLBACK, then all statements in the top-level procedure are affected. For those who have used dynamic SQL in the past to get around the fact that we did not support COMMIT/ROLLBACK natively in SQLScript, we recommend that you replace all occurrences with the native statements because they are more secure. For more information, please see the section on COMMIT & ROLLBACK in the SQLScript Reference Guide.
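A small sketch of the new statements inside a procedure; the table, column, and parameter names here are made up for illustration:

```
CREATE PROCEDURE update_status( IN im_id integer )
AS
BEGIN
  UPDATE "MYTABLE" SET STATUS = 'DONE' WHERE ID = :im_id;
  COMMIT;     -- persists all changes made up to this point

  UPDATE "MYTABLE" SET STATUS = 'ERROR' WHERE ID = :im_id;
  ROLLBACK;   -- undoes all changes since the last COMMIT
END;
```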

Header Only Procedures/Functions

We've also introduced the concept of "header only" procedures/functions in SPS 10. This addresses a problem when creating procedures/functions that depend on one another: you can't create the one procedure/function before the other. Basically, this allows you to first create procedures/functions with minimal metadata using the HEADER ONLY extension. You can then go back and inject the body of the procedure/function using the ALTER PROCEDURE statement. The CREATE PROCEDURE ... AS HEADER ONLY and ALTER PROCEDURE statements are only used in the SQL console, not in design-time artifacts. Below is a sample of the basic syntax; for more information, please see the section on Procedure & Function Headers in the SQLScript Reference Guide.

CREATE PROCEDURE test_procedure_header( in im_var integer,
                                        out ex_var integer ) as header only;

ALTER PROCEDURE test_procedure_header( in im_var integer,
                                       out ex_var integer )
LANGUAGE SQLSCRIPT AS
BEGIN
   ex_var = im_var;
END;



SQL Inlining Hints

The SQLScript compiler combines statements in order to optimize code. SQL inlining hints allow you to explicitly enforce or block the inlining of SQL statements within SQLScript. Depending on the scenario, execution performance can be improved by either enforcing or blocking inlining, using the syntax WITH HINT(NO_INLINE) or WITH HINT(INLINE). For more information, please see the section on the NO_INLINE and INLINE hints in the SQLScript Reference Guide.
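For example, to keep a table-variable assignment from being combined with the statements around it (the table name is a placeholder):

```
lt_result = SELECT * FROM "MYTABLE" WITH HINT(NO_INLINE);
```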


Multiple Outputs from Scalar UDFs

In SPS 8, we released the ability to call scalar functions in an assignment statement, but there was a limitation which only allowed you to return one output parameter per call. In SPS 10, you can now retrieve multiple output parameters from a single call.


The following function output_random_number has two return parameters called ex_rand1 and ex_rand2.


CREATE FUNCTION output_random_number( )
        RETURNS ex_rand1 integer,
                ex_rand2 integer
AS
BEGIN
  ex_rand1 = ROUND(TO_DECIMAL(1 + (999999-1)*RAND()),2);
  ex_rand2 = ROUND(TO_DECIMAL(1 + (999999-1)*RAND()),2);
END;



In this procedure, we will call the function and retrieve both return parameters in one call.


CREATE PROCEDURE test_scalar_function(
          OUT ex_x integer, OUT ex_y integer)
AS
BEGIN
    (ex_x, ex_y) = output_random_number( );
END;




You can also retrieve both values separately with two different calls, referencing the name of the return parameter.


CREATE PROCEDURE test_scalar_function(
         OUT ex_x integer, OUT ex_y integer)
AS
BEGIN
    ex_x = output_random_number( ).ex_rand1;
    ex_y = output_random_number( ).ex_rand2;
END;



Table Type for Table Variable Declarations

In SPS 9, we introduced the ability to declare a table variable using the DECLARE statement. At that point, you could only define the structure explicitly inline; you could not reference a table type from the catalog or from the repository. In SPS 10, you can now do so. In the example below, lt_tab is declared referencing a table type from a CDS (.hdbdd) file.

CREATE PROCEDURE get_products( )
LANGUAGE SQLSCRIPT AS
BEGIN

declare lt_tab "";

lt_tab = select * from "";

select * from :lt_tab;

END;




Anonymous Blocks

Finally, the last feature I would like to introduce is the concept of anonymous blocks. This allows the developer to quickly write and execute SQLScript code in the SQL console without having to create a stored procedure, which is very useful for trying out small chunks of code during development. You can execute DML statements which contain imperative and declarative logic. Again, there is no lifecycle handling (no CREATE/DROP statements) and no catalog object. You also cannot use any parameters or container-specific properties such as language or security mode. The syntax is very simple: you basically use the word DO, followed by a BEGIN/END block. Then you simply put your SQLScript code in the BEGIN/END block and execute it. For more information, please see the section on Anonymous Blocks in the SQLScript Reference Guide.
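For example, the following anonymous block declares a variable, fills it, and returns a result set, all directly from the SQL console:

```
DO
BEGIN
  DECLARE lv_today date;
  -- DUMMY is the standard one-row system table
  SELECT CURRENT_DATE INTO lv_today FROM dummy;
  SELECT :lv_today AS today FROM dummy;
END;
```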



Dynamics of HANA Projects

Posted by Rohit Kumar Jun 22, 2015

HANA Projects – What is so special?


What is all the buzz around the topic of managing and running a HANA project? Come on, HANA is just another product in line with SAP's ERP product lines and innovations. We have tons of information and loads of examples around us on how to run an ERP project. SAP as an ERP is not new to the market, and to date innumerable implementations of various types, categories, and sizes are all around us.


It's just the revolutionary database technology HANA at its core which brings in unmatched computation speed. Lift and shift an existing implementation from a conventional database to the latest SAP application running on HANA and boom! It runs at blazing speed. Even going to extremes like S/4HANA, it's more or less the same story, with the added advantage of converging multiple separate applications like CRM and Simple Finance as plugins into a single application and database platform.


Everything mentioned before this sentence represents the areas of ignorance which create issues and at times crash HANA projects.


Remember, HANA is not only about speeding things up using a high-capability memory-to-CPU architecture; it is moreover about:

a. Changing the way organizational processes are first simplified and then reflected in design strategies

b. Coming up with new-age architecture models like Data Vault and Layered Scalable Architecture Plus (LSA++), to name a few

c. Understanding the sizing aspects correctly, to avoid oversizing or undersizing and the resulting impact on performance



In short, we can say that the conventional way of running ERP projects, especially Business Warehouse projects, which used to be mostly data-driven, primarily technical implementations, does not suit the mood and meaning of a HANA implementation.


It's not only the speed of the calculation engines that pumps in value; other factors, like the actual usability of the solution, proper pivoting of the data-level architecture, an aligned architecture, and an understanding of how HANA actually works at the engine level, all contribute to a proper implementation and can decide success or failure.



There are broadly two important factor areas:


1. Non-technical aspects, like the execution plan, proper resourcing, and integration of core business requirements into the deliverables.

2. Technical aspects, like proper sizing, selection of a packaged or tailored appliance, architecture, striking a balance between CPU and memory consumption, and a wise choice of data provisioning.



Where is the world heading to?


The direction in which businesses are currently heading holds the key to how HANA projects should be handled.


There was a time when it was enough to have an ERP solution connecting all departments, able to generate reports to manage an organization and, to some extent, help in managing its operations.


With time we gained more speed, covered more aspects, took on more complexity, and went further to include planning, CRM and SRM on top of the core modules.


The expertise required was to unpack bundled solutions, or to weave customized solutions around them over time, recursively built on top of each other. Projects ran to configure and implement new reporting requirements, standard or custom. It was largely a two-dimensional project requirement running between these two aspects.


Optimization was directed towards:

a. Demonstrating expertise by building similar solutions for various customers, and optimizing by using tools to automate frequently repeated work

b. Creating templates for what almost everyone required, with options for customization

c. Reducing FTEs and time by using models such as the factory model

d. Delivering one report for every business user, at times out of temptation rather than actual requirement



From the customer's perspective, too, optimization, getting the most out of the investment, and the execution of projects were limited to a per-department view.


Reporting development in data warehouses was limited to smaller data sets and based on aggregation, owing to performance limitations.


Where we are heading now has a huge impact on the usability of these solutions, especially now that HANA-based solutions are available.


We have overburdened systems running expensive ETLs and aggregations over huge data volumes, of which, as some research suggests, as little as 1% is valuable information.


Where we are heading demands powerful, real-time decision capabilities. We need not only information derived from the OLTP systems in our own network; data flowing in from various other real-time sources is just as important. Organizations are growing in size, which translates into more complex systems. Planning on even day-old data is considered old-fashioned, or risky. Understanding customer sentiment within minutes is a necessity, and predicting customer mood well in advance is core strategy. The definition of what business an organization is actually in has also changed with time: a shift has taken place from being a product company to serving the customer through the product. A car manufacturer can no longer focus merely on launching cars on incremental gains in technology; it has to maintain, understand and cater to the needs of existing customers through superlative communication.



Coming back to our discussion of the differentiating factors in HANA project management: it is this scenario, with customers going for these implementations, that has wider implications for the dynamics of HANA implementations.


A HANA implementation cannot be just a technical, department-wise, report-generating project. It is a fusion of the art of business understanding and a finer, technologically supported initiative.



So what is different here? The fundamental difference.


The best trained and run SAP implementation partners used to believe that if you collected all aspects of data in the OLTP systems and dumped it in smaller subsets into the warehouse, you had a successful implementation. That was the view of client CIO and CTO offices as well.


There are many failed HANA implementations, and the reason is that they were executed with the mindset described above.


The fundamental differences are:

a. A HANA implementation should aim at the greater goal of higher business usability and enhanced simplicity of design; it is a strategic organizational goal.

b. HANA is no magic wand. It returns good ROI if used properly, and not otherwise.

SAP recommends a project execution that increases the quality of the value proposition to the customer while decreasing the cost impact. The decreased cost also carries a factor of decreased execution time.


Prevalent Challenges


There are some prevalent challenges to keep in mind while executing, or even while bidding for, a HANA project. There is always a cloud of doubt around any new technology, and at times the gap in understanding between the marketing and sales teams and the delivery team is also to blame.


Some of the common prevalent challenges are:

a. HANA is hot; getting the right skills is hotter

b. Whether to go with a pre-packaged HANA appliance black box, or to utilize existing hardware and go for a tailor-made HANA server

c. The cost of investment for HANA is high

d. The choice of how much goes on premise versus into the cloud, and the reasoning behind it

e. Will this actually reduce my TCO and yield tangible benefits in terms of ROI?

f. What about the existing hardware?

g. The customer's belief that lift and shift is enough

h. Reports run faster; what else is the benefit?



Some common mistakes


There are some common mistakes when we talk about HANA projects:


a. As Vishal Sikka put it, the beauty of HANA is: "you run a process in 100 seconds using one core; run it on 100 cores and it runs in 1 second." This also teaches a lesson about sizing, because the same effect works in reverse: if 100 concurrent users each run a report in 2 seconds, then with 1000 users competing for the same cores each run will take several times longer. So concurrency should be factored in as a sizing parameter, together with the expected complexity and runtime of the reports.
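
The core-scaling intuition can be sketched with a toy fair-share model (an illustration only: the function name is mine, it assumes a query parallelizes perfectly across whatever cores it is given, and real HANA scheduling is far more nuanced):

```javascript
// Toy model: every concurrent user gets an equal share of the cores,
// and a query's runtime is its total CPU work divided by its core share.
function estimateRuntimeSeconds(workCoreSeconds, totalCores, concurrentUsers) {
  var coresPerUser = totalCores / concurrentUsers;
  return workCoreSeconds / coresPerUser;
}

// 100 core-seconds of work on 100 cores: 1 second for a single user...
estimateRuntimeSeconds(100, 100, 1);   // 1
// ...but 100 seconds each once 100 users compete for the same cores.
estimateRuntimeSeconds(100, 100, 100); // 100
```

The point of the sketch is only that runtime grows with concurrency once the cores are saturated, which is why concurrency belongs in the sizing exercise.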


b. The HANA engine does not create persistent cubes; it generates metadata to pick up data at runtime. If CPU utilization is very low compared to memory utilization, that too is a design error and might end up skewing the sizing.


c. Too much real-time data reporting and unnecessary big-data combination. Not considering the strategic utility of big data can end in bottlenecked, memory-overflowing systems with literally no meaningful information flowing out of them.


d. Scrap is not good for the home or for society, and likewise for HANA implementations: failing to shed unnecessary flab (read: unnecessary processes and reports) before moving to HANA.


e. As-is migrations to HANA.


f. Wasting effort on things that were never used.


g. Long-duration planning and execution projects. In an ever-changing market, it is sometimes too late for a client who, after spending a little too much time, finds that the strategic and competitive edge of the new solution is already lost.



Best practices



There are some best practices which, if not followed, may result in a HANA project failing to a greater or lesser extent.



A. The focus of a HANA implementation is normally not only to speed up functionality and reports, but to achieve the higher strategic goal of using more of the data for time-critical responses and monitoring, for faster decision making. This also requires generating less flab, keeping as far as possible only what is required. It all needs to be executed and delivered in a shorter period of time for better ROI and business value. This translates into a project methodology with 200% more direct business involvement, and an architecture moulded around it. A shift in mindset and project execution is required, from data-centric technical execution to a business-facing method.


B. As-is and lift-and-shift are technical possibilities for a non-disruptive migration of existing developments to the new HANA environment. They are purely technical terms and most of the time do not result in an optimized migration. SAP also recommends a lift and shift only after removing as much clutter from the system as possible. After lifting and shifting, HANA-optimized design methodologies should be followed to get the benefit out of HANA, which adds implementation cost and time as well.


C. Trying to sell the customer just a lift and shift is as bad as spreading bad words about yourself in the market.


D. Continuing from point A: involve business, and if possible field sales and market staff SPOCs, in workshops to understand the strategic and customer-facing key areas. Business staff sitting in the corporate office can often give a complete understanding of processes and their utility, but customer-facing key areas are best understood by field staff, and at times they disclose critical analytic inputs.


E. Since a HANA implementation carries high strategic value-proposition expectations and is viewed under the microscope from every level of the client organization, make sure you understand the greater overall expectation of the client. For example, if a telecommunications client is executing a project to deliver reports on network data, do not just present that data with standard reports; understand the business value expected from it. While you are delivering solutions such as region-wise call counts, the mix of caller types active per region, or the fraction of local and long-distance calls, the client might simply want to see call drop-out rates quickly so as to plan customer retention. The value proposition to the client is shaken by excessive timelines and cost when focused solutions are not delivered.


F. Sizing should be done carefully. HANA stores data in compressed format; it stays compressed and is decompressed, very quickly, only when required and only in the CPU cache. The overall footprint of the data is therefore reduced significantly, even down to 40% of the original, and with Simple Finance it has gone down further. At the same time, sizing should be sufficient to support resource distribution among multiple concurrent users.
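
The compression and concurrency considerations can be combined into a back-of-the-envelope check (a rough sketch only: the function name, the example compression factor of 4 and the rule of thumb that hot data should fill at most about half of RAM are illustrative assumptions; real sizing should follow SAP's sizing reports and Quick Sizer):

```javascript
// Rough RAM estimate: compress the source data, then double the result
// so that hot data occupies at most ~50% of memory, leaving headroom
// for the working memory of concurrent queries.
function estimateHanaMemoryGB(sourceDataGB, compressionFactor) {
  var hotDataGB = sourceDataGB / compressionFactor;
  return hotDataGB * 2;
}

estimateHanaMemoryGB(1000, 4); // 1 TB of source data -> ~500 GB of RAM
```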


G. Proper architectural guidelines should be implemented, e.g. LSA++ to reduce the layers in the warehouse and get the maximum of HANA's store-once, use-many-times benefit.


H. In SAP BW on HANA implementations, the active/inactive data settings are mostly left unused; they allow data to be actively flushed from memory to disk during a bottleneck. As a rule of thumb, the hot data in the system should amount to at most 50% of the memory size.



Other Challenges


There are some other challenges in running HANA projects as well. For first-timers, the Basis team is at times not ready for HANA, and that becomes a bit of an issue. A heterogeneous environment with multiple SAP and non-SAP systems can also be a challenge. Choosing the correct data-extraction solution, transforming data before bringing it into HANA, and the de-normalization requirements of the data are important challenges too.


A very important aspect, one that questions the knowledge level and maturity of project teams, is suggesting or planning what goes to the cloud and what remains on premise. There are customers who want the best of both worlds, but the split should be decided based on the line of business, the department and the type of service the organization provides.


At times customers fail to understand, are not consulted well, or face financial constraints around the value of an archival strategy and near-line storage (NLS). With huge volumes of incoming data, archival and NLS play a strategic role in data management, especially for retail, pharma and telecommunications customers. Huge data accumulates in no time, performance is impacted, and the new system loses its sheen and value.


At times, due to negligent planning, disruption of systems when moving to production, system outages, and the time taken to fix them are also reasons why a HANA project, like any other project, fails. HANA, being new and with less expertise available, sits in the higher-risk area here.


Creating a Centre of Excellence in the service provider's organization, and optionally in the client's, is important. In its absence at the service provider, knowledge remains scattered, improperly documented and at times locked up with individuals, which hampers organizational maturity, and the focus on quality control and metrics is mostly missing. Leveraging COEs helps build up these capabilities and reach maturity, with additional focus on tool and methodology build-up.


Remember that customer expectations of a HANA project are always extremely high; it is always a high-focus project. The customer's target benefit is a strategic organizational upgrade, not only a technical one. This makes it extremely important to create implementation designs only with a deep understanding of the business processes and the bigger goal expected.


On the other hand, HANA being an altogether new kind of database and application platform, a sound understanding of the technology and bringing the right skills on board is extremely important.


At the organizational level, building COEs and developing competencies not only helps in proper project execution, but also builds quality-focused, mature strategic expertise.

UPDATE - SAP Idea Incubator is transforming into a new program, a new blog will be posted to explain the new process shortly. Please check back later. Thank you all for your participation.

SAP Idea Incubator is a crowdsourcing program that brings together customers with fresh ideas about how to use their data – and innovators who can quickly build prototypes of solutions that reflect these ideas. The result? Fast, cost efficient problem solving.


Join SAP Idea Incubator today!


How to participate?

Do you already have an idea about how to better utilize your data? Send us a brief description of it via our submission form.

Do you see an idea that interests you? You can share it in your circles, discuss your thoughts in the discussion board, or submit a proposal.


What are the benefits?

For idea submitter

  1. Get global data scientists and developers to work on your idea
  2. Leverage the benefits of crowdsourcing, with confidence in data security, on SAP's cutting-edge technology
  3. Get working prototypes to prove the feasibility of your idea


For innovators who submit prototypes for ideas

  1. Build personal skills, knowledge and expertise; get hands on real customer data and solve real business problems
  2. Get a chance to participate in the HANA Distinguished Engineer program, with exposure to a broader business community
  3. Monetary and/or other sponsorship for the winner of every idea


Plus, now you can get badges!


Level 1: I am an SAP HANA Idea Incubator Fan


  • “Bookmark” this blog (click on the Bookmark function on the upper right hand side of this blog)

Level 2: I am a Contributor to SAP HANA Idea Incubator


  • Submit your idea here
  • and Post a blog on SCN describing your idea or proposal

Level 3: I am a Winner at SAP HANA Idea Incubator


  • Be selected as a winner for the proposal submission. Congratulations!


Some related links


Please let us know in the “Comment” section if you have any questions. We can’t wait to see your participation!


Special thanks to the SCN Gamification team---Jason Cao, Jodi Fleischman, Audrey Stevenson, and Laure Cetin for helping to create a series of special SAP HANA Idea Incubator missions and badges and making this happen!



A couple of weeks ago I was moving code from one HANA instance to another, trying to keep them in sync, and I thought there might be a better alternative for comparing the contents of the repos across my systems to ensure that the files matched. After some digging and not finding a solution, I decided to write a small tool to do just this, called Syscompare. It is an open-source HANA app which uses the new File API to compare the files on each system and display the differences.


You can read more about the application here, and find the files for the HANA app in Github.





- Compare repos across HANA instances

- Display file differences in the application

- Highlights missing files on each instance




- Set up the two XSHttpDest files

- Specify the source and target systems (using the XSHttpDest file names)

- Specify the repo to compare


Once the processing is complete the app will show a summary of differences:


Screen Shot 2015-06-16 at 10.18.20 PM.png


Screen Shot 2015-06-16 at 10.20.12 PM.png


Screen Shot 2015-06-16 at 10.20.51 PM.png




You can check out the GitHub source code here: paschmann/Syscompare · GitHub


If you prefer to download the Delivery Unit - please click here and provide your email address (this way I can keep you up to date with changes): metric² | Real-time operational intelligence for SAP HANA


Interested in contributing to the project? Please feel free to fork or submit a pull request.

...continuing from Internet of Things Foosball - Part 1


We got a team of four undergraduates who applied for this project. They immediately recognized that this final project was no ordinary one: it would mean a great deal to the people in the company, and it would challenge their technical skills to the highest level.


They would have to become familiar with a Raspberry Pi and its sensors, build up knowledge of SAP HANA and its endless possibilities, and evaluate whether SAPUI5 was suitable for the web app. As much as enthusiasm drove the team to get started on both hardware and software, they first had to work through some learning material.


When all members of the team had gone through the basics (i.e. the blogs and tutorials from Thomas Jung), it was time for business. At this point they understood the role of each component better and finalized the architectural view of the whole project. The architectural mockup looked like this.



They soon finalized the solution on the Raspberry Pi. Its role was simple: capture the goal events and send an HTTP POST to an API on SAP HANA. The APIs were created with the SAP HANA XS application framework.

The API was built upon a simple DB model.



Although this model was quite simple, it was enough for our original specifications. Now it was time to let the imagination run and decide what statistics we wanted to derive from the data captured with this model. We decided on the following:

  • Most wins
  • Most played games
  • Most scored/conceded goals
  • Highest/Lowest win percentage
  • Quickest/Slowest goals
  • Winning/Losing streak
  • Greatest comeback
  • Wall of Shame
  • Best partner, Worst partner. Easiest opponent, Toughest opponent etc.

At the beginning it was almost impossible to see that all of this would be possible with such a simple data model and the power of SAP HANA.


However, this list of statistics did not show which player ranked highest, so the team implemented an Elo score algorithm, which was a brilliant idea. Not only did they implement a static Elo score but also a historical Elo view.
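
The exact formula the team used is not shown here, but a standard Elo update, the likely basis, looks something like this (the function name and the K-factor of 32 are my assumptions):

```javascript
// Standard Elo rating update for a single game.
// scoreA is 1 if player A won, 0 if A lost, 0.5 for a draw.
function eloUpdate(ratingA, ratingB, scoreA, k) {
  k = k || 32; // K-factor controls how fast ratings move
  var expectedA = 1 / (1 + Math.pow(10, (ratingB - ratingA) / 400));
  var delta = k * (scoreA - expectedA);
  return [ratingA + delta, ratingB - delta];
}

// Two equally rated players: the winner takes 16 points from the loser.
eloUpdate(1200, 1200, 1); // [1216, 1184]
```

Storing the rating pair after every match is what makes the historical Elo view possible: the rating trajectory is just the sequence of updates over time.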


As time passed and the delivery date drew closer, the team decided not to use SAPUI5. Instead they built the UI with AngularJS, as they were more familiar with it; the decision was also driven by the requirement to use WebSockets for immediate score updates.

The landing page would show an ongoing game and the routes to the player/team selection, the statistics view, etc.



It is an understatement to say that the final outcome was much more than we had hoped for, as can be seen in this video.

The Insurance Claims Triangle / Loss Triangle / Run-off Triangle


Before we delve into the prediction, the scripting and all the interesting stuff, let's understand what the claims loss triangle really is. An insurance claims triangle is a way of reporting claims as they develop over a period of time. Typically, claims are registered in a particular year and the payments are made over several years, so it is important to know how claims are distributed and paid out. An insurance claims triangle does just that. Those familiar with the Solvency II norms set by EIOPA will know the claims triangle report: it is mandated and part of the Quantitative Reporting Templates (QRTs).





Fig : 1 - The claims triangle


In figure 1, the rows signify the year of claim registration and the columns the development year. Say we are in the year 2013 and looking at the claims triangle. The first row covers the claims registered in 2005. The first column of the first row (header 0) gives the claim amount paid out by the insurance company in that same year; the second column gives the amount paid out the following year (2006), and so on until the year before the reporting year, i.e. 2012. The second row does the same for claims registered in 2006. Logically, each successive row has one column fewer, which gives the report its triangular shape and hence its catchy name. A claims triangle can be of two types: incremental or cumulative. Incremental means each cell holds the amount paid at that specific intersection of registration year and payment year; the cumulative form holds the total claims paid out as of that intersection.
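
The relationship between the two forms can be illustrated with a small sketch (the function name and sample figures are mine, purely for illustration): each cumulative cell is the running sum of the incremental cells up to that development year.

```javascript
// Convert an incremental claims triangle (rows = registration years,
// columns = development years) into its cumulative form.
function toCumulative(incremental) {
  return incremental.map(function (row) {
    var runningTotal = 0;
    return row.map(function (paid) {
      runningTotal += paid;
      return runningTotal;
    });
  });
}

// Payments of 100, then 50, then 25 accumulate to 100, 150, 175.
toCumulative([[100, 50, 25], [200, 80], [150]]);
// -> [[100, 150, 175], [200, 280], [150]]
```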


The prediction below is based on the cumulative form of the claims triangle; we base our logic on a set of records stored at the cumulative level. I have uploaded the input data as a CSV to this blog to save you time.


The Prediction


The interesting part is filling in the second triangle of the rectangle (if you will). Typically R is used for this work, and that would of course be the easier and more reliable way. If you are interested in the R way, I suggest viewing the videos presented on the SAP Academy channel. It was out of sheer curiosity that I planned an SQLScript-based implementation of the loss triangle. Let's try to understand the algorithm first.


As an insurance company, it is useful to know what you will have to pay out in claims in the years to come. It helps the company maintain financial reserves for future liabilities and reduce solvency risk. Quite a few statistical models are used to predict the future numbers, but the most widely accepted is the chain ladder method presented by T. Mack.


Let's look at the math behind the prediction. I'll candidly admit that my math is not too refined, so I would rather explain it in words. The algorithm has two parts: building the CLM estimators and the prediction itself.


Phase 1 : Derivation of the CLM (chain ladder method) estimator


The first phase determines the multiplication factor for each column, which is later used for the prediction.


Fig : 2 - CLM Estimator derivation



The figure above shows the CLM estimator for each column. The math is a rather simple division of adjacent columns over an equal number of cells: the CLM estimator for column 3 is the sum of the cumulative values in column 3 divided by the sum of those in column 2, excluding the last cell of column 2. The same exercise is repeated over every adjacent pair of columns to build the estimators.
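
As a sketch of the derivation just described (the function name and sample triangle are mine): each factor is the column sum divided by the previous column's sum, taken over the rows where both cells are observed.

```javascript
// Derive the chain ladder development factors from a cumulative triangle.
function clmEstimators(cumulative) {
  var devYears = cumulative[0].length;
  var factors = [];
  for (var col = 1; col < devYears; col++) {
    var numerator = 0, denominator = 0;
    for (var row = 0; row < cumulative.length; row++) {
      // only rows where both the column and its predecessor are observed
      if (cumulative[row].length > col) {
        numerator += cumulative[row][col];
        denominator += cumulative[row][col - 1];
      }
    }
    factors.push(numerator / denominator);
  }
  return factors;
}

// For [[100,150,175],[200,280],[150]]:
// factor 0->1 = (150+280)/(100+200), factor 1->2 = 175/150
clmEstimators([[100, 150, 175], [200, 280], [150]]);
```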


Phase 2 : Predicting the values


The prediction is a recursive exercise, done one diagonal at a time. Each diagonal signifies the claim payments of one particular future year. Looking again at figure 1, the first empty diagonal holds the predicted values to be paid out in 2013 for the claims registered across the different years; the next diagonal is for 2014, and so on.



Fig : 3 - Prediction


Each predicted value is calculated as the product of the CLM estimator of the target column and the amount in the preceding column of the same row. Once an entire diagonal is calculated, the next diagonal is calculated the same way, but based on the previously predicted one. The process is repeated until the entire rectangle is complete.
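
Continuing the sketch from the estimator step (again with names and data of my own choosing): since each new cell depends only on its left neighbour and the column factor, filling each row left to right produces the same rectangle as filling diagonal by diagonal.

```javascript
// Complete a cumulative triangle into a full rectangle using the
// chain ladder factors (factors[c - 1] maps column c-1 to column c).
function fillTriangle(cumulative, factors) {
  var devYears = cumulative[0].length;
  return cumulative.map(function (row) {
    var full = row.slice();
    for (var col = row.length; col < devYears; col++) {
      full.push(full[col - 1] * factors[col - 1]);
    }
    return full;
  });
}

// With the factors derived above, the last row's 150 develops to
// 150 * (430/300) = 215, then 215 * (175/150) = ~250.83.
fillTriangle([[100, 150, 175], [200, 280], [150]], [430 / 300, 175 / 150]);
```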




The SQL Scripting


Now to the meat of this blog. I made a major assumption in the example shown here: I assume the cumulative values for the run-off triangle are available in a table. The reason is that the claims and payments data could live in a single table or in multiple tables, depending on how the insurance data model is implemented. An SQL view would have to be written to build the cumulative values, and the whole SQL script here could then be pointed at it. For simplicity I just use a single table here.


The whole implementation is on a script based calculation view.






Fig : 4 - Calculation view semantics


As you see above, the calculation view outputs five fields:

  • Claim_year - Year of claim registration
  • Pymt_year - Year of payment(cumulative)
  • Dev_year - Claim development year
  • Predict - A flag to distinguish predicted and historical values
  • Amount - Cumulative amount

Script -> Variable declarations



Fig : 5 - Variable declaration


Above is just the bunch of variables used in the calculations below. I use an array of type REAL to store the CLM estimators.



Script -> Variable definitions



Fig : 6 - Variable definition


What you see above builds three variables: the minimum year, the maximum year for the calculation, and their difference. The next component builds a table variable t_claim_table from the pre-calculated cumulative claim amounts stored in the CLAIMS_PAID table. This part of the code can be modified to suit the underlying data model and calculation requirements. For example, to run the claims triangle as of the current date, the max value could be selected as select year(current_date) from dummy, and the min could come from an input parameter or from the table itself, as done here. For the simplicity of my simulation, I have hard-coded the max and obtained the min from the table. The select query on CLAIMS_PAID could likewise be changed to match the data model used. Let's assume we have got over this hurdle of building the input data.


Script -> Building the CLM estimator



Fig : 7 - CLM Estimator



To understand the math behind the CLM estimator, I recommend reading the section "The Prediction" above. I use a while loop to iterate over subsequent columns, build the sums, and in the outer query divide them to arrive at the CLM estimator. Each value is then saved into the array. The iteration runs from 0 up to the maximum number of years the run-off triangle spans; for our example, looking at figure 1, this is 2012 - 2005 = 7. So the while loop runs 7 times to calculate the 7 CLM estimator values seen in figure 2. The variable 'i' helps select the correct column. At the end of the loop, all 7 CLM estimator values are in the array.



Script -> Predicting the values



Fig : 8 - The prediction


To understand the math behind the prediction done here, I again recommend the section "The Prediction" above. Two nested loops do the work: the inner loop calculates each cell within one diagonal at a time, and the outer loop runs once per diagonal until the rectangle is filled. The three variables 'i', 'j' and 'h' control the calculation of each value. The CLM estimator is read from the array filled in the previous step. I use a UNION to append the predicted records to the existing historical claims; this way, once a diagonal has been predicted, those values can be used to build the next diagonal. At the end of the loops, the table variable t_claim_table holds the historical as well as the predicted values, filling up the rectangle.



Script -> Finally the output



Fig : 9 - Output


The var_out variable is finally filled to be displayed as output. The case statement checks whether a value is predicted or historical; this flag is later used for the filter in the report.


Visualization - SAP Lumira Reports


Putting all the moving pieces together, Lumira is the perfect tool to show the output. I used a cross-tab report to reproduce the triangular layout, with the development year along the columns and the claim registration year along the rows. A filter additionally makes the report even more interactive.



Fig : 10 - SAP Lumira report showing loss triangle with only historical values





Fig : 11 - SAP Lumira report showing loss triangle with the predicted values


I am quite keen to hear your feedback and suggestions on whether there is a better way to script this (without taking the shortcut of calling R, of course).

Hi there,


I recently installed the latest SLES version provided by SAP for Business One (at the moment SLES 11 PL 3), which works just fine. After some weeks of continuous operation, I discovered some packet loss when running ping, with the message "No buffer space available".


It turned out that the allocated network buffer memory was hitting its maximum. The solution was to extend the amount in the file




Then restart the network interface for the change to take effect.


Hope this was useful.




Alejandro Fonseca

Twitter: @MarioAFC

Hi folks,


I want to share my experience with the two XSJS-engine database connection implementations:

  • $.hdb (since SPS 9)
  • $.db


The Story:


A few days ago I used the new HDB interface of the XSJS engine to process and convert a result set in an XSJS service. What makes this service problematic is the size of the result set. I am not very happy with the purpose of the service, but we somehow need this kind of service.


The result set contains about 200,000 rows.


After setting everything up, and after multiple tests with small result sets (< 10,000 rows), everything worked fine with the new $.hdb implementation. But requesting the first real-sized set caused heavy trouble on the machine (all XSJS connections) and the request never terminated.


As a result, I found myself implementing a very basic XSJS service that fetches all files in the HANA repository (because by default there are more than 40,000 elements in it). I duplicated the service to get one $.db and one $.hdb implementation with almost the same logic.


The Test:


HDB - Implementation


// >= SPS 9 - HDB connection
var conn = $.hdb.getConnection();
// values to select (column list matches the field mapping below)
var keys = [ "PACKAGE_ID", "OBJECT_NAME", "OBJECT_SUFFIX", "VERSION_ID",
             "ACTIVATED_AT", "ACTIVATED_BY", "EDIT", "FORMAT_VERSION",
             "DELIVERY_UNIT", "DU_VERSION", "DU_VENDOR" ];
// query
var stmt = conn.executeQuery( ' SELECT ' + keys.join(", ") + ' FROM "_SYS_REPO"."ACTIVE_OBJECT"' );
var result = stmt.getIterator();
// iterate over the result set and build the output list
var aList = [];
while (result.next()) {
    var row = result.value();
    aList.push({
        "package" : row.PACKAGE_ID,
        "name" : row.OBJECT_NAME,
        "suffix" : row.OBJECT_SUFFIX,
        "version" : row.VERSION_ID,
        "activated" : row.ACTIVATED_AT,
        "activatedBy" : row.ACTIVATED_BY,
        "edit" : row.EDIT,
        "fversion" : row.FORMAT_VERSION,
        "du" : row.DELIVERY_UNIT,
        "duVersion" : row.DU_VERSION,
        "duVendor" : row.DU_VENDOR
    });
}
$.response.status = $.net.http.OK;
$.response.contentType = "application/json";
$.response.headers.set("Content-Disposition", "attachment; filename=HDBbench.json" );
$.response.setBody(JSON.stringify(aList));

DB - Implementation


// < SPS 9 - DB connection
var conn = $.db.getConnection();
// values to select (column order matches the getter calls below)
var keys = [ "PACKAGE_ID", "OBJECT_NAME", "OBJECT_SUFFIX", "VERSION_ID",
             "ACTIVATED_AT", "ACTIVATED_BY", "EDIT", "FORMAT_VERSION",
             "DELIVERY_UNIT", "DU_VERSION", "DU_VENDOR" ];
// query
var stmt = conn.prepareStatement( ' SELECT ' + keys.join(", ") + ' FROM "_SYS_REPO"."ACTIVE_OBJECT"' );
var result = stmt.executeQuery();
// vars for iteration
var aList = [];
var i = 1;
while (result.next()) {
    i = 1;
    aList.push({
        "package" : result.getNString(i++),
        "name" : result.getNString(i++),
        "suffix" : result.getNString(i++),
        "version" : result.getInteger(i++),
        "activated" : result.getSeconddate(i++),
        "activatedBy" : result.getNString(i++),
        "edit" : result.getInteger(i++),
        "fversion" : result.getNString(i++),
        "du" : result.getNString(i++),
        "duVersion" : result.getNString(i++),
        "duVendor" : result.getNString(i++)
    });
}
$.response.status = $.net.http.OK;
$.response.contentType = "application/json";
$.response.headers.set("Content-Disposition", "attachment; filename=DBbench.json" );
$.response.setBody(JSON.stringify(aList));


The Result:


  1. Requesting the DB implementation: the file download for all 43.000 rows starts within 1500 ms.
  2. Requesting the HDB implementation: requesting all rows leads to an error, so I trimmed the result set by adding a TOP clause to the SELECT statement.
    • TOP  1.000 : done in 168ms
    • TOP  2.000 : done in 144ms
    • TOP  5.000 : done in 297ms
    • TOP 10.000 : done in 664ms
    • TOP 15.000 : done in 1350ms
    • TOP 20.000 : done in 1770ms
    • TOP 30.000 : done in 3000ms
    • TOP 40.000 : The request is pending for minutes (~5 min), then responds with 503. The session of the logged-in user expires.


In summary: the new $.hdb implementation performs worse than the old one, and there is a threshold in $.hdb beyond which it causes significant problems on the system.


I appreciate every comment on that topic.




The SAP HANA Developer Center has a new landing page full of new content for you. Check it out at

[Screenshot: the new SAP HANA landing page]

The new homepage offers you a quick and easy way to access the latest developer info on SAP HANA, sign up for your free developer edition and get started building your first app.

You’ll find information about how SAP HANA works including technical aspects, core features and developer tools. 

You’ll also get an overview of the different options available for you to get started: you can sign up for your free developer edition via SAP HANA Cloud Platform (you get a free instance) or you can sign up for your free developer edition via AWS or Microsoft Azure.

In addition, you’ll find step-by-step tutorials to help you build your first app. The tutorials cover everything from setting up your developer environment to building your first app to accessing data and more.

The page also includes links to resources and tools, the community, other related documentation, education and training, certification, etc.

So, take a look and bookmark the page:

OK, so the title doesn't include SAP or HANA, but I'm getting there. In this video blog, I will walk you through the steps to create an Azure virtual machine with the free Visual Studio 2013 Community edition pre-installed. I then go through the process of downloading and installing the SQL Server Data Tools for BI. The video is almost 17 minutes in length, but the overall process took about 1 hour and 10 minutes. To go back to the index for the blog series, check out Part 1 – Using #SQLServer 2014 Integration Services (#SSIS) with #SAPHANA.


NOTE: SSIS is not yet certified by the SAP ICC group. However, the content of this blog series is based on the certification criteria.


On with the show!


Check me out at the HDB blog area: The SAP HDE blog

Follow me on twitter at:  @billramo

Hi All,


I've been developing some apps in SAP HANA for desktop and mobile viewports. Here's a tip for testing those apps without using the mobile phone simulator to check the rendering for mobile screens:


In the <head> tag of the html file there is a block of code to initialise sap.ui libraries, themes and others. It looks like this:


<script id='sap-ui-bootstrap' type='text/javascript'
        src='/sap/ui5/1/resources/sap-ui-core.js'
        data-sap-ui-theme='sap_bluecrystal'
        data-sap-ui-libs='sap.m'>
</script>


In order to enable testing for mobile screens, the following attribute should be added before closing the <script> tag (it is one of UI5's experimental "xx" configuration flags):


data-sap-ui-xx-fakeOS='ios'


This attribute allows simulation of an iPhone/iPad viewport.


Hope this helps all SAP HANA starters.




Alejandro Fonseca

Twitter: @MarioAFC

