
SAP HANA and In-Memory Computing


I'm very pleased to be able to announce the immediate availability of the Open Data Export Layer (ODXL) for SAP/HANA!

Executive summary

ODXL is a framework that provides generic data export capabilities for the SAP/HANA platform. ODXL is implemented as an xsjs web service that understands OData web requests, and delivers a response by means of a pluggable data output handler. Developers can use ODXL as a back-end component, or even as a global instance-wide service, to provide clean, performant and extensible data export capabilities for their SAP/HANA applications.


Currently, ODXL provides output handlers for comma-separated values (csv) as well as Microsoft Excel output. However, ODXL is designed so that developers can write their own response handlers and extend ODXL to export data to other output formats according to their requirements.


ODXL is provided to the SAP/HANA developer community as open source software under the terms of the Apache 2.0 License. This means you are free to use, modify and distribute ODXL. For the exact terms and conditions, please refer to the license text.


The source code is available on github. Developers are encouraged to check out the source code and to contribute to the project. You can contribute in many ways: we value any feedback, suggestions for new features, filing bug reports, or code enhancements.

What exactly is ODXL?

ODXL was born from the observation that the SAP/HANA web applications we develop for our customers often require some form of data export, typically to Microsoft Excel. Rather than creating this type of functionality again for each project, we decided to invest some time and effort to design and develop the solution in such a way that it can easily be deployed as a reusable component. And preferably, in a way that feels natural to SAP/HANA xs platform application developers.

What we came up with is an xsjs web service that understands requests that look and feel like standard OData GET requests, but which returns the data in some custom output format. ODXL was designed to be easily extensible, so that developers can build their own modules that create and deliver the data in whatever output format suits their requirements.

This is illustrated in the high-level overview below:

For many people, there is an immediate requirement to get Microsoft Excel output. So, we went ahead and implemented output handlers for .xlsx and .csv formats, and we included those in the project. This means that ODXL supports data export to the .xlsx and .csv formats right out of the box.

However, support for any particular output format is entirely optional and can be controlled by configuration and/or extension:

  • Developers can develop their own output handlers to supply data export to whatever output format they like.
  • SAP/HANA Admins and/or application developers can choose to install only those output handlers they require, and configure how Content-Type headers and OData $format values map to output handlers.
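To make the plug-in idea above concrete, here is a minimal sketch of what an output handler and its registry might look like. The names and interface here are assumptions for illustration only; the actual ODXL handler contract is defined in the project source.

```javascript
// Hypothetical sketch of an ODXL-style output handler (the real ODXL
// interface may differ). A handler turns a column list and a set of
// rows into a response body with a known Content-Type.
var csvHandler = {
  contentType: "text/csv",
  handle: function (columns, rows) {
    var lines = [columns.join(",")];
    rows.forEach(function (row) {
      lines.push(columns.map(function (col) {
        var value = String(row[col]);
        // Quote values containing separators or quotes, per RFC 4180.
        return /[",\n]/.test(value)
          ? '"' + value.replace(/"/g, '""') + '"'
          : value;
      }).join(","));
    });
    return lines.join("\r\n");
  }
};

// A registry maps MIME types to handlers; supporting a new output
// format then amounts to adding one more entry here.
var handlers = { "text/csv": csvHandler };
```

With a registry like this, the service only needs to look up the handler for the negotiated MIME type and hand it the query result.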

So ODXL is OData? Doesn't SAP/HANA support OData already?

The SAP/HANA platform provides data access via the OData standard. This facility is very convenient for object-level read and write access to database data for typical modern web applications. In this scenario, the web application would typically use asynchronous XML Http requests, and data would be exchanged in either Atom (an XML dialect) or JSON format.


ODXL's primary goal is to provide web applications with a way to export datasets in the form of documents. Data export tasks typically deal with data sets that are quite a bit larger than the ones accessed from within a web application. In addition, a data export document might very well comprise multiple parts - in other words, it may contain multiple datasets. The typical example is exporting multiple lists of different items from a web application to a workbook containing multiple spreadsheets with data. In fact, the concrete use case from whence ODXL originated was the requirement to export multiple datasets to Microsoft Excel .xlsx workbooks.


So, ODXL is not OData. Rather, ODXL is complementary to SAP/HANA OData services. That said, the design of ODXL does borrow elements from standard OData.

OData Features, Extensions and omissions

ODXL GET requests follow the syntax and features of standard OData GET requests. Here's a simple example to illustrate an ODXL GET request:

GET "RBOUMAN"/"PRODUCTS"?$select=PRODUCTCODE, PRODUCTNAME&$filter=PRODUCTVENDOR eq 'Classic Metal Creations' and QUANTITYINSTOCK ge 1&$orderby=BUYPRICE desc&$skip=0&$top=5

This request is built up like so:

  • "RBOUMAN"/"PRODUCTS": get data from the "PRODUCTS" table in the database schema called "RBOUMAN".
  • $select=PRODUCTCODE, PRODUCTNAME: Only get values for the columns PRODUCTCODE and PRODUCTNAME.
  • $filter=PRODUCTVENDOR eq 'Classic Metal Creations' and QUANTITYINSTOCK ge 1: Only get in-stock products from the vendor 'Classic Metal Creations'.
  • $orderby=BUYPRICE desc: Order the data from highest price to lowest.
  • $skip=0&$top=5: Only get the first five results.
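The mapping from these query options to a database query is straightforward. The sketch below is illustrative only - it is not the actual ODXL implementation, and real code would also translate OData filter expressions (eq to =, gt to >, and so on) and escape identifiers and values properly - but the essential mapping looks like this:

```javascript
// Naive sketch: translate ODXL/OData query options into a SQL statement.
// Assumes options.$filter has already been translated to SQL syntax.
function buildSql(schema, table, options) {
  var sql = "SELECT " + (options.$select || "*") +
            ' FROM "' + schema + '"."' + table + '"';
  if (options.$filter)  sql += " WHERE " + options.$filter;
  if (options.$orderby) sql += " ORDER BY " + options.$orderby;
  if (options.$top)     sql += " LIMIT " + options.$top;
  if (options.$skip)    sql += " OFFSET " + options.$skip;
  return sql;
}
```

For the example request above, this would yield a statement along the lines of `SELECT PRODUCTCODE, PRODUCTNAME FROM "RBOUMAN"."PRODUCTS" ... ORDER BY BUYPRICE DESC LIMIT 5`.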

For more detailed information about invoking the ODXL service, check out the section about the sample application. The sample application offers a very easy way to use ODXL for any table, view, or calculation view you can access, and allows you to familiarize yourself in detail with the URL format.

In addition, ODXL supports the OData $batch POST request to support export of multiple datasets into a single response document.

The reasons to follow OData in these respects are quite simple:

  • OData is simple and powerful. It is easy to use, and it gets the job done. There is no need to reinvent the wheel here.
  • ODXL's target audience, that is to say, SAP/HANA application developers, are already familiar with OData. They can integrate and use ODXL into their applications with minimal effort, and maybe even reuse the code they use to build their OData queries to target ODXL.
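The point about reuse can be made concrete with a small sketch. The helper below is hypothetical (it is not part of ODXL itself), and the service root path is an assumption - it is whatever path your odxl xsjs service is deployed under:

```javascript
// Hypothetical helper that assembles an ODXL request URL from
// OData-style query options.
function odxlUrl(serviceRoot, schema, table, options) {
  var query = Object.keys(options || {}).map(function (key) {
    return key + "=" + encodeURIComponent(options[key]);
  }).join("&");
  return serviceRoot + '/"' + schema + '"/"' + table + '"' +
         (query ? "?" + query : "");
}
```

Code like this that already builds OData query strings in your application can usually be pointed at ODXL with little more than a change of service root.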

ODXL does not follow the OData standard with respect to the format of the response. This is a feature: OData only specifies Atom (an XML dialect) and JSON output, whereas ODXL can supply any output format. ODXL can support any output format because it allows developers to plug in their own modules, called output handlers, that create and deliver the output.

Currently ODXL provides two output handlers: one for comma-separated values (.csv), and one for Microsoft Excel (.xlsx). If that is all you need, you're set. And if you need some special output format, you can use the code of these output handlers to see how it is done, and then write your own output handler.

ODXL does respect the OData standard with regard to how the client can specify what type of response they would like to receive. Clients can specify the MIME-type of the desired output format in a standard HTTP Accept: request header:

  • Accept: text/csv specifies that the response should be returned in comma separated values format.
  • Accept: application/vnd.openxmlformats-officedocument.spreadsheetml.sheet specifies that the response should be returned in open office xml workbook format (Excel .xlsx format).

Alternatively, they can specify a $format=<format> query option, where <format> identifies the output format:

  • $format=csv for csv format
  • $format=xlsx for .xlsx format

Note that a format specified by the $format query option will override any format specified in an Accept:-header, as per OData specification.
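The precedence rule can be sketched in a few lines. The names below are assumptions for illustration, not the actual ODXL code: a recognized $format value wins, otherwise the Accept: header decides.

```javascript
// Map $format values to MIME types (mirrors the two built-in handlers).
var formatMap = {
  csv:  "text/csv",
  xlsx: "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"
};

// $format overrides the Accept: header, per the OData specification.
function resolveContentType(queryFormat, acceptHeader) {
  if (queryFormat && formatMap[queryFormat]) return formatMap[queryFormat];
  return acceptHeader;
}
```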

ODXL admins can configure which MIME-types will be supported by a particular ODXL service instance, and how these map to pluggable output handlers. In addition, they can configure how values passed for the $format query option map to MIME-types. ODXL comes with a standard configuration with mappings for the predefined output handlers for .csv and .xlsx output.

On the request side of things, most of OData's features are implemented by ODXL:

  • The $select query option to specify which fields are to be returned
  • The $filter query option allows complex conditions restricting the returned data. OData standard functions are implemented too.
  • The $skip and $top query options to export only a portion of the data
  • The $orderby query option to specify how the data should be sorted

ODXL currently does not support a number of OData features.

The features that are currently not supported may be implemented in the future. For now, we feel the effort to implement them and adequately map their semantics to ODXL may not be worth the trouble. However, an implementation can surely be provided should there be sufficient interest from the community.


Installation

Using ODXL presumes you already have a SAP/HANA installation with a properly working xs engine. You also need HANA Studio, or Eclipse with the SAP HANA Tools plugin installed.

Here are the steps if you just want to use ODXL, and have no need to actively develop the project:

  1. In HANA Studio/Eclipse, create a new HANA xs project. Alternatively, find an existing HANA xs project.
  2. Find the ODXL repository on github, and download the project as a zipped folder. (Select a particular branch if you so desire; typically you'll want the master branch.)
  3. Extract the project from the zip. This will yield a folder. Copy its contents, and place them into your xs project directory (or one of its subdirectories).
  4. Activate the new content.

After taking these steps, you should now have a working ODXL service, as well as a sample application. The service itself is in the service subdirectory, and you'll find the sample application inside the app subdirectory.


The service and the application are both self-contained xs applications, and should be completely independent in terms of resources. The service does not require the application to be present, but obviously, the application does rely on being able to call upon the service.


If you only need the service, for example because you want to call it directly from your own application, then you don't need the sample application. In that case you can safely copy only the contents of the service directory and put those right inside your project directory (or one of its subdirectories). But even then, you might still want to hang on to the sample application, because you can use it to generate the web service calls that you might want to do from within your own application.


If you want to actively develop ODXL, and possibly, contribute your work back to the community, then you should clone or fork the github repository and work from there.

Getting started with the sample application

To get up and running quickly, we included a sample web application in the ODXL project. The purpose of this sample application is to provide an easy way to evaluate and test ODXL.


The sample application lets you browse the available database schemas and queryable objects: tables and views, including calculation views (or at least, their SQL-queryable runtime representation). After making a selection, it will build up a form showing the available columns. You can then use the form to select or deselect columns, apply filter conditions, and/or specify a sorting order. If the selected object is a calculation view that defines input parameters, then a form will be shown where you can enter values for those too.


Meanwhile, as you're entering options into the form, a textarea will show the URL that should be used to invoke the ODXL service. If you like, you can manually tweak this URL as well. Finally, you can use one of the download links to immediately download the result corresponding to the current URL, in either .csv or .xlsx format.


Alternatively, you can hit a button to add the URL to a batch request. When you're done adding items to the batch, you can hit the download workbook button to download a single .xlsx workbook containing one worksheet for each dataset in the batch.


What versions of SAP/HANA are supported?

We initially built and tested ODXL on SPS9. The initial implementation used the $.hdb database interface, as well as the $.util.Zip builtin.


We then built abstraction layers for both database access and zip support, to allow automatic fallback to the $.db database interface, and to use a pure JavaScript implementation of the zip algorithm based on Stuart Knightley's JSZip library. We tested this on SPS8, and everything seems to work fine there.


We have not actively tested earlier SAP/HANA versions, but as far as we know, ODXL should work on any earlier version. If you find that it doesn't, then please let us know - we will gladly look into the issue and see if we can provide a solution.

How to Contribute

If you want to, there are many different ways to contribute to ODXL.

  1. If you want to suggest a new feature, or report a defect, then please use the github issue tracker.
  2. If you want to contribute code for a bugfix, or for a new feature, then please send a pull request. If you are considering contributing code, then we do urge you to first create an issue to open up discussion with fellow ODXL developers on how best to scratch your itch.
  3. If you are using ODXL and you like it, then consider spreading the word - tell your co-workers about it, write a blog post, a tweet, or a Facebook post.

Thank you in advance for your contributions!


I hope you enjoyed this post, and that ODXL will be useful to you. If so, I look forward to getting your feedback on how it works for you and how we might improve it. Thanks for your time!

Key benefits of Data Modeling in SAP HANA:

Building analytics and data mart solutions using SAP HANA enterprise data modeling offers various benefits, compared to the traditional data warehousing solutions such as SAP BW.

  • Virtual data models with on the fly calculation of results, which enables reporting accuracy and requires very limited data storage – powered by the in-memory processing, columnar storage and parallel processing etc.
  • Ability to perform highly processing-intensive calculations efficiently - for example, identifying the customers where the sales revenue is greater than the average sales revenue per customer
  • Real time reporting leveraging the data replication and access techniques such as SLT, Smart data access etc.

Apart from the HANA sidecar or data mart solutions, HANA modeling also plays an essential role in the BW on HANA mixed scenarios, S/4 HANA Analytics, Predictive Analytics and Native HANA applications etc.


Objectives of this blog:

In this blog, I would like to share some of the experiences and learnings from various projects while implementing HANA modeling solutions. The intent of this blog is to provide some insights and approaches to the HANA modelers, which can be helpful when they start working on the solution design and development. However it does not cover the detailed explanation of the HANA modeling features.

Requirement analysis and setting expectations:

Understand the reporting requirements of the project clearly, and try to conceptualize the HANA models to be built based on the required KPIs. A few key aspects of the solution design: KPI definitions including details such as data sources, dimensions, filters, calculation logic, granularity and data volumes.


At times, the business users would expect HANA models to deliver best performance even with wide open selection criteria and with many columns in the output. Even though SAP HANA data models are expected to deliver sub-second response time, we need to be aware of the fact that there will be limited resources (memory, processing engines) in a HANA instance. Hence it is essential to implement the HANA models to be more efficient as per the performance guidelines.


Test the waters before diving deep: Validate the features, tools and integration aspects

It is always better to start with a prototype of a sample end-to-end solution before proceeding with the full-fledged implementation. This includes steps like setting up data provisioning using SLT, BODS etc., building HANA views, and consuming the HANA views from the reporting tool. Prototyping will help us verify that all the functionalities are working as expected. With this approach we can check and address connectivity and security related issues beforehand.

It is also recommended to verify the functionality, understand the pros and cons of new features before trying to use them in our models.


Decision criteria for HANA Modeling approaches:

In recent releases like SPS 10 and SPS 11, HANA modeling functionality has been greatly enhanced with several features to cover various complex requirements. Always try to implement the models using graphical calculation views, unless there are specific requirements that can only be implemented using SQLScript. In general, we may need to implement SQLScript-based calculation views only for scenarios such as complex lookups and calculations, recursive logic etc.

While creating graphical calculation views, we need to implement the entire logic virtually using various nodes in different layers. It requires innovative thinking along with solid data modeling skills and a very good understanding of different SQL statements in order to build complex and effective HANA views.

Try to implement your HANA modeling view as per the features supported by the current support pack / revision level and also consider the guidelines and future road-map of SAP:


For instance, SAP suggests that calculation views are to be implemented for most of the requirements:

  • Dimension type calculation views: To model master data or to implement "value help" views for variables etc.
  • Star join type calculation views: As an alternative to analytic views
  • Cube type calculation views: Mainly for the reporting, which includes measures along with aggregations etc.


There are a few scenarios where we have to decide between “on the fly calculations” vs “persistence of the results”. For some of the highly processing-intensive calculations where real-time reporting is not essential, and also for scenarios like calculating and storing snapshots of results (such as weekly inventory snapshots), we have to implement the logic using SQLScript stored procedures in HANA to persist the results in a table. Subsequently, a simple view can be built on this table to enable reporting.


Seeing Is Believing: Data validation is crucial

Prepare comprehensive data validation and test plans for your HANA views. We can leverage different techniques to ensure that the HANA view is producing the results exactly as per the requirement. Ensure that your test cases include the validation of the attributes and measures, along with any filters, calculations & aggregations, counters, currency conversions etc.


Below are the key tools and techniques to perform data validation of HANA views:

  • Data preview option: Using the data preview option at the HANA view level, and also at the individual node level, is the simplest way to validate the data during the development of HANA views. Leverage the various options, such as raw data with filters, distinct value analysis, and generating the SQL statement from the log, to perform different types of validations.


  • Custom SQL queries: We can write and execute custom SQL queries in the HANA studio SQL editor, and compare the results with those of the HANA view to ensure that they match. Here we can leverage various types of SQL statements to perform complex data validations - for example, comparing the data between the HANA view and the base tables


  • Reporting from Excel, Analysis Office or other reporting tools: For validation of larger data volumes and for the validation of semantics (labels / formatting etc.) we can leverage the tools like Analysis for Office

Be conscious about the Input data – Few important aspects in the Data Provisioning:

Identify the list of tables to be imported from different sources using SLT or other data provisioning tools and assess the memory requirements. To ensure the optimal utilization of HANA database, it is advisable to replicate only those tables which are essential to meet the requirements.

Few options to optimize the SLT table replication needs:

  1. Try to leverage the BW objects (DSOs or InfoObjects) if the corresponding data is already available in the connected BW schema - this will save space in HANA as we are avoiding the table replication
  2. Apply filters to avoid SLT replication of unwanted data for large tables into HANA

Try to leverage the transformation capabilities at the SLT or BODS level, wherever feasible. Especially in the scenarios where we need to filter the data model based on a calculated column, it would be ideal to derive this calculated column during the data provisioning.

Smart tools will enable better Productivity:

There are several tools and options available in the HANA studio, which helps us in maintaining the views in a simplified manner and increase productivity. Leverage these tools and features while building and maintaining HANA views.


Listed below are some of these tools and their utility in HANA modeling process:


  • Show lineage (columns): helps us to trace the origin of attributes and measures in HANA views.



  • Replacing nodes and data sources (in Graphical views) – to replace the nodes (projection, join..) with a different node OR replace the data sources (views or tables) with a different view or table within a modeled view



  • Import columns from Table type or flat file (For script based Calculation views): This will simplify the creation of output structure for a script based calculation view – instead of manually maintaining the output columns, we can import the column details from an existing table or view



     Note: The following options are available when we right click on a HANA view:


  • Generate Select SQL - Using this option we can get the generated SELECT statement for any of the HANA views, which can be customized and executed from the SQL editor

  • Refactoring views: Using this option we can move the views across the packages, which automatically adjusts the inner views to reflect the new package.

  • Where-used list: To identify the list of objects where the current view has been used, and assess the impact of any changes

  • Auto-documentation: To generate the documentation of a modeled view, which can be leveraged as part of the technical documentation   

Conclusion: My sincere thanks to the SCN community, and especially to all the experts who have been a great source of inspiration. I hope this blog will be useful to you in learning and implementing HANA modeling solutions.

>>Check out our new SAP HANA blog post for the SPS12 webinar schedule and meeting invitations.

SAP HANA Product Management and the SAP HANA International Focus Group (iFG) are rolling out the SPS12 Webinar Series. iFG is the premier channel for ABM customers to receive these updates. As the sponsor of the webinars for prior releases (SPS10, SPS11, etc.), iFG has received overwhelmingly positive feedback.

SAP HANA SPS12, with an expected release just before SAPPHIRE NOW, offers customers a compelling value proposition. It can help existing or new customers run mission critical applications and analytics on SAP HANA.

For many, this series is the MUST see and hear event of the year. We will have “operator assisted” calls and expect to break all records for participation.

Following each session, the recordings and available materials can be accessed on the SAP HANA iFG Jam group.

New to the iFG community?


SAP HANA Vora provides an in-memory processing engine which can scale up to thousands of nodes, both on premise and in cloud. Vora fits into the Hadoop Ecosystem and extends the Spark execution framework.


The following image shows you where Vora fits in the Hadoop ecosystem:




Recently, Vora 1.2 has been released with the following new features:

  • Support for MapR Hadoop distribution
  • Vora modeler – for building data models on top including OLAP
  • Added features to Hierarchies
  • Dlog, Discovery services using Consul tool
  • Enhanced performance through partitioning and co-located joins

The focus of this blog is to introduce you to the Vora Data Modeling tool.

For more information on other features released with 1.2 please refer to:


and for more information on the concepts such as DLOG, Discovery Service, Vora configuration and installation please refer to:

Vora 1.2 installation Cheat sheet: Concepts, Requirements and Installation


Vora 1.2 Modeling Tool:

In order to communicate with the Vora engine, you could use Apache Zeppelin or Jupyter Notebook (http://scn.sap.com/community/developer-center/hana/blog/2016/01/21/visualizing-data-with-jupyter), mostly for coding.

We also designed the Vora modeling tool (modeler) to facilitate development across structured, unstructured and semi-structured data; it has been released as a beta version with Vora 1.2. With the Vora modeler you have access to a SQL editor as well as the Modeler perspective, which give you the option to either code or drag and drop artifacts to develop your data view.


By installing the Vora Tools as part of the Vora 1.2 installation, you will have access to the Vora modeler through your browser, via port 9225:


Here is how your modeling home page should look. This page gives you access to the main perspectives, the connection feature and help.


Vora Modeling has three main perspectives:

  • Data Browser
  • SQL Editor
  • Modeler

The Data Browser allows you to view the available tables, views, dimensions and cubes in the Vora engine. It also allows you to preview the data, download it as a CSV file, filter the columns and refresh them.


Here is a view of your Data Browser:



The SQL Editor perspective allows you to run queries on the Vora engine using Vora SQL. It also shows you compilation warnings, errors and outputs, as well as the result of the query when you run a select.








The Modeler perspective can be used to create SQL views, dimensions or cubes. You can also use the subselect artifact to create nested queries. Below you can see a view of the Modeler:


For more information on how to use the Vora modeler and create data analysis scenarios (using joins, unions, etc) refer to Vora Developer's Guide, Chapter 11.


To summarize, we should mention that support for the following features is available in Vora 1.2:


  • Dimensions and Cubes
  • Annotations
  • Joins
  • Simple and multiple joins
  • Auto propose the join conditions
  • Define the join condition using an editor
  • SQL Editor supporting VORA SQL
  • Modeling perspective
  • Subselect
  • Unions
  • Unions are visualized with the notion of a result set, providing better management of group by, order by etc. at different levels
  • Regenerate the views as Spark SQL
  • Exporting the tables as CSV

Dear SAP HANA aficionados


If you have not been living under a stone lately and have visited the SAP HANA corners of SCN (SAP HANA and In-Memory Computing, SAP HANA Developer Center) recently, you surely will have noticed that there is currently one particular person putting in effort, patience and willingness to help others in extraordinary measure.

Of course I am writing about Florian Pfeffer here.


Florian, as you may know by now, is not only an HDE but also was SCN Member of the Month just this March.

He also managed to earn the 'Super Answer Hero' badge which showcases his commitment to the community.

So it's fair to say that this star is flying high right now.


As my interest with SCN is in community development, I took the chance and asked if he would like to become a moderator, which Florian agreed to.

From now on, the SAP HANA and In-Memory Computing space has three permanent moderators assigned:


Once again, I would like to thank both Lucas and Florian for their engagement and also encourage others to step up and become more involved with SAP HANA and the community around it. There is always room for more high profile contributors!




SAP HANA Vora provides an in-memory processing engine which can scale up to thousands of nodes, both on premise and in cloud. Vora fits into the Hadoop Ecosystem and extends the Spark execution framework.

Concepts and Requirements:

SAP HANA Vora 1.2 consists of the following two main components:


  • SAP HANA Vora Engine:
    SAP HANA Vora instances hold data in memory and boost the performance.
  • SAP HANA Vora Spark Extension Library:
    • Provides access to SAP HANA Vora through Spark.
    • Makes available additional functionality, such as a hierarchy implementation.




These two components are included in the Vora packages, which are available as follows; choose the one that matches your Hadoop distribution.


  • SAP HANA Vora for Ambari: VORA_AM<version>.TGZ
  • SAP HANA Vora for Cloudera: VORA_CL<version>.TGZ


To download the packages: https://launchpad.support.sap.com/#/softwarecenter/search/vora%25201.2

Vora 1.2 supports the following operating systems:

  • SUSE Linux Enterprise Server (SLES) 11 SP3
  • Red Hat Enterprise Linux (RHEL) 6.7 and 7.2

You should also follow the Installation and Administration guide for the compatibility pack installations: http://help.sap.com/hana_vora

The following table shows the supported combinations of operating system, cluster provisioning tool, and Hadoop distribution:


Remember that the minimal setup for Vora 1.2 is:

  • 4 cores
  • 8 GB of RAM
  • 20 GB of free disk space for HDFS data
  • Note: You can’t install Vora 1.2 on a single node

In order to have Vora 1.2 running, you have to have the following Vora services installed and configured. I will walk you through their installation and configuration on the clusters.

  • SAP HANA Vora Base: Vora libraries and binaries. Installs on all hosts.
  • SAP HANA Vora Catalog: Vora distributed metadata store. Installs on one node, usually the DLOG node.
  • SAP HANA Vora Discovery Service: Manages service registrations and installs on all nodes. In server mode it installs on 3 nodes (max. 7) and selects the bootstrapping host; in client mode it installs on all remaining nodes. Note: you can’t install the DS server and client on the same node.
  • SAP HANA Vora Distributed Log: Provides persistence for the Vora Catalog. Usually installed on the master node (5 nodes recommended).
  • SAP HANA Vora Thriftserver: Gateway compatible with the Hive JDBC connector. Usually installed on the jumpbox, where the DS, DLOG and Catalog servers are not installed.
  • SAP HANA Vora Tools: Web UI for the Vora 1.2 modeler. Install on the same node as the Vora Thriftserver.
  • SAP HANA Vora V2Server: The Vora engine. Installs on all worker nodes (data nodes).


The installation and configuration should either happen at the same time for all the services, or you should follow the order below to make sure the dependencies are handled:


The following schema shows you the architecture for a cluster with 4 nodes and the assignment of the different Vora 1.2 services that we will set up in this document:

One master node, one server node and two worker nodes.



*** Our assumption is that you have your Hadoop cluster set up with the HDFS 2.6.x or 2.7.1, ZooKeeper 3.4.6, Spark 1.5.2 and Yarn cluster manager 2.7.1 components.

Installing Vora 1.2 Services:

Step 1) Adding Vora Base: you have to add the Vora Base service on all nodes, and it has to be installed as a client on each node.


— no extra configuration is needed.

— you can click on the Proceed button even if you get the error, since you’re not using MapReduce jobs.


Screen Shot 2016-03-29 at 4.59.49 PM.png


— Click on complete.

Screen Shot 2016-03-29 at 5.02.04 PM.png

— notice that the Vora base is now added to your services:

Screen Shot 2016-03-29 at 5.02.51 PM.png

Step 2) Now we add the Vora Discovery service: three Discovery servers and one client.


Screen Shot 2016-03-29 at 5.03.40 PM.png


Adding the Vora Discovery client:

Screen Shot 2016-03-29 at 5.18.04 PM.png

-- Vora Discovery servers need extra configuration:

— in vora_discovery_bootstrap, add the master node's DNS name

— in vora_discovery_servers, add your server nodes' DNS names

Screen Shot 2016-03-29 at 5.20.32 PM.png



— proceed and deploy the service

notice that vora discovery service is now installed:


Screen Shot 2016-03-29 at 5.24.47 PM.png


Step 3) Now we add Vora Distributed Log service :


Screen Shot 2016-03-29 at 5.26.11 PM.png


— we install DLOG servers on the same machines where we installed our Discovery Servers.


Screen Shot 2016-03-29 at 5.29.49 PM.png


— No extra configurations are needed.

— click Next -> click Proceed Anyway -> click Complete

— Notice that vora DLOG is now added to the services:


Screen Shot 2016-03-29 at 5.31.47 PM.png


Step 4) Next step is to install Vora Catalog:


Screen Shot 2016-03-29 at 5.33.03 PM.png


— Install Catalog on your master node:

Screen Shot 2016-03-29 at 5.35.12 PM.png

— click Next -> click Proceed Anyway -> click Complete


— Notice that vora Catalog is added to the services:

Screen Shot 2016-03-29 at 5.36.55 PM.png


Step 5) Time to install V2Server as shown below:

Screen Shot 2016-03-29 at 5.38.10 PM.png

— extra configuration: add the Vora V2Server Worker service to worker1 and worker2 nodes and remove it from your server node.



Screen Shot 2016-03-29 at 5.40.45 PM.png


— click Next -> click Proceed Anyway -> click Complete


— Notice that vora V2Server is now added to the services:


Screen Shot 2016-03-29 at 5.43.47 PM.png

Step 6) Time to install Vora Thriftserver and Vora Tools:


Screen Shot 2016-03-29 at 5.45.13 PM.png

Screen Shot 2016-03-29 at 5.47.40 PM.png


— you have to add more configuration to the Thriftserver, as shown below:

— add vora_thriftserver_java_home = /usr/lib/jvm/java -- this value depends on where Java is installed on your system

— add vora_thriftserver_spark_home = /usr/hdp/ -- this is your Spark home


Screen Shot 2016-03-29 at 5.51.58 PM.png


— click Next -> click Proceed Anyway -> click Complete


— Notice that the Vora Thriftserver and Vora Tools are now added to the services:


Screen Shot 2016-03-29 at 5.53.03 PM.png


Now click on the HDFS, MapReduce2 and YARN services, which are shown in red, and restart all affected components as shown below:

Screen Shot 2016-03-29 at 5.58.15 PM.png

Congratulations!! You now have Vora 1.2 services installed on your clusters.


Step 7) To validate your Vora:

— SSH to your worker1 node and run:

— source /etc/vora/vora-env.sh

— $VORA_SPARK_HOME/bin/start-spark-shell.sh

and you should now see the SQL contexts (the Vora SQL context and the Spark SQL context) being available.

Pratik Doshi

HANA: Lost Updates

Posted by Pratik Doshi Apr 20, 2016

A brief intro: what happens in a lost update?

A simple example makes the lost update problem easy to understand:

Session 1: User A reads record 1

Session 2: User B reads record 1

Session 1: User A updates record 1

Session 2: User B updates record 1

User B has not seen the record updated by User A and overwrites it, resulting in a lost update.


How can we tackle it programmatically?

  1. We maintain a timestamp field on the record and return it to the user who requests the record. When the user wants to save the modified record, we check the submitted value against the record's current timestamp; if the timestamps do not match, the record has been updated in the meantime and we return an error.
  2. We compute a checksum over all the fields. When the user comes back to update the record, we compare checksums; if they do not match, we return an error.

   There are other solutions too, such as generating a random number and assigning it to the record on every change.
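As an illustration, the timestamp approach (option 1) can be sketched in a few lines of Python; the record store and field names here are made up for the example, not HANA code:

```python
import time

# In-memory stand-in for a database table; CHANGED_AT is the concurrency column.
records = {1: {"value": "original", "CHANGED_AT": 100.0}}

def read_record(record_id):
    """Return a copy of the record, including the timestamp the client must echo back."""
    return dict(records[record_id])

def update_record(record_id, new_value, read_timestamp):
    """Apply the update only if nobody changed the record since it was read."""
    rec = records[record_id]
    if rec["CHANGED_AT"] != read_timestamp:
        raise RuntimeError("Lost update prevented: record was modified by another session")
    rec["value"] = new_value
    rec["CHANGED_AT"] = time.time()

# Session 1 (User A) and session 2 (User B) both read record 1.
a = read_record(1)
b = read_record(1)

update_record(1, "update by A", a["CHANGED_AT"])      # succeeds
try:
    update_record(1, "update by B", b["CHANGED_AT"])  # stale timestamp, rejected
except RuntimeError as err:
    print(err)
```

User B's stale timestamp no longer matches, so the second update is rejected instead of silently overwriting User A's change.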

Let's see how we can handle this with OData.

Implementing the above by hand in every HANA service is cumbersome. This is where the ETag functionality of OData services comes into the picture: we only need to take care of a few things and the database is protected from lost updates.

This mechanism can be applied to both tables and views; for views you have to specify the key.

Here is an example of a simple service that does the task for us:
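A minimal .xsodata definition along these lines could look as follows (the schema, table, and entity names are placeholders for illustration; concurrencytoken is the keyword that enables ETag handling):

```
service {
  "MYSCHEMA"."MYTABLE" as "Orders"
    concurrencytoken;
}
```

For a view instead of a table, you would additionally declare the key, e.g. key ("ID"), since views require an explicit key.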


Now when you get data from the service, you will always receive the ETag in the metadata, as shown in the screenshot below:

Etag-Token.PNG


As the screenshot shows, we get the ETag in the response from the server.

ETags come in two flavors: weak and strong tags.


  • A weak ETag could be the last-updated time or the version of a document. Weak ETags are prefixed with "W/" to indicate they are weak.
  • A strong ETag asserts that the entire representation of the entity is byte-for-byte identical: all fields are compared, and typically a hash of the entity is used as the strong ETag. Strong ETags do not have the leading prefix that weak ETags do.
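As a sketch, the difference can be made concrete in Python: hashing the whole entity gives a strong-ETag-style token, while a version counter with the W/ prefix acts as a weak one (the entity fields are invented for the example):

```python
import hashlib
import json

def strong_etag(entity):
    """Strong ETag style: a hash over the full representation, so every field counts."""
    payload = json.dumps(entity, sort_keys=True).encode("utf-8")
    return '"%s"' % hashlib.sha256(payload).hexdigest()

def weak_etag(version):
    """Weak ETag style: e.g. a version counter or last-updated time, prefixed with W/."""
    return 'W/"%s"' % version

e1 = {"id": 1, "name": "widget", "qty": 10}
e2 = {"id": 1, "name": "widget", "qty": 11}

print(weak_etag(3))                        # W/"3"
print(strong_etag(e1) == strong_etag(e2))  # False: changing any field changes the tag
```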


We will go with a weak tag here. A screenshot from using Postman:



The If-Match header accepts two kinds of values: an ETag value or *. If an ETag value is provided, it is validated against the current token; with * no validation is performed.

If you try to update the record again with the same ETag, you will get the error "412 Precondition Failed", because the ETag token has changed.
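The server-side check behind that 412 can be sketched like this (a simplified illustration of the If-Match rules, not the actual xsodata implementation):

```python
def check_if_match(if_match_header, current_etag):
    """Return (allowed, http_status) for an update guarded by If-Match."""
    if if_match_header == "*":
        return True, 200      # '*' skips ETag validation entirely
    if if_match_header == current_etag:
        return True, 200      # token still matches, update may proceed
    return False, 412         # Precondition Failed: record changed since it was read

print(check_if_match('W/"5"', 'W/"5"'))  # (True, 200)
print(check_if_match('W/"4"', 'W/"5"'))  # (False, 412)
print(check_if_match('*', 'W/"5"'))      # (True, 200)
```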


You can find more information about ETags at:

OData ETag Support - SAP HANA Developer Guide for SAP HANA Studio - SAP Library


Hope this will help.

The SAP HANA Webinar Series helps you gain insights
and deliver solutions to your organization.

This acclaimed series is sponsored by the SAP HANA international Focus Group (iFG) for customers, partners, and experts to support SAP HANA implementations and adoption initiatives. Learn about upcoming sessions and download Meeting Invitations here.

>>> Check out our new SAP HANA blog post about the following April & May webinars:

SAP HANA international Focus Group (iFG) Sessions:

  • April 14 – What’s New in SAP HANA Vora 1.2
  • April 21 – Introduction to OLAP Modeling in SAP HANA VORA
  • April 28 – Falkonry; Intelligent Monitoring of IoT Conditions
  • May 5 – Preview of SAP HANA @ SAPPHIRE NOW and ASUG Annual Conferences

SAP HANA Customer Spotlights:

  • April 19 – National Hockey League (NHL) Enables Digital Transformation with SAP HANA Platform: Register >>
  • April 26 – CenterPoint Energy – Analyzing Big Data, Faster with Reduced Storage costs: Register >>

New to the iFG community?

iFG Webinar Series 4.jpg



At the SAP HANA Academy we are currently updating our tutorial videos about SAP HANA administration [SAP HANA Administration - YouTube].


One of the topics that we are working on is SAP HANA smart data access (SDA) [SAP HANA Smart Data Access - YouTube].


Configuring SDA involves the following activities

  1. Install an ODBC driver on the SAP HANA server
  2. Create an ODBC data source (for remote data sources that require an ODBC Driver Manager)
  3. Create a remote data source (using SQL or SAP HANA studio)
  4. Create virtual tables and use them in calculation views, etc.


As of SPS 11, the following remote data sources are supported:



  • Apache Hadoop (Simba Apache Hive ODBC)
  • Apache Spark


In the SAP HANA Administration Guide, prerequisites and procedures are documented for each supported data source, but the information is intended as a simple guide and you will need 'to consult the original driver documentation provided by the driver manufacturer for more detailed information'.


In this series of blogs, I will provide more detailed information about how to perform activities 1 and 2; that is, installing and configuring ODBC on the SAP HANA server.


The topic of this blog is the installation and configuration of the Microsoft ODBC driver for SQL Server on Linux.



Video Tutorial


In the video tutorial below, I will show you in less than 10 minutes how this can be done.



If you would like to have more detailed information, please read on.



Supported ODBC Driver Configurations


At the time of writing, there are two ODBC drivers for SQL Server available for the Linux (and Windows) platform: version 11 and 13 (Preview).


Microsoft ODBC driver for SQL Server on Linux | SQL Server | OS (64-bit) | unixODBC | SAP HANA Smart Data Access support
Version 13 (Preview) | 2016, 2014, 2012, 2008, 2005 | RHEL 7, SLES 12 | 2.3.1 | Not supported
Version 11 | 2014, 2012, 2008, 2005 | RHEL 5, 6; SLES 11 | 2.3.0 | SQL Server 2012


For SAP HANA smart data access, the only supported configuration is Microsoft ODBC Driver 11 in combination with SQL Server 2012. Supported means that this combination has been validated by SAP development. It does not mean that the other combinations do not work; they probably work just fine. However, if you run into trouble, you will be informed to switch to a supported configuration.


Information about supported configurations is normally provided in the SAP Product Availability Matrix on the SAP Support Portal, however, so far only ASE and IQ are listed. For the full list of supported remote data sources, see SAP Note 1868209 - SAP HANA Smart Data Access: Central Note.





On the Windows platform, the ODBC driver manager is bundled with the operating system, but on UNIX and Linux this is not the case, so you will have to install one.


The unixODBC project is open source. Both SUSE Linux Enterprise Server (SLES) and Red Hat Enterprise Linux (RHEL) provide a supported version of unixODBC bundled with the operating system (RPM package). However, Microsoft does not support these bundled unixODBC packages for the Microsoft ODBC Driver Version 11, so you will need to compile release 2.3.0 from the source code. This is described below.


unixODBC | Release Date | OS (64-bit) | Microsoft ODBC Driver
2.3.4 (latest) | 08.2015 | RHEL 7, SLES 12 | Version 13 (Preview)
2.2.14 | 11.2008 | RHEL 6 | Version 11
2.2.12 | 10.2006 | SLES 11 | Version 11



System Requirements


First, you will need to validate that certain OS packages are installed and if not, install them (System Requirements).


This concerns packages like the GNU C Library (glibc), GNU Standard C++ library (libstdc++), the GNU Compiler Collection (GCC) to name a few, without which you will not get very far compiling software. Also, as the Microsoft ODBC Driver supports integrated security, Kerberos and OpenSSL libraries are required.



Installing the Driver Manager


Next, you will need to download and build the source for the unixODBC driver manager (Installing the Driver Manager).


  1. Connect as root
  2. Download and extract the Microsoft driver
  3. Run the script build_dm.sh to download, extract, build, and install the unixODBC Driver Manager




The build script performs the installation with the following configuration:


# ./configure --prefix=/usr --libdir=/usr/lib64 --sysconfdir=/etc --enable-gui=no --enable-drivers=no --enable-iconv --with-iconv-char-enc=UTF8 --with-iconv-ucode-enc=UTF16LE

# make

# make install


Note the PREFIX, LIBDIR and SYSCONFDIR directives. This will put the unixODBC driver manager executables (odbcinst, isql), the shared object driver files, and the system configuration files (odbcinst.ini and odbc.ini for system data sources) all in standard locations. With this configuration, there is no need to set the environment variables PATH, LD_LIBRARY_PATH and ODBCINSTINI for the login shell.



Installing the Microsoft ODBC Driver


Next, we can install the ODBC driver [Installing the Microsoft ODBC Driver 11 for SQL Server on Linux].


Take a look again at the output of the build_dm.sh script (screenshot above). Note the passage:






For this reason, you might want to make a backup of the driver configuration file (odbcinst.ini) before you run the installation script.


  1. Make a backup of odbcinst.ini
  2. Run install.sh --install





The script will register the Microsoft driver with the unixODBC driver manager. You can verify this with the odbcinst utility:

odbcinst -q -d -n "ODBC Driver 11 for SQL Server"


Should the install have overwritten any previous configuration, you either need to register the drivers with the driver manager again or, and this might be easier, restore the odbcinst.ini file and manually add the Microsoft driver.


For this, create a template file (for example, mssql.odbcinst.ini.template) with the following lines:



[ODBC Driver 11 for SQL Server]

Description=Microsoft ODBC Driver 11 for SQL Server
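Note that for the driver manager to actually load the driver, the registration template normally also needs a Driver entry pointing to the driver's shared object. The path below is an assumption for a default install of Microsoft ODBC Driver 11 and may differ on your system:

```
[ODBC Driver 11 for SQL Server]
Description=Microsoft ODBC Driver 11 for SQL Server
Driver=/opt/microsoft/msodbcsql/lib64/libmsodbcsql-11.0.so.2270.0
```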




Then register the driver with the driver manager using the command:

odbcinst -i -d -f mssql.odbcinst.ini.template



Create the data source and test the connection


Finally, we can register a data source with the driver manager. For this, create a template file and save it as mssql.odbc.ini.template.


You can give the data source any name. Here MSSQLTest is used, but for production systems, using the database name might be more sensible (spaces are allowed for the data source name).


Driver = name of the driver in odbcinst.ini or the full path to driver file

Description = optional

Server = host (FQDN); protocol and port are optional, if omitted tcp and 1433 will be used.

Database = database name (defaults to Master)



Driver = ODBC Driver 11 for SQL Server

Description = SQL Server 2012 test instance

; Server = [protocol:]server[,port]

; Server = tcp:mo-9e919a5cc.mo.sap.corp,1433

Server = mo-9e919a5cc.mo.sap.corp

Database = AdventureWorks2012


Register the DSN with the driver manager as System DSN using the odbcinst utility:

odbcinst -i -s -l -f mssql.odbc.ini.template



odbcinst -q -s -l -n "MSSQLTest"


Test connection:

isql -v "MSSQLTest" <username> <password>


The -v (verbose) flag can be useful in case the connection fails, as it will tell you, for example, that your password is incorrect. For more troubleshooting, see below.



System or User Data Source


It is up to you, of course, whether to register the data source as a system data source or a user data source. As the SAP HANA server typically is a dedicated database system, using only system data sources has two advantages:


  1. Single location of data source definitions
  2. Persistence across system updates


With the data sources defined in a single location, debugging connectivity issues is simplified, particularly when multiple drivers are used.


With the data sources defined outside of the SAP HANA installation directory, you avoid having your odbc.ini removed when you uninstall or update your system.


To register the DSN with the driver manager as User DSN using the odbcinst utility, connect with your user account and execute:

odbcinst -i -s -h -f mssql.odbc.ini.template


The difference is the -h (home) flag instead of the -l (local) flag.



odbcinst -q -s -h -n "MSSQLTest"


Test connection (same as when connecting to a system data source):

isql -v "MSSQLTest" <username> <password>




Note that when no user data source is defined, odbcinst will return a SQLGetPrivateProfileString message.





Before you test your connection, it is always a good idea to validate the input.


For the driver, use the "ls" command to verify that the path to the driver is correct.




For the data source, use the "ping" command to verify that the server is up and use "telnet" to verify that the port can be reached (1433 for SQL Server is the default but other ports may have been configured; check with the database administrator).
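The same reachability checks can also be scripted, for example in Python (the host and port are whatever your data source uses; this helper is an illustration, not part of unixODBC):

```python
import socket

def port_reachable(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hypothetical host, default SQL Server port 1433):
# print(port_reachable("mo-9e919a5cc.mo.sap.corp", 1433))
```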



If you misspell the data source name, the [unixODBC][Driver Manager] will respond with:

Data source name not found, and no default driver specified


If you make mistakes with the user name or password, the driver manager will not complain but the isql tool will forward the message of the database server.




If the database server cannot be reached, for example, because it is not running, or because the port is blocked, isql will also inform you by forwarding the message from the database server. Note that the message depends on the database server used. The information we get back from SQL Server is much more user-friendly than from DB2, for example.




If the driver manager cannot find the driver file, it will return a 'file not found' message. There could be a mistake in the path to driver file.





More Information


SAP HANA Academy Playlists (YouTube)


SAP HANA Administration - YouTube

SAP HANA Smart Data Access - YouTube.


Product documentation


SAP HANA Smart Data Access - SAP HANA Administration Guide - SAP Library


SAP Notes


1868209 - SAP HANA Smart Data Access: Central Note


SCN Blogs


SDA Setup for SQLServer 12

SAP HANA Smart Data Access(1): A brief introduction to SDA

Smart Data Access - Basic Setup and Known Issues

Connecting SAP HANA 1.0 to MS SQL Server 2012 for Data Provisioning

SAP Hana Multi-Tenant DB with Replication


Microsoft Developer Network (MSDN)


Microsoft ODBC Driver for SQL Server on Linux

Download ODBC Driver for SQL Server



Thank you for watching


You can view more free online videos and hands-on use cases to help you answer the What, How and Why questions about SAP HANA and the SAP HANA Cloud Platform on the SAP HANA Academy at youtube.com/saphanaacademy, follow us on Twitter @saphanaacademy, or connect with us on LinkedIn.

Today's tidbit is one of those little dumb things that happen every now and then and make me think: "Great, now this doesn't work... WTF...?"

Usually that's a bit frustrating for me as I like to think that I know how stuff works around here (here, meaning my work area, tools, etc.).


So here we go. Since the SAP HANA Studio is currently not "an area of strategic investment" and the Web-based tools are on the rise, I try to use those more often.

I even have the easy to remember user-friendly URL (http://<LongAndCrypticNodeName.SomeDomainname.Somethingelse>:<FourDigitPortNumber>/sap/hana/ide/catalog/) saved as a browser bookmark - ain't I organized!

And this thing worked before.

I have used it.

So I click on the link, log on to the instance, and get this fancy "picture" (as my Dad would explain it to me - everything that happens on the screen is a "picture", which is really helpful during phone-based intra-family help-desking...):



Pic 1 - The starting 'picture', looking calm and peaceful... for now


Ok, the blocky colors are due to the GIF format's 256-color limitation, but you should be able to see the important bits and pieces.


There is a hard-to-read error message that I choose to ignore. I click on the little blue SQL button and then ... nothing happens.

I click again and again, as if I cannot comprehend that the computer understood me the first time, but no amount of clicking opens the SQL editor.

What is going on?

Next step:


Do the PRO-thing...

     ... open Google Developer Tools...

     ... delete session cookies and all the saved information.

     ... Logon again.


Lo and behold, besides the much longer loading time for the page, nothing changed.


Great. So what else is wrong? Did the last SAP HANA upgrade mess with the Web tools?

Pic 2 - wild clicking on the button and visually enhanced error message indicating some bad thing


Luckily, that wasn't it.

Somewhere in the back of my head I remembered that I had a couple of browser extensions installed.


Now I know what you're thinking: Of course it's the browser extensions. That moron! Totally obvious.

What can I say? It wasn't to me.


Pic 3 - there's the culprit, the root cause and trigger for hours of frustration


It just didn't occur to me that, e.g., the Wikiwand browser extension I use to get Wikipedia articles in a nicer layout would install a browser-wide hook on the CTRL+CLICK event, and that this would sometimes prevent the Web tools from opening.

After disabling this (there's a settings page for this extension) the Web tools resumed proper function.

Good job!


So is the Wikiwand extension a bad thing? No, not at all. There are tons of other extensions that do the same.


While I would really like to demand back the precious hours of my life this little mishap took from me, I assume that this request would be a bit pointless.

This experience leaves me with the insight that I clearly thought too simplistically about the frontend technology we use today. Web browsers are incredibly far from a standard environment, and controlling what the end user finally sees is not easy (if really possible at all).


Ok, that's my learning of the day.






By the way, the error message "Could not restore tab since editor was not restorable" not only seems to be a tautology, but also had absolutely nothing to do with the problem in this case.

Are you exploring the possible benefits that SAP HANA may provide for your company? Are you confident there are strong use cases, yet challenged by putting together that all important Business Case to “sell it” internally? Then this session is for you!


Please join us for this interactive session where we will discuss how to prioritize your use cases and determine the critical value drivers to generate a Business Case that will resonate within your company.


The session also includes live customer insights, describing their personal experiences through this effort and how they successfully convinced their company of the value and benefits possible with SAP HANA through a solid Business Case.


The Agenda for this half-day Pre-Conference seminar includes:

  • Why do you need a business case anyway?
  • Methodology for building a business case
  • Levels of value
  • Value management life cycle
  • Create the storyline
  • Adding the financial dimension
  • Example of the process
  • Best practices approach
  • SAP Benchmarking
  • Bringing it all together
  • Customer testimonial


You can find more details about this Pre-Conference seminar and registration at:



We look forward to meeting you at this ASUG Pre-Conference Seminar on Monday morning, May 16, in Orlando!


SAP HANA Solutions GoToMarket team

SAP Global HANA Center of Excellence team

This blog post brings attention to an issue we have been facing with our HANA Multitenant Database Container (MDC) setup.



We have a scale-up MDC setup with more than 15 tenant databases in non-prod on SPS10.
As part of our quarterly release activities, we refresh non-prod systems from production MDC tenant backups.
Until last year we had fewer than 10 tenants, and the regular refresh worked as expected.

Until last year we had less than 10 tenants and the regular refresh was working as expected



We introduced more non-prod tenants at the end of last year, and during the next refresh cycle we started noticing a tenant crash while we were working on the refresh of another tenant.

A complete check of the crashed tenant's trace logs confirmed signal 6 errors at exactly the same time the other tenant was being refreshed.

After multiple attempts to bring up the tenant failed, we had to involve SAP Support to check the cause of the issue.

Meanwhile, we restored the crashed tenant from backups.


SAP Support took more than a month to identify the cause of the issue, and another occurrence of the same issue while restoring a different tenant confirmed there was a correlation.

SAP confirmed the following: when we have more than 10 tenants on a single MDC appliance, we will come across this issue (on SPS11 revision 112.02 and below).

For example, if we have 15 tenants and the tenant with database ID 5 is restored using a backup of a production tenant, it will impact the tenant with database ID 15: that tenant will crash and fail to start up. The same issue occurs on the tenants with database IDs 13 and 14 if the tenants with database IDs 3 and 4 are recovered from a backup.
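The pattern above can be sketched as a tiny helper (the +10 offset simply restates the correlation we observed on this 15-tenant system; it is not a documented formula):

```python
def crashed_tenant(restored_db_id, highest_db_id, offset=10):
    """Return the database ID of the tenant expected to crash when another
    tenant is restored, following the observed restored-ID + 10 pattern."""
    candidate = restored_db_id + offset
    # Only tenants that actually exist on the appliance can be affected.
    return candidate if candidate <= highest_db_id else None

print(crashed_tenant(5, 15))  # 15
print(crashed_tenant(3, 15))  # 13
print(crashed_tenant(4, 15))  # 14
```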




SAP has addressed the issue in SPS11 database maintenance revision 112.02, which was released on 12-Apr-2016.

Please find the link below, along with the screenshot in the note that confirms the issue.




Please let me know if you have any thoughts or input on this issue. I hope this blog is useful in understanding the cause of the issue and the available solution.

SAP HANA Vora 1.2 was released recently and with this new version we have added several new features to the product. Some of the key ones I want to highlight in this blog are


  • Support for the MapR Hadoop distro
  • A new "OLAP" modeler to build hierarchical data models on Vora data
  • A discovery service using open-source Consul, to register Vora services automatically
  • A new Catalog to replace ZooKeeper as the metadata store
  • Native persistence for the metadata catalog using the distributed shared log
  • A Thriftserver for client access through JDBC-Spark connectivity


The new installer in Vora 1.2 extends the simplified installation so that Hadoop management tools such as the MapR Control System can deploy Vora on all Hadoop/Spark nodes. This adds to the support for the Cloudera Manager and Ambari admin tools provided in version 1.0.




Vora Modeler provides a rich UI for interacting with data stored in Hadoop/HDFS, Parquet, ORC and S3 files, using either the SQL editor or the Data Browser. Once you have the Vora tables in place, you can create "olap" models to build dimensional data structures on this data.



At the core of Vora we are looking to enable the distributed computing at scale when working with data both in SAP HANA and Hadoop/Spark environments. By pushing down processing of different algorithms to where the data is and by reducing the data movement between the two data platforms we deliver fast query processing and performance for extremely large volumes of data. We have also introduced new features like distributed partitions and co-located joins to achieve these performance optimizations.


HANA Vora went GA in early March, and we are seeing several customer use cases that enable Big Data analytics and IoT scenarios. If you are at ASUG/SAPPHIRE in May 2016, stop by to hear real-life customers discuss their implementations and gain insights from these technologies.


Vora Developer edition has been updated to ver1.2, you can access it from here

An update for HANA users who want to know more about the OpenSSL DROWN attack.


SAP HANA and HANA based applications should not be affected by the DROWN vulnerability.

SAP HANA database uses SAP’s own CommonCryptoLib for communication encryption purposes, which is not affected by DROWN.


SAP HANA can be configured to use the OpenSSL instance provided by the Linux operating system (SUSE or Red Hat). SSLv2 is not offered or used in these scenarios.

Therefore this configuration is also not affected by DROWN. Customers are advised to update their operating system according to their maintenance agreements with their operating system vendors. SAP explicitly allows customers to deploy security updates of the operating system.


More information:

http://service.sap.com/sap/support/notes/1944799 (SLES)
http://service.sap.com/sap/support/notes/2009879 (Red Hat, see attached document)

SAP HANA extended application services, advanced model (XS Advanced) shipment contains OpenSSL for communication encryption. These channels do not support SSLv2 and are therefore not affected by DROWN.

Over a series of five tutorial videos, Tahir Hussain "Bob" Babar provides an overview of how to set up and use the KPI Modeler in SAP S/4 HANA. This series is part of the SAP HANA Academy's S/4 HANA playlist. These videos were made with the greatly appreciated help and assistance of Bokanyi Consulting, Inc.'s Frank Chang.

How to Set up a SAP S/4 HANA ERP User

Screen Shot 2016-03-30 at 4.06.00 PM.png

Linked above is the first video in the series where Bob details how to set up a SAP S/4 HANA ERP user. This is accomplished by copying the roles and profiles from an existing user. If you don't want to use your main BPINST user then please follow the steps Bob outlines.

First, log into SAP Logon. This is Bob's connection to both the back-end and the front-end server as he is using a central hub installation. Use 100, the pre-configured S/4 client, as the client and login with your BPINST username and password. Next, choose to run a SU01 - User Maintenance (Add Roles etc.) transaction from the SAP Easy Access screen. Then, choose to look at the BPINST user's rights and navigate to the Roles tab.

Screen Shot 2016-03-31 at 10.39.50 AM.png

Copy all of the roles and then launch a new window by running the command /osu01 to create a new user. Bob names his new user KPI and clicks on the new button. The only information you need to allocate in the Address tab is a last name. In the Logon Data tab enter a password. Then, in the Roles tab, paste in the roles you copied from the BPINST user. Be aware that sometimes all of the roles aren't copied. So double check to make sure that your new user has all of BPINST's roles.

Next, copy the first three profiles (SAP_ALL, SAP_NEW, S_A_SYSTEM) that are listed in the BPINST user's Profiles tab and paste them into the Profiles tab of your new KPI user.

Screen Shot 2016-03-31 at 11.30.33 AM.png

Now you have a duplicate of the BPINST user.

How to Change the SAP Fiori Launchpad with the Launchpad Designer

Screen Shot 2016-03-30 at 5.21.34 PM.png

In the second video of the series, Bob provides an overview of the SAP Fiori Launchpad in SAP S/4 HANA and shows how to change it using the SAP Fiori Launchpad Designer.

In a web browser log into the SAP Fiori Launchpad Designer with the recently created KPI user on Client 100. The SAP Fiori Launchpad Designer enables you to change the look and feel of certain tiles in your SAP Fiori Launchpad. A list of tiles is located on the right side of the SAP Fiori Launchpad Designer and a list of catalogs is along the left.

Screen Shot 2016-03-31 at 11.39.42 AM.png

The tool that the end-user will see is the SAP Fiori Launchpad for SAP S/4 HANA. Bob opens the SAP Fiori Launchpad in another tab. The example Bob shows of a SAP Fiori application is for Operational Processing. Clicking on the hamburger button on the left will open the Tile Catalog. Bob elects to open the KPI Design Catalog.

Screen Shot 2016-03-31 at 11.44.17 AM.png

To provide an example of what an end-user might experience, Bob opens the Sales - Sales Order Processing catalog and then opens the Sales Order Fulfillment All Issues tile. This gives the end user a normal tabular report on sales order fulfillment issues by connecting to a table located in SAP S/4 HANA through OData.

Screen Shot 2016-03-31 at 11.50.11 AM.png

Another tile, Sales Order Fulfillment Issues - Resolved Issues, has an embedded KPI which shows that there are 64 issues that need to be resolved on 29 sales orders.

Screen Shot 2016-03-31 at 5.41.48 PM.png

Back in the SAP Fiori Launchpad Designer, Bob searches for ssb in the Tile Catalog. Bob opens up the SAP: Smart Business Technical Catalog. This is where you can change the form of navigation for a tile including all of the options related to the KPI monitor. The KPI Design Catalog is very similar.

Screen Shot 2016-03-31 at 6.23.05 PM.png

The SAP Fiori Launchpad Designer is used to direct target navigation. To demonstrate, Bob searches for order processing and opens the Sales - Sales Order Processing catalog. If you view the tiles in list format, you will find an Action and a Target URL for each tile. This tells you what will happen when the tile is selected. With the Target Mappings option you can define what happens when you select a specific tile. You can also choose whether the tile can be viewed on a tablet and/or phone.

Screen Shot 2016-03-31 at 6.30.29 PM.png

How to Create and Secure a Catalog

Screen Shot 2016-03-30 at 5.22.00 PM.png

Bob details how to create a catalog in the series' third video. Bob also walks through how to secure the catalog so users on the SAP Fiori Launchpad can access it.

To create a new catalog, first click on the plus button at the bottom of the SAP Fiori Launchpad Designer. Bob elects to create a catalog using Standard syntax and gives it a title and an ID of ZX_KPI_CAT. Once the new catalog is created, click on the Target Mapping icon. You can create a new Target Mapping here, but the simplest way is to copy a Target Mapping from an existing catalog. So Bob navigates to the Target Mapping for the Sales - Sales Order Processing catalog. Then, Bob selects the Target Mapping at the bottom that has * as its semantic object before clicking on the Create Reference button at the bottom.

Selecting the catalog you've recently created (ZX_KPI_CAT) will create a Target Mapping in that catalog with the same rights as the semantic object you selected from the existing catalog. Now, back in the ZX_KPI_CAT catalog you can confirm that the Target Mapping of * has been replicated.

Next, you must enable a user to access the catalog. Go back into SAP Logon and log in as the KPI user on client 100. Running the command /nPFCG opens role maintenance, where you can build a role. Bob names his role ZX_KPI_CAT and selects Single Role. He duplicates the name as the description and saves the role. Then, in the Menu tab, Bob chooses SAP Fiori Launchpad Catalog as the transaction. Next, Bob finds and selects his ZX_KPI_CAT in the menu for Catalog ID.

This has built a role that grants access to the ZX_KPI_CAT catalog. Next, Bob opens the User tab and enters KPI as the User ID. Now, after saving, the KPI user can access the ZX_KPI_CAT catalog and the security has been fully set up.

Accessing Core Data Services

In the fourth video of the series Bob shows how to access a Core Data Service. Core Data Services access the SAP S/4 HANA tables, which are ultimately exposed as OData. For more information on how to build and use CDS views, please watch this series of tutorials from the SAP HANA Academy.

First, in Eclipse, Bob duplicates the connection he's already established but opts to use the KPI user with client 100 instead of his original SHA user. Now Bob is connected to the SAP S/4 HANA system as the KPI user. Next, Bob finds an already existing CDS view by opening a search on the ABAP Object Repository and searching for an object named ODATA_MM_ANALYTICS. Once the search has located ODATA_MM_ANALYTICS (ABAP Package), Bob opens it and navigates to its Package Hierarchy in order to see its exact link.

ODATA_MM_ANALYTICS is in a sub-package of APPL called ODATA_MM. Navigate to the ODATA_MM package from the System Library on the left-hand side and find ODATA_MM_ANALYTICS before adding it to your favorites. Opening the Data Definitions folder within the Core Data Services folder of the ODATA_MM_ANALYTICS package will show the pre-built Core Data Services. Bob opens C_OVERDUEPO. C_OVERDUEPO is a consumption view, so a BI tool can query it directly.

Another way to view a CDS's syntax is to right-click on it and choose to open it with the Graphical Editor. This depicts the logical view of the data. The C_OVERDUEPO view comes from the P_OVERDUEP01 view. This is a great way to track the data back to its source table.

To check that the data from the C_OVERDUEPO CDS view is correctly exposed as OData, Bob resets his perspective. Then, Bob right-clicks on the view and opens OData Exposure underneath the Secondary Objects header in the outline. This opens the OData service in a browser, and Bob logs in as the KPI user. To test, you can append $metadata to the end of the URL to see the various columns for the entities of the CDS view.
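The same $metadata check can be scripted rather than done in the browser. A minimal sketch using curl, where the host name, port and credentials are placeholders you would replace with your own (only the C_OVERDUEPO_CDS service path is taken from the video):

```shell
#!/bin/sh
# Placeholders: substitute your SAP S/4 HANA gateway host/port and user.
HOST="https://s4hana.example.com:44300"
SERVICE="/sap/opu/odata/sap/C_OVERDUEPO_CDS"
URL="${HOST}${SERVICE}/\$metadata"

# $metadata returns an EDMX document describing the entity sets
# and their columns, just like appending it to the URL in the browser.
curl --silent --max-time 5 --user "KPI:password" "$URL"
```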

Using the KPI Modeler

In the fifth and final video of the series Bob details how to use the KPI Modeler.

First, Bob opens the KPI Design catalog in the SAP Fiori Launchpad and selects the Create Tile tile. Bob names it KPI Overdue PO and chooses C_OVERDUEPO as the CDS View for the Data Source. Then, Bob selects the corresponding OData service, /sap/opu/odata/sap/C_OVERDUEPO_CDS, and its entity set, C_OverduePOResults. For Value Measure Bob selects OverdueDays. Then, he clicks Activate and Add Evaluation.
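Bob's selections correspond directly to an OData request. As a sketch (the host and credentials are hypothetical; the service path and entity set name are the ones Bob selects), you could confirm the entity set returns data with:

```shell
#!/bin/sh
# Placeholders: substitute your gateway host/port and user credentials.
HOST="https://s4hana.example.com:44300"
SERVICE="/sap/opu/odata/sap/C_OVERDUEPO_CDS"
ENTITY="C_OverduePOResults"
URL="${HOST}${SERVICE}/${ENTITY}?\$top=5&\$format=json"

# Fetch the first five rows of the entity set as JSON.
curl --silent --max-time 5 --user "KPI:password" "$URL"
```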

The evaluation is a filter that controls what the data shows. Bob names the evaluation Last Year - KPI. For Input Parameters and Filters Bob elects to only display EUR as his currency and sets his evaluation period to 365 days. For his KPI Goal Type Bob keeps the default, Fixed Value Type. Bob sets his target threshold at 500, his warning threshold at 300 and his critical threshold at 100. Then, Bob clicks Activate and Configure New.
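The currency restriction in the evaluation is the same idea as an OData $filter. A hedged sketch against the same service (the host, credentials, and the Currency property name are assumptions; check the service's $metadata for the real property name):

```shell
#!/bin/sh
# Placeholders: host, credentials, and the "Currency" property name
# are assumptions; verify the property name against $metadata.
HOST="https://s4hana.example.com:44300"
SERVICE="/sap/opu/odata/sap/C_OVERDUEPO_CDS"
ENTITY="C_OverduePOResults"

# --get with --data-urlencode appends the filter as a URL-encoded
# query string, restricting the result set to EUR rows only.
curl --silent --max-time 5 --user "KPI:password" --get \
  --data-urlencode "\$filter=Currency eq 'EUR'" \
  "${HOST}${SERVICE}/${ENTITY}"
```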

There Bob is presented with various tile formatting options. In his simple demonstration Bob keeps the default tile configurations. Bob chooses ZX_KPI_CAT as his catalog before clicking on Save and Configure Drill-Down. Drill-Down determines what happens when the KPI is selected. Bob chooses to filter down with a Dimension of Material and a Measure of Overdue Days. This will create the chart depicted below.

(Screenshot: the resulting drill-down chart of Overdue Days by Material)

Bob gives his view a title of By Product and chooses to use Actual Backend Data, so when the tile is clicked on in the SAP Fiori Launchpad it will link to the chart. After clicking OK, Bob clicks on the + button at the top of the screen to add some of the various charts that are subsequently listed; these selections will appear when the tile is drilled into. You can add additional graphical options if you desire different views of the data. Bob selects two charts before clicking on Save Configuration.

Back on the homepage of the KPI Design window, Bob clicks on the pen icon at the bottom right of the screen to configure what will be shown in the window. Click on the Add Group button and name the group; Bob names his KPI's Fiori Tile Group. Then, clicking the + button below the name allows you to add catalogs. It will load all of the catalogs your user has created. Bob adds the ZX_KPI_CAT catalog.

Once you turn off edit mode you can view your Overdue PO tile.

For more tutorial videos about What's New with SAP HANA SPS 11 please check out this playlist.

SAP HANA Academy - Over 1,300 free tutorial videos on SAP HANA, SAP Analytics and the SAP HANA Cloud Platform.

Follow us on Twitter @saphanaacademy and connect with us on LinkedIn to stay abreast of our latest free tutorials.

