
SAP HANA and In-Memory Computing


With the new SPS 11 release of SAP HANA, it's a good time to start planning for upgrades.


As of SAP HANA revision 110 or higher, the database is built with a newer compiler version than the one originally delivered with RHEL 6 and SLES 11. Therefore, an additional runtime environment for GCC 4.8 needs to be installed before updating the SAP HANA database to SPS 11. We ask customers to update their operating systems in the following way (a sketch of the package installation follows the list):


  • RHEL 6 - The GCC 4.8 runtime is available via the normal software update repositories.
  • SLES
    • SLES 11 SP4 - The GCC 4.8 runtime is available via the normal software update repositories.
    • SLES 11 SP2 and SP3 - Update to SLES 11 SP4 or manually download the required packages. See SAP Note 2228351 for more details.
    • SLES 11 SP1 (no longer supported) - The GCC 4.8 runtime is not available for SP1; updating to SAP HANA SPS 11 is therefore not possible. Update to SLES 11 SP2 or higher first.
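
A sketch of the package installation (package names as referenced in SAP Note 2228351; verify against the note for your exact OS level):

# RHEL 6: the GCC 4.8 runtime is shipped as a compat package
yum install compat-sap-c++
# SLES 11 SP4: the GCC 4.8 runtime libraries
zypper install libgcc_s1 libstdc++6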


Please note: Systems running a HANA DB client only, e.g. SAP application servers, do not require an update of the libraries, as the client software is still built with the original GCC 4.3 compiler.


For the most up-to-date information about these changes, refer to SAP Note 2228351.

SAP HANA's Web tools for administration, development, and application lifecycle management all provide embedded user assistance that's both task and context specific. As SAP HANA automated content, embedded help is automatically installed and updated. This means it's always available – no Internet connection required.


Here's a short demo video:



So look out for the "?" symbol or Help button in the following tools:

  • SAP HANA Cockpit
  • SAP HANA XS Administration Tools
  • SAP HANA Developer Workbench
  • SAP HANA Application Lifecycle Management


For more on additional user assistance like embedded help, see the post Additional User Assistance for SAP HANA SPS 11.


Your SAP HANA platform documentation team



In the upcoming weeks we will be posting new videos to the SAP HANA Academy to show new features and functionality introduced with SAP HANA Support Package Stack (SPS) 11.


What's New with SAP HANA SPS 11 - by the SAP HANA Academy


The topic of this blog is application lifecycle management.


What's New?


Integration with SAP HANA cockpit


Although the web version of SAP HANA Application Lifecycle Management (HALM) was already available in the previous version of SAP HANA, it is now also integrated with SAP HANA cockpit, the generic web-based administration tool. Using the HANA cockpit as a single home for all management tasks simplifies administration.


As a result, you can now access HALM in a browser using

  • SAP HANA cockpit (available from tile catalog) = NEW
  • SAP HANA Web-based Development Workbench
  • Web application URL (http://<host>:80<instance_number>/sap/hana/xs/lm)
  • SAP HANA Studio internal/external browser access through context menu: Lifecycle Management > Application Lifecycle Management


Of course, the command line tool hdbalm is also still available.
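
For example, hdbalm can be invoked like this (a sketch only; host, port, user, and archive name are illustrative, and the option syntax should be checked against the hdbalm reference):

hdbalm -h myhost -p 8000 -u SYSTEM install myproduct.zip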


Tutorial Video


SAP HANA Academy - SAP HANA SPS 11: What's New? - Application Lifecycle Management - YouTube





SAP Library



SAP Notes


Thank you for watching


You can view more free online videos and hands-on use cases to help you answer the What, How and Why questions about SAP HANA and the SAP HANA Cloud Platform on the SAP HANA Academy at youtube.com/saphanaacademy.


Follow us on Twitter @saphanaacademy


Connect with us on LinkedIn

We are pleased to announce that SAP HANA SPS11 is now available for download from the SAP Service Marketplace. In addition to the software, the accompanying documentation and release notes are posted here. SAP HANA SPS11 delivers new capabilities to help customers innovate modern applications, accelerate insights, and simplify IT. Stay tuned to the HANA.SAP.com blogs in early December for detailed content on SAP HANA SPS11, including technical Expert Sessions, HANA Academy videos, and much more.

In a set of three tutorial videos the SAP HANA Academy's Tahir Babar Hussain (aka Bob) details how to connect SAP Lumira to SAP HANA Vora.

All of the files and code used throughout the series are available for free on GitHub.

Overview and Connecting with the SAP Thrift Server


In the series' first video, Bob provides a brief overview of what he's attempting to achieve with regard to a SAP Lumira analytic. Then Bob shows how to install all of the relevant drivers within SAP Lumira and how to connect to the SAP HANA Vora system from SAP Lumira.

First, open SAP Lumira and navigate to File and then Preferences. Go to the SQL Drivers page and install the Generic JDBC data source drivers. This series assumes that you're using at least version 1.29 of SAP Lumira. With the Generic JDBC driver selected, click Install Driver and then navigate to the Spark JDBC folder. Select all of the lib folders before clicking Open and then Done. Finally, restart SAP Lumira.

Currently, Bob has a SAP HANA Vora instance running on a five-node cluster. In a PuTTY session on his master node, Bob logs in as the Vora user. He then navigates to his Vora bin folder and starts the SAP thrift server.

To connect SAP Lumira to SAP HANA Vora, go to New in SAP Lumira and choose Query with SQL. Choose the Generic JDBC datasource and click Next. As of late November 2015, no user name and password are needed, so Bob enters X in both text boxes. For the JDBC URL, Bob enters a URL with port 10000, as the thrift server is running on port 10000. Bob has created a Hadoop entry on AWS that references a certain IP address in an ini file. You will also need to include the pair of switches depicted below.

[Screenshot: JDBC connection settings showing the URL and the required switches]

The class value is com.simba.spark.jdbc4.Driver. Then hit Connect. Of course, you won't be able to see any objects yet, as we have not loaded any data into our SAP HANA Vora system.
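
In summary, the connection settings look roughly like this (the URL form is an assumption based on the Simba Spark JDBC driver; append the switches shown in the video):

JDBC URL: jdbc:spark://<thrift-server-host>:10000
Class: com.simba.spark.jdbc4.Driver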

[Screenshot: the new connection with no objects visible yet]

Loading Data with Beeline and Building a SAP Lumira Analytic


In the second video, Bob examines how to use Beeline, a JDBC tool. Bob shows how to create a table and how to load data into HDFS using Beeline. He also demonstrates how to build a simple analytic in SAP Lumira using that recently loaded data.

In PuTTY, Bob navigates to the Vora home folder and adds a new CSV file that lists seven of the richest football clubs in Europe. Bob then runs the command below to put the CSV file into HDFS.

[Screenshot: command to put the CSV file into HDFS]
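
Such a command typically looks like this (file name and HDFS target path are illustrative):

hdfs dfs -put rich_clubs.csv /user/vora/rich_clubs.csv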

Next, Bob starts up Beeline, a tool that enables JDBC connectivity to a variety of sources, including the SAP thrift server. Bob runs the command below to connect to the thrift server.

[Screenshot: Beeline command to connect to the thrift server]
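
A typical Beeline invocation of this kind (host and user are illustrative):

beeline -u "jdbc:hive2://<master-node>:10000" -n vora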

Then Bob enters the syntax below to create a table for his richest football clubs data. The table has five columns: Club string (team name), Revenue double, MatchDay double, Broadcasting double, and Commercial double.

[Screenshot: CREATE TABLE statement for the football clubs data]
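
A sketch of such a statement using the Vora Spark data source (the USING clause and OPTIONS keys are assumptions based on the SAP HANA Vora developer guide; paths are illustrative):

CREATE TABLE RICH_CLUBS (
  CLUB string,
  REVENUE double,
  MATCHDAY double,
  BROADCASTING double,
  COMMERCIAL double
)
USING com.sap.spark.vora
OPTIONS (tableName "RICH_CLUBS", paths "/user/vora/rich_clubs.csv");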

The table that Bob just created lives only as long as the thrift server is running.

Back in SAP Lumira, the football clubs table is now visible in the connection that Bob established in the previous video. To preview the data, Bob enters the select * command shown below. This is the same command that one would run in PuTTY to view the table's data.

[Screenshot: previewing the table data with select *]

Now Bob creates a report in Lumira from that data preview. Bob adds a new measure for MatchDay and constructs a simple stacked bar chart plotting his club dimension against a combination of the Broadcasting, Commercial and MatchDay measures.

[Screenshot: stacked bar chart in SAP Lumira]

Appending Data in SAP HANA Vora and Refreshing SAP Lumira Analytics


In the third and final tutorial video in the series, Bob details how to append data in SAP HANA Vora so that the changes are subsequently reflected in the SAP Lumira analytic.

Back in PuTTY, Bob stops and then restarts his SAP thrift server. The tables you've previously created no longer exist on the thrift server, but they do exist in Zookeeper. So back in Beeline, Bob runs the command below, which specifies each of his Zookeeper servers, to re-register his football club table from HDFS.

[Screenshot: command to re-register the tables using the Zookeeper servers]
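
A sketch of such a re-registration statement (the zkurls list is illustrative):

REGISTER ALL TABLES USING com.sap.spark.vora
OPTIONS (zkurls "zk1:2181,zk2:2181,zk3:2181");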

Next, Bob adds a pair of new CSV files to his Vora home folder to demonstrate that you can append multiple files at a time in SAP HANA Vora. Then Bob runs the commands shown below to put both of those new files into HDFS.

[Screenshot: commands to put the new CSV files into HDFS]

Back in SAP Lumira, refreshing the analytic won't show any of the new data, because it has yet to be appended to the SAP HANA Vora table. To append the data, Bob connects to his HDFS system with Beeline and runs the command below.

[Screenshot: command to append the new files to the table]
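
A sketch of such an append statement (assuming the APPEND TABLE syntax of the Vora data source; paths are illustrative):

APPEND TABLE RICH_CLUBS
OPTIONS (paths "/user/vora/rich_clubs2.csv,/user/vora/rich_clubs3.csv");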

Now when Bob refreshes his analytic in SAP Lumira, the Broadcasting, Commercial, and MatchDay revenue for a set of additional football clubs is depicted.

[Screenshot: refreshed analytic showing the additional clubs]

Finally, Bob appends even more football club data to the HDFS table. He is now able to use SAP Lumira to perform analysis on 20 of the richest football clubs in Europe.

[Screenshot: analysis of 20 of the richest football clubs]

For more SAP HANA Vora tutorial videos, please check out the playlist.

SAP HANA Academy - Over 1,200 free tutorial videos on SAP HANA, SAP Analytics and the SAP HANA Cloud Platform.

Follow us on Twitter @saphanaacademy and connect with us on LinkedIn.

New and updated product documentation for SAP HANA Platform (Core) SPS 11 is now available on SAP Help Portal. Here you'll find all the essential SAP HANA guides and references from installation, security and administration through to application development and information modelling.


While SAP Help Portal remains the go-to place for SAP HANA documentation, many additional user assistance resources are available to support and guide you: embedded help, how-to videos, tutorials, as well as guides and blog posts dedicated to key feature areas.


Over the coming weeks and months, we will use this blog post to bring you the latest on these additional offerings. So don't miss out, follow this post!


Your SAP HANA platform documentation team




Product Documentation for SAP HANA, SPS 11


25.11.2015 | SAP HANA Platform (Core) on SAP Help Portal | SAP HANA guides and references
25.11.2015 | SAP HANA Options and Additional Capabilities on SAP Help Portal | SAP HANA guides and references


Additional User Assistance for SAP HANA Platform (Core), SPS11


26.11.2015 | Getting Help in SAP HANA Web Applications | Embedded help and how-to video



In the upcoming weeks we will be posting new videos to the SAP HANA Academy to show new features and functionality introduced with SAP HANA Support Package Stack (SPS) 11.


What's New with SAP HANA SPS 11 - by the SAP HANA Academy


The topic of this blog is database backup and recovery.


What's New?


Enhancements for SAP HANA Cockpit


Using the SAP HANA Backup tile in the SAP HANA cockpit, you can now make not only full backups but also delta backups. Delta backups, both incremental and differential, were introduced in SAP HANA SPS 10; for more information, see SAP HANA SPS 10 What's New: Database Backup and Recovery - by the SAP HANA Academy. In SPS 10, delta backups could only be made using the command line or SAP HANA studio. With SPS 11, this functionality is now also available in the SAP HANA cockpit.
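
For reference, delta backups can also be triggered with SQL statements like these (a minimal sketch; the backup destinations are illustrative):

BACKUP DATA INCREMENTAL USING FILE ('/backup/data/MONDAY_INC');
BACKUP DATA DIFFERENTIAL USING FILE ('/backup/data/MONDAY_DIFF');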


[Screenshot: SAP HANA Backup tile with delta backup options]


Resuming an Interrupted Recovery


Recovering a very large database can be time-consuming. Having to repeat the whole recovery process because, for example, the connection to the recovery tool was temporarily lost is nobody's definition of fun. As of SPS 11, it is now possible to resume an interrupted SAP HANA database recovery.

The way it works is that during recovery, SAP HANA automatically sets fallback points. The first fallback point is set after the backup restore phase, when the data files have been put back into place. Then, at intervals during the recovery phase, that is, while HANA is going through the log backups, additional fallback points are set. When there is a failure during this stage, the recovery can be resumed. These fallback points are recorded in backup.log.


You can recover a database using SQL statements or the recoverSys.py tool (a Python script).
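
For example, a point-in-time recovery can be requested with a statement of the following form, executed while the database is in recovery mode (a minimal sketch; real recoveries typically also specify catalog and backup locations):

RECOVER DATABASE UNTIL TIMESTAMP '2015-11-25 12:00:00';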




Multistreaming Data Backups with Third-Party Backup Tools


You can now use multiple channels to write the backup data of each service with third-party backup tools. This can considerably speed up backups by distributing the backup data in parallel across multiple devices.
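
The number of channels is controlled via the backup section of global.ini; a sketch, assuming the parallel_data_backup_backint_channels parameter introduced with SPS 11 (the value 4 is illustrative):

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
SET ('backup', 'parallel_data_backup_backint_channels') = '4'
WITH RECONFIGURE;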


New Certification for Third-Party Backup Tools


In recent months, several new tools have been certified in the BACKINT program, among them the HP StoreOnce Catalyst Plug-in for SAP HANA.

You can look them up on the SAP Partner Directory with filter "HANA-brint".

For the program, see Backup/recovery API for the SAP HANA database (Backint for SAP HANA (HANA-BRINT 1.1))


Tutorial Video


SAP HANA Academy - SAP HANA SPS 11: What's New? - Backup and Recovery - YouTube





SAP Library


SAP Notes


Thank you for watching


You can view more free online videos and hands-on use cases to help you answer the What, How and Why questions about SAP HANA and the SAP HANA Cloud Platform on the SAP HANA Academy at youtube.com/saphanaacademy.


Follow us on Twitter @saphanaacademy


Connect with us on LinkedIn



In the upcoming weeks we will be posting new videos to the SAP HANA Academy to show new features and functionality introduced with SAP HANA Support Package Stack (SPS) 11.



What's New with SAP HANA SPS 11 on YouTube


You can view the full playlist here: SAP HANA SPS 11 - What's New - YouTube




What's New with SAP HANA SPS 11 - SCN Blogs


We have also posted additional blogs to SCN to provide more context and links to additional documentation (when available) and other resources. We will keep these blogs updated over the next couple of weeks, and feel free to add your comments should you have any questions.


SAP HANA SPS 11 What's New: System Administration - by the SAP HANA Academy


SAP HANA SPS 11 What's New: Backup and Recovery - by the SAP HANA Academy


SAP HANA SPS 11 What's New: Application Lifecycle Management - by the SAP HANA Academy


Product Documentation and Additional User Assis... | SCN






For more information, see


SAP Notes


Product Availability Matrix (PAM)


Thank you for watching


You can view more free online videos and hands-on use cases to help you answer the What, How and Why questions about SAP HANA and the SAP HANA Cloud Platform on the SAP HANA Academy at youtube.com/saphanaacademy.


Follow us on Twitter @saphanaacademy


Connect with us on LinkedIn



In the upcoming weeks we will be posting new videos to the SAP HANA Academy to show new features and functionality introduced with SAP HANA Support Package Stack (SPS) 11.


What's New with SAP HANA SPS 11 - by the SAP HANA Academy


The topic of this blog is system administration.


What's New?


SAP HANA Cockpit for Offline Administration


The SAP HANA cockpit for offline administration is a tool with the same Fiori look and feel as the SAP HANA cockpit (to keep it simple), and the same functionality as the SAP HANA administration console in diagnostic mode (cloud-enablement).

The regular SAP HANA cockpit is a web application powered by SAP HANA, which also means that when HANA is down, the web app is not available. The SAP HANA cockpit for offline administration fills this gap, as it is hosted by the SAP host agent, just like the tool for platform lifecycle management, HDBLCM: port 1128 for HTTP and 1129 for HTTPS.
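
For comparison, the web UI of HDBLCM hosted by the same host agent is reached at a URL of the following form (per the platform lifecycle management documentation):

https://<host>:1129/lmsl/HDBLCM/<SID>/index.html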


[Screenshot: SAP HANA cockpit for offline administration]


You can use the SAP HANA cockpit for offline administration to:


  • start, stop, or restart a SAP HANA system
  • view or download log and trace files generated by the different SAP HANA processes (indexserver, daemon, xsengine, etc.), and collect diagnostic information and/or runtime environment (RTE) dump files
  • troubleshoot an unresponsive system by analysing connections, transactions, and/or threads, with the option to cancel (all) transactions


You connect to the tool using the operating system credentials of the <SID>adm user. For this reason, SAP strongly recommends using HTTPS and replacing the self-signed certificate with a properly signed one.


Statistics Server Data Management


To prevent the tables that store statistics and alerts data in the _SYS_STATISTICS schema from growing to unmanageable sizes, we can now perform data management operations on these tables using SQL.

Additionally, default limits have been set for these tables: a maximum of 42 days for collectors and 1,000,000 rows for alerts.


For example (the _SYS_STATISTICS table and column names below are assumptions based on the statistics service documentation; verify against your revision):

UPDATE _SYS_STATISTICS.STATISTICS_SCHEDULE
SET RETENTION_DAYS_CURRENT = 42 -- enter here the retention period in days
WHERE ID = 8 -- enter here the collector identifier
AND TYPE = 'Collector';

-- Limit the number of rows kept for alerts (use UPDATE instead if the key already exists):
INSERT INTO _SYS_STATISTICS.STATISTICS_PROPERTIES
VALUES ('internal.alerts.maxrows', 500000);


Tutorial Video


SAP HANA Academy - SAP HANA SPS 11: What's New? - System Administration - YouTube





For more information see:


Thank you for watching


You can view more free online videos and hands-on use cases to help you answer the What, How and Why questions about SAP HANA and the SAP HANA Cloud Platform on the SAP HANA Academy at youtube.com/saphanaacademy.


Follow us on Twitter @saphanaacademy


Connect with us on http://linkedin.com/in/saphanaacademy

SAP HANA is one of our most popular topics on openSAP, and we want you to continue to learn everything about the product that is of interest to you. To that end, we are happy to announce the new SAP HANA Core Knowledge Series, with the first course, Text Analytics with SAP HANA Platform, starting in January.


The key to extracting important information from unstructured and semi-structured textual data is using natural-language processing. Text analytics within SAP HANA is a suite of linguistic, statistical, and machine learning capabilities that model and structure the information content of textual sources in multiple languages. This technology is a strategic asset that forms the foundation for natural-language processing for a range of discovery applications.


With SAP HANA, text is a first-class data type.


Text analytics in SAP HANA provides search, text analysis, and text mining capabilities to gain insights from text sources. From a usage perspective, these three areas are distinct, but they are interrelated on a technical level and largely depend on the same foundation technology. This course, Text Analytics with SAP HANA, will provide details regarding each area and is ideal for data scientists, application developers, and technical business analysts.


Demos and optional hands-on exercises will be made available so you can experience text analytics first-hand within SAP HANA. The course begins January 19 and will run over five weeks. All you need to participate is a valid email address. Registration, learning content, and final exam are all provided completely free of charge.


If you’d like to learn more about Text Analytics within SAP HANA, sign up today!



openSAP is SAP’s Open Online Course provider. You don’t need to travel or take weeks out of the office to attend training. With openSAP, you can complete the course content at your convenience – anywhere, any time and on any device. openSAP is open to anyone interested in learning about SAP’s innovative products and solutions. Registration, learning content, final exam and Record of Achievement are provided free of charge. Find out more at openSAP.

Starting the Migration Tool Dialog

To start the migtool mode of SUM, open a browser window and enter the following internet address in the address bar:

https://<hostname>:1129/lmsl/migtool/<SID>/doc/sluigui

Replace <hostname> with the host on which SUM runs and <SID> with the system ID; 1129 is the HTTPS port of the SAP Host Agent. (The path above follows the standard SUM URL pattern; check the DMO guide for your SUM version if it differs.)


If the secure socket layer is not configured, use http and port 1128 instead:

http://<hostname>:1128/lmsl/migtool/<SID>/doc/sluigui

Useful Commands during SUM with DMO

Temporary database space is required for the shadow system; it is calculated during the execution of the roadmap step Checks. To check the space used at different points in time, use the command:

<SUM directory>/abap/bin/SAPup dmpspc


To change the number of R3load processes dynamically, use the following commands:

cd <SUM directory>/abap/bin

./SAPup set procpar gt=scroll


Performance Optimization with Migration Duration Files

During a migration, SUM creates text files with the extension XML that contain information about the migration duration for each migrated table.

The files are created in the directory SUM\abap\htdoc\ and are called:

MIGRATE_UT_DUR.XML for the uptime migration

MIGRATE_DT_DUR.XML for the downtime migration

They can be used in the next DMO run to improve the table splitting. To do this, put the files into the download folder so that SUM can consider them during the next run.

Add the following line to the file SAPup_add.par:

/clonepar/clonedurations = <absolute_path>/MIGRATE_UT_DUR.LST,<absolute_path>/MIGRATE_DT_DUR.LST

Oracle: Suppressing Long-Running Phases

Update the database statistics using the command below before starting the update:

brconnect -u / -c -f stats -o <schema_owner> -t all -f allsel,collect,space -p 83

Then add the following line to the file SAPup_add.par:

/ORA/update_spacestat = 0

The file SAPup_add.par is located in the directory <SUM directory>/abap/bin/. If it does not exist yet, create it manually.

Required Keys

A valid migration key for the new target database: request the migration key from SAP Service Marketplace at https://support.sap.com/migrationkey.

A permanent SAP license for the system that will be migrated to the target database: download the license key from http://support.sap.com/licensekey.

Useful log files


Analyze the log file EUMIGRATERUN.LOG for benchmarking results

Row Store vs Column store

The table type in the SAP HANA database (row store vs. column store) is controlled by the file <SUM directory>/abap/bin/ROWSTORELIST.TXT (target release SAP NW 7.31 only).


Time estimation

The time estimation for the overall process and for long-running phases is written to the file:

<SUM directory>/abap/log/SAPupStat.log

Changing Schema Name

After the extraction of SUM, add the following line to the file SAPup_add.par:

/migrate/targetschemasid = <VAL>

<VAL> is the desired identifier, which is appended to form the schema name SAP<VAL>.


Using the Migration Repetition Option for Testing

If you have enabled the Migration Repetition option, this dialog displays an additional warning that SUM will stop after the downtime migration phase and allow you to repeat the phase. Be aware that you must not use this option for a productive run.


If you want to disable the option now, set the following parameter in the file SUM\abap\bin\SAPup_add.par:

/migrate_option/OPTIMIZECLONING = 0


Downloading Files from Maintenance Optimizer


When you select the kernel files using MOpz, note that you have to select the kernel files of the target software release for both the source database and the target database. The reason is that the system first creates the shadow instance for the target software release on the existing source database and then copies it to the target database.


Upload the following files to a folder on the primary application server (PAS):

  • the files calculated by MOpz, including the kernel for the source and the target database
  • the stack.xml file
  • the target database client for the required operating system
  • SAPCryptoLib (if encryption is used in the source system)


Reset Option for DMO



To carry out the reset, the following is required:

  • anyDB is still available
  • the SUM directory has not been cleaned up

Be aware of the following:

  • the target database client is not deleted from the PAS/AAS automatically
  • BR*Tools are not restored
  • the user DBACOCKPIT and related roles are not deleted in the target database

No Reset possible after Cleanup

After the update and migration procedure comes to an end, the Software Update Manager additionally offers the option Cleanup. Be aware that after a cleanup, a reset is no longer possible.


Required Migration Specific Passwords


  • user DBACOCKPIT (only if it does not yet exist on the target database and SUM has to create it)
  • user SAP<SID> (SUM will create it on the target database)


Retired Parameters for SAPup_add.par file


  • /clonepar/imp/procenv = HDB_MASSIMPORT=YES (default value for current R3load versions, see SAP Note 21181195)
  • /clonepar/indexcreation = after_load (default value for current R3load versions, see SAP Note 21181195)

SAP DB Control Center (DCC), with its Fiori-based UI, allows users to create and use System Groups to organize and simplify the system view in each administration tile. In this blog I will introduce System Groups and how they relate to filtering in DCC.


Note: This blog post was originally created by a former colleague of mine, Yuki Ji, in support of a customer engagement initiative regarding the SAP DB Control Center (DCC) product.  To preserve this knowledge, the post was migrated to this space.

System Groups

To create a group, start by launching the System Directory tile, then select the System Groups tab.


Here you can create and manage your existing System Groups. I'm not going to cover creating System Groups in detail but it is covered here.

System Groups can be created according to preference and need. For example, I may want to group all the systems that are hosted at a certain location, or systems that are in production, test, or QA. Below, in TestGroup, I randomly selected PM1 to be in this group.


The groups listed below are fairly straightforward. "MDC system group" contains the system database and all tenant databases of the MDC system. "Single DB" contains two single-instance HANA database systems. TestGroup contains a single system, in this case PM1.



What we can do with System Groups is filter the systems displayed in the Enterprise Health Monitor, the Alerts Monitor, and all other DCC administration tiles.

For example, in the Enterprise Health Monitor the norm is to see all the registered systems (5 in this case):

When filtering systems by clicking on the filter button, you can select one or more System Groups to form the criteria. The systems that are then displayed are the ones that are part of any one of the selected System Groups. If a system is included in multiple selected System Groups, it is only included in the results once.

MDC System Group and TestGroup applied as filters:


Single DB and TestGroup applied as filters - Although PM1 is included in both "Single DB" and "TestGroup", it only appears in the results once:


All in all, System Groups can be used to organize your systems and simplify your view.




During my latest development, I came upon an interesting scenario that involved manipulating data linked to WBS trees.


My first impression was to use Analytical Views to take advantage of the MDX plugins that nicely pivot and showcase hierarchies in MS Excel.

We have been building UI5 web apps for our customer, and they wanted to use a similar data consumption tool instead of Excel.

This was going to be a challenging project, since it required presenting the WBS hierarchies as flat data, as I had done for previous projects.


Time to start working.....



First order of business, had to deal with getting the WBS tree ready for manipulation.


Consider many unbalanced WBS trees of a similar format:




Each node will possess one or more values; some of those values will be aggregated up to its parent, its grandparent, and so on, and some won't.



I needed a view that would show the full tree as shown above.


Tables PRPS and PRHI contain the information I needed.


PRPS will contain the WBS element

PRHI will contain the siblings, parent, first child of the node, and conveniently the Hierarchy pointer (PRHI-PSPHI) which accompanies nodes that belong to the same tree.


For my tree, I needed each node to also showcase the root node, which the PRHI table does not provide.


I created a view that would map each WBS element to its root node.


How to achieve this?

Join the hierarchy table to the WBS table restricted to root nodes only (root nodes are the ones where UP = '0000000'; similarly, leaves have DOWN = '0000000').

See below:
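
A minimal SQL sketch of the idea, as a self-join of PRHI (column usage simplified for illustration; as above, roots are the rows where UP = '0000000'):

SELECT nodes.PSPNR AS wbs_element,
       roots.PSPNR AS root_node,
       nodes.PSPHI AS hierarchy_ptr
FROM PRHI AS nodes
JOIN PRHI AS roots
  ON roots.PSPHI = nodes.PSPHI -- same tree
 AND roots.UP = '0000000';     -- roots have no parent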



Now each WBS element would have a root and a hierarchy pointer attached to it.




To finish building the tree, I needed to add the rest of the information that PRHI provides (siblings, parent, first child).

Joining this view back to the PRHI table gives me the WBS tree that I want.




Notice how the root node shows up (parent as NULL); this can be useful if you are building some sort of recursive functionality. As of now, I'm not quite sure we can do recursion in HANA (e.g. WITH RECURSIVE or something similar), but I left it there for demonstrative purposes.




So far I just have a plain mapping of data; I still need to actually give it a hierarchy.



Time to start taking care of your kids....



To create a truly useful mapping, I need a parent-child hierarchy; PSPNR and UP provide such a mapping.

Create the hierarchy as seen below.




In this case, I've used POSID and UP_POSID as my hierarchical mapping for business reasons. PSPNR and UP_PSPNR are better candidates, since they actually show a numerical order.


Note: Up to this point, what I have described can also be used for MDX, if that is what your business case calls for. You just need to adapt the hierarchy with whatever values you are aggregating in an Analytic View.


Once we have built the hierarchy, we are presented with a columnar hierarchical view in the _SYS_BIC schema. This will nicely show you numerical ordinals and paths, as seen below:




For the continuation of this example, I will only be interested in the path.



Mapping your family tree


Recall that only some values need to be aggregated for each node's subtree. The path field from the hierarchy will aid me in that process, since recursion is not doable.


I need to add the path to the hierarchical tree that already has the values (the values are calculated in another view that includes the hierarchy).

I cannot do this join in HANA graphical views, but I can do it in a procedure, which is what is done below:




I highlighted Path and Root, since these are the values that will give me the full aggregation without recursion. Note that I'm still using POSID as my main index, since that is what I used for my hierarchy. If you are using PSPNR, then adapt the code to use that field.


Next, I just wrap this procedure in a scripted calc view so I can analyze the data, which now includes root and path.




Looking for your descendants


At this point I possess all the values, plus the path and root, for each WBS node. So how do I aggregate each node's subtree?


I deliberately expand each node to include the path via the root node. Then I weed out the records where the WBS element is not included in its path.


When joining via the root node, each WBS element will have the maximum number of records for that tree.


For example,


POSID = 2101CRS will map to 255 records, as it is the root (full tree).

POSID = 2101CRS_53 will also map to 255 records, but I need to reduce that to only 7, since only 7 records have this POSID in their path.


The diagram below illustrates this graphically.




Here's how it looks in the HANA views








For POSID =2101CRS_53




As you can see, both will have 255.


To reduce, I just need to check whether the POSID is included in the path and filter out those records where it isn't. For this step, you can use whatever method you want. I utilized a calculated column as a flag and used the instr() function to identify whether that substring (POSID) is in the PATH.
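
A sketch of that flag in plain SQL (view and column names are illustrative; in a graphical calc view the equivalent calculated column uses instr() as described above):

SELECT POSID, PATH, REVENUE,
       CASE WHEN LOCATE(PATH, POSID) > 0 THEN 1 ELSE 0 END AS IN_PATH_FLAG
FROM EXPANDED_TREE;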




With this flag I can weed out unwanted records.


POSID = 2101CRS will still have 255 records, since it is the root node; all WBS paths will include it.




POSID = 2101CRS_53 will have 7 records, as only those have this POSID in their path.





Now, to get the aggregated node subtree values, I just need to aggregate by POSID, since each POSID now carries all of the values of its descendants.
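
Continuing the sketch above, the roll-up is then a plain GROUP BY over the filtered records (names still illustrative):

SELECT POSID, SUM(REVENUE) AS SUBTREE_REVENUE
FROM (
  SELECT POSID, REVENUE,
         CASE WHEN LOCATE(PATH, POSID) > 0 THEN 1 ELSE 0 END AS IN_PATH_FLAG
  FROM EXPANDED_TREE
)
WHERE IN_PATH_FLAG = 1
GROUP BY POSID;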





A nice and happy family


At this point I have the values that I wanted aggregated for each node. Without the use of any recursive functionality, I was able to compress the data that I wanted. Now, if your business case requires some values to be aggregated while others are not, just join back to the view that has the tree with the values, since you will now have a one-to-one mapping.





Now, I made this design using SPS 09. Reading the HANA literature, SPS 10 will have a nice feature that allows you to aggregate by the hierarchy directly from SQL (i.e. no need to expand and reduce as I did). I'll stay tuned for that feature and try this design with it in mind.



Thank you for reading, and leave your comments and thoughts below.



Luis Zelaya


Half of the world's population has a mobile device, connecting billions of us to each other. It's no surprise that mobile apps are at the heart of human interactions. Businesses want to acquire customers who are reachable through business apps. Decades ago, we (myself included) used to develop software applications on a computer and would undertake massive tasks to make them available to hundreds or thousands of users. Software maintenance and updates were the elephant in the room.
Lately, businesses can leverage a modern development platform to create apps that can reach billions of users. App maintenance and updates are just an ant in the room. What do we need to seize the present (carpe diem)?

  • In-memory computing for real-time operations, smarter decision making, and better business results
  • Open platform-as-a-service providing unique in-memory database and business application services
  • Standard platform for cloud applications built for fast-cycle innovation
  • Scalable and enterprise ready private and public cloud infrastructure
  • Open and interoperable platform for mission-critical computing—across physical, virtual and cloud environments

Basically, it is important that developers focus on innovation (value-add coding) and are freed from any technical limitation. To seize the present, we need open, fast, intuitive, and scalable everything: from development environment and deployment to productive use and maintenance.


  1. SAP HANA Cloud Platform and SAP HANA In-Memory Platform
  2. Cloud Foundry and OpenStack and SUSE OpenStack Cloud
  3. SAP leading Cloud Foundry project "BOSH OpenStack Cloud Provider Interface"
  4. SUSE Joins Cloud Foundry Foundation and Collaborate with SAP on OpenStack

What's New?


We have just added a number of video tutorials to the SAP HANA Academy about the SAP HANA Rules Framework (HRF). This series was created by Rob Case and Noam Gilady.


For more information about HRF, see Noam's blog HANA Rules Framework and the official product documentation on the SAP Support Portal.


Complete playlist: SAP HANA Rules Framework - YouTube





In the first two videos, the SAP Community Network, the SAP Help Portal, the SAP Support Portal and the Product Availability Matrix are introduced. Knowing where to find information is critical for success.



1. How to install SAP HANA Rules Framework: Delivery Unit



2. How to install SAP HANA Rules Framework: SAP HANA Studio Modeling Tools





In the next four tutorial video lessons, vocabulary is explained:


1. How to create a simple vocabulary?



2. How to create outputs and actions?



3. How to create dependent vocabularies?



4. How to create value lists?





The next four tutorial video lessons cover rules, decision tables, aliases, and the rule service:


1. How to create a text rule?



2. How to create a decision table?



3. How to create aliases?



4. How to create a rule service?



Thank you for watching


You can view more free online videos and hands-on use cases to help you answer the What, How and Why questions about SAP HANA and the SAP HANA Cloud Platform on the SAP HANA Academy at youtube.com/saphanaacademy, follow us on Twitter @saphanaacademy, or connect to us on LinkedIn.

