This post is part of an entire series

Hana SPS09 Smart Data Integration - Overview

 

What is the best-performing table layout for querying data in Hana? Certainly not the OLTP data model, where many joins are required to get all the data. The OLTP data model is best suited for entering and changing data, as it touches the smallest number of columns and rows. But in that case, is it really such a wise idea to copy the data 1:1 from the source OLTP system into Hana?

Sure, you get better performance with Hana than with your existing database, but even Hana is slowed down by every join. Not by much, but 20 times a little is still a lot. Hence it might be a good idea to read the source data, transform it and load it into a Hana-optimized data model instead of strictly keeping it as is.

 

Actually, analyzing the cases when the OLTP model is good enough and when transformation makes sense was an entire exercise I went through in another blog post: Comparing the Data Warehouse approach with CalcViews - Overview and Comparing the Data Warehouse approach with CalcViews - Examples.

The summary of my findings: Hana is simply amazing in how much it can cope with within sub-seconds. But the creativity and the needs of business users querying data are often even more amazing. Hence my conclusion is that operational reports can be run back where they belong - in the operational system - if it runs on Hana. But for true Business Intelligence - finding new business insights in the data - I would still suggest building star schema data models.

 

How do you transform the data during the load?

In the past the answer was quite simple: you use an ETL tool like Data Services. That answer is still valid and will remain so. But for just getting data into Hana, such an external ETL tool might have too large a footprint. These tools have their own look and feel, their own repository, their own administration and management; even simple things like data types are different. They are meant to read any source and put the data into any target.

Therefore we utilized existing Hana technology and combined it to build an ETL feature natively into Hana.

It comprises the following components:

  • CalcEngine as ETL engine
  • Hana Smart Data Access to connect to a remote source
  • Hana security and the entire Hana management actually
  • Hana monitoring
  • Hana repository for designtime storage and propagation to production
  • A new Hana editor, three in fact as of today: AFM Editor, WebIDE based Replication Editor, Smart Data Preparation - another WebIDE based tool aimed towards Business Users

Since this blog is located under Hana Development, let's focus on the tool targeting us, the IT professionals.

 

The AFM Editor

The AFM editor is a Hana Studio modeling editor like any other, so I open it from a Hana repository tree.

afm1.png afm2.png

With this object created, you can start drawing the dataflow: pull in source objects, target objects and transforms. I am using the term objects here for a reason, because it can actually be many things: Hana tables, virtual tables, views, calcviews, table types, temporary tables. For the ETL use case it is mostly (virtual) tables.

 

As usual in this blog series I want to do something special: in this case my sources are the 0MATERIAL_ATTR and 0MATERIAL_TEXT extractors of the SAP ERP system. So I drag and drop these virtual tables into the editor, use them as sources together with some transforms, and add a target table.

Note: This adapter is one I wrote, it is not a shipped one.

afm3.png

 

The interesting part is the types of transforms available in the calcengine now and hence in the editor.

On the right-hand side of the editor is a palette and, depending on the features installed in Hana, it shows different categories. If the Smart Data Integration and Smart Data Quality feature is installed, there are two categories.

 

Palette "General"

This contains the basic transforms.

afm4.png

 

  • Data Source: Reader
  • Data Sink: Loader
  • Data Sink (Template Table): A target table that is created matching its input columns
  • Aggregation: Allows grouping by columns and specifying aggregation functions for the others
  • Filter: Allows specifying filters and complex mappings like substring(col1,4,8)
  • Join: All kinds of joins (Note: this is not the existing calcengine join. With this join node you can join n tables at once, specify different join types for each, specify complex join conditions.)
  • Sort: To order data
  • Union: A Union-all operation
  • Procedure: In case you want to invoke a procedure with all transformed data being passed into it as table type parameter

 

 

 

 

 

 

Palette "Data Provisioning"

This contains all Data Integration-specific transforms.

afm5.png

 

  • Date Generation: A source that generates a range of date values, one per row
  • Row Generation: A source that generates a range of integer values, one per row
  • Case Node: To route the data through different paths controlled by conditions, e.g. All region='US' data should go into FilterUS, all region='EMEA' data into the transform FilterEMEA and all other rows into FilterOther.
  • Pivot: To take n input rows and merge them into one row with n columns. All other columns stay as is.
  • UnPivot: The reverse operation, to take n columns, e.g. Revenue_January, Revenue_February,... and create n rows, e.g. the 12 values are returned in the column Revenue but using 12 rows.
  • Lookup: To look up a value in another table and take exactly one match. A join might find multiple matches and hence output all combinations - often not desired, hence the lookup.
  • Cleanse: To cleanse all data based on reference information, most often used together with the postal address directory of every single country of the world.
  • Geocode: To translate addresses into long/lat information and vice versa. Or Point Of Interest relationships and the such.
  • Table Comparison: To identify for all rows the difference of the incoming data compared to the target table and what type of difference there is.
  • Map Operation: To influence the way read data is handled, e.g. all data read from table1 should be flagged as delete and hence will be deleted in the target table. Also to specify different mappings for differently flagged rows, e.g. all insert rows should have the value of now() in the INSERT_DATE column but not be changed by updates. And update rows should write the value of now() into the LAST_UPDATE column.
  • History Preserving: Deals with all the complexity when loading a target table that should retain the history. So instead of updating e.g. the customer record, a new customer record version should be added, the valid-from and valid-to columns of the old and new version should be updated,... things like that. Makes creating a Slowly Changing Dimension Type 2 a piece of cake. Note: Requires a Table Comparison transform somewhere upstream, else it does not know what changed and what the current values are.

 

Summary

As you hopefully concur, there is little reason to use an external ETL tool for loading data into Hana, even if transformations are required. Although I have talked about loading the data into Hana just once, this solution allows you to create delta logic as well. More importantly, it can deal with realtime data, too!

SQL Script uses a declarative style to simplify our data operations. Although HANA SQL is a powerful language, in some cases - especially when complex calculations are needed - we need to cache a table in memory and operate on it, just like an internal table in ABAP.

A table variable in SQL Script is a symbol that describes some SQL statement or other logic. It is not a "real" variable at runtime. SQL Script execution is not line by line; the script is analyzed by HANA and treated as one or several combined SQL statements. The consequence is that we can do very little with the intermediate data declared as a table variable in SQL Script.

Creating local temporary tables may solve the problem: we could create some temporary tables and use update or insert statements to manipulate the data in them. But then we need to create procedures with write privileges, which cannot be invoked by read-only procedures or calculation views.

Fortunately, SQL Script has an array concept, which helps us store table data temporarily in memory and manipulate it.

In this article, we use the SAP standard table SCARR as an example to show how to use arrays to handle data in memory.

1. Declare arrays for storing data

SQL Script does not have a "structure" or "internal table" concept, so if we want to keep table-like data, we should use several arrays, each representing one field.

DECLARE SCARR_MANDT NVARCHAR(3) ARRAY;     -- Declare an array of NVARCHAR(3)
DECLARE SCARR_CARRID NVARCHAR(2) ARRAY;
DECLARE SCARR_CARRNAME NVARCHAR(20) ARRAY; -- 20 characters, matching SCARR.CARRNAME
DECLARE SCARR_INDEX INTEGER;               -- Declare an integer used as the index (pointer) into the arrays



  • You can use any primitive data type of SQL Script.
  • Note that, possibly due to a bug, declaring an array of type DECIMAL and then reading a value from it leads to a compile error. So if you want to deal with decimals, use DOUBLE instead and convert the result back to DECIMAL after the calculation.
  • Following a [TABLENAME]_[FIELDNAME] naming rule keeps the code easy to understand.

2. Fill arrays with values from a table variable

After binding a select statement to a table variable, use the ARRAY_AGG function to fill each column's data into an array:

LT_SCARR = SELECT MANDT, CARRID, CARRNAME FROM SCARR;  -- read from SCARR, which contains the CARRNAME column
SCARR_MANDT = ARRAY_AGG(:LT_SCARR.MANDT ORDER BY MANDT, CARRID);
SCARR_CARRID = ARRAY_AGG(:LT_SCARR.CARRID ORDER BY MANDT, CARRID);
SCARR_CARRNAME = ARRAY_AGG(:LT_SCARR.CARRNAME ORDER BY MANDT, CARRID);



  • Sort the data by primary key so that every array has the same order; then one index can be used to access the fields of the same record.
  • Note the use of ":" when reading a variable's value.
  • The data types of the array elements and the table columns must be the same.

3. Loop over the arrays, read values from them and set values in them

FOR SCARR_INDEX IN 1 .. CARDINALITY(:SCARR_MANDT) DO    -- CARDINALITY returns the number of elements of an array; array indexing starts at 1
  IF :SCARR_CARRID[:SCARR_INDEX] = 'AA' THEN            -- Read a value using :ARRNAME[:INDEX]
    SCARR_CARRNAME[:SCARR_INDEX] := 'America Airline';  -- Set a value using ARRNAME[:INDEX] := VALUE
  END IF;
END FOR;
SCARR_INDEX := CARDINALITY(:SCARR_MANDT) + 1;           -- Point the index one past the last element to append a new row
SCARR_MANDT[:SCARR_INDEX] := '200';                     -- Set values at the new index directly; be careful not to overwrite existing values
SCARR_CARRID[:SCARR_INDEX] := 'CA';
SCARR_CARRNAME[:SCARR_INDEX] := 'China Airline';


  • Up to now, SQL Script does not support passing an array value directly to a procedure or function; use a temporary scalar variable to hand the data over (see the sketch after this list).
  • Note the use of ":", ":=" and "=".
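A minimal sketch of that workaround, assuming a hypothetical procedure CHECK_CARRIER with a single NVARCHAR input parameter (the procedure name and parameter are only for illustration):

DECLARE LV_CARRNAME NVARCHAR(20);              -- temporary scalar variable
LV_CARRNAME := :SCARR_CARRNAME[:SCARR_INDEX];  -- copy the array element into the scalar
CALL CHECK_CARRIER(:LV_CARRNAME);              -- hand the scalar (not the array element) to the procedure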

4. Combine arrays into a table variable

Use the UNNEST function to combine the arrays into a table variable:

var_out = UNNEST(:SCARR_MANDT, :SCARR_CARRID, :SCARR_CARRNAME) AS ("MANDT", "CARRID", "CARRNAME");
-- Each array becomes one column of the resulting table variable

5. Summary

Using arrays, we can handle complex logic that could not be expressed with plain SQL statements and without creating temporary tables; this feature gives SQL Script the ability to handle nearly all data logic. Beyond that, we can force Hana to execute some logic at a certain time and in a certain sequence to improve performance in some cases. Although this is not as easy as in ABAP, we have a way to deal with the most complex logic and to combine it with easy-to-understand declarative SQL Script.

A series is defined as a sequence of data points made over a specific interval, most often a time interval. Examples of series are average levels of CO2 in the atmosphere, daily closing values of the stock market and daily precipitation levels.

 

Time series analysis is the practice of analyzing time series data in order to extract meaningful information. Just about every organization can benefit from harvesting this information in order to better understand reality: from financial applications striving to better support decision making, to heavy industry companies looking to better predict service needs, to retail organizations looking to improve by describing typical seasonality behavior.

Among the many new features with SPS09, SAP HANA introduced a number of powerful series analytic SQL functions. This article will introduce and explain these functions, as well as give general recommendations on their applicability.

 

The SERIES clause

As of SAP HANA SPS09, the CREATE TABLE statement has been enhanced to support a new SERIES clause. This clause indicates that the table will contain series data. A series table is like any other table but has additional properties. These allow for more efficient storage and take particular advantage of the columnar nature of SAP HANA.
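As a rough sketch of what such a definition can look like (the table and column names are made up, and the exact clause order and options should be checked against the CREATE TABLE reference):

-- Sketch only: an equidistant series table holding one reading per sensor per minute
CREATE COLUMN TABLE "SENSOR_READINGS" (
    "SENSOR_ID" INTEGER,
    "TS"        TIMESTAMP,
    "VALUE"     DOUBLE
)
SERIES (
    SERIES KEY ("SENSOR_ID")                    -- identifies the individual series
    PERIOD FOR SERIES ("TS")                    -- the column holding the period/timestamp
    EQUIDISTANT INCREMENT BY INTERVAL 1 MINUTE  -- values arrive at a fixed interval
);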

 

Series analytics functions

Getting the man in the middle - MEDIAN

The median value is very well known and does not need much further explanation. In SAP HANA, null values are eliminated when calculating the median value, and the middle value (or the average of the two middle values when the number of elements is even) is returned.
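For example, MEDIAN can be used as a plain aggregate function (the table and column names below are only illustrative):

-- Median daily closing value per stock symbol; NULL closing values are ignored by MEDIAN
SELECT "SYMBOL",
       MEDIAN("CLOSING_VALUE") AS "MEDIAN_CLOSE"
  FROM "DAILY_QUOTES"
 GROUP BY "SYMBOL";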

 

Linear dependence between two variables - CORR

When using the CORR function, SAP HANA will calculate the Pearson product-moment correlation coefficient - often simply called the correlation coefficient, or r - between two variables. The output is a value between -1 and 1.

 

A negative correlation indicates that as one of the variables increases, the other one decreases. An example of this would be the correlation between vaccinations and illness: as more people are vaccinated for a given illness, the less this illness will occur.

 

Similarly, a positive correlation means that an increase in one variable will also increase the other variable. As an example, consider the relationship between smoking and lung disease.

 

No correlation means that an increase of one variable will not reliably affect the other variable, and that the elements are really just randomly distributed.
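In SQL, CORR is used as an aggregate function; a small sketch with made-up table and column names:

-- Pearson correlation coefficient between daily temperature and daily sales quantity
SELECT CORR("TEMPERATURE", "SALES_QTY") AS "R"
  FROM "DAILY_SALES_WEATHER";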

 

Rank correlation between two variables - CORR_SPEARMAN

Sometimes, two variables are obviously correlated - just not as a line in a graph.

 

RankCorr.PNG

 

For these situations, we are interested in measuring correlation based on how a ranking of a data point in one series is related to the ranking of a data point in another series.

This is measured by Spearman's Rho correlation, which is supported in SAP HANA by the CORR_SPEARMAN function. Considering the graph above, the CORR value is 0.88, indicating a strong correlation, but not a perfect one (a line would not be a perfect fit). The CORR_SPEARMAN value, however, is a perfect 1, since each consecutive value ranks the same on both the x and the y axis.
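CORR_SPEARMAN is used exactly like CORR; again, the table and column names below are only illustrative:

-- Compare the Pearson and the Spearman rank correlation on the same two columns
SELECT CORR("X", "Y")          AS "PEARSON_R",
       CORR_SPEARMAN("X", "Y") AS "SPEARMAN_RHO"
  FROM "SERIES_SAMPLE";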

 

A common issue in statistical analysis is the existence of outliers. An outlier is a data point which does not fit the general trend of the data. There are multiple reasons for the occurrence of outliers, and they can have a significant impact on the calculated correlation. The Spearman correlation will, by its nature, be much less affected by the existence of outliers.

Recommendations

When analyzing correlations between variables, it is important to note that the coefficient alone does not tell you how much you can trust the correlation, only its direction and magnitude. To judge how reliable the measured correlation coefficient is, you also need to take into account the sample size (the larger the sample size, the more we can trust calculated correlations). This is known as its significance. Another way of expressing this is that the stronger the significance, the less likely the correlation is to have happened by chance.

This is a measure based on the sample size and must be calculated in order to draw a meaningful conclusion.

If it is calculated that there is less than a one in twenty chance (5%) that a calculated correlation can happen by chance, the findings are said to be significant. If there is less than a one in one hundred chance (1%), the findings are designated as highly significant. Currently, the significance is not automatically calculated by the SAP HANA analytical series SQL functions.
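Since the significance is not computed for you, one common approach is to derive the t-statistic from r and the sample size n and compare it against a t-distribution table; a sketch with an illustrative table name:

-- t-statistic for a Pearson correlation: t = r * sqrt((n - 2) / (1 - r^2));
-- a larger |t| for a given sample size n means a more significant correlation
SELECT CORR("X", "Y") AS "R",
       COUNT(*)       AS "N",
       CORR("X", "Y") * SQRT( (COUNT(*) - 2) / (1 - CORR("X", "Y") * CORR("X", "Y")) ) AS "T_VALUE"
  FROM "SERIES_SAMPLE";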

Future Directions

SPS09 is the first release of SAP HANA to support SQL-level analytical series functions. Over time, we can expect more support for additional functionality, such as direct support for significance calculations and other measures such as Kendall’s Tau correlation - a different correlation measure which is more sensitive to many, but smaller, rank differences. Kendall’s Tau is a better measure when we are looking to measure overall conformance but are not too concerned about one or two major discrepancies. By contrast, Spearman’s Rho is very sensitive to singular, large variations.

 

Full support for series analytics can be found in the R language, which is also supported by SAP HANA. To take advantage of the full, blazing speed of in-memory analytics, however, we need to use the native SQL functions described in this article.

 

As the SAP HANA platform continues to evolve, I will update this blog with information about new capabilities for lightning fast series analytics!

I've recently posted an idea about making better-informed purchases on SAP Idea Incubator.

 

The basic concept is to help purchasers make better purchases with the help of analytical data processed through HANA, on the go, via a Fiori app.

Further details regarding the idea can be found via the following link:

 

Making better informed purchases. : View Idea

 

I urge you to help me improve this idea with your valuable feedback and suggestions.

Thanks.

This blog is part of the larger series on all new developer features in SAP HANA SPS 09: http://scn.sap.com/community/developer-center/hana/blog/2014/12/02/sap-hana-sps-09-new-developer-features

 

The SAP HANA Repository is a key feature of the HANA native development environment. It is the design time archive for all development objects, from database tables all the way up through user interface artifacts. However, up until now we have not had a supported API that would allow application developers or 3rd-party tool developers to access the Repository. With the advent of the Web-based development tooling, SAP needed to create a REST-based service interface for these development tools. These two requirements come together in SPS 09 in the form of the newly released SAP HANA REST API.

 

Eclipse Orion

For the Web-based development tools, SAP decided to use the Eclipse Orion API. The REST API is an API layer completely supporting the Orion protocol version 1.0 which enables Development Tools to access HANA platform components such as the repository via REST based calls in an easy and flexible manner. As an example the API provides means for the creation and manipulation of workspaces, projects and files but also covers HANA-specifics such as activation and catalog access. Any development tool using the REST API can be used as a client. This opens up HANA as a development environment not just for SAP development tools such as Web IDE or the Web-based Development Workbench but also for tools coming from Partners and Customers.

 

As a consequence, existing tools implemented on top of the Orion Server API are able to work with HANA. This enables SAP-external communities who would like to use their own tools in a "bring-your-own-tool" (BYOT) manner and allows them to integrate HANA nicely into their established development and management workflows and tools infrastructure.

 

API Parts

The REST API has several parts which we will look at briefly in this blog. This is just intended to be an introduction to the basic capabilities.  For more details, please refer to the online help: http://help.sap.com/hana/SAP_HANA_XS_REST_API_Reference_en/index.html

 

  • File API
  • Workspace API
  • Transfer API
  • Metadata API
  • Change Tracking API
  • OData Service
  • Info API

 

File API

The SAP HANA REST Application Programming Interface (REST API) includes a file API that enables you to browse and manipulate files and directories via HTTP. The File API included in the SAP HANA REST API uses the basic HTTP methods GET, PUT, and POST to send requests, and JSON is used as the default representation format. For more information about the original Orion File API, on which the File API is based, see http://wiki.eclipse.org/Orion/Server_API/File_API.

 

HANABlog62.png

 

Retrieve the contents of a file (http://wiki.eclipse.org/Orion/Server_API/File_API#Getting_file_contents).

You can also retrieve the contents of a particular version of the file (URL parameter version) or check the ETag of the current version of the file (using the "If-Match" header).


The file API supports a conditional "If-None-Match" header, too. Note that providing the "If-None-Match" header results in the return code 304 (Not Modified) if the file was already requested in the current session and has not changed since the last request.

 

If you specify a value for the parameter Workspace in SapBackPack, the file API returns the most recent inactive version of the file in the given workspace, if it exists. If no inactive version of the requested file exists in the specified workspace, then an active version of the file is retrieved. The default workspace is "" (empty string).

 

Here is a small example taken from an SAPUI5 application where we use this API via jQuery.AJAX to load the content of a file from the Repository:

var ShortUrl = '/sap/hana/xs/dt/base/file/';
// 'package' is a reserved word in strict-mode JavaScript, so use a different name
var pkg = oExerciseJSON.exercises[i].steps[isub].package;
var lPath = pkg.replace(/[.]/g, '/');            // turn the repository package into a URL path
var aUrl = ShortUrl + lPath + '/'
         + oExerciseJSON.exercises[i].steps[isub].object
         + '.'
         + oExerciseJSON.exercises[i].steps[isub].extension;

// The SapBackPack header selects the workspace the file is read from
var oSapBackPack = new Object();
oSapBackPack.Workspace = 'SHINE_DATA';
var sapBackPack = JSON.stringify(oSapBackPack);

jQuery.ajax({
    url      : aUrl,
    method   : 'GET',
    dataType : 'text',
    headers  : { "SapBackPack" : sapBackPack },
    success  : onInsertContent,
    error    : onErrorCall,
    async    : true
});

Workspace API

The SAP HANA REST Application Programming Interface (REST API) includes a Workspace API that enables you to create and manipulate workspaces and projects via HTTP. For more information about the original Orion API, on which the Workspace API is based, see http://wiki.eclipse.org/Orion/Server_API/Workspace_API.

 

The information in this section describes how to use the Workspace API in the context of SAP HANA. To better understand how it can be used, it is essential to understand the meaning of the terms workspace and project in the context of the Eclipse ORION concept.

 

ORION Workspace

A concept taken over from the Eclipse architecture. A workspace is an area where the design artifacts used in coding or modeling work are located. In this sense, the workspace is user specific; it is the place where a user collects all the smaller entities required for development activities, for example, projects. This Eclipse-based workspace concept is different from the meaning of the workspace currently used in the context of the SAP HANA XS Repository.

 

ORION Project

A collection of design time and modeling entities that not only have a common storage location, but to which you can also apply activities that typically occur during the development lifecycle, for example: compile, build, distribution, deployment, sharing objects with other developers.

Both ORION concepts are technically mapped to the SAP HANA XS package concept, which means that both are implemented as SAP HANA XS packages with specific content and in a specific location.

 

So to summarize: when you create a project, it is an SAP HANA XS sub-package in the specified workspace package.

 

Transfer API

The Orion Transfer API is used to import and export packages and files. For more information about the Orion Transfer API, see http://wiki.eclipse.org/Orion/Server_API/Transfer_API.

  • This API is used to import and export packages and files
  • It supports resumable, chunked uploads of file content
  • Export not yet implemented.  Use the File API for export instead.

 

Currently, there are some restrictions. For example, although the Orion Transfer API supports the upload of single (binary) files also in the form of multiple chunks, it is not possible to use the Orion API to import (or export) a package. Neither is it possible to import files from (or export files to) an external server.

 

Metadata API

The REST API's Metadata API provides services to support search and auto-completion scenarios, for example retrieving metadata from runtime, design-time and other metadata locations. The typical location of runtime metadata is the SAP HANA database catalog. Currently it is possible to retrieve metadata for tables, views, procedures, functions, sequences, and schemas. The design-time location for metadata is the SAP HANA Repository. The metadata location used by Core Data Services (CDS) is also accessible. All provided services are available at the following (single) access point:

/sap/hana/xs/dt/base/metadata

 

The specific services are called by setting the HTTP parameter Service-Name to the appropriate value. This part of the interface only supports HTTP GET requests.

 

Supported services:

  • checkMetadataExistence
  • getMetadataSuggestion
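For example, calling one of these services is an HTTP GET against the single access point with the Service-Name parameter set; any further, service-specific parameters are omitted in this sketch:

/sap/hana/xs/dt/base/metadata?Service-Name=checkMetadataExistence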

 

Change Tracking API

The SAP HANA REST Application Programming Interface (REST API) includes a Change Tracking API that enables you to make use of specific lifecycle-management features included with the SAP HANA Repository via HTTP.

 

Change Tracking is integrated with the SAP HANA XS Repository transport tool set. For example, with change tracking enabled, you can ensure that an export operation includes only the latest approved versions of repository objects.

 

Note that the API can only be used if change tracking is enabled in the SAP HANA system via the HANA Application Lifecycle Management tool.

 

OData Services

The SAP HANA REST Application Programming Interface (REST API) includes a set of OData services that enable access to configuration data stored in the SAP HANA environment. The OData services can be accessed with any OData consumer implementation that supports OData version 1.0.

 

The SAP HANA REST API includes the following OData services.

  • Delivery units
  • Contents of .ini files
  • M_FEATURES

 

To use the OData services included with the SAP HANA REST API, the user needs to have access rights to the underlying SAP HANA runtime views.

Delivery Units

Provides access to "_SYS_REPO"."DELIVERY_UNITS"

 

INI File Contents

Provides access to "SYS"."M_INIFILE_CONTENTS"

 

M_FEATURES

Provides access to "SYS"."M_FEATURES"

 

Info API

The SAP HANA REST Application Programming Interface (REST API) includes an Info API that can be used to display information about the current version of the REST API. The information displayed by the Info API includes a description of the current version of the delivery unit and the number of commands (API entry points) that are currently supported by the REST API.

This post is part of an entire series

Hana SPS09 Smart Data Integration - Overview

 

One of the more basic use cases is to copy data from a source database into Hana. Either just once to try out Hana, or doing an initial load and setting up realtime replication for all changes in order to have Hana side by side with the source system, e.g. for reporting.

Using the Smart Data Integration technology this can be done by creating virtual tables, writing the SQL or dataflow for the initial load - one per source table - and creating the realtime subscriptions. Or, in short, by using the Hana WebIDE to create an hdbreptask object that does all of that for you.

 

But let's do something different this time instead of replicating a source database into Hana - just for fun; a database would work as well, of course. We want to replicate Twitter!

For that we need an Adapter to connect to Twitter - no problem, it is shipped together with the Agent - and a remote source with the Twitter login credentials. Everything else is straightforward:

 

Open the hdbreptask editor in the WebIDE by creating a new file with that extension

reptask0.png

 

Select the remote source, the target schema and all remote tables - well, in Twitter there are just two, but in reality you can add hundreds of tables at once

 

reptask1.png

Optionally specify filters

reptask2.png

Execute the task

reptask3.png

 

Summary

Actually, there is a nice video of all of this

This blog is part of the larger series on all new developer features in SAP HANA SPS 09: http://scn.sap.com/community/developer-center/hana/blog/2014/12/02/sap-hana-sps-09-new-developer-features

 

SAP River

The benefits of the SAP River language (RDL) application development environment were validated by partners and early adopters. However, based on feedback we received, and consistent with our strategic direction of building open and standard environments, SAP has decided to abandon a proprietary language approach, and to reapply and integrate the SAP River assets and principles within a cloud based development environment as part of the HANA Cloud Platform.

 

The SAP River language will therefore no longer be available as a stand-alone development environment in SAP HANA.

 

SAP River Assets Reused

The SAP River Application Explorer has been rebuilt as a general SAP OData Explorer in SPS 09. It allows for the general testing and data generation of XSODATA-based services. You can view all records in a service or create/delete/edit individual records. It also supports mass generation of multiple records at once with random value generation. It can be launched from the Web-based Development Workbench (via the context menu option on an XSODATA service) or directly via the URL:

/sap/hana/ide/editor/plugin/testtools/odataexplorer/index.html?appName=<xsodata service path>

HANABlog6.png

 

SAP River Concepts Realized in Alternative Ways

 

One-Click Debugging

We now introduce One-Click debugging in SPS 09. No longer do you have to choose the XS session ID. Instead, the target of the debug session will launch in an external web browser or run in-place within the HANA Studio when you choose debugging. This also means that we needed to provide tools within the debugger for stubbing in HTTP headers, body, etc. In the past developers often used 3rd-party tools like Postman to simulate service calls. Now you can do all of this from within the HANA Studio as you start the debugging.

HANABlog32.png

 

But the debugging improvements don't end there.  We now also support XSJS/SQLScript integrated end-to-end debugging.  From an XSJS that calls into a SQLScript stored procedure we can now step from the XSJS debugger seamlessly into the SQLScript debugger.

HANABlog33.png

 

 

XSDS (XS Data Services)

In SPS 09, SAP introduces a major extension to the programming model in the form of a reusable XSJS Library called XSDS or XS Data Services. There are several goals for this new library and the impact that it has on the overall programming model.

 

General HANA artifact consumption

  • Native JavaScript embedding of HANA database artifacts
  • Structured API calls instead of low-level SQL Interface
  • Less boilerplate coding (for example result set conversions)

 

Native CDS Consumption

  • Import CDS entities as native JavaScript objects
  • Understand CDS metadata for working with JavaScript objects

 

For more details on XSDS please refer to this blog:

SAP HANA SPS 09: New Developer Features; XSDS (XS Data Services)

This blog is part of the larger series on all new developer features in SAP HANA SPS 09: http://scn.sap.com/community/developer-center/hana/blog/2014/12/02/sap-hana-sps-09-new-developer-features

 

Additional blogs on this topic:

 

In SPS 09, SAP introduces a major extension to the programming model in the form of a reusable XSJS Library called XSDS or XS Data Services. There are several goals for this new library and the impact that it has on the overall programming model.

 

General HANA artifact consumption

  • Native JavaScript embedding of HANA database artifacts
  • Structured API calls instead of low-level SQL Interface
  • Less boilerplate coding (for example result set conversions)

 

Native CDS Consumption

  • Import CDS entities as native JavaScript objects
  • Understand CDS metadata for working with JavaScript objects

 

The origin of this extension to the programming model is that there were originally three separate projects/libraries built within SAP.  They focused on different aspects of data consumption within XSJS. They were an ORM (Object Relationship Mapper) library, a Query Builder, and a Relationship library.

HANABlog59.png

 

While these three individual libraries all had value on their own, ultimately SAP decided to integrate them into one single library and to build it on the CDS/HDBDD foundation for even better reuse of information that already exists within the database layer.

 

CDS (Core Data Services)

Core Data Services are a cross-platform set of concepts and tools to define semantically rich data models.  Within SAP HANA we already have the development artifact HDBDD for using the CDS syntax to define tables (entities), their relationships and views that utilize those relationships. A central part of the CDS specification is the object-relationship mapping that links semantically meaningful entities to their technical representation as records in the HANA database.

 

XSDS, in short, can be considered the native CDS embedding for JavaScript in the XS application services layer of the SAP HANA programming model.

HANABlog60.png

 

At its core, XSDS allows you to import and extend CDS entity definitions. You simply supply the package path and file name of the HDBDD and the entity name during the import. The XSDS library reads the available metadata on types, keys and associations. It also supports extensions, projections, and renaming of entity definitions, as well as CDS concepts (like backlink and via-entity associations) which aren't yet supported by HDBDD definitions.

 

In this simple example we import the Employees entity from the EPM hdbdd artifact and the MasterData Context.

 

$.import("sap.hana.xs.libs.dbutils", "xsds");
var XSDS = $.sap.hana.xs.libs.dbutils.xsds;
var oEmployee = XSDS.$importEntity("sap.hana.democontent.epmNext.data", "EPM.MasterData.Employees");



oEmployee is now a JavaScript object which contains all the metadata that was already defined in the underlying CDS model. It also supports full CRUD transactionality via this JavaScript object.

 

Import Pre-Generation

However, this import of the metadata from the underlying model can be a relatively costly operation. Because the XSJS programming model is stateless this often creates the situation where you must re-import the CDS metadata for every single request.

 

For this reason the XSDS library also supports pregeneration of imports in order to improve performance. The library serializes the result of the import into a single reusable XSJSLIB file. Therefore, instead of importing the entity from the HDBDD artifact, you can use the standard $.import feature of XSJS to utilize this prebuilt library definition.

 

Managed/Unmanaged

However XSDS isn't limited to only using CDS Entities. The goal for XSDS was to provide a single solution for all XSJS applications.  This means being able to use the same access library even if your base tables are directly created in the catalog and not via CDS. This way you can decide between Managed or Unmanaged modes in the library.

 

Managed mode allows you to import entity definitions from catalog objects and from CDS definitions (as described above). It is best used when you need consistency in access. It functions as a lightweight ORM building JavaScript objects from catalog or CDS definitions. It supports navigation via associations, data consistency and a limited query ability.

 

Unmanaged mode, on the other hand, functions as a Query Builder to provide you with maximum flexibility and access to nearly the full HANA SQL Syntax but via a simplified query interface. Data consistency between entities, however, remains the responsibility of the application developer.

 

Now for more details, lets look at some syntax examples of both Managed and Unmanaged Mode.

 

Managed Mode

Earlier we saw how to import a CDS entity definition. You can import catalog object definitions as well:

 

var Demo= XSDS.$defineEntity("Demo", '"DEMOSP9"."DEMO_TABLE"');



In this case we supply the Schema and Table name from the catalog instead of the path to the HDBDD artifact.

 

Regardless of the source of the entity, we perform the same core operations on it after it is imported into a JavaScript object.

 

We can create, retrieve, update, and delete instances of the entity.

var post = Post.$get({ pid: 101 });



The result is a plain JavaScript object that can be programmed against like any other JavaScript object.

if (post.Author.Name === 'Alice') post.Rating++;



Optionally, the objects support lazy retrieval of associated instances to avoid the "n+1" problem. The library also optimizes the join strategy to minimize database queries.

 

You can write changes into the database explicitly.

post.$save();



The transaction manager tracks local changes for minimal updates to the database.

 

As far as instance manipulation goes we can retrieve a single instance by key:

var post = Post.$get({ pid: 101 });



You can then update an instance like this:

post.Author.Name = "Alice";
post.$save();



Or you can create a new instance using this syntax:

var user = new User({ Name: "Alice", Created: new Date() });
user.$save();



Deletion works similarly:

post.$discard();



Instance Management

As the above examples show, the entity instances are managed. This means that instances are singleton objects with "personality". Associated instances are stored by reference.

 

The benefits of this approach are that the runtime guarantees consistency.  You have a single consistent view on every instance enforced by the $save operation. You also have automatic association management across all relationship cardinalities (1:1, 1:n, and m:n).

 

The downside, on the other hand, is that we need additional overhead for instance management.  We are also limited to a subset of database functionality for instance retrieval.

 

HANABlog61.png

Transaction Handling

The managed mode also has impacts on the transaction handling. We receive the benefits of transparent database and transaction handling. With a minimal XSDS runtime, the library is still able to provide metadata processing, an entity cache, and type conversion during the transaction. It still exposes and reuses as much underlying database functionality as possible (for example, constraint checks).

 

XSDS uses a single database connection and a single transaction. This makes it well suited for the single-threaded, per-request execution model of the XS processing layer. It supports auto commit, explicit commits, and rollbacks. It also supports connection configuration files (XSSQLCC).

 

 

Unmanaged Mode

We also have unmanaged mode for when you want a more convenient database interface but don't want to give up the full database syntax.  For example here is the typical database join statement in XSJS code.

var rs = conn.prepareStatement(
    'SELECT a."Name", c."Text.Text"'
  + '  FROM "bboard.post" p'
  + '  JOIN "bboard.user" a ON p."Author.uid" = a."uid"'
  + '  JOIN "bboard.comment" c ON p."pid" = c."Post.pid"'
  + '  JOIN "bboard.user" ca ON c."Author.uid" = ca."uid"'
  + ' WHERE a."uid" = ca."uid"').executeQuery();
while (rs.next()) {
  rs.getInt(1); rs.getString(2);
}


 

There are several problems here which XSDS unmanaged mode addresses.

Leaking Abstraction:

  • Column name can only be derived from the underlying table definition

Convenience:

  • String based query interface
  • Manual result set processing

Redundant Code:

  • Joins could be derived from underlying metadata instead of hand written

 

For these reasons the XSDS library also provides a query interface for ad-hoc queries. It allows you to construct general queries in a very JavaScript-like way:

var query = Post.$query().$where(Post.Author.Name.$eq("Alice"));
var results = query.$execute();


The query still yields plain JavaScript objects with unmanaged values, not instances like those of a managed entity.

 

The Query Language used here has several features:

  • Incremental query building for better readability/maintainability
  • Operators like $project, $where, and $matching
  • Full support for CDS path navigation
  • Expression language based on CDS Query Language using fluent API
  • Alternative, more concise JSON-like selection notation for restricted subsets

 

Here are some additional code examples to help you learn some of these benefits.

 

$project

Projections with CDS navigation expressions:

var res = Post.$query().$project({ Author: {Name: true},           
                  Comments: {Author: {Name: true}},
                  Title: "TitleOfPost"}).$execute();


The results of this query would look something like this:

[{
    Author: {
        Name: "Alice"
    },
    Comments: {
        Author: {
            Name: "Bob"
        }
    },
    TitleOfPost: "First Post!"
}, ...]


$where

The library supports complex where conditions like the following:

var selfCommentingPosts = Post.$query().$where(Post.Comments.Author.pid.$eq(Post.Author.pid))
.$execute();


There are predefined operators of the expression language:

  • $eq
  • $ne
  • $gt
  • $lt
  • $ge
  • $le
  • $like
  • $unlike

But you can still escape to any SQL operator:

var somePost = Post.$query() .$where(Post.Rating.$prefixOp('SQRT').$gt(2)).$execute();


$matching

$matching allows selection using a template for the result.

var interestingPosts = Post.$query().$matching({
    Rating: {
        $gt: 2
    },
    Tags: {
        Name: "+1"
    }
}).$execute();


Note that the unmanaged version of $find and $findAll uses the same template language.

 

Incremental Query Construction

Incremental construction with immutable query objects:

// build the query without a DB call
var qBadPosts = Post.$query()
  .$where(Post.Author.pid.$eq(Post.Comments.Author.pid));
// refine the query
var qStarBadPosts = qBadPosts.$where(Post.Rating.$gt(4));


Explicit trigger:

// trigger actual DB calls
var badTitles = qBadPosts.$project(
                { Title: true, Author: {Name : true}}).$execute();
var recentBadTitles = qStarBadPosts.$project({ Title: true })
  .$execute();


JSON Expressions

Simple expressions:

Post.$find({ mandt: "001", pid: 101 });


Complex expressions:

Post.$find({ Title: { $like: "First%" },
             Rating: { $gt: 1, $lt: 3 },
             Created: { $lt: Date.now() - 3600 } });


JSON Expression Language Operators:

  • $eq
  • $ne
  • $lt
  • $le
  • $gt
  • $ge
  • $like
  • $unlike
  • $null

 

Optimized Query Construction

The query API allows for incremental query construction without database roundtrips. Immutable query objects support sharing of subqueries. The result is just-in-time translation into plain SQL using only the needed joins.

This blog is part of the larger series on all new developer features in SAP HANA SPS 09: http://scn.sap.com/community/developer-center/hana/blog/2014/12/02/sap-hana-sps-09-new-developer-features

 

Core data services (CDS) is an infrastructure for defining and consuming semantically rich data models in SAP HANA. Using a data definition language (DDL), a query language (QL), and an expression language (EL), CDS is envisioned to encompass write operations, transaction semantics, constraints, and more.

 

A first step toward this ultimate vision for CDS was the introduction of the hdbdd development object in SPS 06. This new development object utilized the Data Definition Language of CDS to define tables and structures. It can therefore be considered an alternative to hdbtable and hdbstructure.

 

In SPS 09 we continue to build on the foundation of CDS and the HDBDD development artifact.  We introduce important new syntax as well as significant enhancements to the lifecycle management of these development objects.

 

Lifecycle Management

 

One of the biggest requests for CDS/HDBDD was to improve the lifecycle management capabilities. Before SPS 09, if you made most kinds of changes to the entity definition of a table which had data in it, you received an activation error. The developer was responsible for migrating data out of the existing table, dropping it, recreating it with the new structure and then migrating the data back. This process was not only cumbersome and error-prone but also made transporting such changes automatically to downstream systems nearly impossible.

 

In SPS 09 we add an automatic migration process to activation of the HDBDD artifact. This supports a variety of scenarios.

 

  • Conversion from the same named hdbtable artifact
  • Change of key / data type
  • Removal of columns

 

In these situations, data is moved transparently to a shadow table. The source table is then dropped/adjusted and the data is migrated back. This allows for structural changes that in the past would have required significant developer work to preserve the data. Even better, this process happens automatically upon activation; therefore transporting content via Delivery Units will automatically trigger the migration.

 

Multi-File Support

Another very common request was for multiple-file support in HDBDD. Before SPS 09 you were often forced into very large, monolithic HDBDD artifacts if you wanted to share association or type definitions between entities or views. It also meant there was no way to define global types that could be reused across the system.

 

SPS 09 solves all of these problems by adding the ability to import from one or more existing HDBDD files.  All the entity definitions, types, etc then become visible to the destination HDBDD artifact.

 

It is possible to refer to an artifact that is defined in another HDBDD file ("external" artifact). Each external artifact must explicitly be made accessible by a using declaration. The using declaration introduces a local name as an alias for the external artifact, which is identified by a fully qualified name. The external artifact can be either a single object (type, entity, view) or a context. The using declarations are located in the header of the file between the namespace declaration and the beginning of the top level artifact.

 

For example you might create some reusable types in a central HDBDD:

namespace playground.sp9.cds;
@Schema: 'SP9DEMO'
context ContextA {
  type T1 : Integer;
  context ContextAI {
    type T2 : String(20);
    type T3 {
      a : Integer;
      b : String(88);
    };
  };
};


 

You can then import and reference these types in a separate HDBDD artifact.

namespace playground.sp9.cds;
using playground.sp9.cds::ContextA.T1;
using playground.sp9.cds::ContextA.ContextAI as ic;
using playground.sp9.cds::ContextA.ContextAI.T3 as ict3;
@Schema: 'SP9DEMO'
context ContextB {
  type T10 {
    a : T1;               // Integer
    b : ic.T2;            // String(20)
    c : ic.T3;            // structured
    d : type of ic.T3.b;  // String(88)
    e : ict3;             // structured
  };
  context ContextBI {
    type T1 : String(7);  // hides the T1 coming from the first using declaration
    type T2 : T1;         // String(7)
  };
};


 

Enumerations

Another new syntax feature in SPS 09 is the introduction of enumerations. An enumeration is a type which declares several possible values, or domains. However, enumerations can currently only be used within annotations. The definition of annotations is very similar to type definitions: annotation definitions can either be located inside a context, or an annotation definition can be the single top-level artifact in a CDS file. However, annotations do NOT generate catalog artifacts.

namespace playground.sp9.cds;
@Schema: 'SP9DEMO'
context enumerations {
    type Color : String(10) enum { red = 'FF0000'; g = '00FF00'; b = '0000FF'; };
annotation MyAnnotation {
  a : Integer;
  b : String(20);
  c : Color;
  d : Boolean;
};
};


 

New Types

One of the other common requests was for the CDS syntax to support all the HANA data types. With SPS 09 we extend the supported types, including the GeoSpatial types. Please note that the support for the GeoSpatial types ST_POINT and ST_GEOMETRY is still limited. These types can only be used for the definition of elements in types and entities. It is not possible to define a CDS view that selects such an element from a CDS entity.

namespace playground.sp9.cds;
@Schema: 'SP9DEMO'
context NewTypes {
@nokey
entity SomeTypes {
  a : hana.ALPHANUM(10);
  b : hana.SMALLINT;
  c : hana.TINYINT;
  d : hana.SMALLDECIMAL;
  e : hana.REAL;
  h : hana.VARCHAR(10);
  i : hana.CLOB;
  j : hana.BINARY(10);
  k : hana.ST_POINT;
  l : hana.ST_GEOMETRY;
};
  
};

 

View Syntax

The view syntax receives several new features as well. For example, we can now specify the ORDER BY clause in the view definition:

//Order By
view EmployeesView as select from Employee
{
  orgUnit,
  salary
} order by salary desc;

 

We can also use the CASE syntax within the view definition:

//Case
entity MyEntity {
key id : Integer;
     a : Integer;
     color : String(1);
};
  
view MyView as select from MyEntity {
    id,
    case color  
        when 'R' then 'red'
        when 'G' then 'green'
        when 'B' then 'blue'
        else 'black'
    end as color,
    case when a < 10 then 'small'
        when 10 <= a and a < 100 then 'medium'
        else 'large'
    end as size
}; 

 

Unmanaged Associations

Associations in general are one of the most powerful features of the CDS syntax. They move the relationship definition out of the view/join and into the source entity definition for better re-usability/maintenance and simpler overall syntax.

 

An unmanaged association is based on existing elements of the source and target entity. No fields are generated. In the ON condition, only elements of the source or the target entity can be used; it is not possible to use other associations. The ON condition may contain any kind of expression - all expressions supported in views should also work in the ON condition of an unmanaged association. The names in the ON condition are resolved in the scope of the source entity. Elements of the target entity are accessed via the association itself.

namespace playground.sp9.cds;
@Schema: 'SP9DEMO'
context unmanagedAssociations {
entity Employee {
  key id   : String(256);
  officeId : Integer;
};
entity Room {
  key id : Integer;
  inhabitants : Association[*] to Employee on inhabitants.officeId = id;
};
  
entity Thing {
  key id   : Integer;
  parentId : Integer;
  parent   : Association[1] to Thing on parent.id = parentId;
  children : Association[*] to Thing on children.parentId = id;
};
  
};

 

Backlink

Unmanaged associations also make several previously unsupported features possible. For example, the backlink: the CDS specification defines backlink associations, but this kind of association is not yet supported in HDBDD. However, you can (with a little extra effort) achieve the same effect by using unmanaged associations.

According to the CDS specification, it shall be possible to define a header/item relationship like this:

namespace playground.sp9.cds;
@Schema: 'SP9DEMO'
context backlink {
//Backlink syntax not yet supported
//entity Header {
//  key id : Integer;
//  items : Association[*] to Item via backlink header;
//  description : String(120);
//};
//
//entity Item {
//  key header : Association[1] to Header;
//  key id : Integer;
//  description : String(120);
//};
//
//Workaround using unmanaged associations
entity Header {
  key id : Integer;
  items : Association[*] to Item on items.headerId = id;
  description : String(120);
};
entity Item {
  key headerId : Integer;
  key id : Integer;
  header : Association[1] to Header on header.id = headerId;
  description : String(120);
};
};

 

Many to Many

Similarly, many-to-many relationships can also be simplified using unmanaged associations. CDS defines mediated m-to-n associations, which can be defined using the "via entity" keyword. These kinds of associations are not yet supported directly in HDBDD. But using unmanaged associations, there is a workaround.

Example: a model of employees and projects, where an employee can be assigned to any number of projects. The link table must be modelled explicitly.

namespace playground.sp9.cds;
@Schema: 'SP9DEMO'
context manyToMany {
entity Employee { 
  key id : Integer;
  name : String(80);
  projectLinks : Association[*] to E2P on projectLinks.e_id = id;
};
entity Project {
  key id : Integer;
  name   : String(80);
  employeeLinks : Association[*] to E2P on employeeLinks.p_id = id;
};
entity E2P {  
  key e_id : Integer;
  key p_id : Integer;
  projects  : Association[*] to Project  on projects.id  = p_id;
  employees : Association[*] to Employee on employees.id = e_id;
};  
view EmployeesWithProjects as select from Employee {
     name                       as EmployeeName,
     projectLinks.projects.id   as projectId,
     projectLinks.projects.name as projectName
  };
    
view ProjectsWithEmployees as select from Project {
     name                         as projectName,
     employeeLinks.employees.id   as EmployeeId,
     employeeLinks.employees.name as EmployeeName
};
  
};

 

Full Text Index

Last but not least, we now have syntax to define a full-text index within the CDS/HDBDD artifact.

namespace playground.sp9.cds;
@Schema: 'SP9DEMO'
context fullText {
entity Header {
  key id : Integer;
@SearchIndex.text: { enabled: true } 
@SearchIndex.fuzzy: { enabled: true }
  description : String(120);
};
};

This blog is part of the larger series on all new developer features in SAP HANA SPS 09: http://scn.sap.com/community/developer-center/hana/blog/2014/12/02/sap-hana-sps-09-new-developer-features

 

Unit testing and test-driven development have become increasingly popular in the industry in general and specifically within the SAP customer base. To meet this demand, SAP introduces both a unit test framework and a table/view mock framework in SPS 09. It's important to note, however, that this test framework is not automatically installed with SPS 09. The delivery unit for this framework is delivered as non-automatic content. This means it sits on the file system of your HANA system and must be imported by an administrator. This is done because we assume customers will want these tools in their development systems, but probably not in their production systems. Therefore we allow you, the customer, to decide if you want the tools installed.

 

Contents of the HANA_TEST_TOOLS Delivery Unit

 

  1. XSUnit, a unit test framework based on Jasmine, which is an open source JavaScript test framework. With XSUnit you can test both JavaScript code and database content. Various extensions have been made to the standard Jasmine library to make it easier to test database content. (unit.jasminexs)
  2. XSUnit Test Coverage for XS JavaScript (unit.codecoverage)
  3. Test Isolation Framework, namely Mockstar, which allows you to test a view or a stored procedure in isolation. (mockstar)
  4. The results of the XSUnit can be persisted in the Test Result Logger
  5. Test Helper (unit.util)
  6. Developer Documentation (doc)
  7. Demos which are a good starting point for how to test various HANA content. (demo)

HANABlog51.png

 

How to create an XSUnit Test

 

First of all, there is one common tool for testing XS JavaScript and database content such as SQLScript and views. It is all based upon the open source libraries detailed in the section above, several of which have been extended with custom assertions and test utilities specific to the HANA environment.

 

Test code and test data are developed beside your productive code in the same HANA instance, as just another XSJSLIB development object. Tests themselves are implemented in XSJS syntax in SAP HANA Studio or the Web-based Development Workbench.

 

HANABlog52.png

 

The unit tests can then be run standalone in the web browser via the URL:
/sap/hana/testtools/unit/jasminxs/TestRunner.xsjs?package=<package>

where package is the repository package your tests are located in.

HANABlog53.png

 

These unit tests can also be run directly from the SAP Web-based Development Workbench.

HANABlog54.png

 

This has the advantage of displaying the code coverage directly within the editor of the SAP Web-based Development Workbench.

HANABlog55.png

 

Code coverage can also be displayed in the standalone web tool:

HANABlog56.png

By integrating the JavaScript line-coverage library BlanketJS into the XSUnit test framework, it is now possible to see to what extent the JavaScript code is covered (percentage, line based) by the executed test run.

 

Note that all .xsjslib files which match the pattern parameter (default “*Test”) are excluded from coverage measurement by default.

 

Example 1: The coverage is measured for each imported .xsjslib file
http://<host>:<port>/sap/hana/testtools/unit/jasminexs/TestRunner.xsjs?package=sap.hana.testtools.mockstar&coverage=true

 

Example 2: The coverage is measured for each .xsjslib file located in the mockstar package
http://<host>:<port>/sap/hana/testtools/unit/jasminexs/TestRunner.xsjs?package=sap.hana.testtools&coverage=sap.hana.testtools.mockstar

 

Example 3: If you would like to exclude test helpers located in the tests package you can specify them via the “exclude” parameter
http://<host>:<port>/sap/hana/testtools/unit/jasminexs/TestRunner.xsjs?package=sap.hana.testtools&coverage=true&exclude=sap.hana.testtools.mockstar.tests

 

However, the test tool can also be run as a service call.  This way, if you would like to execute the tests on a regular basis or trigger them by code changes, you can use Jenkins as a continuous integration server.  The unit tests are called remotely from Jenkins and the results are stored and displayed within Jenkins. This is a process that we use internally at SAP.

HANABlog57.png

 

Mocking

HANABlog58.png

 

To write self-contained unit tests that are executable in any system, you have to test the HANA model in an isolated way.

A HANA view, for example, typically has dependencies on other HANA views or on database tables. These dependencies pass data to the “view under test" and cannot be controlled or overwritten by the test. This means you need a way to mock the dependencies of the “view under test".

 

The mocking framework, namely Mockstar, helps you test your models with specific test data that resides in dedicated test tables.

To do so, it creates a copy of your original view/procedure in which the dependent views/tables are replaced by test tables.

These (temporary) tables can be prepared, controlled and filled by the test.
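The same isolation idea is easy to picture for plain XS JavaScript: instead of letting the code under test read the real table, the test substitutes the data access with a stub that returns fixed rows. Mockstar performs the analogous substitution for views, tables and procedures; the sketch below only illustrates the principle with a hand-rolled stub and hypothetical names, it does not show the Mockstar API itself.

// Illustration of test isolation with a hand-rolled stub (all names are hypothetical).
// Production code would normally query a HANA table via a small data-access function.
var salesLib = {
    readRevenue: function(readerFn) {
        // readerFn abstracts the table access so tests can replace it
        var rows = readerFn();
        var total = 0;
        for (var i = 0; i < rows.length; i++) {
            total += rows[i].AMOUNT;
        }
        return total;
    }
};

describe("revenue calculation in isolation", function() {
    it("sums amounts from controlled test data", function() {
        // The stub stands in for the dependent table/view, just as Mockstar
        // substitutes the dependencies of a "view under test" with test tables.
        var stubReader = function() {
            return [{ AMOUNT: 10 }, { AMOUNT: 32 }];
        };
        expect(salesLib.readRevenue(stubReader)).toBe(42);
    });
});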

 

The advantages of mocking are:

 

  • Write self-contained unit tests that are executable in any system
  • Tests can be executed by several users in parallel

Intro

Hi everyone, recently I installed SAP HANA SPS09 and wanted to try something new. Since multitenant database containers (MDC) are a major feature in SPS09, I installed SAP HANA in single container mode and in multiple containers mode as well. As you know, I'm a fan of SAP HANA, especially XS, so after adding both SAP HANA systems to SAP HANA Studio, I visited the XSEngine welcome pages directly. Just like after installing Apache Tomcat, you first visit the welcome page to check if the web server is running correctly.

 

For the single container mode, everything looked fine. But for the multiple containers mode, I failed to visit the welcome page... It seemed like a game of hide and seek with the XSEngine. So where is the XSEngine hidden and how can I visit the welcome page? I did some research and wanted to share with you my experience exposing HTTP access to multitenant database containers in SAP HANA SPS09.

 

Single container mode (instance 00)

Everything looked fine just like before.

 

1_.png

 

Since the instance number is 00, I visited port number 8000. XSEngine is up and running.

 

2_.png

 

Multiple containers mode (instance 01)

I logged on to the system database container, so that's SYSTEMDB as you can see in the following screenshot. First of all, you cannot find the 'xsengine' service which you can find in single container mode.

 

3_.png

 

Since the instance number is 01, I visited port number 8001. The welcome page did not appear. Why???

 

4_.png

 

Configure HTTP access to multiple containers

The first idea that came to my mind was to search for the solution in the SAP HANA Administration Guide. Bingo! You can find the solution in Configure HTTP Access to Multitenant Database Containers and follow the procedure. Here I won't explain the procedure again, but I'll give some screenshots and highlight some key points for you.

 

Before jumping into the configuration part, I highly recommend that you first have a look at Multitenant Database Containers - SAP HANA Administration Guide - SAP Library in order to get an overview of MDC. From the architecture diagram below, you can see a few key points:

 

1. The internal web dispatcher, which is introduced as a service inside SAP HANA SPS09, accepts all HTTP requests and dispatches them to the different databases. That's why, as you'll see in the following part, most configurations take place in webdispatcher.ini.

 

2. By default, a tenant database contains only an index server.

 

3. For system database, the XS server is embedded in the name server. That's why you cannot find the 'xsengine' service in the landscape.

 

4. For tenant database, the XS server is embedded in the index server.

 

6.PNG

 

Configure HTTP access to SYSTEMDB

1. Log on to the system database container -> Administration -> Configuration -> webdispatcher.ini -> profile -> change wdisp/system_0 on the system layer

  • Change the port number ending from 8 to 14; 3<instance>14 is the internal port number reserved for the embedded XS server of the system database.
  • Replace the default SRCURL=/ parameter with the SRCVHOST parameter, specifying the fully qualified domain name (FQDN) of the physical host as the value.

 

5_.png

 

2. Visit the welcome page. XSEngine is up and running.

 

7_.png

 

Configure HTTP access to DB1

What about creating a tenant database and visiting its XSEngine? Let's give it a shot.

 

1. Log on to the system database container -> SQL console -> run the following SQL

CREATE DATABASE DB1 SYSTEM USER PASSWORD Manager1;

 

For better understanding, I logged on to DB1, and you can see that only the index server belongs to DB1, as discussed in the architecture diagram. Since the XS server is embedded in the index server, you cannot find a separate 'xsengine' service either.

 

9_.png

 

2. Log on to the system database container -> Administration -> Configuration -> webdispatcher.ini -> profile -> add wdisp/system_1 on the system layer

By default, new tenant databases are assigned 3 port numbers, one for each of the following:

  • Internal communication (3<instance>40-3<instance>97)
  • SQL (3<instance>41-3<instance>98)
  • HTTP (3<instance>42-3<instance>99)

 

We did not assign any port numbers when we created the tenant database DB1, so the three port numbers 30140 (internal communication), 30141 (SQL) and 30142 (HTTP) were assigned to DB1. That's why we configure the port number as 3$(SAPSYSTEM)42 in the following screenshot.

 

8_.png

 

3. Add the tenant database alias name to DNS. As I'm not able to do that, I just added an entry to the local "hosts" file on Windows. You can find the path in the following screenshot.

 

10_.png

 

4. Visit the welcome page. XSEngine is up and running!

 

11.PNG

 

Add a dedicated 'xsengine' service and configure HTTP access

Don't like the embedded XS server? What about adding a dedicated 'xsengine' service? Now let's create another tenant database DB2 and add a dedicated XS server. You can find details from Add a Service to a Tenant Database - SAP HANA Administration Guide - SAP Library.

 

1. Log on to the system database container -> SQL console -> run the following SQL

CREATE DATABASE DB2 SYSTEM USER PASSWORD Manager2;





 

As with DB1, only the index server belongs to DB2 and there is still no 'xsengine' service. As you would expect, the three port numbers 30143 (internal communication), 30144 (SQL) and 30145 (HTTP) were assigned to DB2.

 

12_.png

 

2. Run the following SQL to add the dedicated 'xsengine' service. Replace <hostname> with yours.

ALTER DATABASE DB2 ADD 'xsengine' AT LOCATION '<hostname>:30146';





 

If you use an occupied port number, the following error will appear.

 

13.PNG

 

3. Now you can find the dedicated 'xsengine' service. Keep in mind that 30146 (internal communication), 30147 (SQL) and 30148 (HTTP) are assigned to it.

 

14_.png

 

4. Configure HTTP access: add wdisp/system_2 on the system layer and use port number 3$(SAPSYSTEM)48.

 

15_.png

 

5. Add the tenant database alias name to DNS.

 

16_.png

 

6. Visit the welcome page. XSEngine is up and running!

 

17.PNG

 

Want more?

Till now we've configured HTTP access to SYSTEMDB, DB1, DB2 and visited the welcome page successfully.

 

SYSTEMDB

7_.png

 

DB1

11.PNG

 

DB2

17.PNG

 

From the above three URLs, you can see different hostnames but the same port number. That's how the internal web dispatcher in SAP HANA works: it maps the various hostnames to the corresponding databases.

 

But how can we make sure there are three XS servers, not just one XS server? Let's use Web IDE to confirm this. In order to use Web IDE, first we need to grant a role to the SYSTEM user in SYSTEMDB, DB1 and DB2 respectively.

 

18_.png

 

We assume the following credentials.

 

<database>: <username>/<password>

SYSTEMDB: SYSTEM/Manager0

DB1: SYSTEM/Manager1

DB2: SYSTEM/Manager2

 

SYSTEMDB

19_.png

 

I failed to log in with the passwords Manager1 and Manager2, which means this XS server belongs only to SYSTEMDB.

 

22.PNG

 

DB1

I failed to log in with the passwords Manager0 and Manager2, which means this XS server belongs only to DB1.

 

20_.png

 

DB2

I failed to log in with the passwords Manager0 and Manager1, which means this XS server belongs only to DB2.

 

21_.png

 

Wrap-up

In this blog post, we first learned some basics about multitenant database containers (MDC), including the architecture at a high level. In SAP HANA SPS09, HTTP access to MDC is not configured/exposed by default; we need to configure it manually. So, besides SYSTEMDB, we created two tenant databases, DB1 and DB2, and successfully exposed HTTP access to these containers. In addition, we also learned how to add a dedicated 'xsengine' service.

 

Hope you enjoyed reading my blog and exposing HTTP access to MDC successfully.

This blog is part of the larger series on all new developer features in SAP HANA SPS 09: http://scn.sap.com/community/developer-center/hana/blog/2014/12/02/sap-hana-sps-09-new-developer-features

 

There are several major enhancements to the XS and developer-related admin tools in SAP HANA SPS 09. In this blog we will explore the visual redesign of the XS Admin tools as well as the structural changes to the SAP Web Dispatcher.

 

SAP Web Dispatcher

 

The SAP Web Dispatcher has long (since SPS 03) been the embedded web server for SAP HANA.  However, its administration and integration into the HANA tools have been pretty lacking.  Before SPS 09, if you wanted to configure or monitor the SAP Web Dispatcher, you had to resort to direct OS access on the HANA system.  This all changes in SPS 09, as the local SAP Web Dispatcher is now fully integrated into the SAP HANA process management framework.  This means that monitoring, configuration, and administration can all be done via the HANA Studio or the web tooling which runs from an XS endpoint.

 

From a monitoring standpoint we can view the Web Dispatcher from the HANA Landscape view of the HANA Studio.

HANABlog40.png

However this also means that configuration parameter changes can now be made from the HANA Studio tools as well:

HANABlog41.png

 

The SAP Web Dispatcher has long had its own web based administration and monitoring tool.  Now this tool is directly accessible via an XS endpoint.  The URL path for this admin tool is /sap/hana/xs/wdisp/admin/public/

HANABlog42.png

 

XS Admin Tool

When originally introduced several SPSs ago, the XS Admin tool was primarily intended to allow for the advanced configuration of XSACCESS points. Since then it has grown in scope to include job scheduling, SMTP and SAML configuration, and it receives a major visual design overhaul in SPS 09.

 

SAP has decided to adopt the Fiori visual design for all the web-based administration tools in HANA, and the XS Admin tool is no different.

HANABlog43.png

 

Another area of major redesign in the XS Admin tool is around job scheduling.  In SPS 07 we introduced XSJobs and had the ability to administer or monitor a single job from the XS Admin tool. New in SPS 09, by frequent request, is a single Job Dashboard that allows you to monitor the status of all jobs in a HANA system.  It can be accessed directly from the URL /sap/hana/xs/admin/jobs/ or runs as an integrated part of the XS Admin tool.

HANABlog44.png

 

From the Job Dashboard we can drill into the details of a single job definition.

HANABlog45.png

 

Logon and Self Service

 

The Form Logon Screen receives several new features in SPS 09 as well. This includes new self service tools for resetting passwords and requesting new accounts.

 

For example you can now set a custom background image for the Form Logon Screen via the xsengine.ini -> httpserver parameter.

HANABlog46.png

There is a new self service application to reset passwords.
/sap/hana/xs/selfService/user/resetPassword.html

HANABlog47.png

 

And a new self service to request a new user account.

/sap/hana/xs/selfService/user/requestAccount.html

HANABlog48.png

 

Also there is an Admin tool where you can monitor and approve or reject these user account requests.

/sap/hana/xs/selfService/admin/

HANABlog49.png

 

Finally, there is a stand-alone self-service application that allows users to set their user profile preferences, such as date or time format.

/sap/hana/xs/formLogin/profile/

HANABlog50.png

This post is part of an entire series

Hana SPS09 Smart Data Integration - Overview

 

The foundation of data integration is being able to connect to various sources. Looking at the SPS08 Smart Data Integration option and its connectivity, you can see the usual suspects: Oracle, SQL Server etc.
With SPS09 and its SDA adapter extension not much has changed, except for one thing: there is an Adapter SDK now and you can write your own adapters in Java!

 

 

The question about connectivity is an obvious one. Without the ability to connect to a source system directly, a workaround has to be used, e.g. writing files, copying them and then loading them into Hana. That files are cumbersome to handle is obvious as well. Or do you need to support Twitter? A workaround might be to use its underlying RESTful protocol. Common to all these workarounds is that they put the entire development burden on the user: the user has to write a program to create the files, and the user has to create and parse the RESTful messages.

For the common sources that is no problem, as all tools support the relational databases of the various vendors. But even there you might find features unique to one database that are either supported or not.

 

While that is the case for Smart Data Integration adapters as well, thanks to the Adapter SDK every Java developer can write adapters for Hana without compromising Hana's stability.

 

Architecture

 

adaptersdkarchitecture.png

The most important change, from an architectural point of view, was to move as much code as possible out of Hana's IndexServer into separate processes.

All that remains in the IndexServer is the optimizer code, translating the user-entered SQL into an execution plan containing remote execution and local execution in - hopefully - the most efficient manner. The part of the SQL that should be sent to the source is handed over to the Data Provisioning Server, which is another Hana process. This contains all the logic common to all adapters. Most importantly, it contains the communication protocol used to talk to an agent process, the host of all adapters.

This architecture has multiple positive side effects:

  1. If anything happens to the remote source, the Hana Index Server is not impacted. Since the Index Server is the core process of Hana, a core dump in any line of adapter code running inside it could otherwise have brought down the entire Hana instance.
  2. Because the agent is an installation of its own, you can install the agent anywhere. One option is to place it on the Hana server itself. But that might not be preferred, because then the entire middleware of all sources has to be installed there and the network has to allow passage of those middleware protocols. More likely the agent will be installed on the specific source system and talk to Hana via the network - no problem, as one Hana instance can have as many agents as required. Using a dedicated server is possible as well.
  3. Because the agent can be installed anywhere, it can even be installed on-premise and connected to a Hana cloud instance. Not even a VPN tunnel has to be used, as the supported protocols include https. In this case the agent establishes an https connection to the cloud Hana instance just as any other web browser would.
  4. Developing an adapter is much easier: Hana Studio has a plugin so that it acts as an agent, and the developer can easily watch the internals.

 

 

Deploying existing Adapters

 

All the SAP-provided adapters are part of the Hana agent installation, which is a separate download in SMP:

see Installations and Upgrades -> Index H -> SAP HANA SDI

in A-Z Index | SAP Support Portal


Once the agent is installed, deploying an adapter is as easy as copying a jar file.

 

See SAP HANA Enterprise Information Management Administration Guide - SAP Library for details.

 

Writing your own Adapter

 

The most important question is how difficult it is to write your own adapter. The goal of development was of course to make it as simple as possible, but most importantly, you do not need to be an expert in Java, Hana and the source system all at once. Any Java developer should be able to write new adapters easily, just by implementing or extending some adapter base classes.

Frankly, it is quite simple: you start a new project in Hana Studio - a Java OSGi/Equinox plugin project - a wizard builds the base class for you, and your task is to add the code that opens a connection to the source, lists the source tables and their structure, and the like. The SAP help portal has an excellent manual describing everything step by step.

Create a Custom Adapter Using a New Plug-in Project - SAP HANA Data Provisioning Adapter SDK - SAP Library

Building an Adapter by Extending the BaseAdapterClass - SAP HANA Data Provisioning Adapter SDK - SAP Library

 

Using the Adapters

fileadapter-createsource.png

All Adapters follow the Hana Smart Data Access paradigm.

In Hana Studio you create a new remote source and browse the remote tables. For selected tables you then create virtual tables, so that these look and feel like any other Hana table and can be queried. All the complexity underneath is hidden.

Just imagine the power of this! You deploy, for example, the File Adapter on your file server.

fileadapter-browse.png

Obviously the adapter needs to follow certain security rules. In the case of the FileAdapter, the developer decided that you cannot query the entire server, but only files and directories within a specified root directory. In addition, the adapter requires the consumer to authenticate as a valid user.


The allowed Hana instance - local, remote or in the cloud - can then see the files as regular tables with a user-defined table layout. Each of these tables can be instantiated as a virtual table, and when selecting from a virtual table, all files - or the files specified in the where clause of the query - are parsed using the defined file format and their data is shown.

 

fileadapter-query.png

Craig Cmehil

IoT with SAP HANA DEV

Posted by Craig Cmehil Dec 8, 2014

These past few months have been heavily focused on SAP HANA and the Internet of Things (IoT) for me and the SAP Developer Relations team. The result from Las Vegas, Berlin and, just last week, the ThingMonk 2014 event in London is a nice little collection of code making up a new example application that will be part of the next version of the SAP HANA Developer Edition (SPS9) in the coming weeks. Ideally this project will be embedded in the next HANA Dev Edition, but it will also be available on GitHub.

 

The goal of this example is to show how easily one can read data from an external hardware device with a sensor, such as a temperature sensor, and store, visualize and access the data via an SAP HANA XS application. It covers:

  • Hardware such as Raspberry Pi, Beagle Bone, and Tessel
  • Circuit
  • Some code for hardware devices using Bonescript, Python and Javascript

 

This example application also allows for display of the values entered via those devices.

 

No example is complete without showing the complete "round trip": this example also provides the ability to retrieve values from SAP HANA and use them on the hardware side.

 

 

Hardware

Below you will find example code for posting to this application from different hardware devices. The three that we have focused on are:

  • Raspberry Pi Model B

    rpi.jpg

  • Beagle Bone Black

    bbb.jpg

  • Tessel

    Tessel.png

 

Circuit

led_temp.png

Creating the circuit on the breadboard is extremely simple.

  • Connect a black jumper wire to the GND
  • Connect a red jumper wire to the 3.3 V
  • Place an LED in the breadboard
  • Place a 220 Ohm resistor from GND to the short leg of the LED
  • Place a jumper wire from the long leg of the LED to the GPIO pin 18
  • Repeat the process for the second LED and connect to GPIO pin 22
  • Place the digital temperature sensor on the breadboard; we are using a DS18B20
  • From the "left" leg of the sensor we run a black jumper wire to GND
  • From the "right" leg of the sensor we run a red jumper wire to 3.3v
  • From the "center" leg of the sensor we run a jumper wire to the GPIO pin 16
  • Between the "right" and "center" legs we place a 4.7K Ohm resistor

 

Coding


Basically, any type of hardware device that can do a JSON-based HTTP POST can submit data to the server using Content-Type "application/json".

               var jsonData = {
                          "ID": 1,
                          "SNAME": "Tessel",
                          "STYPE": "Temp",
                          "SVALUE": ""+temp+"",
                          "TDATE": "/Date("+timeStamp+")/"
          
              }
     

 

 

Python for the Raspberry Pi

        

import os
import glob
import json
import urllib2
import time
import RPi.GPIO as GPIO

os.system('modprobe w1-gpio')
os.system('modprobe w1-therm')

base_dir = '/sys/bus/w1/devices/'
device_folder = glob.glob(base_dir + '28*')[0]
device_file = device_folder + '/w1_slave'

hanaposts = 0
hanaposts2 = 0

# to use Raspberry Pi board pin numbers
GPIO.setmode(GPIO.BOARD)
GPIO.setwarnings(False)
# set up GPIO output channel
GPIO.setup(18, GPIO.OUT)
GPIO.setup(22, GPIO.OUT)


def blink(pin):
    GPIO.output(pin, GPIO.HIGH)
    time.sleep(1)
    GPIO.output(pin, GPIO.LOW)
    time.sleep(1)
    return


def basic_authorization(user, password):
    s = user + ":" + password
    return "Basic " + s.encode("base64").rstrip()


def read_temp_raw():
    f = open(device_file, 'r')
    lines = f.readlines()
    f.close()
    return lines


def read_temp():
    lines = read_temp_raw()
    while lines[0].strip()[-3:] != 'YES':
        time.sleep(0.2)
        lines = read_temp_raw()
    equals_pos = lines[1].find('t=')
    if equals_pos != -1:
        temp_string = lines[1][equals_pos+2:]
        temp_c = float(temp_string) / 1000.0
        temp_f = temp_c * 9.0 / 5.0 + 32.0
        return temp_c


while True:
    hanaposts += 1
    txtTemp = read_temp()
    txtDate = '/Date(' + str(int(time.time())) + ')/'
    url = 'http://[SERVER IP]:[SERVER PORT]/sap/devs/iot/services/iot_input.xsodata/sensor'
    params = {"ID": "1", "TDATE": txtDate, "SVALUE": str(txtTemp), "SNAME": "Craig", "STYPE": "Temp"}
    req = urllib2.Request(url,
                          headers={
                              "Authorization": basic_authorization('[USER]', '[PASSWORD]'),
                              "Content-Type": "application/json",
                              "Accept": "*/*",
                          }, data=json.dumps(params))
    f = urllib2.urlopen(req)
    # LED
    if hanaposts == 25:
        hanaposts = 0
        hanaposts2 += 1
        blink(22)
    if hanaposts2 == 50:
        hanaposts2 = 0
        blink(18)
    time.sleep(1)

Bonescript for the Beagle Bone

          var b = require('bonescript');
          var temperatureSensor = "P9_40"
          
          setInterval(readTemperature, 1000);
          
          function readTemperature() {
              b.analogRead(temperatureSensor, writeTemperature);
          }
          
          // The MCP9700A provides 500mV at 0C and 10mV/C change.
          function writeTemperature(x) {
              //console.log("temp value: "+x.value);
              var millivolts = x.value * 3300; // Only outputs 1.8v, so must adjust accordingly
              var temp_c = (millivolts - 500) / 100;
              var temp_f = (temp_c * 9 / 5) + 32;
              //console.log("Millivolts: " + millivolts + ", temp_c: " + temp_c + ", temp_f: " + temp_f);
              var timeStamp = new Date().getTime();
              //console.log(new Date());
              writeToScreen(temp_c);
              writeToHana(temp_c,timeStamp);
          }
          
          function writeToHana(temp, timeStamp) {
              var http = require('http');
              var options = {
                host: '[SERVER IP]',
                port: [SERVER PORT],
                path: '/sap/devs/iot/services/iot_input.xsodata/sensor',
                method: 'POST',
                headers: {
                   'Authorization': '[AUTH STRING]' ,
                   'Content-Type':'application/json'
          
                }
              };
          
              var req = http.request(options, function(res) {
                res.setEncoding('utf-8');
          
                var responseString = '';
          
                res.on('data', function(data) {
                  responseString += data;
                  // Do something when a value is there
                  //console.log("Response: " + responseString);
          
                });
          
                res.on('end', function() {
                  //var resultObject = JSON.parse(responseString);
                });
              });
          
              req.on('error', function(e) {
                console.log("error found");
                console.error(e);
              });
          
              var jsonData = {
                          "ID": 1,
                          "SNAME": "Craig",
                          "STYPE": "Temp",
                          "SVALUE": ""+temp+"",
                          "TDATE": "/Date("+timeStamp+")/"
          
              }
              var strData = JSON.stringify(jsonData);
              //console.log(strData);
              req.write(strData);
              req.end();
          }
     

Javascript for the Tessel

// Any copyright is dedicated to the Public Domain.
// http://creativecommons.org/publicdomain/zero/1.0/

/*********************************************
This basic climate example logs a stream
of temperature and humidity to the console.
*********************************************/

var tessel = require('tessel');
// if you're using a si7020 replace this lib with climate-si7020
var climatelib = require('climate-si7020');

var climate = climatelib.use(tessel.port['A']);

climate.on('ready', function () {
  console.log('Connected to si7020');

  // Loop forever
  setImmediate(function loop () {
    climate.readTemperature('c', function (err, temp) {
      climate.readHumidity(function (err, humid) {
        console.log('Degrees:', temp.toFixed(4) + 'C', 'Humidity:', humid.toFixed(4) + '%RH');
        var timeStamp = new Date().getTime();
        writeToHana(temp.toFixed(4), timeStamp);
        setTimeout(loop, 1000);
      });
    });
  });
});

climate.on('error', function(err) {
  console.log('error connecting module', err);
});

function writeToHana(temp, timeStamp) {
  var http = require('http');
  var options = {
    host: '[SERVER IP]',
    port: [SERVER PORT],
    path: '/sap/devs/iot/services/iot_input.xsodata/sensor',
    method: 'POST',
    headers: {
      'Authorization': '[AUTH STRING]',
      'Content-Type': 'application/json'
    }
  };

  var req = http.request(options, function(res) {
    res.setEncoding('utf-8');

    var responseString = '';

    res.on('data', function(data) {
      responseString += data;
      // Do something when a value is there
      //console.log("Response: " + responseString);
    });

    res.on('end', function() {
      //var resultObject = JSON.parse(responseString);
    });
  });

  req.on('error', function(e) {
    console.log("error found");
    console.error(e);
  });

  var jsonData = {
    "ID": 1,
    "SNAME": "Tessel",
    "STYPE": "Temp",
    "SVALUE": "" + temp + "",
    "TDATE": "/Date(" + timeStamp + ")/"
  };
  var strData = JSON.stringify(jsonData);
  //console.log(strData);
  req.write(strData);
  req.end();
}

    

 

Control Values

 

Using an HTTP GET with Content-Type "application/json", we can retrieve values from our "control" table.

Python for the Raspberry Pi

import RPi.GPIO as GPIO
import time
import urllib2
import json


# convert string to number
def num(s):
    try:
        return int(s)
    except ValueError:
        return float(s)


# blinking function
def blink(pin):
    GPIO.output(pin, GPIO.HIGH)
    time.sleep(1)
    GPIO.output(pin, GPIO.LOW)
    time.sleep(1)
    return


# Get values from server
def getServerResponse(url):
    req = urllib2.Request(url)
    opener = urllib2.build_opener()
    f = opener.open(req)
    return json.loads(f.read())


###### Initialize Program ######

# suppress warnings
GPIO.setwarnings(False)
# to use Raspberry Pi board pin numbers
GPIO.setmode(GPIO.BOARD)
# set up GPIO output channel
GPIO.setup(18, GPIO.OUT)
GPIO.setup(16, GPIO.OUT)
GPIO.setup(22, GPIO.OUT)

# fetch control variables
jsonStr = getServerResponse("http://[SERVER IP]/sap/devs/iot/services/iot_control.xsodata/control?$format=json&$filter=SCONTROL%20eq%20%27HOT%27")
hotTemp = jsonStr['d']['results'][0]['SVALUE']

jsonStr = getServerResponse("http://[SERVER IP]/sap/devs/iot/services/iot_control.xsodata/control?$format=json&$filter=SCONTROL%20eq%20%27COLD%27")
coldTemp = jsonStr['d']['results'][0]['SVALUE']

# Now loop and check for action
while True:
    jsonStr = getServerResponse("http://[SERVER IP]/sap/devs/iot/services/iot.xsodata/IOT?$orderby=ID%20desc&$top=1&$select=SVALUE&$filter=SNAME%20eq%20%27Ian%27&$format=json")
    currentTemp = jsonStr['d']['results'][0]['SVALUE']
    if num(currentTemp) < num(coldTemp):
        print "Under, " + currentTemp + " is less than " + coldTemp
        for i in range(0, 50):
            blink(18)
    else:
        if num(currentTemp) > num(hotTemp):
            print "Over, " + currentTemp + " is more than " + hotTemp
            for i in range(0, 50):
                blink(22)
        else:
            print "Within range"

    time.sleep(5)

GPIO.cleanup()

 

Round Trip

 

As before, we use an HTTP GET with Content-Type "application/json" to retrieve the control values from our "control" table; this time we also act on them.

 

This round trip uses a fan and an "led" as a heater. The fan is a small 4V fan driven from the 5V supply via a transistor on the board. The idea here is that when the temperature reaches the "HOT" value from the control table, the program starts the fan to cool things back down, and when it reaches the "COLD" value, the "heater" starts to heat up the environment.

 

round_trip.png

 

Python for the Raspberry Pi

import os
import glob
import json
import urllib2
import time
import RPi.GPIO as GPIO

os.system('modprobe w1-gpio')
os.system('modprobe w1-therm')

base_dir = '/sys/bus/w1/devices/'
device_folder = glob.glob(base_dir + '28*')[0]
device_file = device_folder + '/w1_slave'
maxtemp = 0
mintemp = 0

# to use Raspberry Pi board pin numbers
GPIO.setmode(GPIO.BOARD)
# set up GPIO output channel
GPIO.setup(12, GPIO.OUT)
GPIO.setup(13, GPIO.OUT)
GPIO.setup(15, GPIO.OUT)
GPIO.setup(16, GPIO.OUT)
GPIO.setup(18, GPIO.OUT)


def blink(pin):
    GPIO.output(pin, GPIO.HIGH)
    time.sleep(0.5)
    GPIO.output(pin, GPIO.LOW)
    time.sleep(0.5)
    return


def temp_low():
    GPIO.output(12, GPIO.HIGH)
    GPIO.output(13, GPIO.LOW)
    GPIO.output(15, GPIO.HIGH)
    GPIO.output(16, GPIO.LOW)
    return


def temp_high():
    GPIO.output(12, GPIO.LOW)
    GPIO.output(13, GPIO.HIGH)
    GPIO.output(15, GPIO.HIGH)
    GPIO.output(16, GPIO.LOW)
    return


def temp_ok():
    GPIO.output(12, GPIO.LOW)
    GPIO.output(13, GPIO.LOW)
    GPIO.output(15, GPIO.LOW)
    GPIO.output(16, GPIO.HIGH)
    return


# Get values from server
def getServerResponse(url):
    req = urllib2.Request(url)
    opener = urllib2.build_opener()
    f = opener.open(req)
    return json.loads(f.read())


def basic_authorization(user, password):
    s = user + ":" + password
    return "Basic " + s.encode("base64").rstrip()


def read_temp_raw():
    f = open(device_file, 'r')
    lines = f.readlines()
    f.close()
    return lines


def read_temp():
    lines = read_temp_raw()
    while lines[0].strip()[-3:] != 'YES':
        time.sleep(0.2)
        lines = read_temp_raw()
    equals_pos = lines[1].find('t=')
    if equals_pos != -1:
        temp_string = lines[1][equals_pos+2:]
        temp_c = float(temp_string) / 1000.0
        temp_f = temp_c * 9.0 / 5.0 + 32.0
        return temp_c


jsonStr = getServerResponse("http://[SERVER IP]/sap/devs/iot/services/iot_control.xsodata/control?$format=json&$filter=SCONTROL%20eq%20%27COLD%27")
mintemp = jsonStr['d']['results'][0]['SVALUE']
print ('MIN Temp is set at ' + mintemp + 'c')

jsonStr = getServerResponse("http://[SERVER IP]/sap/devs/iot/services/iot_control.xsodata/control?$format=json&$filter=SCONTROL%20eq%20%27HOT%27")
maxtemp = jsonStr['d']['results'][0]['SVALUE']
print ('MAX Temp is set at ' + maxtemp + 'c')

while True:
    # Value of Sensor
    txtTemp = read_temp()
    # Timestamp
    txtDate = '/Date(' + str(int(time.time())) + ')/'
    # HTTP Post to HANA
    url = 'http://[SERVER IP]/sap/devs/iot/services/iot_input.xsodata/sensor'
    params = {"ID": "1", "TDATE": txtDate, "SVALUE": str(txtTemp), "SNAME": "Ian", "STYPE": "Temp"}
    # print(params)
    req = urllib2.Request(url,
                          headers={
                              "Authorization": basic_authorization('[USER]', '[PASSWORD]'),
                              "Content-Type": "application/json",
                              "Accept": "*/*",
                          }, data=json.dumps(params))
    f = urllib2.urlopen(req)
    blink(18)

    # fetch the url
    # url2 = "http://[SERVER IP]/sap/devs/iot/services/iot.xsodata/IOT?$orderby=ID%20desc&$top=1&$select=SVALUE&$filter=SNAME%20eq%20%27Ian%27&$format=json"
    # req2 = urllib2.Request(url2)
    # opener2 = urllib2.build_opener()
    # f2 = opener2.open(req2)
    # json2 = json.loads(f2.read())
    # currtemp = json2['d']['results'][0]['SVALUE']

    jsonStr = getServerResponse("http://[SERVER IP]/sap/devs/iot/services/iot.xsodata/IOT?$orderby=ID%20desc&$top=1&$select=SVALUE&$filter=SNAME%20eq%20%27Ian%27&$format=json")
    currtemp = jsonStr['d']['results'][0]['SVALUE']
    # print currtemp

    if (float(currtemp) <= float(maxtemp)):
        if (float(currtemp) < float(mintemp)):
            print ('>>> HEATER ON ' + currtemp + "c lower than MIN temp of " + str(mintemp) + "c")
            temp_low()
        else:
            print ('HEATER/FAN OFF ' + currtemp + "c within bounds")
            temp_ok()
    else:
        print ('>>> FAN ON ' + currtemp + "c exceeds MAX temp of " + str(maxtemp) + "c")
        temp_high()

    # Loop to next reading
    time.sleep(3)


GPIO.cleanup()

 

** Be sure to get the spacing right in your code; for some reason the formatting here is horrid!!

Dear Cracks,

 

I wrote this code more than 6 months ago, and as it is still not legacy, I decided to write this blog post.

In this tutorial, I will show you how easily you can get data out of your HANA database with NodeJS. The application is written in a way that you can deploy it on CloudFoundry.

 

Why Hana?

Hmm... I think it is not necessary to describe here why to use HANA. I mainly do it because it is super fast and has a lot more to offer than just the database.

 

Why NodeJS?

"Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices." Holger Koser @kos wrote a NodeJS HANA driver, which is also open source and available on Github. The driver alows you to connect to your HANA DB and write powerfull applications in common javascript.

 

Why CloudFoundry?

In the cloud era, the application platform will be delivered as a service, often described as Platform as a Service (PaaS). PaaS makes it much easier to deploy, run and scale applications. Some PaaS offerings have limited language and framework support, do not deliver key application services, or restrict deployment to a single cloud. CloudFoundry is the industry's open PaaS and provides a choice of clouds, frameworks and application services. As an open source project, there is a broad community both contributing to and supporting CloudFoundry, and you can find the whole code on GitHub. Internally we have CloudFoundry running at production grade, which is mainly why I go into CF here.

 

Let's get the party started

This tutorial shows you how to deploy an application which uses openUI5 as the frontend, NodeJS as the backend and SAP HANA as the database. Usually SAP and SAP databases need a huge landscape, which is not really fast and needs a lot of customization. A big advantage of this use case is the very fast SAP HANA database: we use it directly from our NodeJS backend without any middleware. Because of this architecture we're able to request our data really fast.

This is the architectural overview:

https://camo.githubusercontent.com/8bf6a0e9bd808ca364f31df77eeafbc6694c4400/68747470733a2f2f73332d65752d776573742d312e616d617a6f6e6177732e636f6d2f7377697373636f6d2d6974732f6e6f646a732d68616e612f6172636869746563747572652e706e67

It is worth mentioning that you can use any CloudFoundry-based PaaS, not only our Swisscom Cloud but also, for example, Pivotal Web Services.

 

To demonstrate how fast HANA is, we provide a freestyle fuzzy search (Wikipedia) over three columns and a large amount of data. Follow the tutorials to understand how it works.

 

Watch a Demo here.

 

There are two different versions of this tutorial; they are linked on GitHub as I only want to maintain them in one place:

  • Kickstarter steps: a short introduction on how to use this repository and what you have to do when you clone it -> Link.
  • Step-by-step tutorial: you learn how to set up a NodeJS application and how to use a database service in CloudFoundry -> Link.

 

As an enhancement you can also deploy a mobile frontend as an additional layer; see the source code and steps here.

 

What do you think about the combination of SAP HANA and NodeJS?

 

I also wrote a demo app with which you can search through your PDFs. I would be happy to write a blog post about that if you're interested in NodeJS and HANA.

 

Looking forward to your feedback,

Lukas

@Github

@Twitter
