• Series data column table "STOCKS_DATA_SERIES" must be present in schema "SERIES_DATA".



[CSV file (STOCKS_DATA_SERIES.csv) is attached to this post.

Using flat file import, create table "STOCKS_DATA_SERIES" in schema "SERIES_DATA" in your landscape.]


Exercise 1: Create and Compare the Series Table with a Column Table



Examples / Screenshot

Step 1:

Creating a regular column table and a series data table.

create column table "SERIES_DATA"."MyTab"
(key int, ts timestamp, value int);

create column table "SERIES_DATA"."MyTabSeries"
(key int, ts timestamp, value int)
series (
    series key(key)
    period for series(ts, null)
    equidistant increment by interval 60 second
);

Step 2:


Inserting data into the regular column table and the series data table using the SERIES_GENERATE_TIMESTAMP built-in function:



Insert into "SERIES_DATA"."MYTAB" select 1, GENERATED_PERIOD_START, 7 from SERIES_GENERATE_TIMESTAMP ('INTERVAL 60 SECOND', '2010-01-01', '2011-01-01', null, null, null);


Insert into "SERIES_DATA"."MYTABSERIES" select 1, GENERATED_PERIOD_START, 7 from SERIES_GENERATE_TIMESTAMP ('INTERVAL 60 SECOND', '2010-01-01', '2011-01-01', null, null, null);


The number of rows in both tables should be 525,600.
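The expected row count is easy to verify: one row per 60-second increment over the year 2010. A quick Python check (standard library only):

```python
from datetime import datetime

# One row per 60-second increment in [2010-01-01, 2011-01-01)
seconds = (datetime(2011, 1, 1) - datetime(2010, 1, 1)).total_seconds()
rows = int(seconds // 60)
print(rows)  # 525600
```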

Step 3:


Execute merge delta operations for both tables.

merge delta of "SERIES_DATA"."MYTAB";
update "SERIES_DATA"."MYTAB" with parameters ('optimize_compression' = 'force');

merge delta of "SERIES_DATA"."MYTABSERIES";
update "SERIES_DATA"."MYTABSERIES" with parameters ('optimize_compression' = 'force');

Step 4:


Verifying Series Table Storage and Comparing Memory size and Compression with Regular Column table.


select table_name, column_name, memory_size_in_total,
       sum(memory_size_in_total) over (partition by table_name) as tab_memory_size,
       uncompressed_size,
       sum(uncompressed_size) over (partition by table_name) as tab_uncompressed_size,
       compression_ratio_in_percentage as ratio,
       compression_type, "COUNT", distinct_count
from m_cs_columns
where table_name in ('MYTAB', 'MYTABSERIES');




Verify both the normal column table and the series table:

Normal column table --> TS column memory size --> 5 MB

Series table --> TS column memory size --> 10 KB

Now you can see how efficiently the series table stores time values.



Exercise 2: Series Table Built-in and Analytic Functions



Examples / Screenshot

Step 1:


Check the data Preview of Series Data Table "STOCKS_DATA_SERIES"




The table "SERIES_DATA"."STOCKS_DATA_SERIES" holds stock market data (TICKER_ID, TICKER_DESCRIPTION, DATE, OPEN, HIGH, LOW, CLOSE, VOLUME, ADJ_CLOSE and DAILY_PERCENT_CHANGE) for the Deutscher Aktienindex (DAX), covering all business days from 1959 to 2015.

Total number of rows in the table: 13,895.




Step 2:


Exploring Series Data Built-in Functions:




SERIES_DISAGGREGATE (Built-in Function):


Transforming an equidistant time series with a coarser delta to one with a finer delta can be

performed using the SERIES_DISAGGREGATE function.


SQL Query:


We have data on a daily basis and are going to disaggregate it from daily to hourly.



select g.GENERATED_PERIOD_START,
       s.CLOSE * g.FRACTION_OF_SOURCE_PERIOD as "By Hour Close"
from "SERIES_DATA"."STOCKS_DATA_SERIES" as s
join SERIES_DISAGGREGATE_TIMESTAMP('INTERVAL 1 DAY',
       'INTERVAL 1 HOUR', '2015-01-19', '2015-01-20') as g
  on s.DATE = g.SOURCE_PERIOD_START
where s.DATE = '2015-01-19';
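To make FRACTION_OF_SOURCE_PERIOD concrete, here is a small Python sketch (hypothetical close value) of what the disaggregation does: one daily value is split evenly across 24 generated hourly periods.

```python
# Disaggregating one daily value into hourly values:
# each generated hour receives fraction 1/24 of the source period.
daily_close = 4856.5          # hypothetical DAX close for 2015-01-19
fraction_of_source_period = 1 / 24

hourly = [daily_close * fraction_of_source_period for _ in range(24)]
print(len(hourly))            # 24 generated rows
print(round(sum(hourly), 2))  # sums back to the daily close: 4856.5
```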






Step 3:


Exploring Series Data Built-in Functions:



SERIES_ROUND (Built-in Function):


Horizontal aggregation transforms an equidistant series with a narrower interval into a new series with a coarser interval. Horizontal aggregation is performed using the SERIES_ROUND function.


SQL Query:


We have data on a daily basis and are going to aggregate it from daily to monthly.


select rounded.TICKER_ID, Month, avg(CLOSE) as Monthly_avg
from (
    select t.TICKER_ID,
           SERIES_ROUND(t.DATE, 'INTERVAL 1 MONTH', ROUND_DOWN) as Month,
           t.CLOSE
    from "SERIES_DATA"."STOCKS_DATA_SERIES" as t
) as rounded
group by rounded.TICKER_ID, Month;
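The rounding-then-averaging logic can be sketched in plain Python (hypothetical dates and closes), mimicking a round-down of each daily timestamp to the start of its month:

```python
# Rounding daily dates down to the month and averaging CLOSE,
# mimicking SERIES_ROUND(date, 'INTERVAL 1 MONTH', ROUND_DOWN).
from collections import defaultdict
from datetime import date

rows = [  # hypothetical (date, close) pairs
    (date(2015, 1, 19), 4856.5),
    (date(2015, 1, 20), 4869.3),
    (date(2015, 2, 2), 4901.0),
]

buckets = defaultdict(list)
for d, close in rows:
    month = d.replace(day=1)          # round down to month start
    buckets[month].append(close)

monthly_avg = {m: sum(v) / len(v) for m, v in buckets.items()}
print(monthly_avg[date(2015, 1, 1)])  # average of the two January closes
```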





You have completed the exercise!

You are now able to:

1)  Create a time series table.

2)  Understand the storage of a series data table.

3)  Use the series data built-in functions.


I chose to model fire propagation across the suburban landscape. Obviously, since I’m not a subject matter expert in that field, I didn’t try to make the model actually useful for real-world predictions. Rather, my aim was to pick a realistic-looking aspect of fire propagation and use it to showcase the spatial analysis capabilities of HANA.


So, leaving aside all other factors of fire propagation, let’s assume that in an event of fire the wind may carry away burning particles, which, when falling to the ground, may start new fires.


Spatial data used

I used three sources of data, all available for free from the Australian Bureau of Statistics:

  • boundaries of postcodes in New South Wales,
  • boundaries of Australian Statistical Geography Standard (ASGS) mesh blocks,
  • census data associated with ASGS mesh blocks: population, number of dwellings, type of land use.


I imported all that data into HANA using the IMPORT statement, with SRS 1000004326, which allows measuring distances – more about that in the next section.


Flat Earth, Round Earth

The Earth has an ellipsoid shape, hence one geographic degree corresponds to a different distance at the equator than in Sydney. That makes degrees awkward for measuring distances. HANA provides a few pre-set Spatial Reference Systems (SRS), falling into either a “flat Earth” or a “round Earth” category. Measuring distances only works in the “flat Earth” ones.


In a fully-fledged GIS system, there would be some sort of transformation available to convert a shape from linear to polar units and back. HANA lacks that functionality: the ST_TRANSFORM() function will serve some other purpose (and only in SPS10), and the ST_TRANSLATE function has just been added to the feature list. One is left with either the option to implement Vincenty’s formulae, or assume that 1 polar degree has the same value within a local map. For example,

SELECT new st_point(151, -31, 1000004326).st_distance(
           new st_point(151.001, -31, 1000004326), 'meter') as v_dx
from dummy

should give a metric value of the longitude polar unit in the near proximity of the point -31 lat 151 long.

My understanding of the example above is that behind the scene, HANA converts geographic degrees into some internal “flat Earth” coordinates (of SRS 1000004326) and calculates the distance. The values of those internal coordinates are not made available to the user.
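That conversion can be sanity-checked with a simple spherical approximation (not Vincenty’s formulae; the Earth radius and the factor below are textbook values, not anything HANA exposes): metres per degree of longitude shrink with the cosine of latitude.

```python
import math

# Rough "flat Earth" check: metres per degree of longitude scale with
# the cosine of latitude (spherical approximation, not Vincenty).
EARTH_RADIUS_M = 6_371_000.0

def metres_per_degree_lon(lat_deg: float) -> float:
    return (math.pi / 180.0) * EARTH_RADIUS_M * math.cos(math.radians(lat_deg))

# Near latitude -31, 0.001 degrees of longitude is roughly:
dx = 0.001 * metres_per_degree_lon(-31.0)
print(round(dx, 1))  # about 95 m, the same ballpark as ST_DISTANCE above
```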

Using one of these options, one can use HANA math functions to build a shape in a linear system, translate it to geographic degrees and save it to the database, or just pass it back to a SAPUI5 application as GeoJSON.


I picked the Google Maps API, as it seemed to have the easiest learning curve for me. Other choices would probably be Nokia HERE maps and SAP Visual Business. All these APIs can be integrated into SAPUI5, and SAP Visual Business, in fact, has its own controls in the SAPUI5 framework.


To make the application portable between map providers, I moved all calculation and measuring logic to database procedures and used the Google Maps API only for geospatial data visualisation and for user input capture.


Google Maps visualises javascript objects of class google.maps.Data.Feature, where a Feature has the ID, some properties and, lastly, its geometry represented as GeoJSON. Conversion of HANA internally stored spatial objects into GeoJSON has to happen in database procedures, and Features can be built either on the server (XSJS) or on the client (SAPUI5) side.
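The Feature structure described above can be sketched as a plain dictionary (all ids, property names and coordinates here are hypothetical, chosen only to show the shape of the object):

```python
import json

# Sketch: a Feature as a map client expects it - an id, some properties,
# and a GeoJSON geometry (all names and values here are hypothetical).
def make_feature(feature_id, properties, geometry):
    return {"type": "Feature", "id": feature_id,
            "properties": properties, "geometry": geometry}

postcode = make_feature(
    "2000",
    {"name": "Sydney"},
    {"type": "Polygon",
     "coordinates": [[[151.0, -33.9], [151.3, -33.9],
                      [151.3, -33.7], [151.0, -33.7], [151.0, -33.9]]]})

print(json.dumps(postcode)[:40])  # serialisable for the client side
```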


For my app, I implemented visualization of postcode containing a point and attached it to a Click event on the map:



Simple spatial functions

I built a simple tool to measure distances on the map. The google.maps API captures two consecutive Click events, then HANA uses ST_DISTANCE() spatial function to measure the distance between those two points.


Similarly, using distance measurement functions within an arctangent expression, I calculated the simulated wind direction from two consecutive Click events.
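The arctangent idea can be sketched in Python (a hypothetical helper, using a local flat-Earth approximation rather than HANA’s internal coordinates):

```python
import math

# Hypothetical sketch: wind direction (compass bearing, degrees) from two
# consecutive map clicks, using a local flat-Earth approximation.
def wind_bearing(lon1, lat1, lon2, lat2):
    # scale the longitude difference by cos(latitude) so both axes are in
    # comparable linear units near the clicked points
    dx = (lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    dy = lat2 - lat1
    return math.degrees(math.atan2(dx, dy)) % 360  # 0 = north, 90 = east

print(wind_bearing(151.0, -31.0, 151.0, -30.9))  # due north -> 0.0
```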



Modelling smoke dispersal

For a particle to be carried X units away from the fire along the wind direction, there is some probability associated with that event. That probability has normal distribution along X. Then, there is some probability for a particle to diffuse Y units across the wind direction -- that probability also has normal distribution, but obviously with different parameters; in fact, there are equations describing that along/across dispersion.


For the purpose of further work, I changed those equations to outline a patch on the surface that covers the bulk of the probability (say, within 1-2 sigmas) and created a HANA stored procedure to calculate the ellipse-like shape of that patch.
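A minimal sketch of such a patch, assuming (hypothetically) that the along-wind and across-wind dispersions can be summarised by two sigmas and the patch drawn as a wind-aligned ellipse:

```python
import math

# Hypothetical sketch of the risk patch: an ellipse elongated along the
# wind, with semi-axes taken as 2-sigma of the along/across dispersions.
def risk_patch(cx, cy, wind_deg, sigma_along, sigma_across, n=36):
    """Return n (x, y) vertices of the 2-sigma dispersion ellipse."""
    theta = math.radians(wind_deg)
    pts = []
    for i in range(n):
        t = 2 * math.pi * i / n
        # ellipse in wind-aligned coordinates
        u, v = 2 * sigma_along * math.cos(t), 2 * sigma_across * math.sin(t)
        # rotate into map coordinates and shift to the fire location
        pts.append((cx + u * math.cos(theta) - v * math.sin(theta),
                    cy + u * math.sin(theta) + v * math.cos(theta)))
    return pts

patch = risk_patch(0.0, 0.0, wind_deg=90.0, sigma_along=300.0, sigma_across=50.0)
print(len(patch))  # 36 vertices, ready to serialise as a polygon
```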


In this scenario, an area inside the red patch is under significant risk of fire (blue arrow indicates the wind direction).



Spatial join

I wanted to find out a) what ASGS mesh blocks would fall into the risk area, to get some statistics about them, and b) to what postcodes those mesh blocks belong. Both tasks require joining tables not by the usual alphanumeric fields, but by a spatial relationship between the shapes in the records. Examples would be “A is within X meters of B”, “A overlaps/touches B”, “A is covered by B”, etc.


A simplified way to use spatial joins would be in a Calculation View, and there is a SAP tutorial for that. An SQLScript example for “intersection join” would look like this:

SELECT t1.mb_code11 as mesh_code,
       t2.POA_2006 as postcode
from "SPATIAL"."T_SHP_MESH" as t1
inner join "SPATIAL"."T_SHP_POSTAL" as t2
    on t1.shape.st_intersects(t2.shape) = 1;


Here, I added a SAPUI5 Table that interactively selects the row with the data about the mesh block the user clicks on:


Performance considerations

HANA is undeniably fast, but my experience with spatial analysis in HANA so far indicates that there is some amount of optimisation to be done. I may be pushing HANA too far with the amount of data I loaded, but since it’s just one Australian state I doubt that.


So, performance degrades dramatically with the complexity of the spatial features being analysed, both in terms of the number of features and the number of vertices per feature. One should be careful, for example, with ST_BUFFER(), as it produces a rather finely shaped circle polygon whose number of vertices can totally choke the database. It would be good of SAP to provide functionality to reduce the number of vertices in a shape; I remember having that in ESRI's Arc/INFO.
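The vertex reduction wished for here is a classic problem; a minimal Ramer-Douglas-Peucker sketch (plain Python, not a HANA feature) shows the idea of thinning vertices before loading shapes:

```python
# A minimal Ramer-Douglas-Peucker sketch for thinning polyline/polygon
# vertices before loading them into the database (not a HANA feature).
import math

def rdp(points, epsilon):
    """Drop vertices that deviate less than epsilon from the chord."""
    if len(points) < 3:
        return points
    (x1, y1), (x2, y2) = points[0], points[-1]
    chord = math.hypot(x2 - x1, y2 - y1) or 1e-12
    # perpendicular distance of each interior point from the chord
    dmax, idx = 0.0, 0
    for i, (x, y) in enumerate(points[1:-1], start=1):
        d = abs((x2 - x1) * (y1 - y) - (x1 - x) * (y2 - y1)) / chord
        if d > dmax:
            dmax, idx = d, i
    if dmax <= epsilon:
        return [points[0], points[-1]]
    # keep the farthest point and recurse on both halves
    return rdp(points[:idx + 1], epsilon)[:-1] + rdp(points[idx:], epsilon)

line = [(0, 0), (1, 0.05), (2, -0.04), (3, 0.03), (4, 0)]
print(rdp(line, epsilon=0.1))  # -> [(0, 0), (4, 0)]
```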


Another idea that proved useful was to build a “spatial index” of loaded shapes, for example by creating a rectangle containing each shape:

insert into "SPATIAL"."T_SHP_MESH_INDEX"

( select mb_code11, mb_cat11,

         new st_polygon('Polygon ((' ||

               shape.st_xmin() || ' ' || shape.st_ymin() || ', ' ||

               shape.st_xmin() || ' ' || shape.st_ymax() || ', ' ||

               shape.st_xmax() || ' ' || shape.st_ymax() || ', ' ||

               shape.st_xmax() || ' ' || shape.st_ymin() || ', ' ||

               shape.st_xmin() || ' ' || shape.st_ymin() || '))')

               as shape

               from "SPATIAL"."T_SHP_MESH")

A rough and slightly redundant selection of features can be made using that “spatial index”, and then the fine selection with the real shapes is performed on the resulting subset. In my case, this trick reduced selection time from eternity to a few seconds.
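The rough-then-fine idea can be sketched outside the database as a cheap bounding-box prefilter (plain Python, hypothetical shapes): only shapes whose rectangles overlap the query rectangle go on to the expensive exact test.

```python
# Sketch of the rough-then-fine idea: cheap bounding-box rejection first,
# exact (expensive) geometry tests only on the survivors.
def bbox(shape):
    xs, ys = [p[0] for p in shape], [p[1] for p in shape]
    return min(xs), min(ys), max(xs), max(ys)

def bboxes_overlap(a, b):
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 <= bx2 and bx1 <= ax2 and ay1 <= by2 and by1 <= ay2

query = [(0, 0), (2, 0), (2, 2), (0, 2)]
shapes = {"near": [(1, 1), (3, 1), (3, 3)], "far": [(10, 10), (12, 10), (12, 12)]}

candidates = [name for name, s in shapes.items()
              if bboxes_overlap(bbox(query), bbox(s))]
print(candidates)  # only "near" survives the cheap filter
```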


Possible extension

This model has some potential for further extension for predictive analysis.


The ASGS mesh blocks already provide land use type and population density, which may give a clue on how fire-prone a mesh block is: say, a high-rise block is less prone to catch a fire than a bushland piece. Further, some historical data on detected fires, with their coordinates, time of detection, wind parameters etc. could be used to derive spatial relationships (clustering? distances?) between historical fires and build a more thorough predictive model.

In SPS 09 the unit test framework XSUnit, which is based on the open-source framework Jasmine, was introduced by SAP. With this tool, server-side JavaScript unit tests can be created for XS applications. In addition, the mocking framework Mockstar was introduced, which allows mocking database objects or substituting tables/views with stubs. Together, the two tools make it possible to write unit tests for complex database objects like attribute views, analytic views, calculation views and procedures.

This blog guides you through the initial difficulties of drafting your first unit tests. Furthermore, it examines advanced aspects like measuring code coverage and debugging unit tests.


All relevant test tools/frameworks have been bundled by SAP in delivery unit HANA_TEST_TOOLS, which is not automatically deployed but is available as non-automatic content. For us it worked to download HANATESTTOOLS.tgz and deploy it using HANA Studio (the REPO.IMPORT privilege is required for this activity). According to the SAP development documentation it is also possible to deploy the delivery unit via Application Lifecycle Management.


Note: After installation the test tools are available in package sap.hana.testtools. In HANA Studio the package content can be inspected in the Repository Browser or Project Explorer. By contrast, the Systems view treats them as “hidden objects”, since it only displays objects of type Analytic View, Attribute View, Calculation View, Analytic Privilege, Decision Table, and Procedure.


The following privileges and roles are required to create and run unit tests:

  • User must be a standard user (since stub tables are always created within the user’s schema).
  • Select permission must be granted on tables/views under test (in schema where the original tables/views reside).
  • (optional) For viewing and updating unit tests the following privileges must be granted on the package where the unit tests reside:


    Additionally, the execute privilege on
    SYS.REPOSITORY_REST must be granted to access the HANA repository.
  • (optional) Roles sap.hana.xs.ide.roles::EditorDeveloper and sap.hana.xs.debugger::Debugger are required for using the web-based development workbench and debugging unit tests at runtime.
  • (optional) Role sap.hana.testtools.common::TestExecute is required for using the code coverage facility. (Even though this role is designed for another purpose, it is the only role which grants SELECT on the SAP_HANA_TEST schema.)


Writing Unit Tests

Unit tests are supposed to be created as XS JavaScript Library files (suffix .xsjslib) in HANA Studio. The basic syntax is explained in detail on the Jasmine project page. Simplest example:


/*global jasmine, describe, beforeOnce, it, expect*/
describe("test suite", function() {
       beforeOnce(function() {
              // called before running the test suite
       });

       it("unit test", function() {
              // one unit test with its assertions
       });
});
describe() implements a test suite and it() implements one unit test within that suite. The comment in the first line is specific to HANA. It must be added to each unit test file and tells HANA Studio to consider the listed functions available. expect() brings along many functions to implement the unit test assertions.


A great variety of unit test examples is provided with the test tools demos, which can be found in package sap.hana.testtools.demos. Thus, this blog sticks to a pretty simple unit test example. The object under test is an attribute view (AT_EMPLOYEE) which joins tables EMPLOYEE and DEPARTMENT with an outer join:


Note: Both the attribute view and the unit test file must be located in a package which belongs to an XS application.


When it comes to tests on views, it’s necessary to manipulate the content of the referenced tables in order to compare the view’s results afterwards with expected values. Certainly, that contradicts the idea of unit tests, which are supposed to test objects in isolation. To solve that issue, the Mockstar framework can be used to create a temporary copy of the view under test and replace the originally referenced tables with stub tables. These stub tables can be manipulated without worries. The following code would be used to isolate AT_EMPLOYEE:


var testEnv = mockstarEnvironment.defineAndCreate({
    targetPackage : 'UnitTesting.temp.' + mockstarEnvironment.userSchema,
    schema : 'UNIT_TESTING', // original schema
    model : { // object under test
        schema : '_SYS_BIC',
        name : 'UnitTesting/AT_EMPLOYEE'
    },
    substituteTables : { // tables to be substituted
        "empl" : 'EMPLOYEE',
        "dept" : 'DEPARTMENT'
    }
});
It’s common practice (within SAP’s demo unit tests) to do that isolation within the beforeOnce() function. After running the unit test for the first time, the object under test will be copied into the specified target package. Its source tables will be substituted by identical copies of the original tables, located in the schema of the user who ran the test.


Note: It’s not possible to configure the target schema. The substitute tables will always be created in the schema of the user who runs the test. If the original tables are located in that same schema, the unit test execution will cause the original tables to be deleted!

Note: Objects under test, in our example the attribute view, are taken from schema “<definition.schema>_SYS_BIC”, since that schema contains all activated objects.


A first, simple unit test could be one which truncates the employee table and checks if the view is running and is returning an empty result set:


// truncate employee table (substitute)
testEnv.clearTestTables([ 'empl' ]);

// check if the view’s result set is empty
var sqlExecutor = new SqlExecutor(jasmine.dbConnection);
var actual = sqlExecutor.execQuery('select * from ' + testEnv.getTestModelName());
expect(actual).toMatchData({}, [ "ID" ]);



Complete Unit Test Example

After discussing all aspects of creating unit tests in the previous chapter I would like to show a complete unit test example in this chapter. Therefore, let’s extend the previous example and create a second calculation view (CA_DEPARTMENT_STATS) which simply aggregates the results of the first calculation view (CA_EMPLOYEE - I re-built AT_EMPLOYEE as calculation view with identical functionality for this example):




That view counts the number of employees per department. Two simple unit tests which prove the view’s functionality would be:

  1. Is the number of employees per department calculated correctly?
  2. Are employees without assignment to a department counted correctly as well?


The following unit test suite implements both unit tests for view CA_DEPARTMENT_STATS:


/*global jasmine, describe, beforeOnce, beforeEach, it, xit, expect*/
var SqlExecutor = $.import("sap.hana.testtools.unit.util", "sqlExecutor").SqlExecutor;
var mockstarEnvironment = $.import("sap.hana.testtools.mockstar", "mockstarEnvironment");

/**
 * Test suite for testing CA_DEPARTMENT_STATS calculation view
 */
describe("CA_DEPARTMENT_STATS", function() {
    var sqlExecutor = null;
    var testEnv = null;

    /**
     * Setup unit test environment
     */
    beforeOnce(function() {
        testEnv = mockstarEnvironment.defineAndCreate({
            targetPackage : "UnitTesting.temp." + mockstarEnvironment.userSchema,
            schema : "_SYS_BIC",
            model : { // object under test
                name : "UnitTesting/CA_DEPARTMENT_STATS"
            },
            substituteViews : { // views to be substituted
                "empl" : "UnitTesting/CA_EMPLOYEE"
            }
        });
    });

    /**
     * Before each unit test: Reset test data in stubs
     */
    beforeEach(function() {
        sqlExecutor = new SqlExecutor(jasmine.dbConnection);
        testEnv.fillTestTable("empl", [{
            ID : 1,
            NAME : "John Doe",
            DEPARTMENT : "IT"
        }, {
            ID : 2,
            NAME : "Jane Doe",
            DEPARTMENT : "IT"
        }, {
            ID : 3,
            NAME : "Peter Carrot",
            DEPARTMENT : "Accounting"
        }]);
    });

    /**
     * Check if aggregation by department works
     */
    it("should return correct number of employees per department", function() {
        var expected = [{
            DEPARTMENT : "IT",
            NUM_EMPLOYEES : 2
        }, {
            DEPARTMENT : "Accounting",
            NUM_EMPLOYEES : 1
        }];
        var actual = sqlExecutor.execQuery("select * from " + testEnv.getTestModelName());
        expect(actual).toMatchData(expected, [ "DEPARTMENT" ]);
    });

    /**
     * Check if employees without assignment to a department are shown as well
     */
    it("should aggregate employees without assignment to a department", function() {
        testEnv.fillTestTable("empl", [{
            ID : 4,
            NAME : "John Dolittle"
        }, {
            ID : 5,
            NAME : "Jane Auster"
        }]);
        var expected = [{
            DEPARTMENT : null,
            NUM_EMPLOYEES : 2
        }];
        var actual = sqlExecutor.execQuery("select * from " + testEnv.getTestModelName() + " where department is null");
        expect(actual).toMatchData(expected, [ "DEPARTMENT" ]);
    });
});

Running Unit Tests

Because XS applications run on the HANA server, unit tests can’t be started from HANA Studio. Before a unit test can be run it must be activated, and then it is run on the HANA server using HANA’s web frontend, which in the default setup is accessible at http://<hostname>:80<instance>.


On the one hand, the test runner tool can be used on HANA’s web frontend. It is contained in the HANA test tools and accepts parameters which define the target unit test: it searches <package> for unit tests with name <pattern>:


On the other hand, unit tests can be run from HANA’s Web-based Development Workbench:


Note: Microsoft Internet Explorer 10+, Mozilla Firefox, and Google Chrome are supported.

Test Data

The XSUnit framework brings along two mechanisms to populate substitute tables with test data. First, test tables can be populated with single test records. With regard to the unit test example above, the following snippet demonstrates how to insert a single record:

testEnv.fillTestTable('empl', {
       ID : 123,
       NAME : 'John Doe',
       DEPARTMENT_ID : 1
});

On the other hand, substitute tables can be populated from CSV files which are available in the HANA repository. For that, the CSV properties and source package must be defined within the Mockstar environment definition; the table load itself is done by a separate command which can be placed anywhere in the unit test suite:

testEnv = mockstarEnvironment.defineAndCreate({ // only a supplement to the definition!
     csvPackage : "UnitTesting.object",
     csvProperties : {
         separator : ";",
         headers : true,
         decSeparator : ","
     }
});

testEnv.fillTestTableFromCsv("empl", "employee.csv");

Expected values can be matched against the actual view output as illustrated by the following example:

var expected = {
     ID : [ 123 ],
     NAME : [ 'John Doe' ]
};
var actual = sqlExecutor.execQuery('select * from ' + testEnv.getTestModelName());
expect(actual).toMatchData(expected, [ "ID" ]);

Truncating substitute tables works for single tables as well as for all dependent tables:

testEnv.clearTestTables([ 'dept' ]);

Code Coverage

Since SPS 09 the open-source code coverage library BlanketJS has been integrated into the HANA test tools. It measures to what extent JavaScript code is covered by unit tests. Unfortunately, this approach can’t be applied to unit tests on modeled views (attribute views, analytic views, calculation views) and procedures.

Debug Unit Tests

When running unit tests for the first time you might be missing privileges or have semantic failures in your code. If that occurs you will be provided with stack traces and error messages. If that information is not sufficient to understand the issue, there are options to debug server-side JavaScript code. Both your unit test code and the HANA test tool libraries can be debugged (to a certain extent).


To enable the debugging functionality, (1) deploy delivery unit HANA_XS_BASE and (2) add a “debugger” section to “xsengine.ini”. Afterwards, debugging can be initiated from HANA Studio or from the Development Workbench.


Regarding the example unit test above, a possible scenario for receiving a puzzling error message is when you lack privileges to run the attribute view. In that case error “258 - insufficient privilege” will be returned. When running the unit test from the Development Workbench the following output is shown:


To initiate debugging, (1) make sure that the target session for debugging is THIS_SESSION in the editor settings, (2) set a breakpoint on the code line where you want to start debugging by clicking on the line number, and (3) click on the link “Create test report” located below the test results. The link will re-run the unit test in a new browser tab. Because you set a breakpoint, the execution will stop at the specified code line and the debug controls will appear in the Workbench:


Use the Step In (F3) and Step Over (F4) buttons to navigate through the code.


Note: When it comes to debugging, the Development Workbench behaves differently depending on whether the file is a standard JavaScript file or a unit test file. Standard JavaScript files can be run immediately in debug mode by setting breakpoints and running the script/library. Unit test files must be run in another browser tab (via TestRunner.xsjs) to debug them in the Workbench.

Hi all,


I think this is a simple idea, as many of us get confused while working with some transactions; for example, while working with Smart Forms we may get confused about bar code creation, uploading images in Smart Forms, etc.


The idea: provide suggestions while typing in the command field, depending on the starting letters of the TCODE.


For example: in Google, when we search for anything, we get autocomplete suggestions below the search box. In the same way, it would be useful if SAP provided suggestions for transaction codes, as it is a little confusing between some similar TCODEs.


If this idea sounds good to you, you can click on the link below to vote for it:


Provide TCODE Suggestions to users when user types TCODE in user command : View Idea







Sneha Jadhav



I have posted an idea to refer multiple members of a page axis dimension to other sheets in a workbook. Please help me to improve this idea with your valuable feedback and suggestions.



Please go through the below link for more information -

Refer multiple members of a Page Axis dimension : View Idea



Thank you..



I posted an idea a few days back to add the ability to customize the Refresh and Save buttons in the EPM Add-in, as many of us have faced the requirement to customize these two buttons based on some condition.

Please go through the below link for more information.

Ability to customize Refresh and Save button in EPM Add-in : View Idea


Help me to improve this idea with your valuable feedback and suggestions.



Thank you..

As you probably heard, SAP HANA smart data streaming is a new HANA capability introduced with SAP HANA SPS09.  This is an optional capability that can be added to a HANA system to add high-speed real-time event stream processing, with the ability to capture raw or derived streams in the HANA database as well as to perform real-time complex event processing for low latency (sub-second) "sense and respond" scenarios.


This white paper provides a technical overview of smart data streaming,  how it can be used, how it scales, and the basic features of the server and the CCL language.  But here I'd like to provide a quick overview of the six steps you need to take to put this new capability to work.


1. Download it and install it in your HANA system landscape


Smart data streaming is an optional component and is not installed by default on a HANA system. Download smart data streaming from SAP Service Marketplace (look under "H" for HANA, and then you'll see "SAP HANA smart data streaming"). This HANA Academy tutorial will show you how.


Note that smart data streaming requires an additional license.  If you aren't licensed for smart data streaming, talk to your SAP representative.


2. Download and install the streaming plug-in for HANA Studio


From the "support packages and patches" section of Service Marketplace: look under "H", go to SAP HANA smart data streaming, then "Comprised software component versions", and choose "Smart Data Streaming Studio 1.0".


Watch one of these HANA Academy tutorials for installation instructions:  this one is for installing the streaming plugin when you install HANA Studio;  this one is to add  the streaming plugin to an existing HANA Studio installation.


Important: you need to be using HANA Studio 2 (the new version delivered along with HANA SPS09)


You'll then want to follow the steps in these tutorials to configure the Studio plugin, including connecting it to a streaming server, and then add a data service definition so that your streaming project(s) can query and write to the HANA database.


3. Build a streaming project


Streaming projects are written in CCL - continuous computation language - which is the event processing language used by smart data streaming. CCL is derived from SQL - so it's easy to learn, and it also includes a powerful scripting language called CCL Script that can be used to define complex operations that go beyond basic SELECT statements.


The streaming plugin for HANA Studio includes both a CCL editor as well as a visual editor - you can work in either and switch between them at any time to view and edit your project. Check out this HANA Academy tutorial that takes you through the steps of building  a simple project using the streaming visual editor in HANA studio.


4. Connect your project to one or more data sources


Smart data streaming includes a number of input and output adapters that can be used to attach streaming projects to data sources and data destinations.  These include adapters for message buses, web services, files, sockets, email and others.  It also includes a web service provider that exposes streams as web services, with support for REST, SOAP and WebSockets, allowing client applications to easily connect via http.


And the HANA output adapter is used to connect any stream or window in a streaming project to a HANA table, such that the data flows directly from the stream into the connected table in the HANA database - continuously.


5. Test your streaming project


The streaming plugin for HANA Studio includes two perspectives, the streaming development perspective and the streaming run-test perspective. The latter has a number of tools to make it easy for you to test your project before you deploy it on a production system.  This tutorial video takes you through the most basic steps, using the playback tool to stream in test data from a file and the stream viewer to view the output.  Other tools include:  manual input, event tracer, performance monitor (for tuning) and a debugger (set breakpoints)


6. Deploy your streaming project


When you're finished testing your project, you can run it from the Studio if you are connected to a streaming server; more typically, for a production system you would put it into the HANA repository and activate it from there, or include the streaming project(s) in a HANA Delivery Unit.


Visit the Smart Data Streaming Developer Center for more information or to ask questions or share your ideas.

If you've ever visited London you'll no doubt have seen those courier cyclists weaving in and out of traffic.  You might have seen some of them hurtling through red lights, mounting the pavement (or sidewalk for American readers) and frightening pedestrians.  Cycling in general is being encouraged by the London city mayor.  Being a risk-averse scaredy-cat myself I got to wondering how safe it is to cycle in London?  It certainly doesn't look very safe, so I set about attempting to quantify how safe - or otherwise - it is using HANA's spatial capabilities in SP8.


The UK government now publish road traffic accident data in a very convenient form.  They provide spreadsheets containing 1.4 million accidents spanning 13 years.  I used some of this data previously to allow accidents near your location to be viewed as heatmaps on your mobile phone.  Back then the accident data needed a license and additional work was necessary to get locations, so the data is much easier to use now.


This article will show you how to get the necessary accident data, tidy it up using Linux tools, then perform some spatial analysis using SQL, all in an attempt to answer: how safe is cycling in London?


For the impatient, I can reveal that the data shows no increase in average accident severity if you're involved in a cycling accident in London vs the rest of the UK.  Good news for cyclists, then.  Perhaps this is because average speeds in the capital are lower, or perhaps my amateur statistical analysis is flawed.  But I'm getting ahead of myself.  Let's take this step by step:


1) Define the question we want to answer

2) Source the Data and Understand the Data Model

3) Do the Extract, Transformation and Load (ETL) into HANA

4) Analyse Data using HANA SP8 Spatial capabilities


1) Define the question we want to answer

Ideally we'd like to know "how safe is cycling in London?".  To answer that, it would be reasonable to ask: what is the probability of a cycling journey involving an accident?  That would mean we'd need to know how many journeys there are, including those that were incident free, and more about their nature (distance, weather, lighting).  Data about accident-free journeys seems not so easy to get.  Since we're not working to anyone's spec, let's redefine the question that we will answer to suit the data available.  How about:


If you cycle in London and are in an accident, is it likely to be more serious than in the rest of the UK?


The above is much easier to answer with the available data.


2) Source the Data and Understand the Data Model

The source data is available here.  The data model is very simple, as shown below:


accident data model.png


The tables of interest are Accidents and Vehicles.  The Accidents table holds the worst-case severity for an accident and its location.  The Vehicles table holds all vehicles involved, and the field Vehicle_Type = 1 identifies cycles.  In fact, that is all we need.  This is enough to let us filter on cycles only and average the severity over "in London" and "not in London" buckets based on the accident location.


The last thing to consider here is what sort of spatial model we want to use in HANA.  HANA offers 3 spatial reference systems (SRS) in SP8, each identified by a Spatial Reference ID (SRID):




Most of the articles you see use SRID 4326 (aka the "WGS84" system), which is the system used by GPS, and it treats the earth as a sphere.  However, in HANA SP8, SRID 4326 does not allow tests of points being contained by shapes.  So in this article we're going to use SRID 1000004326.  This effectively takes the earth as a globe from SRID 4326 and stretches it out onto a flat planar surface.  Distances are distorted (look at Antarctica in the south of the above diagram) but tests of "does A contain B" are possible.
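To see why containment tests become straightforward on a planar surface, here is a minimal, illustrative pure-Python version of the classic ray-casting point-in-polygon test. This is not HANA's implementation, just the planar idea behind ST_Contains; names and the sample polygon are made up:

```python
# Illustrative ray-casting point-in-polygon test on a flat (planar) surface.
# Treats (lat, lon) as plain x/y coordinates, as a planar SRS does.

def point_in_polygon(lat, lon, polygon):
    """polygon: list of (lat, lon) vertices."""
    inside = False
    n = len(polygon)
    j = n - 1
    for i in range(n):
        lat_i, lon_i = polygon[i]
        lat_j, lon_j = polygon[j]
        # Toggle 'inside' each time a ray from the point crosses an edge
        if (lon_i > lon) != (lon_j > lon):
            intersect_lat = (lat_j - lat_i) * (lon - lon_i) / (lon_j - lon_i) + lat_i
            if lat < intersect_lat:
                inside = not inside
        j = i
    return inside

# A tiny square "city boundary" around the origin
square = [(-1, -1), (-1, 1), (1, 1), (1, -1)]
print(point_in_polygon(0, 0, square))   # True  (inside)
print(point_in_polygon(2, 2, square))   # False (outside)
```

On a sphere, "a horizontal ray" is not well defined, which is one reason containment tests need the planar SRID here.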


Suitable HANA tables to hold the data can be found in this GitHub repository, see the RoadF.hdbdd file.


Ok, now we have understood the data model, we're ready to get some data into the system.


3) Do the Extract, Transform and Load (ETL) into HANA

The data is supplied as three tables, one spreadsheet per table.  The data will make a journey through process steps a) to e) as shown below:


data flow.png


a) Use PSCP to Upload Files


First download the files to your local PC.  Next we need to get the files onto the HANA box.  One method I'd not seen until recently is a tool called PSCP.  The HANA system for this demo was a CAL instance, and if you use a CAL instance you may already use PuTTY to connect to the backend Linux host.  When you install PuTTY on a Windows PC you also get a secure FTP client called PSCP.  PSCP can read the configurations you've set up in PuTTY, which makes it quite convenient for transferring files.


Let's use PSCP from the Windows command line to list some files on the Linux host:


C:\Program Files (x86)\PuTTY>pscp -ls "HANAonBW":/usr/sap/HDB/home/

pscp ls.png


In the above, -ls is the Linux command to list files, "HANAonBW" is a saved PuTTY config that allows us to log in, and /usr/sap/HDB/home/ is the directory on the Linux box.  The PuTTY configs are those you see under "Saved Sessions" in PuTTY here:




Now we are familiar with PSCP, it is easy to do the file transfer.  The syntax is like this: pscp <source file from Windows> <PuTTY config>:<target directory on Linux>:


C:\Program Files (x86)\PuTTY>pscp c:\temp\RTA\AccidentsF.csv "HANAonBW":/usr/sap/HDB/home/


b) Use AWK to convert date format


The data provided contains a date in a format that is not compatible with HANA.  We need to change the date format from 14/01/2005 to the HANA format 2005-01-14.  For this we're going to use a tool that comes with Linux called AWK.  Connect to the Linux backend with PuTTY, then run this series of commands:


This runs pretty quickly, around 8 seconds to convert 1.4 million rows.  Taking each line in turn:


// line count of source file for checking
wc -l < AccidentsF.csv
// change field 10 to be a HANA-style date
awk -F"," '{OFS=","; $10=substr($10,7,4)"-"substr($10,4,2)"-"substr($10,1,2); print $0}' AccidentsF.csv > AccidentsFT.csv
// line count of target file for checking (should match the source: 1494276)
wc -l < AccidentsFT.csv

The AWK command is quite long and deserves a bit more elaboration:


-F","     - to use a field separator of comma

{         - begin actions

OFS=",";  - Output Field Separator, so that the output fields are also separated by a ,

$10=      - field number 10 will equal...

substr($10,7,4)"-"substr($10,4,2)"-"substr($10,1,2); - some strings manipulated to make new date

print $0; - to print out the whole line to the target file.

}         - end actions

AccidentsF.csv  - the input file

>               - is sent to

AccidentsFT.csv - the T transformed file


The result of the above is a new file called "AccidentsFT.csv", with the date formatted as 2005-01-14.
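If AWK isn't your thing, the same per-line transformation can be sketched in Python (field position as in the AWK command; reading and writing the actual files is left out):

```python
# Convert field 10 from DD/MM/YYYY to the HANA-friendly YYYY-MM-DD,
# mirroring the AWK one-liner above.

def convert_line(line):
    fields = line.rstrip("\n").split(",")
    d = fields[9]                                        # field 10, e.g. 14/01/2005
    fields[9] = d[6:10] + "-" + d[3:5] + "-" + d[0:2]    # -> 2005-01-14
    return ",".join(fields)

print(convert_line("a,b,c,d,e,f,g,h,i,14/01/2005,k"))
# a,b,c,d,e,f,g,h,i,2005-01-14,k
```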


c) Upload via CTL files


This is covered already on SCN, and so following the same principles: use PuTTY, move to directory /usr/sap/HDB/home/ and type:


cat > roadaccF.ctl

import data

into table "ROAD".""

from '/usr/sap/HDB/home/AccidentsFT.csv'

record delimited by '\n'

field delimited by ','

optionally enclosed by '"'

error log '/usr/sap/HDB/home/AccidentFTErr.txt'


Use [CTRL+D] to finish creating the CTL file.  Then upload the data using SQL from inside HANA Studio (the necessary target table definitions are in GitHub here):


IMPORT FROM '/usr/sap/HDB/home/roadaccF.ctl';


d) Data Cleansing


Now we need a tiny bit of data cleansing.  The data contains a few records without location data, and these need to be removed:


delete from "ROAD"."" where LAT is null and LON is null;


e) Add Spatial Field & Fill it


We cannot yet add the spatial field to our Accidents table via the .hdbdd file in HANA Studio; instead we have to add the field manually with SQL.  To do this I followed the article by Rafael Babar and copied the data from the existing table to a new table, populating the spatial point in the process.  The SQL to do this is in GitHub.


4) Analyse Data using HANA SP8 Spatial capabilities

I spent much time trying to get HANA models and SQLScript calculation views to work, and hit various errors.  The recommendation seems to be to wait for SP9 for fuller integration of spatial features with HANA models.  Therefore I used pure SQL to analyse the data.


Before doing any SQL, we need to define what "in London" and "outside London" means.  For this I followed Jon-Paul Boyd's excellent blog and used Google Earth to draw a polygon around the area I was interested in:

london polygon.png

That polygon is then exported as a series of coordinates, which are used in the SQL statement below.  Finally we're ready for some analysis!  This SQL returns the mean severity and variance of all accidents in London that involved a bicycle:


-- Just bicycles in London

select

AVG (T0."SEVERITY") "Avg Severity",

VAR (T0."SEVERITY") "Avg Severity Variance"

from "ROAD"."" T0

left outer join "ROAD"."" T1

on  T0."ACCREF" = T1."ACCREF"

where T1."VEHTYPE" = 1  --vehtype 1 is bicycle

and NEW ST_Polygon('Polygon((

51.69021545178133 -0.1000795593465265,

51.68262625218747 -0.1640894678953375,

51.62673844314873 -0.5003652550731252,

51.4687978441449 -0.5003020713080952,

51.37537638345922 -0.2604447782463681,

51.29248664506417 -0.1217913673590465,

51.3298782058117 -0.02055237147410183,

51.32142023464126 0.0993682688423303,

51.34618151800474 0.1346959279977478,

51.46491093248794 0.2133695972971839,

51.54192930978628 0.3296565877570212,

51.62542509952219 0.228648947683745,

51.60811732442631 0.0851277551187013,

51.67901853300752 -0.01341248134237749,

51.69021545178133 -0.1000795593465265

))').ST_Contains("LATLON") > 0;  -- use = 0 for outside London


The results are, for cycling accidents inside and outside London:



Location         Severity Mean    Severity Variance

Inside London     2.86076           0.12851

Outside London    2.81853           0.16485


Remember that lower severity is more serious.  Severity is measured as an integer where 1 = fatal, 2 = serious and 3 = minor.  The results suggest it is better to be involved in an accident inside London because the average severity value is higher (less serious).  Perhaps this is because car speeds are slower.


The next question is: are the above results statistically significant?  Could the difference arise by chance alone, or does it reveal a pattern in the underlying data?  This is beyond my (very) amateur statistics knowledge, and although there are plenty of examples online about "comparing the means of two populations", they all focus on taking samples from large populations where the variance is not known, whereas here we know every data point.  If anyone with statistics knowledge reads this, I'd be interested to know how to go about comparing these means.
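For anyone who wants to experiment, a common first approximation is a two-sample z-statistic on the difference of the means. The sketch below uses the summary figures from the results table; the accident counts n_in and n_out are hypothetical placeholders, not values from the data:

```python
import math

# Summary stats from the results table above; the counts are hypothetical
# placeholders -- substitute the real row counts from the queries.
mean_in, var_in, n_in = 2.86076, 0.12851, 5000
mean_out, var_out, n_out = 2.81853, 0.16485, 50000

# Standard error of the difference between the two means
se = math.sqrt(var_in / n_in + var_out / n_out)
z = (mean_in - mean_out) / se
print(round(z, 2))  # |z| > 1.96 would suggest significance at the 5% level
```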

Hadoop offers large-scale, low-cost distributed storage called the Hadoop Distributed File System (HDFS). When a large file is stored in Hadoop, it is broken into multiple fixed-size blocks and replicated across data nodes. Since the redundancy is at the block level, recovery is faster and distribution is optimal. To process these blocks coherently, Hadoop introduced a programming paradigm called MapReduce (MR). By integrating with Hadoop, you can combine the in-memory processing power of SAP HANA with Hadoop’s ability to store and process huge amounts of data, regardless of structure.

Before SAP HANA SPS09, you could use Smart Data Access (SDA) to create virtual tables through the Hive ODBC driver for data federation scenarios. Starting from SPS09, SAP HANA supports integration with Hadoop MapReduce (MR) jobs written in Java. You can create a new type of user-defined function with direct access to HDFS, and these vUDFs (virtual user-defined functions) can be invoked directly from SQL. This helps customers reuse their investments in MapReduce and solve problems that don’t fit the typical Hive usage patterns via SDA.


Here is the architecture diagram of the integration:


1. Install Hadoop adapter controller

You should have your Hadoop system installed already. SAP has created an adapter that is installed as a delivery unit in the HANA XS engine and later pushed to Hadoop. This installation has already been done in the system, but it is good for you to understand how it works.

The controller can be downloaded from SAP Marketplace. After that, assign the sap.hana.xs.lm.roles::Administrator role to your HANA user, then start the HANA Application Lifecycle Manager to import it as a delivery unit.


The HANA Application Lifecycle Manager URL looks like the one below; replace the host and instance number. After login, click Delivery Unit->Import->Browse, locate the tgz file and import it.



On the Hadoop side, the controller.jar should now be deployed at /sap/hana/mapred/<Revision>/controller/package in HDFS.




You will also need to put these two jar files at the path /sap/hana/mapred/lib in HDFS:




2. Create the MapReduce java project


You can download the project from this blog. If you want to start from scratch, the steps are below.

In HANA Studio/Eclipse, create a new Java project named song_count, add the following jar files into a subdirectory lib, and add them to the class path of the project. You should see something like below after that.



Create the Mapper class with the name SongCountMapper in the package com.sfp, then copy in the source code below. In the mapper, we emit one count per song for each artist.

package com.sfp;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class SongCountMapper extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);

    public void map(Object key, Text value, Context output) throws IOException,
            InterruptedException {
        String[] song = value.toString().split(",");
        output.write(new Text(song[3]), one);
    }
}


Create the Reducer class with the name SongCountReducer in the package com.sfp, then copy in the source code below. In the reducer, we aggregate the number of songs for each artist.

package com.sfp;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class SongCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    public void reduce(Text key, Iterable<IntWritable> values, Context output)
            throws IOException, InterruptedException {
        int count = 0;
        for (IntWritable value : values) {
            count += value.get();
        }
        output.write(key, new IntWritable(count));
    }
}


Create the Application class with the name SongCountApplication in the package com.sfp, then copy in the source code below. The application wires up the mapper and reducer, launches a MapReduce job that processes data from the input path /sfp/song/data, and puts the result into the path /sfp/job_result in HDFS.

package com.sfp;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class SongCountApplication extends Configured implements Tool {

    public static void main(String[] args) throws Exception {
        int res = ToolRunner.run(new Configuration(), new SongCountApplication(), args);
        System.exit(res);
    }

    @Override
    public int run(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration());
        job.setJarByClass(SongCountApplication.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        job.setMapperClass(SongCountMapper.class);
        job.setReducerClass(SongCountReducer.class);

        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);

        FileInputFormat.setInputPaths(job, new Path("/sfp/song/data"));
        FileOutputFormat.setOutputPath(job, new Path("/sfp/job_result"));

        job.waitForCompletion(true);
        return 0;
    }
}



Compile the Java code and make sure there are no syntax errors. The project should now look like below.
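As a quick logic check (not a substitute for running the job on the cluster), the mapper/reducer counting can be simulated in a few lines of Python, assuming CSV rows where the fourth field is the artist, as in SongCountMapper. The sample rows are made up for illustration:

```python
from collections import defaultdict

# Simulate SongCountMapper/SongCountReducer: emit (artist, 1) per row,
# then sum the emitted counts per artist.
rows = [
    "1,Song A,2001,Artist X",
    "2,Song B,2002,Artist Y",
    "3,Song C,2003,Artist X",
]

counts = defaultdict(int)
for row in rows:
    artist = row.split(",")[3]   # song[3] in the Java mapper
    counts[artist] += 1          # the reducer's running sum

print(dict(counts))  # {'Artist X': 2, 'Artist Y': 1}
```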




3. Upload the MapReduce Jobs Archive into HANA repository


To make HANA aware of the MapReduce code, we need to upload the jar file into the HANA repository. HANA Studio can generate the jar file on the fly from your source code.

First we need a package in the HANA repository in which to store the jar file. Create a new general project in HANA Studio/Eclipse named song and share the project with the HANA repository (creating a new repository workspace if you don't have one). You should then see a project like this. The project is really a placeholder for the MapReduce job archive; it is not necessary to create the .xsapp or .xsaccess files as you would for other XS-based projects deployed in the HANA repository.




Select the project, click New->Other... and choose SAP HANA->Database Development->Hadoop MR Jobs Archive in the pop-up window.




Click Next, enter the file name song_count, then click Next.




Select the Java project song_count, set the target schema to your schema (here SFP) and click Finish.




That creates a song_count.hdbmrjobs file. Activate it, and you can confirm it has been uploaded as a jar file in the HANA repository by executing this query.




4. Create the Remote Data Source


Now it’s time to create the remote data source by running the SQL statement below in the SQL console of HANA Studio. Replace <FQDN> with the fully qualified domain name of your host, which you can find by running the command hostname -f.


CREATE REMOTE SOURCE "<remote_source_name>"

ADAPTER "hadoop"

CONFIGURATION 'webhdfs_url=http://<FQDN>:50070;webhcat_url=http://<FQDN>:50111'

WITH CREDENTIAL TYPE 'PASSWORD'

USING 'user=hdfs;password=hdfs';

Now you should see the Remote Source.





5. Create the Virtual Function


To create the virtual function, run the following SQL statement in the SQL console of HANA Studio. The returned table structure needs to use the same data types as the output structure of the Reducer; the package is the one you can find in the view “SYS”.”VIRTUAL_FUNCTION_PACKAGES”; and the configuration needs to specify the input path in HDFS where you put your data files, plus the class names of the Mapper and Reducer.


RETURNS TABLE ("song_artist" NVARCHAR(400), "count" INTEGER)


6. Run the Virtual Function


Now you can simply run the virtual function like below.


It will trigger the MapReduce job, execute it in the Hadoop cluster and populate the result into the output path of your Reducer; for this Reducer, that is /sfp/job_result in HDFS.




And you can find the job records in the job history UI at http://<your_hadoop_host>:19888/jobhistory



Click the link to download the source code.


In this quick blog I explain how to implement a supplier rating application using the text analysis capabilities of SAP HANA. I am using the sentiment analysis feature to rate a particular product and its suppliers based on customer feedback from various web sites.



We have the customer feedback collected from suppliers and arranged in the following format (data is copied from multiple online suppliers; product names and supplier names are masked).

data strucure.JPG


Modelling your source Table

     Now we need to load the data into SAP HANA. For that, we create a table of the same structure.



Once the data is loaded into the source table, we create a full-text index on the FeedbackText column using the standard configuration "EXTRACTION_CORE_VOICEOFCUSTOMER".

Voice of the customer content includes a set of entity types and rules that address requirements for extracting customer sentiments and requests. You can use this content to retrieve specific information about your customers' needs and perceptions when processing and analyzing text. The configuration involves complex linguistic analysis and pattern matching that includes processing parts of speech, syntactic patterns, negation, and so on, to identify the patterns to be extracted.

Once the index is created, a new columnar table is created in the same schema as the source table. This table holds the tokens as well as the corresponding sentiment values.




Now let's generate some visualizations on the extracted sentiment data.




SAP Lumira For Reporting

Now we have the sentiment data available in our table: the product details, supplier details, and the type of feedback given by the customers. Next we will generate a dashboard (real-time dashboards in Lumira, if we integrate it with live feedback data from multiple web sites).




Pie Chart.JPG

And I love this part

I love this part.JPG

(This blog is created for DataGeek ).



Firstly thanks to my customer for the situation that forced me to explore this scenario.


I was unable to find a simple document explaining such a setup; hence this blog.


The scenario is to access HANA tables from within Oracle via database links.


Source database : SAP HANA SP 08 Rev 85

Target database : Oracle running on RHEL 6.x X64

In middle Oracle gateway running on RHEL 6.x X64


The target Oracle database will be migrated to HANA in 6 months, but in the interim we need to access HANA data seamlessly from inside Oracle to refresh a few materialized views.


After evaluating options such as CSV file export/import, ETL tools, SAP SLT, Data Services, etc., the favourable option was Oracle Gateway.


Getting Oracle Gateway running was quite some work the first time; hence this blog, to help others.


The way it works is:


From within the Oracle database, if a SQL statement like SELECT * FROM DUMMY@H1X; is fired, it should bring back the data from the SAP HANA database (H1X).


First some basics, which are important to understand.


How does it work?


SQL commands are fired from an Oracle database (sqlplus), which reaches out to Oracle Gateway via DBLINK > tnsnames.ora.

Oracle Gateway has parameters in its init<sid>.ora file and loads the unixODBC libraries from its LD_LIBRARY_PATH.

unixODBC loads the HANA ODBC driver and goes through the DSN settings to read data from the HANA database.


Meaning Oracle DB > DBLINK > tnsnames.ora > Oracle Gateway > unixODBC drivers > Data source DSN (odbc.ini) > HANA odbc drivers > HANA DB


Notice that Oracle Gateway and the HANA ODBC driver do not talk to each other directly; they talk through the unixODBC driver manager.


This is the most important thing to understand; otherwise you will not be able to set this up correctly.


Step by step - How to setup the above scenario


Step 1 - First step is to make unixODBC working


Install the unixODBC rpms (both 32- and 64-bit) on the RHEL machine where you will run Oracle Gateway.








Step 2 - Install SAP HANA client (64bit) on RHEL machine where you will run Oracle Gateway. Please refer to the HANA client installation guide.



Step 3 - Create /etc/odbc.ini   contents looks like below
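For reference, a minimal /etc/odbc.ini sketch. The DSN name matches H1X as used later; the driver path and port are assumptions for a typical HANA client install (30015 is the SQL port for instance 00):

```
[H1X]
Driver=/usr/sap/hdbclient/libodbcHDB.so
ServerNode=<hana_host>:30015
```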







Step 4 - Install the Oracle Gateway software. You can google for a step-by-step guide on how to install it using the Oracle Universal Installer.



Step 5 - Set the environment variables of the user running Oracle Gateway. In my case:




It's very important to have the 64-bit unixODBC library path (/usr/lib64/) in LD_LIBRARY_PATH because we run RHEL x64 and 64-bit Oracle software.



Step 6 - Create init<sid>.ora. In my case I will call this dg4odbc (initdg4odbc.ora). The content should look like below.



HS_FDS_CONNECT_INFO = H1X                                   <===== This is the DSN name that comes from step 3 /etc/odbc.ini





set ODBCINI=/etc/odbc.ini



Step 7 - Create listener.ora


















Step 8 - Start listener


lsnrctl start



Step 9 - Let's first test that unixODBC is working


Log in as the user which will run Oracle Gateway, check LD_LIBRARY_PATH (refer to step 5 above) and use the commands below.


isql -v <DSN name from step 3 /etc/odbc.ini> <hana user name> <password>


For example isql -v H1X SYSTEM password



| Connected!                            |

|                                       |

| sql-statement                         |

| help [tablename]                      |

| quit                                  |

|                                       |



SQL> select * from dummy




| X    |


SQLRowCount returns 1

1 rows fetched


If you see this output you are halfway there: unixODBC is working.



Now it's time to work on the Oracle database from which the data will be read with SELECT statements.



Step 10 - Add entries in tnsnames.ora.  In my case it looks like below.


dg4odbc  =







And test it with tnsping




Step 11 - Create the DB link in the Oracle database and run a SELECT command.  The commands look like this.



CREATE DATABASE LINK H1X CONNECT TO "SYSTEM" IDENTIFIED BY "password" USING 'dg4odbc';



SQL> select * from dummy@H1X;











Hope this helps someone in need!

At TechEd Las Vegas in 2013 I had the pleasure of teaming up with Greg Myers from EV Technologies. Quantum Fit was our solution to the explosion of fitness devices without a unified platform. To date, fragmentation of the data is still an issue. This blog will cover the devices that were strapped to me, posting data to and receiving notifications from HANA.


The application collected biometric, distance and environmental data, all posted to a HANA database via XSJS. The HANA application reacted to changes in the data to trigger notifications to the athlete. The data was also consumed via an athlete's dashboard showing real-time data.


The demo shows our working solution; after watching the 6 minutes of geeky entertainment, I'll go over how we built the device side of the solution.




Smart phone

The center of our Quantified Self was the smart phone, in this instance the iPhone 5S. Its overriding purpose was to read data from the connected devices and the device itself, push alerts to the Pebble, and post device readings to our HANA system running on AWS.

Location and motion readings were read from the device.


The secondary purpose of the iPhone was to display the current heart rate.




iOS has the CoreMotion framework that provides access to the step count. We enabled distance determination outdoors with CoreLocation, but indoors on stage CoreLocation would not work. In hindsight the step count did not work so well on stage either, as there was no guaranteed refresh period from the M7 chip and it slipped outside the time window for the demo.


Accessing the CoreMotion data is relatively straightforward as shown in the code example below.


#import <CoreMotion/CMStepCounter.h>


#pragma mark - Core Motion


-(void) startStepCount {

    if ([CMStepCounter isStepCountingAvailable] == YES) {

        CMStepCounter *stepCounter = [[CMStepCounter alloc] init];

        [stepCounter startStepCountingUpdatesToQueue:[NSOperationQueue currentQueue] updateOn:20 withHandler:^(NSInteger numberOfSteps, NSDate *timestamp, NSError *error) {

            steps = (int)numberOfSteps;  // 'steps' is an instance variable holding the running count

            self.stepCount.text = [NSString stringWithFormat:@"%d steps", steps];

            [self updatePace];

        }];

    }

}




Pebble Watch

The Pebble watch at the time was not usable (nor enabled) as a step counter. We looked at creating our own Pebble application, but it made no sense as the Pebble comes with a default sports application that a mobile application can drive. The following code is from the original iPhone application. Since this was written, Pebble have made a significant SDK update, so it may no longer work with the latest SDK.



/* PBPebbleCentral delegate methods */


- (void)pebbleCentral:(PBPebbleCentral*)central watchDidConnect:(PBWatch*)watch isNew:(BOOL)isNew {

    [self setTargetWatch:watch];

}

- (void)pebbleCentral:(PBPebbleCentral*)central watchDidDisconnect:(PBWatch*)watch {

    [[[UIAlertView alloc] initWithTitle:@"Disconnected!" message:[watch name] delegate:nil cancelButtonTitle:@"OK" otherButtonTitles:nil] show];

    if (_targetWatch == watch || [watch isEqual:_targetWatch]) {

        [self setTargetWatch:nil];

    }

}


- (void)setTargetWatch:(PBWatch*)watch {

    _targetWatch = watch;

    [watch sportsGetIsSupported:^(PBWatch *watch, BOOL isAppMessagesSupported) {

        [watch appMessagesSetUUID:PBSportsUUID];

        [watch sportsAppLaunch:^(PBWatch *watch, NSError *error) {

            NSString *message = error ? [error localizedDescription] : @"Sports App Launch Request";

            NSLog(@"%@", message);  // logging added to complete the handler

        }];

    }];

}



-(void) sendDataToWatch {

    NSTimeInterval minsPerMile = cMetersInMile / (paceMetersPerSecond * 60);

    NSString *minsPerMileString = [NSString stringWithFormat:@"%0.2f", minsPerMile];

    NSDictionary *update = [[NSDictionary alloc] initWithObjectsAndKeys:
        [NSString stringWithFormat:@"%d", self.heartRate], PBSportsTimeKey,
        [NSString stringWithFormat:@"%0.1f", distanceInMeters / cMetersInMile], PBSportsDistanceKey,
        minsPerMileString, PBSportsDataKey, nil];

    PBSportsUpdate *upd = [[PBSportsUpdate alloc] init];

    upd.time = 3.95f;

    [_targetWatch sportsAppUpdate:update onSent:^(PBWatch *watch, NSError *error) {

        NSString *message = @"";

        if (error) {

            message = [error localizedDescription];

        }

    }];

}







The other feature we used on the Pebble was notifications. This is default behaviour for the watch and required no extra development on our part. The notifications received by the phone are mirrored on the watch.


Smart Water Bottle

The water bottle is where I had most of the fun. We used an Arduino UNO, a RedBear Bluetooth BLEMini module and a DS18B20 temperature sensor. We had other peripherals such as humidity sensors connected, but chose not to use them in the demo as we already had a lot to show in 6 minutes.




The code for the Arduino is below and still worked when tested yesterday. The temperature sensor is a 1-Wire serial device which needs the OneWire library for reading data. There are a lot of examples of how to use it; it is one of the classic getting-started devices.


The RedBear BLEMini was used for the Bluetooth support; the temperature readings were written out through its serial port.


#include <Arduino.h>

#include <SoftwareSerial.h>

#include <OneWire.h>


SoftwareSerial BLEMini(0, 1);


byte blink;


//Temperature chip i/o

/* OneWire */

int DS18S20_Pin = 2; //DS18S20 Signal pin on digital 2

OneWire ds(DS18S20_Pin); // on digital pin 2


void setup()
{
  Serial.begin(9600);     // assumption: serial monitor baud rate
  BLEMini.begin(57600);   // assumption: BLEMini default baud rate

  pinMode(12, OUTPUT);
}


unsigned char buf[16] = {0};

unsigned char len = 0;


void loop()
{
  float temp = getTemp();

  char tempData[10];

  dtostrf(temp, 4, 2, tempData);  // format the reading as text

  BLEMini.print(tempData);        // assumption: send the reading over BLE serial

  blink ^= 0x01;

  digitalWrite(12, blink);        // toggle pin 12 as a heartbeat

  delay(1000);                    // assumption: one reading per second
}


void printDouble( double val, unsigned int precision){

// prints val with number of decimal places determine by precision

// NOTE: precision is 1 followed by the number of zeros for the desired number of decimal places

// example: printDouble( 3.1415, 100); // prints 3.14 (two decimal places)


  Serial.print (int(val));  //prints the int part

  Serial.print("."); // print the decimal point

  unsigned int frac;

  if(val >= 0)

      frac = (val - int(val)) * precision;

  else

      frac = (int(val) - val) * precision;

  Serial.println(frac, DEC);



float getTemp(){

  // returns the temperature from one DS18S20 (converted to Fahrenheit below)

  byte data[12];

  byte addr[8];

  if ( !ds.search(addr)) {

    // no more sensors on chain, reset search

    ds.reset_search();

    return -1000;

  }

  if ( OneWire::crc8( addr, 7) != addr[7]) {

    Serial.println("CRC is not valid!");

    return -1000;

  }

  if ( addr[0] != 0x10 && addr[0] != 0x28) {

    Serial.print("Device is not recognized");

    return -1000;

  }

  ds.reset();

  ds.select(addr);

  ds.write(0x44,1); // start conversion, with parasite power on at the end

  byte present = ds.reset();

  ds.select(addr);

  ds.write(0xBE); // Read Scratchpad

  for (int i = 0; i < 9; i++) { // we need 9 bytes

    data[i] = ds.read();

  }

  ds.reset_search();

  byte MSB = data[1];

  byte LSB = data[0];

  float tempRead = ((MSB << 8) | LSB); // using two's complement

  float TemperatureSum = tempRead / 16;

  // Convert to Fahrenheit

  TemperatureSum = (TemperatureSum * 1.8) + 32;

  return TemperatureSum;

}



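The raw-to-temperature maths in getTemp() can be sanity-checked with a quick Python sketch (the sample scratchpad bytes below are made up for illustration):

```python
# Check the DS18B20 raw-to-temperature conversion used in getTemp():
# raw = (MSB << 8) | LSB, then divide by 16 for degrees Celsius.
MSB, LSB = 0x01, 0x91
raw = (MSB << 8) | LSB        # 0x0191 = 401
celsius = raw / 16            # 401 / 16 = 25.0625
fahrenheit = celsius * 1.8 + 32
print(celsius, fahrenheit)    # 25.0625 77.1125
```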

For the demo we chose to use SMS. The quickest way to implement this was to use Twilio; the following code is about as simple as it gets to send an SMS message. The PHP script was hosted on an external server and called via events from HANA when specific thresholds were crossed in the biometric data.



// Include the PHP Twilio library. You need to download the library from

//, and move it into the folder containing this

// file.

require "Services/Twilio.php";


// Set our AccountSid and AuthToken from

$AccountSid = "abcdefghijklmnopqrstuvw";

$AuthToken = " abcdefghijklmnopqrstuvw ";


// Instantiate a new Twilio Rest Client

    $client = new Services_Twilio($AccountSid, $AuthToken);


/* Your Twilio Number or Outgoing Caller ID */

$from = '+1 xxx-xxx-xxxx';


// make an associative array of server admins. Feel free to change/add your

// own phone number and name here.

    $people = array(

        "xxxxxxxxxx" => "Johnny",

    );

    foreach ($people as $to => $name) {

        // Send a new outgoing SMS

        $body = "Missing data $name";

        if (isset($_GET["text"])) {

            $body = $_GET["text"];

        }

        $client->account->sms_messages->create($from, $to, $body);

        echo "Sent message to $name: $body";

    }




Adidas HRM

The heart rate monitor was simply one that supported Bluetooth notifications, allowing us to get regular readings. The iPhone app subscribed to receive notifications using the CoreBluetooth framework. The following code is an extract from the application. Subscribing and listening to Bluetooth devices requires that the devices are found and connected to, their characteristics determined, and the data read.


#define cAdidasHRMUUID @"0BC904FB-D617-A18E-A6B7-9385378F0A2E"



Request CBCentralManager to scan for heart rate peripherals using service UUID 0x180D


- (void) startCBScan {

    [cm scanForPeripheralsWithServices:[NSArray arrayWithObjects:[CBUUID UUIDWithString:cBiscuitServiceUUID], nil] options:nil];
}




/* Invoked when the central discovers a peripheral while scanning. */


- (void) centralManager:(CBCentralManager *)central didDiscoverPeripheral:(CBPeripheral *)aPeripheral advertisementData:(NSDictionary *)advertisementData RSSI:(NSNumber *)RSSI
{
    /* Make sure we don't pick up a rogue device */
    NSString *cfuuidString = (NSString*)CFBridgingRelease(CFUUIDCreateString(kCFAllocatorDefault, [aPeripheral UUID]));
    if (![knownDevices containsObject:cfuuidString]) {
        return; // not one of our devices
    }

    NSMutableArray *peripherals = [self mutableArrayValueForKey:@"heartRateMonitors"];
    if( ![self.heartRateMonitors containsObject:aPeripheral] ) {
        [peripherals addObject:aPeripheral];
    }

    // Wait until we have all expected peripherals
    if ([peripherals count] == [knownDevices count]) {
        NSMutableArray *uuids = [[NSMutableArray alloc] initWithCapacity:5];
        for (CBPeripheral *per in peripherals) {
            [uuids addObject:per.UUID];
        }
        [cm retrievePeripherals:uuids];
    }
}





/* Invoked when the central manager retrieves the list of known peripherals.
   Automatically connect to the first known peripheral. */


- (void)centralManager:(CBCentralManager *)central didRetrievePeripherals:(NSArray *)peripherals
{
    [self stopScan];

    /* If there are any known devices, automatically connect to them. */
    for (CBPeripheral *per in peripherals) {
        [cm connectPeripheral:per options:[NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES] forKey:CBConnectPeripheralOptionNotifyOnDisconnectionKey]];
    }
}





/* Invoked whenever a connection is successfully created with the peripheral.
   Discover available services on the peripheral. */


- (void) centralManager:(CBCentralManager *)central didConnectPeripheral:(CBPeripheral *)aPeripheral
{
    [aPeripheral setDelegate:self];
    [aPeripheral discoverServices:nil];
}




/* Invoked whenever an existing connection with the peripheral is torn down.
   Reset local variables. */


- (void)centralManager:(CBCentralManager *)central didDisconnectPeripheral:(CBPeripheral *)aPeripheral error:(NSError *)error
{
    if( aPeripheral ) {
        [aPeripheral setDelegate:nil];
        aPeripheral = nil;
    }
}





/* Invoked whenever the central manager fails to create a connection with the peripheral. */


- (void)centralManager:(CBCentralManager *)central didFailToConnectPeripheral:(CBPeripheral *)aPeripheral error:(NSError *)error
{
    [self addLogEntry:[NSString stringWithFormat:@"Fail to connect to peripheral: %@ with error = %@", aPeripheral, [error localizedDescription]]];
    if( aPeripheral ) {
        [aPeripheral setDelegate:nil];
        aPeripheral = nil;
    }
}




#pragma mark - CBPeripheral delegate methods


/* Invoked upon completion of a -[discoverServices:] request.
   Discover available characteristics on interested services. */


- (void) peripheral:(CBPeripheral *)aPeripheral didDiscoverServices:(NSError *)error
{
    for (CBService *aService in aPeripheral.services) {

        /* Heart Rate Service */
        if ([aService.UUID isEqual:[CBUUID UUIDWithString:@"180D"]] || [aService.UUID isEqual:[CBUUID UUIDWithString:@"180d"]]) {
            [aPeripheral discoverCharacteristics:nil forService:aService];
        }

        /* Device Information Service */
        if ([aService.UUID isEqual:[CBUUID UUIDWithString:cBiscuitServiceUUID]]) {
            [aPeripheral discoverCharacteristics:nil forService:aService];
        }
    }
}






/* Invoked upon completion of a -[discoverCharacteristics:forService:] request.
   Perform appropriate operations on interested characteristics. */


- (void) peripheral:(CBPeripheral *)aPeripheral didDiscoverCharacteristicsForService:(CBService *)service error:(NSError *)error
{
    // Temperature measurement
    if ([service.UUID isEqual:[CBUUID UUIDWithString:cBiscuitServiceUUID]]) {
        for (CBCharacteristic *aChar in service.characteristics) {
            /* Set notification on temperature measurement */
            CBUUID *uuid = [CBUUID UUIDWithString:cBiscuitCharacteristic];
            if ([aChar.UUID isEqual:uuid]) {
                [aPeripheral setNotifyValue:YES forCharacteristic:aChar];
            }
        }
    }

    // Heart Rate Service
    if ([service.UUID isEqual:[CBUUID UUIDWithString:@"180D"]]) {
        for (CBCharacteristic *aChar in service.characteristics) {
            /* Set notification on heart rate measurement */
            if ([aChar.UUID isEqual:[CBUUID UUIDWithString:@"2A37"]]) {
                [aPeripheral setNotifyValue:YES forCharacteristic:aChar];
            }

            /* Read body sensor location */
            if ([aChar.UUID isEqual:[CBUUID UUIDWithString:@"2A38"]]) {
                [aPeripheral readValueForCharacteristic:aChar];
            }

            /* Write heart rate control point */
            if ([aChar.UUID isEqual:[CBUUID UUIDWithString:@"2A39"]]) {
                uint8_t val = 1;
                NSData *valData = [NSData dataWithBytes:(void *)&val length:sizeof(val)];
                [aPeripheral writeValue:valData forCharacteristic:aChar type:CBCharacteristicWriteWithResponse];
            }
        }
    }

    if ([service.UUID isEqual:[CBUUID UUIDWithString:CBUUIDGenericAccessProfileString]]) {
        for (CBCharacteristic *aChar in service.characteristics) {
            /* Read device name */
            if ([aChar.UUID isEqual:[CBUUID UUIDWithString:CBUUIDDeviceNameString]]) {
                [aPeripheral readValueForCharacteristic:aChar];
            }
        }
    }

    // Device Information Service
    if ([service.UUID isEqual:[CBUUID UUIDWithString:@"180A"]]) {
        for (CBCharacteristic *aChar in service.characteristics) {
            /* Read manufacturer name */
            if ([aChar.UUID isEqual:[CBUUID UUIDWithString:@"2A29"]]) {
                [aPeripheral readValueForCharacteristic:aChar];
            }
        }
    }
}







/* Invoked upon completion of a -[readValueForCharacteristic:] request or on the reception of a notification/indication. */


- (void) peripheral:(CBPeripheral *)aPeripheral didUpdateValueForCharacteristic:(CBCharacteristic *)characteristic error:(NSError *)error
{
    /* Updated value for heart rate measurement received */
    if ([characteristic.UUID isEqual:[CBUUID UUIDWithString:@"2A37"]]) {
        if( (characteristic.value) || !error ) {
            /* Update UI with heart rate data */
            [self updateWithHRMData:characteristic.value];
        }
    }

    /* Updated value for temperature measurement received */
    if ([characteristic.UUID isEqual:[CBUUID UUIDWithString:cBiscuitCharacteristic]]) {
        if( (characteristic.value) || !error ) {
            /* Update UI with temperature data */
            NSString *temp = [[NSString alloc] initWithData:characteristic.value encoding:NSUTF8StringEncoding];
            temperature = [temp doubleValue];
            self.tempLabel.text = temp;
        }
    }

    /* Value for device name received */
    else if ([characteristic.UUID isEqual:[CBUUID UUIDWithString:CBUUIDDeviceNameString]]) {
        NSString *deviceName = [[NSString alloc] initWithData:characteristic.value encoding:NSUTF8StringEncoding];
    }

    /* Value for manufacturer name received */
    else if ([characteristic.UUID isEqual:[CBUUID UUIDWithString:@"2A29"]]) {
        self.manufacturer = [[NSString alloc] initWithData:characteristic.value encoding:NSUTF8StringEncoding];
    }
}
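The updateWithHRMData: helper called above is not shown in the extract. For reference, the standard Heart Rate Measurement characteristic (0x2A37) begins with a flags byte whose lowest bit selects an 8-bit or 16-bit little-endian heart-rate value; a Python sketch of that decoding (the function name is ours):

```python
def parse_heart_rate(payload: bytes) -> int:
    """Decode a GATT Heart Rate Measurement (0x2A37) payload to beats per minute."""
    flags = payload[0]
    if flags & 0x01:
        # HR value format bit set: 16-bit little-endian value follows the flags
        return payload[1] | (payload[2] << 8)
    # otherwise a single 8-bit value follows the flags
    return payload[1]
```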



Lessons Learned

While everything works in development, the real world is a less forgiving place. I had spent the evenings (after midnight) running the streets so that my teammate Clint Vosloo could fine-tune the HANA scripts; time-zone differences between the US East Coast and South Africa were coming into play here. After midnight there are few people running around wearing fitness devices that could interfere, but the same was not true at TechEd. Sitting in the Monday evening keynote I noticed quirks with the app: it would not always connect to my devices. Pulling out the LightBlue Bluetooth testing app revealed our issue. There were over 100 wearables in the audience, including other HRMs, and my code was picking them all up; I had not added the specific device registration I needed. Fortunately we found this the night before the demo and not during it.


If you ever plan to have a demo that is tuned to your heart rate, make sure you take into account your own stage presence. During our many hours of tuning (I ran a lot) we collected plenty of profile data on my heart rate and tuned the alerts to the expected profile. What we had not accounted for was that my heart rate was already over 20 bpm above my usual resting rate when we started. Nervous on stage, perhaps? This meant the system was detecting that I was working much harder than I physically was. While the metric was still correct in that my HR was in an abnormal zone, it definitely had an effect on the timing of the demo.


Many thanks to Greg Myers, Clint Vosloo, and Eric Vallo for teaming up with me; it was a fun time.



I have posted an idea on SAP Idea Incubator about tracking containers through geo mapping.


The basic concept of this idea: most businesses have their own products, and these products move across the world based on customer requirements.


By tracking these containers on a geo map, a user or administration department can stay up to date on any shipment.

Different kinds of analysis can also be done on any one container or shipment:



1. Information about the shipment

2. How frequently the customer orders the product

3. The current status of the shipment, how much time it will take to arrive, and other shipment-related reports


Please help me with your views; your suggestions and feedback would be very valuable.


In addition, I have provided more information about the idea below as a comment. Why a comment? Because when I try to edit the idea, the changes are not reflected on the main idea page.


Please find below more information on how the idea could be useful for a business; there could be a lot more...


As per my attached sample picture:


1. A container moves from Bankura to Kolkata. Once all basic processing is complete at the Bankura plant, the system generates a road map based on the shortest road route for the vehicle type.


2. The same route is provided to the vehicle driver.


3. The driver starts from the plant toward the destination point.


4. A particular user can track the route of the container.


5. Based on user interaction with a particular container, the system stores information such as:


       1. How many times users have inquired about the container

       2. Inquiry time and date

       3. Inquiry place/location


6. While inquiring about a particular container, other users can check container information such as:


       1. Container type / vehicle type

       2. Route information

       3. Goods information

       4. Destination information

       5. Driver information

       6. Forecast of the time to reach the destination

       7. How frequently containers move on the same route

       8. The order type and how frequently orders are placed from the destination point

       9. Other information based on billing, orders, plant, etc.







Note: these are issues faced during an ECC-SLT-HANA scenario. In my scenario each system is standalone, with connectivity between the three systems.

I have discussed only the issues we encountered on the ECC side, which are highly important.

1. The first and primary note is the DB user name; it should not be the standard user “SYSTEM”. SAP strongly recommends not using “SYSTEM” for non-administrative tasks, for instance to replicate data to a specific schema and read it from ECC with HANA as a secondary database.


For instance, when creating connectivity between ECC and HANA (with HANA as a secondary database), we should not use SYSTEM as the user name, as shown below.


This screenshot was captured from the "DBCO" transaction code in the ECC system.



2. The user name and schema name should be the same when working with HANA as a secondary database.


For instance, if the HANA DB schema name is “ECC_HANA_800”, then the DB user name on the ECC side should also be “ECC_HANA_800”. When HANA transactions are executed on the ECC side, they search for tables, views, and corresponding HANA objects in the schema whose name matches the DB user name on the ECC side, i.e. ECC_HANA_800.


Please create the technical user name separately in the ECC system, identical to the schema name, as shown below.


This screenshot was captured from the "DBCOCKPIT" transaction code in the ECC system.


Please find the screenshot below for reference on the HANA DB. The schema name is “ECC_HANA_800”, which matches the connection user name in the ECC system.



3. We have to configure the connection information as per the instructions below.


For instance, please find the configuration details below.

HANA Server URL:

Port Number: 30015

Connection information would be


This screenshot was captured from the "DBCOCKPIT" transaction code in the ECC system.


4. We need authorization to read the tables, views, and other objects from the HANA schema when executing HANA-related transactions in the ECC system via secondary-database access to the HANA DB.


For instance, I am executing transaction code “FBL3H” from ECC, which runs on the secondary database on HANA. During execution, FBL3H reads its data from the HANA secondary database.

If the user has no authorization, the transaction ends in a short dump in the ECC system.



Thank you very much for reading my blog. I have discussed only the ECC system issues.

I will publish my next blog on SLT and HANA system issues for this kind of scenario.



Ravi K Chandragiri

In this blog I explain how to convert an existing HANA database into a multitenant database. But before we start, a short introduction to multitenant databases.


In the past you had the following choices when installing an SAP HANA database:


  • One SAP HANA DBMS, one database, one application, one schema.

  • One SAP HANA DBMS, one database, several applications, several schemas (MCOD).

  • More than one SAP HANA DBMS (one DB in each), 1-n applications, 1-n schemas (MCOS).

  • SAP HANA virtualized.


Multitenancy refers to a principle in software architecture where a single instance of the software runs on a server, serving multiple tenants. A tenant is a group of users sharing the same view on the software they use. With a multitenant architecture, a software application is designed to provide every tenant a dedicated share of the instance, including its data, configuration, user management, tenant-specific functionality, and non-functional properties. Multitenancy contrasts with multi-instance architectures, where separate software instances operate on behalf of different tenants.

A single database container is called a tenant database; you can run multiple tenant databases on one SAP HANA system while still having only one software version for the SAP HANA system. Some advantages are:


  • strong separation of data and users
  • backup and restore per tenant database
  • resource management per tenant (CPU, memory)


When installing a multitenant database you have two options:


  • Start from scratch with a new SPS09 installation; during the installation you get the option to install a single-container or a multi-container database node.


  • Convert an existing single-container system to a multitenant database; please note that the change is permanent and cannot be reversed.


Since an installation from scratch is basically no different from the past, with the exception of the screenshot shown above, I will focus on converting an existing database in this blog. Future blogs will describe the configuration and maintenance of a multitenant database, which are the same for a new installation and a converted system.





Before you can actually convert a database you have to meet the following prerequisites:


  • The statistics server has been migrated or removed.
  • Your HANA version is SPS09 or newer.



Migrate Statistics Server


As of SPS07 you can migrate the statistics server from a separate process into the nameserver process. If you have not done this before, look at SAP Notes 1917938 and 1925684 before executing the steps below. Only a database installed on SPS09 with the "multiple_container" option enabled gets the new statistics server by default, so you will still be running the old statistics server unless you already migrated it manually.


First check whether you are still using the old statistics server. The easiest way is from the operating system, using sapcontrol to check the processes (sapcontrol -nr <instance number> -function GetProcessList). If you see a separate process for the statistics server (as shown below), you are still working with the old statistics server and need to migrate it first.


Migrating the statistics server is a simple process: open the HANA Studio, go to the configuration, and set "nameserver.ini -> statisticsserver -> active=true".



After some minutes, you will see that the statistics server is shutting down and eventually it will be removed from the list.


Tip: prefix the sapcontrol command with "watch -n 1" so you don't have to repeat it manually each time to refresh.


Finally, run the SQL statement "SELECT * FROM _SYS_STATISTICS.STATISTICS_PROPERTIES WHERE key = 'internal.installation.state'" to check that the migration went fine.



Check for the correct HANA version


The second prerequisite to check is the HANA version. You most probably already know which version you run, but you can also check it with the steps below (as provided by the admin guide):


First stop the database with "HDB stop", then execute "hdbnsutil -exportTopology <file_name>". If you run the command while the database is still running you will get an error message:


checking for inactive nameserver ...

nameserver phyhandb4-prd:30001 is still active. cannot open its persistence failed


Now open the exported file in the vi editor, look for the line "topologyVersion", and check that the value is 15 or higher.
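Checking the exported file can also be scripted. A Python sketch follows, assuming the export is plain text containing a line like topologyVersion=15 (the exact file format may differ; the function name is ours):

```python
import re

def topology_version_ok(export_text: str, minimum: int = 15) -> bool:
    """Return True if the exported topology reports a version >= minimum."""
    match = re.search(r"topologyVersion\D*(\d+)", export_text)
    if match is None:
        raise ValueError("no topologyVersion entry found in export")
    return int(match.group(1)) >= minimum
```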

Convert Database

Now that the prerequisites are met we can convert the database to a multitenant database. In our test we used an empty database; to show that existing content is indeed still available after the conversion, we created an empty schema called “CONTENT-MIGRATION” with the SQL command: “create schema "CONTENT-MIGRATION" OWNED BY SYSTEM;”


Before you can convert the database you have to stop it with “HDB stop”, then run “hdbnsutil -convertToMultiDB” to do the actual conversion.


This takes only a few seconds (for an empty database), after which the database is converted. This action executes the following steps:

  • Sets the “multidb mode” flag in the configuration
  • Creates the system and tenant databases
  • Updates the secure store in the file system (SSFS)

The newly created database keeps the original data and the original port configuration, but the initial "HDB start" command starts only the system database. You therefore have to start the tenant database once with an SQL statement; after that it starts automatically with "HDB start".

However, when you try to connect to this new SystemDB database with hdbsql, it will fail with the error:

* 10: invalid username or password SQLSTATE: 28000

This is because you need to reset the password of the SYSTEM user for the SystemDB database. Make sure the database is still stopped (otherwise stop it with "HDB stop") and reset the password of the user “SYSTEM” with the command “hdbnameserver -resetUserSystem”. When asked, type a new password and press enter.




Once the SYSTEM password has been reset, start the database with “HDB start”, which starts only the system database. You can verify this with the sapcontrol command: it does not yet show an additional index server for the tenant database.


Now connect to the database with the hdbsql command “hdbsql -i <instance number> -n <hostname>:3xx13 -d SystemDB -u SYSTEM”, after which the system asks for the password. You are connected when the prompt shows “hdbsql SystemDB=>”.


Now start the tenant database with the SQL command “ALTER SYSTEM START DATABASE <SID>”. When you do this the first time, the system complains that the user is forced to change its password:

* 414: user is forced to change password: alter password required for user SYSTEM SQLSTATE: HY000

To solve this, run the SQL command “ALTER USER SYSTEM PASSWORD <new_password>”. Then run “ALTER SYSTEM START DATABASE <SID>” again and it should work (it takes a few seconds before you get your prompt back).


After doing this you can exit hdbsql and check that the tenant database is running with “sapcontrol -nr <instance> -function GetProcessList”; it should now show an additional XS engine and index server instance (e.g. indexserver-TST and xsengine-TST, where our SID is TST).
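The hdbsql invocation used above follows a fixed pattern: the system database's SQL port is 3<instance>13. A small Python sketch that assembles the command line (the function name and example host are ours, for illustration):

```python
def hdbsql_command(instance: str, host: str, database: str, user: str) -> list:
    """Assemble the hdbsql call used above; SystemDB listens on port 3<instance>13."""
    port = "3" + instance + "13"
    return ["hdbsql", "-i", instance, "-n", host + ":" + port,
            "-d", database, "-u", user]
```

For example, hdbsql_command("00", "hanahost", "SystemDB", "SYSTEM") yields the command hdbsql -i 00 -n hanahost:30013 -d SystemDB -u SYSTEM.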



Add a Multitenant Database to the SAP HANA Studio


In order to add a multitenant database to the HANA Studio, you first have to install HANA Studio version 2.0.7. After that, add a system as usual (right-click and choose the option “Add System”).




When adding a “normal” system with the new Studio you choose the option “Single Container”, but when adding a multitenant database you choose “Multiple Containers”. Each multitenant database has one system database and one or more tenant databases. First we will add the system database to the Studio.

The SYSTEM user password of the SystemDB database is the one you reset in the previous steps.



As you can see, the SystemDB database looks like a normal database, but it is only used to maintain the tenant databases, not to hold corporate data. As shown above, the SystemDB does not contain the schema “CONTENT-MIGRATION” which we created earlier.


When you install a new SPS09 database from scratch with the option “Multiple_Containers”, you will have only a SystemDB after the installation and must add the tenant databases manually afterwards.



After the SystemDB is added to the Studio you can also add the tenant database. The process is the same, but now you choose the option “Tenant database container” and provide the tenant database name. The SYSTEM user password of this database is the same as that of the original database before converting it to a multitenant database.


As you can see, this tenant database does contain the schema “CONTENT-MIGRATION” which we created before converting to a multitenant database.


You have now converted a single database to a multitenant database. In future blogs I will write more about adding and dropping tenant databases and provide more information on configuring a multitenant database (e.g. XS engine access).

