My idea is to send real-time messages to passengers' mobile phones with the details of long-distance trains. Passengers should have real-time information from the time they board until the end of their journey. This helps them look for an alternative if the train has been cancelled. Suppose the train is at station W and there is a track problem that takes two hours to repair, and a passenger has to board at Y to attempt an exam. If he has no idea about the train and waits for it, he loses his exam. But if he learns that the train will only leave W after two hours, he can take an alternate route to his destination and avoid last-minute chaos. Having experienced this situation myself, I would suggest exploring whether SAP HANA cloud computing can solve this problem.

Note: This blog is not up to date, as new functionality has been introduced in later HANA revisions.

The example I am going to describe was actually created about two years ago, when Text Analysis was first introduced to the HANA platform. However, I think it is still a good example to demonstrate how simple the text analysis feature is and how to program against HANA in Java, so I'd like to share it with you. I'm from the Startup Focus team; if you are a startup interested in developing on HANA, visit here for more information.


Register an Application at Twitter Developers


As we are going to use the Twitter API to extract data from Twitter, we need to create an application at Twitter Developers; we will need the application's authentication information to invoke the APIs later.

In case you haven’t used Twitter before, you need to create your Twitter account first. You can register an application and create your OAuth tokens at the Twitter developer site: log on with your Twitter account, click your profile picture and click “My applications”.





Click the button “Create a new application”.




Follow the form instructions to complete the registration. You need to enter the application name, description and your website, and leave the callback URL blank. Accept the developer rules and click the button “Create your Twitter application”.



After that, you will be able to see the OAuth settings as shown below. Save the values of Consumer key, Consumer secret, Access token and Access token secret; we will need them later when calling the APIs.




Download Twitter API Java library – Twitter4J


Twitter4J is an unofficial open source Java library for the Twitter API. With Twitter4J, you can easily integrate your Java application with the Twitter services. The link to download it is


Extract the downloaded zip file and go to the subfolder lib; there you will find the file twitter4j-core-3.0.3.jar, which is the library we need in the Java project. It must be added to the library or class path of the Java runtime.
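For example, when compiling and running the project from the command line, the jar could be put on the classpath like this (the paths and the entry class Main are illustrative placeholders; adjust them to your project layout):

```sh
# Compile and run with the Twitter4J jar (and, later, the HANA JDBC driver)
# on the classpath. "Main" stands for the project's actual entry class.
javac -cp twitter4j-core-3.0.3.jar:ngdbc.jar Main.java
java  -cp twitter4j-core-3.0.3.jar:ngdbc.jar:. Main
```

On Windows, use a semicolon instead of a colon as the classpath separator.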



There are some useful examples included, and you can check them to get familiar with the Twitter APIs.


Prepare the HANA jdbc library


In order to access SAP HANA from Java, we need the JDBC library, which you can find at C:\Program Files\SAP\hdbclient\ngdbc.jar on Windows and /usr/sap/hdbclient/ngdbc.jar on Linux in a default installation.




Now we are ready to go. By the end of this blog, we will understand the source code of the project: how to connect to HANA from Java, how to use the Twitter services in Java and, most impressively, how simple it is to run text analysis in HANA, which combines unstructured data from various sources like Twitter or documents with the structured data in the RDBMS.

Import the Java Project in Eclipse

To save your time, I will upload the project here later so you can import the existing Java project instead of starting from scratch. Do not worry, we will explain all the components of the project in detail below. Open your HANA Studio and follow the steps below:

1. In the File menu, choose Import....

2. Select the import source General > Existing Projects into Workspace and choose Next. You should have created the workspace in the XS exercise. Otherwise, you may need to have your workspace created first.

3. Select the root directory where your project files are located, select the project TwitterAnalysis and click Finish to complete the import.

The project structure looks like this:



Understand the Java Project

The following table lists the major files in the project; we will explain them in detail later in the exercise.











  • Builds the connection to the Twitter services

  • Builds the JDBC connection to HANA

  • The public interface for the network and Twitter authentication configurations; override it with your own account and settings

  • The Java bean class for the tweet objects

  • The data access object

  • SAP HANA JDBC library

  • Twitter4J library for the Twitter services in Java

  • The SQL statement to create the column table in HANA

  • The SQL statement to create the fulltext index for text analysis

  • The file that describes the steps to execute the project


Create a column table in HANA

First, we need to create a table in HANA to store the tweets we fetch from the Twitter services.

1. Open HANA Studio, copy the SQL statement from the CreateTable.sql and execute it in the SQL Console. You need to replace the current schema with your own schema.





2. Expand the Catalog folder in HANA studio, you should find the table TWEETS in your schema and the definition of the table is like:
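If you cannot view the screenshot, a minimal version of the statement in CreateTable.sql might look like the following (the column names here are illustrative assumptions; use the ones shipped with the project):

```sql
-- Sketch of CreateTable.sql: a column table for the fetched tweets.
-- Replace the schema and compare the columns with the project's script.
SET SCHEMA "YOUR_SCHEMA";

CREATE COLUMN TABLE "TWEETS" (
    "ID"         BIGINT PRIMARY KEY,  -- the tweet id returned by Twitter
    "CREATED_AT" TIMESTAMP,           -- when the tweet was posted
    "FROM_USER"  NVARCHAR(100),       -- screen name of the author
    "CONTENT"    NVARCHAR(500)        -- the tweet text we will analyze
);
```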



Update the configurations

To keep the configurations easy to maintain, we put all the required information into a single interface; it is mandatory that you update it with your own account and settings before you can connect to either HANA or Twitter.

1. Open the file in your project. Basically, there are four categories of settings you can override:

  • Network Proxy Settings: the proxy host and port; set HAS_PROXY to false if you do not need to use a proxy

  • HANA Connection Settings: replace the HANA URL with your own HANA host and port, user, password and the schema where you created your table

  • Twitter Authentication Settings: replace with your own authentication information from your Twitter application, as described in the prerequisites

  • Search Term: we will search Twitter based on the search term “startup”, because we want to know what people are saying about startups on Twitter. You can always replace it with your own term if you are interested in other topics




Test Connection to Twitter

Once the Twitter authentication is maintained correctly from the previous step, you can open and run it. You will see the message “Connection to Twitter Successfully!” followed by your Twitter user ID in the console, as the screenshot below shows.




Test Connection to SAP HANA


Now let us open the file and run it. You will see the message “Connection to HANA Successfully!” in the console, as the screenshot below shows. Check your connection settings if you encounter any issue.



The data access object TweetDAO is the single point of communication with HANA from Java; take a look at the source code and you will see the SQL statements and how the JDBC library is used.



Invoke Twitter API and save the tweets into HANA

Now it’s time to do the real stuff. Open the file and run it; it will search for tweets based on the search term we specified, and everything we get will be saved to the HANA table. You will see messages in the console indicating that the tweets have been inserted into HANA successfully, as the screenshot shows:




After that, you can run the data preview in HANA studio and see the contents of the table TWEETS in your schema like this:




Run text analysis in HANA


Now we have the tweets stored in the HANA table. In the next step, we are going to run the text analysis to see what people are saying about “startup” on Twitter.

To run the text analysis, the only thing we need to do is create a fulltext index on the column of the table we want to analyze. HANA will then perform the linguistic analysis, entity extraction and stemming for us and save the results in a generated table $TA_YOUR_INDEX_NAME in the same schema. After that, you can build views on top of this table and leverage all the existing analysis tools around HANA for visualization and even predictive analysis.

1. Copy the SQL statement from the CreateFullTextIndex.sql and execute it in SQL console:

-- Replace the Scheme with your own Schema! --
SET SCHEMA "I045664";
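The statement shown above only sets the schema; the index-creation statement in CreateFullTextIndex.sql will look roughly like this (the column name is an assumption; the index name TWEETS_FTI matches the generated table $TA_TWEETS_FTI mentioned below):

```sql
-- Create a fulltext index with text analysis enabled; HANA then fills
-- the generated table $TA_TWEETS_FTI in the same schema.
CREATE FULLTEXT INDEX "TWEETS_FTI" ON "TWEETS"("CONTENT")
    TEXT ANALYSIS ON
    CONFIGURATION 'EXTRACTION_CORE';
```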


2. Do you believe the text analysis is already done by HANA? Yes, it is. Now you know how simple it is! You will be able to find a generated table $TA_TWEETS_FTI in your schema. The structure of the table looks like this, which is the standardized format for the results of text analysis:




3. And here is the data preview of the $TA table; you will see the tokens extracted from the tweets, the number of occurrences and the entity type of each token.




4. Based on this, you can use the knowledge you learned in the previous modelling exercises and use the table to build a view if you want. Here, we just go to the Analysis tab and build a tag cloud like this:
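If you prefer SQL over the Analysis tab, a simple aggregation over the generated table gives you the raw data for such a tag cloud (TA_TOKEN and TA_TYPE are standard columns of $TA tables; set your own schema first):

```sql
-- Most frequent tokens and their entity types, e.g. as input for a tag cloud.
SELECT "TA_TOKEN", "TA_TYPE", COUNT(*) AS "OCCURRENCES"
FROM "$TA_TWEETS_FTI"
GROUP BY "TA_TOKEN", "TA_TYPE"
ORDER BY "OCCURRENCES" DESC;
```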






I would find it nice if, for example, all sales to customers could be displayed on a Google map to see where most of a company's customers are located. This could improve marketing campaigns and other communication to customers.

Visualization of Customer Purchases on a Map : View Idea

Kind regards,


Hana Smart Data Integration - Overview


Another rather common scenario is to retain the history of the data throughout the loads. In short, build a slow changing dimension table of type 2.

The idea is simple: We get changed rows from the source and whenever something interesting changed, we do not want to overwrite the old record but create a new version. As a result you can see the entire history of the record, how it was created initially, all the subsequent changes and maybe that it got deleted at the end.



The dataflow

I want to use the RSS feed as a realtime source again, but instead of simply updating each post, I want to see if corrections were made to the title. Granted, other sources would make more sense. More common would be to keep the history of a customer in order to correctly assign a sales order for this customer to the country the customer was living in at the time the sales order was created.

But since the principle is the same....


The flow is similar to the previous one, all source rows are compared with the target table, but then the result of the comparison is sent to the History Preserving transform and then loaded. (This time I am using the WebIDE, hence the screenshot looks different. But you can open and edit a .hdbflowgraph file with either one, even switch.)




In such a Slow Changing Dimension there are many options you can choose from, for example:

  1. For a new record, what should its valid-from date be? You could argue that now() is the proper value, but maybe the source has a create timestamp for the record and this should be used instead?
  2. Does the source have a field we do not want to compare? Imagine, for example, we get the very same record a second time and all that changed is the publish date, because the news article was re-published. If that column was the only one that changed, we do not even want to update the target row.
  3. Should the target table have a valid-from & valid-to date and/or a current indicator? With the former we can see that version 1 of an entry was valid from the create date to 11:05, a second version from 11:05 to 14:00. And with the current indicator we have a quick way to find the latest version of each record set.
  4. When do you want to create a new version, and when just update the latest one? For example, if the news title changed we want to create history for it; when the last_change_date is the only change, simply update the latest version - no need to create a new version just because of that.
  5. And then there are technical settings we need to provide: what is the primary key of the source, what is the generated key, and things like that.
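To make these options concrete, a Type 2 target table for the RSS feed example could look like this (a sketch; the column names are my own choice, not generated by the designer):

```sql
-- Slow Changing Dimension Type 2 target: one row per version of a feed entry.
CREATE COLUMN TABLE "RSSFEED_HISTORY" (
    "SURROGATE_KEY"     BIGINT PRIMARY KEY,  -- generated per version via a sequence
    "URI"               NVARCHAR(500),       -- logical primary key of the source
    "TITLE"             NVARCHAR(500),
    "DESCRIPTION"       NVARCHAR(2000),
    "VALID_FROM"        TIMESTAMP,           -- when this version became current
    "VALID_TO"          TIMESTAMP,           -- closed when a newer version arrives
    "CURRENT_INDICATOR" NVARCHAR(1)          -- 'Y' for the latest version only
);
```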

Putting all of this into a single transform would make it very hard to understand. Instead the transforms work together in unison.


Preparing the data - Filter Transform


(1) In the Filter transform the source data is prepared and aligned to match the target table. In this case the source and target tables have the same columns, in name and datatype, only the target table has four additional fields: The surrogate key as the target will have multiple rows for one source record over the time; the valid from/to date and the current indicator column.

None of these columns are needed in the Filter transform as they will be dealt with in the Table Comparison and History Preserving transform. The sole exception is the VALID_FROM column, the transforms need to know the valid from value and that is set in this Filter transform. Hence I have added one more column and mapped it to "now()".




Comparing the data - Table Comparison Transform


(2) In the Table Comparison transform we deal with the question of what we want to compare. In this case we have columns like the URI - the primary key of the RSSFEED table, a title, a description and more.

The output of the Table Comparison transform is the structure of the selected compare(!) table, not the input structure. How else would we know, for example, the SURROGATE_KEY of the row to update?



The first and most important settings are the columns to compare and the definition of the logical primary key.

Let's consider the column URI, the primary key of the source. This column should not only be compared, it is the key column for the comparison. In other words, the transform should execute something like "select * from compare_table where URI = Input.URI". Hence the URI column is not only added to the Compare Attributes but also marked as Primary Key=True.

All other columns are listed as Compare Attributes as well, hence the transform will compare the result of above "select * ..." column by column in order to find a change in the values. If all column values are identical, the transform will discard the row - no need to update a row that is current already.

The interesting part is what happens when a column is not part of the Compare Attributes list. It is not compared, but what does that mean in practice? Imagine the UPDATEDATE is not part of the list. If the transform finds that the TITLE changed, it will output the row. When the DESCRIPTION changed, it will send the row. But if all values are the same and only the UPDATEDATE column has a different value, the transform considers the row unchanged. Maybe somebody opened the row and saved it again without doing anything - all values are the same, only the UPDATEDATE column is different.



For the above to work, a few obvious rules apply to the TC transform:

  • It is fine to have fewer input columns than the compare table has. These extra columns are assumed to have not changed, hence the transform will output the current value of the compare/target table so that an update will not modify the target table column value.
  • It is not allowed to have input columns that do not exist in the compare/target table. The TC transform compares the columns by name: it compares the URI input column with the URI column of the compare table, AUTHOR with AUTHOR. If the source had a column XYZ, the transform would not know what to do with it. Hence you will often find a Filter transform upstream of the TC transform to prepare names and datatypes if needed.
  • The input should have a primary key which truly is unique, else the transform does not know what to do. The transform performs an outer join of the incoming data with the compare table, and all rows where no match is found in the compare table are marked as insert. If the input has two rows with the same key, you will end up with two insert rows, and either this results in a unique constraint violation in the target or you end up with two rows for the same logical primary key in the target.
  • The target has to have the input primary key column as well, else the above join does not work. This target column does not need to be the primary key, it does not even need to be unique. In that case, however, you have to provide a hint as to which row of the matching data set should be used - see below.


The above rules sound complex at first sight, but actually all of them are quite natural and what the user would do anyhow. It helps, though, to understand the logic in case of an error.
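Conceptually, the per-row decision the Table Comparison transform makes can be sketched in SQL like this (table and column names are illustrative; the real transform performs this lookup internally):

```sql
-- For each incoming row, look up the matching target row by the logical key:
SELECT * FROM "RSSFEED_TARGET" WHERE "URI" = :input_uri;

-- No row found                              -> output the input row as INSERT
-- Row found, any compare attribute differs  -> output the row as UPDATE
-- Row found, all compare attributes equal   -> discard the row
```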


In our use case we have the problem that one URI returns potentially multiple matching rows from the compare table, all the various past versions. We need to specify which row to compare with.

We have two options for that, both are in the first tab. Either we filter the compare table or we specify a generated-key column.

The latter builds on the assumption that a surrogate key is always increasing; hence by specifying one, we tell the transform to compare with the highest value, that is the row that was inserted most recently.


The filter option would be to make the compare table appear as having the latest record only, e.g. by adding the filter condition CURRENT_INDICATOR='Y'. Assuming there is only one current record per URI, that would work as well, except for deletes. Deleted rows are not current; hence for an incoming row the transform would believe no such record was ever loaded before and mark it as a brand-new insert. So be careful when choosing this option.




Creating new versions, updating the old - the History Preserving Transform


The History Preserving transform gets all the information it needs from the Table Comparison transform, that is the new values from the input and the current values for all columns from the compare table, and uses those to produce the output data.

In the simplest case, that is when neither a valid-from/to date nor a current indicator is used, all the transform does is compare the new values with the current values for all columns listed in the Compare Attributes. If one or more differ, it outputs an insert row with the new values and the table loader inserts it. If all these columns are the same and the input was an update row, it sends an update row. Insert rows are passed through as inserts.

If the input is a delete row, either a delete row is sent or in case the checkbox at the bottom called "Update Attribute on Deletion" is checked, an update is sent.

In case a valid-from/to column is used and/or a current flag, then the transform has to create a second row of type update to modify the current row in the target table. From the TC transform it knows the surrogate key of the compare row, it knows all the current values, hence it can update the row to the same values except for the valid-to-date and current-indicator, these should be changed to the new version's valid from date and the current indicator from 'Y' to 'N'.

Same thing happens for delete rows in case the "Update Attribute on Deletion" is set.




Generating the surrogate key - the Table Loader

The table loader should be a regular opcode writer, meaning its Writer Type option is left at its default.

In the Sequence tab the surrogate key column is selected and the name of the Hana sequence to use is specified. This sequence has to exist already.

All rows that are inserted will get a new value in the surrogate key column, regardless of the input, but for update/delete rows the surrogate key from the TC transform is used. That is the reason why the HP transform can output two rows: an insert for the new version and an update to alter the latest version currently in the target table. Both insert and update will have a surrogate key, e.g. SURROGATE_KEY=5, and therefore the update statement will look like "update table set current_ind='N', valid_to_date=... where SURROGATE_KEY=5". But for the insert row, the 5 will be replaced by the sequence's next value.
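As a sketch, the sequence and the pair of statements the loader effectively issues for a changed record might look like this (all object names are illustrative):

```sql
-- The Hana sequence has to exist before the task runs.
CREATE SEQUENCE "RSSFEED_SEQ" START WITH 1;

-- Update row from the HP transform: close the latest version
-- (its surrogate key is known from the TC transform).
UPDATE "RSSFEED_HISTORY"
   SET "CURRENT_INDICATOR" = 'N', "VALID_TO" = CURRENT_TIMESTAMP
 WHERE "SURROGATE_KEY" = 5;

-- Insert row: the loader replaces the surrogate key with the sequence's next value.
INSERT INTO "RSSFEED_HISTORY"
SELECT "RSSFEED_SEQ".NEXTVAL, 'http://...', 'new title', 'new text',
       CURRENT_TIMESTAMP, NULL, 'Y'
FROM DUMMY;
```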





A typical transformation is to retain history so we can query the data as if the query had been run back then, something that is not possible with the source tables themselves - they do not contain history. The Hana time travel feature would be too expensive and too limited as a generic method. Hence we need a transformation where we can define precisely what should trigger a history record and what should just be updated.

All of these settings can be made in the individual transforms, and together they allow building all variants of Slow Changing Dimensions Type 2.

I would like to discuss the SAP HANA DB RapidMart deployment solution in 1.x.

If you are a developer just getting started with SAP Fiori, SAPUI5, HANA as well as HTML, CSS, and JavaScript, you’re likely often facing a whirlwind of excitement and frustration as you begin to develop your very first apps.


Whether you’re a seasoned ABAP veteran or someone entirely new to software development, it’s easy to “miss the boat” with regard to really understanding best practices in HTML and CSS. It’s so easy to get caught up in web services, OData, JavaScript/jQuery, data visualization, HANA, AWS and so many other things that you may not be entirely sure you are starting your learning and development in the right place. Take my advice: if you don’t have HTML and CSS knowledge, or if it is pretty rusty, work on that before taking on your first SAPUI5/Fiori/HANA app.


There are numerous online courses and tutorials available on learning HTML and CSS, including some very good free ones. But let’s admit something, when you’re taking a course online, there’s the whole rest of the internet there to distract you. Sometimes, it is good to take a break from our web browsers, code editors, and everything else we’re multitasking on and focus on studying the essentials of what we need to know. That’s where this book comes in handy.


Duckett has put together a book on HTML & CSS that truly is in a class of its own. Many books on technology topics like this are filled with countless lines of lifeless text. In contrast, Duckett’s “HTML & CSS” is distinctly colorful, visually engaging, and succinct in describing how HTML and CSS work to structure text, buttons, shapes and everything else that makes up the user interface of a web page. And if nothing else, this is a beautiful book worthy of your coffee table at home or in the office. The presentation is artfully crafted to show you the beauty and simplicity of HTML and CSS’s features. Nothing more, nothing less.


Speaking from my own experience developing web applications integrated with SAP, there is a certain “zen” you get from your work once you really understand HTML and CSS well. Creating state-of-the-art user experiences no longer seems such an arduous and monolithic task to accomplish. And as a developer working with cutting-edge technology, you have plenty of monolithic tasks to accomplish! This book is an incredible resource for everyone looking to improve their skills with HTML and CSS. Jon Duckett’s “HTML & CSS” earns my highest recommendation. 5/5 stars.


Non-affiliate link to the book on Amazon

This post is part of an entire series

Hana Smart Data Integration - Overview


The classic way to work with external data in Hana is to replicate the source tables to Hana and then transform the data in the user queries, e.g. by adding a calculation view on top. But does it make sense to apply the same transformations to the same rows every time a user queries the data? It would make more sense to apply the transformations only once to each row and make the calculation view very simple. In other words, the target of the realtime subscription receiving the changes should be a transformation task, not a table like in the Twitter example before.


And actually, that is quite simple utilizing the Hana Smart Data Integration feature. To be more precise, it is two checkboxes....



The batch dataflow

In the previous example of a batch dataflow we read from two database tables, joined them together and loaded the result into a target table. For a change, the source this time should be an RSS feed from

So I created an RSS feed adapter, created a remote source pointing to the URL and, by creating a virtual table in this remote source, we can get all CNN news from this feed - the 50 most recent entries.

The primary key of a news article is its URL, hence this column is marked as the PK of the source table, and the target table should have the same structure. To avoid a primary key violation when running the dataflow a second time, a Table Comparison transform compares the incoming data with the data already loaded: it inserts the new rows, updates changed rows and discards everything that was loaded already.





The realtime dataflow

Executing that dataflow frequently would be one option, but the RSS adapter was actually built to support realtime subscriptions and has optimizations built in; for example, it first asks in the HTTP header what the last change date of the page was. Therefore it is better to let the adapter push changes to Hana.

To accomplish that, all we have to do is check the realtime boxes in the above dataflow.



There are two of them, one is on container level (above screenshot) and the second is a property of the source table.

On table level the realtime box has to be set in case there are multiple source tables and only some of them should be read in realtime, e.g. you join this V_RSSFEED table with a flat file virtual table, which is static, not realtime.

And on container level the realtime flag is needed to generate a realtime task, even if just table types are used as source, no virtual table at all.


That's it. You execute the above dataflow once, and from then on all changes will be pushed by the adapter through this transformation into the final table.


Granted, the above transformation is not necessarily the most complex one, but nothing prevents us from building more complex ones - say, performing text data processing on the news headline to categorize the text into areas, named companies and names of people, and loading those into Hana.

Then the calculation view is a simple select on these tables instead of it doing the transformations on all data every time a user queries something.



Under the hood

Obviously, the above realtime checkbox changes a lot from the execution point of view. Most importantly, two tasks are now generated by the activation plugin of the hdbtaskflow. One is the initial load, the batch dataflow. The other is a realtime task.

The interesting part is the realtime flow. It is more or less the same as the initial load task, except that the source is not a virtual table but a table type.





The activation plugin has also created a remote subscription, with target TASK, not TABLE as before.





When executing the initial (batch) load, the realtime subscription is activated. We can see that in the stored procedure that is used to start the dataflow.
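Under the hood these are plain SQL objects; the generated statements will look roughly like the following (subscription, task and virtual-table names are illustrative):

```sql
-- Generated by the activation plugin: a subscription targeting the realtime TASK.
CREATE REMOTE SUBSCRIPTION "SUB_RSSFEED"
    ON "V_RSSFEED"
    TARGET TASK "RSSFEED_RT_TASK";

-- Issued by the stored procedure around the initial load:
ALTER REMOTE SUBSCRIPTION "SUB_RSSFEED" QUEUE;       -- start capturing changes
-- ... initial (batch) load runs here ...
ALTER REMOTE SUBSCRIPTION "SUB_RSSFEED" DISTRIBUTE;  -- apply the queued changes
```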




Compare the above with any other ETL tool: there it might take you, say, one hour to create the initial-load dataflow and multiple days for the delta logic. The goal here is to reduce that to a mouse click and to support realtime deltas.

Here are some of my ideas & suggestions within the existing setup which can significantly benefit end users and BW consultants.


Data Modelling


  • Default handling of junk characters upon loading: BW data modelers always have to write their own piece of code to handle junk characters, and it’s a long-standing wish from all developers


  • There should be a provision to compare the BW objects and Reports across the system to identify inconsistencies


  • There is no standard way for how data integrity checks against the source system are to be performed; SAP should define standard processes for data reconciliation OR provide data models to compare data between source systems


  • Similar to the metadata repository, SAP could have a repository with inbuilt tools. It would also be nice if SAP provided standard tools for certain tedious operations that currently need custom workarounds; some examples include:
    • Identifying BEx reports associated with roles
    • Reports which are associated with web templates & workbooks
    • Where-used list of variables in reports
    • Reports with last-used information


  • I suggest having one single button to remove all drill-downs from the report output


  • If a report has 30+ fields available for drill-down, it is tedious to add them to the report output because users have to scroll all the way to the bottom; enable search functionality for the fields available for drills


  • Enable functionality to add multiple fields for drills at once in report output




Abhishek Shanbhogue

So basically this app/weblink is going to store a large amount of data and would be a social security app.

We will keep this basic, as the idea is to have the information become commonly accessible to all.


  1. Politicians [with filters by state, city and area]
  2. Government officers
  3. General public [all included]


For government officers the data would be filled in by a government agency.

For individuals it will be filled in by themselves and will go for approval to a 3rd-party agent who would verify all the details.


Sections :

  1. Name
  2. Education
  3. Identification: ID, passport, Aadhaar card, PAN card [you can grey out your address]
  4. Criminal record, if any
  5. Companies worked for [get yourself validated by a 3rd party and it will be added]
  6. Outstanding achievements, if any [especially for politicians]
  7. Awards, if any
  8. All court cases
  9. CIBIL rating


For an individual :

  1. Once they create their profile, a Rs 100 voucher is sent to their registered email ID to encourage people to register. Here the primary key is the passport plus the email ID, so you cannot create a profile with the same combination again and again.
  2. Once the profile is created, an independent 3rd party would verify it and a verified sign would be shown, marking the profile as complete.


For politicians or candidates appearing for elections :

This is a mandatory step, to let the citizens know whether candidates truly qualify to be the leader in their area and to have transparency in the system.


Benefits :

  1. This will reduce the work and rework of all credit agencies, as they can directly check the client details for identification and CIBIL rating on the website.
  2. This will reduce the work of the police when they have to verify tenants to see if there is any criminal record against them.
  3. This will help the police track down a person, as people could report about the person if the data is not maintained on the website.
  4. It provides easy and accessible information in a click for those who want to know about their politicians and government agents. Thus people can take an informed decision while casting their vote. Currently we don't even know the names of the elected candidates from our city/state.
  5. This will be the next FACEBOOK for the nation, with real information about real people.
  6. This will reduce marriage frauds for partners, especially in India, as they can check each other's validity on the portal to confirm there is no cheating of any sort in terms of faking an identity.


The need for HANA: the app will store a large amount of data, and based on a name and date of birth the details should be published.

Fetching and displaying the data has to be really fast, to give people confidence that they are not wasting their time using this app or weblink.


Do let me know how this sounds!





In the previous two blog posts, we have seen how the Core Data Services (CDS) client for XSJS, XS Data Services (XSDS), allows us to import metadata of a CDS data definition into XSJS, and how entity instances can be created, retrieved, updated and deleted. In this (final) blog post in our series on XSDS, we present the unmanaged mode of XSDS, a JavaScript API for incrementally building queries against CDS entities and executing them.


Incrementally Build and Execute Queries

Usually, when you create database queries in XSJS, and in particular dynamic queries that depend on conditions like user-provided parameters, you have to deal with a String object containing the SQL query and use String concatenation to assemble it. A query builder provides a structured API instead of String operations, which can make your code much more succinct and readable. This is even more the case in XSDS, where we can refer to CDS metadata, so the properties of the entities are already known.


In XSDS unmanaged mode we work with plain structured values retrieved from HANA by arbitrary queries.  Unlike in managed mode, these general queries support all of the advanced HANA functionality for retrieving data.


After we have imported our entities (using the importEntities statement described in the first blog post), we can start to build a query.


A general query related to an entity is built by calling the $query() method of the entity constructor. Continuing the example from the previous blog posts, this most basic query looks as follows:


var qPosts = Post.$query();


The resulting object returned by $query() represents an initial query for all of the fields of the underlying entity, but without any associated entities.


Queries may be built from existing queries by chaining $where, $matching, $project, and further operators. With these operators, queries can be constructed incrementally without accessing the database for intermediate results.  A typical query construction could look as follows:


var qInterestingPosts = Post.$query().$where (/* condition */)
                                     .$where (/* further condition */)
                                     .$project (/* projection paths */)
                                     .$limit(5); /* 5 results only */


The final query is executed by invoking its $execute() method:


var result = qInterestingPosts.$execute();


As in the managed case, the result of the query is a JavaScript object, but it is treated as a plain value, not an entity instance managed by XSDS. Unmanaged values may be converted to entity instances using the new operator (see the second post).  Alternatively, the $find() or $findAll() methods return managed instances but only support a limited subset of the CDS query language.
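To illustrate the idea behind such a query builder, here is a plain-JavaScript toy (not the XSDS implementation): each builder call returns a new query object, so partial queries can be shared and extended without touching the database.

```javascript
// Toy query builder: every where() call returns a NEW query object,
// so the receiver stays unchanged and partial queries can be reused.
function query(table, conditions) {
  conditions = conditions || [];
  return {
    where: function (cond) {
      // build a new query with one more condition
      return query(table, conditions.concat([cond]));
    },
    toSQL: function () {
      var sql = "SELECT * FROM " + table;
      if (conditions.length > 0) {
        sql += " WHERE " + conditions.join(" AND ");
      }
      return sql;
    }
  };
}

var base = query("post");
var narrowed = base.where("rating > 2").where("lang = 'en'");

console.log(narrowed.toSQL());
// SELECT * FROM post WHERE rating > 2 AND lang = 'en'
console.log(base.toSQL());
// SELECT * FROM post
```

Note that the base query is unaffected by the chained calls, which is exactly what makes incremental construction safe.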




The $project() method specifies the fields the query should return:


var qTitleAndComments = Post.$query().$project({
    Title: true,
    Comments: {
        Author: {
            Name: "CommentAuthorName"
        },
        Text: {
            Text: true
        }
    }
});

var result = qTitleAndComments.$execute();


The list of projected fields is a JavaScript object in which desired fields are marked by either true or a String literal such as "CommentAuthorName", denoting an alias name.  The above query may thus yield the result:



{
    Title: "First Post!",
    Comments: {
        Author: {
            CommentAuthorName: "Bob"
        },
        Text: {
            Text: "Can we prevent this by software?"
        }
    }
}




Note that the actual database query automatically LEFT OUTER JOINs all required tables, based on the associations involved.  For the above example, the generated SQL looks like this (omitting the package prefix from the table name for readability):


SELECT          "t0"."Title" AS "t0.Title",
                "t0.Comments.Author"."Name" AS "t0.Comments.Author.Name",
                "t0.Comments"."Text.Text" AS "t0.Comments.Text.Text"
FROM            "" "t0"
LEFT OUTER JOIN "bboard.comment" "t0.Comments"
             ON "t0"."pid"="t0.Comments".""
LEFT OUTER JOIN "bboard.user" "t0.Comments.Author"
             ON "t0.Comments"."Author.uid"="t0.Comments.Author"."uid"
LEFT OUTER JOIN "bboard.user" "t0.Author"
             ON "t0"."Author.uid"="t0.Author"."uid"


Selections using $where


The $where() method filters the query result by some conditional expression.  For example, to select all posts that were commented on by a person with the same name as the post author, we write:


var qStrangePosts = qTitleAndComments.$where(Post.Comments.Author.Name.$eq(Post.Author.Name));


References to fields and associations such as Comments are available as properties of the entity constructor function, e.g., Post.Comments.  As in the case with projections, XSDS generates all required JOINs for associations referenced by the conditions automatically, even if they are not part of the current projection.


To build complex query expressions, the XSDS expression language provides a number of predefined operators that work on all data types:


  • $eq, $ne for SQL equality and inequality, resp.
  • $gt, $lt, $ge, $le for the SQL operators >, <, >=, <=, resp.
  • $null for the SQL operator IS NULL
  • $like, $unlike for the SQL operators LIKE and NOT LIKE
  • $and, $or for the SQL junctors AND and OR


A more complex selection statement could thus look like


var qOddPosts = Post.$query().$where(Post.Text.Lang.$eq("en")
                             .$and(Post.Rating.$lt(2).$or(Post.Rating.$gt(5))));
yielding a SQL query with the following where clause:


WHERE ("t0"."Text.Lang" = 'en') AND (("t0"."Rating" < 2) OR ("t0"."Rating" > 5))


For other SQL operators that are not part of the XSDS expression language, you may use generic operators such as $prefixOp:


qStrangePosts = qStrangePosts.$where(Post.Rating.$prefixOp("SQRT").$gt(1));


Selections using $matching


The $matching() method provides an alternative way to specify conditional expressions using the JSON-like syntax of $find() and $findAll() methods (see above).


var q1 = Post.$query().$matching({ Rating: { $gt: 2 } });

var q2 = Post.$query().$matching({ Rating: { $ge: 1, $lt: 5 }, Parent: { $null: false } });


The main difference between $matching() and $findAll() is that the former returns an unmanaged, plain value and ignores all unpersisted changes to any entity instances.


We can think of the JSON-like conditional expression as a “template” that the result should match.  Compared to the XSDS expression language used by $where(), the matching syntax is more concise, but also less expressive.  Also note that the expression language does not apply to managed queries, e.g., to $find() or $findAll().
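The template idea can be sketched in plain JavaScript (this is a toy illustration, not the XSDS implementation): a record matches if every leaf condition in the template holds.

```javascript
// Toy template matcher: keys starting with $ are leaf conditions,
// all other keys descend into nested fields of the record.
function matches(record, template) {
  return Object.keys(template).every(function (key) {
    var cond = template[key];
    if (key === "$gt")   return record > cond;
    if (key === "$ge")   return record >= cond;
    if (key === "$lt")   return record < cond;
    if (key === "$le")   return record <= cond;
    if (key === "$eq")   return record === cond;
    if (key === "$null") return (record === null) === cond;
    return matches(record[key], cond); // descend into a nested field
  });
}

console.log(matches({ Rating: 3, Parent: 7 },
                    { Rating: { $ge: 1, $lt: 5 }, Parent: { $null: false } }));
// → true
console.log(matches({ Rating: 7, Parent: 7 },
                    { Rating: { $ge: 1, $lt: 5 } }));
// → false
```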


Calculated Fields and Aggregations


Arbitrary calculated values may be added to the result set by using the $addFields() method. As an example, we return the square root of the post’s rating as an additional field MyRating:


var qRootedRatings = Post.$query().$addFields({ MyRating: Post.Rating.$prefixOp("SQRT") });


Aggregations are a special case of calculated fields that combine the $addFields operator with an additional $aggregate() method. For example, to retrieve the average rating per user, we would write:


var qUserRating = Post.$query().$aggregate({ Author: { uid: true, Name: true } })

                               .$addFields({ AverageRating: Post.Rating.$avg() });


In SQL terms, the $aggregate operator creates a GROUP BY expression for the specified paths and automatically projects the result onto them. For an even more restrictive projection, you may replace true with false in the $aggregate call:


var qUserRating = Post.$query().$aggregate({ Author: { uid: false, Name: true } })

                               .$addFields({ AverageRating: Post.Rating.$avg() });


This will remove the users’ IDs from the result set.


Currently, XSDS supports aggregation operators $avg, $sum, $count, $max, and $min.


Ordering, Size Limits, and Duplicate Elimination


The order of the result set is defined by the $order() method.  Each order criterion contains a property $by with the expression by which to order. Optionally, each criterion can contain a $desc flag to request a descending order, and a $nullsLast flag.  The following example uses two criteria to order first by rating in descending order, then by author name in ascending order:


var qOrderedPosts = Post.$query().$order({ $by: Post.Rating, $desc: true }, { $by: Post.Author.Name });


The $distinct operator removes duplicates from the result set.  To get the set of all ratings we can write:


var qAllRatings = Post.$query().$project({ Rating: true }).$distinct();


The number of records returned may be specified by using the $limit operator, which introduces the LIMIT and OFFSET keywords into the SQL query:


var qNextFivePosts = qStrangePosts.$limit(5, 3); //skip posts 1-3, return posts 4-8



In this blog post you have seen how you can exploit the full power of HANA queries against CDS entities through a convenient JavaScript API from XSJS. The unmanaged mode presented here complements the managed mode presented in the previous post.


Which mode you should use depends very much on your use case: if you have simple queries and need to modify objects, instance management is probably valuable to you; if you need to create complex dynamic queries, you may want to work in unmanaged mode. In any case, you can easily switch between the modes, as described in the last section of the previous post.


We hope you enjoy writing your native HANA applications using XSDS, giving you a seamless experience in consuming database artifacts in the XSJS layer.

Hi everyone, in this post I'll share with you how to customize your SAP HANA login screen background image in just three steps. I got to know this new feature from SAP HANA SPS 09: New Developer Features; XS Admin Tools, but had never tried it before. Recently I supported an SAP HANA PoC, and since the customer wanted to customize the login screen background image, I had the chance to use this cool new feature in SAP HANA SPS09.


As you know, in SAP HANA SPS08 we're not able to change the background image of the form login screen, so the following background image always appears when you use form-based login authentication.




As of SAP HANA SPS09, the login screen has been changed to the following simple blue style. As you can see, the background is nothing more than light blue, but don't worry: it's now possible to customize your own login screen background image. You can use whatever image you like.




You can find the information about this topic in SAP HANA XS Configuration Parameters - SAP HANA Administration Guide - SAP Library, which documents it very clearly. Now let's DIY our own background image.




Step 1: Upload your image and make it public

First of all, you need to know one thing: the form-based login screen is identical for all SAP HANA native applications, which means the background image has no relationship with your XS projects and should be placed in a "global" location. As you can see in the above example, "/sap/hana/xs/ui/Image.jpg" is a good choice. For simplicity, I just uploaded the background image to my XS project, since it's my own SAP HANA system and I'm the only user.



You may notice there's a prerequisite: "No requirement for authentication and authorization". So, we need to make the background image available to everyone. We can achieve this with the following .xsaccess file; just set authentication to null.
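A minimal .xsaccess along these lines should do (the "exposed" flag here is an assumption to make the package reachable; adapt it to your project):

```json
{
    "exposed": true,
    "authentication": null
}
```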





Step 2: Configure xsengine.ini -> httpserver -> login_screen_background_image

Just enter the path of our background image.
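If you prefer the SQL console over the administration UI, the same parameter can be set roughly like this (the image path is the one assumed from Step 1):

```sql
-- Set the login screen background image system-wide;
-- adjust the path to wherever you placed your image.
ALTER SYSTEM ALTER CONFIGURATION ('xsengine.ini', 'SYSTEM')
  SET ('httpserver', 'login_screen_background_image') = '/sap/hana/xs/ui/Image.jpg'
  WITH RECONFIGURE;
```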




Step 3: Set the technical user

Create a technical user, e.g., "_USS", and grant it the role "sap.hana.xs.selfService.user.roles::USSExecutor".
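As a sketch, the user creation and role grant could look like this in the SQL console (the password is a placeholder, not a recommendation):

```sql
-- Create the technical user and grant it the activated repository role.
CREATE USER _USS PASSWORD Initial1234;
CALL "_SYS_REPO"."GRANT_ACTIVATED_ROLE"
     ('sap.hana.xs.selfService.user.roles::USSExecutor', '_USS');
```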




Assign the technical user to the artifact "/sap/hana/xs/selfService/user/selfService.xssqlcc"




That's it! Visit any SAP HANA native application to test it.




I hope you enjoyed reading my blog and that you DIY your own login screen background image successfully!



I recently needed to collect data from external websites in JSON format using SAP HANA. There were some very useful blogs on SCN to help me find my way. I did notice, however, that most of them were written prior to SPS09, and I therefore had one or two challenges completing this, as the user interface has changed slightly on the HANA admin side. This blog provides a more up-to-date reference and also shares the blogs that assisted me along the way.




Simple HTTP request


So, to create a simple app to communicate with the web, I looked at SAP Help and found the following: SAP HANA Cloud Platform


I found this helpful, as it had simple code and worked with JSON.


Below is an example based on the code from that documentation. First, you need to create an .xshttpdest file as shown below.
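For reference, an .xshttpdest for the Google Maps example could look roughly like this (host and path follow the Distance Matrix API; the remaining values are common defaults, not taken from the original screenshot):

```
host = "maps.googleapis.com";
port = 80;
description = "Google Maps Distance Matrix API";
useSSL = false;
pathPrefix = "/maps/api/distancematrix/json";
authType = none;
useProxy = false;
proxyHost = "none";
proxyPort = 0;
timeout = 0;
```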




Then create an .xsjs file with the following code. Note that the .xshttpdest and .xsjs files must be in the same directory.
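The request logic can be sketched as follows. The package name "myapp" and the destination name "googlemaps" are assumptions, and inside HANA XS the runtime provides the $ API object; here it is passed in as a parameter so the flow is visible at a glance.

```javascript
// Hypothetical XSJS sketch; $ is the XS runtime API object.
function fetchDistance($) {
  // Read the destination defined in googlemaps.xshttpdest (same package)
  var destination = $.net.http.readDestination("myapp", "googlemaps");
  var client = new $.net.http.Client();
  // The query string is appended to the destination's pathPrefix
  var request = new $.web.WebRequest($.net.http.GET,
      "?origins=Cologne&destinations=Frankfurt&mode=car&language=en");
  client.request(request, destination);
  return client.getResponse().body.asString();
}

// Inside the .xsjs file, the handler would then end with:
//   $.response.contentType = "application/json";
//   $.response.setBody(fetchDistance($));
```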



You can now run the program; it will connect to the Google Maps API. The example requests a distance calculation from Cologne in Germany to Frankfurt in Germany.



The output is in JSON format, and on my screen it comes out in a neat structure; that is because I have an app in my Google Chrome that formats it for me. Depending on the browser, the output can vary.
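The shape of the reply can be illustrated with plain JavaScript; the payload below is an abbreviated, hypothetical response, not actual API output.

```javascript
// Abbreviated, hypothetical Distance Matrix reply
var body = JSON.stringify({
  status: "OK",
  rows: [{ elements: [{ status: "OK",
                        distance: { text: "190 km", value: 190000 },
                        duration: { text: "1 hour 55 mins", value: 6900 } }] }]
});

// Pick the human-readable distance out of the first result
function firstDistance(json) {
  var data = JSON.parse(json);
  var el = data.rows[0].elements[0];
  return el.status === "OK" ? el.distance.text : null;
}

console.log(firstDistance(body)); // → "190 km"
```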




More complex HTTPS request


Some sites are secured and use HTTPS instead of HTTP. In these instances, in order to interact with those websites, you will need a certificate to allow secure communication.


I found some very good blogs on this on SCN by Kai-Christoph Mueller; they are:

Outbound httpS with HANA XS (part 1) - set up your HANA box to use SSL/TLS

Outbound httpS with HANA XS (part 2) - set up the trust relation

Outbound httpS with HANA XS (part 3) - call the https outbound service in XS server side JavaScript (xsjs)



This was an excellent starting point for me; however, I did have the following challenges:

  • The script in part 1 of Kai-Christoph's blogs gave me some errors after executing: my web app server for XSJS wouldn't start up. This was because some of the standard settings in the HANA webdispatcher.ini config were overwritten. I therefore reverted to the standard settings and then, by reading the script, added the settings manually in SAP HANA Studio. The script was still executed to copy the files to the correct locations and register them. Below you can see the settings in HANA Studio.



  • In part 2 of Kai-Christoph's blogs, the trust manager screens have changed in SPS09. So navigate to http://hana:8001/sap/hana/xs/admin and follow the numbers as shown in the screenshot below. As you can see, I have done this for Twitter.



  • Once you have done parts 1 and 2, you can connect to secure sites using HTTPS. You then need to create the code in a similar manner to the previous example connecting to the Google Maps API. Once the .xshttpdest file is done, you will need to link it to the certificate you imported. For completeness, here is the code to connect to Twitter.


     I have intentionally smeared my bearer credentials; this is the authentication with Twitter for my Twitter account.
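The way the (smeared) token ends up on the request can be sketched like this; the function name, the query string, and the token are illustrative only, not Twitter's actual endpoint details.

```javascript
// Attach an OAuth2 bearer token to an XS WebRequest (sketch);
// $ is the XS runtime API object, bearerToken is the smeared credential.
function authorizedSearchRequest($, bearerToken) {
  var request = new $.web.WebRequest($.net.http.GET,
      "?q=%23SAPHANA&count=10");
  request.headers.set("Authorization", "Bearer " + bearerToken);
  return request;
}
```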




  • In part 3 of Kai-Christoph's blogs, the screens have also changed in SPS09. Here you want to link the .xshttpdest file with the certificate we imported. Remember, I imported the Twitter certificate, and the .xshttpdest is also for Twitter. So below are the steps to navigate to the HTTP destination file and link it with the certificate we imported.




If you now run the application, it will connect to Twitter, search for SAP HANA tweets, and return the search results in JSON as shown below.





HTTPS can be more complex, as linking the code to the certificate is done in a separate section. But once you know where to go and what to do, it is relatively simple.


I hope the above helps anyone trying to connect to HTTP services from HANA.


Thanks. Good luck


By making use of different parameters, or information from satellite imaging, for all existing oil extraction sites, it may be possible to derive a complex algorithm that captures the commonalities among different oil extraction sites.

Based on this algorithm or pattern, it should be possible to predict potential oil extraction sites across the earth by analyzing similar parameters (or satellite imaging information) for these sites.

The algorithm should provide the probability of finding oil across different areas.


The same approach could be applied to find extraction sites (ores) for other precious and useful minerals, based on similar or slightly different data.



Dear Subject Matter Experts,

Please provide your comments if you agree with this idea or have a better one.



SAP HANA Idea Incubator - Planning Engagement of the Business : View Idea


This could be a purely academic exercise applied across multiple organizations, or something applied to a single organization, particularly a larger one with many entities that plan in a very decentralized fashion.

Regardless of the planning solution involved, and whether the planning functionality serves financial, operational, tax, or other purposes, many systems can log changes to the system, complete with who made the change, the time stamp, and likely the nature of the change, the source of the activity, the entity, cost center, profit center, etc.  Capture this information in HANA and develop KPIs around it to highlight things such as:

1. number of people involved in the planning process by area

2. number of touch points by each person on the average

3. are updates by the users on a seasonal basis or regular and frequent throughout the year?

4. is there a right balance between too frequent and not frequent enough, at the right level in the organization, compared with plan accuracy?

5. frequency of source data loads, such as external, economic or industry data, etc.

6. is the business making updates soon after external data updates?

7. are planners providing updates as a result of other updates in the organization (sales forecast increases, which should lead to changes to operations and financials)?  how long before the updates occur?

8. are certain metrics unique to some areas of the business, like tax planning?

9. do certain business functions plan better than others?  are sources of information missing?

In addition to gaining insight into the level of engagement and collaboration in the organization's planning functions, weaker areas of the organization can begin to learn from stronger ones.  Insights such as the timing of regular updates from sales could become a trigger for other updates.  You could reduce the lag time between receiving external data and updating plans, thereby making more timely decisions.
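As a sketch of KPIs 1 and 2 above, assuming the change log is replicated into a HANA table named PLAN_CHANGE_LOG with columns USER_NAME and AREA (all hypothetical names):

```sql
-- Planners involved and average touch points per planner, by area
SELECT AREA,
       COUNT(DISTINCT USER_NAME)            AS PLANNERS_INVOLVED,
       COUNT(*) / COUNT(DISTINCT USER_NAME) AS AVG_TOUCHPOINTS
FROM   PLAN_CHANGE_LOG
GROUP  BY AREA;
```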

