(Special thanks to Ashish Sahu for blog content and review, and to Eric Du and Travis Price for the technical papers.)

 

The NoSQL market has grown rapidly to support the vast amount of data generated by today’s world of intense, always-on environments.

DataStax, built on Apache Cassandra, a high-performance open-source database technology, distributes its database in a fully distributed way across private data centers and the cloud to power this landscape.

It is one of the leading NoSQL vendors providing fault-tolerant distributed databases to 200+ customers including Netflix, Adobe, HP, eBay and Twitter. Additionally, DataStax supplies OpsCenter for visual management and monitoring, along with expert support, training, and consulting services for Hadoop, Cassandra, and Solr. SAP and DataStax are working together to bring their joint capabilities to the enterprise.

 

The integration of the technologies is delivered for two scenarios:

  1. SAP HANA SPS09 supports DataStax Enterprise 4.5

 

SAP’s in-memory SAP HANA platform is integrated with DataStax to enable applications and interactive analysis across corporate application data and operational data, combined with Internet of Things, messaging, or streaming data stored in DataStax.

This integration is delivered via SAP HANA smart data access (SDA), which makes it possible to access data in DataStax remotely, without moving it into SAP HANA. SAP HANA exposes the remote data as a local virtual table, performing automatic data type conversions and push-down processing.

The combined capability means that customers can apply SAP HANA’s “speed of thought” analytics to data arriving in DataStax at millions of transactions per second. It also simplifies the landscape by processing data where it resides instead of performing costly and time-consuming data movement.

SDA already supports many different data sources, such as SAP IQ, SAP ASE, Spark, Teradata, Apache Hadoop, Oracle, and SQL Server, providing a comprehensive selection for customers’ needs.

DSE 4.5 HANA.jpg

 

 

SAP HANA is a high-performance in-memory columnar database which performs OLTP and OLAP on the same platform.

It provides in-database native support for advanced analytics such as predictive, text analysis, spatial, and graph processing. The platform is the leading choice of more than 4,000 customers and 1,700 startups.

DataStax Enterprise (DSE), built on Apache Cassandra™, delivers what Internet Enterprises need to compete in today's high-speed, always-on data economy. DataStax Enterprise is the leading distributed database choice for online applications that require fast performance with no downtime.

DataStax Enterprise 4.5 encapsulates a distributed peer-to-peer architecture in which all nodes are equal, ensuring cluster resilience to an arbitrary number of node or data center failures.

 

Architecturally, SAP HANA’s smart data access component connects to DataStax Analytics (Spark/Shark) through the ODBC driver. The resulting data set is treated as a remote table in HANA and can be used as input for all advanced analytics functionality in HANA, including modeling views, SQLScript procedures, predictive analysis, text analysis, and so on.

 

The joint capabilities of SAP HANA and DataStax are as follows:

 

Always On

  • Zero downtime. Ever.
  • 100% availability for mission-critical applications

 

Performance

  • High throughput
  • Low latency

 

Advanced Analytics

  • Intuitive interfaces make it easy to explore transactional data in real time

 

No Complex Data Migration or ETL

  • Transactional data is immediately available to SAP Applications

 

Integrate with SAP Ecosystem

  • Use existing SAP workflows and BI tools

 

 

 

2. SAP’s Agile BI Platform, SAP Lumira, supports DataStax

 

We have also completed the integration with SAP Lumira so that customers can view DataStax data in Lumira data visualizations and storyboards.

DSE 4.5 Lumira.jpg

 

The DataStax Enterprise Sample Connector Template for SAP Lumira makes it easy to integrate DataStax data with numerous other data sources, allowing business users to analyze data and get insights in real time. You can download the DataStax Connector Template for SAP Lumira from the SAP HANA Marketplace under this link:

http://marketplace.saphana.com/p/3389

INTRODUCTION:

 

This document describes a solution for a specific requirement (mentioned below) that I had while working on an SAP HANA project for a client.

 

REQUIREMENT:

 

I had a requirement to export a complete schema (SCHEMA X) from one system (Development, BD2) and import it into another schema (SCHEMA Y) in a second system (Quality, BQ2), so that all tables available in the source schema would also be available in the target schema.

 

SCHEMA X in the target system already contains the same catalog objects as in the source system.

 

Therefore, when SCHEMA X is exported from the source and imported into the target system, HANA raises an error that the catalog objects (in my case, tables) already exist in the target system, because the exported files contain references to SCHEMA X and not to SCHEMA Y.

 

PROCESS:

 

To achieve the above requirement, simply exporting and importing the catalog objects will not work, as the source schema name is different from the target schema name.

 

There is also a SCHEMA MAPPING option in the HANA Modeler perspective, which is suited to modeling views, but the method I used is a general approach for moving catalog objects from one schema to another.

 

Source schema: ARJUN (Development)

Target schema: SRIVATSAN (Quality)

 

All the tables available in the ARJUN schema of the DEV system have to be exported and imported into the SRIVATSAN schema of the Quality system.

 

Both the schemas are available in Quality system.

 

The ARJUN schema in the Quality system already contains all the tables from the Development system, so all occurrences of ARJUN must be renamed to SRIVATSAN in the table creation code after the EXPORT.

 

 

 

Step 1: EXPORT the source schema (ARJUN)


  • Right click the schema and choose EXPORT

1.jpg

 

2.jpg

  • Click Next.

 

3.jpg

 


  • Choose Type as BINARY.
  • Provide an empty folder path for the export.
  • Click Finish.


4.jpg

 

Step 2: Rename the exported schema (from ARJUN to SRIVATSAN).


  • Open Notepad++ (the advanced text editor I used for renaming)
  • Press Ctrl + F
  • Go to Find in Files
  • Enter the text to be found and text to be replaced with.
  • Choose the Directory in which the exported file is present.

 

5.jpg


  • Click Find All.
  • All occurrences of the search text are displayed.


6.jpg

 


  • Press Ctrl + F again and open the “Find in Files” tab.


7.jpg

 

  • Click “Replace in Files”.


8.jpg

 

  • Click OK.
  • Now, all the table creation statements of the exported schema will have SRIVATSAN instead of ARJUN.


 

9.jpg

 

 


  • It is also necessary to rename the folder inside the index folder to the name of the schema into which the file will be imported.


10.jpg

Now, the exported binary file is ready to be imported to the target schema (SRIVATSAN) of target system (Quality).
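The manual Notepad++ steps above can also be scripted. Here is a minimal Python sketch that rewrites the schema name in all text files of the export directory and renames the matching folder under the index folder; the exact layout of your export may differ, so treat the file handling here as an assumption rather than the exact HANA export format.

```python
import os

def rename_schema(export_dir, old_schema, new_schema):
    # Walk bottom-up so files are rewritten before their parent folder is renamed
    for root, dirs, files in os.walk(export_dir, topdown=False):
        for name in files:
            path = os.path.join(root, name)
            try:
                with open(path, encoding="utf-8") as f:
                    text = f.read()
            except (UnicodeDecodeError, OSError):
                continue  # skip binary data files
            if old_schema in text:
                with open(path, "w", encoding="utf-8") as f:
                    f.write(text.replace(old_schema, new_schema))
        for name in dirs:
            # Rename e.g. index/ARJUN to index/SRIVATSAN
            if name == old_schema:
                os.rename(os.path.join(root, name),
                          os.path.join(root, new_schema))

# Example (hypothetical path): rename_schema(r"C:\exports\arjun_export", "ARJUN", "SRIVATSAN")
```

Note that a plain text replacement, like the Notepad++ approach it mirrors, would also change any unrelated occurrences of the old schema name, so check the results before importing.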

 

Step 3: Import the catalog objects.


  • Go to HANA studio.
  • Click FILE -> IMPORT.


11.jpg

 

  • Choose “Catalog objects” under SAP HANA.


12.jpg

 


  • Choose the target system (Quality).


13.jpg

 


  • Click NEXT.
  • Choose the Location of the exported file.


14.jpg

 

  • Click Next.
  • Choose the required tables that are to be imported to target system.


15.jpg

 


  • Click Next.
  • Choose the required options for DATA and THREADS.


16.jpg

 

  • Click Finish.

 

The required tables are now imported into the SRIVATSAN schema.

 

Conclusion:

 

This approach is a general method to copy tables from one system to another (across different schemas). It might also be achievable through SCHEMA MAPPING, which I have not tried in practice.


Please share your views and comments.

You can find the whole series and the project in Real-time sentiment rating of movies on SAP HANA (part 6) - wrap-up and looking ahead

 

 

 

Intro

Hi everyone, welcome back to the series. We're now in the second half. I hope you've enjoyed the previous three blogs:

 

Real-time sentiment rating of movies on SAP HANA (part 1) - create schema and tables

Real-time sentiment rating of movies on SAP HANA (part 2) - data preparation

Real-time sentiment rating of movies on SAP HANA (part 3) - text analysis and modeling

 

In the above three blogs we finished data preparation and sentiment analysis, and we also created two information views that will be consumed in the following two blogs. Now it's time for the UI. In this blog, let's first take a look at SAP HANA info access and build our first smart application based on this super cool feature!

 

What is SAP HANA info access?

Maybe this is the first time you've heard of SAP HANA info access; never mind. Let's first have a look at it. You can find a detailed introduction and its usage in Building Search UIs with SAP HANA Info Access - SAP HANA Developer Guide - SAP Library, so I won't show the details here. I'll just highlight the important points for you.

 

"The SAP HANA info access HTTP service wraps search SQL queries and exposes them through an HTTP interface. The service operates on one SAP HANA attribute view at a time and makes use of the indexed and analysed texts joined in that view."

 

22.PNG

 

First of all, you can find the above definition and architecture diagram of SAP HANA info access in SAP HANA Info Access Architecture - SAP HANA Developer Guide - SAP Library. As you can see, there are four layers: attribute views and the Info Access HTTP Service on the SAP HANA side, and SINA and the Info Access Dev Toolkit for HTML5 on the UI side. With SAP HANA info access, you do not need to worry about the server-side search logic and can just focus on the UI. So, why not use this awesome feature to build our sentiment app? For simplicity, our smart app will use only the attribute view already defined in Real-time sentiment rating of movies on SAP HANA (part 3) - text analysis and modeling and the Info Access Dev Toolkit for HTML5 at the top layer. I'll show you how simple but powerful it is in this blog.

 

There are still a few things you need to do first. Make sure you've finished the following before you start your journey with SAP HANA info access.

  1. Import the delivery unit "HCO_INA_SERVICE.tgz" from your SAP HANA server; this DU is not part of the default installation, i.e., unlike the SAPUI5 DU, it is not installed automatically with the SAP HANA server.
  2. Download the delivery unit "HCO_INA_UITOOLKIT.tgz" from the SAP Software Download Center to your desktop/laptop and import the DU from the client side.
  3. Assign the role "sap.bc.ina.service.v2.userRole::INA_USER" to each user who will use SAP HANA info access.

 

In addition, I highly recommend playing with the SAP HANA info access demo in SHINE (section 9.10) first, since it gives you a basic idea of the look and feel of SAP HANA info access. Visit http://<myServer>:<XS_Port>/sap/hana/democontent/epm/fulltextsearch/ui/search.html after you import SHINE and generate data. You can also have a look at its source code, as our smart app is based on it.

 

23.png

 

Info Access Dev Toolkit for HTML5

Since we've already prepared our attribute view "AT_TWEETS.attributeview" in Real-time sentiment rating of movies on SAP HANA (part 3) - text analysis and modeling, what we need to do in this blog is mostly front-end work: HTML, CSS, and JavaScript. Now let me show you the most important file, search.html, which we will visit directly in our browser. If you've read the source code of the SINA demo in SHINE, you'll be familiar with it. Here I just want to mention some key points from top to bottom. (Syntax highlighting does not seem to work on SCN today, so the code below may look ugly; sorry for that.)

 

1. You need to include the following script when you use the info access development toolkit.

<script data-main="/sap/bc/ina/uitoolkit/js/main.js" src="/sap/bc/ina/uitoolkit/js/lib/require.js"></script>

 

2. createFulltextIndex.js creates the full-text indexes on the searched fields the first time the app runs. We'll discuss it later.

 

3. The following code defines the name/package/key of the attribute view.

<div data-sap-widget="workarea" data-entityset="AT_TWEETS" data-packagename="movieRating.models" data-aggregationdimension="id_counter">

4. We have six work area items (positions 0 to 5) in our facet pane, while there are only three in the SINA demo of SHINE. Besides, we have one work area item (position 6) for the result list in our content pane.

 

5. We have 17 work area items with the chart widget in our facet repository; the following six, which seem the most important, are displayed by default:

  • SENTIMENT BY MOVIE
  • SENTIMENT
  • TWEET SOURCE
  • TITLE
  • TOKEN
  • STUDIO

 

6. Except for SENTIMENT BY MOVIE, each work area item with the chart widget lets you switch between a bar chart, a pie chart, and a tag cloud for your convenience.
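Conceptually, each of these chart facets is just a grouped count over one dimension of the attribute view (the `$$Count$$` measure), while SENTIMENT BY MOVIE groups over two dimensions. A minimal Python sketch with made-up sample rows illustrates the aggregation:

```python
from collections import Counter

# Hypothetical sample rows from the AT_TWEETS view: (title, sentiment)
rows = [
    ("Gravity", "StrongPositiveSentiment"),
    ("Gravity", "WeakPositiveSentiment"),
    ("Gravity", "StrongPositiveSentiment"),
    ("Her", "NeutralSentiment"),
]

# A bar chart, pie chart, or tag cloud facet is a count per distinct value
sentiment_facet = Counter(sentiment for _, sentiment in rows)

# SENTIMENT BY MOVIE counts over (title, sentiment) pairs for the
# line and group bar charts
sentiment_by_movie = Counter(rows)
```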

 

7. We have one work area item with the result list widget, which shows the details of each MENTION. The result list is sorted by "created_at" in descending order, i.e., you see the most recent mention first. Similar to the SINA demo of SHINE, we also define three separate result templates as follows.

  • tweet.html: Defines the template of the result in our content pane.
  • tweet_small.html: Defines the template of the result in our facet pane.
  • tweet_detail.html: Defines the template of the result in the overlay container.
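The descending "created_at" ordering in point 7 amounts to a reverse sort on the timestamp; a quick Python illustration with made-up mentions:

```python
# Hypothetical mentions with sortable timestamp strings
mentions = [
    {"text": "Loved it!", "created_at": "2014-07-01 09:00:00"},
    {"text": "Not my thing", "created_at": "2014-07-03 18:30:00"},
    {"text": "Great cast", "created_at": "2014-07-02 12:15:00"},
]

# ISO-like timestamps compare correctly as strings; reverse=True puts
# the most recent mention first, matching the result list widget
mentions.sort(key=lambda m: m["created_at"], reverse=True)
```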


search.html

<!DOCTYPE HTML>
<html lang="en">
<head>
<meta charset="utf-8">
<!-- Always force latest IE rendering engine (even in intranet) & Chrome Frame -->
<meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
<title>Movie Sentiment Rating based on SINA</title>
<!-- CSS: implied media="all" -->
<!-- CSS concatenated and minified via ant build script-->
<link rel="stylesheet" type="text/css" href="search.css">
<!-- end CSS-->
<link rel="shortcut icon" href="img/favicon.png" />
<script data-main="/sap/bc/ina/uitoolkit/js/main.js" src="/sap/bc/ina/uitoolkit/js/lib/require.js"></script>
<script>
requirejs(['server/createFulltextIndex.js'], function () {
});
</script>
<!-- create fulltext index -->
<!-- <script src="server/createFulltextIndex.js"></script> -->
</head>
<body>
    <header>
        <div data-sap-widget="header" data-title="Movie Sentiment Rating based on SINA" data-helphref="http://help.sap.com/hana/SAP_HANA_INA_Search_JavaScript_Reference_en/index.html"></div>
    </header>
    <div class="background">
    <div data-sap-widget="alert"></div>
   <div data-sap-widget="workarea" data-entityset="AT_TWEETS" data-packagename="movieRating.models" data-aggregationdimension="id_counter">
    <div class="searchnfilterframe">
    <div data-useresponseattributes="false" data-usedimensions="true" data-sap-widget="searchnfilter"></div>
    </div>
        <div class="facetPaneShadow">
          <div class="facetPane">
            <div class="facetColumn" data-children-drag-handle="header">
                <div data-sap-widget="workareaitem" data-title="Drop here" data-target-position="0"></div>
                <div data-sap-widget="workareaitem" data-title="Drop here" data-target-position="1"></div>
                <div data-sap-widget="workareaitem" data-title="Drop here" data-target-position="2"></div>
            </div><!-- end of the facetColumn" -->
          </div> <!--  end of facetPane -->
        </div> <!--  end of facetPaneShadow -->
        
        <div class="facetPaneShadow">
          <div class="facetPane">
            <div class="facetColumn" data-children-drag-handle="header">
                <div data-sap-widget="workareaitem" data-title="Drop here" data-target-position="3"></div>
                <div data-sap-widget="workareaitem" data-title="Drop here" data-target-position="4"></div>
                <div data-sap-widget="workareaitem" data-title="Drop here" data-target-position="5"></div>
            </div><!-- end of the facetColumn" -->
          </div> <!--  end of facetPane -->
        </div> <!--  end of facetPaneShadow -->
        <div class="contentPaneShadow">
            <div class="contentPane" data-children-drag-handle="header">
                <!-- The following div is a placeholder that will be replaced on the first time a workarea-centerable will be centered. -->
                <div data-sap-widget="workareaitem" data-iscenter="true" data-title="Drop here" data-target-position="6"></div>
            </div>
        </div>
        <div data-sap-widget="facetrepository" data-children-drag-handle="body">
        
        <div data-sap-widget="workareaitem" data-title="SENTIMENT BY MOVIE" data-source-position="0">
                        <div data-sap-widget="switchbox" data-activeindex="0">
                            <a href="javascript: void(0);" class="iconTimeline">Timeline</a>
                            <div data-sap-widget="chart" data-dimension-line="sentiment" data-dimension-x="title" data-dimension-y="$$Count$$" data-charttype="line" data-color="colored" data-toplarge=10 data-maxlabelnum="all"></div>
                            <a href="javascript:void(0)" class="iconGroupBarChartVertical">Group Bar Chart</a>
                            <div data-sap-widget="chart" data-toplarge="25" data-topsmall="5" data-title="SENT-MOVIE-GROUPBAR" data-dimension-line="title" data-dimension-x="sentiment" data-dimension-y="$$Count$$" data-charttype="groupbar"></div>
                        </div>
                </div>
        
        <div data-sap-widget="workareaitem" data-title="SENTIMENT" data-source-position="1">
                    <div data-sap-widget="switchbox" data-activeindex="1">
                        <a href="javascript:void(0)" class="iconBarChartHorizontal">Bar Chart</a>
                        <div data-sap-widget="chart" data-toplarge="25" data-title="SENTIMENT-BAR" data-dimension="sentiment" data-charttype="bar" data-color="green"></div>
                        <a href="javascript:void(0)" class="iconPieChart">Pie Chart</a>
                        <div data-sap-widget="chart" data-title="SENTIMENT-PIE" data-dimension="sentiment" data-charttype="pie" data-color="#F0AB00"></div>
                        <a href="javascript:void(0)" class="iconCorrelation">Tagcloud</a>
                        <div data-sap-widget="chart" data-topsmall="8" data-toplarge="25" data-title="SENTIMENT-TAG" data-dimension="sentiment" data-charttype="tagcloud"></div>
                    </div>
                </div>
                
                <div data-sap-widget="workareaitem" data-title="TWEET SOURCE" data-source-position="2">
                    <div data-sap-widget="switchbox" data-activeindex="0">
                        <a href="javascript:void(0)" class="iconBarChartHorizontal">Bar Chart</a>
                        <div data-sap-widget="chart" data-toplarge="25" data-title="SOURCE-BAR" data-dimension="tweet_source" data-charttype="bar" data-color="green"></div>
                        <a href="javascript:void(0)" class="iconPieChart">Pie Chart</a>
                        <div data-sap-widget="chart" data-title="SOURCE-PIE" data-dimension="tweet_source" data-charttype="pie" data-color="#F0AB00"></div>
                        <a href="javascript:void(0)" class="iconCorrelation">Tagcloud</a>
                        <div data-sap-widget="chart" data-topsmall="8" data-toplarge="25" data-title="SOURCE-TAG" data-dimension="tweet_source" data-charttype="tagcloud"></div>
                    </div>
                </div>
                
                <div data-sap-widget="workareaitem" data-title="TITLE" data-source-position="3">
                    <div data-sap-widget="switchbox" data-activeindex="2">
                        <a href="javascript:void(0)" class="iconBarChartHorizontal">Bar Chart</a>
                        <div data-sap-widget="chart" data-toplarge="25" data-title="TITLE-BAR" data-dimension="title" data-charttype="bar" data-color="green"></div>
                        <a href="javascript:void(0)" class="iconPieChart">Pie Chart</a>
                        <div data-sap-widget="chart" data-title="TITLE-PIE" data-dimension="title" data-charttype="pie" data-color="#F0AB00"></div>
                        <a href="javascript:void(0)" class="iconCorrelation">Tagcloud</a>
                        <div data-sap-widget="chart" data-topsmall="8" data-toplarge="25" data-title="TITLE-TAG" data-dimension="title" data-charttype="tagcloud"></div>
                    </div>
                </div>
                
                <div data-sap-widget="workareaitem" data-title="TOKEN" data-source-position="4">
                    <div data-sap-widget="switchbox" data-activeindex="1">
                        <a href="javascript:void(0)" class="iconBarChartHorizontal">Bar Chart</a>
                        <div data-sap-widget="chart" data-toplarge="25" data-title="TOKEN-BAR" data-dimension="token" data-charttype="bar" data-color="green"></div>
                        <a href="javascript:void(0)" class="iconPieChart">Pie Chart</a>
                        <div data-sap-widget="chart" data-title="TOKEN-PIE" data-dimension="token" data-charttype="pie" data-color="#F0AB00"></div>
                        <a href="javascript:void(0)" class="iconCorrelation">Tagcloud</a>
                        <div data-sap-widget="chart" data-topsmall="8" data-toplarge="25" data-title="TOKEN-TAG" data-dimension="token" data-charttype="tagcloud"></div>
                    </div>
                </div>
                
                <div data-sap-widget="workareaitem" data-title="STUDIO" data-source-position="5">
                    <div data-sap-widget="switchbox" data-activeindex="0">
                        <a href="javascript:void(0)" class="iconBarChartHorizontal">Bar Chart</a>
                        <div data-sap-widget="chart" data-toplarge="25" data-title="STUDIO-BAR" data-dimension="studio" data-charttype="bar" data-color="green"></div>
                        <a href="javascript:void(0)" class="iconPieChart">Pie Chart</a>
                        <div data-sap-widget="chart" data-title="STUDIO-PIE" data-dimension="studio" data-charttype="pie" data-color="#F0AB00"></div>
                        <a href="javascript:void(0)" class="iconCorrelation">Tagcloud</a>
                        <div data-sap-widget="chart" data-topsmall="8" data-toplarge="25" data-title="STUDIO-TAG" data-dimension="studio" data-charttype="tagcloud"></div>
                    </div>
                </div>
                
                <div data-sap-widget="workareaitem" data-title="TWEET BY YEAR">
                    <div data-sap-widget="switchbox" data-activeindex="1">
                        <a href="javascript:void(0)" class="iconBarChartHorizontal">Bar Chart</a>
                        <div data-sap-widget="chart" data-toplarge="25" data-title="TWEET-YEAR-BAR" data-dimension="tweet_time_year" data-charttype="bar" data-color="green"></div>
                        <a href="javascript:void(0)" class="iconPieChart">Pie Chart</a>
                        <div data-sap-widget="chart" data-title="TWEET-YEAR-PIE" data-dimension="tweet_time_year" data-charttype="pie" data-color="#F0AB00"></div>
                        <a href="javascript:void(0)" class="iconCorrelation">Tagcloud</a>
                        <div data-sap-widget="chart" data-topsmall="8" data-toplarge="25" data-title="TWEET-YEAR-TAG" data-dimension="tweet_time_year" data-charttype="tagcloud"></div>
                    </div>
                </div>
                
                <div data-sap-widget="workareaitem" data-title="TWEET BY MONTH">
                    <div data-sap-widget="switchbox" data-activeindex="1">
                        <a href="javascript:void(0)" class="iconBarChartHorizontal">Bar Chart</a>
                        <div data-sap-widget="chart" data-toplarge="25" data-title="TWEET-MONTH-BAR" data-dimension="tweet_time_month" data-charttype="bar" data-color="green"></div>
                        <a href="javascript:void(0)" class="iconPieChart">Pie Chart</a>
                        <div data-sap-widget="chart" data-title="TWEET-MONTH-PIE" data-dimension="tweet_time_month" data-charttype="pie" data-color="#F0AB00"></div>
                        <a href="javascript:void(0)" class="iconCorrelation">Tagcloud</a>
                        <div data-sap-widget="chart" data-topsmall="8" data-toplarge="25" data-title="TWEET-MONTH-TAG" data-dimension="tweet_time_month" data-charttype="tagcloud"></div>
                    </div>
                </div>
                
                <div data-sap-widget="workareaitem" data-title="TWEET BY DAY">
                    <div data-sap-widget="switchbox" data-activeindex="1">
                        <a href="javascript:void(0)" class="iconBarChartHorizontal">Bar Chart</a>
                        <div data-sap-widget="chart" data-toplarge="25" data-title="TWEET-DAY-BAR" data-dimension="tweet_time_day" data-charttype="bar" data-color="green"></div>
                        <a href="javascript:void(0)" class="iconPieChart">Pie Chart</a>
                        <div data-sap-widget="chart" data-title="TWEET-DAY-PIE" data-dimension="tweet_time_day" data-charttype="pie" data-color="#F0AB00"></div>
                        <a href="javascript:void(0)" class="iconCorrelation">Tagcloud</a>
                        <div data-sap-widget="chart" data-topsmall="8" data-toplarge="25" data-title="TWEET-DAY-TAG" data-dimension="tweet_time_day" data-charttype="tagcloud"></div>
                    </div>
                </div>
                
                <div data-sap-widget="workareaitem" data-title="TWEET BY HOUR">
                    <div data-sap-widget="switchbox" data-activeindex="1">
                        <a href="javascript:void(0)" class="iconBarChartHorizontal">Bar Chart</a>
                        <div data-sap-widget="chart" data-toplarge="25" data-title="TWEET-HOUR-BAR" data-dimension="tweet_time_hour" data-charttype="bar" data-color="green"></div>
                        <a href="javascript:void(0)" class="iconPieChart">Pie Chart</a>
                        <div data-sap-widget="chart" data-title="TWEET-HOUR-PIE" data-dimension="tweet_time_hour" data-charttype="pie" data-color="#F0AB00"></div>
                        <a href="javascript:void(0)" class="iconCorrelation">Tagcloud</a>
                        <div data-sap-widget="chart" data-topsmall="8" data-toplarge="25" data-title="TWEET-HOUR-TAG" data-dimension="tweet_time_hour" data-charttype="tagcloud"></div>
                    </div>
                </div>
                
                <div data-sap-widget="workareaitem" data-title="TWEET BY WEEK">
                    <div data-sap-widget="switchbox" data-activeindex="1">
                        <a href="javascript:void(0)" class="iconBarChartHorizontal">Bar Chart</a>
                        <div data-sap-widget="chart" data-toplarge="25" data-title="TWEET-WEEK-BAR" data-dimension="tweet_time_week" data-charttype="bar" data-color="green"></div>
                        <a href="javascript:void(0)" class="iconPieChart">Pie Chart</a>
                        <div data-sap-widget="chart" data-title="TWEET-WEEK-PIE" data-dimension="tweet_time_week" data-charttype="pie" data-color="#F0AB00"></div>
                        <a href="javascript:void(0)" class="iconCorrelation">Tagcloud</a>
                        <div data-sap-widget="chart" data-topsmall="8" data-toplarge="25" data-title="TWEET-WEEK-TAG" data-dimension="tweet_time_week" data-charttype="tagcloud"></div>
                    </div>
                </div>
                
                <div data-sap-widget="workareaitem" data-title="MOVIE BY YEAR">
                    <div data-sap-widget="switchbox" data-activeindex="1">
                        <a href="javascript:void(0)" class="iconBarChartHorizontal">Bar Chart</a>
                        <div data-sap-widget="chart" data-toplarge="25" data-title="MOVIE-YEAR-BAR" data-dimension="movie_year" data-charttype="bar" data-color="green"></div>
                        <a href="javascript:void(0)" class="iconPieChart">Pie Chart</a>
                        <div data-sap-widget="chart" data-title="MOVIE-YEAR-PIE" data-dimension="movie_year" data-charttype="pie" data-color="#F0AB00"></div>
                        <a href="javascript:void(0)" class="iconCorrelation">Tagcloud</a>
                        <div data-sap-widget="chart" data-topsmall="8" data-toplarge="25" data-title="MOVIE-YEAR-TAG" data-dimension="movie_year" data-charttype="tagcloud"></div>
                    </div>
                </div>
                
                <div data-sap-widget="workareaitem" data-title="MPAA RATING">
                    <div data-sap-widget="switchbox" data-activeindex="1">
                        <a href="javascript:void(0)" class="iconBarChartHorizontal">Bar Chart</a>
                        <div data-sap-widget="chart" data-toplarge="25" data-title="MOVIE-MPAA-BAR" data-dimension="mpaa_rating" data-charttype="bar" data-color="green"></div>
                        <a href="javascript:void(0)" class="iconPieChart">Pie Chart</a>
                        <div data-sap-widget="chart" data-title="MOVIE-MPAA-PIE" data-dimension="mpaa_rating" data-charttype="pie" data-color="#F0AB00"></div>
                        <a href="javascript:void(0)" class="iconCorrelation">Tagcloud</a>
                        <div data-sap-widget="chart" data-topsmall="8" data-toplarge="25" data-title="MOVIE-MPAA-TAG" data-dimension="mpaa_rating" data-charttype="tagcloud"></div>
                    </div>
                </div>
                
                <div data-sap-widget="workareaitem" data-title="RELEASE DATE BY YEAR">
                    <div data-sap-widget="switchbox" data-activeindex="1">
                        <a href="javascript:void(0)" class="iconBarChartHorizontal">Bar Chart</a>
                        <div data-sap-widget="chart" data-toplarge="25" data-title="RELEASE-YEAR-BAR" data-dimension="release_date_year" data-charttype="bar" data-color="green"></div>
                        <a href="javascript:void(0)" class="iconPieChart">Pie Chart</a>
                        <div data-sap-widget="chart" data-title="RELEASE-YEAR-PIE" data-dimension="release_date_year" data-charttype="pie" data-color="#F0AB00"></div>
                        <a href="javascript:void(0)" class="iconCorrelation">Tagcloud</a>
                        <div data-sap-widget="chart" data-topsmall="8" data-toplarge="25" data-title="RELEASE-YEAR-TAG" data-dimension="release_date_year" data-charttype="tagcloud"></div>
                    </div>
                </div>
                
                <div data-sap-widget="workareaitem" data-title="RELEASE DATE BY MONTH">
                    <div data-sap-widget="switchbox" data-activeindex="1">
                        <a href="javascript:void(0)" class="iconBarChartHorizontal">Bar Chart</a>
                        <div data-sap-widget="chart" data-toplarge="25" data-title="RELEASE-MONTH-BAR" data-dimension="release_date_month" data-charttype="bar" data-color="green"></div>
                        <a href="javascript:void(0)" class="iconPieChart">Pie Chart</a>
                        <div data-sap-widget="chart" data-title="RELEASE-MONTH-PIE" data-dimension="release_date_month" data-charttype="pie" data-color="#F0AB00"></div>
                        <a href="javascript:void(0)" class="iconCorrelation">Tagcloud</a>
                        <div data-sap-widget="chart" data-topsmall="8" data-toplarge="25" data-title="RELEASE-MONTH-TAG" data-dimension="release_date_month" data-charttype="tagcloud"></div>
                    </div>
                </div>
                
                <div data-sap-widget="workareaitem" data-title="RELEASE DATE BY DAY">
                    <div data-sap-widget="switchbox" data-activeindex="1">
                        <a href="javascript:void(0)" class="iconBarChartHorizontal">Bar Chart</a>
                        <div data-sap-widget="chart" data-toplarge="25" data-title="RELEASE-DAY-BAR" data-dimension="release_date_day" data-charttype="bar" data-color="green"></div>
                        <a href="javascript:void(0)" class="iconPieChart">Pie Chart</a>
                        <div data-sap-widget="chart" data-title="RELEASE-DAY-PIE" data-dimension="release_date_day" data-charttype="pie" data-color="#F0AB00"></div>
                        <a href="javascript:void(0)" class="iconCorrelation">Tagcloud</a>
                        <div data-sap-widget="chart" data-topsmall="8" data-toplarge="25" data-title="RELEASE-DAY-TAG" data-dimension="release_date_day" data-charttype="tagcloud"></div>
                    </div>
                </div>
                
                <div data-sap-widget="workareaitem" data-title="RELEASE DATE BY WEEK">
                    <div data-sap-widget="switchbox" data-activeindex="1">
                        <a href="javascript:void(0)" class="iconBarChartHorizontal">Bar Chart</a>
                        <div data-sap-widget="chart" data-toplarge="25" data-title="RELEASE-WEEK-BAR" data-dimension="release_date_week" data-charttype="bar" data-color="green"></div>
                        <a href="javascript:void(0)" class="iconPieChart">Pie Chart</a>
                        <div data-sap-widget="chart" data-title="RELEASE-WEEK-PIE" data-dimension="release_date_week" data-charttype="pie" data-color="#F0AB00"></div>
                        <a href="javascript:void(0)" class="iconCorrelation">Tagcloud</a>
                        <div data-sap-widget="chart" data-topsmall="8" data-toplarge="25" data-title="RELEASE-WEEK-TAG" data-dimension="release_date_week" data-charttype="tagcloud"></div>
                    </div>
                </div>
                
                <div data-sap-widget="workareaitem" data-title="MENTION" data-source-position="6">
                <div data-sap-widget="switchbox" data-activeindex="0">
                     <!-- Optional data-sap-widget="resultlist" attributes:
                     - data-cutoverflowresults
                         ="false": resultlist shall show all results regardless the size if the resultlist container
                         ="true":  (default) show only so many results that they fit into the resultlist container
                     - data-maxresults
                         ="10": (default) number of results to be shown
                     - data-responseattributes
                         ="": (default) use default attributes of view/collection (HANA database) for ODATA response
                     - data-highlightterms
                         ="true": (default) search term is highlighted
                     -->
                     <!-- First switchbox item -->
                     <a href="javascript:void(0)" class="iconListView">Result List</a>
                     <div data-sap-widget="resultlist"
                      data-showheader="false"
                          data-maxresultslarge="8" data-maxresultssmall="3" data-responseattributes="user_screen_name,user_profile_image_url,title,movie_year,mpaa_rating,runtime,release_date,poster,studio,sent,created_at_str,tweet_source,source_url,token,sentiment,text_head,text_tail,created_at_time_str,poster_pro,synopsis"
                          data-detailwidth="650" data-detailheight="500"
                          data-highlightterms="true" data-highlightedattributes="title" data-highlightmaxlength="100" data-highlightstartposition="1"
                          data-snippetterms="false" data-snippetedattributes="title"
                          data-opendetailevent="click" data-showdetail="true"
                          data-orderby="created_at_str"
                          data-sortorder="DESC"
                          data-resulttemplate="templates/tweet.html"
                          data-resulttemplate-css="templates/tweet.css"
                          data-resulttemplate-small="templates/tweet_small.html"
                          data-resulttemplate-small-css="templates/tweet_small.css"
                          data-resulttemplate-detail="templates/tweet_detail.html"
                          data-resulttemplate-detail-css="templates/tweet_detail.css">
                     </div>
                </div>
                </div>
            </div>  <!-- end of facetrepository div -->
    </div> <!-- end of workarea div -->
    <div class="clear"></div>
    </div> <!-- end of background div -->
</body>
</html>


Create full text index

Regarding the second key point in the last section, we'll now discuss creating the full text indexes. First, you may have noticed the following code snippet at the beginning of the HTML body, which I have not mentioned yet.

 

<div class="searchnfilterframe">
    <div data-useresponseattributes="false" data-usedimensions="true" data-sap-widget="searchnfilter"></div>
    </div>

 

So what's that? It displays a search filter at the top of the web application. It is arguably the heart and key feature of SAP HANA info access, which is why I'm covering it in this separate section. Because we want to enable searching on the dimensions used in our chart widgets instead of the attributes used in our result list, we set "data-useresponseattributes" to false and "data-usedimensions" to true. As you know, we can use full text indexes to search in SAP HANA. However, SAP HANA does not create these full text indexes for you automatically; currently we need to create them manually. That's why we need createFulltextIndex.js and createFulltextIndex.xsjs in our project. Since createFulltextIndex.js only calls createFulltextIndex.xsjs, we just need to change the URL in createFulltextIndex.js from SHINE. Here let's have a look at the server-side logic.

 

createFulltextIndex.xsjs

(function (){
    'use strict';
    //initialize variables
    var conn = null,
        body = '',
        prepStat = null,
        i,
        //initial database setup: the fulltext indexes to create
        statements = [
            'CREATE FULLTEXT INDEX TWEET_TA_TOKEN_I ON "MOVIE_RATING"."$TA_TWEETS_I" ("TA_TOKEN") SYNC SEARCH ONLY OFF',
            'CREATE FULLTEXT INDEX TWEET_TA_TYPE_I ON "MOVIE_RATING"."$TA_TWEETS_I" ("TA_TYPE") SYNC SEARCH ONLY OFF',
            'CREATE FULLTEXT INDEX TWEET_SOURCE_STR_I ON "MOVIE_RATING"."movieRating.data::movieRating.Tweets" ("source_str") SYNC SEARCH ONLY OFF',
            'CREATE FULLTEXT INDEX TWEET_CREATED_AT_YEAR_I ON "MOVIE_RATING"."movieRating.data::movieRating.Tweets" ("created_at_year") SYNC SEARCH ONLY OFF',
            'CREATE FULLTEXT INDEX TWEET_CREATED_AT_MONTH_I ON "MOVIE_RATING"."movieRating.data::movieRating.Tweets" ("created_at_month") SYNC SEARCH ONLY OFF',
            'CREATE FULLTEXT INDEX TWEET_CREATED_AT_DAY_I ON "MOVIE_RATING"."movieRating.data::movieRating.Tweets" ("created_at_day") SYNC SEARCH ONLY OFF',
            'CREATE FULLTEXT INDEX TWEET_CREATED_AT_HOUR_I ON "MOVIE_RATING"."movieRating.data::movieRating.Tweets" ("created_at_hour") SYNC SEARCH ONLY OFF',
            'CREATE FULLTEXT INDEX TWEET_CREATED_AT_WEEK_I ON "MOVIE_RATING"."movieRating.data::movieRating.Tweets" ("created_at_week") SYNC SEARCH ONLY OFF',
            'CREATE FULLTEXT INDEX MOVIE_TITLE_I ON "MOVIE_RATING"."movieRating.data::movieRating.Movies" ("title") SYNC SEARCH ONLY OFF',
            'CREATE FULLTEXT INDEX MOVIE_MPAA_RATING_I ON "MOVIE_RATING"."movieRating.data::movieRating.Movies" ("mpaa_rating") SYNC SEARCH ONLY OFF',
            'CREATE FULLTEXT INDEX MOVIE_YEAR_STR_I ON "MOVIE_RATING"."movieRating.data::movieRating.Movies" ("year_str") SYNC SEARCH ONLY OFF',
            'CREATE FULLTEXT INDEX MOVIE_STUDIO_I ON "MOVIE_RATING"."movieRating.data::movieRating.Movies" ("studio") SYNC SEARCH ONLY OFF',
            'CREATE FULLTEXT INDEX MOVIE_RELEASE_DATE_YEAR_I ON "MOVIE_RATING"."movieRating.data::movieRating.Movies" ("release_date_year") SYNC SEARCH ONLY OFF',
            'CREATE FULLTEXT INDEX MOVIE_RELEASE_DATE_MONTH_I ON "MOVIE_RATING"."movieRating.data::movieRating.Movies" ("release_date_month") SYNC SEARCH ONLY OFF',
            'CREATE FULLTEXT INDEX MOVIE_RELEASE_DATE_DAY_I ON "MOVIE_RATING"."movieRating.data::movieRating.Movies" ("release_date_day") SYNC SEARCH ONLY OFF',
            'CREATE FULLTEXT INDEX MOVIE_RELEASE_DATE_WEEK_I ON "MOVIE_RATING"."movieRating.data::movieRating.Movies" ("release_date_week") SYNC SEARCH ONLY OFF'
        ];
    try {
        //get connection and execute every statement
        conn = $.db.getConnection();
        for (i = 0; i < statements.length; i += 1) {
            prepStat = conn.prepareStatement(statements[i]);
            prepStat.execute();
            prepStat.close();
        }
        // --- commit changes
        conn.commit();
        body = 'fulltext index created';
        $.response.status = $.net.http.OK;
    }
    catch (e){
        // 289: index already exists, that's ok
        if(e.code && e.code !== 289){
            body = e.message;
            $.response.status = $.net.http.BAD_REQUEST;
        } else {
            body = 'fulltext index already exists';
            $.response.status = $.net.http.OK;
        }
    }
    finally {
        //close the connection in every case, including errors
        if (conn) {
            conn.close();
        }
    }
    $.response.contentType = 'text/plain';
    $.response.setBody(body);
}());
 

As you can see in the XSJS file above, we create 16 full text indexes, one for each dimension used in our chart widgets except SENTIMENT BY MOVIE, provided these indexes do not already exist; otherwise we handle the exception. Now you may ask, "Can I run these SQL statements in the SQL console manually?" Absolutely you can. But considering the DU transportation, it's better to use this dynamic approach.
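Since all 16 statements follow the same naming pattern (<PREFIX>_<COLUMN>_I), they could equally be generated from a short column list instead of being hard-coded. Here is a sketch in plain JavaScript; the helper and its naming convention are merely inferred from the statements above:

```javascript
// Generate CREATE FULLTEXT INDEX statements from a table/column list.
// The <PREFIX>_<COLUMN>_I index naming convention mirrors the one used
// in createFulltextIndex.xsjs.
function buildIndexStatements(schema, table, prefix, columns) {
    return columns.map(function (col) {
        return 'CREATE FULLTEXT INDEX ' + prefix + '_' + col.toUpperCase() + '_I' +
               ' ON "' + schema + '"."' + table + '" ("' + col + '")' +
               ' SYNC SEARCH ONLY OFF';
    });
}

var movieStatements = buildIndexStatements(
    'MOVIE_RATING',
    'movieRating.data::movieRating.Movies',
    'MOVIE',
    ['title', 'mpaa_rating', 'year_str', 'studio']
);
```

Executing the generated strings works exactly like executing the hard-coded statements; the only difference is that adding a new searchable dimension becomes a one-line change.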

 

Besides, we can also set the "Search Properties" in our AT_TWEETS.attributeview. For more details, please refer to Modeling Your Search Content - SAP HANA Developer Guide - SAP Library

 

24.PNG

 

Look & feel

Now it's time to show the look & feel! You can find four parts in the following screenshot.

  • Search filter in red box
  • Facet pane in green box with six default chart widgets
  • Content pane in blue box with result list
  • Facet repository in yellow box with 17 chart widgets, which you can drag and drop onto the facet pane.

As you can see, since we filter the movie title to "The Judge", everything, including the chart widgets and the result list, displays only content related to this movie.

 

25_1.png

 

Now let's click one mention and the following overlay container will appear. It shows you detailed tweet info and the mentioned movie info. From strong positive sentiment to strong negative sentiment, we use green to red, similar to a traffic light. In addition, the sentiment token is also highlighted in the tweet text. Remember that in Real-time sentiment rating of movies on SAP HANA (part 2) - data preparation, we inserted tweets into SAP HANA in real time, and the movie sentiment is continuously analyzed by SAP HANA. So with this smart app, we can analyze movie sentiment in real time! For instance, you can analyze the sentiment and number of mentions of a specific movie, or you can even analyze all movies newly released in the current week. Exciting?
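The green-to-red scale boils down to a lookup from the five sentiment values to a color. A minimal sketch follows; the hex values are illustrative, not taken from the app's stylesheet:

```javascript
// Map the five sentiment classes to traffic-light-style colors,
// from green (strong positive) to red (strong negative).
var sentimentColor = {
    StrongPositiveSentiment: '#2E7D32',  // dark green
    WeakPositiveSentiment:   '#8BC34A',  // light green
    NeutralSentiment:        '#FFC107',  // amber
    WeakNegativeSentiment:   '#FF7043',  // orange
    StrongNegativeSentiment: '#C62828'   // red
};

function colorFor(sentiment) {
    return sentimentColor[sentiment] || '#9E9E9E'; // grey fallback for unknown types
}
```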

 

26.PNG

 

Movie Sentiment Rating based on SINA

Still not enough? Don't worry. I've recorded a video for you. In the following video I'll show you how to use this smart app, built on SINA, to analyze movie sentiment.

 

 

Next steps

In this blog, we've jumped into SAP HANA info access and built our first movie smart app based on it. With the simplicity of SAP HANA info access, we are able to finish our first app in a very short time. What we need is just an attribute view and some front-end stuff like HTML, CSS and JavaScript. We did less but got more. In the next blog, we will have a look at sap.m (a package in SAPUI5) and build our movie app on mobile devices. See you soon.

 

Hope you enjoyed reading my blog and playing with the smart app.

With HANA all around (speaking about the HANA database, not HANA XS or HANA Cloud), I as an ABAP developer expect to need to leverage more database features. This consequently means coding more SQL, and that implies more headache for me. My brain is more used to imperative commands than to declarative expressions.

But technology does not care about my brain, so I have to get it used to this type of programming. Luckily, I don't need a HANA DB in order to train myself, so I started to write a short series about how to use SQL – oh wait, it's now called "code pushdown". This makes it easier to sell and might trick my brain into believing that this is not veeeery old stuff which I should have known for a long time, but funky new stuff which is worth investing time into.

This first post is about the very basics of joins – particularly LEFT OUTER JOINs in combination with restrictions.

 

Business background

In our application, the database schema is quite well normalized (don't ask me which normal form it is, but it should be the third one or higher; I was never eager to differentiate further). However, to the user, the data is presented in a denormalized form. While transactionally the dependent data is retrieved via an association, we also need to enable finding instances where dependent data matches some criteria. However, the non-existence of this dependent data must not prevent finding instances when not searching for attributes of the dependent data.

Implementation alternatives

Reading the requirement, the first thing that comes to every ABAPer's mind is a database view. Go to SE11, model your relations, use Open SQL – that's it. Unluckily not.

A DDIC DB view always joins tables with an INNER JOIN. This means that the right hand side table (RHST) needs to provide matching entries for each instance of the left hand side table (LHST) as a prerequisite for the joined data to exist. This makes DDIC DB views unusable for all those selections where data with a minimum cardinality of 0 shall be searchable. Thus, specifying the join as LEFT OUTER at runtime seems to be the only alternative. However, as the Open SQL exception tells us, placing a restriction on the RHST via Open SQL is also not possible. Both limitations – as far as I know – shall be obsolete when using Core Data Services (CDS). But as this is only available from 7.40 upwards, it can only be the subject of another post.

Until then, in our project we were forced to use native SQL based on LEFT OUTER JOINs in order to fulfill those requirements.

 

The INITIAL issue

After we had implemented a generic query class executing the native SQL, we felt quite comfortable having found a nice solution. Until the first tickets appeared in our bug tracking tool. As we're using a Floorplan Manager based UI, our users were enjoying the "new" search options – particularly the "is initial" and the "is not initial" options were attracting a lot of affection: "oh, this makes it so easy for me to find a … where no purchasing requisition has been created yet". Of course! "But it does not work!". The reason was that the custom table containing the purchasing requisition number was a dependent one of the table containing the instances to be searched for. With dependent_table.purch_req = ‘’, the desired instances for which the dependent data does not exist were not returned. When explaining the issue and its solution to my team, I felt that not every ABAPer is aware of this behavior.
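The pitfall can be reproduced without any database. The toy simulation below (plain JavaScript; field names mirror the example) shows why a purch_req = '' restriction misses rows that have no dependent entry at all:

```javascript
// LEFT OUTER JOIN keeps all header rows; rows without a dependent entry
// get NULL for purch_req, not the ABAP-initial ''.
function leftOuterJoin(heads, deps) {
    return heads.map(function (h) {
        var match = deps.find(function (d) { return d.id === h.id; });
        return { id: h.id, purch_req: match ? match.purch_req : null };
    });
}

var heads = [{ id: 1 }, { id: 2 }];
var deps  = [{ id: 1, purch_req: '' }]; // id 2 has no dependent row at all

var joined = leftOuterJoin(heads, deps);

// the restriction purch_req = '' silently drops the NULL row for id 2 ...
var isInitialNaive = joined.filter(function (r) { return r.purch_req === ''; });
// ... while purch_req = '' OR purch_req IS NULL returns what the user expects
var isInitialFixed = joined.filter(function (r) {
    return r.purch_req === '' || r.purch_req === null;
});
```

In SQL terms, the fix is to restrict with `purch_req = '' OR purch_req IS NULL` instead of the bare equality.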

Thus, I wrote a short report executing multiple selects in order to explain the different result sets.

Have a look at the following screenshot picturing the content of the joined tables and the different SELECT statements issued. Do you know all the results?

Sample_Selects_wo_results.PNG

 

You can find a screenshot of the report including results and explanations separately below. I did not want to spoil your curiosity by presenting the results directly.

 

Sample_Selects_results_1.PNG

sample_Selects_results_2.PNG

 

 

Did you expect all the results? There was one that surprised me. And to my even bigger surprise, it behaves differently on NW 7.31 and NW 7.40 – maybe I should open a message for that one…

You can find the report attached, including the report source, and alternatively a SAPLINK nugget containing the source tables if you're too lazy to create the two tables yourself.

 

Outlook

There is one massive limitation with respect to performance in this approach of using LEFT OUTER JOINs. Therefore, in a subsequent post, I may introduce the advantages of subqueries (once I have understood them myself to such an extent that I feel comfortable writing about them… )

 

Got feedback?

I’d love to read your comments on this topic and how you prepare yourself for the obviously upcoming re-incarnation of SQL – sorry: of the code pushdown.

openSAP: Next Steps in Software Development on SAP HANA – my thoughts after week 4

 

This week we looked at OData and ways to consume the data. The OData protocol allows you to create, read, update, and delete data via a URL.

The OData protocol is a powerful part as it provides an endpoint that allows access to the SAP HANA database. This access can come from an xsjs or html file on the HANA server, but it can also come from somewhere else.
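The URL-centric nature of OData is easy to see in code. The sketch below composes a typical read URL; the service path and entity set name are made up for illustration:

```javascript
// Compose an OData read URL from a service path, an entity set and
// system query options ($top, $filter, $format, ...).
function odataUrl(service, entitySet, options) {
    var query = Object.keys(options).map(function (key) {
        return '$' + key + '=' + encodeURIComponent(options[key]);
    }).join('&');
    return service + '/' + entitySet + (query ? '?' + query : '');
}

var url = odataUrl('/demo/services/orders.xsodata', 'Orders',
                   { top: 10, format: 'json' });
```

A GET on such a URL reads data; POST, PUT and DELETE against the same entity set cover the create, update and delete cases.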

 

 

odata.jpg

Source: OData.org

 

In SAP HANA you have functions to do aggregations and you can define associations; these associations can be picked up later by SAPUI5 components for added functionality (look for the purchase order header and item sample).

And typical for SAP are the OData4SAP annotations: typical in that when you enable this extension it will deliver the label, the unit and the text (but not the currency).

 

What I don't remember from the initial HANA course, but apparently available since SPS 06, are SQLScript extensions. These extensions are procedures that are triggered by the OData service. The input is basically the data that is passed from the OData service, i.e., the record to be created or the update of a record.
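What such a validation procedure does is conceptually simple. The sketch below expresses it as a plain function on the incoming record; in a real exit the rows arrive via the temporary table handed to the procedure, and the field names and length limit here are assumptions for illustration:

```javascript
// Validate a record before it is created via the OData service.
function validateTweet(record) {
    var errors = [];
    if (!record.id) {
        errors.push('id is required');
    }
    if (record.text && record.text.length > 140) {
        errors.push('text exceeds 140 characters');
    }
    return errors; // empty array means the record may be created
}
```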

What you can do in the trigger is validation, but if you want you can get creative and think of other things, such as an online ticket sales procedure that checks the timestamp of the order to determine which place in the virtual line the prospective buyer has for the final ticket of that important concert of <insert your favorite music here>.

This is an improvement over the earlier version, as I remember that previously you had either a straight OData service with nothing else, or an xsjs service.

Another part of this week was batch requests and deep inserts. The latter was interesting as it provides a way to maintain relationships between two tables. For example, if you have purchase order headers and items, the item table needs the foreign key ordernumber to keep a relation to the header table.

The deep insert functionality creates an ID placeholder that will be replaced by the actual ordernumber ID in the end. It's a shame that SAPUI5 doesn't support it yet; the example was a concatenation of string variables to manually create an OData request.
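To make the deep insert idea concrete, here is a sketch of the kind of payload such a request would carry: one header plus its items in a single POST body. All entity and property names are illustrative, not taken from the course exercises:

```javascript
// Build a deep-insert body: the items are nested inside the header,
// so the server can fill in the real order number for both at once.
function buildDeepInsertBody(header, items) {
    return JSON.stringify({
        PONumber: header.poNumber,
        Supplier: header.supplier,
        Items: items.map(function (item, idx) {
            return { ItemPos: (idx + 1) * 10, Product: item.product, Quantity: item.quantity };
        })
    });
}

var payload = buildDeepInsertBody(
    { poNumber: '4500000001', supplier: 'ACME' },
    [{ product: 'HT-1000', quantity: 2 }, { product: 'HT-1001', quantity: 1 }]
);
// payload would be POSTed to the header entity set of the OData service
```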

 

Exercises:

This week we had to type A LOT for the exercises. (I know you can cut & paste, but I like to type at least part of it in, to better digest the lessons.) Especially interesting was using SAPUI5 to create a table using the OData source. It takes a little getting used to, as the entire setup of SAPUI5 is that you declare variables based on objects.

 

You set properties in a JSON variable:

var myButtonproperties = {text: "Hello World!", tooltip: "Hello Tooltip!"};

 

Then you create a new instance using those properties:

var myButton = new sap.ui.commons.Button (myButtonproperties);

 

you can set properties via methods:

myButton.setText("Hello World!");

 

and finally you can attach functions to events:

 

myButton.attachPress(function(){$("#btn").fadeOut()});

 

and if you want to have more buttons you can try something like this:

var namearr = ["btn1", "btn2", "btn3"];
                var textarr = ["text1", "text2", "text3"];
                var buttons = new array
               
                for (i = 0; i < namearr.length; i++) { 
                                buttons.push(new sap.ui.commons.Button(namearr[i]));
                                buttons[i].setText(textarr[i]);
                                               buttons[i].attachPress(function(){$(namearr[i]).fadeOut();})}

This will give you an array of buttons that you can place all over your document.

But admittedly the last one was a bit of playtime after the exercises. 🙂

Your Chance to Provide Feedback to SAP

 

SAP developers and employees from the SAP User Experience team are giving you the opportunity to try out and provide feedback on the new SAP® HANA Modeler based on the SAP Web Integrated Development Environment by participating in a usability test.

 

The test examines the new cloud-based development environment that allows you to create complex analytic models right inside your browser. Experience new ways of user interaction to work with the HANA Modeler in the WebIDE and create Analytic Models (Calculation Views & Analytic Privileges) with SAP's cutting-edge HTML5 technologies.

 

During the test you will evaluate the ease-of-use, usability and functional completeness of the HANA Modeler in WebIDE.

 

Targeted roles are consultants or dedicated data modelers who create, manage and deploy analytic models on HANA, as well as application/content developers responsible for building analytical models on HANA for further consumption in HANA native applications or in reporting tools.

 

If you are interested in discovering the latest developments in this area, please sign up for this topic:

 

SAP HANA – SAP HANA Modeler in SAP Web Integrated Development Environment

 

 

As a tester, you can actively work on the prototypes or products during the test. At the end of the test, you will have an opportunity to share your feedback on the prototype. Please note that it is the tool that is being tested, not you, to further enhance the user experience of SAP's products.

 

Testing will start on Tuesday, November 11th, 2014 after the keynote.

 

Access to the full topic catalog and registration instructions for the usability testing can be found here: https://usability.sap.com/

 

See you in Berlin!

For the SAP HANA Modeling and Product Management team, Christoph Morgen

Recently, I experienced something while modeling a HANA information view that I want to share with everyone.


Case:

Dynamic Filter for projection with current Fiscal Year and Fiscal Period.


Method:

In order to filter it dynamically, I decided to use an input parameter. There are different parameter types that we can use to do this. I decided to use the "Derived from table" type as shown below:



Challenge:

When filtering the column, it displays a list of values from the same HANA information view and we can select the date. Now, as the requirement is the current Fiscal Year and Period, how do we filter the column to dynamically select the current date in the "Filter Value" section?


Solution:

Like everyone, I expected it to understand the expression. So I tried the following and it worked. I used the expression


currentdate()




I was aware that I could use an expression in the "Default Value" section (as explained in the SAP HANA Developer Guide) but didn't expect it to work in "Filter Value". I was able to filter the projection using this input parameter. To test the filtering, I removed the input parameter and the projection displayed all the years rather than just the current year. Hopefully, when I get an explanation for this, I will share it here. For now, you can try different expressions.
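For context, deriving the "current fiscal year and period" from the current date is itself a small computation. The sketch below assumes a fiscal year starting in April, which is one common variant chosen purely for illustration, not taken from the actual project:

```javascript
// Derive fiscal year and period from a date, assuming the fiscal
// year starts on April 1st (period 1 = April).
function fiscalYearPeriod(date) {
    var month = date.getMonth() + 1; // 1..12
    return {
        year:   month >= 4 ? date.getFullYear() : date.getFullYear() - 1,
        period: month >= 4 ? month - 3 : month + 9
    };
}

var fp = fiscalYearPeriod(new Date(2014, 10, 11)); // November 2014
```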

You can find the whole series and the project in Real-time sentiment rating of movies on SAP HANA (part 6) - wrap-up and looking ahead

 

 

 

Intro

So far we have not needed to worry about a lack of data, since we prepared the metadata of new release movies and tweets in Real-time sentiment rating of movies on SAP HANA (part 1) - create schema and tables and Real-time sentiment rating of movies on SAP HANA (part 2) - data preparation. With job scheduling, the data is inserted into SAP HANA automatically and continuously. So, in this blog we will run some text analysis on our data and prepare some models which will be consumed later in the UI part.

 

Sentiment analysis on tweets

I won't explain the details of text analysis in SAP HANA. If you are interested in SAP HANA text analysis, you can have a look at SAP HANA Platform – SAP Help Portal Page, "SAP HANA Search and Text Analysis References" part.

 

Regarding our scenario, we want to do sentiment analysis on tweets, especially on the "text" field. Now let's do it! It's not difficult to implement the sentiment analysis part, since what we need is just a full text index. Unfortunately, we cannot create full text indexes via SAP HANA XS currently. So, in order to realize this, we have to run the following SQL statements manually.

 

CREATE FULLTEXT INDEX "MOVIE_RATING"."TWEETS_I" ON "MOVIE_RATING"."movieRating.data::movieRating.Tweets" ("text") CONFIGURATION 'EXTRACTION_CORE_VOICEOFCUSTOMER' ASYNC LANGUAGE DETECTION ('EN') TEXT ANALYSIS ON;
ALTER TABLE "MOVIE_RATING"."$TA_TWEETS_I" ADD CONSTRAINT TWEETS_FK FOREIGN KEY("id") REFERENCES "MOVIE_RATING"."movieRating.data::movieRating.Tweets"("id") ON DELETE CASCADE;
 

Here is the explanation.

 

- With the first SQL statement, we create a full text index "TWEETS_I" on the "text" field of the tweets table. In addition, we use the voice-of-customer configuration, which detects the sentiment of the "text" field, and since we only search tweets in English, we can limit language detection to English. After you run this SQL statement successfully, a new table named "$TA_TWEETS_I" is created which maintains the analysis results. For the syntax of "CREATE FULLTEXT INDEX", please refer to CREATE FULLTEXT INDEX - SAP HANA SQL and System Views Reference - SAP Library

 

- The second SQL statement is optional. We just define a foreign key "id" in the automatically generated table "$TA_TWEETS_I" which references the "id" column in our "Tweets" table. Imagine we delete a tuple in the "Tweets" table: the corresponding text analysis records will also be deleted automatically.
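The effect of the ON DELETE CASCADE constraint can be pictured with a toy model in plain JavaScript: deleting a tweet takes its analysis rows with it.

```javascript
// Simulate the cascade: removing a tweet also removes the matching
// rows in the $TA_TWEETS_I analysis table.
function deleteTweet(db, tweetId) {
    db.tweets   = db.tweets.filter(function (t) { return t.id !== tweetId; });
    db.taTweets = db.taTweets.filter(function (r) { return r.id !== tweetId; });
    return db;
}

var db = {
    tweets:   [{ id: 1, text: 'great movie' }, { id: 2, text: 'bad movie' }],
    taTweets: [{ id: 1, TA_TOKEN: 'great' },   { id: 2, TA_TOKEN: 'bad' }]
};
deleteTweet(db, 1);
```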

 

Now we can see what's going on in the "$TA_TWEETS_I" table. You will find that with its text analysis capability, SAP HANA is able to detect the type, e.g., SOCIAL_MEDIA/TOPIC_TWITTER, DAY, DATE, URI/URL, PRODUCT, sentiment, etc. (in the "TA_TYPE" column) of the extracted token (in the "TA_TOKEN" column). Here I highlighted some records in blue boxes and red boxes respectively. For example, there was a movie titled "Justice is Mind"; since we searched tweets with the hashtag "#JusticeisMind" for this movie, you can find lots of "#justiceismind" occurrences in the "TA_TOKEN" field, marked in blue boxes. As our smart app focuses on sentiment analysis, I selected some records in red boxes, e.g., "great" is tagged as "StrongPositiveSentiment" and "Nice" is tagged as "WeakPositiveSentiment". Everything seems reasonable.

 

10_1.png

 

GENERATED ALWAYS AS

Because we plan to do some analysis based on the time dimension, let's first have a look at what we have now. From the last two pictures in Real-time sentiment rating of movies on SAP HANA (part 2) - data preparation, you can see that we have the "release_date" of movies and the "created_at" of tweets.

 

Movies

8_1.png

 

Tweets

9_1.png

 

But we want more, such as year, month, and day, so we can create some "GENERATED ALWAYS AS" columns to achieve this. Since CDS does not currently support this feature, we need to write some SQL statements manually. You can find the syntax in ALTER TABLE - SAP HANA SQL and System Views Reference - SAP Library or CREATE TABLE - SAP HANA SQL and System Views Reference - SAP Library, which means you can create "GENERATED ALWAYS AS" columns not only when you create tables but also when you alter them.

 

We create the following "GENERATED ALWAYS AS" columns for movies.

ALTER TABLE "MOVIE_RATING"."movieRating.data::movieRating.Movies" ADD ("release_date_year" NVARCHAR(20) GENERATED ALWAYS AS TO_NVARCHAR("release_date", 'YYYY'));
ALTER TABLE "MOVIE_RATING"."movieRating.data::movieRating.Movies" ADD ("release_date_month" NVARCHAR(20) GENERATED ALWAYS AS TO_NVARCHAR("release_date", 'MON'));
ALTER TABLE "MOVIE_RATING"."movieRating.data::movieRating.Movies" ADD ("release_date_day" NVARCHAR(20) GENERATED ALWAYS AS TO_NVARCHAR("release_date", 'DD'));
ALTER TABLE "MOVIE_RATING"."movieRating.data::movieRating.Movies" ADD ("release_date_week" NVARCHAR(20) GENERATED ALWAYS AS TO_NVARCHAR("release_date", 'WW'));
ALTER TABLE "MOVIE_RATING"."movieRating.data::movieRating.Movies" ADD ("year_str" NVARCHAR(20) GENERATED ALWAYS AS TO_NVARCHAR("year"));
 

Meanwhile we create the following "GENERATED ALWAYS AS" columns for tweets.

ALTER TABLE "MOVIE_RATING"."movieRating.data::movieRating.Tweets" ADD ("created_at_year" NVARCHAR(20) GENERATED ALWAYS AS TO_NVARCHAR("created_at", 'YYYY'));
ALTER TABLE "MOVIE_RATING"."movieRating.data::movieRating.Tweets" ADD ("created_at_month" NVARCHAR(20) GENERATED ALWAYS AS TO_NVARCHAR("created_at", 'MON'));
ALTER TABLE "MOVIE_RATING"."movieRating.data::movieRating.Tweets" ADD ("created_at_day" NVARCHAR(20) GENERATED ALWAYS AS TO_NVARCHAR("created_at", 'DD'));
ALTER TABLE "MOVIE_RATING"."movieRating.data::movieRating.Tweets" ADD ("created_at_hour" NVARCHAR(20) GENERATED ALWAYS AS TO_NVARCHAR("created_at", 'HH24'));
ALTER TABLE "MOVIE_RATING"."movieRating.data::movieRating.Tweets" ADD ("created_at_week" NVARCHAR(20) GENERATED ALWAYS AS TO_NVARCHAR("created_at", 'WW'));
ALTER TABLE "MOVIE_RATING"."movieRating.data::movieRating.Tweets" ADD ("source_str" NVARCHAR(200) GENERATED ALWAYS AS SUBSTR_BEFORE(SUBSTR_AFTER("source", '>'), '<'));
 

That's it. Now you can see the additional columns in the "Movies" and "Tweets" tables.

 

Movies

11_1.png

 

Tweets

12_1.png

 

Modeling

Now let's create some information views which can be easily consumed in the UI part.

 

1. AT_TWEETS.attributeview

 

Step 1: Data Foundation

This attribute view is focused on sentiments with tweet and movie info, so we use the three tables "$TA_TWEETS_I", "Tweets" and "Movies" as the data foundation. The sub-steps are as follows.

  1. Join "$TA_TWEETS_I" and "Tweets" on the ID of the tweet;
  2. Join "Tweets" and "Movies" on the ID of the movie;
  3. Filter "TA_TYPE" to the five values "StrongPositiveSentiment", "WeakPositiveSentiment", "NeutralSentiment", "WeakNegativeSentiment" and "StrongNegativeSentiment", i.e., we only keep tweets for which a sentiment was detected and ignore tweets without sentiment;
  4. Add the necessary columns to the output and create some calculated columns based on them.

13.PNG

 

Step 2: Semantics

In this step, we define the key attributes and hide the columns that are not needed.

14.PNG

 

Step 3: Validate, activate and preview data

15.PNG

 

The following list displays the final output attributes.

  • token: the sentiment token, e.g., great, nice, wonderful, amazing, love, bad, ...
  • sentiment: StrongPositiveSentiment/WeakPositiveSentiment/NeutralSentiment/WeakNegativeSentiment/StrongNegativeSentiment
  • user_screen_name: the screen name of the user who posted the tweet
  • user_profile_image_url: the URL of the user profile icon
  • tweet_source: the source of the tweet, e.g., Twitter Web Client, Twitter for iPhone, ...
  • tweet_time_year: year of created_at
  • tweet_time_month: month of created_at
  • tweet_time_day: day of created_at
  • tweet_time_hour: hour of created_at
  • tweet_time_week: week of created_at
  • movie_id: movie ID
  • title: movie title
  • mpaa_rating: MPAA rating
  • movie_year: the year of the movie
  • runtime: movie runtime
  • release_date: movie release date in theater
  • poster: the URL of movie poster in thumbnail
  • studio: the studio of the movie
  • id_counter: the primary key
  • sent: sentiment in text like "Strong Positive Sentiment" instead of "StrongPositiveSentiment"
  • created_at_str: created_at in 'YYYY-MM-DD HH24:MI:SS' format
  • created_at_time_str: created_at in 'MM-DD HH24:MI:SS' format
  • source_url: the URL of the tweet source, e.g., "Download the free Twitter app | Twitter"
  • text_head: the text before the token
  • text_tail: the text after the token
  • synopsis: movie synopsis
  • release_date_year: year of release_date
  • poster_pro: the URL of movie poster in profile
  • release_date_month: month of release_date
  • release_date_day: day of release_date
  • release_date_week: week of release_date

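The `sent` attribute above is just the camel-cased `TA_TYPE` value split into words. A minimal sketch of that conversion in Python (the view itself does this with HANA string functions; this helper is purely illustrative):

```python
import re

def split_camel_case(label):
    # Insert a space before each interior uppercase letter,
    # e.g. "StrongPositiveSentiment" -> "Strong Positive Sentiment"
    return re.sub(r"(?<=[a-z])(?=[A-Z])", " ", label)

print(split_camel_case("StrongPositiveSentiment"))  # Strong Positive Sentiment
print(split_camel_case("NeutralSentiment"))         # Neutral Sentiment
```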
 

2. CV_MOVIES.calculationview

Compared with the attribute view above, this calculation view focuses more on the newly released movies themselves. Given a release date range, we want to get several attributes and calculated measures of the movies in that range, e.g., the number of mentions and the rating score.

 

Step 1: We use SQLScript instead of a graphical view. Here is what we want to get:

  • id: movie ID
  • title: movie title
  • poster: the URL of movie poster in profile
  • release_date: the release date of movie
  • mention: # of sentiments detected related with this movie
  • rating: the rating score

 

rating = (# of strong positive sentiment * 10 + # of weak positive sentiment * 8 + # of neutral sentiment * 6 + # of weak negative sentiment * 4 + # of strong negative sentiment * 2) / # of total sentiments
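In code, the rating formula reads as a weighted average over the five sentiment buckets. A small Python sketch of the calculation (the counts used here are hypothetical):

```python
def movie_rating(strong_pos, weak_pos, neutral, weak_neg, strong_neg):
    # Weighted average: strong positive weighs 10, down to strong negative at 2
    total = strong_pos + weak_pos + neutral + weak_neg + strong_neg
    if total == 0:
        return None  # no sentiment detected for this movie
    score = (strong_pos * 10 + weak_pos * 8 + neutral * 6
             + weak_neg * 4 + strong_neg * 2)
    return score / total

print(movie_rating(50, 30, 10, 5, 5))  # 8.3
```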

 

16.PNG

 

Step 2: Define input parameters date_from and date_to

 

17.PNG

 

Step 3: Add target columns

 

18.PNG

 

Step 4: Handle semantics

 

19.PNG

 

Step 5: Choose dates and preview data

 

20.PNG

 

21.PNG

 

Next Steps

So far we can automatically analyze the sentiment of the real-time tweets we searched for in Real-time sentiment rating of movies on SAP HANA (part 2) - data preparation, and we've built two information views, one for sentiments and the other for movies. In the next two blogs we'll have a look at the UI part and consume the information views from this blog.

 

Hope you enjoyed reading my blog.

We all start out doing things the wrong way, and then wonder how we get ourselves back out of it.

 

Over the last couple of weeks, I've been trying to move us to a better development method by building objects in HANA using the SAP HANA Development perspective. Unfortunately, we had already gone and built a whole series of tables using SQL. I know, bad boys and so on, but we're trying to get back on the right track.

 

So we've finally started building our new tables using hdbtable files (and hdbdd files), but how can we reverse engineer the tables we've already built?

 

The answer lay in the catalog tables.

 

What started as a seemingly painful problem ended up as a fairly simple query that I wrote and fine-tuned in a couple of hours in between other things.

 

This is what I ended up with:

 

SELECT
  '{name = "' || A.COLUMN_NAME || '";'
  || ' sqlType = ' || A.DATA_TYPE_NAME
  || '; nullable = ' || LOWER(A.IS_NULLABLE)
  || ';' ||
  CASE
    WHEN A.DATA_TYPE_NAME IN ('CHAR','VARCHAR')
      THEN ' length = ' || TO_CHAR(A.LENGTH) || ';'
    WHEN A.DATA_TYPE_NAME IN ('FLOAT','DECIMAL')
      THEN ' precision = ' || TO_CHAR(A.LENGTH) || '; scale = ' || TO_CHAR(A.SCALE) || ';'
    ELSE ''
  END
  ||
  CASE
    WHEN A.COMMENTS IS NULL THEN ''
    ELSE ' comment = "' || A.COMMENTS || '";'
  END
  || '},' AS COLUMN_COMMAND
FROM TABLE_COLUMNS A
WHERE A.SCHEMA_NAME = <schema_name>
AND A.TABLE_NAME = <table_name>
ORDER BY A.COLUMN_ID

 

The script output looks like this excerpt from one of our tables:

 

{name = "CCYY_IND"; sqlType = VARCHAR; length = 4;},
{name = "ACCT_PERIOD"; sqlType = INTEGER;},
{name = "ACTUAL_AMOUNT"; sqlType = DECIMAL; precision = 15; scale = 2;},

 

 

Once I have this, it's just a matter of creating a new hdbtable file, completing the schema name using the demo lines and pasting in the results from the above script. I've attached a copy of the script file, and an actual file from our landscape.
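The string-assembly logic of the query can also be sketched in Python, which is handy for sanity-checking the expected output for a given column. The function below mirrors the SQL CASE branches; the arguments stand in for TABLE_COLUMNS fields and are illustrative only:

```python
def column_command(name, data_type, length=None, scale=None,
                   nullable=True, comment=None):
    # Mirrors the SQL: length for character types, precision/scale
    # for FLOAT and DECIMAL, optional trailing comment
    cmd = '{name = "%s"; sqlType = %s; nullable = %s;' % (
        name, data_type, str(nullable).lower())
    if data_type in ("CHAR", "VARCHAR"):
        cmd += " length = %d;" % length
    elif data_type in ("FLOAT", "DECIMAL"):
        cmd += " precision = %d; scale = %d;" % (length, scale)
    if comment:
        cmd += ' comment = "%s";' % comment
    return cmd + "},"

print(column_command("ACTUAL_AMOUNT", "DECIMAL", 15, 2))
# {name = "ACTUAL_AMOUNT"; sqlType = DECIMAL; nullable = true; precision = 15; scale = 2;},
```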

 

Hope this helps someone else out there.

IMG_8564s.jpg

Last week the SAP Big Data Truck visited Poland during its European tour: two days of intensive discussions in Warsaw followed by a one-day stop in Poznań.

 

Poznań was on the map for a good reason: the Big Data Smarter conference was organized that day at the University of Economics. SAP has a long-standing University Alliances relationship with this university through cooperation with Professor Witold Abramowicz and his team at the Department of Information Systems. It is worth mentioning that two SAP Mentors from Poland, Marek Kowalkiewicz and Michal Krawczyk, both graduated from the University of Economics in Poznań too.

 

The conference started with introductory sessions: the first discussing the history and context of Big Data, presented by Prof Abramowicz, and the second presenting the inner technology of the SAP HANA platform, delivered by Karol Dworak.

 

IMG_8579s.jpg

Prof Abramowicz reminded us that "Big Data" may be a new buzzword, but it is not a new challenge. The Volume, Variety and Veracity of data have been around whether we were talking about 2 MB in the 1980s or 2 PB in the 2010s.

 

Then Prof Abramowicz's team presented applications they had been building with SAP HANA and other technologies. An impressive spectrum of Big Data use cases was covered:

  • Analysis of demand and supply on the energy market
  • Sentiment analysis on the consumer market
  • Management of micronetworks of energy prosumers
  • Applying graph processing and visualization to analysis of social networks
  • Threat detection in marine traffic based on AIS data
  • Optimization of public procurement
  • Monitoring (illegal) content in deep web

 

The University of Economics in Poznań and SAP will continue their partnership through the announced Smart and Big Data Science Research & Innovation Center.

This week we took a deeper dive into actual coding and advanced language features. First we revisited the architecture, and the point was reiterated that data-intensive logic is moved to the database. This week's episodes were presented by Rich Heilman.

But this week is all about coding, so we immediately move into the .hdbprocedure files and the SQLScript editor. Luckily, the amount of help in the editor has increased: syntax highlighting, code hints and syntax checking are now all there.

Rich made the point that type definitions have now moved to CDS, whereas before you would declare them here. Additionally, you can now define where the runtime artifact will live.


Inside the procedure we get to hear about declarative logic and imperative logic. The first is the workhorse where the parallelization takes place; the second is more for looping, decisions and data manipulation. Finally, arrays are mentioned and how to use them: with UNNEST you can take their values and move them into tables. A bit confusing is that the first index number is apparently 1? As JavaScript starts at 0, you might have to switch back and forth in your thinking as you develop applications. Finally, a look at dynamic filtering, a nice little way to add filters on the fly.

 

Triggers are usable. In a current project we just had a discussion about how we want to log who made which changes, and these triggers may be the answer to that. You can set up triggers to perform actions on table updates. Additionally, you can respond to events, but this is called exception handling: you can perform some actions or re-signal to the user that he's done something that our beloved procedure doesn't like. How dare he ruin our application!

By now I start noticing that we are looking at a lot of live demos of procedures that do what Rich just said.

Continuing with SQLScript, we get into user-defined functions. These functions come in two flavors: table or scalar. A table function can take a lot of input parameters and will return you a... table.


Scalar functions, by contrast, can return multiple values. The only thing that still puzzles me a bit is when to use an hdbprocedure and when a user-defined function. I can build input and output in the former, so probably I am still missing something here. So if anyone can enlighten me, please do. Otherwise I'll just post a question on the forum.


The AFL (Application Function Library) is actually quite well known to me, as I already used some predictive functions to simulate a worldwide market. A shame, though, that you still need all the tables as parameter tables. The Business Function Library is a collection of business functions, primarily finance functions such as depreciation.


In the late stages of this week we come to the plan visualizer. At first I wasn't too keen on this thing, as I thought that HANA itself should manage how it does all the work, but after the example my opinion changed. This is a great map to see which functions all contribute to the final result. With the simulation of the market I mentioned earlier, I actually had over 30 views and procedures that were interconnected, and some contained Cartesian relations (deliberately ;-) ). I overdid this so much that one query took over 30 seconds in HANA. I was quite proud of that actually, although for an audience it can be a hard sell: "look, we made HANA slow!"

A tool like this would probably have helped to do some tweaking as you are able to find the needle in your function haystack that is responsible.


The last theme of the week is ABAP, or in full, "ABAP Managed Database Procedures". Now you are able to write SQLScript in the ABAP layer, and it was shown how SQLScript in an ABAP function can perform much faster than badly written (according to Rich) ABAP.

Actually, I was wondering about ABAP in HANA in general. What is the future? I can imagine that in the future you would fit all the specific SAP methods into the AFL and use SQLScript, JavaScript, Fiori, River etc. for your custom programming. So where does ABAP actually fit in here? Does it do anything specific that keeps the need for an extra language for a long time? Or is it there to enable people to hold on to their code?

This week was all about code. In addition to showing how things are done, it raised new questions. I guess it is supposed to be this way, as the course is a starting point to get going.


Context:

There are times when a particular procedure, say X, needs to be executed only as part of a workflow, or else it may cause unwanted behavior. This is particularly true when the procedure deals with DDLs and DMLs, as the workflow might involve a chain of checkpoints with the output of one checkpoint feeding into the next as input; the schema structure is at stake. Such procedures need to have restricted access.

A typical application will have a roles and privileges framework, which restricts or grants certain users from or to creating, dropping and executing DB artifacts (procedures, tables, views etc.). So for the users who do have the privilege to create or modify DB artifacts, it's always possible to create multiple wrappers around the procedure X and execute the wrappers, or to directly execute X outside the workflow. So for DB artifacts, I prefer to think of two groups of users: one is the server, the other the client. While the server has the privileges to create/modify/delete (collectively called serve) and execute the procedures, the client can only execute the DB artifact.

Restricting the client from executing X outside the workflow therefore becomes simple: we just give EXECUTE access on the schema containing X to the client(s). But this by itself is not enough. Let's say the relevant user in question is UserA and the name of the schema is RUNTIME_SCHEMA. UserA might still log on to the HANA studio and run the procedure from the backend, outside the workflow, as he has the EXECUTE rights mentioned before. UserA cannot make any DDL/DML changes to any of the DB artifacts in RUNTIME_SCHEMA, unless they are done from within SQLScript calls on procedures in the schema RUNTIME_SCHEMA. UserA has all the rights on his own schema, UserA: he can create procedures in his schema that fire DDLs/DMLs on the artifacts in RUNTIME_SCHEMA.
But those procedures won't execute, as the user does not have the appropriate DML privileges beyond the scope of the EXECUTE privilege on RUNTIME_SCHEMA. Therefore the user is completely locked out from making any changes to the artifact X. So we are good with UserA, and likewise with the other client(s). The server user, or the user group that created the procedure X in the first place, is the one who can also modify the procedure. Let's assume he (or the group) modifies it for legitimate reasons as part of the workflow. This user group can also control the possible point(s) of entry into the procedure X as part of the workflow strategy. It is even possible to limit the points of entry to none, if the procedure is supposed to be "non-wrappable" (i.e., the procedure can't have a parent), so to speak.


Use Case:

1. Providing access control to procedures: to ensure that the called procedure is executed only when called from a specific caller or caller group and not from other procedures, knowing the caller of the procedure can be handy. This is typically the case when you want a single or limited point(s) of entry into the called procedure, e.g., when creating a service model implementing design patterns at the DB level, particularly factory methods. Furthermore, the called procedure might be a decision table traversing different paths based on the specific caller from the caller group; one might say the called procedure behaves as a router. Alternatively, we may think of the caller as the object and the called procedure as a private method of the object, which can therefore be executed only from within the caller and not from the outside world. This, as I see it, is a way to implement the access-specifier part of OOP in a declarative/imperative hybrid such as SQL. Not that this can't be done statically: a procedure X calling Y can pass its name 'X' to Y, but then a procedure Z can just as well pass 'X' to Y and forge X. It also involves a lot of code maintenance.

Example Pseudocode

create procedure schema_name.Inner as begin
//find the caller
....
...
...
if caller is 'Outer1' then
     execute model1
elseif caller is 'Outer2' then
     execute model2
else //for other callers not authorized to run this procedure
     log a 'No authorization' message
     return
end if;
end;
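For comparison, the same routing-by-caller idea expressed in application code: in this Python sketch, the call stack plays the role that M_PREPARED_STATEMENTS plays in the HANA implementation (function names are made up, illustrative only):

```python
import inspect

def inner():
    # Determine the immediate caller's function name from the call stack
    caller = inspect.stack()[1].function
    if caller == "outer1":
        return "model1"
    elif caller == "outer2":
        return "model2"
    else:
        # Callers not authorized to run this procedure
        return "No authorization"

def outer1():
    return inner()

def forged():
    # Not in the allowed caller group, so inner() rejects it
    return inner()

print(outer1())  # model1
print(forged())  # No authorization
```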

2. Another use case that comes to mind is logging the caller procedure. One might have multiple points of entry into the called procedure in the same workflow at different points in time. Logging the name of the caller along with other metadata, such as the checkpoint or the timestamp, can help in troubleshooting, which would otherwise need scrutiny of all the possible callers one by one; that can be time-consuming. The alternative of manually passing the procedure name as a parameter to the called procedure should, in my opinion, not be the way to go, as it requires many code updates the moment the procedure name changes.


Premise

The premise this works on is that recursion is not supported in HANA, and that at a particular point in time, within a DB transaction on a single connection ID, a particular procedure can be called from only one other procedure (or else it would become a cyclic dependency). This stays true until the called procedure is done executing, after which another procedure may call the same procedure.


Implementation

create procedure INNER(in inp nvarchar(32),out status nvarchar(5000)) as
lv_var nvarchar(5000):='{ call "MYSCHEMA"."INNER" (?, ?) }';
lv_statement nvarchar(256):='';
begin
select statement_string into lv_statement from "SYS"."M_PREPARED_STATEMENTS" where statement_status='ACTIVE' and connection_id=current_connection and statement_id= (select parent_statement_id from "SYS"."M_PREPARED_STATEMENTS" where statement_status='ACTIVE' and cast(statement_string as nvarchar(5000))=:lv_var);
if lv_statement= 'call MYSCHEMA.OUTER' then
  status:=:lv_statement||' Success::Procedure called:'||ucase(substr_after(:lv_statement,'call '));
else
  status:=:lv_statement||' Failure::Procedure called:'||ucase(substr_after(:lv_statement,'call '));
end if;
end;
create procedure OUTER as
stat nvarchar(5000):='Hello';
begin
call MYSCHEMA.INNER('TEST',:stat);
select :stat from dummy;
end;
call MYSCHEMA.OUTER;

In the above sample code, the called procedure 'INNER' checks if the call is coming from the procedure 'OUTER'.

 

Sample Use Case:

Here, I have created a utility tool for re-creating a DB artifact with the latest definition amidst a workflow, as a small part of that workflow. Creating a DB artifact from within a procedure involves checking whether it already exists, dropping it if it does, and then creating it again with the latest definition. I want the re-creation of the DB artifact to run only as part of this workflow and not outside it. So I create a verification utility called myschema.check_caller, which takes the call signature of the called procedure as a parameter and returns the caller. The generation procedure then looks up the artifact_definition table (which stores the string definitions of the artifacts generated in the preceding steps of the individual workflow) for the definition and creates the artifact using dynamic SQL.

 

 

Sample use case.png

Artifacts definition.png

 

--You might need to make changes to the procedure if you try this example in your system as the example skips the actual code and presents the pseudocode in some sections
--create lookup table that stores artifact definitions created during the workflow before the common service is called
create table myschema.artifact_definition (procedure_schema nvarchar(256),procedure_name nvarchar(256),service_name nvarchar(256), artifact_schema nvarchar(32),artifact_name nvarchar(32),artifact_Type nvarchar(32),artifact_definition nvarchar(5000));
insert into myschema.artifact_definition values('MYSCHEMA','RECREATE_ARTIFACT','Service1','MYSCHEMA','TEST_TAB','TABLE','create table MYSCHEMA.TEST_TAB like MYSCHEMA.STAG_TEST_TAB with data');
insert into myschema.artifact_definition values('MYSCHEMA','RECREATE_ARTIFACT','Service1','MYSCHEMA','TEST_PROC','PROCEDURE','create procedure MYSCHEMA.TEST_PROC as begin select col_a,col_b,col_c*2 as col_d from MYSCHEMA.TEST_TAB; end;');
insert into myschema.artifact_definition values('MYSCHEMA','RECREATE_ARTIFACT','Service2','MYSCHEMA','TEST_VIEW','VIEW','create view MYSCHEMA.TEST_VIEW as select col_a,col_b,col_c from MYSCHEMA.STAG_TEST_TAB');
insert into myschema.artifact_definition values('MYSCHEMA','RECREATE_ARTIFACT','Service2','MYSCHEMA','TEST_PROC','PROCEDURE','create procedure MYSCHEMA.TEST_PROC(out cnt int) as lv_cnt int:=0; begin select count(*) into lv_cnt from MYSCHEMA.TEST_TAB; cnt:=:lv_cnt; end;');
insert into myschema.artifact_definition values('MYSCHEMA','ALTER_ARTIFACT','Service3','MYSCHEMA','TEST_TAB','TABLE','alter table MYSCHEMA.TEST_TAB add(col_e nvarchar(32))');
insert into myschema.artifact_definition values('MYSCHEMA','ALTER_ARTIFACT','Service3','MYSCHEMA','AT_TEST_VIEW','ATTRIBUTE VIEW','<ATTRIBUTE VIEW XML>');
insert into myschema.artifact_definition values('MYSCHEMA','ALTER_ARTIFACT','Service3','MYSCHEMA','TEST_SEQ','SEQUENCE','ALTER SEQUENCE MYSCHEMA.TEST_SEQ RESTART WITH 2;');
--check caller
drop procedure MYSCHEMA.check_caller;
create procedure MYSCHEMA.check_caller(in source nvarchar(5000)
,out status nvarchar(5000)
) as
lv_statement nvarchar(256):='';
begin
select statement_string
into lv_statement
from "SYS"."M_PREPARED_STATEMENTS" where statement_status='ACTIVE' and connection_id=current_connection and statement_id= (select parent_statement_id from "SYS"."M_PREPARED_STATEMENTS" where statement_status='ACTIVE' and cast(statement_string as nvarchar(5000))=:source);
status:=ucase(substr_after(:lv_statement,'call '));
end;
--check if DB artifact exists
drop procedure MYSCHEMA.is_exist;
create procedure MYSCHEMA.is_exist(in artifact_schema nvarchar(32),in artifact_name nvarchar(32),in artifact_Type nvarchar(32)
,out is_exist int
) as
begin
if :artifact_type='USER' then
select count(*) into is_exist from "SYS"."USERS" where user_name=:artifact_name;
elseif :artifact_type='ROLE' then
select count(*) into is_exist from "SYS"."ROLES" where role_name=:artifact_name;
else
  select count(*) into is_exist from "SYS"."OBJECTS" where schema_name=:artifact_schema and object_name=:artifact_name and object_type = :artifact_Type;
end if;
end;
--drop DB artifact
drop procedure MYSCHEMA.drop_artifact;
create procedure MYSCHEMA.drop_artifact(in artifact_schema nvarchar(32),in artifact_name nvarchar(32),in artifact_Type nvarchar(32)) as begin
exec 'drop '||:artifact_type||' '||:artifact_schema||'.'||:artifact_name;
end;
--create DB artifact
drop procedure MYSCHEMA.GENERATE_ARTIFACT;
create procedure MYSCHEMA.GENERATE_ARTIFACT(
in artifact_schema nvarchar(32),
in artifact_name nvarchar(32),
in artifact_Type nvarchar(32)--TABLE/PROCEDURE/VIEW
)
as
result nvarchar(5000);
begin
call MYSCHEMA.check_caller('{ call "MYSCHEMA"."GENERATE_ARTIFACT" (?, ?, ?) }',result);
--1. Lookup the myschema.artifact_definition table to get the artifact definition string based on the caller name, artifact name, schema, and type
--2. execute the artifact definition string using dynamic sql
end;
--wrapper: Master Procedure
drop procedure myschema.recreate_artifact;
create procedure myschema.recreate_artifact(
in artifact_schema nvarchar(32),
in artifact_name nvarchar(32),
in artifact_Type nvarchar(32)--TABLE/PROCEDURE/VIEW
) as
result int;
begin
  call MYSCHEMA.is_exist(:artifact_schema,:artifact_name,:artifact_Type,result);
  if :result != 0 then
  call MYSCHEMA.drop_artifact(:artifact_schema,:artifact_name,:artifact_Type);
  end if;
  call MYSCHEMA.GENERATE_ARTIFACT('MYSCHEMA','TEST_TAB','TABLE');--can only be executed from recreate_artifact
end;
call MYSCHEMA.RECREATE_ARTIFACT('MYSCHEMA','TEST_TAB','TABLE');


Benefit

The benefit is that, as long as the GENERATE_ARTIFACT procedure is executed unmodified, it determines on the fly which service to pick the artifact definition from, based on the caller, without any need for parameter passing.


I'd welcome your thoughts on the use of such a utility.

 

Thank You

 

Regards,

Sheel

So another week gone, and this week the course was set up quite differently from the first. Where the first week was very theory-oriented, explaining all the architectures, this week had a large hands-on part. I spent two nights doing all the practical exercises.

Basically in the first week you get the explanation of where all the tools in the shed are and what they are for.

I’m more the kind of guy that just stumbles into the shed, doesn’t even turn the light on and starts rumbling and trying until I get a grasp of things. And what happened to Richie? Oh well, guess he will do other parts.


(Very) roughly, you could say that the course this week was divided into two parts. The first was working with the .hdbdd file, where you put all the definitions of your database artifacts: you define your types and structures, and using those types you can create tables and views.

 

In your .hdbdd file you use CDS syntax, something that you can also use in HANA River and ABAP. That's nice, as things I learn now are also usable in a wider area. The thing is, though, that as an acronym .hdbdd leaves something to be desired. I was impressed that Thomas was able to say it without stumbling over 95% of the time. Behind my desk at home I tried it, but my percentage was way lower.


As someone with a lot of BW experience, I was thinking about those types and structures and how you might be able to use them as a sort of limited info objects. First, though, I would like the export/import functionality for tables to be implemented.

Maybe I should explain that some more. When you update types or structures, all tables that use that type/structure will also be updated. This can lead to activation errors if the table is holding data. Thomas said that some automatic export/import functionality is being worked on: when activating, your table data will be exported, the update takes place, and then the data gets imported again.


That sounds like something that is really handy. Although you might want to be careful with changing your types and structures. A much used type (business key anyone?) that resides in large tables could create an avalanche of runtime actions to be taken.

In general though I like CDS, but I would like it to be SQL complete.

For the models we had an impressive example with lots of calculation views (graphical and scripted), a decision table and analytical/attribute views. In the scripted calculation view we used the procedure that was generated through the creation of the decision table.


I was wondering, though, if you could insert overlapping conditions in the decision table, and if you could generate multiple outcomes for one record. Does anybody know this, or should I just try it out? I suppose it is the latter.

In my notes I started this segment bravely by stating I would type everything and not cut & paste things, but due to the size of the exercises I had to tone that down a bit.

Additionally I had to revisit earlier parts I skipped as I had to create some roles and authorize the system user so the record create & test app could run.

All in all a good week, a lot more time was spent actually working in the HANA system.

Next week we will go into more advanced SQLScript. Let's see how that continues from last year's course.

I was working with sample data which was geographically distributed. When I was trying to create a geographical hierarchy, some places were not found and some only had a close match.

 

I suggest Lumira should allow us to pick an area from the map or enter the details manually. Visualizations with "Not found" data will not be complete. My idea was posted on ideas.sap.com here.

 

Please find below a geographic visualization where I had data related to Europe and the West Indies, but Lumira didn't find a match for these places.

 

getfile.png

 

Regards,

Arun

With the advent of HANA SPS 08 circa Q2 2014, SAP finally released the HANA data provider for Microsoft ADO.NET. Admittedly, this is not trouser-stirring news, but I am pleased to know that there is now an effective way for the .NET framework to communicate efficiently with SAP HANA. In the past you could still do that, but it was through the ODBC driver; there is nothing wrong with that if all you wanted to do was connect to SAP HANA and perform some simple SQL statements, but we all know SAP HANA is more than just an in-memory database.

 

This article will tease out some of the key concepts and features introduced by SPS 08 in the client interfaces for ADO.NET. Data access and manipulation will be the theme here, using C# and SAP's new .NET API, Sap.Data.Hana. If you are wondering what ADO.NET is, it is Microsoft's data access technology for the .NET framework, used with programming languages such as C# or VB.NET.

 

You might ask why it is important for SAP to release this new API when ODBC is sufficient. Given that the standard ODBC connection to SAP HANA is still a valid way to bridge the connection to the database, or any database for that matter, we need to understand the core purpose of an ODBC driver itself. Microsoft's primary goal when they released the initial ODBC driver was to provide a vendor-neutral way of accessing stored data on any machine. As you might have already guessed, the host of ODBC drivers that Microsoft currently supports spans vast and wide, from databases such as Oracle, Informix and Teradata all the way to SAP HANA, just to name a few. The result of building an ODBC driver to be as generic as possible can come at a cost: cost in terms of lost performance, unleveraged hidden features, missing core functionalities and missed best practices surrounding a given database.


Although there is no official information from SAP detailing why a developer should use the new API, one can only infer that SAP has put in a lot of hard work to ensure that your .NET code communicates efficiently with their HANA database. Take, for example, a team delivering a .NET solution specifically on Microsoft SQL Server: they will certainly use the System.Data.SqlClient.SqlConnection object; on Oracle they would most likely end up using the ODP.NET driver; and with SAP HANA, it would be this new API. Although the .NET framework is able to connect to SAP HANA through the ODBC driver, SAP reckons that your application will take full advantage of all the underlying advances built specially for the .NET framework to play nicely with SAP HANA. There will be instances when you find yourself with no vendor-delivered driver, and in those cases the ODBC driver is your only option to exchange information with the database on the .NET framework, e.g., when developing applications on Hadoop Hive.

 

Getting Started


To start using the new ADO.NET API delivered in SPS 08, you will need the latest SAP HANA Client Developer Edition, version 1.00.80, installed on your machine; alternatively, check the default installation path under C:\Program Files\sap\hdbclient\ado.net. If you have that installed, you should see two folders called v3.5 and v4.5, and inside v4.5 you should find Sap.Data.Hana.v4.5.dll.

sap.data.hana.png

 

View, Insert, Update and Delete


Building an enterprise .NET application on SAP HANA will undoubtedly require you to perform one of these four actions and in this section of the article, the aim is to demonstrate the basic database call to interact and manipulate the data that resides in SAP HANA.

 

In the .NET Solution Explorer, expand the References node and ensure that Sap.Data.Hana.v4.5 has been included in your .NET solution; if not, you can add a new reference under the Extensions path. Once you have that, standard C# code ensues, starting with the namespace: using Sap.Data.Hana;

 

ref1.png

ref2.png

 

Establishing the database connection


At a bare minimum, the connection string must include the server address, user name and password to establish a connection. Other parameters such as connection lifetime, database, pooling, etc. are optional but worth exploring if you want to maintain a healthy level of database activity.
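As a sketch, a connection string with some of these optional keywords might look like the following. The server address is a placeholder, and the exact spelling of the pooling and lifetime keywords is an assumption based on common ADO.NET provider conventions; check the SAP HANA client documentation for the keywords your client version supports.

```csharp
using Sap.Data.Hana;

// Hypothetical connection string showing optional keywords
// alongside the mandatory Server, UserID and Password.
string connStr = "Server=your.hana.server:30015;" +
                 "UserID=username;" +
                 "Password=password;" +
                 "Pooling=true;" +           // reuse physical connections
                 "Connection Lifetime=60";   // recycle pooled connections after 60s

HanaConnection conn = new HanaConnection(connStr);
```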

conn.png

 

Selecting records

 

Once a database connection has been established, it is straightforward to acquire the data that you require from any table or schema. In this example, I have issued a simple SELECT statement with the intention of publishing the data to a .NET DataGridView. The class demonstrated here is HanaDataReader, used to return a read-only result set and output it into a Windows form.


//Establish an active connection to the SAP HANA database

HanaConnection conn = new HanaConnection("Server=your.hana.server;UserID=username;Password=password");

 

//Open the database connection

conn.Open();

 

String strSQL = "select * from customers order by ID desc";

//Execute the SQL statement

HanaCommand cmd = new HanaCommand(strSQL, conn);


//Read and store the result set

HanaDataReader reader = cmd.ExecuteReader();


//Load the result set into a DataTable and bind it to the DataGridView
//(a DataGridView cannot bind directly to a data reader)

DataTable dt = new DataTable();

dt.Load(reader);

dgViewTweet.DataSource = dt;


//Close the reader connection

reader.Close();


//Close the database connection

conn.Close();

 

table1.png

 

Inserting new records

Inserting records into the database is as easy as querying them, but not without some additional effort to format and prepare the data. The example provided here stores the information from five text fields using a standard SQL INSERT statement executed from C#.


Upon loading, the application connects to SAP HANA and loads 150 rows of customer records from the database; the intention here is to create a new piece of information and write it back to the table.

table2.png


When the Save New button is clicked, the .NET framework runs through this simple piece of code to perform two activities. First, it performs a SQL INSERT statement to store all the values captured from the text boxes by running the ExecuteNonQuery() method; second, it rereads the information by running the ExecuteReader() method, which repopulates the DataGridView with the new record.


The only information that the ExecuteNonQuery() method returns is the number of rows affected by the statement, which makes it the right choice for running SQL statements such as INSERT, UPDATE or DELETE.
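Because ExecuteNonQuery() reports the affected row count, a defensive sketch can check that return value before requerying. This fragment builds on the insertCmd from the listing below; the error handling is illustrative, not part of the original sample:

```csharp
// ExecuteNonQuery() returns the number of rows the statement touched;
// for a single-row INSERT we expect exactly 1.
int rowsAffected = insertCmd.ExecuteNonQuery();

if (rowsAffected != 1)
{
    // Surface the problem instead of silently requerying stale data.
    throw new InvalidOperationException(
        "Insert failed: expected 1 affected row, got " + rowsAffected);
}
```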

 

HanaConnection conn = new HanaConnection("Server=your.hana.server;UserID=username;Password=password");

 

string sCust, sLifespend, sNewspend, sIncome, sLoyalty;


conn.Open();

HanaCommand insertCmd = new HanaCommand("INSERT INTO Customers(CUSTOMER, LIFESPEND, NEWSPEND, INCOME, LOYALTY) " + "VALUES(?, ?, ?, ?, ? )", conn);

 

sCust = txtCustomer.Text;

sLifespend = txtLifespend.Text;

sNewspend = txtNewspend.Text;

sIncome = txtIncome.Text;

sLoyalty = txtLoyalty.Text;


HanaParameter parm = new HanaParameter();

 

//Customer

parm = new HanaParameter();

parm.HanaDbType = HanaDbType.NVarChar;

insertCmd.Parameters.Add(parm);

 

//Life Spend

parm = new HanaParameter();

parm.HanaDbType = HanaDbType.Double;

insertCmd.Parameters.Add(parm);


//New Spend

parm = new HanaParameter();

parm.HanaDbType = HanaDbType.Double;

insertCmd.Parameters.Add(parm);

 

//Income

parm = new HanaParameter();

parm.HanaDbType = HanaDbType.Double;

insertCmd.Parameters.Add(parm);


//Loyalty

parm = new HanaParameter();

parm.HanaDbType = HanaDbType.Double;

insertCmd.Parameters.Add(parm);


//Assign the parameter values (the Parameters collection is 0-indexed)

insertCmd.Parameters[0].Value = sCust;      //Customer

insertCmd.Parameters[1].Value = sLifespend; //Life Spend

insertCmd.Parameters[2].Value = sNewspend;  //New Spend

insertCmd.Parameters[3].Value = sIncome;    //Income

insertCmd.Parameters[4].Value = sLoyalty;   //Loyalty


//Execute the insert statement

insertCmd.ExecuteNonQuery();


//Requery for the latest record from SAP HANA

String strSQL = "select * from customers order by ID desc";


//Execute the SQL statement

HanaCommand cmd = new HanaCommand(strSQL, conn);


//Read and store the result set

HanaDataReader reader = cmd.ExecuteReader();


//Load the result set into a DataTable and bind it to the DataGridView
//(a DataGridView cannot bind directly to a data reader)

DataTable dt = new DataTable();

dt.Load(reader);

dgViewTweet.DataSource = dt;


//Close the reader connection

reader.Close();

//Close the database connection

conn.Close();

 

table3.png


Using Stored Procedures

 

My guess is that stored procedures have always been off limits for SAP applications sitting on the NetWeaver stack because of the many database vendors out there: the effort of catering for each vendor's different way of creating and calling a procedure would add up to an unnecessary amount of overhead. What SAP has given us as an equivalent is the function module builder that we have all come to know and love over the years. Like any high-performing, respectable RDBMS on the market, SAP HANA includes stored procedures, and using them with .NET is just as simple as the rest.

 

In this next example, the goal is to change the record for Customer ID 150. The customer name Samuel Johnson has been misspelled and needs to be corrected. Upon clicking the Update Changes button, the .NET framework calls a stored procedure in SAP HANA, passing in the Customer ID and the corrected name, and the record is updated by a standard SQL UPDATE command embedded within the stored procedure. A stored procedure can contain logic as simple as accepting fields and processing them, or as complex as performing specific validations and returning an error if the conditions are not met.

 

CREATE PROCEDURE prcUpdateCustomers(IN ID INT, IN CUSTOMER NVARCHAR(60)) AS

  BEGIN

         UPDATE customers SET customer = :CUSTOMER WHERE ID = :ID;

  END;


The only change needed to execute a stored procedure is to set the HanaCommand's CommandType to StoredProcedure and pass in the name of the stored procedure.

 

HanaCommand cmd = new HanaCommand("", conn);

//Pass in the Stored Procedure name

cmd.CommandText = "prcUpdateCustomers";

cmd.CommandType = CommandType.StoredProcedure;

table4.png

//Read the changed column data

string col0 = dgViewTweet[0, dgViewTweet.CurrentCell.RowIndex].Value.ToString();

string col1 = dgViewTweet[1, dgViewTweet.CurrentCell.RowIndex].Value.ToString();

string col2 = dgViewTweet[2, dgViewTweet.CurrentCell.RowIndex].Value.ToString();

string col3 = dgViewTweet[3, dgViewTweet.CurrentCell.RowIndex].Value.ToString();

 

//Establish an active connection to the SAP HANA database

HanaConnection conn = new HanaConnection("Server=your.hana.server;UserID=username;Password=password");

 

//Open the database connection

conn.Open();

 

//Create an instance of a HanaCommand

HanaCommand cmd = new HanaCommand("", conn);

 

//Pass in the Stored Procedure Name

cmd.CommandText = "schia_pal.prcUpdateCustomers";

cmd.CommandType = CommandType.StoredProcedure;

 

//Prepare the input parameters

HanaParameter param = cmd.CreateParameter();

param.HanaDbType = HanaDbType.Integer;

param.Direction = ParameterDirection.Input;

param.Value = col0;

cmd.Parameters.Add(param);

 

param = cmd.CreateParameter();

param.HanaDbType = HanaDbType.NVarChar;

param.Direction = ParameterDirection.Input;

param.Value = col1;

cmd.Parameters.Add(param);

 

//Execute the update statement

cmd.ExecuteNonQuery();

cmd.Dispose();

conn.Close();

 

table5.png

 

To keep this brief, and without duplicating the extensive .NET sample code that SAP has already documented for the wider community, this article has shown only some of the basic database communication you can perform with Sap.Data.Hana. With the new SPS 08 client interface, customers who rely heavily on the .NET framework to address business problems now have full SAP support when working with SAP HANA, along with the ability to fully utilise all features within the product.
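One closing note: the listings above open and close the connection manually, which leaks the connection if an exception is thrown mid-query. In production code the same SELECT flow is usually wrapped in using blocks so the connection, command and reader are disposed deterministically. A minimal sketch, assuming the same hypothetical server and customers table as the earlier examples:

```csharp
using System.Data;
using Sap.Data.Hana;

// Dispose the connection, command and reader deterministically,
// even when an exception occurs while reading the result set.
using (HanaConnection conn = new HanaConnection(
        "Server=your.hana.server;UserID=username;Password=password"))
{
    conn.Open();

    using (HanaCommand cmd = new HanaCommand(
            "select * from customers order by ID desc", conn))
    using (HanaDataReader reader = cmd.ExecuteReader())
    {
        // Load into a DataTable before binding to the DataGridView.
        DataTable dt = new DataTable();
        dt.Load(reader);
        dgViewTweet.DataSource = dt;
    }
}   // the connection is closed automatically here
```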
