
SAP HANA and In-Memory Computing


A win-win situation for SAP & SAP HANA customers.


That's how I can easily describe the HANA operation expert summit in one sentence. My blog is a few months late, but that doesn't matter: the message it brings stays relevant and important. I hope this event becomes an annual appointment on my agenda.

 

The event concentrated on the operations part of SAP HANA, and since many SAP experts are in or near SAP Walldorf, SAP gathered them up and delivered a two-day event including a keynote, a networking dinner, roadmap sessions and group discussions with the experts from SAP.

 

I've said this before and I really had this impression: all the customers running SAP HANA who were present were happy about the fact that they are running on SAP HANA. That's an important pointer. I hope all customers who have it and leverage it are happy about it. If not, all the more reason to be at this type of event, as it provides the opportunity to give direct feedback and discuss issues with SAP. Merely being there and giving feedback might already solve your problem.

 

The topics were picked with the help of surveys done up front by SAP (organizer Kathrin Henkel) to check what attendees were looking for in terms of information and which topics they wanted to discuss. Unfortunately I missed most of the keynote because I had to give a workshop presentation at a customer site back in Belgium. I arrived late in Walldorf and walked in near the end of the keynote (the last 10 minutes or so), but I could still get a sense of what was shown. I saw some examples of non-traditional SAP HANA use; by that I mean scenarios not based on SAP BW or SAP ERP.

 

After the keynote, the networking dinner started and it was organized in a way that each table had two SAP HANA experts present to provide an entry point for discussion and allow participants and experts to get to know each other. The concept was good and it was an enjoyable evening. One of my Belgian colleagues was there with me so we ended up furthering the discussion in a local pub near the center of Walldorf.

 

The next day, presentations were on the agenda, where the experts explained the current status and roadmap for the different topics related to SAP HANA. Again, well organized and pretty interesting from a participant's point of view.

 

After the presentations, break-out sessions were planned to discuss topics with the relevant experts, network and exchange knowledge. The discussions were engaging and most of them surfaced interesting information and insights. Notes were taken by SAP to ensure follow-up.

 

Networking was left mostly to the participants because of rules and regulations on sharing personal information, so I believe there is room for improvement here. Perhaps participants could be allowed to opt in up front to share their contact details with other participants, or an SAP JAM group could be created for longer-lasting social contact around the topic.

 

Participants could optionally receive an SAP HANA operation expert polo (check the video), which I personally like a lot, and a doggy bag with food for the road back home. I really liked both. Provide me with good-looking sponsored clothing like that and I will wear it proudly. Why? Because I'm proud to be part of SAP's ecosystem and I believe in SAP HANA. I enjoy spending time on SAP HANA; it's as simple as that. It's exciting new technology that has already started to make an impact on the world around us.

 

You can have a look at the overview video to get an idea of what the event was about:


 


“SAP CEO Bill McDermott gets software group fit for the cloud,” was the title in the German magazine Computerwoche, summing up this year’s SAPPHIRE NOW, which ran from June 3 to 5 in Orlando, Florida. Following in the same vein was the presentation given at the SAP customer event by Peter Mauel and Steven Jones from Amazon Web Services (AWS). Here, the audience learned about the benefits of obtaining SAP HANA from the AWS cloud based on SUSE Linux Enterprise Server.


Customized Service Packages

 

“SAP, AWS, and SUSE Linux have been working together successfully since 2008. This has resulted in a range of customized service packages that provide customers with the necessary SAP applications as part of Linux-based leasing models,” explained Steven Jones, Senior Manager of Global Alliance Solutions Architecture at AWS. “Naturally, there are also specific offerings for SAP HANA users who want to leverage the benefits of AWS cloud services, such as reducing costs and complexity.” 


As Steven Jones explained, companies currently have three options for using the SAP HANA in-memory database on an AWS and SUSE Linux basis:


  1. Infrastructure subscription: “Bring Your Own License”
    1. This means that although customers purchase the SAP HANA license directly from SAP, the implementation itself is in the AWS cloud. With this model, all SAP HANA use cases are supported, including SAP Business Suite and SAP NetWeaver Business Warehouse on SAP HANA.
    2. Benefits:
      1. Immediate provisioning of SAP HANA licenses in the cloud
      2. No long-term capital tie-up for hardware resources
      3. Enterprise support
  2. SAP HANA One
    1. Customers obtain SAP HANA applications entirely from the cloud. Support is provided for real-time analyses, data merging, temporary and event-based analyses, self-service BI, as well as pilot and test models.
    2. Benefits:
      1. Instant, self-service access – system up and running in 10 minutes
      2. Start and stop when needed, thus reducing license and infrastructure costs
      3. Community support
      4. On-demand SAP HANA license fees: USD 0.99 per hour
  3. Offering for developers
    1. SAP offers developers a free AWS license for SAP HANA to help create a community that deals with all aspects of the high-speed database. The aim of this is to help customers put SAP HANA to the test and try out their first application scenarios.
    2. Benefits:
      1. Free SAP HANA developer licenses
      2. Easily accessible and rapidly deployable
      3. Pay-per-use AWS infrastructure costs


Maximum-Performance Computer Clusters in Use


“For SAP HANA, we use our biggest and most powerful computer clusters,” emphasized Peter Mauel, SAP Global Alliance Leader at AWS, at the SAPPHIRE NOW 2014 presentation. With SUSE Linux, AWS and SAP are making use of an operating system that ensures high availability, improved system performance, and cost savings in terms of operating the in-memory database. 

For more information: YouTube video "Leveraging SUSE Linux to run SAP HANA on the Amazon Web Services Cloud"

SAP Business Suite (ECC) on HANA opens up possibilities to transform, in a very positive way, how you serve your company.

Be prepared to have your old title of Chief Information Officer changed to Chief Business Transformation Officer (CBTO). It is not you who changes, but rather the way you provide real value to your corporation: adopting new technology trends, shifting functions that do not belong to your core business value out of your company, and concentrating your efforts on truly understanding the real business opportunities to create value through fast, reliable and efficient business transformation.

Here is what to consider in your initial planning when adopting, upgrading or migrating your current SAP ECC systems to the new SAP Business Suite on HANA:

Installation – commonly completed via an upgrade and DB migration (the technical part).

Functional – even if the project is approached as a pure technical migration, plan for:

  • Activation of transactions available on HANA
  • Analysis of the specific transactions that can be activated to work on HANA
  • Analysis of ABAP programs that need to be enhanced and/or enabled to run efficiently on SAP HANA
  • Use of the specific tools available to analyze the programs to be changed, re-programmed, unit tested, etc.

Functional Transformation – for those customers adopting the new business processes that come with SAP ECC 6.0 EHP 7. Those business processes require an entire re-engineering analysis to transform:

Blueprint – Design – Realize – Implement

The business process transformation is heavily influenced by the possibility to re-create the user experience with SAP Fiori:

  • Analysis of current business processes (re-engineering)
  • Analysis of business process flows
  • Analysis and enablement of new processes that combine one or several transactions into a single one
  • Design, truly from an art point of view, of the new look and feel of the system (open development via SAP Fiori)

The major functional change so far is around SAP FI, where the use of two financial objects is no longer required: CO documents and PA documents are now merged into a single document (the SAP Smart Finance concept).

The major benefit of adopting this new technology is not the speed alone. OLTP transactions are improved not so much by raw performance as by better business processes, based on smart software that exploits the advantages of SAP HANA as a platform.

There is significant improvement in OLAP processes, where SAP Business Suite on HANA can now truly elevate the BI user experience at all levels, from operational reporting to strategic business intelligence.

Integration + Integration + Integration – finally you will be able to reduce the diverse vendor footprint you have been collecting over the past 10 or 15 years. Don't forget the rule of thumb: maintaining connectivity (for example, process integration software) costs 20% to 30% of your implementation and ongoing maintenance budget. At the same time, SAP HANA's new and flexible Smart Data Access offers new possibilities to store big data using tools like Hadoop, federating data from several other systems and allowing better and safer business-to-business interaction.

Finally, keep in mind that SAP will no longer support SAP ECC on traditional relational database management systems by 2020.

Introduction:

In this article I would like to talk a bit about some xsjs possibilities for which I did not find any documentation on SCN, but which, in my opinion, would be useful for those who study or work with SAP HANA XS. The article covers iterators, dynamic function arguments, an easy way to copy objects, the Map object, a small example of currying, and an example of typed arrays.

Code, examples, explanation:

Definition of Iterator:

“An Iterator is an object that knows how to access items from a collection one at a time, while keeping track of its current position within that sequence. In JavaScript an iterator is an object that provides a next() method which returns the next item in the sequence. “

For a better understanding of iterators, just have a look at the following code:

// Legacy xsjs (SpiderMonkey) generator syntax: any function containing
// yield is a generator, and next() returns the yielded value directly.
function Generator(){
  yield "num1";
  yield "num2";
  for (var i = 0; i < 3; i++)
    yield i;
}
var g = Generator();
var temp1 = g.next();
var temp2 = g.next();
var temp3 = g.next();
$.response.setBody(temp1+', '+temp2+', '+temp3+' ');
$.response.contentType = "text/html";

 

The Result:

num1, num2, 0

The yield keyword is used to pause and resume a generator in a program.
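The example above relies on the legacy generator form of the SpiderMonkey-based xsjs engine, where any function containing yield is a generator. In standard (ES6+) JavaScript, outside xsjs, the same idea is written with function* and next().value; a minimal sketch:

```javascript
// ES6 generator: declared with function*, and next() returns an object
// of the form { value, done } rather than the yielded value itself.
function* generator() {
  yield "num1";
  yield "num2";
  for (let i = 0; i < 3; i++) {
    yield i;
  }
}

const g = generator();
const parts = [g.next().value, g.next().value, g.next().value];
console.log(parts.join(', ')); // num1, num2, 0
```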

Here is a more complex example:

function vals() {
  for (var i = 0; i < arguments.length; i++) {
    yield arguments[i];
  }
}
var o = vals(1, 2, 'abc');  
var temp = o.next();
temp = temp+o.next();
temp = temp+o.next();
$.response.setBody(temp);    
$.response.contentType = "text/html";  

 

 

An iterator is also created in this example, but the length of the sequence depends on the number of arguments passed to the function.

The Result:

3abc

Why 3abc, and not 12abc? Because o.next() returns the arguments one by one; the variable temp is first a Number (1 + 2 = 3), and only after the third argument ('abc') is returned does it become a string.
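The same coercion happens outside a generator: the + operator performs numeric addition until a string operand appears, after which it concatenates. A tiny sketch:

```javascript
// Number + Number adds; Number + String concatenates.
let temp = 1;        // first value (a Number)
temp = temp + 2;     // 3 (numeric addition)
temp = temp + 'abc'; // "3abc" (string concatenation)
console.log(temp);   // 3abc
```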

And if the order of the arguments is changed like this:

 

var o = vals('abc',1, 2); 

 

The result will be:

abc12

The following example shows uneval, a function that turns any object into a source-string representation; it is the inverse of eval. One of the most obvious ways to use it is to copy objects:

var m = function () {this.a = 1};
var temp = new m();

temp.am_i_copy = 0;
var h = temp;
var h2 = eval(uneval(temp)); 
h2.am_i_copy = 1;
$.response.setBody(h2.am_i_copy+' - '+h.am_i_copy);    
$.response.contentType = "text/html";

The Result:

1 – 0

That is, h is a reference to the same object as temp, while h2 is a new object, a clone; when we change its property am_i_copy to 1, the source h doesn't change. We couldn't use JSON.stringify() and JSON.parse() here because the object contains functions.
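Note that uneval is specific to SpiderMonkey-based engines such as xsjs and is not part of standard JavaScript. For data-only objects (no functions), a standard deep copy can be sketched with JSON round-tripping:

```javascript
// Deep copy of plain data in standard JavaScript.
// JSON round-tripping drops functions, so unlike eval(uneval(...))
// it only works for data-only objects.
const temp = { a: 1, am_i_copy: 0 };
const h = temp;                               // h references the same object
const h2 = JSON.parse(JSON.stringify(temp));  // h2 is an independent clone
h2.am_i_copy = 1;
console.log(h2.am_i_copy + ' - ' + h.am_i_copy); // 1 - 0
```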

The following example demonstrates the Map object, which lets you work with key-value pairs. Here we create an array from the pairs stored in the map, and then add to it the result of executing the stored function.

var res = [];
var Maper = new Map();
Maper.set(0, "zero");
Maper.set(1, "one");
Maper.set(2, function() { return 1; });
for (var [key, value] of Maper) {
  res.push({k:key,v:value});
}
var temp = Maper.get(2)();

res.push({t:temp});
$.response.setBody(JSON.stringify({res:res}));    
$.response.contentType = "text/html";

The Result:

{"res":[{"k":0,"v":"zero"},{"k":1,"v":"one"},{"k":2},{"t":1}]}

The important point is that the third object – {"k":2} – doesn't have a value, because JSON.stringify drops function values; so if we change this:

res.push({k:key,v:value});

to:

res.push({k:key,v:uneval(value)});

the result will be :

{"res":[{"k":0,"v":"\"zero\""},{"k":1,"v":"\"one\""},{"k":2,"v":"(function () {\n\"use strict\";\nreturn 1;})"},{"t":1}]}


The next example will be interesting for people who don't know about currying in JavaScript.

This is an example of JavaScript code in which the first function returns another function, which concatenates the first and the second values:

function curry_concat(x){
    return function(y){
        return x +'  '+ y;
    }
}
var a = curry_concat('Test')('Curry');
$.response.setBody(a);    
$.response.contentType = "text/html";

Result:

Test Curry
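The same pattern can be generalized: a small helper that curries any two-argument function. A sketch in standard JavaScript (curry2 and concat are illustrative names, not part of the original post):

```javascript
// Generic currying of any two-argument function.
const curry2 = f => x => y => f(x, y);

const concat = (a, b) => a + ' ' + b;
const result = curry2(concat)('Test')('Curry');
console.log(result); // Test Curry
```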

Test yourself! Do you know the result of the following two statements? What do you think?

var x = new Uint8ClampedArray([-47]);
var x1 = new Uint8Array([-47]);
$.response.setBody('Clamped -' + x[0]+'; NoClamped'+x1[0]);    
$.response.contentType = "text/html";
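If you want to check your answer, the behaviour is the same in any modern JavaScript engine: Uint8ClampedArray clamps out-of-range values into [0, 255], while Uint8Array wraps them modulo 256:

```javascript
// -47 is below the valid range of an unsigned byte:
// Uint8ClampedArray clamps it to 0, Uint8Array wraps it to 256 - 47 = 209.
const clamped = new Uint8ClampedArray([-47]);
const wrapped = new Uint8Array([-47]);
console.log('Clamped - ' + clamped[0] + '; NoClamped - ' + wrapped[0]);
// Clamped - 0; NoClamped - 209
```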

Conclusion

As for the last example: if the community is interested in this topic, I would be glad to write a detailed article in the future about the pros and cons of using typed arrays.

Besides the functions presented here, there are others that are not covered in the standard documentation. Maybe they will also be interesting...

Companies today are facing critical challenges.  You need to know how to optimize maintenance while providing support services.  Plus you are always worried about how to save costs. But where do you start?  How do you take your maintenance program to the next level? 

 

To help you face these challenges, SAP will be hosting two Innovation Days focusing on Predictive Maintenance and Service.   You can join us free of charge at the SAP offices in Atlanta on Thursday August 14th or in Dallas on Thursday August 21st.

 

We’ll discuss real business cases where companies have used huge amounts of data, gathered from machine-to-machine connectivity and powered by the real-time processing capabilities embedded in SAP HANA, to make informed decisions.

 

You’ll learn how a large agricultural equipment manufacturer was able to realize a multi-million dollar annual return on their investment through:

 

  • Increased revenue, due to greater asset uptime supporting increased productivity;
  • Reduced service parts inventory, allowing for more accurate and timely service parts inventory forecasting;
  • Decreased R&D costs, based on predictive maintenance findings regarding equipment design, resulting in higher-quality products and fewer warranty claims.

 

Helping increase uptime of your critical assets is a win-win for both you and your customers. SAP’s Solution for Predictive Maintenance & Service can help. So to register or to learn more, just click on the appropriate link below:

 

INNOVATION DAY in ATLANTA, GA

Thursday August 14th,  9 AM – 1 PM (local time)

SAP Office, 1001 Summit Blvd, Suite 2100, Dogwood Conference Room, Atlanta GA 30319

 

Click here to Register or Learn More.

 

INNOVATION DAY in DALLAS, TX

Thursday August 21st,  9 AM – 1 PM (local time)

SAP Office, 5215 N. O’Connor Blvd, Suite 800, Lone Star Conference Room, Irving TX 75039

 

Click here to Register or Learn More.

 

Can't make either day? Learn more about SAP HANA and Predictive Maintenance

Lots of people think of those questions in terms of black/white, good/bad, legacy/new. As in many, many situations in software and in real life, it is usually a trade-off decision. This blog attempts to clarify the differences between the two approaches. Ideally, this leads to a less ideological and biased, more factual discussion of the topic. Furthermore, it should become apparent that the HANA EDW approach allows you to work with both approaches within the same (HANA) system, so there is no need to have, for example, one data warehouse system based on BW (the managed approach) and a second, purely SQL-based (freestyle) data warehouse system on an RDBMS. Fig. 1 pictures the two different approaches.


Fig. 1: Two different approaches: managed vs freestyle data warehousing.

Managed Approach

On the left-hand side, jigsaw pieces represent various tools that are harmonised, meaning that their concepts / objects "know" each other and share the same lifecycle. For instance, when a data source (as an example of such an object) gets changed, then the related data transformations, cubes, views, queries, ... (i.e. other objects) can be automatically identified, in the best case even automatically adjusted, or at least brought to the attention of an administrator so that the necessary adjustments can be manually triggered.
Another example is BW's request concept, which manages consistency not only within the data warehousing (data management) layers but is also used in the analysis layer for consistent reporting. It is a concept that spans many tools and processing units within BW.
The individual tools used to build and maintain such a data warehouse need to understand and be aware of each other. They work off the same repository, which allows one consistent view of the metadata that constitutes the organisation of the data warehouse. When one object is changed, all related objects can be easily identified and adjusted, and those changes can be consistently bundled, e.g. to apply them in a production system.
Due to these dependencies, the tools are integrated within one toolset. Therefore, they cannot be replaced individually by best-of-breed tools. That removes some flexibility, but with the benefit of integration. SAP BW is an example of such an integrated toolset.

Freestyle Approach

On the right-hand side, you see similar jigsaw pieces. On purpose, they are more individually shaped and do not fit into each other's slots from the very beginning. This represents the situation when a data warehouse is built on a naked RDBMS using best-of-breed tools, potentially from various vendors. Each tool only assumes the presence of the RDBMS and its capability to process (more or less standard) SQL. Each tool typically comes with its own world of concepts and objects, which are then stored in a repository managed by that tool. Obviously, various tools are needed to build up a data warehouse: an ETL tool for tapping into source systems, a programming environment, e.g. for stored procedures used to manage data flows and transformations, a tool that monitors data movements, a data modeling tool to build up analytic scenarios, etc. Technically, many of those tools simply generate SQL or related code. Frequently, that generated code can be manually adjusted and optimized, which provides a lot of freedom. Per se, the tools are not aware of each other. Thus their underlying objects are independent, meaning they have independent lifecycles. Changes in one tool need to make it "somehow" to the related objects in the other tools. This "somehow" can be managed by writing code that connects the individual tools or by using a tool that spans the repositories of the individual tools; SAP's Information Steward is an instance of that. This is pictured as "glue" in fig. 1.
The freedom to more easily pick a tool of your own choice and the options to manually intercept and manipulate SQL provide a lot of flexibility and room to optimise. On the other hand, it pushes a lot more responsibility to the designers or administrators of the data warehouse. It also adds the task of integrating the tools "somehow". Beware that this is an additional task that adds revenue for an implementation partner.

SAP's Product Portfolio and the two Approaches

As can be seen from this discussion, each approach has its merits; there is no superior approach, but each one emphasises certain aspects. This is why SAP offers tooling for both approaches. This is perceived as a redundancy in SAP's portfolio only if the existence and the merits of the two approaches are not understood.

I frequently get the question whether BW will be rebuilt on SAP HANA. Actually, it is philosophical to a certain extent as BW evolves: e.g. today's BW 7.4 on HANA has less ABAP code in comparison to  BW 7.3 on HANA. This can be perceived as BW being rebuilt on HANA if you wish. However, what does not make sense [for SAP] is to kill the approach on the right-hand side of fig. 1 by integrating the "freestyle tools" into a second, highly integrated toolset which mimics the BW approach because that would simply remove the flexibility and freedom that the right-hand approach has. Fig. 2 pictures this.


Fig. 2: It does not make sense to implement two toolsets for the exact same approach.

What is true, however, is that HANA will see a number of Data Warehousing Services arising over time. To some extent they already exist, as they emerged when BW was brought onto HANA. They can be generalised to be usable in a generic, non-BW case. Nearline storage (NLS), extended storage, HANA-based data store objects, a data distribution tool, etc. are all excellent examples of such services that can be used by BW but also by a freestyle approach.

Finally, I would like to stress, again, the advantages of the HANA EDW approach, which allows you to arbitrarily combine the two approaches pictured in fig. 1. You can watch examples (demos) of this here or here.

 

The figures are available as PPT slides. This blog has been cross-published here. You can follow me on Twitter via @tfxz.

After establishing the connectivity between HANA and R, it is time to integrate these two in-memory technologies for data analysis and data visualization.

 

Both environments offer excellent tools for data analysis, including predictive analysis, and for data visualization. Analytic and graphics-based data visualization applications can easily be developed in both environments and bundled into a web product. If vector operations make R ideal for statistical analysis of large data sets, then HANA's in-memory SQL operations and calculations are better suited for constructing entity relationships in the data. And if interactive and animated graphics are more easily produced in R, then the HANA Cloud Platform is ideal for rolling out a web-based solution.

Though advanced graphs for data visualization, for example plotting data points over Google Maps, can be made on either platform, R supports a variety of libraries and packages which make producing multivariate graphs very easy compared to the SAPUI5 library. The choices for enabling end-user scenarios are plenty. Best of breed is the way forward, and integrating the process and data remains the key.

 

For my test case, I decided to use HANA database artifacts created via SQL for storing sales data in the in-memory database. An analytic view was created in HANA for sales order line items (for the die-hard R/3’er, it is filled with table VBAP data). It can easily be created in the HANA Studio IDE as a catalog object, and its column attributes can be extended by joining it with product, customer and supplier/vendor master data tables.

 


 

The extended sales line item data in this view is read into the R environment as shared in the earlier post. Once the sales information is available in an R data.frame object, it can easily be consumed by R functions.

 

Product category is one of the attributes available in the data view; it describes what kind of product was sold, i.e. a notebook computer, a handheld device or a printer. If we are interested in analyzing sales of notebook computers, the relevant sales information can easily be extracted into an R data.frame object, without using SQL, with the following R code:

notebooks <-  result[result$Product_Category == "Notebooks",]

 

Now , sales information for notebooks can be analyzed further.


Typical sales analysis, such as how much was sold by customer, region or product category, is important, and data visualization for such analysis is quite easily achieved.

However, if we wish to visualize and analyze sales based on the product's country of origin or supplier, a multi-variable graph plot grouping product sales by country of origin or supplier can easily be created using R graph functions, for example:



qplot(NetAmount,data = notebooks, facets = .~Supplier_Country, main = "Sales by Supplier Country", binwidth = 500)




Notice that there are no orders worth between $2,500 and $4,000 being supplied from Japan.


A graph plot to visualize the correlation between product price and sales amount, a scatter plot per supplier country, can easily be achieved using R functions, for example:

 

qplot(Product_Price,NetAmount,data = notebooks, facets = .~Supplier_Country)



There are many more advanced functions in R to plot graphs and perform data visualization.

There are plenty of data sheets available that depict the performance characteristics of these environments.

Their capabilities to do fast computations on huge volumes of data are unquestioned.

Illustrated above are just quick examples to show how easy it is to visualize data from HANA using R graphs.

 

The next task is to work on a use case to analyze data combining ERP and non-ERP domain data!


 


In my previous test of integrating R with HANA, R Integration with HANA - Test of "Outside In Approach" - Part 1, ODBC was used to connect to the HANA database on a cloud platform. Even though I used the R environment to make the database connection and do the SQL operations, ODBC should work similarly for any other programming environment.

 

SAP supports a variety of integration technologies and methods for writing interfaces to connect with its data, from its native and proprietary RFC techniques (my favorite) to web-based and open-standards techniques.

 

This time I thought of using JDBC to connect to HANA. Again, I have used R in my example; it should work similarly when writing a Java-based application. One key dependency is "ngdbc.jar": this JAR file, with its location, needs to be specified in the build path of a Java application. It comes as part of the HANA client install and is needed for integrating the R environment with HANA as well.

 

I am using HANA on the cloud, so I have to use a secure database tunnel to connect and get my user credentials. JDBC works with an on-premise HANA install as well; however, you may need to contact your HANA system administrator for the parameter values to be used in the function calls.

 

The R script snippet using JDBC to connect to HANA is shared below:

 

install.packages("RJDBC")
library("RJDBC")
# com.sap.db.jdbc.Driver is the HANA JDBC driver class shipped in ngdbc.jar;
# the third argument of JDBC() is the identifier quote character used by RJDBC
drv <- JDBC("com.sap.db.jdbc.Driver", "<path of ngdbc.jar>", "'")
conn <- dbConnect(drv, "jdbc:sap://<host>:<port>/?currentschema=<your HANA Schema>",
                  "HANA User ID", "HANA Password")
res <- dbGetQuery(conn, "select * from Schema.Table")

 

 

With the correct system credentials, schema and table name filled into the above R script, I was able to read the Twitter messages stored in the HANA table into the res data.frame.

 

So, in effect, data can be read from HANA and brought back into an R data.frame object, which can then be used for further analysis.

 


 

So another successful test connecting to HANA using R!! This time using JDBC.

Hi Folks,

 

I got one of those requirements where we were supposed to get the nearest cities (by distance) based on the user's location. This is a simple requirement which has already been solved in multiple ways.

 

Having never used the geospatial functions before, I got myself started learning them, and I am blogging here to share my initial experiences.

 

Let's take an example of how to calculate the distance between two cities.

 

Let us create a table CITY, where we store the city name and its coordinates.

 

CREATE COLUMN TABLE CITY(
Id BIGINT not null primary key generated by default as IDENTITY, /* to automatically fill the Id column */
City NVARCHAR(40) NULL,
LongLat ST_GEOMETRY (4326)); /* using the ST_GEOMETRY type, a supertype that can hold point data */



Some observations while creating the table:

 

1) ID Column:

 

Here I used an identity column to generate the numbers for the ID column; you can see more details about it in the blog by Lars:

Quick note on IDENTITY column in SAP HANA

 

2) Longitude & Latitude points:

 

I have loaded the Latitude and Longitude details with the information I got from this website: Geographic coordinates of Hyderabad, India. Latitude, longitude, and elevation above sea level of Hyderabad

 

3) ST_GEOMETRY:

 

We are using this data type ST_GEOMETRY to load our coordinates for the city.

 

SRID Value 4326:

 

4326 is a spatial reference identifier (SRID) that refers to the WGS84 standard, which is commonly used.

 

Now let us load the data:

 

insert into CITY (City,LongLat) values('Hyderabad', new ST_POINT('POINT(78.4744400 17.3752800)'));
insert into CITY (City,LongLat) values('Vishakapatnam', new ST_POINT('POINT(83.3000000 17.7000000)'));

Note: while inserting, we can also pass the SRID value to the ST_POINT constructor, but it will have no effect here; the SRID will remain 4326, because we created the table with 4326 as the reference. With SRID 4326, a calculated distance is returned in metres.

 


 

If we had created the table like below:

 

CREATE COLUMN TABLE CITY(
Id BIGINT not null primary key generated by default as IDENTITY, /* to automatically fill the Id column */
City NVARCHAR(40) NULL,
LongLat ST_GEOMETRY); /* ST_GEOMETRY without an SRID defaults to SRID 0 */

 

And you have used the below insert statements:

 

insert into CITY (City,LongLat) values('Hyderabad', new ST_POINT('POINT(78.4744400 17.3752800)',4326));
insert into CITY (City,LongLat) values('Vishakapatnam', new ST_POINT('POINT(83.3000000 17.7000000)',4326));

 

then the SRID will still refer to the default value, i.e. 0, which you can verify with:

 

SELECT LongLat.ST_AsEWKT() FROM CITY;


Hence we specify 4326 as the reference when creating the table itself.

 

OK! Now we have data, so let us create a stored procedure to calculate the distance between the two cities, Hyderabad and Vishakapatnam, and convert the distance into kilometres or metres as required.

 

Procedure Code:

 

CREATE PROCEDURE SP_CALC_DISTANCE
( IN Longitude DECIMAL(18,10), IN Latitude DECIMAL(18,10), IN Conversion NVARCHAR(10))
LANGUAGE SQLSCRIPT AS
BEGIN
DECLARE STRING_STR VARCHAR(200);
/* Converting the metres to kilometres when requested */
IF :Conversion = 'KM'
THEN
EXECUTE IMMEDIATE ('select A.City AS "Origin City", B.City AS "Destination City",
A.LongLat.ST_Distance(B.LongLat)/1000 AS "Distance(KM)"
from CITY A, CITY B
where A.id = 1 and B.id = 2');
ELSE
EXECUTE IMMEDIATE ('select A.City AS "Origin City", B.City AS "Destination City",
A.LongLat.ST_Distance(B.LongLat) AS "Distance(meters)"
from CITY A, CITY B
where A.id = 1 and B.id = 2');
END IF;
/* Calculating the distance from the location point given in the input against the table */
STRING_STR := 'SELECT NEW ST_Point(''POINT(' || :Longitude || ' ' || :Latitude || ')'',4326).ST_Distance(LongLat)/1000 AS "Distance(KM)" FROM CITY WHERE id = 2';
EXECUTE IMMEDIATE (:STRING_STR);
END;

 

CALL SP_CALC_DISTANCE (78.4744400,17.3752800,'KM')     

 

Output:

 

Screen Shot 2014-07-09 at 2.48.13 PM.png

 

Screen Shot 2014-07-09 at 2.48.30 PM.png

 

CALL SP_CALC_DISTANCE (78.4744400,17.3752800,'Metres')     

 

Output:

 

Screen Shot 2014-07-09 at 2.50.50 PM.png

 

Note that the distance you are seeing is the distance between those 2 points.
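As a rough sanity check on the value returned for SRID 4326, the great-circle distance between the two coordinate pairs can be approximated with the haversine formula. This is only a spherical approximation; HANA's geodesic calculation on the WGS 84 ellipsoid may differ slightly.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lon1, lat1, lon2, lat2, r=6371.0):
    """Great-circle distance in km between two (lon, lat) points on a sphere."""
    lon1, lat1, lon2, lat2 = map(radians, (lon1, lat1, lon2, lat2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * r * asin(sqrt(a))

# Hyderabad vs Vishakapatnam: roughly 510-515 km by this formula
print(haversine_km(78.4744400, 17.3752800, 83.3000000, 17.7000000))
```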

 

The documents referenced below helped me learn and should also help you explore further.

 

References:

 

Hana SPS07, the spatial engine and taking a byte out of the Big Apple

Hana SPS07 and spatial data

Reverse Geocode your HANA Data with this XS JavaScript Utility

Serving up Apples & Pears: Spatial Data and D3

 

Well, that was my first exercise on spatial processing, and we got some results. I hope you enjoyed the blog and will join me on this learning journey.

 

Yours,

Krishna Tangudu

R is an open source programming language popular among data scientists, statisticians and data miners for statistical computing and data visualization. It is widely used for advanced data analysis.

SAP HANA also provides excellent support for advanced analytics, and R code can be embedded in HANA SQL procedures. However, the R environment offers a much larger and more advanced set of statistical functions, along with a growing collection of libraries. When these more advanced capabilities are needed in HANA SQL procedures, HANA can be integrated with the R environment as shown in the image below.

R Integration Pic 1 - Inside Out.jpg

In this approach, the advanced analytical and statistical capabilities of the R environment are leveraged by transferring data from HANA to R; the aggregated results are then sent back to HANA.

But what about a scenario where HANA is treated as just another database and connected using ODBC or JDBC? I thought of testing a use case in which an R session fetches data from HANA tables into its own environment. I call this the outside-in approach, since this way of reading data is seen from the R developer's perspective. If this works, it opens up many possibilities in scenarios where ERP data needs to be viewed and merged with data from other domains. I am using example data from a previous exercise in which I saved Twitter messages into a HANA table. Although a lot of linguistic analysis can be done in HANA, much more extensive analysis using regular expressions is possible in the R environment.

 

RStudio console snapshot images of a successful connection to HANA system are provided below.

 

R Integration Pic 2 - Console HANA connected.JPG

The statement tweets <- sqlFetch(connection,"MyHanaSchema.Table") reads the data from the HANA table into an R data frame.

 

Here is the snapshot image of the tweets data in R .

R Integration Pic 3 - Tweets sample records from HANA.JPG

Note: Duplicate Twitter messages also exist in the HANA table.


There are a few parameters in the sqlFetch() function that control how many records are fetched. Next, I am going to test that, write my own SELECT statement and run it with sqlQuery(), and perhaps even try a JDBC connection to HANA.

 


Innovation is what drives progress and changes society; without it, progress would come to a screeching halt. Whether you are a leader or a follower, innovation and disruption are going to happen. You need to lead with innovation, or be left to respond to disruption. These are the critical success factors in business today as the pace of change accelerates, and more importantly, innovation impacts how we perform our work and live our daily lives.

 

However, innovators are most often the unsung heroes who dare to dream, possess the creativity to envision new possibilities, and have the courage to seize the opportunities presented by new technologies. The SAP HANA Innovation Awards were designed to showcase and honor customers innovating with SAP HANA.

 

The winners were announced on June 3 at Sapphire. Watch the video to experience the excitement as the winners are announced and the first place winners receive their “big cheques” for charity. Congratulations to the winners on your achievement.

 


Trailblazer Award 2.jpgBig Data Award 2.jpgSocial Hero Award.jpg

 

View the winners on the award website, read the blog by Steve Lucas, or listen to the session from Sapphire, where Dan Lahl from SAP presents the HANA Innovation Award together with Steve Pratt from CenterPoint Energy, the first-place winner in the Big Data Innovator category.

 

Everyone loves a good contest

 

  • Customers using SAP HANA in production in 15 countries were invited to enter their innovation story in 3 categories: Trailblazer Innovator, Social Hero Innovator and Big Data Innovator. Thanks to every one of you who participated and shared your story – every one of you is a winner!
  • Check out the 27 amazing stories shared by leading brands from 7 countries across the globe. They provide a simple and powerful way for you to relate to their challenges and see how they are empowering specific individuals by using SAP HANA. More importantly, they can help you find your own inspiration in the possibilities of using SAP HANA to drive innovation.
  • 22 finalists were selected by a public voting format similar to American Idol and Dancing with the Stars. It was exciting to see how engaged folks got and how they mobilized colleagues, friends and even family to vote. The social buzz for the contest hashtag #HANAStory generated over 6M impressions on Twitter, 6.7K votes and 76.1K website visits, with a voting frenzy in the last 2 days of voting.
  • stats.jpg

 

social media 2.png

  • The SCN team developed missions for the SAP Community to get involved; almost 1,800 SAP Community members participated in the Ready for the Innovation Award mission.

missions.jpg

 

 

  • SAP user groups were invited to nominate judges, and user groups from Germany, Netherlands, Spain, UK and USA submitted nominations. We also had an incentive for user groups to help spread the word and inform their members about the opportunity to participate in the award. I am pleased to announce that ASUG has won the user group incentive for the most entries. Congratulations, ASUG!
  • A panel of 11 judges, comprising SAP user group members and SAP employees, reviewed and scored each of the 22 finalists using a score card to select the final winners. Thank you all for your hard work.


SAP HANA Innovation Awards - Be a Winner Next Year!

 

 

 

 

SAP and the community want to learn from your amazing story.

tweet.png

How are you using SAP HANA to transform and re-imagine your business, or to change the world? Tell us how you are using SAP HANA to:

  • Improve business processes and performance
  • Overcome data management challenges
  • Drive new business models
  • Empower your workforce, customers and society

Stay tuned for more details about the contest.

 

Last but not least please fill in the short survey on your experience and help us improve aspects that did not work so well. Link to survey

 

 

Thank you to everyone who supported this incredibly exciting project.

PuurC.png

Technology is all around us, but it is not always easy to grasp, let alone to understand how it can change our lives.

HANA seems to be one of those difficult-to-grasp technologies for many companies. Having lived with client-server architecture for decades and being used to traditional databases, the challenges of SAP HANA may seem daunting.

Even worse, SAP initially marketed SAP HANA as just a really fast database, or at least that's how many people understood the message.

 

So we at our company decided that the time was right to level the playing field and explain to organizations what real added value the HANA platform can bring to their company.

crowd.JPG

 

Technology is Child's Play

Any marketeer will tell you that location is one of the 3 main pillars for getting your message across. To show the invited crowd how technology is overtaking daily life, and how the next generation of employees is playing with science and technology in ways we never imagined, we organized the event at Technopolis.

Technopolis 002.jpg

We wanted to present the HANA platform to our audience, not from a technological perspective, but from a perspective that a kid would take.

Not about: How does it work?

But about: How can I play with it? What will it do for me?

 

Content

Keynote

Dr. Juergen Hagedorn from SAP Walldorf was kind enough to deliver the keynote for our event. He had just returned from a holiday but made a smashing effort to be present at the HANA event and level the playing field. During his keynote, Dr. Hagedorn outlined the future vision for the SAP HANA platform and explained that HANA is more than just a database: it is an entire platform on which you can build your enterprise IT systems, both SAP and non-SAP.

 

User Experience - Powered by HANA

MRP.png

Next up, yours truly dived deeper into the user experience powered by HANA. Everyone already knows what Fiori is and what it looks like. Many people, however, wonder why so many Fiori applications only run on HANA. During the UX session, we demonstrated what the additional computing power of HANA can do for your user experience.

 

Remember that user experience is not just about a fancy screen. It's about bringing data and functionality to the user in a way that he understands what he's doing and can do it intuitively. In that sense, HANA is very important to UX because the extra computing power allows you to aggregate data on the fly and present it to the user in a summarized view (preferably graphically).

 

We also demonstrated how the HANA system points the employee to problems that will arise in the future and actively helps the employee understand and analyze the problem. Moreover, the HANA system can simulate possible solutions and propose the most applicable scenarios to the user. In turn, the employee can visualize each scenario, understand what the solution entails, and eventually choose the most appropriate one.

 

More scenarios such as advanced search and step-by-step applications followed, which really impressed the audience. It helped them understand that HANA does not just speed up their current processes; it actually makes employees more efficient and helps them deliver better quality work.

 

Data Analysis - Powered by HANA

Our own data scientist Kim Verbist demonstrated to the audience how SAP HANA, in combination with Lumira, makes every employee a data scientist. The ease of use that Lumira brings enables every employee to explore data within his or her limits of authorization and build reports on top of it.

 

No longer do you require lengthy phases of blueprinting, development, change requests and testing; now you can simply organize workshops with employees and experts side by side. They can work together interactively and, at the end of the workshop, have a complete report and storyline ready for productive use.

 

HANA doesn't just speed up your report execution, it speeds up the process of building new reports!

data.png

 

Business processes - Enabled by HANA

Our distinguished business process veteran Luc De Winter explained to the audience how the massive computing power of HANA enables opportunities for new business processes that were not possible before due to technical limitations. Gone are those limitations now, and the sky is the limit.

 

Of course, you can just use HANA as a fast database underneath your ERP system, but why linger in the consume phase when there is so much more to gain by improving your existing processes?

He demonstrated his point by explaining the allocation process in the Retail industry, which typically generates massive amounts of data and takes a nightly batch job to run. On HANA, this process can run online.

Pink_Shoes_with_handbag.jpg

If we take it a step further and adapt our processes to the new reality, we can also enable new business processes. We did exactly that on our own RetailOnHana system, with a process called re-allocation.

This is something which does not exist in SAP Retail today. The allocation process is always about pushing products from distribution centers to stores, which is difficult enough in a classic environment. With re-allocation, you can analyse which stores are selling a lot of certain products and running out of stock, while also identifying stores that have too much stock and are running behind on their sales. Rather than discounting leftover stock, we can analyse which nearby stores can re-allocate some of their stock to the stores showing impressive sales figures. This maximizes your margin and efficiency, and it was not possible before.

 

In a last phase, you can even start inventing possibilities. The example showed how we can analyse historical sales data to cross-reference popular products and determine which products are often bought together. The pink handbag and the pink shoes appeared to be quite a popular combination, so using the processes of allocation and re-allocation, we can push leftover pink handbags from shop A to shop B, distribute the pink shoes from the distribution center to shop B, and start a new fashion trend!

 

The road to HANA

By this time, our audience had fully come to grips with HANA as an enabler for their data, processes and people, and they were curious to know how to move to a HANA-based architecture with as little risk and friction as possible.

 

My colleague and fellow Mentor Tom Cenens described how an organization can start experimenting with the HANA platform, move its data, start exploring, and eventually move its entire ERP system onto HANA. He described the different low-cost options available for taking your first steps on the journey to HANA, and how to eventually migrate entirely.

Path forward.png

A nice additional feature of SAP HANA is HANA Answers, where your employees can collaborate around questions coming from the organization itself. These can be business questions, but just as well technical ones. This type of built-in collaboration platform helps ease the transition by combining the available talent and intelligence to overcome hurdles.

 

Feedback

Afterwards, we lingered a while longer to engage with the participants and give them more one-on-one advice. We also received a lot of positive feedback. Many participants had feared that the event would mainly be about marketing HANA, but they were pleasantly surprised to receive a good mixture of technical aspects, vision and business cases.

 

Some participants even came to us and said: "Today, you gave us the ammunition we need to justify an investment in HANA to our business."

In the past, the business would regard HANA as a technical investment, but with the demos we gave, we managed to convince the audience that the HANA platform really is a business solution.

 

reception.JPG

 

Final words

The Belgian HANA event was a tremendous success, made possible only with the help of many people. So I want to give a big shout-out to everyone who participated, behind the scenes and on stage: all those people who invested evenings and weekends in setting up our own HANA system, building the demos, providing data, organizing the location, and more.

 

A big Thank You!

When I try to execute backup.sh I am getting the below errors:

 

./bacup.sh: line 1056:/usr/sap//HDB//trace/script_log_backup_fri.txt: No such file or directory

 

 

./bacup.sh: line 365:/usr/sap//HDB//exe/script_log_backup_fri.txt: No such file or directory

 

 

./bacup.sh: line 385:/usr/sap//HDB//exe/script_log_backup_fri.txt: No such file or directory

./bacup.sh: line 392:/usr/sap//HDB//exe/script_log_backup_fri.txt: No such file or directory

 

Please help me to solve this.

 

Thanks,

Gopinath.

Sentiment analysis on comments made by customers or users can provide insights into more than just what they like or dislike. The key is to approach it like a good questionnaire or survey: aim to find not only the obvious inferences from the individual ratings or responses, but also the less obvious ones obtained by analyzing responses across multiple attributes, much like Gallup surveys, which can quite accurately predict results for a large constituency by sampling only a minuscule cross section of people.


This post continues my thoughts on feedback and sentiment analysis shared earlier in Voice of Customer, Sentiment analysis & Feedback service, where I used the text analysis features available in HANA to perform sentiment analysis on customer feedback collected from external web sites.

That was done by giving users or consumers an HTML form on a web site to enter their ratings and free-text reviews, and then using a jQuery/AJAX call to a service on HANA Cloud Platform to collect those ratings and feedback.

 

But how about gathering feedback from social media and analyzing what people or customers are saying there, for example on Twitter?

 

Twitter provides a REST API to search message feeds, or tweets; however, it requires requesting applications to authenticate all requests with OAuth. This means an OAuth access token must be present in each request. As a consequence, anyone using the Twitter search service has to create a developer account at http://dev.twitter.com. The consumer and access token keys are then assigned to that account.

With these tokens, REST API searches can be made across all messages, or across the messages of a specific person or account.

 

Though there are several ways to invoke the Twitter API, client-side jQuery/AJAX methods do not work. Workarounds may exist, but they are not recommended; a server-side call to the Twitter API is the recommended, secure method.

Searching for Twitter messages can be implemented in many ways; for example, a Java application using the Eclipse IDE and the open source Java library available at http://twitter4j.org. However, I chose to use a PHP script to search Twitter messages on my local server. There are many open source libraries and widgets that work with OAuth; I used the twitteroauth PHP library. It can be downloaded from GitHub, and the source can easily be included in a PHP script.

 

An example search: say you want to find all tweets containing the word "architecture", limited to 10 messages.

 

 

 

Snippet of PHP code that will do this search is shared below.

 

<?php
require_once('twitteroauth/twitteroauth.php'); /* include the twitteroauth library; adjust the path to where you placed it */
$consumer = "put your consumer key";
$consumersecret = "put your consumer secret id";
$accesstoken = "put your access token";
$accesstokensecret = "put your access token secret";
$twitter = new TwitterOAuth($consumer, $consumersecret, $accesstoken, $accesstokensecret);
$tweets = $twitter->get('https://api.twitter.com/1.1/search/tweets.json?q=architecture&result_type=mixed&count=10');
?>

However, if you want to make the search more flexible and search for any word in Twitter messages, below is the PHP snippet I used.

 

<html>
  <head>
               <meta charset = "UTF-8" />
               <title>Twitter Search using PHP</title>
  </head>
  <body>
               <form action = "" method = "post">
               <label> Search : <input type="text" name="keyword" /> </label>
               </form>
<?php
  /* $twitter is the authenticated TwitterOAuth object created as shown above */
  if (isset($_POST['keyword'])) {
    $tweets = $twitter->get('https://api.twitter.com/1.1/search/tweets.json?q=' . urlencode($_POST['keyword']) . '&lang=en&result_type=mixed&count=50');
    if (isset($tweets->statuses) && is_array($tweets->statuses)) {
      if (count($tweets->statuses)) {
        foreach ($tweets->statuses as $tweet) {
          echo $tweet->user->screen_name . '<br>';
          echo $tweet->created_at . '<br>';
          echo $tweet->text . '<br>';
          echo '*************************************************************************************' . '<br>';
        }
      }
    }
  }
?>
</body>
</html>
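Free-text keywords must be URL-encoded before they are appended to the search URL, so that spaces, '#' and non-ASCII characters survive the request. A small sketch of the same query-string construction (written in Python here for brevity; the function name is mine):

```python
from urllib.parse import urlencode

def search_url(keyword, count=50):
    """Build the Twitter search URL with the keyword safely URL-encoded."""
    base = "https://api.twitter.com/1.1/search/tweets.json"
    params = {"q": keyword, "lang": "en", "result_type": "mixed", "count": count}
    return base + "?" + urlencode(params)

print(search_url("big data"))  # spaces become '+', '#' becomes '%23', etc.
```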

 

 

 

We can now search and print Twitter feeds using a PHP script. However, an important question remains unanswered: how do we save these message feeds into HANA? Again, several approaches can be taken. In the past I have used cURL libraries to invoke REST web services and the POST method to update data. This time I decided to use ODBC. I am using HANA on a cloud platform, so I need to open a secure DB tunnel to access the database from my client machine. To open the tunnel I used the Neo tool, which is available in the HANA Cloud Platform tools download area. When the tunnel is open, it provides a temporary host, port, user and password for accessing the HANA database on the cloud instance. These values need to be provided in the PHP script that uses ODBC to connect to the HANA database; similar values are needed to connect to an on-premise HANA database in the same fashion. A PHP code snippet to connect to the HANA database is provided below.



<?php
$driver = 'HDBODBC32'; // I am using the 32-bit driver
$host = "localhost:port";
// Default name of your HANA instance
$db_name = "HDB";
$username = "Your user name";
$password = "Your password";
// Connect now
$conn = odbc_connect("Driver=$driver;ServerNode=$host;Database=$db_name;", $username, $password, SQL_CUR_USE_ODBC);
if (!$conn)
{
    // connection failed
    echo "ODBC error code: " . odbc_error() . ". Message: " . odbc_errormsg();
}
else
{
    echo "connection success";
}
?>

 

 

Once the ODBC connection is established, all SQL operations such as SELECT and INSERT statements can be performed by building the SQL command in a string variable, for example $sql, and executing it with odbc_exec(), for example: $result = odbc_exec($conn, $sql);

When all operations on the HANA database are finished, close the database connection with odbc_close(), for example: odbc_close($conn);
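One practical wrinkle: tweet texts often contain apostrophes, so any SQL composed as a plain string for odbc_exec() needs single quotes doubled (or, better, parameter binding). A small sketch of that escaping step, with hypothetical names, written in Python for brevity:

```python
def tweet_insert_sql(table, user, created_at, text):
    """Compose an INSERT for string-based execution; double any single quotes."""
    esc = text.replace("'", "''")  # SQL literal escaping for apostrophes
    return "INSERT INTO \"%s\" VALUES ('%s', '%s', '%s')" % (table, user, created_at, esc)

print(tweet_insert_sql("MyTweetsTable", "bob", "2014-07-09", "HANA's fast"))
```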



The HANA table that stores the tweet data is named "MyTweetsTable", with columns for tweet information such as the tweet user, creation date, message text (TEXT) and the hashtag values in the message.

 

Using the HANA SQL statement CREATE FULLTEXT INDEX "TWEETS_myindex" ON "MyTweetsTable"("TEXT") TEXT ANALYSIS ON CONFIGURATION 'EXTRACTION_CORE'; another HANA table gets created containing the text analysis of the tweet messages. Another configuration option for text analysis is LINGANALYSIS_FULL.

This text analysis option parses the text message into different parts of speech such as nouns and adjectives. Once categorized by their grammar, these words, called tokens, are grouped into different categories. For example, the word SAP is a noun in the category Organization, while CNN is categorized as Media.

Since the text analysis table also maintains how many times a token or word has occurred, it is now fairly easy, as I mentioned in my previous post, to know which word is trending most.

 

For example, I searched Twitter messages for the word "SAP" and found that "SAP Package Technologies" was talked about most in the tweets. This text analysis index table can be queried in many different ways to get more insights into the data.
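Conceptually, the trending-word query is just a frequency count over the tokens the index produced. An illustrative tally on toy data (not the real index table; the token list is made up):

```python
from collections import Counter

# Toy tokens, as a text-analysis index might extract them from tweets
tokens = ["SAP", "HANA", "SAP", "cloud", "HANA", "SAP"]

# Count occurrences and list the most frequent tokens first
top = Counter(tokens).most_common(2)
print(top)  # [('SAP', 3), ('HANA', 2)]
```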


twitter feed text analysis.gif

 

 

 

Similar to the text analysis index table, another index table can be built to do sentiment analysis on these tweet messages. This time, use the configuration option EXTRACTION_CORE_VOICEOFCUSTOMER instead of the EXTRACTION_CORE used in the previous example.

 

Note that HANA will not allow creation of another text index if an index already exists on a column; since the text column in MyTweetsTable already had a text analysis index built on it, I needed to create a copy of the original table storing the tweet messages. This is easily done by reusing the same SQL CREATE statement used for the original table, but with a different table name. The data can then be copied with another SQL statement: insert into "Name of the target table_copy" select * from "Name of the source table_original".

 

In the new voice-of-customer index table, the token categories are analyzed much further: they indicate not only whether tokens are organization names, product names or social media names, but also classify them into positive and negative sentiments such as weak or strong. It can even probe messages to identify whether they are asking for information, categorizing them as request types.

 

Questions such as "What was mentioned most in these tweets?" and "What kinds of sentiments were in these tweets?" are easily answered from the voice-of-customer index table. See the snapshot images below from a query on the table.

VOC - what was mentioned most.JPG

Notice that most sentiments were weak positive comments.

VOC - What Sentiments used  in tweets.JPG

If we want to explore which words were used in strong positive sentiment tweets, those can also be easily identified; the results are shared below. The same analysis can be done for negative sentiment comments as well.

VOC - What made Strong Postive Sentiments used  in tweets.JPG

 

 

Also, since younger people tend to use emoticons more, the data can be queried to see which emoticons were used in these Twitter messages, giving a general feeling for happy, smiley and sad faces; the image below shows exactly that.

VOC - What emoticons were used in tweets.JPG

 

One of the categories in which words were grouped together is PRODUCT. The image below shows what kinds of products were being referred to in the tweets.

VOC - What PRODUCTS  were named in tweets.JPG

 

If one wants to know which organizations were talked about most in these Twitter messages, a query on the indexed table provides the result.

VOC - What organizations were used in tweets.JPG

 

Twitter messages carry lots of other useful information, such as how many times a tweet has been retweeted or liked, user demographics, location and so on. By capturing all these attributes from the Twitter API, a much more meaningful sentiment analysis can be done.

So the next time I watch TV and broadcasters claim that people in different parts of the world are cheering most for a particular team, it will no longer be a mystery to me!

 

Analysis of this Twitter (or any social media) data needs to be done carefully to get insights that are useful to the business. Needless to say, it is highly dependent on the quality of the data. Perhaps it makes more sense to monitor customer sentiment during certain time windows, such as a State of the Union address by the president, or during special business events, i.e. whenever a new product is launched or a commercial is being played in the media.

 

Technology is here to support the business; the goal is to make it work in real use cases where positive sentiment increases over time (or negative sentiment declines).

The upside of a new release every six months or so is the speed of innovation that can occur. The downside for us busy people is simply keeping up. Well, you will be happy to know that the team from the SAP HANA Academy has you covered. Since the release at the end of May, we have added over 60 videos highlighting the new and updated features found in SPS08. Rumor has it there are even more topics on their way in the coming weeks. If you haven't checked it out, you should!

 

Why not spend a nice summer evening with your favorite cool beverage, a laptop or smart phone, and brush up on some SAP HANA happenings? 

Check out these new playlists introducing new capabilities with the SAP HANA platform:

SAP HANA Answers

SAP HANA Installations - SPS 08

SAP River Rapid Development Environment

SAP HANA SPS 08 - What's New

These topics received updated videos and highlighted new features from the release:

SAP HANA Security

SAP River

Modeling and Design with SAP HANA Studio

SAP HANA studio

SAP HANA smart data access

SAP Enterprise Cloud

Predictive Analysis Library

SAP HANA Cloud Platform

 

And one for the sports fan in all of us:  a video series showing how we used play-by-play data from the recent IPL Twenty20 tournament to create insight into the matches using SAP HANA and SAP Lumira.

Cricket

 

Don't miss any new updates – subscribe to our YouTube Channel today!  And if you have some free time while your colleagues are on vacation, why not tackle a hands-on step-by-step project like SAP HANA Academy Live2 – with videos and code samples to guide your way?  It's more fun than summer school!
