
SAP Predictive Analysis


First, some background about the issue:
InfiniteInsight (II) does not let you select your analytical views, calculated views, and so on in its user interface.

 

In the background, II uses the capabilities of the ODBC driver to get the list of "data spaces" to present to the user, using a standard ODBC catalog function.

Unfortunately, the HANA ODBC driver does not currently include the names of analytical views and calculated views in that list.

 

However, this ODBC driver behavior can easily be bypassed in two ways:
- simply type in the full name of the calculated view (including the catalog name), like "PUBLIC"."foodmart.foodmart::EXPENSES"
- configure II to use your own custom SQL that lists the items you want to display.

The second feature exists in II to restrict the list of tables, for example when your data warehouse has hundreds of schemas.

 

One file needs to be changed, depending on whether you are using the workstation version (KJWizard.cfg) or the client/server version (KxCORBA.cfg), by adding the following content:

 

ODBCStoreSQLMapper.MyDSN.SQLOnCatalog1="  SELECT * FROM (   "

ODBCStoreSQLMapper.MyDSN.SQLOnCatalog2="   SELECT '""' || SCHEMA_NAME || '""', '""' || OBJECT_NAME || '""', OBJECT_TYPE FROM SYS.OBJECTS WHERE OBJECT_TYPE IN ('TABLE', 'VIEW') AND SCHEMA_NAME NOT LIKE '%%SYS%%'   "

ODBCStoreSQLMapper.MyDSN.SQLOnCatalog3="  UNION ALL   "

ODBCStoreSQLMapper.MyDSN.SQLOnCatalog4="   SELECT '""' || SCHEMA_NAME || '""', '""' || VIEW_NAME || '""', VIEW_TYPE FROM SYS.VIEWS WHERE NOT EXISTS (  "

ODBCStoreSQLMapper.MyDSN.SQLOnCatalog5="         SELECT 1 FROM _SYS_BI.BIMC_VARIABLE_ASSIGNMENT A JOIN _SYS_BI.BIMC_VARIABLE v ON a.CATALOG_NAME = v.CATALOG_NAME AND a.CUBE_NAME = v.CUBE_NAME AND a.VARIABLE_NAME = v.VARIABLE_NAME  "

ODBCStoreSQLMapper.MyDSN.SQLOnCatalog6="         WHERE SCHEMA_NAME = a.CATALOG_NAME AND VIEW_NAME = a.CUBE_NAME AND ( MANDATORY = 1 OR MODEL_ELEMENT_TYPE IN ('Measure', 'Hierarchy', 'Script') )  "

ODBCStoreSQLMapper.MyDSN.SQLOnCatalog7="   ) AND IS_VALID= 'TRUE' AND VIEW_TYPE IN ('CALC', 'JOIN')   "

ODBCStoreSQLMapper.MyDSN.SQLOnCatalog8="  ) order by 1,2   "

 

 

In this example I only include tables, views, and calc and join views that have no mandatory variables and no 'Measure', 'Hierarchy', or 'Script' variables at all.
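For clarity, here is a sketch of how the numbered SQLOnCatalog fragments appear to combine into one statement once the cfg escaping is undone. This is my own reading of the cfg syntax, not documented II behavior, and the fragment list below is abbreviated to three of the eight lines:

```python
# Hedged sketch: the SQLOnCatalog1..8 fragments appear to be concatenated in
# numeric order into one SQL statement. The unescaping rules assumed here
# ('""' -> '"', '%%' -> '%') are my reading of the cfg syntax, not documented.
fragments = [
    '  SELECT * FROM (   ',
    '   SELECT \'""\' || SCHEMA_NAME || \'""\', \'""\' || OBJECT_NAME || \'""\', OBJECT_TYPE '
    "FROM SYS.OBJECTS WHERE OBJECT_TYPE IN ('TABLE', 'VIEW') AND SCHEMA_NAME NOT LIKE '%%SYS%%'   ",
    '  ) order by 1,2   ',
]

def unescape(fragment):
    # undo the cfg-file escaping so the fragment is plain SQL
    return fragment.replace('""', '"').replace('%%', '%')

effective_sql = ' '.join(unescape(f) for f in fragments)
print(effective_sql)
```

Running the assembled statement directly in HANA Studio is a quick way to check that your custom catalog SQL returns the views you expect before wiring it into the cfg file.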

 

You may need to adjust this configuration SQL if you want to list Smart Data Access objects.

 

Notice that we are changing the behavior for a single ODBC DSN (MyDSN), so this value may need to be adjusted for your environment.

You can also replace it with a star (*); the configuration will then be applied to all ODBC DSNs, which may not work with other databases.

 

Some functionality in II may still not work properly despite this workaround.


For example:

  • data manipulations require the configuration file change
  • view placeholders and, in general, view attributes are not properly supported
  • some types of aggregates are not "selectable by name", which means that if they are used by name in a select statement in HANA Studio they will not be returned (select * vs. select cols).

 

Hope this will save you some time

Hello!

This is my first post on SCN, so please be generous. :)

 

I've been working with HANA PAL for four months. My domain is time series prediction, so I'm using the *ESM function collection, especially TESM.

When I build my forecast models, I always want to visualize the results – that gives me a first sense of whether I'm on the right track. And as you know, two charts are much less "readable" than one:

 

ScreenShot041.jpg

vs

 

ScreenShot042.jpg

 

When you look at the second one, you see very clearly that your forecast is not really good, while looking at the first two you might just think "Hmm?..."

 

 

So, what we want is to merge the input PAL table/view (let's call it fact) with the output one – let's call it prediction.

 

 

There would be no problem here if you had your data in the appropriate structure by default:

ScreenShot040.jpg

 

But usually I don't.

My raw data usually comes as a PSEUDO_TIMESTAMP | DOUBLE table,

where PSEUDO_TIMESTAMP may be in mm-yyyy, ww-yyyy, yyyy.mm, yyyy.ww, and similar formats.

 

So, the question is: how do we sort it in an appropriate way and then number the rows?

 

  1. Sorting
    My solution is to transform any input pseudo-timestamp format to YYYY.[ MM | WW | DD ] with the help of the DateTime and String functions (sections 1.7.2 and 1.7.5 of the SAP HANA SQL and System Views Reference, respectively).
    Once that is done, an ORDER BY clause will work just fine.
  2. Numbering
    First I tried the undocumented HANA technical column "$row_id$", but it behaves unreliably.
    A clean and fast solution is to run the following code before the PAL call:

    -- assuming the fact table has two columns, "timestamp" and "values"; "timestamp" is the primary key

    alter table fact add ("id" bigint);
    drop sequence sequence1;  -- skip this line if sequence1 does not exist yet
    create sequence sequence1 start with 1 increment by 1;

    upsert fact select T1."timestamp", T1."values", sequence1.nextval from fact T1 with primary key;


After that you can easily create a table/view with {"id", "value"} to feed to ESM, and then left join it with the prediction results
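For readers who prefer to prototype outside HANA first, the same sort-then-number idea can be sketched in plain Python. This is my own illustration with a hypothetical mm-yyyy input; the in-database sequence approach above is what I actually use:

```python
# Hedged sketch of the two steps: normalize a pseudo-timestamp to a sortable
# yyyy.mm key, sort on it, then assign sequential ids (like the HANA sequence).
fact = [("02-2014", 10.0), ("01-2014", 8.0), ("03-2014", 12.0)]  # mm-yyyy rows

def sort_key(row):
    month, year = row[0].split("-")
    return f"{year}.{month}"          # "02-2014" -> "2014.02"

fact_sorted = sorted(fact, key=sort_key)
numbered = [(i + 1, ts, val) for i, (ts, val) in enumerate(fact_sorted)]
print(numbered)  # [(1, '01-2014', 8.0), (2, '02-2014', 10.0), (3, '03-2014', 12.0)]
```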

ScreenShot043.jpg

on fact.ID = prediction.ID


Then you visualize the final table/view of your prediction in HANA Studio -> Data Preview -> Analysis



Hope that will help you

 

Precise forecasts to all of us

These are some brief notes, including questions, answers, and polls, from yesterday's SAP webcast.  The usual disclaimer applies: anything in the future is subject to change.

 

Also note I didn't stay for the whole session so I may have missed some points.

 

1fig.png

Figure 1: Source: SAP


The speed of evolution has changed.  As Figure 1 shows, today we face the challenges and inefficiencies of the current analytics landscape, including complexity, speed, and cost (Source: @SAPAnalytics)

2fig.png

Figure 2: Source: SAP

 

SAP wants to democratize advanced analytics and make it easy, fast, and efficient, as the slide shows.

 

They want to make it easy so you don’t need advanced degrees to do this work.

3fig.png

Figure 3: Source: SAP

 

Figure 3 shows you can embed analytics so the user doesn't know it's underneath.

 

Business analysts are the lynchpin; they want things easier to use, said SAP's Shekhar Iyer

4fig.png

Figure 4: Source: SAP

 

The above shows an overview of predictive analytics solutions from SAP

5fig.png

Figure 5: Source: SAP

 

Figure 5 shows bringing together lines of business and industries to make things “efficient and effective”

SAP says to consider the new analysis that is possible with predictive analytics & put our creativity to work

 

Question:

Q: What is biggest stumbling block?

A: Complexity, KXEN – Infinite Insight combines both

6fig.png

Figure 6: Source: SAP

 

Figure 6 shows the results of attendees' poll responses.  Most of us aren't using any predictive analytics solution.

7fig.png

Figure 7: Source: SAP


A customer example is eBay. They saved millions by finding an attribute that contributed to a lack of pipeline (Source: @SAPAnalytics)

8fig.png

Figure 8: Source: SAP

 

Analogy was made that InfiniteInsight is the espresso machine & Predictive Analysis is the barista

 

Learn more about InfiniteInsight at ASUG Annual Conference, where the data modeler for the 2012 Obama Presidential Campaign discusses Using Analytics to Help Win the US Presidency

9fig.png

Figure 9: Source: SAP

 

The above shows an overview of the HANA Predictive Analysis Library (PAL)

 

Learn how a customer is using PAL – see Predictive Analytics for Procurement Lead Time Forecasting at Lockheed Martin Space Systems Using SAP HANA, R, and the SAP Predictive Analysis Toolset at ASUG Annual Conference next month.

10fig.png

Figure 10: Source: SAP

 

The above shows an overview of SAP R Integration for predictive analytics

 

Question:

Q: Can you contrast solutions – R with HANA PAL

A: The algorithms in HANA PAL are a subset of those in R, optimized to run in HANA

SAP will continue to invest in #HANA PAL and R integration; they have added 100 engineers in this area

 

Q: How do you see algorithms in KXEN?

A: There are no algorithms exposed in KXEN/InfiniteInsight – they are functions

What you see in InfiniteInsight are functions, sorted by category, rather than the algorithms

The algorithms in KXEN/II are proprietary, but they do share details

11fig.png

Figure 11: Source: SAP

 

Attendees said the biggest barrier to adopting predictive analytics is skills shortage.  Second is cost.

12fig.png

Figure 12: Source: SAP

 

Figure 12 shows the smart vending example of “Smart operations”

 

Asset management is used to keep things cold

 

It also helps personalize the experience

13fig.png

Figure 13: Source: SAP

 

The customer in Figure 13 went from 4 days to 3 hours of breakdown time in the smart vending example.

14fig.png

Figure 14: Source: SAP

Figure 14 shows a Cox case study.

 

Question and Answer

Q: I’d like to understand predictive and stochastic capabilities and how it understands unstructured data

A: address any model data

Unstructured – when building predictive models, you need to structure the data in some way

Use SAP HANA libraries, Data Services, and InfiniteInsight to structure the data

 

Q: How often do you switch models out?

A: It depends on business problem and data

The tool to manage models is InfiniteInsight Factory, which lets you reconstruct the original data set on the fly. Model management is a big piece

15fig.png

Figure 15: Source: SAP

 

Figure 15 asks who is using predictive analytics to build models in your organization.  It looks like it is mostly the business analyst

16fig.png

Figure 16: Source: SAP

 

Figure 16 is an overview of future direction/roadmap of predictive analytics solutions from SAP. For more details attend ASUG Annual Conference Session Predictive Analysis Roadmap with SAP’s Charles Gadalla.

 

 

If you missed yesterday's session, you can register for today's 7:00 PM session: http://bit.ly/RMHgEm

 

Other (source: @SAPAnalytics):

 

  • If you are interested in test driving SAP Predictive there is a free trial available at http://bit.ly/1sqaowj
  • SAP offers Rapid Deployment Solutions to speed up deployments
  • It can use the HANA smart data access feature. You can use HANA as an overlay to federate the data into Predictive Analysis.

 

ASUG Annual Conference

Preview of ASUG Annual Conference 2014: Focus on Analysis Office/OLAP/Predictive

 

Share your Story: Call for Sessions for ASUG at SAP d-code (former TechEd)

 

You are invited to submit a proposal to share your experience and expertise with your colleagues to speak at SAP d-code to be held October 20-24 in Las Vegas.  Others will benefit from your experience while you make a valuable contribution to the profession's field of knowledge.


Follow this link to create a speaker account where you can formally submit your proposal, review important deadlines, and other general information about SAP d-code.  The deadline to submit your abstract is May 25. If you have any questions, please e-mail sapdcodespeaker.info@sap.com


Upcoming ASUG Analytics Webcasts:

 

May 15: Lumira Self Service for Business User

May 21: SAP Lumira Question and Answer Session

June 23: Predictive Analysis Roadmap

September 15: Design Studio and Analysis Scenarios on HANA

This is part 2 of today’s ASUG webcast with SAP's Charles Gadalla.

 

Part 1 is Predictive Analysis - KXEN is not a Radio Station -  ASUG Webcast - Part 1

1fig.png

Figure 1: Source: SAP

 

Figure 1 shows the popularity of R, with a “hockey stick from 2011 and up”

2fig.png

Figure 2: Source: SAP

 

Figure 2 shows an example of editing a custom component with R inside Predictive Analysis.

3fig.png

Figure 3: Source: SAP

 

Figure 3 shows an example of "live editing" of the Custom R component inside Predictive Analysis.

4fig.png

Figure 4: Source: SAP

 

Figure 4 shows upcoming sharing options.

5fig.png

Figure 5: Source: SAP

 

Figure 5 shows building the deployment and solution set, and extending it through the organization

6fig.png

Figure 6: Source: SAP

 

An example of embedding is shown in Figure 6 - no one knows this is Predictive Insight, it is part of the module

7fig.png

Figure 7: Source: SAP

 

Figure 7 shows RDS content and it is “free”

8fig.png

Figure 8: Source: SAP

 

Figure 8 shows Predictive Analysis and KXEN are converging over time (subject to change).

 

Question & Answer

Q: What Predictive Analysis capabilities are available in ECC without HANA?

A: SAP InfiniteInsight EXPLORER

A: APO, BW modules; if you do not use HANA you can use Predictive and KXEN – they are not dependent on HANA.

________________________________________________________________

Q: Quite a few client tools. Is there a guide to know when to use which tool?

A: Yes, a few client tools.

A: Predictive & Infinite Insight are sold as Infinite Insight Modeler for data scientists; Lumira is a visualization tool with 2 algorithms.

________________________________________________________________

Q: Any plan on having SAP Lumira to be a thin client?

A: SAP Lumira is available in the Cloud cloud.saplumira.com

________________________________________________________________

Q: Have you seen any successful models used in healthcare that predict patient outcomes (micro) or hospital admits (macro)?

A: Health care  - SEPSIS / influenza analysis

A: Yes SEPSIS, hospital management, research etc

________________________________________________________________

 

Q: Are there any projects / RDS that use HANA to speed up pricing rebuilds?

A: Price optimization - complicated module- sister line - retail product lines using Hybris / Customer Engagement Intelligence.

_______________________________________________________________

Q: Can the tool extract data from external sources such as websites/partner portals (maybe using RSS or other feeds), and include it in my data assessment/analysis?

A: Yes, typically have intermediary of Hadoop

________________________________________________________________

Q: Which client tools are scheduled to be running in 64-bit and in-memory soon?

A: Predictive and InfiniteInsight are running in 64-bit

________________________________________________________________

Q: With regard to Lumira Server, currently the artifacts look to be persisted on HANA, what are the plans to integrate these into Business Objects Enterprise or is the idea to position Lumira Server as a lightweight content repository?

A: Lumira Server is Lumira on HANA and will integrate with the BI Platform; it will standardize as one on the BI framework

________________________________________________________________

Q: Pricing question restated – I have pricing programs that must rebuild prices based on commodity market input, and they have difficulty completing overnight.  Any projects to apply HANA to this problem?

A: If you look at pricing based on market input, projection, and trend, you can do this with HANA – PAL library algorithms such as Monte Carlo would help with simulations

________________________________________________________________

 

Q: It looks like the biggest use cases are currently in market forecasting and customer analysis. Are there any for supply chain?

A: Yes – Demand Signal Management, APO – 150+ use cases and growing.

 

Related:

Join Us at ASUG Annual Conference

 

Upcoming ASUG Webcast next month:

SAP's Charles Gadalla provided this webcast today.

 

1fig.png

 

Figure 1: Source: SAP

On the left of Figure 1, high skill sets are needed to be a data scientist, with a master's in statistics.

 

On the right side, you have business users

 

Consumers take output from data scientists and take an action.


In the middle: data analysts/business analysts – they do more than basic reporting – segmentation and forecasting, in a more sophisticated manner

2fig.png

Figure 2: Source: SAP


Data scientists on the far right of Figure 2 are already well served.

 

SAP is interested in the group in the middle, including embedding the analytics inside the workflow

3fig.png

Figure 3: Source: SAP

 

Figure 3 shows a paradox that there is a lot of “big data”.

 

We are using more data today and decisions are made in a much shorter time scale, with a huge increase in speed of algorithms

 

Every business is being asked to make decisions faster with more data

Why should I care?

 

highlights.png

Figure 4: Source: SAP

 

Figure 4 shows that back in December, SAP released a survey, showing competitive "ROI"

sap track record.png

Figure 5: Source: SAP

 

Figure 5 shows Mobilink going through 900 TB of call data records – 6M communities identified from these calls

 

MONext – decisions on fraudulent transactions in milliseconds

why acquire kxen.png

Figure 6: Source: SAP

 

It was on this slide that Charles said "KXEN doesn't stand for a radio station...it means knowledge extraction engine".  I did not know that.

advanced solution insight to action.png

Figure 7: Source: SAP

 

Insight from KXEN can view thousands of fields of data; Predictive Analysis was built inside SAP

 

Charles used as an example: if you drink a diet cola on a Tuesday, that means you had chips on the Sunday

 

Another example is to integrate and tell a story, as Predictive is built on Lumira

 

hana analytics portfolio.png

Figure 8: Source: SAP

 

Figure 8 shows data comes in from any of the channels

 

PAL is the implementation on HANA; R is maintained by universities and a consortium – popular algorithms to use and reuse, executed in memory

 

It is based on an open source language

 

Client tools on top left of Figure 8.

 

SAP combined Predictive Analysis with Insight in a tool called Insight Modeler

 

It also includes line-of-business applications – like Fraud Management, etc.

 

SAP has RDS solutions using Predictive

 

They partner with ESRI, SAS

 

Charles has a special speaker from the Obama campaign presenting on how the campaign used KXEN to win the 2012 US Presidential election.

predictive analytics portfolio on HANA.png

Figure 9: Source: SAP

 

With SAP embedded on HANA, you are not getting the SAS algorithm

 

You can see the two ways to access HANA in Figure 9

predictive and kxen.png

Figure 10: Source: SAP

 

Three options:

1) Client side – PA/KXEN – Java-based and R-based predictive – connects to relational databases and CSV

 

2) Server – Infinite Insight Explorer – connect to database (say Oracle)

a. Factory – model management – how the data looked 1, 2, or 3 months ago

b. Factory scheduling to refresh

c. Infinite Insight Social – trying to detect similar/like-minded people

d. Recommendation engine – buy brown shoes, likelihood to buy belt

 

3) Third option is HANA – with PAL in memory, connected to R

 

More to come...

 

Related:

ASUG Annual Conference has the following SAP Predictive sessions:

Session ID | Title | Start Date
202 | Predictive Analysis Roadmap | 6/3/2014
203 | Using Analytics to Help Win the US Presidency | 6/3/2014
204 | Predictive Analytics for Procurement Lead Time Forecasting at Lockheed Martin Space Systems Using SAP HANA, R, and the SAP Predictive Analysis Toolset | 6/3/2014

 

Charles is presenting the Predictive Analysis Roadmap and co-presenting "Using Analytics to Help Win the US Presidency".

 

Join us in May for ASUG Annual Conference   - Pre-Conference SAP BusinessObjects BI4.1 with SAP BW on HANA and ERP Hands-on – Everything You Need in One Day June 2nd

 

Register at: ASUG Preconference Seminars

 

 

Share your Story: Call for Sessions for ASUG at SAP d-code (former TechEd)

Share your knowledge with others and submit a proposal to speak at SAP d-code. Selected proposals will be part of the ASUG and SAP d-code: Partners in Education program, providing attendees with interactive learning experiences with fellow customers.



View the education tracks planned for this year.  If selected, you will receive a complimentary registration for the conference, and it will give you valuable professional exposure.


Follow this link to create a speaker account where you can formally submit your proposal, review important deadlines, and other general information about SAP d-code.


The deadline to submit your abstract is May 25. If you have any questions, please e-mail sapdcodespeaker.info@sap.com

How well will you do tomorrow? How can we be sure?

 

Algorithmic and biomedical advances are now giving sports coaches, managers, and team owners the tools to predict which players have peaked and which ones have their full potential ahead of them.


I don’t use quantitative methods much when it comes to sports; I think it takes away the excitement.

 

http://content.intweetiv.com/view?title=SAP+Uses+Own+Big+Data+Analytics+to+Project+Super+Bowl+Winner&iframe=http://www.eweek.com/enterprise-apps/sap-uses-own-big-data-analytics-to-project-super-bowl-winner.html/

 

After the Super Bowl finished, I saw on Twitter that SAP had predicted that Denver would win over Seattle in a close match. As it turned out, Seattle won a rather one-sided match with a very young side.

 

I didn’t work on the predictive analytics solution that made the Super Bowl prediction, and I am not authorized by SAP to provide a response. But I wanted to share my personal views on this matter.


Then I saw Vijay Vijayasankar’s discussion about the perils of predictive analytics. He makes some crucial points:

 

Predictive analytics in general cannot be used to make absolute predictions when there are so many variables involved. In fact, I think there is no place for absolute predictions at all. And when the results are explained to the non-statistical expert user, they should not be dumbed down to the extent that they appear to be an absolute prediction.

Predictive models make assumptions, and these should be explained to the user to provide context. And when the model spits out a result, it also comes with some boundaries (the probability of the prediction coming true, margin of error, confidence, etc.). When those things are not explained, predictive analytics starts to look like reading palms or tarot cards. That is a disservice to predictive analytics.

If the chance of Denver winning is 49% and Seattle winning is 51%, it doesn’t exactly mean Seattle will win. And not all users will look at it that way unless someone tells them more details.

In business, there is hardly ever an absolute prediction. Analytics provide a framework for decision making for business leaders. Analytics can say that if sales increase along the same historic trend, Latin America will outperform planned numbers next year compared to Asia. However, the global sales leader might know nuances that the predictive model had no idea of, and hence can decide to prioritize Asia. The additional context provided by predictive analytics enhances the manager’s insight and over time will lead to better decisions. The idea is definitely not to overrule the intuition and experience of the manager. Of course, the manager should understand clearly what the model is saying and use that information as a factor in decision making.

When this balance in approach is lost, predictive analytics gets an unnecessary bad rap.
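The 49%/51% point can be made concrete with a quick simulation (my own illustration, not from the original discussion): a 51% favorite still loses almost half the time.

```python
# Simulate many games where one side has a 51% chance of winning; the
# "underdog" still wins roughly 49% of the time.
import random

random.seed(42)
trials = 100_000
favorite_wins = sum(random.random() < 0.51 for _ in range(trials))
print(f"Favorite won {favorite_wins / trials:.1%} of simulated games")
```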

I thought I would post this quick blog to try to help out anybody who may come across the same issue I did.

 

When trying to do some data preparation by filtering on a date in the "Predict" tab of SAP Predictive Analysis, I received the following error when running the analysis: "An error occurred while executing the query. Error details: SAP DBTech JDBC: [266]: inconsistent datatype: 3-3-2014 is not a DATE type const: line 1 col 779 (at pos 778)". The error with a different format is the same:

 

sap-pa-inconsistent-datetype.PNG

 

Unfortunately, this error is vague and does not point out what a valid date format is or where to find the valid date formats. I tried multiple formats until I stumbled upon one that works (i.e., YYYY-MM-DD):
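If you need to normalize dates programmatically before pasting them into a filter, something like the following sketch works. The list of accepted input formats here is my own guess, not an official list of what Predictive Analysis supports:

```python
# Hedged helper: convert a few common date spellings to YYYY-MM-DD, the
# format that worked in the filter above.
from datetime import datetime

def to_iso_date(value):
    for fmt in ("%d-%m-%Y", "%m/%d/%Y", "%Y-%m-%d"):  # assumed input formats
        try:
            return datetime.strptime(value, fmt).strftime("%Y-%m-%d")
        except ValueError:
            continue
    raise ValueError(f"Unrecognized date format: {value}")

print(to_iso_date("3-3-2014"))  # 2014-03-03
```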

 

sap-pa-valid-date-format-for-filtering.PNG

Hopefully this helps some people out.

 

If you know of documentation or other date formats that are supported, please share!

Continuing from the previous post, we now explore Sentiment Analysis. First of all, let's talk about Sentiment Analysis and Text Mining and what exactly these terms mean. Wikipedia defines Sentiment Analysis as follows: "Generally speaking, sentiment analysis aims to determine the attitude of a speaker or a writer with respect to some topic or the overall contextual polarity of a document". It is also sometimes called Opinion Mining, which is extracting information from people's opinions. Opinions are usually in the form of text, and hence to do Sentiment Analysis we need some knowledge of Text Mining as well. Text Mining, in the words of Hearst (1999), is "the use of large online text collections to discover new facts and trends about the world itself". Standard techniques are text classification, text clustering, ontology and taxonomy creation, document summarization, and latent corpus analysis.  We are going to use a combination of Sentiment Analysis and Text Mining in the example scenario discussed below.

Before I start, let me make it clear that this is only sample data, analyzed purely for the purpose of learning. It is not meant to target or influence any brand. The outputs and analysis shown here are just based on opinions and should not be considered facts.


I downloaded some public opinion data regarding car manufacturers from the NCSI-UK website.

Scores By Industry

The data is from 2009-2013. My intention was just to see what the public sentiment for these manufacturers is on the social networking site Twitter, and to build a probable score for 2014 based on a Twitter sample population. The intention is just to see if the scores are similar to those obtained in 2013.

 

The steps to do sentiment analysis using SAP PA and twitter are shown below. The code is shown at the end of this post.

 

1. Load the necessary packages. Also load the credential file that stores the credential information required to connect to Twitter. This credential file was created using the steps shown in the post below. Also establish the handshake with Twitter.

2. Retrieve the tweets for each of the brands in our data set (9 in total) and save the information in a data frame for each car brand.

3. The next step is to analyse the tweets for negative and positive words. For this we use lexicons. Per Wikipedia, the word "lexicon" means "of or for words"; a lexicon is basically like a dictionary, a collection of words. For our sentiment analysis we are going to use the lexicon of Hu and Liu, available at Opinion Mining, Sentiment Analysis, Opinion Extraction. The Hu and Liu lexicon is a list of positive and negative opinion words or sentiment words for English (around 6,800 words).  We download the lexicon and save it locally, then load this file to create arrays of positive and negative words, as shown in the code. We can also append our own positive and negative words as required.

4. Now that we have arrays of positive and negative words, we compare them with the tweets we obtained and assign a score of +1 to each positive word in a tweet and -1 to each negative word. Each score of +1 is considered a positive sentiment and each -1 a negative sentiment.

The sum of the sentiment scores gives us the net sentiment for that brand. For this we need a sentiment scoring function. I have used, as-is, the function available at the website below, and give full credit to the author who created it. This function is not created by me.

How-To | Information Research and Analysis (IRA) Lab

5. After getting the sentiment score for each brand, the next step is to sum the scores and assign them to an array. We then bind this array to our original data set and use the final table to generate heat maps, as shown below:
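The core of steps 4 and 5 can be reduced to a few lines. Here is a compact Python rendering of the same scoring idea, my own sketch with tiny stand-in word lists; the full R implementation I actually used is at the end of this post:

```python
# Toy rendering of the scoring logic: +1 per positive word, -1 per negative
# word, summed per tweet and then per brand. The word lists are tiny
# stand-ins for the ~6,800-word Hu and Liu lexicon.
import re

pos_words = {"good", "great", "reliable", "upgrade"}
neg_words = {"fail", "breakdown", "wtf", "bad"}

def score_sentence(sentence):
    # strip punctuation/digits, lowercase, split into words (as in the R code)
    words = re.sub(r"[^a-z\s]", "", sentence.lower()).split()
    return sum(w in pos_words for w in words) - sum(w in neg_words for w in words)

tweets = ["Great car, very reliable!", "Another breakdown... total fail"]
brand_score = sum(score_sentence(t) for t in tweets)
print(brand_score)  # 2 + (-2) = 0
```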

Final Output with Sentiment Score

Pic12.PNG

Histogram

Pic13.PNG

Heat Maps

Pic14.PNG

 

Pic15.PNG

 

As we can see from the above analysis, although the industry score for one brand (Audi) is quite high, the current public sentiment is with another brand (Vauxhall) that had an overall low industry score. This is just a basic analysis with 500 tweets. We could extend this analysis further by increasing the number of tweets and creating a more advanced scoring function that uses other parameters like region, time, and historical data when calculating the final sentiment score.

This post serves as a starting point for anyone interested in doing Sentiment Analysis using twitter. There is certainly a lot of possibility to explore.

 

Code:

mymain<- function(mydata, mytweetnum)

{

 

## Load the necessary packages for the Twitter connection

library(twitteR)

library(RJSONIO)

library(bitops)

library(RCurl)

 

##Packages required for sentiment analysis

library(plyr)

library(stringr)

 

##Loading the credential file saved

load('C:/Users/bimehta/Documents/twitter authentication.Rdata')

registerTwitterOAuth(credential)

options(RCurlOptions = list(cainfo = system.file("CurlSSL", "cacert.pem", package = "RCurl")))

 

## Retrieve the tweets for the brands in our Excel file.

tweetList <- searchTwitter("#Audi", n=mytweetnum)

Audi.df = twListToDF(tweetList)

 

tweetList <- searchTwitter("#BMW", n= mytweetnum)

BMW.df = twListToDF(tweetList)

 

tweetList <- searchTwitter("#Nissan", n= mytweetnum)

Nissan.df = twListToDF(tweetList)

 

tweetList <- searchTwitter("#Toyota", n= mytweetnum)

Toyota.df = twListToDF(tweetList)

 

tweetList <- searchTwitter("#Volkswagen", n= mytweetnum)

Volkswagen.df = twListToDF(tweetList)

 

tweetList <- searchTwitter("#Peugeot", n= mytweetnum)

Peugeot.df = twListToDF(tweetList)

 

tweetList <- searchTwitter("#Vauxhall", n= mytweetnum)

Vauxhall.df = twListToDF(tweetList)

 

tweetList <- searchTwitter("#Ford", n= mytweetnum)

Ford.df = twListToDF(tweetList)

 

tweetList <- searchTwitter("#Renault", n= mytweetnum)

Renault.df = twListToDF(tweetList)

 

##Upload the Lexicon of Hu and Liu saved on your desktop

hu.liu.pos = scan('C:/Users/bimehta/Desktop/Predictive/Text Mining & SA/positive-words.txt', what='character', comment.char=';')

hu.liu.neg = scan('C:/Users/bimehta/Desktop/Predictive/Text Mining & SA/negative-words.txt', what='character', comment.char=';')

 

##Build an array of positive and negative words based on Lexicon and own set of words

pos.words = c(hu.liu.pos, 'upgrade')

neg.words = c(hu.liu.neg, 'wtf', 'wait','waiting','fail','mechanical','breakdown')

 

## Build the score sentiment function that will return the sentiment score

score.sentiment = function(sentences, pos.words, neg.words, .progress='none')

{

 

  # we want a simple array ("a") of scores back, so we use

  # "l" + "a" + "ply" = "laply":

 

  scores = laply(sentences, function(sentence, pos.words, neg.words) {

 

    # clean up sentences with R's regex-driven global substitute, gsub():

 

    sentence = gsub('[[:punct:]]', '', sentence)

 

    sentence = gsub('[[:cntrl:]]', '', sentence)

 

    sentence = gsub('\\d+', '', sentence)

 

    # and convert to lower case:

 

    sentence = tolower(sentence)

 

    # split into words. str_split is in the stringr package

 

    word.list = str_split(sentence, '\\s+')

 

    # sometimes a list() is one level of hierarchy too much

 

    words = unlist(word.list)

 

    # compare our words to the dictionaries of positive & negative terms

 

    pos.matches = match(words, pos.words)

    neg.matches = match(words, neg.words)

 

    # match() returns the position of the matched term or NA

    # we just want a TRUE/FALSE:

 

    pos.matches = !is.na(pos.matches)

 

    neg.matches = !is.na(neg.matches)

 

    # and conveniently enough, TRUE/FALSE will be treated as 1/0 by sum():

 

    score = sum(pos.matches) - sum(neg.matches)

 

    return(score)

 

  }, pos.words, neg.words, .progress=.progress )

  scores.df = data.frame(score=scores, text=sentences)

  return(scores.df)

}
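The scoring logic above can be exercised in isolation. Here is a minimal base-R sketch of the same idea on a single toy sentence (the word lists and the sentence are made up for illustration, and plyr's laply is replaced by a plain function call):

```r
# Toy word lists, for illustration only
pos.words <- c("good", "great", "upgrade")
neg.words <- c("fail", "breakdown", "wtf")

# Same pipeline as score.sentiment, for one sentence:
# strip punctuation and digits, lower-case, split, count matches
score.one <- function(sentence, pos.words, neg.words) {
  sentence <- gsub("[[:punct:]]", "", sentence)
  sentence <- gsub("\\d+", "", sentence)
  words <- unlist(strsplit(tolower(sentence), "\\s+"))
  sum(words %in% pos.words) - sum(words %in% neg.words)
}

score.one("Great car, no breakdown!", pos.words, neg.words)  # 1 - 1 = 0
```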

 

## Creating a Vector to store sentiment scores

a = rep(NA, 10)

 

## Calculate the sentiment score for each brand and store the score sum in array

Audi.scores = score.sentiment(Audi.df$text, pos.words,neg.words, .progress='text')

a[1] = sum(Audi.scores$score)

 

Nissan.scores = score.sentiment(Nissan.df$text, pos.words,neg.words, .progress='text')

a[2]=sum(Nissan.scores$score)

 

BMW.scores = score.sentiment(BMW.df$text, pos.words,neg.words, .progress='text')

a[3] =sum(BMW.scores$score)

 

Toyota.scores = score.sentiment(Toyota.df$text, pos.words,neg.words, .progress='text')

a[4]=sum(Toyota.scores$score)

 

##Sentiment Score for other brands is considered 0

a[5]=0

 

Volkswagen.scores = score.sentiment(Volkswagen.df$text, pos.words,neg.words, .progress='text')

a[6]=sum(Volkswagen.scores$score)

 

Peugeot.scores = score.sentiment(Peugeot.df$text, pos.words,neg.words, .progress='text')

a[7]=sum(Peugeot.scores$score)

 

Vauxhall.scores = score.sentiment(Vauxhall.df$text, pos.words,neg.words, .progress='text')

a[8]=sum(Vauxhall.scores$score)

 

Ford.scores = score.sentiment(Ford.df$text, pos.words,neg.words, .progress='text')

a[9]=sum(Ford.scores$score)

 

Renault.scores = score.sentiment(Renault.df$text, pos.words,neg.words, .progress='text')

a[10]=sum(Renault.scores$score)

 

##Plot the histograms for a few brands

par(mfrow=c(4,1))

hist(Audi.scores$score, main="Audi Sentiments")

hist(Nissan.scores$score, main="Nissan Sentiments")

hist(Vauxhall.scores$score, main="Vauxhall Sentiments")

hist(Ford.scores$score, main="Ford Sentiments")

 

## Return the results by combining sentiment score with original dataset

result <- as.data.frame(cbind(mydata, a))

return(list(out=result))

}

 

 

Code Acknowledgements:

Opinion Mining, Sentiment Analysis, Opinion Extraction

How-To | Information Research and Analysis (IRA) Lab

R by example: mining Twitter for consumer attitudes towards airlines


While doing some research on sentiment and text analysis for one of my projects, I came across a really nice blog post.

http://www.slideshare.net/jeffreybreen/r-by-example-mining-twitter-for

 

Inspired by the above, I thought of doing some sentiment analysis in SAP PA using Twitter tweets, and hence decided to go ahead and do some text mining and sentiment analysis using the twitteR package of R.

I have created a multi-series blog where we see the different things we can do using SAP PA, R and Twitter.

 

This first blog talks about how to get the Twitter data inside SAP PA and build a word cloud by building a text corpus.

 

Scenario:

I downloaded some public opinion data regarding car manufacturers from the NCSI-UK website.

http://ncsiuk.com/index.php?option=com_content&task=view&id=18&Itemid=33

The data is from 2009-2013. My intention was simply to see what the public sentiment towards these manufacturers is on the social networking site Twitter and to build a probable score for 2014 based on the Twitter sample population. I loaded the data in SAP PA. First I built a word cloud for some of the car hashtags and plotted a graph of the number of re-tweets. In the next blog postings I will be doing sentiment analysis of this data and emotion classification.

 

Before I start, let me make it clear that this is only sample data which was analyzed purely for the purpose of learning. It is not meant to target or influence any brand. The outputs and analysis shown here are just based on opinion and should not be considered facts.


Step 1: Setting up the Twitter account and API for the handshake with R

Please refer to this step-by-step document to set up the Twitter API and the settings required to call the API and get tweet data inside R.

Setting up Twitter API to work with R

 

Step 2: Getting the tweet data into SAP PA and building a word cloud.

Now we need to create a custom R component to get the data into SAP PA, create a text corpus, and display it as a word cloud. I have used the tm_map function that comes with the tm package to prepare the text corpus data for the word cloud. The various commands are self-explanatory, as shown in the comments. I have used the wordcloud package to generate the word cloud.

 

The code below lists the steps you need to follow to get the desired output. The configuration settings are shown in the screenshots below.

 

mymain<- function(mydata, mytweet, mytweetnum)

{

 

 

##Load the necessary packages

library(twitteR)

library(RJSONIO)

library(bitops)

library(RCurl)

library(wordcloud)

library(tm)

library(SnowballC)

 

 

## Enable Internet access.

setInternet2(TRUE)

 

##Load the environment containing twitter credential data (saved in Step 1)

load('C:/Users/bimehta/Documents/twitter authentication.Rdata')

 

##Establish the handshake with R

registerTwitterOAuth(credential)

options(RCurlOptions = list(cainfo = system.file("CurlSSL", "cacert.pem", package = "RCurl")))

 

##Get the tweet list from twitter site (based on parameters entered by user)

tweetList <- searchTwitter(mytweet, n=mytweetnum)

 

##create text corpus

r_stats_text <- sapply(tweetList, function(x) x$getText())

r_stats_text_corpus <- Corpus(VectorSource(r_stats_text))

 

##clean up of twitter Text data by removing punctuation and English stop words like "the", "an"

r_stats_text_corpus <- tm_map(r_stats_text_corpus, tolower)

r_stats_text_corpus <- tm_map(r_stats_text_corpus, removePunctuation)

r_stats_text_corpus <- tm_map(r_stats_text_corpus, removeWords, stopwords("english"))

r_stats_text_corpus <- tm_map(r_stats_text_corpus, stemDocument)

 

 

##Build and print wordcloud

out2 <-wordcloud(r_stats_text_corpus, scale=c(10,1), random.order=FALSE, rot.per=0.35, use.r.layout=FALSE, colors="blue")

print(out2)

 

 

## Return the twitter data in a table

tweet.df <- twListToDF(tweetList)

result <- as.data.frame(cbind(tweet.df$text, tweet.df$created, tweet.df$statusSource, tweet.df$retweetCount))

return(list(out=result))

}

 

Configuration Setting:

Pic6.PNG

Pic7.PNG

 

Running the Algorithm and getting the output:

Pic8.PNG

 

The output table (the "created" column is returned as char):

Pic9.PNG

 

Visualizations:

Pic10.PNG

 

 

 

Pic11.PNG

 

The general opinion of the public, judging from the word cloud, seems positive. However, we will do a detailed sentiment analysis of the various brands in our source file and plot the heat map based on the 2013 survey findings in my next blog. This will help us know whether current public sentiment is in line with the survey findings.

To be continued in Sentiment Analysis.

Hi,

 

Following on from Clarissa Dold's announcement about the KXEN acquisition at the end of 2013, I wanted to take this opportunity to introduce to you the latest addition to SAP's predictive analytics portfolio: SAP Infinite Insight.

 

The majority of this information is already available through Clarissa's blog and the external PA roadmap presentation. I started chatting about this topic in this discussion: Starting with KXEN - [Updated with more info], but it wasn't enough.

 

So the purpose of this blog is to offer an overview of the 'solution brief', including product positioning; a description of the current software modules and deployment options; followed by some mention of future integration plans and tentative possibilities. Finally, a consolidation of useful resources (links etc.) for your own on-boarding.

 

I've shown this type of content during regional enablement workshops, so I'm hoping it'll be of use to you too!

 

Regards,

H

 

 

  • Let's start with a positioning slide which describes some of the key benefits and features of this product. The key message here is that you don't need to be a data scientist to use the tool effectively!

 

1 intro.png

 

  • Taking this differentiation further, we can call out the specific areas where Infinite Insight has clearly gained a competitive advantage over classic data-mining vendors:

 

2 intro why.png

 

  • Infinite Insight is revolutionizing the way companies use predictive analytics to make better decisions on petabytes of big data. Its solution approach allows line-of-business users to solve common predictive problems without the need for highly skilled data scientists.

 

3 model lifecycle.png

 

  • Infinite Insight is a suite of tools providing predictive analytic applications for the automated creation of accurate and robust predictive models.  These solutions replace the classic model creation process, which is manual, repetitive and prone to human error.

 

3 overview modules.png

 

  • Explorer is an extremely powerful data-manipulation tool which allows the designer to create derived columns and row values, effectively "exploding out" existing data into new compound variables and ratios. Many semantic definitions and transformations can be authored into the dataset here.

 

5 a explorer.png

 

  • The Modeler is the main workspace/module for mining activities: Classification, regression, segmentation and clustering. It generates statistical models, and represents them using indicators and chart types.

 

5 b modeller.png

 

  • Factory is a secured, Java web-deployed interface which includes roles & rights administration on the server platform. From there, projects are accessed by users and assigned models, and KPI evaluation/model retraining can be scheduled as tasks.

 

5 c factory.png

 

  • Scorer is a feature that exports regression & segmentation models in different programming languages. The generated code (or SQL) allows models to be applied outside of InfiniteInsight. It reproduces the operations made during data encoding and either Classification/Regression or Clustering efforts.

 

5 c scorer.png

 

  • Social improves decision-making capacities by extracting the implicit relational structure of the dataset. You can then navigate a social network, the structure of which is represented in the form of a graph made of nodes and links. For example, it can help identify individuals of influence within a community.

 

5 c social.png

 

 

5 component model.png

 

  • In terms of licensing and selling 'software bundles', smaller departments would likely consider the desktop "thick-client" workstation Modeler installations, whereas larger enterprises would implement the full "suite" of client-server components:

 

5 software bunble options.png

 

  • You need to be prudent when obtaining your package from the SMP download marketplace, as there are a number of items available to cover the various license and audience options:

 

6 installation types.png

 

  • Infinite Insight's data mining methods are unique in the market. Here are a few of the value propositions and differentiators which set it apart from the competition:

 

8 the benefits of SRM.png

 

  • There is a wealth of existing guides and training available to help you further your knowledge of the product. The documentation is very detailed, as is the online course and the locally installed media (post-installation):

 

9 product docu.png

 

  • The documentation at help.sap.com perfectly complements the RKT learning maps; you'll be an expert in no time:

 

11 doc page.png

 

  • Just to reiterate: the legacy "KXEN" name has been totally retired from the product portfolio; we are now dealing exclusively with SAP Infinite Insight (II):

 

22 product rename.png

 

  • This is a snapshot of the combined "PA" and "II" roadmap plans (subject to change). Whilst Infinite Insight's capabilities will strengthen for the next +1 release, incremental features will also be ported to the Predictive Analysis (and hence Lumira) client, and server capabilities will be delegated down to the HANA in-memory processing platform:

 

555 future integration roadmap.png

 

  • Focusing specifically on Infinite Insight's next steps, we will see initially tighter, followed by complete/native, integration of ex-KXEN assets into the SAP Predictive Analytics portfolio, in keeping with our commitment to strategic initiatives such as In-Memory, Big Data, Cloud, Mobile and agile Visualization:

 

666  II_roadmap.png

 

  • Here's a non-binding illustration of our go-to-market intentions for 2014. These estimated timelines are subject to change and are communicated purely in the spirit of openness:

 

555 future integration roadmap FULL overview.png

 

  • One thing is for sure: PA will be the interface going forward (so that Infinite Insight can benefit from its flashy CVOM visualization gallery and HTML5 agility). Our first expectation is that the ex-KXEN proprietary algorithms will start to appear in the Predictive Analysis Designer:

 

33 kxen into PA.png

 

  • We're going to harness the processing power of HANA's in-memory platform to maximize the reach of KXEN's unique approach to data mining. Infinite Insight algorithms are going to be rewritten into HANA as 'function libraries' that can be called by the Application Foundation Modeler or other SAP apps:

 

99 lab preview KFL into hana.png

 

  • As mentioned already, we have a vision of a unified client. A single desktop experience that will cover the full spectrum of use-cases, from the casual end-user Lumira 'visualize' workflows, through to business-users wanting to 'predict', through to analysts/scientists wanting to 'analyze' deeper.
  • Here's a mock-up of what that could look like, as the user is guided into the application:

 

99 lab preview unified client.png

 

  • Other innovations we might see could include an intuitive "drag to forecast" - how pleasant an experience that would be on a tablet device!

 

99 visual drag to forecast.png

 

  • One thing is for sure: Infinite Insight's advanced statistical charts will benefit massively from the refresh they are about to receive from inclusion in the Lumira suite (CVOM charting and HTML5). We can envisage drillable charts to find influencers, similar to the BO Set Analysis of old:

 

999 drillage influencers chart.png

 

  • This all ties in very significantly with the wider plans for SAP Lumira integration and our roll-out plans for the SERVER version, about which more info can be found at the GA Announcement page:

 

99999 Lum_srv_plans.png

 

** addendum **  (June 2014)

 

Our good friends and colleagues have been busy!

 

 

Enjoy!

Hi,

 

On this rainy Sunday, having been sent out of the house because of some serious tidying-up and cleaning going on at home,

 

I decided to find somewhere to grab a coffee, listen to some awesome music and explore the possibilities of R programming.

 

Since I am not very proficient in coding, I will look for some ready-made code on the internet and try to adapt it.

 

I believe some social media content will really make my demos and presentations shine, so let's see what we can do with the Facebook API.

 

The resulting component link is at the bottom of this page, ready to be used.

 

What do we want to achieve? See this visualization:

 

http://snapplab.blogs.wm.edu/files/2013/12/fbnetwork3.jpg

 

First Step :

 

Log on to Facebook, visit this page, and click on "Get Access Token" (don't forget to authorize access to friends data):

 

https://developers.facebook.com/tools/explorer/?method=GET&path=716590462%3Ffields%3Did%2Cname

 

ScreenHunter_326 Mar. 09 14.19.jpg

 

 

Here's the GitHub link to the original code I found online. It gets your friend list and their friends to plot a network cluster visualizing their connections to each other. It will be interesting to try:

 

https://github.com/pablobarbera/Rdataviz/blob/master/code/05_networks.R

 

We have to wrap this inside a function, add a print to actually plot the graph, and make the API key a variable so we can pass it from the algorithm properties page.
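The wrapper shape SAP PA expects can be sketched roughly like this (the function and parameter names here are illustrative; the actual Facebook/network code from the GitHub link above goes in the body):

```r
# Skeleton of a PA custom R component: the access token arrives as a
# parameter, any plot must be print()-ed so PA renders it, and the
# function returns a named list of output tables.
mymain <- function(mydata, mytoken)
{
  # ... call the Facebook Graph API with mytoken, build the friends
  # network and cluster plot here, then print() the plot ...
  result <- mydata               # placeholder for the output table
  return(list(out = result))
}
```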

 

ScreenHunter_327 Mar. 09 14.26.jpg

 

Now let's configure our key and run the component :

 

ScreenHunter_330 Mar. 09 14.35.jpg

ScreenHunter_328 Mar. 09 14.30.jpg

 

Run the component and...!!!

 

ScreenHunter_329 Mar. 09 14.32.jpg

 

The code basically plots the clusters and creates a legend showing the names of people right at the center of each cluster. You can tweak the code to change the plot parameters, which will probably make it look visually more appealing, like this:

 

ScreenHunter_332 Mar. 09 15.12.jpg

 

 

Looking at the plot and the names, which I didn't include in the screenshot, I understand that they are mainly:

 

1) Work network

2) University friends

3) High-school friends,

4) Elementary school friends (yes, we did find each other via Facebook)

5) Family

 

I believe that's a simple enough example of what clustering is: understanding different subcategories within a big list.

 

My next aim will be to replicate something similar using a "product page" from Facebook, visualizing the people who "liked" the page. Any help is highly appreciated.

 

Here's the re-usable component for SAP Predictive Analysis; download it and paste it into the appropriate folder, which may look like:

 

C:\Users\"usernamecomeshere"\SAP Predictive Components\RScript

 

Please note that the code is provided "as-is" and is not supported by SAP.

https://share.sap.com/a:rcoxsl/MyAttachments/3e3fc10c-3ae7-493b-a49f-a58c424e6711/

 

Happy coding


Hi,

 

In an environment where you are using SAP Predictive Analysis together with SAP HANA integrated with an R server, you might not always have OS access to the R server and therefore cannot see which R packages are installed. This impacts the use of SAP Predictive Analysis when using SAP HANA Online or "Connect to SAP HANA" connectivity with the built-in R algorithms or custom R algorithms.

 

If you build an SAP Predictive Analysis custom R component using SAP HANA Offline or other local files and the required package (algorithm) is installed in your local laptop's R installation, it will normally work. However, to get it working using SAP HANA Online, the algorithms also need to be installed on the R server integrated with SAP HANA. In a hosted environment you might not be able to get direct access to the R server to check which algorithms are installed.

 

Below is a step-by-step description on how to see which packages are installed on the R server integrated with SAP HANA using SAP HANA Studio.

 

From SAP HANA Studio, select "SQL" to open a SQL console for the SAP HANA system connected to the R server.

HANA Studio.png

Type the following script. Replace "DATA" with your own schema.

R Script.png

Execute the script (select the script and press F8).

 

The results from running the script. The result below might differ from yours depending on the packages installed on your R server.

InstalledPackages.png

 

If you would like to get more information on the packages installed on the R server, these are the additional fields available:

"Package"               "LibPath"               "Version"               "Priority"            

"Depends"               "Imports"               "LinkingTo"             "Suggests"            

"Enhances"              "License"               "License_is_FOSS"       "License_restricts_use"

"OS_type"               "MD5sum"                "NeedsCompilation"      "Built" 
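These are simply the columns of R's own installed.packages() matrix, so you can preview the same fields locally before adding them to the HANA-side procedure:

```r
# installed.packages() returns one row per installed package; the fields
# listed above are its column names. Runs on any local R installation.
pkgs <- as.data.frame(installed.packages(), stringsAsFactors = FALSE)
head(pkgs[, c("Package", "Version", "License", "Built")])
```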

 

 

 

Best regards,

 

Kurt Holst

 

 

Here is the script if you wish to copy and paste:

 

SET SCHEMA "DATA";

DROP TABLE "InstalledPackages";
CREATE COLUMN TABLE "InstalledPackages"
(
"Package" VARCHAR(100), "LibPath" VARCHAR(100), "Version" VARCHAR(100), "Depends" VARCHAR(100)
) ;

DROP PROCEDURE RSCRIPT;
CREATE PROCEDURE RSCRIPT(OUT result "InstalledPackages")
LANGUAGE RLANG AS

BEGIN
result <- data.frame(installed.packages())
result1 <- (as.character(result$Package))
result2 <- (as.character(result$LibPath))
result3 <- (as.character(result$Version))
result4 <- (as.character(result$Depends))
result <- data.frame(Package=result1,LibPath=result2, Version=result3, Depends=result4)
END;


CALL RSCRIPT("InstalledPackages") WITH OVERVIEW;
SELECT * FROM "InstalledPackages";




Completing the telecom analytics using SAP PA, I used a different data set for association analysis and forecasting.

For association analysis I used the prepaid dataset, which contained subscriber plans. For compliance's sake I changed the names of the plans.

Association analysis, sometimes also referred to as market basket analysis or affinity analysis, is used to find the strongest product purchase associations or combinations. Here the prepaid customer data was used, identifying the prepaid recharges customers had made over the last 6 months. The idea was to find the associations between recharge patterns. I had to create a custom R component to delete duplicate data and get unique data, as association analysis needs to be done on unique data only.

Using the Apriori algorithm in SAP PA, it was found that users buying the 3G Monthly plan and the SMS plan frequently opted for a Net 49 plan. Hence these 3 plans can be combined in the future to create one single comprehensive plan. Another thing that can be noticed here is that people opting for the Corporate plan along with the video calling facility rarely used the 3G Quarterly plan, while people opting for Corporate along with 3G Quarterly opted for Video Calling. This certainly indicates some sort of pricing issue in the plans that can be sorted out.
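A minimal sketch of the deduplication component mentioned above, assuming the input arrives as the usual mydata data frame of a PA custom R component:

```r
# PA custom R component that drops duplicate rows so that each
# subscriber/plan combination appears only once before Apriori runs.
dedup <- function(mydata)
{
  result <- unique(mydata)
  return(list(out = result))
}
```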

Apriori.pngApriori1.png
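The support/confidence arithmetic that Apriori is built on can be illustrated in base R. The baskets below are invented placeholders, not the real prepaid data:

```r
# Four toy "baskets" of recharge plans (illustrative only)
baskets <- list(
  c("3G Monthly", "SMS Plan", "Net 49"),
  c("3G Monthly", "SMS Plan", "Net 49"),
  c("Corporate", "Video Calling"),
  c("Corporate", "3G Quarterly")
)

# support = fraction of baskets containing all the given items
support <- function(items)
  mean(sapply(baskets, function(b) all(items %in% b)))

# confidence of the rule lhs => rhs
confidence <- function(lhs, rhs)
  support(c(lhs, rhs)) / support(lhs)

support(c("3G Monthly", "SMS Plan"))               # 0.5
confidence(c("3G Monthly", "SMS Plan"), "Net 49")  # 1
```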

Forecasting Subscriber Base using Winters' Method:


Winters' method, sometimes also referred to as triple exponential smoothing, was used to forecast the future subscriber base for the company using SAP PA. The historical data of the subscriber base from 1985 to 2010 was used to predict the forecasted subscribers.

The green line indicates the predicted number of subscribers, while the blue bars represent the actual number of subscribers. The prediction vs. actual for the years 1985 to 2010 shows that the analysis is pretty close to actual, and it also forecasts the future demand from 2011 onward. By looking at the graph we can say that the model we designed is pretty good, but how do we ensure quantitatively that the model is actually worthwhile? For this we need to refer to the "Algorithm Summary" window in the predicted output. The goodness of fit of 0.94 means that the fit explains ~94% of the total variation in the data about the average, indicating that the model is indeed a good one.


TS1.png

TS2.png
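Outside SAP PA, base R's stats package exposes the same triple exponential smoothing via HoltWinters(). A sketch on synthetic quarterly data (not the real subscriber series):

```r
# Synthetic quarterly series with growth and a small seasonal wiggle
subs <- ts(100 * 1.1^(1:24) + rep(c(0, 5, -5, 0), 6), frequency = 4)

# HoltWinters() fits level, trend and seasonal components (Winters' method)
fit <- HoltWinters(subs)
fc  <- predict(fit, n.ahead = 4)   # forecast the next four quarters
fc
```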

Big data and predictive analytics are great for preventative maintenance, and are an incredibly powerful platform for the future of healthcare.

 

The SAP HANA powered mMR Predictive Analysis App already monitors patients' real-time vital signs and generates an alert when abnormal patient readings signal a possible emergency situation.

 

Can something like this be far away?

predictive-maintenance-health

This was an open webcast to everyone last Monday. Clarissa Dold covers what is new here: What's New with SAP Predictive Analysis SP14?

 

Please note that the usual legal disclaimer applies that anything in the future is subject to change.

1fig.png

Figure 1: Source: SAP

 

Figure 1 describes what predictive analytics is: analysis to "support predictions".

 

2fig.png

 

Figure 2: Source: SAP

 

Figure 2 covers where predictive analysis is used. For me, forecasting and trends are the most popular, but businesses certainly also look at churn and turnover.

 

3fig.png

Figure 3: Source: SAP

 

Figure 3 covers R integration, PAL, Predictive Analysis and SAP Infinite Insight (the new name for KXEN). These are four separate items, and I think this SAP Press book gives a very good overview of PAL and Predictive Analysis.

 

4fig.png

Figure 4: Source: SAP

 

The speaker provided a demo of SAP Predictive Analysis. One of the upcoming features is the ability to share predictive storyboards to Lumira Server (my take only: this will help mobilize your predictive stories).

5fig.png

Figure 5: Source: SAP

 

Figure 5 was part of the demo, showing that once you set up your predictive model, you can export it. What is good about that? You can save the settings in the model and then reuse it.

6fig.png

Figure 6: Source: SAP

 

Figure 6 is a summary of features in the current version of Predictive 1.14.

8fig.png

Figure 7: Source: SAP

 

Figure 7 is the high-level architecture for Predictive Analysis. You can see how it uses SAP Lumira.

 

The top of Figure 7 reflects the menu in Predictive Analysis.  At the bottom, you can import/configure R packages, which I found very simple in the current version.

 

9fig.png

Figure 8: Source: SAP

 

Infinite Insight is the new name for KXEN. The KXEN algorithms will be part of Predictive Analysis (planned). The future direction is the convergence of Predictive Analysis and SAP Infinite Insight; "future direction" is typically 12-18 months out. It is interesting, too, that predictive consumption and scoring is planned for the cloud.


Question and Answer

 

Q: When we connect live to HANA, we are not getting the formula option.

A: Data manipulation is not supported for HANA online; it is planned for later in 2014. An option is to build a HANA analytical view.

 

Q: When will Infinite Insight be included in PA?

A: Starting in Q2.

 

Related:

In March there are Predictive sessions at BI2014

 

At the ASUG Annual Conference in June we have a very special session planned covering Predictive Analysis - TBA.
