
SAP Predictive Analytics


It seems this caught your attention last time, so I'm happy to repeat what I did for Q1 in case you missed some of them: here are the top 10 articles the community voted for in Q2. Summer is a good time to catch up on what you did not have time to read before.


#1 Announcing SAP Predictive Analytics 2.2! - by Ashish Morzaria

Discover all the new capabilities released in this new version of our product whether you are a business analyst with the Automated Analytics mode or a data scientist with the Expert Analytics mode!

#2 7 #Predictive Sessions You Should Attend @ #BI2015 in Nice, France - by P Leroux

Going to attend SAPinsider BI 2015 in Nice, France? This post gives you an overview of the predictive sessions you will have the opportunity to attend!

#3 Predictive Smackdown: Automated Algorithms vs The Data Scientist - by Ashish Morzaria

3 profiles, 1 tool. Whether you are a Data Scientist, a Business Analyst or a Business User, SAP Predictive Analytics will meet your needs and expectations. Learn more through a fun analogy!

#4 Gartner BI Summit 2015: Big Data = Predictive Analytics - by Ashish Morzaria

A detailed article on one of the key Gartner summit outcomes: how closely Big Data and Predictive Analytics are linked and how they work together. Read more!

#5 Learning about Automated Clustering with SAP Predictive Analytics 2.0 - by Tammy Powlas

A step-by-step explanation of how Tammy ran her first clustering analysis with the Automated mode of SAP Predictive Analytics, and the ease of use she found in it.


#6 Using Application Function Modeler To Create Forecast (APL) Procedure in SAP HANA - by Angad Singh

Angad shares the experience he gained while creating a forecast procedure using the following SAP HANA features: the APL (Automated Predictive Library) Forecast function and the Application Function Modeler (AFM). Learn from it!


#7 Predicting the Future using a BEx Query as a Data Source - by Tammy Powlas

A progressive approach to using a BEx query of actual expenses by project and month to forecast the future with the Expert mode of SAP Predictive Analytics.


#8 Announcing ASUG SAP Predictive Analytics Council Launch - Roadmap Preview - by Tammy Powlas

Learn more about our ASUG SAP Predictive Analytics Council focusing on Exploratory Analytics! And if you are interested in joining the council, please complete the council’s participation survey.

#9 How to install SAP Predictive Analytics Desktop 2.2 - by Tammy Powlas

A detailed post to help you install SAP Predictive Analytics Desktop 2.2. Very useful!


#10 Predicting Happiness - by Kurt Holst & Savaneary SEAN

An end-to-end data mining case using SAP Predictive Analytics to uncover new information related to happiness: predicting whether a country is happy or not, and which features impact happiness. Read the paper now!

And finally a repeat session of what you may have missed earlier in the year:

The 10 SAP Predictive Analytics Community Most Viewed Q1 Articles

Again, there are many predictive resources available on SCN and sap.com/predictive! Here are 3 ways to get engaged:

- Follow the SAP Predictive Analytics community to be informed as soon as there is something new posted or discussed here

- Check the ‘Content’ tab to make discoveries here

- Follow your favorite authors to be informed when they publish a new piece

And don’t forget the tutorials page, which is updated on a regular basis and where you will find tons of useful tips!




This is part 5 of a series of blogs.

Please refer to the other parts that can be found here

From R to Custom PA Component Part 1

From R to Custom PA Component Part 2

From R to Custom PA Component Part 3

From R to Custom PA Component Part 4




In this blog I'm going to focus on how to use swirl, a package we will load that lets you learn R within R. This will be the last blog in my series explaining plain R syntax and programming; the blogs that follow will focus purely on using R code in Predictive Analytics to build a component.


So I thought it would be good to end the R learning part of my blogs with a way for readers to further learn and expand their knowledge on R on their own.


Swirl is not part of the default installation, so we need to install the library before we can use it. There are two ways to install it: one is straightforward, simple and largely automatic; the other is more manual, but good to know for cases where the automated method gives you problems.




Swirl - Automatic Library Install


As mentioned in part 1 of this series, we can see the libraries available to us in the library directory. You will see that there is nothing for swirl.



If you try to load the library swirl you will get a message saying it does not exist.





The simplest and easiest way to install is to do the command install.packages("swirl")





Then choose a CRAN mirror; being from South Africa, I select the Johannesburg one.





R will then download all the required packages automatically and attempt to install them.




Once it has completed you will see that it installed the packages it downloaded. It will also tell you where it downloaded the packages to.




You will also see your library is updated with the new packages.
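The whole automatic flow described above can be reproduced from the console; a minimal sketch:

```r
# Automatic install: downloads swirl and its dependencies from a CRAN mirror
install.packages("swirl")

# Confirm the package now appears in your library
"swirl" %in% rownames(installed.packages())

# Loading the library should no longer produce an error
library(swirl)
```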




Swirl should now work, jump to the "Start Swirl" section.






Swirl - Manual Library Install


As mentioned in part 1 of this series, we can see the libraries available to us in the library directory. You will see that there is nothing for swirl.





If you try to load the library swirl you will get a message saying it does not exist.




You will need to download the required packages from https://cran.r-project.org/bin/windows/contrib/3.1/ — each package comes as a zip file. You will need swirl, httr, R6 and yaml as a minimum, but it is recommended to also include jsonlite, mime and curl.





Then, to install a package, open the RGui and select "Install package(s) from local zip files". Select the zip file you downloaded previously.





You will then get a message in the RGui that it installed successfully, and you will now also see it in your library folder.




You will need to repeat this for each of the libraries: swirl, httr, R6, yaml, jsonlite, mime and curl.
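As an aside, the same zip files can also be installed from the console rather than through the menu; a sketch (the path and file name are hypothetical):

```r
# Install a package from a local zip file instead of a CRAN mirror
install.packages("C:/Downloads/swirl.zip", repos = NULL)
```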






Start Swirl


Once you have done the above you can load the library swirl, then start it with the command swirl(). It will guide you, ask questions, and wait for your answers. There are a few courses available, which you will need to install; if you are connected to the internet swirl will do it for you. Alternatively you can ask it to open the GitHub repository and do a manual install.
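Put together, starting swirl looks like this:

```r
library(swirl)  # load the installed package
swirl()         # start the interactive tutorial; swirl will prompt you from here
```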



You can now carry on learning R at your own pace, swirl will guide you and show you the commands step by step.



Hope this is helpful. Hope you will use swirl to learn more R. Part 6 coming soon.




This is part 4 of a series of blogs.


Please refer to the other parts that can be found here

From R to Custom PA Component Part 1

From R to Custom PA Component Part 2

From R to Custom PA Component Part 3



In this blog I will focus on how to debug in R. This will help you debug your own R code, and also help you understand what is going on at each step of the code.


So I will cover debugging in RGui and in RStudio




RGui Debug


So let's look at debugging the function we wrote in a previous blog. You will need to execute the function definition as shown below.





You then need to enter the debug command, indicating the name of the function we want to debug. Once you have done this, calling the function will go into debug mode.




You will now see that when we call the function it shows a Browse prompt. At every step we can now look at what variables are defined and the values in each variable.





If you press Enter it will step through each line. If you enter ls() it will show you the variables that have been declared. To see the value of any variable, just type its name. Below you can see I stepped through the whole function, looked at the variables declared, and inspected their values.




Every time you execute the function it will now be debugged; to stop this you must issue the undebug command.
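The whole RGui debugging cycle can be sketched as follows (the function is a hypothetical stand-in for the one built in the earlier parts):

```r
# A simple function to debug
addTwo <- function(x, y) {
  z <- x + y
  print(z)
}

debug(addTwo)    # flag the function for debugging
addTwo(3, 4)     # now runs at the Browse> prompt; press Enter to step line by line
                 # at the prompt: ls() lists declared variables, a name shows its value
undebug(addTwo)  # stop debugging on future calls
```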





The other way to go into debug mode is to insert the command browser() in your R code. Execution will then enter debug mode from the line where the browser() call is located.
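For example, with the same hypothetical function:

```r
addTwo <- function(x, y) {
  z <- x + y
  browser()  # execution pauses here and drops into debug mode
  print(z)
}
```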






RStudio Debug




Debugging in RStudio is pretty simple: click in the margin next to the line you want to stop at, which creates a breakpoint, then click Source. Note that to use the debugger your script must be saved.




You will then be in debug mode and be able to step through each line.







Hope this helps, part 5 will be posted soon.

Having predictive analytics and predictive models that extract more insight and value from data is on the wish list of many organisations today. The ability to use predictive modelling tools to predict outcomes such as which customers to target, which customers are more likely to buy other products, and the likelihood of customers leaving is more achievable than ever before. However, when we ask a predictive model to solve a business question, we must not forget the importance of getting the data right.


Clearly, if you put rubbish data into a predictive model then it doesn’t take a genius to work out that a rubbish model will be generated.  Predictive models are only as good as the data going into them.  Making sure that the source data that you use is properly managed and organised is key to this.


Extraction, transformation and loading tools have been around in the marketplace for many years, yet many organisations are using out-of-date tools to just “lift and shift” data from one environment into another. This process of extraction, transformation and loading of data (ETL) is a key stage where problems and issues in your data can be identified and rectified before they end up in your models. Ideally problems should be resolved in the source systems, but this is not always possible.


Here are some key areas where help is needed on data: 

  • Removing duplicates
  • Integrity checks
  • Names and address checking
  • Text checking to pull out sentiment in the data
  • Putting in place rules and analytics to check the quality of the data


Coding SQL is never a scalable and sustainable option: imagine having to trawl through code just to change one business rule.


SAP has a complete range of tools that can profile data, add business rules and move data without writing code. Data stewards can create repeatable jobs that are easy and quick to maintain, all through graphical interfaces. For more information about these tools see http://scn.sap.com/community/enterprise-information-management

As a user of Predictive Analysis I thought I would share this hidden gem inside the Expert Analytics option: the ability to do further analysis and create visualisations on the results of your predictive models.


Let me show you an example of what I mean by this.


Below is a screenshot of a cluster analysis that I have created inside the Predict tab of Expert Analytics.


[Image: Cluster Analysis.JPG]


Looking at the results I can see how the clusters are represented using the standard visuals that are available for cluster analysis.


This is great, however what I also want to do is analyse the clusters themselves and look at the data in more detail: i.e. use the visualise panel to create different visuals to understand what data makes up each cluster and filter on clusters to analyse further.


You can do this, but it is not very obvious. To do it, follow these steps:


1. After you have created and run your predictive model click on the Visualise tab.


2. Just below the Visualise tab there is a "Select Component" drop down option

[Image: select component.png]

3. Notice that I can select "Auto Clustering" - this is the results set of the clustering that I have just performed in the Predict tab.


4. Select this.


Now I can create new visuals.  The data includes the extra predictive column created through my predictive model.  This enables me to filter on specific clusters and analyse further what makes up these clusters.


[Image: visualise cluster analysis.JPG]


You can do this on any predictive model results set, making further analysis of the results of your models very easy to do.


Hope that this helps in your work.






This is part 3 of a series of blogs.


Please refer to part 1 and part 2 that can be found here

From R to Custom PA Component Part 1

From R to Custom PA Component Part 2


In this blog I will focus on more intermediate syntax for a beginner; for an experienced R developer this will still be considered basic. You will, however, need this knowledge to create a Predictive Analytics component.


I will cover the following

  • Vectors
  • Matrix
  • Data Frames
  • R Scripts
  • Functions
  • Loops
  • Read files
  • Graphs


Then we will review the differences between RGui and RStudio.





Vectors


In R a vector is basically what we would call an array in other programming languages. A vector's values can be numbers, characters or any other type, but they should all be of the same type.


Start by opening RGui again and type straight into the console. To create a vector we use the c function, which is short for combine: it combines the values to make a vector. Below is an example of two vectors.




In the previous example we are not storing the result in a variable, but in practice we want to store everything in a variable. So here I have stored a list of values in x, then displayed the contents of x.



You can also assign a range of values using the syntax below. Here I am saying that the variable y will hold the values from 5 to 9.




We can also access a single value in the vector. Below I am accessing the value at position 3.



We can append values with the below syntax.




We can change one of the values, below we are changing the second value.




We can assign names to each value in the vector.




You can then access by the name of the column you have assigned in previous steps.





All the vector operations shown above can be applied to other data types. Below I am doing the same with text.
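Since the original screenshots are not reproduced here, the console steps described above can be sketched as follows (the values are illustrative):

```r
c(4, 7, 9)                # combine values into a vector
x <- c(4, 7, 9)           # store the vector in a variable
x                         # display the contents of x
y <- 5:9                  # a range of values from 5 to 9
x[3]                      # access the value at position 3
x[4] <- 12                # append a value at position 4
x[2] <- 8                 # change the second value
names(x) <- c("a", "b", "c", "d")   # assign names to the values
x["b"]                    # access a value by its name
words <- c("red", "green", "blue")  # the same operations work with text
```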








Matrix


In R a matrix is basically what we would call a two-dimensional array in other programming languages.


Here is an example of how to create a matrix. mat is a variable; the function matrix creates the matrix. I have asked for 3 rows and 3 columns, with all values defaulting to 1. You can see the result in the variable mat.





We can change the value of a specific item in the matrix with syntax similar to a vector's, except that we must give both the row and the column. Below I change row 1, column 3 to the value 5.



We can also access all the values in a row or a column. Below I first access row 2, then access and display column 3.
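A sketch of the matrix steps described above:

```r
mat <- matrix(1, nrow = 3, ncol = 3)  # 3 rows, 3 columns, all values defaulted to 1
mat[1, 3] <- 5                        # change row 1, column 3 to the value 5
mat[2, ]                              # all values in row 2
mat[, 3]                              # all values in column 3
```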






Data Frames



A data frame is similar to a matrix, except that a data frame can have a different data type for each column, where a matrix cannot. A data frame is also easier to work with, though a matrix is more efficient when it comes to performance.


In the example below I create two vectors, one with employee names and another with salaries, then combine them into a data frame. employee.data is a variable.





You can then access the data frame the same way as you would access a matrix.
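A sketch of the data frame example (the names and salaries are illustrative, not from the original screenshots):

```r
employee <- c("John", "Mary", "Sam")           # vector of employee names
salary <- c(21000, 23400, 26800)               # vector of salaries
employee.data <- data.frame(employee, salary)  # combine into a data frame
employee.data
employee.data[1, ]    # access a row, matrix-style
employee.data$salary  # or access a whole column by name
```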





R Scripts


Up until now we have entered everything in the console, to learn the syntax and understand how the console works. In reality you would not work directly in the console; you would create an R script and enter everything there.


To create a script go to File->New script




You can now add R code to the script. In this example I create a variable x with the value 10, a variable y with the value 2, a variable z as x*y, and then output z.
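The script described above would contain:

```r
x <- 10     # create variable x with value 10
y <- 2      # create variable y with value 2
z <- x * y  # create variable z as x times y
print(z)    # output z, which is 20
```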





To execute the lines, highlight them, then right-click and select "Run line or selection". You can also select one line at a time and execute it.


It will then execute the commands in the console.








Functions


Being able to write functions is important; you will need this to create a custom PA component.


The basic syntax for a function is


myfunction <- function(arg1, arg2, ...) {

    function body

}

Below is an example of a function. It receives two values, adds them together into a variable z, and prints z out. This is a very basic function. Please note that from now on I will always create a script, place the R code in the script, and execute from the script.






The function above is very basic; we would not normally write a function like that. We would usually create a function that returns a value, and we will need to return a value when creating a component for Predictive Analytics. So be sure to understand the example below.
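A sketch of both forms, using a hypothetical function name:

```r
# Basic form: prints the result instead of returning it
addTwo <- function(x, y) {
  z <- x + y
  print(z)
}
addTwo(3, 4)

# Preferred form for a PA component: return the value
addTwo <- function(x, y) {
  z <- x + y
  return(z)
}
result <- addTwo(3, 4)  # result now holds 7
```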









Loops


There are different types of loops in R; I will cover two of them.


Here is an example of a for loop. I assign the variable x a range of values from 1 to 10, and create the variable z as NULL. In the for loop, the variable i starts at 1 and goes up to 10, so the loop repeats the code between the curly brackets 10 times. In each iteration I get the value in x at position i: if the loop is on its third cycle, i is 3, and we fetch the value of x at position 3 (which here happens to be 3, but could have been a different value). We then add 1 to that value, assign it to z, and print z each time it is replaced.
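The for loop described above looks like this:

```r
x <- 1:10   # x holds the values 1 to 10
z <- NULL   # z starts out empty
for (i in 1:10) {
  z <- x[i] + 1  # take the value of x at position i and add 1
  print(z)       # print z each time it is replaced
}
```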






Here is an example of a while loop. The biggest difference is that in a for loop the number of iterations is determined before the loop starts, while a while loop runs until its condition becomes false, with the condition changing as the loop runs. In the example below you can see that inside the while loop I increment the variable i, and the loop continues only while i <= 10.
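And the while loop:

```r
i <- 1
while (i <= 10) {  # the loop continues only while this condition holds
  print(i)
  i <- i + 1       # the condition variable changes inside the loop
}
```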






Read Files


You will need to be able to read files to work with sets of data that you have in text files.


Below is an example where I use the setwd function to set the directory where the file is. I then read in the data by specifying the file name, indicating that the first row is the header, and display the data. Because the columns have different data types, reading the file produces a data frame.
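A sketch of the file-reading steps (the directory and file name are hypothetical):

```r
setwd("C:/Data")                                      # set the working directory
mydata <- read.table("employees.txt", header = TRUE)  # first row is the header
mydata                                                # a data frame, as columns differ in type
```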








Graphs


There are several options and libraries for graphs.


I'm going to stick to the basic ones.


Here is an example of a bar graph, called a bar plot in R. I create a vector called graphvalues and assign names to it as we did previously; these names are used on the x axis of the bar plot. Then I call the function barplot with the vector as a parameter.




When you execute this the following bar plot will be displayed.






Using the same code as above, just change barplot(graphvalues) to plot(graphvalues).





Using the same code as above, just change plot(graphvalues) to plot.ts(graphvalues).
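The three graph variants described above can be sketched as (the values and names are illustrative):

```r
graphvalues <- c(12, 18, 9, 24)                  # create the vector
names(graphvalues) <- c("Q1", "Q2", "Q3", "Q4")  # names become the x-axis labels
barplot(graphvalues)   # bar plot
plot(graphvalues)      # same data plotted as points
plot.ts(graphvalues)   # same data as a time series line
```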








RStudio Differences



Below you can see the differences when doing all of the above in RStudio. The script takes the top-left window and the console moves to the bottom; your plots and graphs are shown at the bottom right.






When working with data frames we can see another difference: displaying the data in a data frame is easier. You can see the data in an easily scrolled window that shows it in an Excel-like grid.





Hope you find this useful. Part 4 can be found here From R to Custom PA Component Part 4




This is part 2 of a series of blogs.


Please refer to part 1 that can be found here From R to Custom PA Component Part 1


In this blog I will focus on the beginner syntax required to create a Predictive Analytics component. I will cover the following:

  • Basic syntax
  • Functions
  • Variables
  • Help


We will then have a look at RStudio again to understand the difference between RStudio and RGui.



Basic Syntax



So open the RGui as shown in part 1 of this blog series. You can immediately use it as a calculator by typing 4+4 and pressing Enter; 8 will be shown as the answer. As seen below I have used +, / and *. Give it a try and check that you see the same results.
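For example:

```r
4 + 4   # addition: 8
8 / 2   # division: 4
3 * 5   # multiplication: 15
```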




You can also evaluate Boolean expressions by typing 5<7 and pressing Enter; R responds with TRUE or FALSE, in this case TRUE. Note that to test whether 8 equals 7+1, equality in R is written == rather than =.



In R, the value T is shorthand for TRUE, and likewise F is short for FALSE.
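A sketch of the Boolean examples:

```r
5 < 7        # TRUE
8 == 7 + 1   # TRUE; note the double equals sign for comparison
T            # short for TRUE
F            # short for FALSE
```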




When working with text you should enter the text in double quotes. As shown below, I first typed the text without quotes and got an error back; the second time I entered the text correctly. With text, R simply responds with what you entered.
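For example:

```r
# hello      # without quotes, R looks for an object called hello and errors
"hello"      # quoted text is simply echoed back
```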







R comes with built-in functions, so let's go through some of them to see how they work.


To use the sum function, type sum(value1, value2, value3, ...). Below are some examples.




To use the square root function, type sqrt(value1)




There is a repeat function, below is an example.




More examples of functions. Give them a try, and feel free to search for more.
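A sketch of these built-in functions:

```r
sum(4, 5, 6)          # 15
sqrt(16)              # 4
rep("hi", times = 3)  # "hi" "hi" "hi"
```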









Variables


In most programming languages you need to declare a variable and say whether it is a string or an integer. In R you don't need to do that; you can just type x<-10. This syntax says: the variable is x, and we are assigning the value 10 to it.


Below you can see I assigned x the value 10; after pressing Enter, the console waits for another command. If you then type x again it will show you the value of x.



You can then start working with x. If x has the value 10 and I want to divide it by 2, I simply type x/2 and press Enter. Below you can see I divided by 2 and later added 10. Bear in mind I am not changing the value of x; I am just calculating what x would be if I divided it by 2 or added 10 to it. x still has the value 10.



To change the value you must use <- to assign it. So below I am saying x gets a new value of x/2, which is 5.




We can do the same with text. Here I have a variable called y, to which I assign the value "Hello World".



You can check the data types R has assigned to x and y using the str function, which shows the type and the value.
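The variable examples above, as a sketch:

```r
x <- 10     # assign 10 to x; no type declaration needed
x           # 10
x / 2       # 5  (x itself is unchanged)
x + 10      # 20 (x is still 10)
x <- x / 2  # now x really is 5
y <- "Hello World"
str(x)      # num 5
str(y)      # chr "Hello World"
```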







Help


You can use the help function to get more information. To get help on a function, type help(function); below I typed help(sum). This launches a browser showing the help for that function.
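For example:

```r
help(sum)  # opens the documentation for sum
?sum       # shorthand for the same thing
```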






RStudio Differences




As mentioned in part 1 of my blog series, we can code in the RGui or in RStudio. So let's see what RStudio offers.



In the R console we can type commands exactly as we did in the RGui console. Looking at the top-right window, we can see that it shows the x and y variables and their values: in the RGui we have to type x or y to see a value, while RStudio shows it immediately in the top-right window. In the bottom-right window you can access the help.




When working in the console you can press Ctrl+Space and RStudio will suggest available functions or syntax matching the text you typed, along with information on how to use the function. For example, below we can see that substr requires x (the string), a start value and a stop value.
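For example, the completed call would look like:

```r
substr("Hello World", start = 1, stop = 5)  # "Hello"
```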




Hope this helps, part 3 can be found here From R to Custom PA Component Part 3.




SAP has come a long way with its Predictive Analytics solution, which has some excellent functionality. Even so, from time to time you will need to create your own component or write code in R.


Creating your own component in Expert Analytics mode can be difficult if you don't know where to start. There is some good content on SCN.




However, I didn't find a step-by-step guide that explains it as simply as possible from beginning to end. So I'm going to write several blogs that take you from the beginning to the end of creating your own component. And by the beginning I mean starting with raw R and understanding the syntax, all the way up to creating a custom component.


I'm hoping to create blogs covering these topics.

  • From R to custom PA component Part 1 - will focus on R environment (IDE) where you can script
  • From R to custom PA component Part 2 - will focus on basic R syntax and developing in R
  • From R to custom PA component Part 3 - will focus on more intermediate R syntax and developing in R
  • From R to custom PA component Part 4 - will focus on how to debug in R
  • From R to custom PA component Part 5 - will focus on swirl, continue to learn R on your own.
  • From R to custom PA component Part 6 - will focus on creating a basic component in Predictive
  • From R to custom PA component Part 7 - will focus on making the component from Part 6 more dynamic and complex


I will edit the list as I create the blogs and update each one with links.


This blog being Part 1, we will focus on how to code in an R environment. This environment will help us learn the basic syntax we need to create the R component, and will also give us a place to test the R code for our component.






To follow this series of blogs you will need Predictive Analytics installed. I currently have Predictive Analytics 2.2 installed, and I recommend the same version to follow along.





R Integrated Development Environment (IDE)



There are a few R IDEs available in the market, but for this series of blogs I'm going to focus on two products.

  • Option 1 - RGui, installed and readily available after the Predictive Analytics 2.2 installation.
  • Option 2 - RStudio, a popular R IDE that you can download for free.


I would recommend installing both as in my series of blogs I will refer to both environments and at the end of the blogs you will know how to use either one.

R IDE Option 1 - RGUI


You need to ensure that you have installed and configured R as part of your installation.


[Image: R Install.png]





Once installed you will see an icon on your desktop as shown below. Predictive Analytics from SAP makes use of R and its libraries, which is why the application is installed and available after installing Predictive Analytics.




From within this environment we can write R code, learn R syntax, and get to the point where we have fully functional R code to take into Predictive Analytics for our custom component.

[Image: R IDE.png]



The small window shows the R console. This console acts like a command prompt, but for R. To test that everything is in order, type 4+4 as shown in red; the console will reply with the answer, which is 8.


[Image: R Console.png]



You will also see that R has been installed in the following directory: C:\Users\Public\R-3.1.2


From here you can navigate to the libraries. As you learn R you will find these libraries important: many functions come from different libraries, some from libraries not yet installed, which we can then install; each newly installed library creates a new folder here.





R IDE Option 2 - RStudio


Before proceeding, I recommend ensuring that you have installed and configured R as shown in option 1. We will then install RStudio and point it at the same R libraries, which lets us use both environments against the same libraries.


You need to download RStudio from the link below, choosing the appropriate version for your machine.



I have selected Windows/Vista/7/8 version as indicated below.





Once downloaded, you can install RStudio. The installation is very simple: just click Next through the wizard.


Once installed you can open RStudio; it will open as shown below. You will have the R console, where you can type 4+4 and get the answer as in Option 1.




By navigating to Tools->Global Options you can see which R installation is being used, which also dictates the libraries used. I'm using the same installation as in option 1.





You can see the libraries in the bottom-right window under the Packages tab. It shows the contents of C:\Users\Public\R-3.1.2\library, as discussed in option 1.







Hope you have found this helpful. Part 2 can be found here From R to Custom PA Component Part 2



In the previous post we talked about the concept of Exploratory Analytics.

As a quick reminder, exploratory analytics is not a product, it is rather an approach to data analysis and a set of functionalities where you let mathematical algorithms and computer automation work on your dataset to surface, automatically, some interesting results (correlations in data, outliers in your dataset, groups of items with similarities, etc.).


You, as a business savvy person, can look at those results, see what their business value is and take strategic decisions based on them.

Exploratory analytics are complementary to both classic analytics and advanced analytics as shown in the picture below:


In the classic analysis approach, you decide, step by step, what to do with your data. You create tables, filters, slice and dice with the goal to surface some knowledge which you expect to see in the dataset. You usually work manually towards a specific goal in mind with a trial and error approach. With this approach you can easily answer questions such as “How many customers are buying my product?"


In advanced analysis you let mathematical algorithms work on the data to build a predictive model.  You then use this model to take operational decisions. With advanced analytics you can, for example, build a model which answers (in real time if you want) a question such as “Is this prospect likely to buy my product?”.


Finally, in an exploratory analysis, you make use of the same algorithms as advanced analytics to obtain insights which help answer questions such as “Why are customers buying my product?”. Knowing the ‘why’ behind a decision can help you change your business to improve it.



What you’ll learn reading this blog

In this blog we show how SAP Predictive Analytics, with its Automated Analytics module, can provide the instruments you need to do exploratory analytics.

Practically speaking, we show that after performing a classification, you are automatically presented with various insights which can be used to drive your decisions.


Suppose you are analyzing a dataset of customer characteristics, where your target is a flag saying whether or not a customer has purchased a product. After running the classification you automatically get various insights:


“Key influencers” are the variables which best explain the target (e.g. which customer characteristics are most related to the decision to purchase a product or not). You can get insight on specific values of key influencers, but you also automatically get “groups” or “bands” of values with a similar influence.

Values are automatically “grouped” together when the variable is categorical (e.g. “customer’s country is France, USA or Italy”); they are automatically “banded” when the values are continuous (e.g. “age between 29 and 45”). Groups and bands greatly simplify the analysis, and the tool does a great job of automatically proposing the best ones without you having to worry about the best way to bin your data.


Finally, the tool quickly and automatically points out “segments” of interest. These are sets of records with similar characteristics which have a strong influence on the target (e.g. the tool can show that customers “living in the USA and aged between 18 and 25” have the highest likelihood of purchasing your product).


It is now time to see some action and understand, with an example, how you can improve your business based on insights from an exploratory analytics approach.


The whitest napkins you have ever seen!

Imagine that you work for a company specialized in cleaning tablecloths and napkins for restaurants. In the past few months you created a new offer called “Premium Service” which guarantees restaurants the whitest napkins in the whole country! You proposed the service to several of your existing customers; some of them purchased it, some did not.


You created a list of all the restaurants to whom you proposed the service. In this list you put all the characteristics of your customers (e.g. how many seats the restaurant has, whether it is located downtown, in the suburbs, or in the country, the average price of a meal, whether it has a valet, etc.). For each customer you marked whether or not they purchased the Premium Service.

The dataset might look like this:



You can use this list for two tasks: create a predictive model which can tell if a prospect is likely to accept the service (advanced analytics) and/or see if you can find some interesting patterns in the restaurant profiles which you can use to improve your business (exploratory analytics).


Typically a marketing manager focused on a short term marketing campaign (where the goal is to maximize the return and minimize the cost) would use the predictive model in an operational mode.


A business strategist who wants to improve the business globally on the long term would be more interested in the exploratory approach.


To accomplish both tasks you can use SAP Predictive Analytics and its Automated Analytics module.


The basic question you want to answer is whether or not a customer is likely to buy the service. This is a typical classification problem, so you apply the Classification module and set the Premium Service flag as the target variable. (If you have never used SAP Predictive Analytics, you can watch this video to see how to use Classification: http://scn.sap.com/docs/DOC-62236 )


All other variables (excluding IDs) are going to be analyzed to understand their influence in the purchase decision. The screen where you set the variables looks like the following:


Now you click through a few Next buttons and, after the Classification completes its execution, the model is created automatically for you.

First of all you need to check whether the quality of the model is good. To do that, look for the Predictive Power (also known as “Ki”) and the Prediction Confidence (“Kr”) in the model summary.


If the model is good you can now use it in an operational mode to ask “Is this prospect likely to purchase the service?”, or you can use it in an exploratory mode to ask “What are the typical profiles of customers who purchase the service?”. This second mode helps you take strategic decisions.


You should notice here that you are using information from the past (your list of customers who, you already know, purchased or did not purchase your product). The Classification module is able to discover patterns in that past data. The tool can then apply the same patterns to new data (prospects), or help you analyze them to understand what is influencing a purchase decision.
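The idea of learning patterns from past data and reapplying them to new prospects can be sketched in a few lines of plain Python. This is a deliberately naive illustration with hypothetical data, not the algorithm SAP Predictive Analytics actually uses: it learns the purchase rate observed for each variable value in the history, then averages those rates to score a new prospect.

```python
# Conceptual sketch (plain Python, hypothetical data; NOT SAP code):
# learn simple patterns from past customers, then score new prospects.
from collections import defaultdict

def train(records):
    """Learn, per (variable, value) pair, the observed purchase rate."""
    counts = defaultdict(lambda: [0, 0])  # (var, value) -> [purchases, total]
    for row in records:
        for var, value in row["features"].items():
            counts[(var, value)][1] += 1
            counts[(var, value)][0] += row["purchased"]
    return {k: bought / total for k, (bought, total) in counts.items()}

def score(model, prospect, prior=0.5):
    """Average the learned rates over the prospect's known values."""
    rates = [model[(v, val)] for v, val in prospect.items() if (v, val) in model]
    return sum(rates) / len(rates) if rates else prior

history = [
    {"features": {"price": "expensive", "location": "downtown"}, "purchased": 1},
    {"features": {"price": "expensive", "location": "downtown"}, "purchased": 1},
    {"features": {"price": "cheap", "location": "countryside"}, "purchased": 0},
    {"features": {"price": "cheap", "location": "downtown"}, "purchased": 0},
]

model = train(history)
print(score(model, {"price": "expensive", "location": "downtown"}))
```

A real classification engine does far more (robust encoding, cross-validation, quality metrics like Ki/Kr), but the workflow is the same: fit on known outcomes, then score unseen records.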


For an operational usage you can immediately go to the Run or to the Save/Export sections. From there you can check in real-time which new prospects are likely to purchase the service. Alternatively, if you are a developer or work with developers, you can export the model in various programming languages (like Java, C, C++, SQL, and many others) so that it can be embedded in an application suggesting to your sales team which restaurants to approach. SAP Predictive Analytics automatically provides you the code in the language you need. You copy it and paste it into your application.


In this blog we are more interested in the exploratory analytics usage, so let’s see how to proceed with it.


For strategic decisions you can look at the various pieces of information generated with the model, see whether they make business sense, and decide how to use them.

You can start your exploration by looking at the key influencers.

In the Automated Analytics module you open the Contributions by Variables section under Display. You see a visualization similar to the one below:

[Screenshot: variable contributions]

This graphic tells you that the variables most related to the decision to purchase the service are, in order of importance, the Price Segment, the Location, and the Number of Covers of a restaurant. These are the key influencers of the Premium Service target.


If you double click on Price Segment you see the following visualization:

[Screenshot: Price Segment contributions]

This tells you that, according to your past data, a very expensive restaurant (80 USD and more for a dinner, on the left of the display with a positive value) is more likely to purchase the service. On the contrary, inexpensive restaurants (19 USD for a dinner, on the right with a negative value) are less likely to purchase the service.


While you are on this screen, you can also see that all other price segments were automatically grouped under the label “KXOther”. Those other price ranges are not really meaningful and the tool simplifies the visual analysis for you by grouping them together.


If you now open the second variable, Location, you see something like the following screen:


This screen tells you that downtown restaurants are more likely to purchase your service while countryside restaurants probably won’t. Here again you see that a new group has been created automatically for restaurants in a Small Town or in the Suburbs: they have the same (negligible) influence, so there is no need to make the analysis more complex by showing separate entries.


When opening the third variable, Covers, you have the following screen:


We won’t actually use this information for our analysis, but you can see that you automatically obtained bands of values with a similar influence. If the visualization had a bar for each value of “Covers”, it would have been almost useless: too difficult to read and too detailed to be effective. With the automated banding you can immediately see that restaurants in the band of 76 to 106 seats are the most likely to purchase the service.
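To see why automated banding helps, here is a minimal Python sketch of equal-frequency banding. The seat counts are made up, and the real binning inside Automated Analytics is more sophisticated than this; the point is only that grouping raw values into a handful of bands makes the influence analysis readable.

```python
# Illustrative sketch (assumption: naive equal-frequency banding;
# the actual binning strategy in Automated Analytics is proprietary).
def band(values, n_bands):
    """Split sorted values into n_bands groups of (roughly) equal size."""
    ordered = sorted(values)
    size = len(ordered) // n_bands
    bands = []
    for i in range(n_bands):
        chunk = ordered[i * size:(i + 1) * size if i < n_bands - 1 else None]
        bands.append((chunk[0], chunk[-1]))
    return bands

covers = [12, 30, 45, 52, 60, 76, 80, 95, 106, 140, 180, 220]
for low, high in band(covers, 3):
    print(f"{low}-{high} seats")  # three readable bands instead of 12 bars
```

Instead of one bar per distinct seat count, you now compare the influence of just three ranges, which is exactly the simplification the tool performs for you.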


Let’s see how we can use the knowledge we already gained.

First of all, you have now identified your most important variables in the list of key influencers. You could decide to simplify your analysis (even a classic analysis) by taking only them into account. In this example it might not seem very useful, but if you think of a scenario where you have thousands of sensors, you might be able to identify the few that are really important for your analysis and use only their data.


Restaurants which are more likely to purchase the Premium Service are expensive; they are probably luxury restaurants. You could propose to your marketing team to refresh your brand so that it looks high-end. New expensive restaurant prospects might be attracted by this luxury aspect of the brand.

On the other hand you might take a completely different approach: reduce your pricing to be more attractive for inexpensive restaurants.


On the location front, you could decide to focus your business on the downtown areas of large cities. This could reduce your transport costs while making sure your trucks get on site faster when an important customer calls for something urgent. This decision could even mean that you completely disregard restaurants located in the countryside.


We can go even further in our exploratory analysis.

If you open the Decision Tree section of SAP Predictive Analytics you can look at the combined influences of multiple variables. The screen below shows the root and some leaves of the tree (you can actually choose the leaves you want to display or have SAP Predictive Analytics automatically open the most influencing leaves one after the other). The decision tree helps you identify segments of interest for your analysis.

[Screenshot: decision tree]

Looking at the Decision Tree you see that the customers most likely to purchase the service are restaurants which are expensive AND located downtown (20.65% of them purchased your service). You could be tempted to create a specific marketing campaign for that kind of restaurant, but if you look at the absolute numbers you see that there are only 431 restaurants of that type out of a whole population of more than 8,000: this segment contains only 5% of the restaurants. This should make you think: is it a good idea to target such a small population? Shouldn’t you run two different marketing campaigns instead, one for expensive restaurants wherever they are, and one for downtown restaurants whatever their prices? You can discuss this with the marketing team and bring the numbers and visualizations with you to support the discussion.
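The sanity check described above (a segment can have a high purchase rate yet be too small to be worth targeting) is easy to express in code. A plain-Python sketch with hypothetical numbers chosen to mirror the example (431 expensive downtown restaurants out of 8,000, about 20.65% of which purchased):

```python
# Sketch of a segment check (hypothetical data, plain Python):
# look at BOTH the segment's purchase rate and its population share.
def segment_stats(rows, predicate):
    segment = [r for r in rows if predicate(r)]
    rate = sum(r["purchased"] for r in segment) / len(segment)
    share = len(segment) / len(rows)
    return rate, share

rows = (
    [{"price": "expensive", "location": "downtown", "purchased": 1}] * 89
    + [{"price": "expensive", "location": "downtown", "purchased": 0}] * 342
    + [{"price": "cheap", "location": "countryside", "purchased": 0}] * 7569
)

rate, share = segment_stats(
    rows, lambda r: r["price"] == "expensive" and r["location"] == "downtown"
)
print(f"purchase rate {rate:.1%}, population share {share:.1%}")
if share < 0.10:
    print("segment is small: consider broader campaigns instead")
```

The decision tree surfaces the segment for you; the business judgment about whether 5% of the population justifies a dedicated campaign stays with you.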



To summarize, you have seen that in a few clicks, using the Classification module and looking at the model debriefing, you were able to do exploratory analytics and take strategic decisions for your company. Those decisions were taken on real data, based on a good model automatically provided by SAP Predictive Analytics. You didn’t have to think about how to manipulate the data, how to filter it, or how to visualize it: the tool did all of that for you. You could then concentrate on deciding how the mathematically correct and interesting output could be used to improve your business.


Here we took the example of napkins but you can use the same concepts in many different situations.

Just think of your business and of some things you want to improve, of the data you already have in stock and you are likely to find good examples.

If you have any idea or example, just post it under this blog so that all the community can benefit from it!


I hope this paper inspired you to try out SAP Predictive Analytics and do exploratory analytics with it. If you want, you can download a free trial version here: www.sap.com/trypredictive.

And if you have any feedback or idea on how to improve SAP Predictive Analytics you can post it here:


Have fun and happy explorations!

Let’s face it – “predictive analytics” can be a bit of a complicated topic once you get into it. The “why” and the “what” are pretty easy to get your head around but the “how” is where the rubber hits the road.  Before we even get to the complexity of which algorithms to use and how to configure them, there’s a higher level consideration to deal with first – which technologies to use and how do they fit together? 


In my last article: Predictive Smackdown: Automated Algorithms vs The Data Scientist, I discussed where our Automated Analytics and Expert Analytics fit into the bigger picture, so in this entry, let’s turn our attention to SAP HANA and the predictive options available there. 

Predictive Alphabet Soup?


Sometimes the options on SAP HANA look like “Predictive Alphabet Soup” because with R, PAL, and APL, it is not just the letters in the acronyms that matter but also the order they are arranged in. Unfortunately for the uninitiated, some customers see these three technologies as disjoint and confusing: when do you use R and when do you use PAL? What are the differences between PAL and APL?

I have even heard from a few customers that they would like to wait until these technologies “merge” into one (hint: it’s like saying you want to wait until an apple and an orange become one fruit).  A better way to look at this is to understand the pros and cons of each and how they work together.  Let’s take a look at each one of them individually and then what that means overall.

R – The De Facto Predictive Language

It’s pretty much impossible to read anything about predictive analysis without hearing about the open source language “R”. What is R? It is a language used by statisticians and data scientists to analyze data sets with complex mathematical algorithms. There are well over 5,800 "packages" (and growing) that implement statistical techniques, data manipulation, graphing, reporting, and more.

R is extremely popular because it is freely available, easy to extend, and there are lots of resources (and people) to learn from.  But R is a statistical language made for the mathematically inclined and therefore isn’t something you just pick up a book on and learn in a few hours unless you have some background already. 

SAP Predictive Analytics 2.x uses R and provides a graphical modelling environment on top to make the creation and comparison of predictive models much easier than by invoking R on the command line.  You can even add your own custom R components so there’s virtually no limit to the types of modelling you can do.


If you have SAP HANA, you can deploy an R server as a sidecar to run predictive algorithms on your data.  This opens up all new possibilities because the full breadth of R’s capabilities can be unleashed on your data in HANA.  However as an external system, this type of deployment requires data extraction from HANA to feed the R server which will crunch the numbers and return the results back to HANA.  In addition to the obvious I/O bottlenecks involved in bringing data to an external system, you lose the parallel processing that SAP HANA is legendary for.

The SAP Predictive Analytics client tool can use R locally, but can also be used for scenarios where you want to leverage an external R server that is connected to SAP HANA.

SAP Predictive Analysis Library (PAL)

The SAP PAL is a native C++ implementation on HANA of the most commonly used predictive algorithms in data science.  The goal of this library is to cover up to 80% of the common predictive scenarios that you would normally use an external R server for.  Note the goal is 80% of the use cases, not 80% of the algorithms: with over 5,000 R algorithms in the world, only a tiny fraction of them are actually used very frequently.

By using SAP PAL you can leverage all the in-memory goodness and near-linear parallelism performance that SAP HANA offers to perform training, scoring, categorization, and more without your data leaving the server.  So what’s the problem?

Well, if you need an algorithm that is not in the SAP PAL, you may still need to deploy an external R server.  Additionally, many data scientists develop their own R algorithms: something within their skill set, whereas developing those same algorithms in C++ for native deployment on HANA may not be.

How do you use the SAP PAL? You can call it directly in SQLScript, but fortunately SAP Predictive Analytics not only supports R, but it also supports SAP PAL – and even a combination of the two.  Of course this only makes sense when you are using an external R server connected to SAP HANA as PAL itself is native to HANA.

But there’s also another thing to consider – what if you aren’t a data scientist? 

SAP Automated Predictive Library (APL)

The APL is a native C++ implementation on HANA of SAP’s patented automated machine learning technologies that make Automated Analytics so cool. Instead of rehashing the benefits of Automated Analytics here, please take a look at my previous blog entry that details it more fully: Predictive Smackdown: Automated Algorithms vs The Data Scientist

You could perform automated analytics with HANA before the creation of the APL, but you needed to deploy a sidecar predictive server to run the automated machine learning algorithms.  The APL was introduced at the beginning of 2015 to bring all of that “automagic” goodness to HANA, and just like the PAL, the APL does not need to extract data from the HANA system to do its predictive magic.


You can find out more about the APL here: What is the SAP Automated Predictive Library (APL) for SAP HANA? .

Predictive Peanut Butter and Jelly (or Chocolate & Peanut Butter)


A more interesting analogy than the Gestalt Principle is the concept of peanut butter and jelly sandwiches (or chocolate and peanut butter cups if you prefer).  Peanut butter is rich and creamy, but adding the sweetness and tartness of jelly somehow creates a magical combination that is better than either topping by itself. "PB&J" is one of the best inventions in the world.

The predictive options are pretty much like peanut butter and jelly: you can use R by itself, you can use SAP PAL by itself, or you can go the automated route with the SAP Automated Predictive Library.  Each has its own purpose, but being able to use one or more of these together based on your needs is where things get very interesting:

  • Need the flexibility of custom R algorithms but use SAP HANA?
    • No problem, deploy R because HANA can be connected to it.

  • Want the speed of HANA but still need R’s flexibility?
    • Deploy both R and PAL together and do as much in PAL as you can.

  • Want to have some intelligent auto-clustering algorithms but still need some hardcore data science requirements?
    • Simple! – deploy PAL and APL.

Know Your Predictive OPTIONS

[Diagram: predictive solutions]

A prerequisite to “knowing what you are doing” is understanding what is available and what you need. Personas and use cases are usually good hints:

SAP PAL and R:

  • Data Scientists and Mathematicians creating models themselves (typically by hand).

SAP APL and Automated Analytics:

  • Business and Data Analysts as well as Data Scientists wanting automatic model creation.

One reason some of our customers get confused is that they think "all predictive is the same" and assume that if their HANA system has predictive capabilities, they also have the Automated Predictive Library (APL). However, the APL is part of the “Predictive Option for SAP HANA” license, so you want to make sure you know whether you are licensed for the APL or need to get it.  A future article will go into this option in more detail.

You must resist the urge to try to rearrange the letters and assume that you can replace the APL with PAL or vice versa.  Hopefully this article shows you how each predictive technology on SAP HANA has its place and they are not interchangeable.

Of course, SAP Predictive Analytics 2.x operates with all of these: R, PAL, and APL.

Most customers realize the ROI of their existing investment in SAP HANA can be greatly enhanced by enabling users of all types to benefit from automated predictive analytics, and adopt the Predictive Option for SAP HANA.  Whether you like your peanut butter with jelly or chocolate, you have to admit, it tastes pretty damn good.



PA 2.2 has been released, I have recently installed and seen a nice new feature called model compare that exists in the expert side of predictive.




Model Compare


Basically this feature tells you which algorithm has the strongest KI and KR. The algorithm with the best KI and KR score should be the most accurate. Remember that KI is a measurement of predictive power while KR is a measurement of prediction confidence; the closer these are to 1, the stronger the model.


Below is an example where data is run through three different algorithms; once run, the algorithm with the highest KI and KR gets a star.




The results will display the best algorithm and the supporting KI and KR scores per algorithm.
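As an illustration only, the comparison step could be sketched as follows in Python. The model names are hypothetical and the actual ranking rule Expert Analytics applies is not documented here, so the sketch simply assumes the winner is the model that maximizes KI + KR.

```python
# Hypothetical sketch of a model-compare step (assumption: rank by
# KI + KR; Expert Analytics' exact rule may differ).
def best_model(scores):
    """scores: {model_name: (ki, kr)} -> name of the winning model."""
    return max(scores, key=lambda name: sum(scores[name]))

scores = {
    "Decision Tree": (0.84, 0.92),
    "Custom R Tree": (0.81, 0.95),
    "Auto Classification": (0.88, 0.93),
}
winner = best_model(scores)
print(f"{winner} wins with KI={scores[winner][0]}, KR={scores[winner][1]}")
```

However the winner is picked, the useful habit is the same one the feature encourages: always compare candidate models on the same held-out data before trusting one.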



You will also be able to review the results per algorithm if needed.



Hope you find this helpful.




Oxford Dictionaries defines smackdown as “a bitter contest or confrontation”.  I didn’t realize that the word “smackdown” originated in the world of “entertainment wrestling” and isn’t even 30 years old.  But this is the word that comes to mind whenever I talk to someone with a data science background about SAP Predictive Analytics’ automated machine learning algorithms.


Data scientists have a healthy amount of skepticism whenever we say we have a technology that can automate something that typically requires so much training, practice, and experience.  Can you blame them?


Automatic Vs Manual Transmissions


I’m not a data scientist, and statistically speaking, there’s a pretty good chance you aren’t one either.  There is an analogy that is quite appropriate here (if not completely boring) that most of us can associate with: automobile transmissions.


A manual transmission car relies on the human driver to engage the clutch properly and shift to the correct gear at the correct time.  This co-ordination requires practice, and some are not comfortable with it even after days or weeks of training.   There are others (like myself) that prefer a manual transmission even though it requires more work because it provides better feedback (ability to adapt), more control (flexibility), and in most cases better fuel economy (more efficient operation).


An automatic transmission uses a computer to measure various metrics (speed, RPM, throttle) and operates the clutch and gearbox on behalf of the driver.  Some prefer this because it works automatically without their intervention, training, or experience.


Can you mess up with an automatic transmission? Yup, although it is much harder to stall the vehicle or “bunny hop” the car by being in the wrong gear.  You can spot a person who knows how to drive a manual transmission by the extra special pain they feel when they hear someone "grind the gears" in a car.


Data Science – The Manual Transmission of Predictive


Ask a data scientist about predictive analysis and many times you will get either an extremely simple explanation (they are dumbing it down for you) or a highly theoretical one (they want to make sure you know it is complex).  Now that I’ve ticked off all the data scientists, let me cover myself by saying “both answers are right”.


Predictive modelling is pretty complex. The “secret sauce” is not so much which algorithm is used as the intelligence the data scientist puts into the modelling process.  As humans, we have a semantic understanding of the data that can improve the predictive model and ultimately its predictive power.


For example, if we take all viewers in a movie theater, you likely would want to group people based on something like age or the type of relationship of the others in their party (i.e. parent, sibling, spouse), and then apply a different analysis for each group based on their common traits.  You wouldn't want to apply the same heuristics to siblings watching the same movie as you would for a couple on a date would you?


A good predictive model can take days, weeks, or even longer.  The kicker is that in the end it is the effectiveness of the model, not how long it took to create it.  A data scientist will do a lot of analysis and iterate through a number of predictive algorithms, models, and variables before settling on the final model.


The “data science profession” is probably one of the most subjective occupations in the world – how do you know if you have a great data scientist that is extremely creative or a lazy one that follows a very formulaic process that could be taught to anyone?  Unfortunately it is not immediately obvious, but neither is a person who reached their destination by driving in second gear the whole time.  You'll eventually figure it out when the car gets to the destination but stinks of burning oil.


Automated Analytics – The Automatic Transmission of Predictive


Automatic transmission cars are very easy to drive because you simply need to understand the concepts of “Drive” and “Reverse” gears and away you go.  The automated predictive algorithms in SAP Predictive Analytics are definitely more complicated than that, but they aim to provide the same level of ease: you need to understand the concepts of clustering and time series but do not actually have to know how they work.   This is what makes Automated Analytics so approachable to people without a data science background.


Data scientists typically scoff at these automated capabilities because they do not have the same level of visibility and control they are used to.  We all tend to be a bit suspicious of “magic black boxes” because we usually don’t have any way of determining how effective they are.  In a car, the RPM gauge and the sound of the car are the only indicators that gear shifting is working correctly.   For the majority of drivers, this is enough to operate the car and get to their destination.


Automated Analytics generates reams and reams of analysis to help data scientists understand the performance of the algorithms on a specific dataset, much like an uber-set of gauges.  However in keeping with the nature of “automatic”, there are some limitations on how much a data scientist can configure parameters.  Just like an automatic transmission car, you either like it, hate it, or tolerate it because it gives what you want in the end.


Expert Analytics – The Semi-Automatic Transmission of Predictive


SAP Predictive Analytics also includes Expert Analytics, which is designed for data scientists to take advantage of any predictive technology they wish, including our automated predictive algorithms, the open source predictive language R, the SAP Predictive Analysis Library (PAL), and the SAP Automated Predictive Library (APL).   In Expert Analytics, the user is not tied to any one predictive technology or algorithm and in fact can create multiple algorithm chains in parallel and use the new Model Comparison feature in PA 2.2 to let the system advise which predictive model is the best to use.


A question I get a lot about Expert Analytics is how we position it against other predictive analysis tools from competitors.  That is a topic out of scope for this post, but consider what the purpose of a semi-automatic transmission is – give the driver the control and fun of gear shifting when they want while eliminating the less desirable requirements of a manual transmission such as using the clutch at the right time.   Expert Analytics is about getting you to your destination in the most efficient way, no matter whether you are letting the system do the shifting or if you want to step in and be more prescriptive about what happens when.


Smackdown Winner: You


I was with a customer last week who brought two data scientists and three data analysts to an all-day analytics workshop.   These meetings are usually a challenge because we have data analysts who want to do more predictive analysis, but we also have data scientists who tend to be perfectionists around process (since this is the best way to control the quality of analysis).   Presenting Automated Analytics to the data analysts is always well received because it brings them a new capability that does not require a PhD in Math or some intensely technical statistical training.    This is usually the point where the data scientists say that what they do cannot be automated, and lots of arms get folded.


However in this case the data scientists quickly understood the value of others in the organization doing their own analysis for some of (what they consider to be) the simpler tasks so they could focus on the higher value projects where complex modelling is required.   One of them said the coolest (and in my opinion the most humble) thing I’ve heard a data scientist say:


“The business user knows more about the semantics of the data than I ever will.  They can sometimes better understand how the data should be used because they are solving a specific business question. So while I can create complicated predictive models, they may not be as efficient as simpler models that have more business meaning in them”.


The strategy of including auto-nodes in Expert Analytics is to provide data scientists with yet another tool in their spectrum of technologies they can use. So, (some) data scientists will recognize the value of using automated algorithms alongside their traditional techniques. They likely also will want to encourage the data analysts to use Automated Analytics because they can better solve their own problems and free up the data scientist to focus on more hardcore predictive problems that require them to hand-craft their models. 


Take SAP Predictive Analytics For a Test Drive


SAP Predictive Analytics includes both Automated Analytics and Expert Analytics in a single package so regardless of whether you are a business user or a data scientist, there’s something in there for you.  You can download a free trial of SAP Predictive Analytics here: SAP Predictive Analytics Trial Download


For more information, make sure you check the SAP Predictive Analytics community regularly.


On June 12th we formally released SAP Predictive Analytics (PA) 2.2 and it is on the SAP Service Marketplace (SMP) now!  For those of you who are not already using it, you can download a 30-day trial here.


What’s The Big Deal With PA 2.2? 


This is another big release for us with improvements and new features across the entire SAP Predictive Analytics portfolio. 


Instead of just listing the new features/functions, let’s take a look at how SAP PA 2.2 moves us forward in pursuing some of our core goals (note some features address multiple goals, but I’m keeping it simple here):


AA = Automated Analytics

EA = Expert Analytics

HANA = Native on SAP HANA


A better, smoother experience for data scientists AND business users:

  • (AA) Very wide dataset support (up to 15K columns): Automatically handle very wide datasets to improve both the efficiency and effectiveness of your predictive models
  • (EA) Ability to share custom R and PAL components:  Enable other users to use your algorithms with ease.


Making data scientists more agile and efficient:


  • (EA) New Model Performance Comparison: Compare the performance of two or more algorithms and get a recommendation and detailed explanation for which one is the best to use.
  • (EA) New Model Statistics: Calculate performance statistics on datasets generated by classification and regression algorithms.
  • (EA) Support for R 3.1.2:  To make it possible to use the latest libraries
  • (EA) Support for multiple charts: Use more than one chart in your offline custom R components


Enabling customers to better leverage their existing data and investments:


  • (AA) Support for SAP HANA Views: Connect directly to SAP HANA Analytic and Calculation Views
  • (AA) Support for SAP BW on HANA: Use BW on HANA systems as a data source
  • (EA) Improved BW acquisition: Easier and faster variable selection and handling of hierarchies
  • (HANA) Updated Automated Predictive Library (APL): Now includes automated recommendation


These are only the biggies - check out the What's New Guide for SAP Predictive 2.2 for these and more.


My colleagues Antoine CHABERT and Didier MAZOUE have created an extensive and comprehensive post (Frequently Asked Questions - Downloading, Installing and Activating SAP Predictive Analytics) that is very well worth reading as well!



Flashback to a Key Feature of PA 2.1: Lumira Co-Existence!


The new advances in SAP Predictive Analytics 2.x this year have generated so much excitement that it has kept our teams extremely busy – so busy I did not have a chance to publicize one of the most important features of SAP PA 2.1: Lumira co-existence.


YES! You can install SAP PA 2.1 (or later) on the same machine as SAP Lumira 1.25 (or later).  This was a heavily requested feature and now you can even have Automated Analytics, Expert Analytics, and Lumira running all at the same time.  Note that you will need to uninstall previous versions of SAP Predictive Analytics before installing the new versions that include co-existence.


You can find out more in this article: Lumira + Predictive Co-Existence: Good News Never Comes Single Handed!


How To Get Started?


  1. Download the trial!
  2. Check out the online materials and tutorials:
  3. Participate in the SCN Community: SAP Predictive Analytics
    • Learn, ask questions, get answers!


So, What’s Next For Predictive Analytics?



Our development team is already hard at work on the next version, and we are nailing down 2.3, 2.4, and even further down the line, so keep the feedback coming!  The Legal People don't let me give too many things away, but here are just a few things we are working on (** Note: these items are in the planning stage; they can change at any time and are not statements of commitment or of future features).


  • Bringing our automated machine learning predictive services to SAP HANA Cloud Platform (HCP)
  • Continuing our innovation on Hadoop and Spark to supercharge our unique Big Data capabilities
  • Even better integration with other SAP systems and landscapes, including SAP HANA, SAP BW, and SAP BI
  • Continued UX progression as we bring Expert and Automated Analytics experiences closer together.


Four months ago, I announced PA 2.0 (Introducing SAP Predictive Analytics 2.0!) and “predicted” 2015 would be a BIG year for SAP Predictive Analytics and so far it has been a pretty wild ride.  I would encourage you to set up alerts for the SCN Predictive Analytics Community so that you are always up to date. Simply go to SAP Predictive Analytics and select, "Start email notifications" on the right hand "actions" menu.


Now, download SAP Predictive Analytics 2.2 and go predict something!



SCN: Ashish Morzaria

Twitter: Ashish C. Morzaria (@AshishMorzaria) | Twitter

Many questions users ask on our community are related to the download, installation and activation of SAP Predictive Analytics.

The intention of this document is to provide answers to these questions. The FAQ will evolve over time and take into account new questions & feedback from our user community.

We hope this will prove useful for you. Enjoy SAP Predictive Analytics!

Antoine Chabert and Didier Mazoue

PS: Heartfelt thanks to all our contributors and reviewers!




What is SAP Predictive Analytics?

SAP Predictive Analytics is SAP’s powerful predictive analytics software and the successor of SAP InfiniteInsight and SAP Predictive Analysis.


SAP Predictive Analytics combines SAP InfiniteInsight and SAP Predictive Analysis into a single product.


In SAP Predictive Analytics, SAP InfiniteInsight is renamed to Automated Analytics and SAP Predictive Analysis is renamed to Expert Analytics.


Regarding the release version numbers:

  • The last released version of SAP InfiniteInsight is 7.0.1.
  • The last released version of SAP Predictive Analysis is 1.21.
  • The first released version of SAP Predictive Analytics is 2.0, the version that was just released is SAP Predictive Analytics 2.2.


What are the current names being used for the products? How do they map to the old names?

Old name                                 Current name

SAP InfiniteInsight                      Automated Analytics*
SAP Predictive Analysis                  Expert Analytics*
-                                        Data Manager
InfiniteInsight Factory                  Model Manager
InfiniteInsight Authenticated Server     Automated Analytics Server

*Dedicated user interface in SAP Predictive Analytics.


Product Download

Does SAP offer a trial version of SAP Predictive Analytics?

Yes, we do.


Go to this page: http://scn.sap.com/community/predictive-analytics/blog/2015/03/20/sap-predictive-analytics-20-30-day-trial-now-available, click on the Download button, and start downloading a 30-day trial of SAP Predictive Analytics.


The trial is available for the Windows 64-bit and 32-bit releases of SAP Predictive Analytics, desktop version.


The 30-day trial starts right after the software is installed.


Does the trial version correspond to the latest release?

Don’t worry, as soon as we release a new version, we update the trial release as well.

At the time of writing, the trial version offered is SAP Predictive Analytics 2.2.


As a SAP Predictive Analytics customer, I want to download the latest releases I am licensed for. Where should I go?

That’s easy.


As a prerequisite you need a valid S-user identifier. For users & authorizations, please refer to the page here: Users & Authorizations | SAP Support Portal.

Then go to the SAP Service Marketplace: https://support.sap.com/software/installations/a-z-index.html, click on the letter P (as in Predictive), and select the entry SAP Predictive Analytics.

[Screenshots: navigating the SAP Support Portal to the SAP Predictive Analytics downloads]

You will see the different products:


  • Your first stop will likely be SAP Predictive Analytics itself. It comes in two deployment modes: a client/server one and a desktop one. The client/server mode has two download packages, codenamed PRED ANALYTICS CLIENT 2 and PRED ANALYTICS SERVER 2. The desktop mode is the download package codenamed PRED ANALYTICS DESKTOP 2.

  • An important thing to note is that our Expert Analytics user interface is only part of the desktop deployment. It is not part of the client installer.


  • Once your first models are deployed for business production purposes, you will need Model Manager to monitor the evolution of models over time and schedule model refreshes. Model Manager has the code name PRED ANALYTICS MODEL MGR 2 and was previously known as InfiniteInsight Factory. Note that Model Manager only connects to SAP Predictive Analytics Server.



For more details, please check our PDF guide: https://websmp110.sap-ag.de/~sapidb/012002523100009341912015E/pa22_architecture_spec_en.pdf


You can also refer to our help portal: http://help.sap.com/pa


I downloaded and installed the product. I noticed the product is already activated. Do I still need to apply for a license?

The installation is provided with a 30-day key that is set to expire.

You must apply for your license as soon as possible.


As a SAP OEM Partner, I would like to embed predictive capabilities into applications. What is proposed to me?

If you are a SAP OEM Partner and you want to embed the functionalities of SAP Predictive Analytics 2.2 (Automated Analytics) in another application you are building, we have a dedicated product edition for you.


Go to the SAP Service Marketplace: https://support.sap.com/software/installations/a-z-index.html, click on the letter P (as in Predictive), and select the entry SAP Predictive Analytics OEM.

[Screenshots: the SAP Predictive Analytics OEM entry in the SAP Support Portal]


Product Installation

How many installations of SAP Predictive Analytics are supported on a single machine?

Only one installation of SAP Predictive Analytics desktop is supported on a single machine.


Is there a document describing the installation of SAP Predictive Analytics Desktop 2.2?

For a summarized view on the installation process, please refer to this useful SCN post: http://scn.sap.com/docs/DOC-64662.

The detailed installation guide is found here: http://service.sap.com/~sapidb/012002523100009343372015E/pa22_install_en.pdf


Is there a document describing the installation of SAP Predictive Analytics Client 2.2?

Please refer to the installation guide here: http://service.sap.com/~sapidb/012002523100009341942015E/pa22_client_install_en.pdf


Is there a document describing the installation of SAP Predictive Analytics Server 2.2?

Please refer to the installation guide here: http://service.sap.com/~sapidb/012002523100009342702015E/pa22_inst_server_win_en.pdf


Is there a document describing the installation of Model Manager 2.2?

Please refer to the installation guide here: http://service.sap.com/~sapidb/012002523100009343402015E/pa22_model_mgr_install_en.pdf

This blog post nicely complements the documentation: Installing and connecting to SAP Predictive Analytics Model Manager


Is there a document describing the installation of SAP HANA APL 2.2?

Please refer to the guide here: https://websmp104.sap-ag.de/~sapidb/012002523100009346632015E/pa22_hana_apl_user_en.pdf

A nice blog post is also here: How to Install the Automated Predictive Library in SAP HANA


I am a former SAP Predictive Analysis user. What is the installation process to move from SAP Predictive Analysis to SAP Predictive Analytics?

You need to uninstall SAP Predictive Analysis and then install SAP Predictive Analytics – pick the desktop version so that Expert Analytics is installed as well.


Please note that your current SAP Predictive Analysis key code will only enable the Expert Analytics mode of SAP Predictive Analytics, which is what you are looking for. If you want access to the Automated Analytics capabilities, you need to be licensed for them. Please contact your SAP Account Executive.


I am a former SAP InfiniteInsight user. What is the installation process to move from SAP InfiniteInsight to SAP Predictive Analytics?

You don’t need to uninstall SAP InfiniteInsight; you can install SAP Predictive Analytics on the same machine. You will need to get a specific key code to activate the Automated Analytics capabilities.


I am a former SAP Predictive Analytics user. What is the installation process to move from SAP Predictive Analytics 2.0 or 2.1 to SAP Predictive Analytics 2.2?

You need to uninstall your previous release of SAP Predictive Analytics and then install SAP Predictive Analytics 2.2.


I love SAP Lumira and I love SAP Predictive Analytics. I cannot choose and I want to install both on my machine. Is it possible?

Yes, you can! This is possible since the 2.1 release of SAP Predictive Analytics and continues to be possible with SAP Predictive Analytics 2.2.

You have to install in the following order though: SAP Predictive Analytics should be installed first and SAP Lumira should be installed second.

This means that if SAP Lumira is already installed, you need to uninstall it first, then install SAP Predictive Analytics, then reinstall SAP Lumira.


Installing SAP Predictive Analytics side by side with SAP Lumira without uninstalling SAP Lumira should normally be possible. In practice it is not, due to a problem detected too late in the SAP Predictive Analytics 2.2 release cycle. Apologies for that; we will do a better job on this in future releases.


I would like to use R algorithms in Expert Analytics. What should I do?

Detailed instructions are provided in the Expert Analytics User Guide here: http://help.sap.com/businessobject/product_guides/pa22/en/pa22_expert_user_en.pdf. Please refer to pages 12-13.


I have heard that I can integrate R code into SAP HANA. Where do I find more information?

Please refer to the guide here: http://help.sap.com/hana/SAP_HANA_R_Integration_Guide_en.pdf


Where can I find the product availability matrix?

The document that is applicable to the latest release can be found here: https://support.sap.com/content/dam/library/ssp/infopages/pam-essentials/Pred_Ana_20.pdf


Are there any restrictions associated with the release of SAP Predictive Analytics 2.2?

Please refer to the central SAP note http://service.sap.com/sap/support/notes/2165858. Our help portal is always available here: http://help.sap.com/pa.


Can I install the Desktop version in multi-user environments (like Citrix)?

It is possible to install the desktop version in virtualized environments.

However, Expert Analytics is not officially supported there, as only one user at a time can work with Expert Analytics in that scenario.

There is no such support restriction for Automated Analytics in Citrix.


Product Activation

I do not really know if my company is licensed to SAP Predictive Analytics. What should I do?

Please get in touch with your SAP Account Executive.


I would like to get access to my key codes. Where should I go?

You should go here: https://support.sap.com/licensekey.

Request Licenses.png

  • You can request keys and monitor your key requests (in “What would you like to do today”)
  • You have access to a detailed how-to: How to Request License keys - Analytics solutions from SAP.
  • SAP Note 2180743 explains all the options and how to create a license.
  • If you are facing issues with the license request or creation, you can ask for support using the component XX-SER-LIKEY-BOJ.


I have a license for SAP Predictive Analysis. Does it translate into a license enabling SAP Predictive Analytics – Expert Analytics mode?

Yes. Your current SAP Predictive Analysis key code enables the Expert Analytics mode of SAP Predictive Analytics.

I have a license for KXEN or SAP InfiniteInsight. Does it translate into a license enabling SAP Predictive Analytics – Automated Analytics mode?

Yes. Note that you will have to request and install a new key code.

Where should I input the Automated Analytics key code?

SAP Support usually supplies a key code.


It is then up to the end user to add it to the proper file so that it is taken into account by Automated Analytics.

The usual location for the Automated Analytics license file is: C:\Program Files\SAP Predictive Analytics\Desktop 2.2 (in the case of a SAP Predictive Analytics Desktop 2.2 installation).


The file that contains the key code is named License.cfg.


When a new key code is to be added, a new line should be added to the file content using the format:

# KeyCode<TAB><TheReceivedKeycode>

The file supports multiple key entries.


As an example, imagine I am licensed for the Modeler component of Automated Analytics, and later on I get licensed for the Engine component as well. I can then get an additional key activating this module, and I will need to add this key as a new line in the file.
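Following the format above, a License.cfg holding two keys might look like the sketch below. The key values are made-up placeholders for illustration only (real codes come from SAP Support), and the separator between “KeyCode” and the value is a tab character:

```
# KeyCode	AAAA-BBBB-CCCC-DDDD
# KeyCode	EEEE-FFFF-GGGG-HHHH
```

In this sketch, the first line would be the original Modeler key and the second the Engine key obtained later; each newly licensed component simply contributes one more line to the file.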


Where should I input the Expert Analytics key code?

Open Expert Analytics, go to the Help menu, and select the entry Key Code. Click on the button Enter Keycode and input (or paste) the key code that was given to you, then click OK.

Step 7.png


How many key codes do I need to get Automated Analytics and Expert Analytics working?

First of all, this question is only applicable to the desktop installation of SAP Predictive Analytics.


If you are licensed to both Automated Analytics and Expert Analytics, the key code you will receive has to be used in two ways:

  • Adding this code to the Automated Analytics license file (see Where should I input the Automated Analytics key code?)
  • Entering this code using the Expert Analytics user interface (see Where should I input the Expert Analytics key code?)


On my installation, I see the entry to trigger Expert Analytics being greyed and I cannot start the product. What can I do?

Expert Analytics.png

First of all check that your installation is a desktop one.


If it is, the workaround consists of starting the executable directly.

The executable is usually located in C:\Program Files\SAP Predictive Analytics\Desktop 2.2\Expert\Desktop and is called SAPPredictiveAnalysis.exe.


You should also contact SAP Support about the fact that the Expert Analytics link is greyed out; SAP Support will investigate the problem with you. At the time of writing, we do not yet understand what is causing this behavior.


If you have installed a client/server deployment, it is expected that the Expert Analytics link is greyed as the Expert Analytics user interface is not installed by the client/server deployment.


On my installation, I see some of the entries to trigger Automated Analytics being greyed and I cannot start the product. What can I do?

Automated Analytics - Grayed.png

This is probably because your license code does not enable all of the Automated Analytics product capabilities. Please refer to the Product Activation section of this document.


Product Support

I have a problem, how can I create an incident with SAP Support?

Click on https://support.sap.com/kb-incidents/incident.html.


Click on Report an SAP Incident

Report An SAP Incident.png

Enter the information related to your incident - include precise description, screenshots and all possible means to help SAP Support to reproduce & investigate the problem.

SAP Incident Wizard.PNG

Please note the distinction that is made between SAP support vs. SCN:

The SAP Incident Wizard should be used to report technical issues related to your products or the SAP support applications. If you have a "How to" question, go to SAP Community Network (SCN) Discussion Forums where you can post questions to knowledgeable users and share ideas, opinions and information about SAP products and services.

Back in April of this year, we started a new series of webinars: the Advanced Analytics Virtual Classroom.

Our goal was to offer free monthly webinars where experts would be discussing the topic of advanced analytics in a short format (30 minutes). The series is aimed at anyone interested in the topic of predictive analytics, so you don't need to be a data scientist to attend. Each webinar addresses a particular topic and at least 50% of the webinar is spent on showcasing (i.e. demo) our solution, SAP Predictive Analytics and its capabilities.


If you are new to the series, you can go back and binge-watch all previous sessions (listed below). We've also put together a handy timeline for each should you decide to go and watch a particular demo or topic.


Video Session #1: An Introduction to Advanced Analytics

00:64 SAP Predictive Analytics is about being Fast, Simple, and Everywhere

06:04 SAP Predictive Analytics Overview

08:17 Popular Predictive Use Cases

12:10 2 Modes: Automated Analytics and Expert Analytics

14:38 Social Network Analysis Demo: Building a Social graph

20:28 Social Network Analysis Demo: The Resulting Graph

23:57 Example of Recommendation for Customer - On-the-fly or Batch



Video Session #2: Data Preparation Made Easy

01:27 New Data Challenges, New Opportunities

03:17 The Advanced Analytics Process: Prepare, Model, Deploy, and Manage

04:37 Successful Data Preparation

09:05 Data Preparation Demo

11:49 Retrieving Additional Data Attributes

14:13 Deriving New Attributes from Data (dates; number of days)

16:36 Computing Aggregates from Transactional Data

23:36 Looking at Generated SQL

26:32 Creating a Classification Model

28:00 Overview of Generated Model

28:38 Contributions by Variables




Video Session #3: Putting Insight to Work

02:02 New Data Challenges, New Opportunities

03:42 The Advanced Analytics Process: Prepare, Model, Deploy, and Manage

04:50 Automated Mode - aka Automated Analytics

05:30 Demo: Selecting a Data Source

07:25 Demo: Selecting your Variables

08:07 Demo: Overview of Generated Model

08:36 Demo: Applying Your Model (output example: Excel file)

10:34 Demo: Applying Your Model (output example: database)

12:59 Demo: Generating the Source Code (example: Java code)

14:25 Demo: Saving the Model (example: text file)

15:00 Demo: Saving the Model (example: in-database; SAP HANA)

17:18 Expert Mode - aka Expert Analytics

18:18 Demo: Predict Room

20:03 Demo: Visualize Room

21:34 Demo: Writing Back to Database

24:08 Demo: Scoring on-the-fly (Stored Procedure)

27:05 Real-World Integration Example: SAP hybris

29:25 Real-World Integration Example: SAP Fraud Management

33:49 Demo: Model Management

41:26 Advanced Analytics Portfolio for SAP HANA



Video Session #4: When to Choose the Expert Mode

Catch it live on June 18. Click here to Register

For future sessions and events, please go to our SAP Predictive Analytics event page on SCN!

