
SAP Predictive Analysis


Predictive analytics has recently seen a spike of excitement among many different business departments, such as marketing or human resources, that seek to better understand their customers, or would like to look at how employees behave in their organization and improve the services offered to their clients. Unfortunately, only very few business departments have access to Data Scientists, and they therefore often have little experience in developing predictive models. This presents a real challenge, since predictive analytics is fundamentally different from traditional reporting, and without Data Science support you might find it hard to get started and feel confident in the results of your analyses. Luckily, SAP InfiniteInsight addresses this challenge directly and can easily be used by analysts, since it greatly reduces the complexity of data preparation and model estimation through a very high level of automation. This way you can focus on the business questions that matter and spend less time dealing with complicated IT solutions. This blog is geared towards analysts who want to understand how to get the most out of their data using SAP InfiniteInsight, so here's how you would get started with your predictive modeling initiative:

 

 

Step 0: Understand the predictive analytics process

Before actually getting started, you should familiarize yourself with the general idea behind predictive analytics and how it differs from traditional business intelligence (the folks over at Data Science Central have a nice summary). In short, when using predictive analytics we want to forecast the probability of a future event based on patterns that we find in historical data for said event: For example, to predict turnover (your target) we will need historical data on turnover along with a bunch of attributes that we can use to find relationships and patterns between the attribute and target variables. Once we have derived the historical relationship and built a valid model, we will use this model on new data to forecast turnover. The forecasted results can then be used to make various business decisions. Now, the actual flow may involve a few side steps (e.g. transforming your data so that it can be used) but in essence this is the high-level process that will be described here.

 

 

Step 1: Define your business objective

Whether you want to predict which customers will buy your newly launched product or which employees might leave your company, you need to define what your business objective is and clarify how you want to measure it. This sounds trivial but can present a real challenge: you need historical data for your target outcome that is sufficiently accurate to derive a statistical model in a later step, not to speak of having your target variable available in the first place.

 

 

While it’s certainly possible to “just play around” and see what happens (sometimes referred to as exploratory analysis), you will gain better results if you focus your efforts on a single business question from the very beginning. You will also find it easier to gain end-user acceptance if you know what challenge your users are facing and how your analysis can help them solve it.

 

 

Step 2: Find & connect to the data

Depending on your business objective, you will now need to find the data to base your model on. You don't need to have a sophisticated concept in mind, but you'll need a general idea of what kind of data you are looking for. With SAP InfiniteInsight there is one simple rule: the more variables you have, the better, since SAP InfiniteInsight will automatically determine which variables should be removed and which variables add value to the model. Getting the data from an operational system like SuccessFactors Employee Central or SAP CRM can be slightly more difficult than from a Business Warehouse, but the granularity of data available in a BW may not be sufficient for modeling: with operational systems the data usually has the right granularity but is frequently distributed across many different tables, and companies often restrict direct table access to users from IT, so you may face some challenges when trying to get the data from the tables directly. BW, on the other hand, often has a wealth of data, nicely packaged and preprocessed, but you may run into the issue that while the data has all the attributes you're looking for, it is too aggregated to be used.

 

The rule of thumb for data granularity is: you need historical data at the same granularity as the concept you want to predict, i.e. if you want to forecast turnover at the employee level, you need the historical data at the employee level as well. The good news is that in SAP InfiniteInsight you can always fall back on a simple flat file with your data, so if push comes to shove you can simply ask your IT department to download some data as a CSV in the needed format.
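To make the granularity rule concrete, a minimal employee-level flat file for turnover modeling might look like the sketch below; the column names and values are invented, and a real extract would carry many more attributes:

    EmployeeID,Age,Tenure,FunctionalArea,LastPerformanceRating,LeftWithin12Months
    1001,34,5,Sales,3,Yes
    1002,41,12,Finance,4,No
    1003,29,2,Sales,2,No

Each row is one employee as of the snapshot date, and the last column is the target variable the model learns to predict.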

 

 

Step 3: Derive & interpret the model

Once you have the data, you want to find the model that offers the best tradeoff between describing your training data and predicting new, unknown data: SAP InfiniteInsight can automatically test hundreds of different models at the same time and choose the one that works best for your data and purpose. Hidden in the background, SAP InfiniteInsight also automatically performs many of the tasks that Data Scientists usually do with traditional tools to improve the quality of your data and the model performance, such as missing value handling, binning, data re-encoding, model cross-validation, etc. This way you can simply point SAP InfiniteInsight to your bucket of data, define which variable to predict and ask the tool to work its magic. All you need to do then is interpret the results (see this blog post for how to interpret a model based on an example from HR).

 

 

Step 4: Apply the model

Great – now you have a working model! Next you want to predict new stuff with your model – usually this “stuff” sits somewhere in a database. SAP InfiniteInsight can either directly apply the model to new data (e.g. data that sits somewhere in a table or a flat file) or it can export the model to a database to allow real-time scoring. The first option is more for ad-hoc scoring or further model validation purposes while the second option can be used to continuously score new data as it comes into the database – this way one could include the scored results in some other application or make the information available to other users. However, in the case of in-database scoring you will probably need some involvement from your IT department.

 

 

Step 5: Execute on your insights

One of the most important questions of any statistical analysis is: What do you do with the results? How can you reap the benefits of "knowing the future"? Having an idea about what is likely to happen is not enough – your organization now needs to adapt its behavior to either avoid the unpleasant outcomes or gain the positive ones as predicted by the analysis. How this can be done depends heavily on your organization and the analysis context – possible next steps include

  • making the results/model available to a larger audience (e.g. HR Business Partners, marketing managers, etc.) by exporting it to a database to enable real-time application of the model,
  • including the scoring algorithm in a business application (e.g. an SAP system like SAP CRM),
  • developing a one-time action plan based on the results, or
  • designing a larger process to use the analysis results in each cycle of the business process to which it belongs.

 

Remember to include those employees who are crucial for a successful execution (usually your business end-users) early in the process and make sure they understand the results and how to leverage the insights. To be accepted, your analysis must be concise, clear, and trustworthy. Try to understand where your stakeholders (e.g. managers, business users, etc.) are coming from and how to communicate the results of the analysis effectively in their business language. A great analysis with great predictive power is only half the battle – whether your business will be able to profit from it will depend on your organization's ability to close the loop to its operations.

 

 

Conclusion

At this point you may feel slightly overwhelmed by the different aspects that play a role when setting up a predictive analytics initiative. It is true – these things can get really complex, but with SAP InfiniteInsight they become much simpler than with traditional tools thanks to the high level of automation. However, to get started quickly and get a feeling for the technology you don't need to boil the ocean – you can easily take data that is already available to you and see what kind of relationships you can uncover (a trial for SAP InfiniteInsight is available here). You can use this blog post to see an example of how SAP InfiniteInsight can be used with HR data, but the example and the steps described translate well to other business areas. Please feel free to leave any questions or comments!

Many HR departments are looking at predictive analytics as a hot new approach to improve their decision making and offer exciting new services to their business. Luckily, with SAP InfiniteInsight you don't have to be a Data Scientist to find the valuable insights hidden in your data or build powerful predictive models. In combination, SuccessFactors Workforce Analytics provides clean, validated information, bringing together disparate data from multiple systems into one place to enable decision making. Let's look at a concrete example of how you could use this combination to better understand your workforce and make predictions in areas that really matter to your business.

 

 

The Scenario

Meet John – he's an HR analyst working for a large insurance company, responsible for supporting line of business managers with workforce insights. He's been monitoring a concerning trend over the last year regarding the turnover of sales managers in the company's regional offices – his turnover reports in Workforce Analytics have shown significant deviations from the tool's industry benchmarks. Today, he has a call with Amelia, the global head of sales, to talk about headcount planning. John takes the opportunity to inform Amelia about his findings, only to learn that Amelia was made aware of this phenomenon a few weeks ago by a few of her direct reports: "You know, John – I'm fine with people leaving, a bit of turnover is healthy and keeps our business competitive, but what I've been hearing is that we tend to lose the wrong people, namely mid-level sales managers with a great performance record. If an experienced sales employee leaves, we take an immediate hit to our numbers, so we naturally try very hard to keep them. Our salary is more than competitive and we offer great benefits, so I have trouble imagining what could be driving this trend. Can you please investigate and let me know what I could do to reverse this development?"

 

 

The Data

John discusses his suspicions with some of the other analysts, who have observed similar trends in other lines of business. Some of his colleagues hint that a lack of promotion or a general increase in the readiness to change jobs might influence employees' propensity to leave. So John decides to extend his analysis beyond sales and include other business functions as well. He prepares a dataset with all the employees in his company as of the end of the last fiscal year (09/2013) and flags employees who left the company voluntarily within the following 12 months (until 09/2014) as the basis for his analysis. The dataset also contains a range of variables to assess their influence on turnover, such as previous roles, demographics or performance. The 12-month tracking period will allow John to anticipate employees at risk with sufficient lead time to give a manager the opportunity to react if required. Even though John already has some rough hypotheses about what could drive turnover based on his reports in Workforce Analytics, he wants to keep the analysis broad to capture unexpected relationships as well.
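As a hedged illustration of this preparation step (in practice John would build this in Workforce Analytics or with help from IT), the target flag could be derived with a query along these lines; all table and column names are invented:

    -- Flag voluntary leavers within 12 months of the 09/2013 snapshot.
    SELECT e.employee_id,
           e.age, e.tenure, e.functional_area,   -- ...plus the other attribute columns
           CASE WHEN t.termination_type = 'VOLUNTARY'
                 AND t.termination_date BETWEEN '2013-10-01' AND '2014-09-30'
                THEN 'Yes' ELSE 'No'
           END AS will_leave_within_12_months
    FROM employees_snapshot_201309 e
    LEFT JOIN terminations t ON t.employee_id = e.employee_id;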

 

 

The Analysis

John starts up SAP InfiniteInsight and decides to build a classification model to separate the employees in his dataset into those who would leave within the next 12 months and those who would still be with the company.

01-Landing_Screen.png

John connects to the SuccessFactors Workforce Analytics database and selects his dataset as a data source:

02-Select_Dataset - WFA.png

He clicks "Next" and then instructs SAP InfiniteInsight to analyze the structure of his dataset by clicking the "Analyze" button.

03-Analyze_Data_Structure.png

John is happy with the suggested structure of the dataset – SAP InfiniteInsight has recognized all the fields in his dataset correctly, and John doesn't need to make any changes. He clicks "Next" to progress to the model definition screen:

04-Define_Model.png

John can use all the variables in his dataset except for the Employee ID: since this field uniquely identifies each employee, the model would simply memorize the outcome instead of learning generalizable patterns. Therefore he excludes Employee ID from the model definition. As the target variable, John uses the "Will leave within 12 months" flag from his dataset. This flag contains "Yes" for all employees who left within 12 months and "No" for those who are still with the company. The analyst clicks "Next" to review the definition before executing the model generation:

05-Review.png

Since John is no Data Scientist and doesn't want to deal with manually optimizing the models, he uses SAP InfiniteInsight's "Auto-selection" feature: when "Enable Auto-selection" is switched on (the default), SAP InfiniteInsight will generate multiple models with different combinations of the explanatory variables that John selected in the previous screen. This way the tool optimizes the resulting model with regard to predictive power and model robustness (i.e. generalizability to unknown data). Simply put: with this feature John gets the best model without having to deal with the details of the statistical estimation process. He now clicks "Generate" to start the model estimation.

 

 

The Results

Eight seconds later, SAP InfiniteInsight presents John with the results of the model training:

06-Model_Overview.png

John reviews the results: His dataset had 19,115 records and 22 dimensions were selected for analysis. 9.02% of all employees inside the historical dataset (snapshot of 09/2013) left the company voluntarily between 10/2013 and 09/2014, i.e. within 12 months of the snapshot (=his target population), while 90.98% of employees were still employed. These descriptive results are in line with his turnover reports from Workforce Analytics.
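These base-rate figures are easy to cross-check against the raw dataset; here is a sketch with an invented table name, using the flag column defined earlier:

    -- Share of leavers vs. stayers in the historical snapshot.
    SELECT will_leave_within_12_months AS flag,
           COUNT(*) AS employees,
           ROUND(100.0 * COUNT(*) / SUM(COUNT(*)) OVER (), 2) AS pct
    FROM employee_dataset
    GROUP BY will_leave_within_12_months;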

 

John now looks at the model performance (highlighted in red) and sees that the best model SAP InfiniteInsight has chosen has very good predictive power (KI = 0.8368, on a scale from 0 to 1, with 1 being a perfect model) as well as extremely high robustness (Prediction Confidence: KR = 0.9870, also on a scale from 0 to 1). Also, of the 22 variables John had originally selected, the best model needs only 16: the remaining six variables didn't offer enough value and have therefore been automatically discarded. Based on the model's KI and KR values, John concludes that the model not only performs very well on his dataset – it can also be applied to new data without losing its predictive power. He is very happy with the results and clicks "Next" to progress to the detailed model debriefing.

07-Select_Debriefing_1.png

John decides to look at the model’s gain chart to understand how much value his model offers for classifying flight risk employees compared to picking employees at random (i.e. not using any model at all). So he selects “Model Graphs”…

08-Model_Graphs.png

The graph compares the effectiveness of John's model (blue line) at identifying flight risk employees with picking employees at random (red line) as well as with having perfect knowledge of who would be leaving (green line). Since the model's gain (blue line) is very close to the perfect model (green line), John concludes that there is probably very little that could be done to further improve the model (for more information on how to read gain charts see here). The analyst decides it's worth looking at the individual model components to understand which variables drive employee turnover. He clicks on "Previous" and selects "Contribution by Variables" on the "Using the Model" screen.

09-Variable_Contributions.png

John looks at the chart and can see that the top three variables contributing to voluntary turnover are “JobLevelChangeType”, “Current Functional Area” and “Change in Performance Rating”. He decides to look at them in more detail by double-clicking on the bar representing each variable.

10-JobLevelChangeType.png

The most important variable is "JobLevelChangeType", which describes how an employee got into his or her current position: the higher the bar, the greater the likelihood of leaving within the next 12 months. John sees directly that being an external hire or having been demoted contributes significantly to turnover. He isn't surprised to see demotion as a strong driver, since his company had begun using this approach only three years earlier to make the organization more permeable in both directions, and it has met with some resistance from employees. Based on the data, it seems that having been demoted drastically reduces employee retention.

 

Also, external hires seem more inclined to leave the company than to look for better opportunities within it. John makes a note about this – he wants to discuss it with Amelia, since he currently doesn't see why external hires would behave this way.

 

Next, John looks at “Current Functional Area”:

11-Functional_Area.png

John immediately sees his suspicions confirmed: working in sales contributed significantly to employee turnover – and by a wide margin! He continues to the third variable, "Change in Performance Rating":

12-Change_in_Performance.png

The pattern John had observed in the first two variables continues – seeing one's performance level decrease drove employees away, while improving oneself helped the company retain employees. The company had introduced a stack ranking system in which performance levels were always evaluated relative to an employee's peers, to encourage growth and competition – especially in the sales department. However, as a consequence many employees see their performance rating decrease (12.8% of employees experienced this during the period) while there may not necessarily be anything wrong with an employee's absolute performance: a previously high-performing employee may see his or her performance rating decrease while delivering the same results, simply because he or she is part of a high-performing team where some of the other team members had a better year. The results of the model hint at an unintended side effect of this system – instead of putting up with decreasing performance ratings and training harder, the company's employees tend to quit their jobs and try their luck elsewhere. John finds this interesting and plans to discuss it with Amelia to understand whether these effects are welcome in her department.

 

John looks at the remaining 13 variables to understand the other drivers better. He observes a strong influence of tenure on turnover (especially among mid-level employees with tenure between 5 and 9 years) as well as of not having had a promotion within the last three years. There also seem to be differences across countries, regions and demographic variables such as age or gender. The patterns John sees in the model paint the picture that the company indeed has a problem keeping experienced employees, especially in the sales department – and the culprit seems to be the new stack ranking performance evaluation scheme John's company implemented three years ago in an attempt to foster a more competitive and performance-oriented company culture. This is supported by the data from the countries – the few countries where the stack ranking system hasn't been implemented yet have significantly lower turnover. The story that emerges is one of an experienced, well-performing employee who is confronted with the new performance evaluation scheme, sees his or her performance ratings drop while pressure rises, and then decides to leave.

 

John assembles the information into a presentation for his HR top management to address the topic. After a follow-up discussion with Amelia, who confirmed his conclusions, he is convinced that the stack ranking system is not suited to the volatile sales business and acts as a driver of turnover. In preparation for the meeting, John decides to apply his model to current data to identify those employees in the sales department who are currently at risk of leaving.

 

The Prediction

John refreshes his dataset with the most current data. Using the model's confusion matrix, John chooses a high sensitivity level for predicting potential leavers. The confusion matrix compares the model's performance in classifying employees into leavers and non-leavers (= "predicted yes" / "predicted no") against the actual, historical data (= "true yes" / "true no"). This way John can understand how well the model performs at classifying individual employees into leavers and non-leavers – every model makes mistakes, but good models make fewer mistakes than bad models, and the confusion matrix tells John which categories the model confuses with one another compared to the actual outcomes (hence the name "confusion matrix" – more info here).

13-Confusion_Matrix.png
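For readers who want to check such figures outside the tool: given a table of historical predictions next to actual outcomes, the confusion-matrix counts and the sensitivity (the share of actual leavers the model catches) can be computed with queries along these lines – a sketch with invented table and column names:

    -- Confusion matrix: count every predicted/actual combination.
    SELECT predicted_flag, actual_flag, COUNT(*) AS cnt
    FROM scored_history
    GROUP BY predicted_flag, actual_flag;

    -- Sensitivity = true positives / all actual leavers.
    SELECT 1.0 * SUM(CASE WHEN predicted_flag = 'Yes' AND actual_flag = 'Yes' THEN 1 ELSE 0 END)
               / SUM(CASE WHEN actual_flag = 'Yes' THEN 1 ELSE 0 END) AS sensitivity
    FROM scored_history;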

Using this model on the list of sales reps should give John a list of employees of which statistically 56.72% (the model’s sensitivity score) would actually leave the company within the next 12 months. John applies the model on his new dataset:

14-Apply_Model.png

After applying the model, John looks at the resulting list: out of 2,120 employees, his model has identified 473 employees at risk, of which he knows about 57% will actually leave within the next year (although he doesn't know exactly who will be leaving). Since some of these employees perform better than others and are therefore more important to retain, John filters the list of flight risk employees to only include experienced, well-performing sales reps and ends up with a shortlist of 215 employees. From these employees' sales data in Workforce Analytics he calculates that losing 57% of them could cost the company up to $60M in lost sales. Also, at estimated recruiting and training costs of $150,000+ per new sales manager, this analysis could save the company up to 215 × 56.72% × $150,000 ≈ $18.3M in replacement costs on top of the $60M in lost sales – about $78.3M in total.

 

 

John discusses the list of 215 employees with Amelia, and they decide to go to the HR Leadership Team meeting together to address the urgency of finding appropriate measures to retain these employees. Amelia and the HR Leadership Team are very impressed with John's work and, faced with the huge impact of doing nothing, decide to free up budget for appropriate retention measures while initiating a discussion about whether to get rid of the stack ranking evaluation system to reverse the trend…

 

 

 

...and how are YOUR employees?

Employee retention is an important topic with a big impact on a company's bottom line. Seeing how simple it is to use SAP InfiniteInsight, maybe you'd like to try a similar analysis yourself? A trial version of SAP InfiniteInsight is available here:

 

http://global.sap.com/campaign/na/usa/CRM-XU14-INT-PREDLP/index.html?url_id=banner-no-homepage-analytics-predictive-free-trial-june14r2

 

Have any other great ideas around using predictive analytics with HR data? Feel free to post your ideas or questions in the comments!

In this blog I have tried to consolidate all the information regarding SAP Predictive Analysis under one umbrella. The main aim is to bring together the information relevant for SAP Predictive Analysis – from system setup to executing predictive algorithms – even for beginners. I have also tried to pull in information from other blogs as well.

 

SAP Predictive Analysis falls under two main categories: the Predictive Analysis Library and SAP Infinite Insight.


We can make use of the Predictive Analysis Library (PAL) in two main ways:

  • Using HANA PAL libraries directly from HANA studio or
  • Using SAP Lumira Predictive Analysis Tool

 

HANA PAL

 

This is where it started. Once you get access to a HANA system (I hope you already have HANA Studio installed; if not, please install HANA Studio), you cannot directly start working with PAL algorithms – there are certain prerequisites to check first, to verify whether the HANA system is capable of executing PAL algorithms.

 

PAL libraries are available from SAP HANA SPS06 onwards, but you can always go for the latest version if it is available. With every upgrade, the HANA team has brought in a lot of updates and features; HANA SPS08 has around 50+ PAL algorithms available. Basically, PAL defines functions that can be called from within SQLScript procedures to perform analytic algorithms.

 

One can check whether the PAL libraries are successfully installed in your system by executing the following SQL statements in the SQL console.

 

SELECT * FROM "SYS"."AFL_AREAS" WHERE AREA_NAME = 'AFLPAL';

SELECT * FROM "SYS"."AFL_PACKAGES" WHERE AREA_NAME = 'AFLPAL';

SELECT * FROM "SYS"."AFL_FUNCTIONS" WHERE AREA_NAME = 'AFLPAL';

 

You will not see any results if the PAL libraries are not installed on the HANA system you are working on. Contact your system administrator if the libraries are not installed – or, if you have administrator access yourself, you can follow the steps mentioned in this blog: PAL Libraries setting up on HANA System.

 

Once this is done, you need to grant your user the privileges for executing PAL library functions. This can be done by executing the following statements:

 

-- Replace I068235 with the user that will call PAL functions:
GRANT EXECUTE ON system.afl_wrapper_generator TO I068235;

GRANT EXECUTE ON system.afl_wrapper_eraser TO I068235;

 

Here I have used my own user name, but note that you cannot grant this privilege while logged in as that same user – privileges for your own user always have to be granted from a different (administrative) user.


Once this is also done you are good to go.

 

You can check the SAP PAL documentation for a detailed description of the PAL algorithms. All the PAL algorithms are explained with use cases in this document.

 

This link always refers to the latest document on the SAP HANA Predictive Analysis Library and will include the features of the latest productive version of HANA.

 

You can see examples of all the algorithms in the above-mentioned document; they are very well explained. The only thing is that you have to select the best possible algorithm based on your use case and scenario. The PAL libraries/algorithms are divided into 9 data mining categories. One frequently used category is Time Series algorithms: if you want to forecast new values, these are the best algorithms available. There are five different algorithms in the Time Series category.

 

E.g.: Double exponential smoothing.

 

You can watch the video on Double Exponential Time Series to get a better understanding of time series algorithms. This video clearly explains the steps you have to follow when working with PAL time series algorithms. Similarly, you can see the videos for the other time series algorithms as well.
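To give a feel for what those steps look like in SQL, here is a condensed sketch of the classic PAL call pattern for double exponential smoothing (PAL function DOUBLESMOOTH), loosely following the SPS08-era documentation. All table, type, and procedure names here are invented and the parameter set is trimmed to the basics, so treat it as an outline rather than a verified script:

    -- 1. Describe the procedure signature and generate a callable wrapper.
    CREATE TYPE PAL_DESM_DATA_T AS TABLE ("ID" INTEGER, "RAWDATA" DOUBLE);
    CREATE TYPE PAL_CONTROL_T AS TABLE ("NAME" VARCHAR(100), "INTARGS" INTEGER, "DOUBLEARGS" DOUBLE, "STRINGARGS" VARCHAR(100));
    CREATE TYPE PAL_DESM_RESULT_T AS TABLE ("ID" INTEGER, "VALUE" DOUBLE);

    CREATE COLUMN TABLE PAL_DESM_SIGNATURE ("ID" INTEGER, "TYPENAME" VARCHAR(100), "DIRECTION" VARCHAR(100));
    INSERT INTO PAL_DESM_SIGNATURE VALUES (1, 'MYSCHEMA.PAL_DESM_DATA_T', 'in');
    INSERT INTO PAL_DESM_SIGNATURE VALUES (2, 'MYSCHEMA.PAL_CONTROL_T', 'in');
    INSERT INTO PAL_DESM_SIGNATURE VALUES (3, 'MYSCHEMA.PAL_DESM_RESULT_T', 'out');
    CALL SYSTEM.AFL_WRAPPER_GENERATOR('PAL_DESM', 'AFLPAL', 'DOUBLESMOOTH', PAL_DESM_SIGNATURE);

    -- 2. Provide the historical series (numbered 1..n) and the parameters.
    CREATE COLUMN TABLE PAL_DESM_DATA ("ID" INTEGER, "RAWDATA" DOUBLE);
    -- ...INSERT your historical values here...
    CREATE COLUMN TABLE PAL_CONTROL ("NAME" VARCHAR(100), "INTARGS" INTEGER, "DOUBLEARGS" DOUBLE, "STRINGARGS" VARCHAR(100));
    INSERT INTO PAL_CONTROL VALUES ('ALPHA', NULL, 0.5, NULL);        -- smoothing factor for the level
    INSERT INTO PAL_CONTROL VALUES ('BETA', NULL, 0.3, NULL);         -- smoothing factor for the trend
    INSERT INTO PAL_CONTROL VALUES ('FORECAST_NUM', 6, NULL, NULL);   -- number of periods to forecast

    -- 3. Run the algorithm and read the forecast.
    CREATE COLUMN TABLE PAL_DESM_RESULT ("ID" INTEGER, "VALUE" DOUBLE);
    CALL PAL_DESM(PAL_DESM_DATA, PAL_CONTROL, PAL_DESM_RESULT) WITH OVERVIEW;
    SELECT * FROM PAL_DESM_RESULT;

The signature and result tables shown here are exactly the overhead that the SAP Predictive Analysis tool hides from you, as described further below.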

SAP Predictive Analysis and SAP Lumira

 

To avoid any confusion: the SAP Predictive Analysis tool is an altogether different installation from SAP Lumira. If you have already installed SAP Lumira, you will have to uninstall it to install SAP Predictive Analysis.

 

You can download SAP predictive analysis from Service Market Place.

 

You need special privileges to download any software from the Service Market Place, which most people don't normally have. You can ask for permission from the same page; the request will go to your direct reporting manager for approval.

 

Once you install the tool, it runs as a 30-day trial; the license expires after 30 days.

 

You can watch the video on SAP Predictive Analysis tool setup to get an idea of how to install and set up the tool.

 

The video also covers connecting to a HANA system from the Predictive Analysis tool. When connecting to a HANA system, give the SAP HANA server as: lddb<system ID>.wdf.sap.corp

 

Once it is connected you can directly pull data from the tables as mentioned in the video.

 

If you have already tried out Double Exponential Time series from HANA studio, the next steps will be easy.

 

You can drag and drop the algorithms you want into the Predict tab. The screen will now look like this:

 

PAL Algorithms.png

Once this is done, you can change the properties of the algorithm by clicking the settings button on the selected icon. Then select "Configure Settings", and a screen will appear where you have to enter all the mandatory values. See the screenshot below for an example.

 

PAL Tool Properties.png

 

Once the configuration is done, run the algorithm from the same screen. You will see the results in a table, and if you go to the Trend chart you will be able to see the predicted values in a graph like the screen below.

 

Double Smooth.png

 

Working with this tool is that easy, and once we get the results from the algorithm we also have the option to write the data back to the HANA DB.

 

Most of the PAL algorithms available in HANA systems are available in the SAP Predictive Analysis tool, except for a few. Selecting a particular algorithm is as easy as drag and drop. You don't even have the additional overhead of creating signature tables for calling a PAL algorithm. (You will come across signature tables if you call a PAL algorithm from HANA Studio, where we even have to create a result table to store the result data once the algorithm executes successfully – see the sketch in the HANA PAL section above.) In this tool, everything can be maintained as properties of whatever algorithm you have selected. Once you execute the algorithm, the results can be displayed directly on a graph.

 

SAP Infinite Insight

 

SAP bought KXEN mainly for its automated predictive analysis capabilities, and with this acquisition SAP has renamed the software to SAP Infinite Insight.

 

Infinite Insight can be downloaded from Service Market Place.

 

There are different versions available in the Service Market Place. The latest version occupies 2.5 GB of space, but we don't have to install the entire setup: to make it easier, one can download the object 'IIWS7000_0-80000274.EXE'. You can give this as the search term and download the file. The image below shows how to search so that the .exe file comes up in the search results.

 

Infiniteinsight.png

 

Once you install SAP Infinite Insight, you can directly start working with it. When you open the software you will see a screen like this:

 

Infinite Inisght Home Screen.png

 

SAP Infinite Insight is a vast topic and there are lots of features associated with it.

 

The SAP Infinite Insight help on SDN will give you a fair idea of the tool and the features it offers.

 

If you have any doubts about setting up the SAP Infinite Insight tool or connecting to a particular DBMS, you can go to the SAP Infinite Insight Help Portal.

 

It gives an in-depth understanding of each and every topic, and all the features like Explorer, Modeler, Social, Recommendation and Toolkit are explained in detail in separate documents.

 

There are already some interesting blogs written about Explorer and Modeler; you can read those as well.

 

Since it is very difficult to cover all the features in a single blog, I will try to write another blog exclusively about SAP Infinite Insight, considering one use case that covers the end-to-end functionality.

 

Feedback is welcome!

Hi everyone,

 

At long last, we now have a customer-facing website (Ideas Place) dedicated to Predictive Analytics & Infinite Insight!!

 

Predictive Analytics: Home

 

Please use it to suggest product enhancements to our Advanced Analytics line.

 

Our Product Management team is looking forward to your suggestions!

 

Many thanks to Marc DANIAU  for making this happen.

 

Kind regards,

H

Revisiting the Technical Content in BW Administration Cockpit with SAP Predictive Analysis


The following blog post demonstrates how to use the technical content of SAP BW as the data basis for a forecast model in SAP Predictive Analysis. The aim is to show a smooth and straightforward process, avoiding additional modelling outside of BW as much as possible. In the described use case, the Database Volume Statistics[1] have been chosen as an example.

 



The official SAP Help summarizes the Technical Content in BW Administration Cockpit as follows: “The technical BI Content contains objects for evaluating the runtime data and status data of BW objects and BW activities. This content is the basis for the BW Administration Cockpit, which supports BW administrators in monitoring statuses and optimizing performance.”[2]

 

The Technical Content with its pre-delivered web reporting might look a bit old-fashioned; nevertheless, the variety, quality, and quantity of data that is “generated” at any time in the system is very useful and important for further analysis. The data has a strong focus on performance (e.g. query runtimes, loading times), but other system-related data like volume statistics are available as well.

 


 

BW on HANA and SAP Predictive Analysis[3] together extend the possibilities of how to see the data and what (potentially more) to do with it.[4]

Technically there are simply the following 3 steps to follow[5]:

  1. Expose cube information model to Hana (SAP BW)
  2. Adjust data types to PA-specific format (Hana Studio)
  3. Create forecast model (SAP PA Studio)

 

The Database Volume Statistics in the technical content are designed with a simple data model consisting of just one cube with some characteristics (day, week, month, DB object, object type, DB table etc.) and key figures (DB size in MB, number of records etc.). Following the above steps with this set of data and choosing a suitable algorithm results in the bar chart shown below, integrated with forecast figures for the past and some months into the future.

 

The blue bars represent the actual database size by month. The green line represents the calculated figures of the forecast model (in this case double exponential smoothing) for the past 20 months and 10 months into the future.

1.png

 


Below are some technical details for each of the mentioned steps:

 

(1) Expose information model of Infocube 0TCT_C25 to Hana Studio[6]

  • Edit the Infocube in BW and set the flag for “External SAP HANA view”:

2.png

 

Immediately the information model is generated as an Analytic View and can be viewed in Hana Studio:

  • Content -> system-local -> bw -> bw2hana -> 0 -> Analytic Views -> TCT_C25

3.png

 


(2) Adjust data types to PA-specific format (Hana Studio)

  • The generated Analytic View of Infocube 0TCT_C25 looks like below:

4.png

SAP Predictive Analysis currently needs a specific time-ID column, and the key figures must be of data type DOUBLE. The new Calculation View CV_TCT_C25_1 is therefore created based on the generated Analytic View TCT_C25:

  • Column [Month] (PA_TIME_ID_MONTH) = <unique sequential number for each month>[7]
  • Column [Database Size] (PA_TCTDBSIZE) = DOUBLE(0TCTDBSIZE)
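Expressed as plain SQL (the actual implementation uses calculated columns in the Calculation View; footnote [7] shows the expression syntax), the two derived columns correspond roughly to the following hypothetical query – the view path is illustrative:

    SELECT (TO_INTEGER("0CALYEAR") - 2013) * 12
           + TO_INTEGER(RIGHT("0CALMONTH", 2)) - 3 AS PA_TIME_ID_MONTH,  -- sequential month number starting at 1
           TO_DOUBLE("0TCTDBSIZE") AS PA_TCTDBSIZE                       -- key figure cast to DOUBLE for PA
    FROM "_SYS_BIC"."system-local.bw.bw2hana/TCT_C25";                   -- generated analytic view (path may differ)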

5.png

 


(3) Create forecast model (SAP PA Studio)

 

Creating a forecast model in SAP Predictive Analysis follows the same standard tasks as for any other data source.

 

  • Select data source i.e. select prepared calculation view including (time) key id column and relevant key figures
  • Select and configure components for the model:
    • Use [Filter] component (if necessary restrict columns and rows like filtering the relevant database object types, time range etc.)
    • Choose an adequate [Algorithm] component; in the following case a double-smoothing algorithm (PAL) has been chosen for forecasting several months into the future

6.png

 

And finally the resulting trend diagram is shown (see above).

 

 

 


[1] Infocube 0TCT_C25

[2] SAP Help Portal -> Technology -> SAP NetWeaver Platform

[3] This post deals with SAP BW on Hana 7.40/SP6 and SAP Predictive Analysis 1.19

[4] The blog post is focusing on the technical aspects to get a forecast model successfully executed. The chosen algorithm might not be statistically appropriate.

[5] Assuming the technical content has been activated in SAP BW

[6] Unfortunately it’s not yet possible to expose the information model of a Multiprovider

[7] Data used is from April 2013 to November 2014. To get a unique ID the following calculation is used (in order to get a sequence starting from 1):

    (int("0CALYEAR") - 2013)*12 + int(rightstr("0CALMONTH",2)) - 3

SAP uses Advanced Analytics expertise to support the fight against Ebola

 

A team within SAP is developing an analytical application to help combat the spread of Ebola. The current outbreak poses a global health and safety threat, and containing it requires everyone's help.

 

All hands on deck: The outbreak of Ebola in several West African countries, and the threat of it spreading to Europe and the United States, have mobilized hundreds of volunteers around the world to combat its spread. Volunteers have ranged from individual healthcare workers to global companies like SAP, which have joined forces to develop a cutting-edge advanced analytics solution to support the helpers in their challenging task of fighting the disease. Our goal is to provide large health organizations with this application to support their mission. The solution promises to be valuable not only in the field in Africa; it can also be used by state authorities to screen passengers on incoming flights from affected countries.

 

Our plan: We want to make an efficient and fast diagnosis of the disease possible, which is essential for medical personnel to make the right treatment decisions. The developed application will first enable doctors and helpers to gather data on infectious diseases. This information will be subsequently fed into a central database. Based on input from remote doctors and machine learning, the application identifies whether a patient may have been infected with Ebola.

 

Kevin Richards, Head of U.S. Government Relations at SAP, interviewed WHO and US State Department representatives to identify the key challenges that operators in the field are facing. It became clear that one of the biggest influencing factors is the ability to collect patient data when in most cases there is no stable internet connection. Hence, the quality of the collected data will be determined by the robustness of the application's offline capabilities; the data can then be synchronised to an overall data hub as soon as an internet connection becomes available.

2.png

Data collection & Diagnosis: Whenever a doctor or a volunteer thinks someone may be showing signs of an infectious disease, they can open the application and navigate to the “Add Patients” Tab. The doctor can take a picture or make a video of the patient and report their symptoms. This data is then sent to a central database along with the doctor’s geo-location and submission time. The application is a cloud solution which can be accessed easily on any mobile devices. The collected data can be stored on the device and synchronised later, as soon as an internet connection becomes available. 

 

Once the data is uploaded, remote doctors are able to comment on each patient, help with the diagnosis and give treatment recommendations. Meanwhile, SAP's Advanced Analytics solution InfiniteInsight clusters the described symptoms, patient data and the judgments of the doctors in the background of the application. This way it can be determined which reported symptoms are most highly correlated with an Ebola diagnosis. For example, chills, blurred vision, nausea and vomiting, ulcers, severe headache, and unexplained hemorrhage are the symptoms most important in determining whether a particular patient may or may not be infected with Ebola. Upon further analysis of the contributing variables, it becomes clear that ulcers, chills, and blurred vision are the most commonly reported symptoms not associated with an Ebola diagnosis. Conversely, nausea and vomiting, unexplained hemorrhage, and severe headache are associated with the disease. As the algorithm determines which symptoms are significant indicators, the application is able to push a preliminary diagnosis to the helpers even in offline mode, and appropriate treatment can commence without delay. Additionally, the application will allow the tracking of any mutations and subsequent symptom changes of the disease over time and geography.

 

3.png

Forecast: One of the biggest challenges is to understand how the disease will spread during the coming weeks. Hundreds of lives could be saved if we were able to predict in which cities Ebola is going to break out next. With SAP's Advanced Analytics we can provide a tool that will give the necessary insights into the future spread and development of the disease based on the data patterns of the collected incidents. Users will also be able to view an infographic in the app showing the current spread of Ebola and information about appropriate safety measures.

Hi

 

In my previous blog, SAP InfiniteInsight – Explorer, I demonstrated how you can create a data set for further analysis.

 

 

In this blog I will focus on the SAP InfiniteInsight Modeler to create a model on the data set from my previous blog.

 

In the previous blog we prepared data that comes from a garden retailer that has a coffee shop. We prepared the data so that in this blog we can analyze what influences someone visiting the garden shop to most likely have a dessert or cake at the garden retailer's coffee shop.

 

So let's start...

 

From the welcome screen I will select "Create a Classification" under the Modeler section. As you can see different types of models can be created.

1.Modeler.png

Figure 1

 

 

 

I have now selected the data from the explorer. I have selected Analytical Record Set 1.

2.SelectData.png

Figure 2

 

 

Pressing "Next" takes you to the next screen, which will be blank until you press the Analyze button; then Figure 3 will be displayed. At this step we can also view the data if we need to.

3.DataAnalyze.png

Figure 3

 

 

Now we select the target variable we want to analyze – the target variable is who bought cakes or desserts. We also exclude some variables. So here we are asking the model to determine the impact of the other variables on our target variable.

4.TargetVariable.png

Figure 4

 

 

The next screen will then show the summary of the model.

5.Summary.png

Figure 5

 

 

The model generation then starts – this is also known as "training the model".

6.Generate.png

Figure 6

 

 

The results of the model are shown in Figure 7. It is important to know the following values and their meaning.

  • KI - a measure of how powerful the model is at predicting. This is a number that ranges between 0 and 1. The closer the KI is to 1, the more accurate the model is.
  • KR - a measure of robustness, or how well the model generalizes on an independent hold-out sample. KR should ideally be above 0.95.

 

So based on the above, our KI measure is poor, but it will serve our purposes for this blog.

7.Results.png

Figure 7



We can now review the model results by selecting the appropriate options.

8.Display.png

Figure 8


By selecting "Contribution by variable" we can see which aspects influence the scenario: first pets, then children, then the segment, then age, etc.

9.ContributionByVariable.png

Figure 9




We can now take it further and analyze the age variable. Here we see that people aged 18-26 and 48-70 are likely to buy a cake or dessert, while individuals aged 26-48 are less likely to.

10.Age.png

Figure 10.



So this tells us the coffee shop will have better success with cakes and desserts that appeal to people who have pets, have children, and are in the age ranges identified. This will help deliver more precise advertising if needed.


 

I hope the above shows how a predictive model can be created by just clicking away, and how the results can be a valuable tool.



Developers are rarely shy about sharing their views on new tools and technology. I appreciate their passion and healthy scepticism; in fact, I seem to have developed my own slightly cynical perspective. So when I heard we'd added SAP InfiniteInsight (formerly from KXEN) to the SAP OEM offering (it's my job to build OEM marketing content), I quietly wondered how relevant the solution was going to be for our OEM partners.

 

As I gathered solution information, my sceptical attitude soon began to shift to one of pleasant surprise at how 'cool' the functionality was. I knew that predictive analytics was about looking at data and forecasting the likelihood of future events, and yes, that is cool, but that's not what impressed me. My own experience of working with in-house data scientists (dudes with PhDs in statistics and analytics) had shown me that creating a predictive model for optimizing campaign lead follow-up takes weeks, if not months. The process required identifying predictive variables, developing a consistent model for using those variables to score prospects based on how likely they were to buy, and then lots of iterative testing.

 

InfiniteInsight_image.jpg


What I hadn’t expected was SAP InfiniteInsight’s ability to self-learn from historical data… and identify the predictive variables without a data scientist in the room. In fact, the software can continuously relearn and adapt its scoring based on current target audience actions.

 

Next I'm thinking: OK, this would be great value-add for any partner building customer management solutions or operations software, but it must be pretty tricky to integrate… and I was once again pleasantly surprised. SAP InfiniteInsight's core functionality resides in 4 DLLs totalling just 1.5 MB, with comprehensive APIs.


That means our OEM partners can relatively easily embed the technology, point the solution at a historical database, let it figure out the predictive characteristics, and then use those variables to score a net new target individual or a target dataset of many individuals. This can even be done in real time: if someone is surfing my ecommerce site and has selected an item to purchase, I can instantly offer up the next-best three items as suggestions, based on what others have typically bought with the first item.

 

This really gets the brain cells firing in terms of all the potential scenarios where SAP InfiniteInsight might extend existing application value and drive increased customer satisfaction and loyalty. Here are just a few of the scenarios that I found appealing.


For CRM related applications:

  • Optimize direct marketing campaigns to boost response rates
  • Analyze customers’ website touch points to improve their online experience
  • Target customers that have a high propensity to churn with new customized offers
  • Analyze customer purchasing histories to deliver targeted up-sell recommendations

 

For business operations:

  • Predict how market-price volatility will impact production
  • Foresee changes in demand and supply
  • Analyze streams of machine data to build proactive maintenance schedules
  • Forecast customer demand and optimize inventory in real time

 

For finance solutions:

  • Analyze sales transactions to identify unsafe investments
  • Predict patterns of fraud within Big Data
  • Perform credit score analysis in real time

 

And I almost forgot: if you're interested in turbocharging your predictive analytics performance, you can also pair InfiniteInsight with the in-memory computing power of SAP HANA for a real-time experience.


In the end, my mindset had completely reversed from skepticism to optimism, but for those of you who have a skeptical bone in your body, I invite you to do your own investigation. I've included a couple of links to speed the process.

 

SAP InfiniteInsight home page

SAP InfiniteInsight Industry and LOB scenarios

SAP Predictive Analytics OEM eBook

SAP InfiniteInsight Introduction and Overview Blog

 

If you’re interested in learning more about…

  • building predictive models in minutes or hours, not weeks or months
  • integrating automated predictive modeling into your applications
  • increasing your application footprint at existing customers

then please reach out to our OEM team. Many SAP OEM partners are already using SAP InfiniteInsight to differentiate their offerings and open new revenue streams.

 

Get the latest updates on SAP OEM by following us @SAPOEM on Twitter

For more details on SAP OEM partnerships and to learn about SAP OEM platforms and solutions, visit us at www.sap.com/partners/oem

Hi

 

I have not seen much posted regarding InfiniteInsight, so I thought I would take some time to demonstrate parts of this product.

 

InfiniteInsight is a predictive analysis tool SAP gained from the acquisition of the company KXEN.

 

This tool is designed to make the process of using a predictive tool easier, with less reliance on a data scientist. Also, everything is done by just CLICKING AWAY.

 

When you launch the product you will see Figure 1 as your entry point. In this blog I will focus purely on the Explorer part. Explorer is used to get your datasets into a format we can use to build predictive models on.

1. Explorer.png

Figure 1 - InfiniteInsight

 

 

 

So the first step is to create Explorer objects; you will need to select the source of the data. In this scenario we are pulling from HANA.

2. Connect To Data.png

Figure 2 - Create or Explorer Objects


 

You can then create your datasets. In my example I have already created them, all done by clicking and no code. I have created three types of data sets:

  1. Entity
  2. Time Stamped
  3. Analytical Record

3.DataSets.png

Figure 3 - data sets


 

I won't be showing how I created each data set, as there are a few screens that would need to be captured and that would make the blog too long. Here is an example of the entity data – the data that shows the entity that will be analyzed.

4.Entity.png

Figure 4 - entity data



An example of the time stamp data – here we just create time entries.

5.Timestamps.png

Figure 5 - Time Stamp Data

 


For the analytical record, we have basically taken the time stamp data and joined it with the entity data; when creating this we can choose which fields to keep or exclude.

6.AnalyticalRecord.png

Figure 6 - Analytical Record


 

You can create different versions of these data set types; here I have a second analytical record set. It is the same as the first one, except that we have added some calculated columns: a sum, a count and a count distinct. Once again, created just with clicks and no code.

7.AnalyticalRecord2.png

Figure 7 - Analytical Record 2



I have also created a third analytical record with extra columns that are pivoted, so we can use them to analyse even further.



As seen above, the Explorer part allows you to get different sets of data and combine them, do counts, pivots and more. Once the data is arranged in the desired format, you can move on to the next section to predict data on it.

 

I will try to cover that in another blog.

First, some background about the issue: InfiniteInsight (II) does not let you use your analytical views, calculated views and so on in the user interface.


In the background, II uses the capabilities of the ODBC driver to get the list of "data spaces" to present to the user, using a standard ODBC function.

Unfortunately, the HANA ODBC driver does not currently include the names of analytical views and calculated views.

 

However, this ODBC driver behavior can easily be bypassed in two ways:
- simply type in the full name of the calculated view (including the catalog name), like "PUBLIC"."foodmart.foodmart::EXPENSES", or
- configure II to use your own custom SQL that lists the items you want to display.

This feature is used in II to restrict the list of tables, for example when your data warehouse has hundreds of schemas.

 

One file needs to be changed, depending on whether you are using the workstation version (KJWizard.cfg) or the client/server version (KxCORBA.cfg), by adding the following content:

 

ODBCStoreSQLMapper.MyDSN.SQLOnCatalog1="  SELECT * FROM (   "

ODBCStoreSQLMapper.MyDSN.SQLOnCatalog2="   SELECT '""' || SCHEMA_NAME || '""', '""' || OBJECT_NAME || '""', OBJECT_TYPE FROM SYS.OBJECTS WHERE OBJECT_TYPE IN ('TABLE', 'VIEW') AND SCHEMA_NAME NOT LIKE '%%SYS%%'   "

ODBCStoreSQLMapper.MyDSN.SQLOnCatalog3="  UNION ALL   "

ODBCStoreSQLMapper.MyDSN.SQLOnCatalog4="   SELECT '""' || SCHEMA_NAME || '""', '""' || VIEW_NAME || '""', VIEW_TYPE FROM SYS.VIEWS WHERE NOT EXISTS (  "

ODBCStoreSQLMapper.MyDSN.SQLOnCatalog5="         SELECT 1 FROM _SYS_BI.BIMC_VARIABLE_ASSIGNMENT A JOIN _SYS_BI.BIMC_VARIABLE v ON a.CATALOG_NAME = v.CATALOG_NAME AND a.CUBE_NAME = v.CUBE_NAME AND a.VARIABLE_NAME = v.VARIABLE_NAME  "

ODBCStoreSQLMapper.MyDSN.SQLOnCatalog6="         WHERE SCHEMA_NAME = a.CATALOG_NAME AND VIEW_NAME = a.CUBE_NAME AND ( MANDATORY = 1 OR MODEL_ELEMENT_TYPE IN ('Measure', 'Hierarchy', 'Script') )  "

ODBCStoreSQLMapper.MyDSN.SQLOnCatalog7="   ) AND IS_VALID= 'TRUE' AND VIEW_TYPE IN ('CALC', 'JOIN')   "

ODBCStoreSQLMapper.MyDSN.SQLOnCatalog8="  ) order by 1,2   "

 

The KxCORBA.cfg file (used in a client/server installation) is located in the InfiniteInsight server installation directory:

     C:\Program Files\SAP InfiniteInsight\InfiniteInsightVx.y.z\EXE\Servers\CORBA

where x.y.z is the version you have installed.

 

If you are using a standalone (a.k.a. Workstation) installation, then the file to modify is KJWizard.cfg, which is located in:

     C:\Program Files\SAP InfiniteInsight\InfiniteInsightVx.y.z\EXE\Clients\KJWizardJNI

where x.y.z is the version you have installed.

 

In this example I only include tables, views, and calc and join views that have no mandatory variables and no 'Measure', 'Hierarchy', or 'Script' variables at all.

 

You may need to adjust this configuration SQL if you want to list Smart Data Access objects.

 

Note that we are changing the behavior for one ODBC DSN (MyDSN), so this value might need to be adjusted in your environment.

You can also replace it with a star (*); the configuration will then be applied to all ODBC DSNs, which may not work with other databases.

 

Some functionality in II may not yet work properly despite this workaround.


For example:

  • data manipulations require the configuration file change
  • view placeholders, and view attributes in general, are not properly supported
  • some types of aggregates are not "selectable by name", which means that if used in a select statement in HANA Studio they will not be returned (select * vs. select cols).

 

Hope this will save you some time

Hello !

This is my first post to SCN, so please be generous. :)

 

I have been working with HANA PAL for 4 months. My domain is time series prediction, so I am using the *ESM function collection, especially TESM.

When I build my forecast models, I always want to visualise the results – that gives me a first sense of whether I'm on the right track. And you know how it is – two charts are much less "readable" than one:

 

ScreenShot041.jpg

vs

 

ScreenShot042.jpg

 

When you look at the second one, you see very clearly that your forecast is not really good, while looking at the first two you might think "Hmm?..."

 

 

So, what we want is to merge the input PAL table/view (let it be fact) and the output one (let it be prediction).

 

 

There would be no problem here if you had your data in the appropriate structure by default:

ScreenShot040.jpg

 

But usually I don't.

My raw data usually comes as a PSEUDO_TIMESTAMP | DOUBLE table,

where PSEUDO_TIMESTAMP may be in a format such as mm-yyyy, ww-yyyy, yyyy.mm, yyyy.ww and so on...

 

So, the question is: how do we sort it in an appropriate way and then number the rows?

 

  1. Sorting
    My solution is to transform any input pseudo_timestamp format to YYYY.[ MM | WW | DD ] with the help of the DateTime and String functions (sections 1.7.2 and 1.7.5 in the SAP HANA SQL and System Views Reference, respectively); see the sketch after this list.
    Once you have done that, the ORDER BY clause will work just fine.
  2. Numbering
    First I tried the undocumented HANA technical column "$row_id$" - but it works badly.
    The clean and fast solution is to run the following code before the PAL call:

    -- assuming the fact table has two columns, "timestamp" and "values",
    -- and "timestamp" is the primary key
    alter table fact add ("id" bigint);

    -- drop any previous numbering sequence (ignore the error if it does not exist yet)
    drop sequence sequence1;
    create sequence sequence1 start with 1 increment by 1;

    -- rewrite every row with its sequence number; upsert matches rows on the primary key
    upsert fact select T1."timestamp", T1."values", sequence1.nextval from fact T1;
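

As a sketch of the sorting transformation from item 1 (the table and column names here are hypothetical, and I assume an mm-yyyy input), plain String functions are enough to rebuild a sortable yyyy.mm string:

    -- raw_fact("pseudo_ts" varchar, "values" double), with "pseudo_ts" like '03-2014'
    select substr("pseudo_ts", 4, 4) || '.' || substr("pseudo_ts", 1, 2) as "sortable_ts",
           "values"
    from raw_fact
    order by "sortable_ts";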


After that you can easily create a table/view with {"id","value"} to feed to ESM, and then left join it with the prediction results

ScreenShot043.jpg

on fact.ID = prediction.ID
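

Put together, the merged result for the single chart could look like this (a minimal sketch; I am assuming the PAL output table is called prediction, with columns "id" and "value"):

    -- one row per period, with actuals and forecast side by side
    create view forecast_vs_fact as
    select f."id", f."timestamp", f."values" as "fact", p."value" as "prediction"
    from fact f
    left join prediction p on f."id" = p."id";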


Then you visualize the final table/view of your prediction in HANA Studio -> Data Preview -> Analysis



Hope that will help you

 

Precise forecasts to all of us

These are some brief notes, with questions, answers, and poll results, from yesterday’s SAP webcast.  The usual disclaimer applies: things in the future are subject to change.

 

Also note I didn't stay for the whole session so I may have missed some points.

 

1fig.png

Figure 1: Source: SAP


The speed of evolution has changed.  As Figure 1 shows, today we face the challenges and inefficiencies of the current analytics landscape, including complexity, speed, and cost (Source: @SAPAnalytics)

2fig.png

Figure 2: Source: SAP

 

SAP wants to democratize advanced analytics and make it easy, fast, and efficient, as the slide shows.

 

They want to make it easy so you don’t need advanced degrees to do this work.

3fig.png

Figure 3: Source: SAP

 

Figure 3 shows you can embed Analytics so the user doesn't know it's underneath

 

Business analysts are the linchpin, and SAP wants to make things easier for them to use, said SAP’s Shekhar Iyer

4fig.png

Figure 4: Source: SAP

 

The above shows an overview of predictive analytics solutions from SAP

5fig.png

Figure 5: Source: SAP

 

Figure 5 shows bringing together lines of business and industries to make things “efficient and effective”

SAP says to consider the new analysis that is possible with predictive analytics & put our creativity to work

 

Question:

Q: What is the biggest stumbling block?

A: Complexity, KXEN – Infinite Insight combines both

6fig.png

Figure 6: Source: SAP

 

Figure 6 shows the attendees’ poll responses.  Most of us aren’t using any predictive analytics solution.

7fig.png

Figure 7: Source: SAP


A customer example is eBay. They saved millions by finding an attribute that contributed to a lack of pipeline (Source: @SAPAnalytics)

8fig.png

Figure 8: Source: SAP

 

An analogy was made that InfiniteInsight is the espresso machine & Predictive Analysis is the barista

 

Learn more about InfiniteInsight at ASUG Annual Conference, where the data modeler for the 2012 Obama Presidential Campaign discusses Using Analytics to Help Win the US Presidency

9fig.png

Figure 9: Source: SAP

 

The above shows an overview of the HANA Predictive Analysis Library (PAL)

 

Learn how a customer is using PAL – see Predictive Analytics for Procurement Lead Time Forecasting at Lockheed Martin Space Systems Using SAP HANA, R, and the SAP Predictive Analysis Toolset at ASUG Annual Conference next month.

10fig.png

Figure 10: Source: SAP

 

The above shows an overview of SAP R Integration for predictive analytics

 

Question:

Q: Can you contrast the solutions – R with HANA PAL?

A: The algorithms in HANA PAL are a subset of those in R, optimized to run in HANA

SAP will continue to invest in #HANA PAL and R integration

They continue to invest in PAL, and have added 100 engineers in this area

 

Q: How do you see the algorithms in KXEN?

A: You don’t see algorithms in KXEN/InfiniteInsight – you see functions

What you see in InfiniteInsight are functions, sorted by category, rather than the algorithms themselves

The algorithms in KXEN/InfiniteInsight are proprietary, but they do share details

11fig.png

Figure 11: Source: SAP

 

Attendees said the biggest barrier to adopting predictive analytics is the skills shortage.  Second is cost.

12fig.png

Figure 12: Source: SAP

 

Figure 12 shows the smart vending example of “Smart operations”

 

Asset management is used to keep things cold

 

It also helps personalize the experience

13fig.png

Figure 13: Source: SAP

 

The customer in Figure 13 reduced breakdown time from 4 days to 3 hours in the smart vending example.

14fig.png

Figure 14: Source: SAP

Figure 14 shows a Cox case study.

 

Question and Answer

Q: I’d like to understand the predictive and stochastic capabilities and how the solution handles unstructured data

A: It can address any model data

For unstructured data, when building predictive models you need to structure the data in some way

Use the SAP HANA libraries, Data Services, or InfiniteInsight to structure the data

 

Q: How often do you switch models out?

A: It depends on the business problem and the data

The tool to manage models is InfiniteInsight Factory, which lets you reconstruct the original data set on the fly. Model management is a big piece

15fig.png

Figure 15: Source: SAP

 

Figure 15 asks who is building predictive models in your organization.  It looks like it is mostly the business analyst

16fig.png

Figure 16: Source: SAP

 

Figure 16 is an overview of future direction/roadmap of predictive analytics solutions from SAP. For more details attend ASUG Annual Conference Session Predictive Analysis Roadmap with SAP’s Charles Gadalla.

 

 

If you missed yesterday’s session, you can register for today’s 7:00 PM session http://bit.ly/RMHgEm

 

Other (source: @SAPAnalytics):

 

  • If you are interested in test-driving SAP Predictive, there is a free trial available at http://bit.ly/1sqaowj
  • SAP offers Rapid Deployment Solutions to speed up deployments
  • It can use the HANA smart data access feature; you can use HANA as an overlay to federate data into Predictive Analysis.

 

ASUG Annual Conference

Preview of ASUG Annual Conference 2014: Focus on Analysis Office/OLAP/Predictive

 

Share your Story: Call for Sessions for ASUG at SAP d-code (former TechEd)

 

You are invited to submit a proposal to share your experience and expertise with your colleagues to speak at SAP d-code to be held October 20-24 in Las Vegas.  Others will benefit from your experience while you make a valuable contribution to the profession's field of knowledge.


Follow this link to create a speaker account where you can formally submit your proposal, review important deadlines, and other general information about SAP d-code.  The deadline to submit your abstract is May 25. If you have any questions, please e-mail sapdcodespeaker.info@sap.com


Upcoming ASUG Analytics Webcasts:

 

May 15: Lumira Self Service for Business User

May 21: SAP Lumira Question and Answer Session

June 23: Predictive Analysis Roadmap

September 15: Design Studio and Analysis Scenarios on HANA

This is part 2 of today’s ASUG webcast with SAP's Charles Gadalla.

 

Part 1 is Predictive Analysis - KXEN is not a Radio Station -  ASUG Webcast - Part 1

1fig.png

Figure 1: Source: SAP

 

Figure 1 shows the popularity of R, with a “hockey stick from 2011 and up”

2fig.png

Figure 2: Source: SAP

 

Figure 2 shows an example of editing a custom component with R inside Predictive Analysis.

3fig.png

Figure 3: Source: SAP

 

Figure 3 shows an example of "live editing" of the Custom R component inside Predictive Analysis.

4fig.png

Figure 4: Source: SAP

 

Figure 4 shows upcoming sharing options.

5fig.png

Figure 5: Source: SAP

 

Figure 5 shows building the deployment and solution set, and extending it through the organization

6fig.png

Figure 6: Source: SAP

 

An example of embedding is shown in Figure 6 – no one knows predictive technology is underneath; it is part of the module

7fig.png

Figure 7: Source: SAP

 

Figure 7 shows RDS content and it is “free”

8fig.png

Figure 8: Source: SAP

 

Figure 8 shows Predictive Analysis and KXEN are converging over time (subject to change).

 

Question & Answer

Q: What Predictive Analysis capabilities are available in ECC without HANA?

A: SAP InfiniteInsight EXPLORER

A: Also APO and BW modules; if you don’t use HANA, you can still use Predictive Analysis and KXEN – they are not dependent on HANA.

________________________________________________________________

Q: Quite a few client tools. Is there a guide to know when to use which tool?

A: Yes, there are quite a few client tools.

A: Predictive Analysis & InfiniteInsight are sold together as InfiniteInsight Modeler, aimed at data scientists; Lumira is a visualization tool with 2 algorithms.

________________________________________________________________

Q: Any plans to make SAP Lumira a thin client?

A: SAP Lumira is available in the Cloud cloud.saplumira.com

________________________________________________________________

Q: Have you seen any successful models used in healthcare that predict patient outcomes (micro) or hospital admits (macro)?

A: Health care – sepsis / influenza analysis

A: Yes – sepsis, hospital management, research, etc.

________________________________________________________________

 

Q: Are there any projects / RDS to use HANA to speed up pricing rebuilding?

A: Price optimization is a complicated module – it sits in a sister product line – retail product lines using Hybris / Customer Engagement Intelligence.

_______________________________________________________________

Q: Can the tool extract data from external sources such as websites/partner portals (maybe using RSS or other feeds), and include it in my data assessment/analysis?

A: Yes, typically have intermediary of Hadoop

________________________________________________________________

Q: Which client tools are scheduled to run in 64-bit and in-memory soon?

A: Predictive Analysis and InfiniteInsight are running in 64-bit

________________________________________________________________

Q: With regard to Lumira Server, currently the artifacts look to be persisted on HANA, what are the plans to integrate these into Business Objects Enterprise or is the idea to position Lumira Server as a lightweight content repository?

A: Lumira Server is Lumira on HANA and will integrate with the BI Platform – they will be standardized as one on the BI framework

________________________________________________________________

Q: Pricing question restated – I have pricing programs that must rebuild prices based on commodity market input, and they have difficulty completing overnight.  Are there any projects to apply HANA to this problem?

A: If you look at pricing based on market input, projections, and trends, you can do this with HANA – the PAL library algorithms include Monte Carlo, which would help with simulations

________________________________________________________________

 

Q: It looks like the biggest use cases are currently in market forecasting and customer analysis. Are there any for supply chain?

A: Yes – Demand Signal Management, APO – 150+ use cases and growing.

 

Related:

Join Us at ASUG Annual Conference

 

Upcoming ASUG Webcast next month:

SAP's Charles Gadalla provided this webcast today.

 

1fig.png

 

Figure 1: Source: SAP

On the left of Figure 1, high skill sets are needed to be a data scientist, such as a master’s in statistics.

 

On the right side, you have business users

 

Consumers take output from data scientists and take an action.


In the middle are data analysts/business analysts, who do more than basic reporting – segmentation and forecasting, in a more sophisticated manner

2fig.png

Figure 2: Source: SAP


Data scientists on the far right of Figure 2 are already well served.

 

SAP is interested in the group in the middle, including embedding the analytics inside the workflow

3fig.png

Figure 3: Source: SAP

 

Figure 3 shows a paradox that there is a lot of “big data”.

 

We are using more data today and decisions are made in a much shorter time scale, with a huge increase in speed of algorithms

 

Every business is being asked to make decisions faster with more data

Why should I care?

 

highlights.png

Figure 4: Source: SAP

 

Figure 4 shows that back in December, SAP released a survey, showing competitive "ROI"

sap track record.png

Figure 5: Source: SAP

 

Figure 5 shows Mobilink going through 900 TB of call data records – deriving 6M communities from these calls

 

MONext – decisions on fraudulent transactions in milliseconds

why acquire kxen.png

Figure 6: Source: SAP

 

It was on this slide that Charles said "KXEN doesn't stand for a radio station...it means knowledge extraction engine".  I did not know that.

advanced solution insight to action.png

Figure 7: Source: SAP

 

InfiniteInsight from KXEN provides insight across thousands of fields of data; Predictive Analysis was built inside SAP

 

Charles used as an example: if you drink a diet cola on a Tuesday, that means you had chips on Sunday

 

Another example is to integrate and tell a story, as Predictive is built on Lumira

 

hana analytics portfolio.png

Figure 8: Source: SAP

 

Figure 8 shows data comes in from any of the channels

 

PAL is the implementation on HANA; R is maintained by universities and a consortium, offering popular algorithms to use and reuse, and PAL executes in memory

 

It is based on an open source language

 

Client tools on top left of Figure 8.

 

SAP combined Predictive Analysis with InfiniteInsight in a tool called InfiniteInsight Modeler

 

It also includes line-of-business applications – like Fraud Management, etc.

 

SAP has RDS solutions using Predictive

 

They partner with ESRI, SAS

 

Charles has a special speaker from the Obama campaign presenting on how the campaign used KXEN to win the 2012 US Presidential election.

predictive analytics portfolio on HANA.png

Figure 9: Source: SAP

 

With SAP embedded on HANA, you are not getting the SAS algorithm

 

You can see the two ways to access HANA in Figure 9

predictive and kxen.png

Figure 10: Source: SAP

 

Three options:

1) Client side – PA/KXEN – Java-based and R-based predictive – connects to relational databases and CSV

 

2) Server – InfiniteInsight Explorer – connects to a database (say, Oracle)

a. Factory – model management – how the data looked 1, 2, or 3 months ago

b. Factory scheduling to refresh models

c. InfiniteInsight Social – tries to detect similar/like-minded people

d. Recommendation engine – if you buy brown shoes, the likelihood you will buy a belt

 

3) The third option is HANA – with PAL in memory, connected to R

 

More to come...

 

Related:

ASUG Annual Conference has the following SAP Predictive sessions:

Session ID | Title | Start Date
202 | Predictive Analysis Roadmap | 6/3/2014
203 | Using Analytics to Help Win the US Presidency | 6/3/2014
204 | Predictive Analytics for Procurement Lead Time Forecasting at Lockheed Martin Space Systems Using SAP HANA, R, and the SAP Predictive Analysis Toolset | 6/3/2014

 

Charles is presenting the Predictive Analysis Roadmap and co-presenting "Using Analytics to Help Win the US Presidency".

 

Join us for ASUG Annual Conference – Pre-Conference SAP BusinessObjects BI4.1 with SAP BW on HANA and ERP Hands-on – Everything You Need in One Day, June 2nd

 

Register at: ASUG Preconference Seminars

 

 

Share your Story: Call for Sessions for ASUG at SAP d-code (former TechEd)

Share your knowledge with others and submit a proposal to speak at SAP d-code. Selected proposals will be part of the ASUG and SAP d-code: Partners in Education program, providing attendees with interactive learning experiences with fellow customers.



View the education tracks planned this year.  If selected, you will receive a complimentary registration for the conference and it will give you valuable professional exposure.


Follow this link to create a speaker account where you can formally submit your proposal, review important deadlines, and other general information about SAP d-code.


The deadline to submit your abstract is May 25. If you have any questions, please e-mail sapdcodespeaker.info@sap.com

How well will you do tomorrow? How can we be sure?

 

Algorithmic and biomedical advances are now providing sports coaches, managers, and team owners with tools to predict which players have peaked and which ones have their full potential ahead of them.


I don’t use quantitative methods much when it comes to sports; I think that takes away the excitement.

 

http://content.intweetiv.com/view?title=SAP+Uses+Own+Big+Data+Analytics+to+Project+Super+Bowl+Winner&iframe=http://www.eweek.com/enterprise-apps/sap-uses-own-big-data-analytics-to-project-super-bowl-winner.html/

 

After the Super Bowl finished, I saw on Twitter that SAP had predicted Denver would beat Seattle in a close match. As it turned out, Seattle won a rather one-sided match with a very young side.

 

I didn’t work on the predictive analytics solution that made the Super Bowl prediction, and I am not authorized by SAP to provide a response. But I wanted to share my personal views on the matter.


Then I saw Vijay Vijayasankar’s discussion about the perils of predictive analytics. He makes these crucial points:

 

Predictive analytics in general cannot be used to make absolute predictions when there are so many variables involved. In fact, I think there is no place for absolute predictions at all. And when the results are explained to the non-statistical expert user, they should not be dumbed down to the extent that they appear to be an absolute prediction.

Predictive models make assumptions, and these should be explained to the user to provide context. When the model spits out a result, it also comes with boundaries (the probability of the prediction coming true, the margin of error, confidence, etc.). When those things are not explained, predictive analytics starts to look like reading palms or tarot cards. That is a disservice to predictive analytics.

If the chance of Denver winning is 49% and Seattle winning is 51%, it doesn’t exactly mean Seattle will win. And not all users will look at it that way unless someone tells them more details.

In business, there is hardly ever an absolute prediction. Analytics provides a framework for decision making for business leaders. Analytics can say that if sales increase along the same historic trend, Latin America will outperform planned numbers next year compared to Asia. However, the global sales leader might know nuances the predictive model had no idea of, and hence can decide to prioritize Asia. The additional context provided by predictive analytics enhances the manager’s insight and over time will trend toward better decisions. The idea is definitely not to overrule the intuition and experience of the manager. Of course, the manager should understand clearly what the model is saying and use that information as a factor in decision making.

When this balance in approach is lost, predictive analytics gets an unnecessary bad rap.
