Technology Blogs by Members
Explore a vibrant mix of technical expertise, industry insights, and tech buzz in member blogs covering SAP products, technology, and events. Get in the mix!

Using a Crystal Ball with SPS09: PAL: 75. Time Series - Forecast Accuracy Measures

Fortune tellers use their people skills and experience to gather data, then make predictions for their clients. Like Sherlock Holmes, they take small pieces of data, analyse them in context and then make educated guesses about what could happen next. In this tutorial Philip of the SAP HANA Academy explains how to measure the accuracy of forecasts made from existing smoothed data.

This video focuses on the new forecast accuracy measures that are available from SPS09 of the SAP HANA Predictive Analysis Library (PAL). It refers back to the previous video on exponential smoothing, which showed how to access these forecast accuracy measures through the statistics table. Philip then points out that you may not actually be running a time series analysis at all: you may already have data that has been smoothed and simply want to calculate forecast accuracy measures for it. He goes on to introduce a new algorithm called FORECAST ACCURACY MEASURES, which allows you to calculate these measures for data that has already been smoothed. The documentation for this is in the PAL Guide under Time Series.

Extraction and loading are very simple because only two columns are needed, the actual values and the smoothed (predicted) values, with no need for an ID column. These two columns are then used to work out the different forecast accuracy measures.
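To make this concrete, here is a minimal sketch of such an input structure in SQLScript. The type and column names are placeholders, not taken from the video:

-- Two-column input: actual values and smoothed (predicted) values, no ID column
CREATE TYPE PAL_ACCURACY_DATA_T AS TABLE (
    "SALES_AMOUNT"           DOUBLE,
    "SALES_AMOUNT_PREDICTED" DOUBLE
);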

The parameters that can be used are the same as those from the previous video on exponential smoothing.

Again, the output table is the same statistics table as for exponential smoothing. It consists of two columns holding the measure name and the result.
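A minimal sketch of that statistics table type, with assumed names following the usual PAL name/value convention:

-- Statistics output: one row per accuracy measure
CREATE TYPE PAL_STATISTICS_T AS TABLE (
    "STAT_NAME"  NVARCHAR(100),
    "STAT_VALUE" DOUBLE
);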

The rest of the syntax and code follows the same pattern as everything else in PAL. The tutorial demonstrates this step by step using the exponential smoothing data from the previous video, a nice touch that encourages developers to revisit those skills in order and build on prior work.


The tutorial shows how you can calculate the forecast accuracy measures directly in the statistics table by using the code below.

You can see a sales amount column and a predicted smoothed column.

However, a Calendar ID is also provided which, as mentioned previously, is not needed. The code used only includes the two columns you need.

You can see the input data for the forecast accuracy measures, with the sales amount and the predicted sales amount both typed as DOUBLE.
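For illustration only (the data in the video is shown as a screenshot, so the table and column names here are assumptions), the source table might look like this, including the Calendar ID column that the algorithm does not need:

-- Hypothetical source data: the CALENDAR_ID column will be left out of the input
CREATE COLUMN TABLE SALES_SMOOTHED (
    "CALENDAR_ID"            INTEGER,
    "SALES_AMOUNT"           DOUBLE,
    "SALES_AMOUNT_PREDICTED" DOUBLE
);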


You also have a signature table, input data, parameters and, as always, the statistics table as output. The table structures can be exactly the same as those used in the exponential smoothing examples in the previous video.
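A sketch of those pieces, reusing the table types above and assuming the standard AFL signature table layout (the schema and type names are placeholders):

-- Parameter (control) table type used by PAL procedures
CREATE TYPE PAL_CONTROL_T AS TABLE (
    "NAME"       NVARCHAR(100),
    "INTARGS"    INTEGER,
    "DOUBLEARGS" DOUBLE,
    "STRINGARGS" NVARCHAR(100)
);

-- Signature table: position, schema, table type and direction of each parameter
CREATE COLUMN TABLE PAL_SIGNATURE_TAB (
    "POSITION"       INTEGER,
    "SCHEMA_NAME"    NVARCHAR(256),
    "TYPE_NAME"      NVARCHAR(256),
    "PARAMETER_TYPE" VARCHAR(7)
);
INSERT INTO PAL_SIGNATURE_TAB VALUES (1, 'MYSCHEMA', 'PAL_ACCURACY_DATA_T', 'IN');
INSERT INTO PAL_SIGNATURE_TAB VALUES (2, 'MYSCHEMA', 'PAL_CONTROL_T',       'IN');
INSERT INTO PAL_SIGNATURE_TAB VALUES (3, 'MYSCHEMA', 'PAL_STATISTICS_T',    'OUT');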


Next you call the wrapper procedure generator, specifying the forecast accuracy measures algorithm. In SPS09 you also need to include the name of the schema in which you want your stored procedure to be created.
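A sketch of that call, assuming the SPS09 AFLLANG wrapper and using 'FORECASTACCURACYMEASURES' as the PAL function name; verify the function name against the PAL Guide for your release, and note that 'MYSCHEMA' and the procedure name are placeholders:

-- Generate the wrapper procedure in the target schema (in SPS09 the schema is specified here)
CALL SYS.AFLLANG_WRAPPER_PROCEDURE_CREATE(
    'AFLPAL',                      -- AFL area
    'FORECASTACCURACYMEASURES',    -- PAL function name (assumed; check the PAL Guide)
    'MYSCHEMA',                    -- schema in which to create the procedure
    'PAL_ACCURACY_MEASURES_PROC',  -- name of the generated procedure
    PAL_SIGNATURE_TAB              -- signature table defined above
);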


You need to create a view with just the two columns: sales amount and the predicted amount.
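A minimal sketch of that view, based on the hypothetical source table above:

-- Expose only the two columns the algorithm expects
CREATE VIEW ACCURACY_DATA_V AS
SELECT "SALES_AMOUNT", "SALES_AMOUNT_PREDICTED"
FROM SALES_SMOOTHED;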


Once you have created your statistics table and set up your parameter table, you just need to list any or all of the nine different measures that are available.
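As a sketch, assuming the parameter name ACCURACY_MEASURE and the measure codes listed in the PAL Guide (verify both against your release), the parameter and statistics tables could be set up like this:

-- Physical tables for the call
CREATE COLUMN TABLE PAL_CONTROL_TAB (
    "NAME"       NVARCHAR(100),
    "INTARGS"    INTEGER,
    "DOUBLEARGS" DOUBLE,
    "STRINGARGS" NVARCHAR(100)
);
CREATE COLUMN TABLE PAL_STATISTICS_TAB (
    "STAT_NAME"  NVARCHAR(100),
    "STAT_VALUE" DOUBLE
);

-- Request all nine measures (any subset also works)
INSERT INTO PAL_CONTROL_TAB VALUES ('ACCURACY_MEASURE', NULL, NULL, 'MPE');
INSERT INTO PAL_CONTROL_TAB VALUES ('ACCURACY_MEASURE', NULL, NULL, 'MSE');
INSERT INTO PAL_CONTROL_TAB VALUES ('ACCURACY_MEASURE', NULL, NULL, 'RMSE');
INSERT INTO PAL_CONTROL_TAB VALUES ('ACCURACY_MEASURE', NULL, NULL, 'ET');
INSERT INTO PAL_CONTROL_TAB VALUES ('ACCURACY_MEASURE', NULL, NULL, 'MAD');
INSERT INTO PAL_CONTROL_TAB VALUES ('ACCURACY_MEASURE', NULL, NULL, 'MASE');
INSERT INTO PAL_CONTROL_TAB VALUES ('ACCURACY_MEASURE', NULL, NULL, 'WMAPE');
INSERT INTO PAL_CONTROL_TAB VALUES ('ACCURACY_MEASURE', NULL, NULL, 'SMAPE');
INSERT INTO PAL_CONTROL_TAB VALUES ('ACCURACY_MEASURE', NULL, NULL, 'MAPE');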

After calling your algorithm you should see one result per measure in the statistics table if you have used all nine.
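A sketch of that call and of reading the results, using the placeholder names from the earlier snippets (WITH OVERVIEW is the usual PAL calling convention when passing physical tables):

-- Run the forecast accuracy measures algorithm
CALL MYSCHEMA.PAL_ACCURACY_MEASURES_PROC(ACCURACY_DATA_V, PAL_CONTROL_TAB, PAL_STATISTICS_TAB) WITH OVERVIEW;

-- One row per requested measure: name and value
SELECT * FROM PAL_STATISTICS_TAB;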


The Crystal Ball

The tutorial concludes by reviewing how to calculate forecast accuracy measures when you already have actual and smoothed data, which is new in SPS09.

Even if you haven't already calculated these measures, you can do so after the fact using the forecast accuracy measures algorithm, which is new in SPS09.


So what's next in forecasting? Fortune tellers and detectives subconsciously weigh data in context to re-rank it according to past experience and use "intuition" as a deciding factor. Moving averages do this for one set of data at a time. Recalculating after the fact is a small step away from software learning and contextualizing data, so that different pieces of data, both raw and recalculated, can be re-ranked for new circumstances based on previous contexts. This kind of analysis, like Freakonomics, will provide insights and surprises for developers and business leaders alike.
