SAP for Chemicals Blogs
As companies move from just reporting the past (lagging or trailing indicators), they begin to look at predicting the future, and so they start implementing some form of predictive analytics. As I posted in http://scn.sap.com/community/chemicals/blog/2014/02/05/predictive-analytics-do-you-still-need-a-phd, developing the models needed to support those predictions is getting easier. So I was interested to run across an article by McKinsey & Company on the benefits and limits of decision models.

With the current capabilities to collect and process large amounts of data so that meaningful insights are produced, there is a temptation to forget the roots of the predictions. Bringing modeling down from the domain of experts into the hands of everyday users opens up a wide range of new capabilities that can generate great benefits for an organization.

But there are some things that need to be considered before everyone goes off and develops and implements a model.

Have you identified where a model will work well? Is there enough unbiased data that can be acted upon? Are people putting their own spin on the data before it gets to the model? If so, the benefit of modeling (looking at all data evenly and objectively, and, if a decision is made to weight a factor, applying that weighting consistently) is greatly reduced.
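To make that consistency concrete, here is a minimal sketch of what a model buys you. It is my own illustration, not anything from the McKinsey article, and the factor names and weights are hypothetical: the point is simply that every record is scored by the same rules, so no reading gets an individual spin on the way in.

```python
# Hypothetical factors and weights, purely for illustration.
WEIGHTS = {"vibration": 0.5, "temperature": 0.3, "run_hours": 0.2}

def risk_score(reading: dict) -> float:
    """Apply the same weighting to every reading, evenly and objectively."""
    return sum(WEIGHTS[factor] * reading[factor] for factor in WEIGHTS)

readings = [
    {"vibration": 0.8, "temperature": 0.4, "run_hours": 0.9},
    {"vibration": 0.2, "temperature": 0.9, "run_hours": 0.1},
]

for r in readings:
    print(risk_score(r))  # identical rules for every record, no ad-hoc spin
```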

Can you control the outcomes? Can you influence the outcome of the model? Normally this influencing is something to be avoided. But in the areas that I concentrate on (manufacturing, operations, and maintenance), it can be a good thing. After all, in trying to predict failures and breakdowns, we are trying to avoid the outcome of a failure or breakdown. And if we put processes in place to reduce failures, which is our goal, is the model incorrect when the failure does not occur? And how can we really measure the model's accuracy when we are doing our best to avoid the predicted outcome?
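This measurement problem is easy to see in a toy simulation (again my own sketch, with made-up numbers, not from the article): a hypothetical model flags assets whose latent failure risk exceeds 0.7, and maintenance intervenes on every flag. Measured naively, the better the intervention works, the worse the model looks.

```python
import random

random.seed(42)

def simulate(intervene, n=10_000):
    flagged = failed_among_flagged = 0
    for _ in range(n):
        true_risk = random.random()          # latent chance this asset fails
        if true_risk > 0.7:                  # model flags the asset as high risk
            flagged += 1
            fails = random.random() < true_risk
            if intervene:
                fails = False                # maintenance steps in and prevents it
            if fails:
                failed_among_flagged += 1
    return failed_among_flagged / flagged    # naive "hit rate" of the predictions

print("hit rate, no intervention:   %.2f" % simulate(False))  # roughly 0.85
print("hit rate, with intervention: %.2f" % simulate(True))   # 0.00
```

The model's predictions are exactly as good in both runs; only the observed outcomes differ, because acting on the prediction removes the very evidence we would use to score it.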

Improving Models Over Time: part of the process for using models is to compare the model's results with reality and, if possible, to adjust the model so its results more closely reflect what really happened. This feedback loop should happen rapidly. McKinsey states that "the observation of results should not make any future occurrence either more or less likely," and I agree with them in a lot of the examples they give (e.g. customer behavior, predicting the weather). But my focus, as stated previously, does tend to be on reducing or avoiding altogether what is being predicted (breakdowns and failures). By observing the prediction, and acting upon it, we should be reducing the likelihood of the future occurrence. I feel that over time operations will not only be continually updating the model (as McKinsey states) but in many cases replacing the model, as operations are now running in a new state, and we need to take another look at the data (and maybe collect other data) to continuously improve operations.
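As a sketch of that feedback loop (my own illustration; the window size and error threshold are arbitrary assumptions, and the sample data is fabricated), imagine logging each prediction against what actually happened and flagging the model for rework once recent reality drifts too far away from it:

```python
from collections import deque

class ModelMonitor:
    def __init__(self, window=50, error_threshold=0.3):
        self.errors = deque(maxlen=window)     # keep only recent comparisons
        self.error_threshold = error_threshold

    def record(self, predicted_failure, actual_failure):
        self.errors.append(predicted_failure != actual_failure)

    def needs_rework(self):
        """True when recent reality has drifted away from the model."""
        if len(self.errors) < self.errors.maxlen:
            return False                       # not enough evidence yet
        return sum(self.errors) / len(self.errors) > self.error_threshold

monitor = ModelMonitor()
# In a real plant these pairs would come from the maintenance history;
# here they are made up to show the trigger firing after a state change.
for predicted, actual in [(True, True)] * 40 + [(True, False)] * 30:
    monitor.record(predicted, actual)
    if monitor.needs_rework():
        print("Model drift detected; retrain or replace the model.")
        break
```

The interesting design choice is the last branch: once operations have moved to a new state, the right response may not be tuning the old model but stepping back, looking at the data again, and building a new one.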


Please let me know your thoughts.

For those of you who would like to read the full article that triggered my thoughts, please use this link: http://www.mckinsey.com/insights/strategy/the_benefits_and_limits_of_decision_models

As a side note: for some time now I have been posting on topics that interest me, and I hope interest you, and I am wondering whether you find them of value. If you do, please remember to "follow" me.
