Supply Chain Management Blogs by SAP
thomas_engelmann


Internally, the VSR optimizer uses a so-called meta-heuristic. Concretely, it is very similar to iterated local search. In a few words, it works like this: We generate some plans. In a next step we apply improvement operators to such a plan as long as we can improve it (that is the local search part). Afterwards we make a change to the plan which may decrease the plan quality. From there we again use the improvement operators to find the next local optimum for this plan. We keep repeating this sequence again and again: change, local improvement, change, local improvement, …

Sometimes we interrupt this sequence to make some very large changes (we call it a random walk) in order to explore other regions of the search space. There we do the same again: change, local improvement, change, …

You can find more information about iterated local search in many publications; Wikipedia is a good starting point for your own research. The solution used in VSR has some additional knowledge-based enhancements to explore the search space more efficiently, and we work on populations of solutions. Because of that we call it Evolutionary Local Search (ELS).
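
To make the idea more tangible, here is a minimal sketch of iterated local search in Python. All names (initial_plan, improve, perturb, cost) are hypothetical placeholders; the actual VSR engine is written in C++ and, as mentioned, additionally works on populations of solutions with knowledge-based operators.

import time

def iterated_local_search(initial_plan, improve, perturb, cost, time_limit_s):
    # initial_plan: a feasible starting plan
    # improve(plan): applies one improvement operator, returns a better plan or None
    # perturb(plan): applies a change that may decrease the plan quality
    # cost(plan):    total cost of the plan (lower is better)
    current = initial_plan
    best = current
    deadline = time.monotonic() + time_limit_s

    while time.monotonic() < deadline:
        # Local search part: apply improvement operators as long as they help.
        while True:
            candidate = improve(current)
            if candidate is None or cost(candidate) >= cost(current):
                break
            current = candidate

        # Remember the best local optimum found so far.
        if cost(current) < cost(best):
            best = current

        # Change the plan (possibly making it worse) to leave the local optimum.
        current = perturb(current)

    return best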



Meta-heuristics like ELS have the advantage that you get feasible and good solutions very quickly, and over time you get even better solutions. But you can never state whether you have already found the optimal solution, or even how far away from it you are. Because of that, you have to decide how long the optimizer should search and when it should stop. The disappointing truth is that there is no simple or general rule. It is nearly impossible to estimate the runtime for a scenario up front, because it depends strongly on the constraints that are most critical for creating the plans and on the current bottlenecks.

The only good answer that remains is the following: for a given scenario, the runtime is stable over time. So if you run your optimizer every day on the same supply network and with a nearly constant number of freight units, the expected runtime is nearly the same every day.

This means that for setting up batch runs for the optimizer, you should set up a worst-case scenario (one with the most freight units inside the planning horizon). Then you determine how much runtime the optimizer needs to solve it with good quality, and you use this runtime for the regular batch runs.

To do so, you select a runtime that is much higher than you expect to need. You enable the explanation protocol and then start the optimizer. After the run, have a look in the explanation tool at the section Result->Solution Details->Progress. Here you have a list of the solutions the optimizer found as improvements over the previous solutions. You should focus on the columns for time and total costs. The total cost represents the overall solution quality. In the first part of the runtime you usually see a lot of improvement here (decreasing total cost), but with increasing runtime the speed of improvement usually slows down. Now you can select the runtime after which an acceptable solution quality is reached. Add some additional buffer (for example 20%) and you should have a good runtime for your scenario.
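
To illustrate this rule of thumb, the following sketch picks a runtime from a hypothetical progress list of (elapsed seconds, total cost) pairs: it takes the first solution whose cost is already within 1% of the final cost and adds a 20% buffer. Both the numbers and the 1% threshold are invented for the example; the real values come from your own explanation protocol.

# Hypothetical progress data taken from Result -> Solution Details -> Progress:
# (elapsed seconds, total cost) of each improved solution.
progress = [
    (10, 125000), (30, 98000), (60, 91500), (120, 88200),
    (240, 87600), (480, 87450), (900, 87400),
]

final_cost = progress[-1][1]

# First solution whose total cost is already within 1% of the final cost.
chosen_runtime = next(t for t, c in progress if c <= final_cost * 1.01)

# Add a 20% buffer for the regular batch runs.
recommended_runtime = chosen_runtime * 1.2
print("Recommended batch runtime: %.0f seconds" % recommended_runtime)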

If you want first-class transportation plans, you should repeat such tests regularly, especially if you have changed your model or increased the planning volume.

If you use the optimizer during manual planning to automate a planning step, the volume for the optimizer changes with every call. That means the approach described above for determining the runtime of batch processes is not helpful at all. Therefore we offer two alternatives to control the runtime in a more interactive setting:

My favorite is the automatic runtime: internally, it measures the time needed to build the initial plan. Based on this time, it sets the time for the iterative local search process. You can choose between some predefined values, depending on whether you want fast reaction times or an intensive optimization resulting in high-quality plans.

This approach automatically adapts to the volume and to some critical constraints, as both already have an effect while the first initial plans are created.
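
Conceptually, the automatic runtime behaves roughly like the following sketch. The multipliers per level are invented for illustration only; they are not the actual predefined values of the optimizer.

import time

# Hypothetical multipliers per profile setting: how much longer than the
# construction of the initial plan the improvement phase may run.
LEVEL_FACTORS = {"fast": 2, "medium": 5, "intensive": 15}

def run_with_automatic_runtime(build_initial_plan, improve_for, level="medium"):
    start = time.monotonic()
    plan = build_initial_plan()                    # measure the construction time
    construction_time = time.monotonic() - start

    # The budget adapts to volume and critical constraints, because both
    # already influence how long the construction of the initial plan takes.
    improvement_budget = construction_time * LEVEL_FACTORS[level]
    return improve_for(plan, improvement_budget)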

Alternatively, we offer the possibility to stop the optimization run when the rate of finding new improvements falls below a defined threshold. You define the threshold as a time per freight unit, in seconds. The optimizer stops as soon as the last improvement is longer ago than this threshold multiplied by the number of freight units. This parameter requires a lot of expertise: the improvement rate may depend not only on your scenario and the optimization progress, but also on the load that other processes put on the optimization server. Because of that, I personally prefer the automatic approach.
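
Expressed as a sketch, the stop criterion amounts to the check below; should_stop and threshold_s_per_fu are illustrative names, not the actual parameter names.

import time

def should_stop(last_improvement_ts, threshold_s_per_fu, num_freight_units):
    # Stop as soon as the last improvement is longer ago than
    # threshold (seconds per freight unit) * number of freight units.
    allowed_idle_time = threshold_s_per_fu * num_freight_units
    return time.monotonic() - last_improvement_ts > allowed_idle_time

# Example: with 0.05 s per freight unit and 400 freight units the run
# stops after 20 seconds without any improvement.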

To be complete about the possibilities to stop the optimizer, I have to mention a completely different approach: in transaction rcc_session you can see all running optimization processes. Here you can select one and press the stop button to stop the optimization process and return with the best solution found so far. Or you can kill the process at once; in this case the optimizer stops immediately and does not write any result into the TM system.

I have described several approaches to find the right runtime. But what can you do if the required runtime is too long?

As a first step, you should always analyze your scenario and your business process to see whether it is possible to split a large run into several smaller optimization runs. The runtime of a VSR optimization is almost never linear in the problem size, as an increasing number of objects leads to a much faster increase in the number of valid alternatives. Because of that, dividing large scenarios into smaller runs has the potential for super-linear speed-ups.
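
As a purely illustrative back-of-the-envelope calculation of why splitting helps: if you only count the possible assignments of freight units to vehicles and ignore everything else, the number of alternatives explodes with the volume, so two halves are far cheaper to explore than one large run.

# Purely illustrative: counting only the possible assignments of freight
# units to vehicles (ignoring sequencing, dates and all constraints),
# the number of alternatives grows exponentially with the volume.
def assignments(freight_units, vehicles):
    return vehicles ** freight_units

one_large_run  = assignments(20, 5)        # 5**20, about 9.5 * 10**13
two_small_runs = 2 * assignments(10, 5)    # 2 * 5**10, about 2.0 * 10**7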

As a next step, you can think about using several CPU cores in parallel. You can do that easily by selecting the corresponding number in the optimization profile. In this case different threads work on different plans in parallel, and in every thread we can perform the steps of our iterative local search nearly independently.
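
Structurally, the parallelization can be pictured like this simplified sketch, in which each worker improves its own plan independently and the best result is kept; all names are placeholders (the real engine uses threads inside the C++ optimization server).

from concurrent.futures import ThreadPoolExecutor

def parallel_local_search(start_plans, local_search, cost, num_workers):
    # Each worker runs an independent local search on its own plan;
    # at the end the best result is returned.
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        results = list(pool.map(local_search, start_plans))
    return min(results, key=cost)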

The biggest potential to reduce the runtime always lies in the model: Can you reduce the number of alternatives, perhaps by reducing the number of different vehicle types or routing alternatives? Can you simplify the cost definition in such a way that the optimizer can detect very early which alternative is good and which is worse? Can you simplify or remove some details to simplify the computations, for example detailed breaks, driving constraints, or complex incompatibilities? SAP expert consulting can help you identify such potential.

Additionally, we in optimization development are always interested in receiving performance-critical scenarios. If you expect that the optimizer should solve your problem much faster, then this scenario has the potential to teach us something and to lead to new ideas and techniques for speeding up the optimizer. So if you have run out of ideas, send us the scenario via a ticket. We reserve the right to redirect it to expert consulting if it turns out to be a pure consulting topic.

If the optimization takes too much time, the reasons may also lie outside the optimization engine itself. It might be that the runtime is spent in the pre-processing that reads the data for the optimization. To check that, you can have a look into the optimization traces (transaction rcc_log, Shift+F5 on your selected optimization run).

You can search for "PLN_OPT_INPUT". From the differences between the time stamps in the trace lines you can see the time consumed by the pre-processing and the transfer of the data. Starting with release 9.1 we have some additional output: you find it a few lines below under the keyword ET_RUNTIMES. Here you can see the runtimes of the different pre-processing steps in milliseconds.

...

<i> 03:00:27 core_supervisor.cpp(679) 'SuperVisor' <M> Invoking module 'VsrMG' [2]->download
<i> 03:00:27 core_msgmgr.cpp(589) 'MsgMgr' * Sending progress number 200 to OutputInterface from [VsrMG]
<i> 03:00:27 vsrmg_module.cpp(318) 'VsrMG' ***** Read input data via RFC *****
<i> 03:00:27 rfc_connection.cpp(653) 'MsgMgr' <rfc> calling function module /SCMTMS/PLN_OPT_INPUT
<i> 03:00:30 vsrmg_module.cpp(474) 'VsrMG' ***** Dump 91 rm input data *****
<i> 03:00:30 rm_fileconnector.cpp(97) 'MsgMgr' <RM> saving tables as TEXT to vsr20140321_030027_34c8.in
<i> 03:00:30 rm_fileconnector.cpp(135) 'MsgMgr' <RM> finished saving tables as TEXT to vsr20140321_030027_34c8.in [71 tables]

....

<i> 03:00:30 rm_helper.cpp(524) 'MsgMgr' Received the following runtime values:
<i> 03:00:30 rm_helper.cpp(515) 'MsgMgr' # converted by $Id: //apo_tc/o10n/dev/ops/common/relatiom/rm_textconverter.cpp#17 $ $Change: 403668 $
// DATE 2014/03/21
// TIME 03:00:30
//
// TABLE ET_RUNTIMES
// STEP_IDX STEP_NAME                                           STEP_RUNTIME
// INT      STRING                                              INT
   1        CL_PLN_OL_MAIN=>VSR_PRE                             3243
   2        CL_PLN_IL=>OPT_PRE_PROC::OPT_BADI->BERFORE_PRE_PROC 0
   3        CL_PLN_IL=>OPT_PRE_PROC                             1079

...

Trace with a pre-processing runtime of approximately 3 seconds
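
If you do not want to compute the time stamp differences by hand, a small script along the following lines can extract them from a saved trace file. The parsing assumptions (lines starting with <i>, HH:MM:SS time stamp in the second column) are taken from the excerpt above and may need adjustment for your system.

from datetime import datetime

def preprocessing_time(trace_lines):
    # Time between the call of /SCMTMS/PLN_OPT_INPUT and the next trace
    # line, based on the HH:MM:SS time stamps in the second column.
    call_ts = None
    for line in trace_lines:
        parts = line.split()
        if not line.startswith("<i>") or len(parts) < 2:
            continue
        ts = datetime.strptime(parts[1], "%H:%M:%S")
        if call_ts is not None:
            return ts - call_ts
        if "PLN_OPT_INPUT" in line:
            call_ts = ts
    return None

# With the trace excerpt above this returns a timedelta of 3 seconds.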

In rare cases the post-processing may cause long runtimes, too. To check that, you have to compare the last time stamps in the trace with the end of the complete optimization process, which might be the end of the batch run or the moment when interactive planning is responsive again.
