I've created a wiki with a list of DP-related transactions.


You can check the list in the link below:




If you have any suggestions, please feel free to comment.


Best regards,


Georgia Vanin

This article explains the use of the "Activate liveCache lock" option in a custom SNP planning area.



We faced the following issue after an APO upgrade from version 7.0 to 7.0 EHP2. A number of SNP copy jobs (/SAPAPO/RTSCOPY) started failing. SAP informed us that the locking logic had been improved for SNP (note 1318667 - Incorrect lock entry for SNP aggregate 9AMALO). In our case the note was implemented as part of the upgrade, and afterwards jobs in our quality system failed with the message "Lock table overflow".


The number of locks in SM12 is controlled by the profile parameter enque/table_size, which is maintained by the Basis team. The parameter value should not be increased beyond a certain limit.


Possible solutions to this problem:

1. Limit the selection of the copy. Each large step has to be replaced with smaller selection steps. This can lead to more maintenance effort, as the selections need to be updated regularly.

2. Use a macro to copy. If the system has a large number of time-series key figures, a unique macro needs to be created for each copy, and a macro has a longer runtime.

3. Use the "Activate LC lock" option for the custom SNP planning area. This option was implemented; the details are below.


Consider a custom planning area with three time-series key figures, as shown below.


Case 1: "Activate liveCache lock" is unchecked.


/SAPAPO/RTSCOPY was run with the below selection.

SM12 shows a very large number of locks. This is a representative screenshot; in our actual test, the job failed with a lock table overflow.

Case 2: "Activate liveCache lock" is checked.

SM12 shows only one lock entry, with a reference to LCA_GUID_STR.

Report /SAPAPO/TS_LC_DISPLAY_LOCKS shows the below, and more information is visible via the Details button.


SAP note 1988958 (Data is locked: How to find out locked planning objects when using liveCache locking logic) explains the above report.


If the planning area has inconsistencies, the report fails with the below message. This can be fixed by running the consistency reports and OM17.


The change in the locking logic of a planning area can have a huge impact and must be tested thoroughly. Some of the scenarios we tested are:

  1. Same selection + same planning version: the locking logic works as expected and the second occurrence is locked out. The first user was in change mode and the second user got a "selection locked" message.
  2. Same selection + different version: no locking message for the second user, as expected.
  3. Different selection + same version: no locking message for the three users, as expected.

Conclusion: "Activate liveCache lock" is a good option for an SNP planning area and will prevent the lock table overflow error.

This blog explains how to create a macro that makes the row "Total" not editable in transaction /SAPAPO/SDP94.


Based on the following truth table for ROW_INPUT( X ):

X = 1 → Edit
X = 0 → No Edit


  • Use the function ACT_LEVEL inside the IF condition. If ACT_LEVEL( ) > 0, this means that the level is not the total level.


  • Then choose the row for which you want to use this functionality and change its attribute.


  • Inside that you will use function ROW_INPUT ( 1 ). If X = 1, the status of the row is set to Ready for Input.

  • Then use ELSEIF AGG_LEVEL (your characteristic) = 0. This checks whether you are on the detailed level of a selection; in that case you should be able to edit.


  • The last step is to use the ELSE condition, again with the row as an attribute.


  • Inside that you will use function ROW_INPUT ( 0 ). If X = 0,  the status of the row is set to Read-Only (write-protected).
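Putting the steps together, the macro logic can be sketched as follows. This is macro-builder pseudocode, not a screenshot of the original macro; the row and characteristic names are placeholders:

```
IF
   ACT_LEVEL( ) > 0                         " aggregated, but not the total level
      Row: Total
         ROW_INPUT( 1 )                     " ready for input
ELSEIF
   AGG_LEVEL( <your characteristic> ) = 0   " detailed level of the selection
      Row: Total
         ROW_INPUT( 1 )
ELSE
      Row: Total
         ROW_INPUT( 0 )                     " read-only (write-protected)
ENDIF
```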


Note that I also used the function ROW_BG to change the background color of the row. See example below:




You should assign this macro to the events "Level Change" and "Start", because then the macro is executed both when you open /SAPAPO/SDP94 and when you change the level.



And here is the result in /SAPAPO/SDP94 transaction. The gray background color means that you can't change the row. The green background color means that the row is ready for input:


a) When you open /SAPAPO/SDP94 and load only one product (of the characteristic that I inserted in the ELSEIF), you will be able to change the total:


b) If you open /SAPAPO/SDP94 and load more than one product you will not be able to change the total:




c) If you select Details (all), you will again not be able to change the total, as expected:


Hi all

We often face the problem where the process chain background job is red but the job in SM37 is green.

There is no good and easy way to continue and set the process chain step to green so that the chain continues with the next steps.


Please find below the steps which could help you overcome this problem. It's not the ideal way, but it works :-)


Step 1: Get the "Variant" and "Instance" from the Display Messages of the process step in the process chain.





Step 2: Enter the above "Variant" and "Instance" and the date when the job ran, as seen below.





Step 3: Go to SE37 and use the function module RSPC_PROCESS_FINISH.




Enter the below details, which we can get from the table:

Table field        SE37 parameter

LogID              I_LOGID

Process type       I_TYPE

Variant            I_VARIANT

Instance           I_INSTANCE

And set Status: G




This will change the status of the process chain step from red to green.


I think these steps are really helpful for support projects.

You can also create a small wrapper program using the above function module, where you give only the variant and the program changes the status to green.
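As a rough sketch, such a wrapper program might look like the ABAP report below. The table and field names (RSPCPROCESSLOG with LOG_ID, TYPE, VARIANTE, INSTANCE) and the I_STATE parameter are assumptions, not taken from the original post; verify the table in SE11 and the function interface in SE37 before use.

```abap
REPORT zrspc_set_green.
"Hedged sketch of the wrapper program mentioned above - not the original code.
"Assumptions: process chain log entries live in table RSPCPROCESSLOG with
"fields LOG_ID, TYPE, VARIANTE and INSTANCE, and RSPC_PROCESS_FINISH accepts
"the status via parameter I_STATE.

PARAMETERS p_var TYPE rspcprocesslog-variante OBLIGATORY.

DATA ls_log TYPE rspcprocesslog.

"Read one log entry for the given process variant.
SELECT SINGLE * FROM rspcprocesslog INTO ls_log
  WHERE variante = p_var.

IF sy-subrc = 0.
  "Set the process chain step to green ('G').
  CALL FUNCTION 'RSPC_PROCESS_FINISH'
    EXPORTING
      i_logid    = ls_log-log_id
      i_type     = ls_log-type
      i_variant  = ls_log-variante
      i_instance = ls_log-instance
      i_state    = 'G'.
  WRITE / 'Process step set to green.'.
ELSE.
  WRITE: / 'No log entry found for variant', p_var.
ENDIF.
```

In practice you may want to restrict the selection further (e.g. by date or chain), since one variant can have many log entries.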


I hope this information is helpful to you all.




With regard to the macro described in Macro - To Create Pop-Up for Input, here is an example of how to use a macro for input and calculation (pop-up). Continuing from that example, let's define a further macro:


Use the functions COLUMN_MARKED and LAYOUTVAR_VALUE to implement this functionality. In our case we use the value X = 1, which means: if columns are selected, then perform the following action.
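In macro-builder pseudocode, the macro can be sketched roughly as below. The key-figure names and the input variable are illustrative, not taken from the original:

```
IF
   COLUMN_MARKED( 1 )                          " act only on the marked columns
      Production Forecast =
         Composite Forecast
         * LAYOUTVAR_VALUE( <input variable> ) / 100   " e.g. 150% -> factor 1.5
ENDIF
```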

Macro 1.JPG


Activate and save the macro. Let's go to the planning book. Here we see the icon for this macro. In the case below, both forecasts are currently the same.


Macro 1.JPG


Select three columns as shown below:


Macro 1.JPG


Now click the icon for the macro. This gives a pop-up for the value. In the example below, I enter 150% for the calculation and press Continue.


Macro 1.JPG

As you can see below, the production forecast is now 1.5 times the composite forecast. Save the data in the planning book.

Macro 1.JPG


Hope this helps in your day-to-day projects.

This blog explains how to create a macro which triggers a pop-up for input. With this pop-up you can enter any value for use in a calculation. To do this, go to /SAPAPO/ADVM.


Drag a macro element into the work area and assign a push button to it. This button will later be visible in the planning book for which we are defining this macro.


Macro 1.JPG


Using the functions LAYOUTVARIABLE_SET and NUM_VALUES_INPUT, define the below macro to create a pop-up in the interactive planning screen.
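Roughly, the macro prompts for a number and stores it in a layout variable for later use. This is pseudocode; the variable name and prompt text are illustrative, and the exact argument order of the functions should be checked in the macro builder:

```
" Pop-up prompting the user for a number; the result is kept in a
" layout variable that other macros can read via LAYOUTVAR_VALUE.
LAYOUTVARIABLE_SET( <input variable> ;
   NUM_VALUES_INPUT( 'Enter value' ) )
```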


Macro 1.JPG


Check and save the macro. Now let's go to planning book to see the result.


In the planning book we see the assigned icon. Click it; as soon as we do, we see a pop-up for input, as in the example.


Macro 1.JPG


I hope this gives some insight into creating a pop-up with the help of a macro. Check the macro below for usage of this pop-up.


Macro - Pop-Up Usage for Input & Calculation

Hi all

Please find below all the steps, with the respective reports, to carry out before you delete product locations from APO.


a) Remove product from integration model

b) Delete all orders for this product-location which still exist in the APO system by running report /SAPAPO/RLCDELETE or /SAPAPO/DELETE_PP_ORDER (this report is not recommended for use in a production system!) or transaction /SAPAPO/RRP4.

c) Use transaction /SAPAPO/CCR to delete all objects in APO that are integrated with R/3. /SAPAPO/CCR will show the orders as 'Not in an active integration model' and offers to delete the orders in APO.

d) Delete the product from the ATP tables by running report /SAPAPO/SDORDER_DEL with the selection according to the product being deleted (orders from DB and liveCache).

e) Delete all related product substitution and location product substitution using the transaction /SAPAPO/RBA04;

f) Delete the product from all PPMs through the transaction /SAPAPO/SCC05;

g) Remove the product from external procurement using the transaction /SAPAPO/PWBSRC2;

h) Remove the product from transportation lanes through transaction /SAPAPO/SCC_TL1 or /SAPAPO/SCE. Note: if the lane was created with the option 'All products', you do not need to take care of this condition.

i) Remove the product from quota arrangements through transaction /SAPAPO/SCC_TQ1;

j) Remove references from the product master or the location product master using the transaction /SAPAPO/MAT1, tab 'Classification';

k) Remove the planning package from the location product master using the transaction /SAPAPO/MAT1, tab 'PP/DS';

l) Remove the product from RTO;

m) Remove the product from product split.


  2. Mark deletion flag

a) In transaction /SAPAPO/MAT1, select the product (and location in case of location product);

b) In the menu bar, choose 'Product' and go to 'Mark for deletion' (Shift+F6);

c) In the popup 'Set Deletion Flag', set the flag in the appropriate field: leave the field 'Product' empty unless you want to delete the product master itself, i.e. if a location product must be deleted, the flag has to be set on the location only;

d) The deletion can be executed directly or be scheduled for another time.

Execute report /SAPAPO/DELETE_PRODUCTS, which will delete all the product locations that are marked with the deletion flag.

This blog will give you an overview of how to use an auxiliary row in a macro. An auxiliary table is used to retain intermediate results of calculations. This means we can collect data for calculations in the background during interactive planning.


We will define a macro which calculates the absolute deviation between two key figures (basically a %) and holds the value in an "auxiliary key figure". Then we write a condition: if the "auxiliary key figure" is greater than 25, the color of one key figure changes to red; otherwise it will be green. For this we will use the mathematical function ABS and the function CELL_BG.


Write a default macro as shown below, where data is stored in the auxiliary key figure and then compared to the threshold through an operator/function.
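The default macro can be sketched in two steps. This is macro-builder pseudocode with illustrative key-figure names, not the original macro:

```
" Step 1: store the deviation (%) in the auxiliary key figure
Auxiliary KF =
   ABS( ( Production Forecast - Composite Forecast )
        / Composite Forecast * 100 )

" Step 2: color the cell based on the deviation
IF
   Auxiliary KF > 25
      CELL_BG( <red> )
ELSE
      CELL_BG( <green> )
ENDIF
```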


Macro 1.JPG


Let's see the result of the macro. As we can see below, if the difference between the production forecast and the composite forecast is greater than 25%, the cell becomes red; otherwise it turns green. Though we don't see the auxiliary key figure itself, the calculation is done with its help.


Macro 1.JPG

Hello experts...


I've implemented the BAdI SMOD_APOCF005 to maintain some values automatically in the MATLOC table, but I am having some problems.

The CIF model contains more than 450 products, and the CIF process is divided into 2 executions.

Only the second execution is able to save the values. In the first execution, the values are passed and saved to the MATLOC table, but the product master is not updated.

Can you help with this issue?


Thanks a lot!!

Problem Definition

Bottleneck resources are a common constraint in planning & manufacturing process in most businesses. Therefore, it is essential for supply network planners to have an accurate view of the capacity situation (Available Capacity and Capacity Consumption) on these bottleneck resources to enable them to plan efficiently in order to meet demand within a stipulated time frame.


APO has two different planning applications – Supply Network Planning or SNP for planning in the medium to long-term horizon and Production Planning & Detailed Scheduling or PP/DS for planning in the short-term horizon. The areas of responsibility between these two applications are segregated based on the horizons defined in the master data: while SNP plans outside the SNP production horizon (the short-term horizon), PP/DS plans inside the PP/DS horizon (the short-term horizon). Additionally, both applications use separate sets of master and transaction data. For example, the SNP application uses SNP-type orders, resources with bucket capacity and the SNP-type Production Data Structure or PDS, and PP/DS uses PP/DS-type orders, resources with continuous capacity and the PP/DS-type PDS.


In typical business scenarios, planning system APO is integrated either with a Legacy system or an ECC system for execution of the plan output from APO. Since plans are executed in the near-term or short-term horizon, therefore, planned or production orders transferred from either a Legacy or ECC execution system to APO system are always PP/DS-type orders and not SNP-type orders. However, during SNP planning run, APO creates SNP-type planned orders which consume bucket capacity of resources using SNP-type PDS. Similarly, during PP/DS planning run, APO creates PP/DS-type planned orders which consume time-continuous capacity of resources using PP/DS-type PDS. Therefore, while resources exist only once physically on the shop floor their capacities are represented twice as PP/DS-type time continuous capacities and SNP-type bucket capacities.


Since there is no direct link between a SNP-PDS and a PP/DS-PDS and no generally valid logic to calculate bucket consumption by PP/DS-type orders on SNP resources in standard SAP APO, therefore, there is no capacity consumption relevant for SNP by PP/DS-type orders that exist in the system.  This poses a problem for the supply planner since he does not have visibility to PP/DS-type orders already scheduled and hence consuming capacity on his physical (SNP) resources with bucket capacities in the interactive planning screen. This lack of visibility to PP/DS orders creates an inflated capacity situation on the resources in the near to medium-term during supply planning and hence can result into inaccurate capacity planning.


This inaccurate capacity planning may affect the whole supply chain planning process of business in a real scenario. To overcome this situation, SNP planning must take into account the capacity consumption of PPDS-type orders on mixed (combination of time-continuous and bucket) resources during SNP planning.


Business Implications

In order to understand business implications of the problem defined above, we need to look at two different business scenarios:


Scenario 1: APO is integrated with a Legacy execution system


When APO is integrated with a Legacy execution system, the planning output from APO to the execution system is usually integrated on a periodic basis e.g. weekly or monthly while the detailed schedule and everyday shop-floor related changes from the execution system are integrated back to APO on a more frequent basis e.g. daily. These scheduled orders or production orders from the execution system are created in APO as PP/DS-type orders that won't consume capacity on SNP bucket resources by standard. Since these orders are not consuming capacity on SNP resources, therefore, available capacity on these resources will be inflated which will lead to scheduling of SNP planned orders on these resources where no capacity actually exists unless the supply planner takes manual corrective action. This may lead to an inflated plan from APO passing back to the Legacy execution system at the end of next period e.g. week or month or a lot of additional manual effort from the supply planner in order to correct the plan.


Scenario 2: APO is integrated with ECC execution system


When APO is integrated with an ECC execution system, the transactional data between APO and ECC is usually integrated on a real-time basis i.e. whenever a change occurs in APO, the new plan is visible in ECC and vice-versa. This means that while all changes to the scheduled orders or production orders in the execution system are immediately passed to APO as new or changed PP/DS-type orders they will still not consume capacity on SNP bucket resources and hence do not impact SNP plan in reality. This implies that the supply planner will be blind to all such changes unless he uses one of the PP/DS transactions in APO to check the situation on the shop floor. This may lead to an inaccurate capacity plan immediately passing back from APO to ECC or a lot of additional manual effort from the supply planner in order to correct the plan and hence necessitate only periodic integration of the plan from APO to ECC.


In both scenarios, inaccurate capacity plan on bottleneck resources may lead to one or more of the following business consequences:

1. Products not being manufactured on time affecting customer service levels.

2. Ad-hoc adjustments to production schedule leading to excess or out-of-stock situations.

3. Inefficient planning of labor attached to bottleneck resources leading to cost overruns.

4. Skewed capacity bottleneck situation affecting capacity expansion plans.

5. Half-baked outsourcing decisions leading to financial implications.


Solution Overview

At a high level, APO is used for long-term planning, medium-term planning and short-term planning. The long-term, medium-term and short-term planning depends on planning horizon as provided by the business. For long-term and medium-term planning Supply Network Planning (SNP) is used, which is bucket-oriented planning for material and capacity requirements whereas for short-term planning Production Planning and Detailed Scheduling (PP/DS) is used, which is time-continuous planning. In order to work with both SNP & PP/DS planning on the same resource, the resource should be Single/Multi-mixed resource type in APO. These types of resources can be managed for both time-continuous and bucket oriented planning.


For planning in APO, some basic master data such as location master, product master, resource and Production Data Structure or PDS needs to be maintained as a prerequisite. Out of these, the Production Data Structure (PDS) is one of the most important master data. This is because while planning in APO, planning engine reads Production Data Structure (PDS) to create planned orders. To work with SNP planning, SNP PDS needs to be maintained in APO and for PP/DS planning, PP/DS PDS needs to be maintained in APO.



Unlike in the ECC system, time-continuous resources and mixed resources can be defined separately in APO: the former can be used only for time-continuous planning, while the latter can be used for both bucket-oriented and time-continuous planning. Because bucket capacities are calculated for these mixed resources even when a PPDS-type planned order is created, bucket resources are required to be maintained in the Production Data Structure (PDS) as well.


Mixed resources allow SNP and PPDS to have a shared view of the resource schedule. The capacity commitment from PPDS-type orders can also be displayed for SNP on mixed resources. To enable this, the PPDS capacity commitment is converted to a bucket resource reservation.


Interactive supply network planning book and data view functionality can be leveraged for efficient planning of resources by SNP planners. The data view is the most important tool for the SNP planner. Traditionally, SNP planning books are not designed to view PP/DS orders / capacity consumption by PP/DS-type orders because SNP planning works on bucket capacity so it shows SNP orders / capacity consumption by SNP-type orders. In order to make it work for PP/DS orders some additional configuration at master planning object structure (MPOS) & planning book level is required. We will see additional configuration steps in the Solution details section.

Solution Details

·      Master data


Introduction to the PDS in APO


The Production Data Structure (PDS) is the key master data for all kinds of production planning related processes. The PDS is supported by the applications PP/DS, SNP, CTM and DP and can be used for Multi-level ATP and CTP as well. Like the PPM, the PDS corresponds to the production version on ECC side. Though both PPM and PDS are still available as alternatives, there is no further development for the PPM since SCM 4.0.


PDS-Types and Applications


The following figure provides an overview about the different PDS types. Analogous to the PPM, different PDS objects exist for the applications PP/DS, SNP and DP. As a difference to the PPM, the PDS for SNP is transferred directly from ECC and is not generated from the PP/DS master data.



Figure 1: PDS Types and Integration to ECC


  • PDS for PP/DS


The PDS for PP/DS is created from the R/3-production version like the PPM. Most of the developments are focused on the PP/DS functionality of the PDS.


Note: The PP/DS PDS does not have bucket consumption, so there is no capacity consumption relevant for SNP. By SAP standard, only continuous consumption is maintained in the PP/DS PDS by the Core Interface (CIF), the bucket consumption (which is how SNP views capacity) is missing.


Figure 2.png

Figure 3.png


Figure 4.png

Figure 5.png


  • PDS for SNP


Differing from the PPM for SNP, the PDS for SNP is transferred directly from ECC and is not generated from the PP/DS-PDS. The structure of the SNP-PDS is simpler than the structure of the PP/DS-PDS. The structure of the SNP-PDS is used by the applications SNP, CTM and DP.


In a production version it is possible to maintain routings for detailed and rough-cut planning. For PDS transfer from ECC to APO, it is possible to select from which routing the PDS will be generated (differing from the PPM transfer where always the routing for detailed planning was selected). This might be a modelling option for the transfer of SNP-PDS.

Note: Make sure to create a SNP-PDS by transferring through Core Interface. The reason is that while using 'show dependent objects’ option in the SNP planning book only SNP master data is checked.

Figure 6.png



  • PDS for CTM


In CTM, it is possible to choose whether PP/DS or SNP master data should be used. Both cases are supported by the PDS, though in each case the structure of the SNP-PDS is used. The content of the PDS will however differ, depending on whether PP/DS or SNP master data is selected as a basis for CTM. If CTM planning is to be carried out with SNP master data, the regular PDS for SNP is used. If CTM planning is to be carried out with PP/DS, a PDS of the type CTM is generated from the PP/DS-PDS at the point in time of the transfer of the PP/DS-PDS. This CTM-PDS contains a subset of the PP/DS-PDS and uses the structure of the SNP-PDS.



Figure 2 visualizes the creation of the PDS for CTM.




SNP Planning Book and Capacity Data View


Figure 8.png


  • PDS for PP/DS after BADI Implementation

The resource variable consumption is populated for every operation in the PP/DS Production Data Structure (PDS).


Figure 9.png


SNP Planning book and Capacity Data View after BADI Implementation



After BAdI activation and completion of the functional configuration, the available capacity, the capacity consumption by PPDS-type orders and the dependent-object details are visible.

Figure 10.png


·      Configuration


Additional configuration as per below mentioned details is required to see order details for PP/DS orders in SNP capacity planning view:

  a. Assign key figure 9AFPROD to the aggregates 9ARE, 9AMARE and 9AREPR in the planning area. Only then will the confirmed production quantities be shown in the second grid of the capacity view.
  b. Assign the key figure 9AFPROD to grid 2 of the planning book and make it visible via the macro 'layout attributes for resource type'.
  c. To display dependent-object details, an SNP-PDS must be maintained in APO. The reason is that when using the 'show dependent objects' option in the SNP planning book, only SNP master data is checked.


· Enhancement/BADI


The PP/DS PDS does not have bucket consumption, so there is no capacity consumption relevant for SNP. In order to populate the bucket consumption field in the PP/DS production data structure through the standard CIF process, we need to implement the Business Add-In (BAdI) /SAPAPO/CURTO_CREATE, method CIF_IMPORT, and use the example coding of note 657185 (RTO: Sample code for calculating bucket consumptions 4.0).


Sample Core Interface Setup

  • Integration

Create integration models for the resources and the PPM/PDS, using the production version of the routing, as shown below:



  • Activate the Integration model.





Efficient capacity planning of bottleneck resources is one of the primary objectives of every SNP planner in order to effectively support the supply chain planning process of their business organization. With the standard SAP configuration and master data set-up of the SNP and PP/DS planning applications, it is difficult to obtain a unified and accurate view of bottleneck resource capacities in the SNP capacity planning book and data view used by a supply planner, especially in the short to medium-term horizon. After reading this white paper, consultants as well as business users should have a better understanding of the additional configuration, master data set-up and simple enhancement/BAdI that can be used to provide a single, combined and accurate view of bottleneck resource capacities in the SNP capacity planning view, enabling efficient utilization of such resources in the manufacturing process. Consequently, it will help users make informed business decisions and choose the method of capacity leveling best suited to the business requirements.


Abbreviations / Acronyms


APO - Advanced Planning & Optimization

CIF - Core Interface

CTM - Capable to Match

ECC - Enterprise Core Component

PDS - Production Data Structure

PP/DS - Production Planning & Detailed Scheduling

PPM - Production Process Model

SAP - Systems, Applications & Products in Data Processing

SNP - Supply Network Planning


Hi All


SAP came up with an add-in for Excel in 2014. All the documentation is available at the link SAP Advanced Planning and Optimization, Demand Planning Add-In for Microsoft Excel 1.0 – SAP Help Portal Page.


Kindly go through the "Administrator's Guide" and the "Application Help" (Marketplace login needed). The blog below just adds screenshots, tips and tricks which may help consultants. The screenshots are from a test system; for a productive system, security aspects like firewall settings and HTTPS should be considered.


The Excel add-in works only on SCM 7.0 EHP3 and on higher versions of EHP2. On the client side it supports Office 2010 and higher, as explained in the guide.


1. Open transaction SICF. The Maintain Services screen opens.

2. Click the Execute button or press F8.


3. Expand default_host in the service list.

4. Expand sap -> scmapo -> rest -> epm.

5. Double-click demand_plan_srv. The Create/Change a Service screen opens.


This is only available in SCM 7.0 EHP3 and on higher versions of EHP2 (please refer to the guide).


6. If the service is active, you can see Service (Active) written next to the service name. If you cannot see this text displayed, continue with the next step.

7. Click the Back button.

8. Right-click demand_plan_srv in the service list.

9. Select Activate Service from the context menu that appears.

10. Choose Yes in the dialog box that opens.


Now test the service




A new window should open with a logon screen.



The URL will be in the format http://[host]:[port]/sap/scmapo/rest/epm/demand_plan_srv/


If the window does not appear, then a few ICM settings are missing in RZ10, and this has to be checked by the Basis team.

The three ICM settings are shown below.


rz10_for this.PNG


At the client side

The Excel add-in is available in the Marketplace, and download access will be needed to get this file. It has multiple prerequisites, like specific versions of .NET and MS Office, as explained in the guide.



After installation, we can see a new add-in in Excel.


Click the Log On button.

excel first.PNG

Create a new connection for the required planning book and data view, as shown.



Data in the planning book



Now, based on the layout needed, click Format and adjust it.


The row, column and page axes need to be set as per the requirement.








After all this, the report should be able to pull data from the planning book, as shown below.


If planning book numbers need to be changed from Excel, then activate the below option.


After this, if numbers are changed in Excel and the "Upload data" button is clicked, the numbers are changed in the planning book, as shown below.

update data1.PNG

update data2.PNG



There is a large number of options explained in the guide, such as read-only settings, color formatting in Excel, and formulas. They need to be configured as per business requirements.



Most demand planners work in Excel and will prefer working in Excel to working in the planning book.




Supported only in SCM 7.0 EHP3 and higher versions of EHP2.

Planners may expect all the data in Excel and may try to avoid planning books altogether.

The impact of all the default/start/level change/exit macros needs to be tested.

Security aspects may be involved. If we compare this with a BW BEx report, BEx gives access only to read data, whereas here it is read/write.

Performance with large selections needs to be tested.

Managing forecast consumption settings can be a tricky, error-prone activity. We asked SCM 2015 speaker Sean Mawhorter of SCM Connections to host an online Q&A to address users' questions and share tips on optimizing forecasts for more accurate demand and supply planning results. Check out an excerpt from this Q&A session and see whether your questions were answered.


Sean will be one of the featured speakers at SAPinsider’s SAP SCM 2015 conference in Las Vegas, March 30 – April 1. For more info on the event visit our website at Logistics & SCM, PLM, Manufacturing, and Procurement 2015 and follow us on Twitter @InsiderSCM

Comment from Rohit

We changed the packaging specifications for a couple of SKUs and gave them a new material number. How do we forecast for these new materials using the forecast of the old ones? We have tried life cycle planning, but the demand planners think it is too tedious to maintain all the entries. Is there any option other than life cycle planning and realignment/copy?


Sean Mawhorter: Unfortunately, there are not too many more options to address these requirements (although they are fairly common). One option is to copy the sales history from the old item to the new item, but that can throw off your aggregate sales history. Plus, it’s a manual process.


Comment From Pavani

Can we assign a custom category to TLB order STO in APO system, so that we can influence consumption of forecast by this custom category?


Sean Mawhorter: It is possible to set this up in the consumption configuration. It is a combination of the requirements planning strategy and ATP categories and assignments.


Comment From Anna

Do you have any recommendations on when to use backward vs forward consumption methods and the time frame for the orders to be considered?


Sean Mawhorter: This is a great question, but would require a long answer. Short answer is that a combination of the sales order volume/quantities, order drift, and forecast bias should lead to segregation of your materials and/or material-location combinations and backward/forward settings should be established and managed using these groups.


Comment From Rangarajan

Can you explain order drift?


Sean Mawhorter: Order drift is the propensity for an order to be in a different bucket than its intended forecast. An indicator of this can be where the forecast accuracy for a single bucket (month) is usually much lower than the forecast accuracy of a multi-bucket window (e.g., 3 months). So timing is more the issue than the quantity itself.


Comment From Rangarajan

Can we maintain different forecast consumption settings in SNP for each material?


Sean Mawhorter: This is standard functionality. The consumption parameters are set in the product master at the product-location level.


Comment From Suat

Consumption of different forecasts: If we have more than one forecast line (50 pcs and 100 pcs, for example) in SNP relevant for consumption by a sales order of 120 pcs, how can we fully consume the first one and partly consume the second one? The reason why we have more than one forecast figure is that we want to add specific characteristics to represent priorities, for example.


Sean Mawhorter: Have you investigated the use of extra forecast characteristics for this one (i.e. adding priority as an additional characteristic to the customer to be used in the consumption)?


Comment From André

How can we use forecast and safety stocks together to increase the SNP plan quality?


Sean Mawhorter: This is an area where many are confused as to what dial to turn when...
Safety stock is really the dial to turn to adjust inaccuracies in the forecast quantity itself, versus consumption settings, which are used to address inaccuracies in the timing of that forecast.
There are many variables that can affect these; the key is knowing which dial to turn when. ;-)


Comment From Rangarajan

Can forecast consumption work other than the Active 000 Planning version?


Sean Mawhorter: It can, but you need to use the SAVE_MULTI BAPI to create the sales orders in the other version. Sales orders are normally sourced from ECC and only populate version 000 by default.



Product Split:

The explanation, no doubt, is covered in more detail in SAP Help. Quick points to remember (also found in SAP Help):


  1. This is basically used to replace the demand of one product to another (or others) during the release.
  2. You can have this split location-specific or cross-location (for this, do not maintain any entry for ‘Location’ in the product split table).


The date entered in the ‘Supply Date’ field gives the date from which the system is to take only the stock of the new product into account. In any case, this can also be controlled by a supersession chain under ‘Product Interchangeability’; my personal recommendation: do not maintain this field!
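As a rough illustration of point 1, a product split could be sketched like this in Python (the proportions and names are made up; in APO the split is configured in the product split table, not coded):

```python
# Minimal sketch of a product split during release: the demand of one
# product is replaced by its successor products, per maintained proportions.
def product_split(demand, split):
    """demand: product -> quantity.
    split: old product -> list of (new_product, proportion)."""
    result = {}
    for product, qty in demand.items():
        if product in split:
            for new_product, proportion in split[product]:
                result[new_product] = result.get(new_product, 0) + qty * proportion
        else:
            result[product] = result.get(product, 0) + qty
    return result

dp_demand = {"OLD_SKU": 100, "OTHER": 50}                      # made-up demand
split_table = {"OLD_SKU": [("NEW_SKU_A", 0.7), ("NEW_SKU_B", 0.3)]}
print(product_split(dp_demand, split_table))
```

The demand of OLD_SKU disappears and reappears on the two successor products; products without a split row pass through unchanged.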



Location Split:

You don’t have ‘Location’ as a characteristic in Demand Planning, but you will have to release the demand at product/location level, as a supply plan can only be created considering location along with product. It is in this case that you use the location split.


In fact, this can also be used in case you don’t use 9ALOCNO as your location characteristic but some custom info object. This means you will have to define this info object in the release profile (or transfer profile).


  1. If the split has to be maintained for all the products at a specific location, you can set Product = <<blank>>.
  2. The “valid to” field can be used in case you do not want the split to apply after a certain date.
  3. The proportions should be maintained between 0 and 1, by the way.
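Those three rules could be sketched roughly as follows (the row layout — product, location, proportion, valid-to — and all names are my own for illustration, not APO’s):

```python
from datetime import date

# Sketch of a location split during release. A row with product=None
# stands for Product = <<blank>>, i.e. it applies to all products;
# a row past its 'valid to' date is ignored.
def location_split(product, qty, release_date, split_rows):
    """split_rows: list of (product_or_None, location, proportion, valid_to)."""
    result = {}
    for row_product, location, proportion, valid_to in split_rows:
        if row_product not in (None, product):
            continue
        if valid_to is not None and release_date > valid_to:
            continue  # the split no longer applies after 'valid to'
        result[location] = result.get(location, 0) + qty * proportion
    return result

rows = [
    (None, "PLANT_A", 0.5, None),                # all products, no expiry
    (None, "PLANT_B", 0.5, date(2015, 12, 31)),  # expires end of 2015
]
print(location_split("PROD1", 200, date(2015, 6, 1), rows))
# → {'PLANT_A': 100.0, 'PLANT_B': 100.0}
```

Releasing after 2015 would drop the PLANT_B row, leaving only the PLANT_A share.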

(Screenshot: location split maintenance table — loc split.JPG)

Period Split:

You use this if you have different storage periodicities between DP (say, months) and SNP (say, weeks). During the release, the monthly quantities of DP are disaggregated to weeks as per the proportions you maintained in the distribution function.


First, you create a distribution function (/SAPAPO/DFCT) and then you use this in period split profile (/SAPAPO/SDP_SPLIT).
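The weekly disaggregation can be sketched like this (the proportions below are made up; in APO they come from the distribution function maintained in /SAPAPO/DFCT):

```python
# Sketch of a period split: a monthly DP quantity is disaggregated to
# weeks according to the proportions of a distribution function.
def period_split(month_qty, week_proportions):
    # the proportions of a distribution function must add up to 1
    assert abs(sum(week_proportions) - 1.0) < 1e-9, "proportions must sum to 1"
    return [month_qty * p for p in week_proportions]

# e.g. a month of 400 units split over 4 weeks, front-loaded
print(period_split(400, [0.4, 0.3, 0.2, 0.1]))
```

The weekly quantities always add back up to the monthly total; only the distribution within the month changes.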



The period split has the below options (all self-explanatory).


This document gives an idea of how to develop programs that automate the creation of profiles and the maintenance of mass assignments.



It is very time-consuming and cumbersome to create the time series ID and like ID manually, and then maintain the data in the mass assignment table.


The programs are created to upload the data into the below tables, with an option of full upload and delta upload. The file is placed into the data load directory; the program picks up these files and updates the tables.

The main focus of the programs is automation of profile maintenance, so as to improve ease of use rather than changing the functionality itself.

The basic characteristics against which the maintenance occurs are product and location. Maintenance against other characteristic levels, such as material group or forecast area, is not anticipated. Note that if changes and maintenance are anticipated at a conceptually “higher” level than product and location, the system settings can be adjusted. However, the programs assume maintenance at the product/location level.


The programs use the input from the spreadsheets to generate the following:


  • Generate phase-in and/or phase-out profiles with a direct upload into the table.
  • Generate like profiles.
  • Create, modify and delete assignments for product/location combinations for the demand planning area, with a direct upload into the mass assignment table.


Three separate programs need to be developed.


During the upload process, the system should perform several validation and consistency checks, and error and warning reports need to be generated.




Example formats of the files to be maintained:








  • Start date and end date should be in DD/MM/YYYY format.
  • The profile name (column B) should be in CAPITAL letters.
  • Phase-in: should be “Before start date, apply constant factor”, and the % part should be ZERO.
  • Phase-out: should be “After end date, apply constant factor”, and the % part should be ZERO.
  • Maximum upload limit = 60,000.
  • Total characters: time series column = 22, description = 40.
  • The upload should be a full upload, as a delta upload creates a lot of inconsistency.
  • Either % or values can be specified for phase-in and phase-out profiles, but not both. If both are specified,
    the program uses the values and ignores the %, generating an error/warning message during the upload step.
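A rough sketch of some of these checks in Python (the row layout and field names below are assumptions based on the rules above, not the actual program):

```python
from datetime import datetime

# Sketch of per-row validation for the phase-in/out upload file.
def validate_phase_row(row):
    """row: dict with keys 'profile', 'start', 'end', 'percent', 'value'."""
    errors = []
    if row["profile"] != row["profile"].upper():
        errors.append("Profile name must be in CAPITAL letters")
    if len(row["profile"]) > 22:
        errors.append("Time series name longer than 22 characters")
    for field in ("start", "end"):
        try:
            datetime.strptime(row[field], "%d/%m/%Y")  # DD/MM/YYYY check
        except ValueError:
            errors.append(f"{field} date not in DD/MM/YYYY format")
    if row["percent"] is not None and row["value"] is not None:
        # per the rules above: values win, % is ignored with a warning
        errors.append("Both % and value specified: value used, % ignored")
    return errors

row = {"profile": "ZPHASEIN1", "start": "01/04/2015", "end": "31/12/2015",
       "percent": None, "value": 0.5}
print(validate_phase_row(row))  # → [] (no findings for a clean row)
```

The real program would run such checks for all rows (up to the 60,000-row limit) and collect the findings into the error/warning report.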




2. LIKE:




  • The like profile (column C) should always be in CAPITAL letters.
  • A combination of reference values can be uploaded in one profile (in the like profile definition); the maximum is 10 reference values.
  • Maximum upload limit = 60,000.
  • Total characters: like profile column = 10, description = 40.








  • During the upload to the mass assignment table, it is always better to clean the data first and then do a full upload (there could be inconsistencies during a delta upload).
  • The from and to dates should be the same as those used in the phase-in/out file.
  • Maximum upload limit = 60,000.



Data gets uploaded in to the following tables:



Like Tables


/SAPAPO/T445LIKK – Header Table, Creates a LIKE ID


/SAPAPO/T445LIKE – LIKE ID Created Above is linked to the Like Profile


The link between these two tables is the GUID (LIKE ID).



Phase-in/out tables (these do not contain any GUID):

/SAPAPO/TIMETEXT – Header table.


/SAPAPO/TIMESERI – contains the Factors.


/SAPAPO/TSPOSVAL – Contains the Values


During a direct upload, it is very important to take care of the ‘created on/by’ and ‘changed on/by’ fields in the table.








Here you have the LIKE ID (GUID) assigned to the like profile.
The program should link the LIKE ID with the LIKE GUID from table
/SAPAPO/T445LIKK and write it into the mass assignment table.


  1. Use FM /SAPAPO/TS_PLOB_LIST_GET to read the CVC values from the POS.
  2. Map the Like ID from the /SAPAPO/T445LIKK table while generating the
    mass assignment profile.
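The linking step could be sketched roughly as follows (the dictionaries stand in for the table read on /SAPAPO/T445LIKK and the CVC list from /SAPAPO/TS_PLOB_LIST_GET; the GUID value is made up):

```python
# Sketch of linking each CVC's like profile to its GUID (LIKE ID)
# before writing the mass assignment table.
def build_mass_assignments(cvcs, like_header):
    """cvcs: list of (product, location, like_profile).
    like_header: like_profile -> LIKE GUID (read from /SAPAPO/T445LIKK)."""
    rows, errors = [], []
    for product, location, profile in cvcs:
        guid = like_header.get(profile)
        if guid is None:
            # consistency check: profile must exist in the header table
            errors.append(f"No LIKE ID found for profile {profile}")
            continue
        rows.append({"product": product, "location": location,
                     "like_id": guid})
    return rows, errors

like_header = {"ZLIKE01": "4C2A0001"}  # made-up GUID
rows, errors = build_mass_assignments(
    [("MAT1", "PLANT_A", "ZLIKE01"), ("MAT2", "PLANT_A", "ZLIKE99")],
    like_header)
print(rows, errors)
```

A missing like profile lands in the error report instead of the assignment table, matching the validation-and-report behavior described above.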





Example of the screens of the programs:















Please note that you should run these programs in the following sequence.





Well, let's first see the mystery: you currently have the below content in the planning book, which you have downloaded to a file:

Figure 1


In the meantime, let's assume you have changed the Prod1 value to 60, thus making the total 120.

Figure 2


Ideally, you download the file to make some changes and then upload it. Let's assume you didn't change anything for this example (for easy understanding) and start uploading the values from the file to the planning book. You expect the values in Figure 1 to get updated in the planning book. But what you end up with is something like the below:


This is the mystery I referred to, and yes: you will understand this and will be able to resolve it in a short time.


The fact is: the behavior is correct. Note 1404581 explains this; let me put the note's explanation in a simple way. The sequence of the behavior, as per the note, is:


  1. The total value is first compared between the file and the planning book. In case of any discrepancy, the value from the file updates the internal table, which finally updates the 'Total' value in the database.
  2. Considering the new 'Total' value, the internal disaggregation to the product level (considering our example) happens as per the current situation of the planning book (and, of course, considering the calculation type of the key figure; don't get into that here, and understand it as 'pro-rata' for now for easy understanding). These disaggregated values are stored in temporary internal tables but not in the database.
  3. The comparison now happens at the detailed level between the file and the current planning book values. These values are updated to the internal table as well.


Once these three activities are completed, the internal table is finally committed and the values are updated in the database. It is this sequence which creates the confusion for consultants. Let's understand this pictorially:

Points 1, 2: The file has 100 as the total value while the planning book has 120. So, the change happens from 120 to 100. Since the total is now changed, the disaggregation to the detailed level happens considering the new value of 100, as per the previous disaggregated situation.



Point 3:

Current situation of planning book says Prod1 = 60, and Prod2 = 60.

File says Prod1 = 40 and Prod2 = 60.


So, the value changes for Prod1. This change still occurs in the internal calculation. And this means the total value is also impacted. The below picture shows it:


And it is these values which finally get committed, creating confusion for those who upload files to a planning book.
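The three-step sequence above can be sketched as a toy simulation (this is my reading of note 1404581, with the example's numbers and a simplified pro-rata rule; not the actual liveCache logic):

```python
# Toy simulation of the upload sequence: total first, pro-rata
# disaggregation second, detail comparison against the *book* third.
def simulate_upload(book, file):
    """book/file: dicts of detail values, e.g. {'Prod1': 60, 'Prod2': 60}."""
    book_total, file_total = sum(book.values()), sum(file.values())
    internal = dict(book)
    # Steps 1+2: if totals differ, take the file total and disaggregate
    # it pro-rata over the current planning-book situation.
    if file_total != book_total:
        internal = {k: v * file_total / book_total for k, v in book.items()}
    # Step 3: details are compared against the planning-book values;
    # only details that differ from the book are overwritten from the file.
    for k in internal:
        if file[k] != book[k]:
            internal[k] = file[k]
    return internal, sum(internal.values())

book = {"Prod1": 60, "Prod2": 60}   # current planning book, total 120
file = {"Prod1": 40, "Prod2": 60}   # downloaded file, total 100
detail, total = simulate_upload(book, file)
print(detail, total)  # → {'Prod1': 40, 'Prod2': 50.0} 90.0
```

Prod1 comes from the file (it differed from the book), but Prod2 keeps its disaggregated value of 50 because the file value matched the book, so the committed total is 90 instead of the expected 100: exactly the mystery.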


A point to note: file upload always happens to the key figures which are not read-only. Any read-only key figure will not be impacted by the file upload. By default, the 'Total' in the data view is always in edit mode. But with macro functions, you can make it read-only.

Let's assume we made it read-only in the above situation. Even so, during the file upload, the 'Total' is considered and changed. This is an SAP bug. The fix for this is note 1975441, which makes sure that key figures which are read-only are not considered during the upload.

If you consider the above scenario in the case where the 'Total' is not considered during the upload, all goes well and the file upload happens as expected. In fact, what is not understood is why SAP has given us the option of overwriting only 'Total' and 'Aggregated level' during the upload of the file, but not 'Detailed level'. Had they given this option, the workaround for our DP planners who regularly upload files would have been to simply select the 'Detailed level' radio button during the upload.


By the way, in case you are not able to understand what 'Aggregated level' means: it's just that point 3 doesn't happen.

