Managing forecast consumption settings can be a tricky, error-prone activity. We asked SCM 2015 speaker Sean Mawhorter of SCM Connections to host an online Q&A to address users' questions and share tips on optimizing forecasts for more accurate demand and supply planning results. Check out an excerpt from this Q&A session and see whether your questions were answered.


Sean will be one of the featured speakers at SAPinsider’s SAP SCM 2015 conference in Las Vegas, March 30 – April 1. For more info on the event visit our website at Logistics & SCM, PLM, Manufacturing, and Procurement 2015 and follow us on Twitter @InsiderSCM

Comment from Rohit

We changed the packaging specifications for a couple of SKUs and gave them a new material number. How do we forecast for these new materials using the forecast of the old ones? We have tried life cycle planning, but the demand planners think it is too tedious to maintain all the entries. Is there any other option other than life cycle planning and realignment/copy?


Sean Mawhorter: Unfortunately, there are not too many more options to address these requirements (although they are fairly common). One option is to copy the sales history from the old item to the new item, but that can throw off your aggregate sales history. Plus, it’s a manual process.


Comment From Pavani

Can we assign a custom category to a TLB order STO in the APO system, so that we can influence consumption of forecast by this custom category?


Sean Mawhorter: It is possible to set this up in the consumption configuration. It is a combination of the requirements planning strategy and ATP categories and assignments.


Comment From Anna

Do you have any recommendations on when to use backward vs forward consumption methods and the time frame for the orders to be considered?


Sean Mawhorter: This is a great question, but would require a long answer. Short answer is that a combination of the sales order volume/quantities, order drift, and forecast bias should lead to segregation of your materials and/or material-location combinations and backward/forward settings should be established and managed using these groups.
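For readers unfamiliar with the mechanics, backward/forward consumption can be sketched roughly as follows (a simplified illustration with made-up figures and window lengths, not SAP's actual algorithm):

```python
from datetime import date, timedelta

def consume_forecast(forecast, order_date, order_qty, backward_days, forward_days):
    """Illustrative backward-then-forward forecast consumption.

    `forecast` maps period start dates to open forecast quantities.
    Periods in the backward window are consumed first (most recent first),
    then periods in the forward window.
    """
    remaining = order_qty
    backward = sorted(
        (d for d in forecast
         if order_date - timedelta(days=backward_days) <= d <= order_date),
        reverse=True)
    forward = sorted(
        d for d in forecast
        if order_date < d <= order_date + timedelta(days=forward_days))
    for d in backward + forward:
        consumed = min(remaining, forecast[d])
        forecast[d] -= consumed
        remaining -= consumed
        if remaining == 0:
            break
    return remaining   # unconsumed quantity becomes additional demand

forecast = {date(2015, 3, 1): 50, date(2015, 4, 1): 50}
left = consume_forecast(forecast, date(2015, 4, 10), 80,
                        backward_days=30, forward_days=30)
print(forecast, left)   # March forecast untouched (outside window), 30 left over
```

With a 30-day backward window, only the April forecast is reachable from a 10 April order, so 30 pieces remain unconsumed — exactly the kind of outcome the window settings are meant to tune.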


Comment From Rangarajan

Can you explain order drift?


Sean Mawhorter: Order drift is the propensity for an order to be in a different bucket than its intended forecast. An indicator of this can be when the forecast accuracy for a single bucket (month) is usually much lower than the forecast accuracy of a multi-bucket window (e.g., 3 months). So timing is more the issue than the quantity itself.
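That indicator can be sketched numerically (made-up figures, not an SAP calculation):

```python
# Illustrative order-drift check: forecast error per single month vs. a
# rolling 3-month window. Quantities are right over the window; only the
# timing drifts between buckets.
forecast = [100, 100, 100, 100, 100, 100]
actuals  = [ 60, 150,  90,  40, 160, 100]   # timing shifts between months

def mape(f, a):
    """Mean absolute percentage error against the forecast."""
    return sum(abs(fi - ai) / fi for fi, ai in zip(f, a)) / len(f)

def windowed(series, w):
    """Sum a series into consecutive buckets of width w."""
    return [sum(series[i:i + w]) for i in range(0, len(series), w)]

single_bucket_error = mape(forecast, actuals)
three_month_error = mape(windowed(forecast, 3), windowed(actuals, 3))
print(single_bucket_error, three_month_error)
# High single-month error, zero 3-month error: the classic drift signature.
```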


Comment From Rangarajan

Can we maintain different forecast consumption settings in SNP for each material?


Sean Mawhorter: This is standard functionality. The consumption parameters are set in the product master at the product-location level.


Comment From Suat

Consumption of different forecasts: If we have more than one forecast line (50 pcs and 100 pcs, for example) in SNP relevant for consumption by a sales order of 120 pcs, how can we find a way to fully consume the first one and partly the second one? The reason why we have more than one forecast figure is that we want to add specific characteristics to represent priorities, for example.


Sean Mawhorter: Have you investigated the use of extra forecast characteristics for this one (i.e. adding priority as an additional characteristic to the customer to be used in the consumption)?


Comment From André

How can we use forecast and safety stocks together to increase the SNP plan quality?


Sean Mawhorter: This is an area where many are confused as to what dial to turn when...
Safety stock is really the dial to turn to adjust inaccuracies in the forecast quantity itself, versus consumption settings, which are used to address inaccuracies in the timing of that forecast.
There are many variables that can affect these; the key is knowing which dial to turn when. ;-)


Comment From Rangarajan

Can forecast consumption work in versions other than the active 000 planning version?


Sean Mawhorter: It can, but you need to use the save_multi BAPI to create the sales orders in the other version. Sales orders are normally sourced from ECC and only populate version 000 by default.


To view the rest of the transcript, click here.

Product Split:

The explanation, no doubt, is in more detail in SAP Help. Quick points to remember (also there in SAP Help):


  1. This is basically used to replace the demand of one product to another (or others) during the release.
  2. You can have this split location-specific or cross-location (for this, do not maintain any entry for ‘Location’ in the product split table).


The date entered in the ‘Supply Date’ field gives the date from which the system is to take only the stock of the new product into account. In any case, this can be controlled by the supersession chain under ‘Product Interchangeability’ – my personal recommendation: do not maintain this field!



Location Split:

You don’t have ‘Location’ as a characteristic in Demand Planning, but you will have to release the demand at the product/location level – as supply planning can only happen considering the location along with the product. It is in this case that you use the location split.


In fact, this can also be used in case you don’t use 9ALOCNO as your location characteristic but some custom InfoObject. This means you will have to define this InfoObject in the release profile (or transfer profile).


  1. If the split has to be maintained for all the products at a specific location, you can set Product = <<blank>>.
  2. The “valid to” field can be used in case you do not want to go for the split after a certain date.
  3. The proportions maintained should be between 0 and 1, by the way.
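Applying a location split is then straightforward; a minimal sketch with hypothetical product, locations, and proportions (not the actual release logic):

```python
# Hypothetical location split: demand released at product level is split to
# locations by the maintained proportions (each between 0 and 1).
split_table = {"PROD_A": {"LOC_1": 0.6, "LOC_2": 0.4}}

def release(product, qty):
    """Split a released product-level quantity across locations."""
    proportions = split_table[product]
    assert all(0 <= p <= 1 for p in proportions.values()), "proportions must be 0..1"
    return {loc: qty * p for loc, p in proportions.items()}

print(release("PROD_A", 500))   # roughly {'LOC_1': 300.0, 'LOC_2': 200.0}
```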


Period Split:

You use this if you have different storage periodicities between DP (say, months) and SNP (say, weeks). During the release, the monthly quantities of DP are disaggregated to weeks as per the proportions you maintained in the distribution function.


First, you create a distribution function (/SAPAPO/DFCT) and then you use this in period split profile (/SAPAPO/SDP_SPLIT).
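The disaggregation itself can be sketched as follows (a simplified illustration with assumed proportions, not the actual distribution-function logic):

```python
# Sketch of a period split: a monthly DP quantity disaggregated to weeks
# using distribution-function proportions (figures are assumptions).
def period_split(month_qty, week_proportions):
    """Distribute a monthly quantity over weeks, normalizing proportions."""
    total = sum(week_proportions)
    return [month_qty * p / total for p in week_proportions]

# A month covering 4 weeks with an uneven distribution function:
weeks = period_split(1000, [0.2, 0.3, 0.3, 0.2])
print(weeks)   # roughly [200.0, 300.0, 300.0, 200.0]
```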



The period split has below options (all self-explanatory).


This document gives an idea of how to develop programs to automate the creation of profiles and maintain the mass assignment.



It becomes very time-consuming and cumbersome to create the time series ID and Like ID and then maintain the data in the mass assignment table.


The programs are created to upload the data into the tables below, with the option of a full upload or a delta upload. The file is placed into the data load location; the program picks up these files and updates the tables.

The main focus of the program is automation of profile maintenance so as to improve ease-of-use, rather than changing the functionality itself.

The basic characteristics against which the maintenance will occur are product and location. Maintenance against other characteristic levels, such as material group or forecast area, is not anticipated. Note that if changes and maintenance are anticipated at a conceptually “higher” level than product and location, then system settings can be adjusted. However, the program assumes maintenance at the product-location level.


The Program uses the input from the spreadsheets to generate the following profiles:


  • Generate phase-in and/or phase-out profiles with a direct upload into the table.
  • Generate like profiles.
  • Create, modify, and delete assignments for product-location combinations for the demand
    planning area with a direct upload into the mass assignment table.


  • Three separate programs need to be developed.


During the upload process, the system should perform several validation and consistency checks, and error and warning reports need to be generated.




Example Formats of File to be maintained:








  • Start date and end date should be in DD/MM/YYYY format.
  • The profile name (Column B) should be in CAPITAL letters.
  • Phase-in: should be "Before start date, apply constant factor", and the % part should be ZERO.
  • Phase-out: should be "After end date, apply constant factor", and the % part should be ZERO.
  • Maximum upload limit = 60,000.
  • Total characters: time series column = 22 and description = 40.
  • The upload should be a full upload, as a delta upload creates a lot of inconsistency.
  • Either % or values for phase-in and phase-out profiles can be specified, but not both. If both are specified,
    the program uses the values and ignores the %, generating an error/warning message during the upload step.
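The file rules above translate into straightforward validation logic; a hypothetical sketch of checking one upload row (field names and the row layout are assumptions):

```python
from datetime import datetime

# Hypothetical validation of one phase-in/out upload row, following the
# rules above (date format, capitalized profile name, length limits,
# %/values exclusivity).
def validate_row(row):
    errors = []
    for field in ("start_date", "end_date"):
        try:
            datetime.strptime(row[field], "%d/%m/%Y")
        except ValueError:
            errors.append(f"{field} not in DD/MM/YYYY format")
    if row["profile_name"] != row["profile_name"].upper():
        errors.append("profile name must be in capital letters")
    if len(row["time_series"]) > 22 or len(row["description"]) > 40:
        errors.append("time series/description exceed length limits")
    if row.get("percent") and row.get("values"):
        errors.append("specify either % or values, not both")
    return errors

row = {"start_date": "01/04/2015", "end_date": "31/12/2015",
       "profile_name": "PHASEIN_01", "time_series": "TS_PHASEIN_01",
       "description": "Phase-in for new SKU", "percent": None, "values": [10, 20]}
print(validate_row(row))   # an empty list: the row passes all checks
```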




2. LIKE:




  • The like profile (Column C) should always be in CAPITAL letters.
  • A combination of reference values can be uploaded in one profile (in the like profile definition); the maximum is 10 reference values.
  • Maximum upload limit = 60,000.
  • Total characters: like profile column = 10 and description = 40.








  • During the upload to the mass assignment table, it is always better to clean the data first and then do a full upload (there could be inconsistencies during a delta upload).
  • The from and to dates should be the same as those used in the phase-in/out file.
  • Maximum upload limit = 60,000



Data gets uploaded in to the following tables:



Like Tables


/SAPAPO/T445LIKK – Header Table, Creates a LIKE ID


/SAPAPO/T445LIKE – LIKE ID Created Above is linked to the Like Profile


The link between these 2 tables is the GUID (LIKE ID)



Phase in out Tables: (these do not contain any GUID)

/SAPAPO/TIMETEXT – Header table.


/SAPAPO/TIMESERI – contains the Factors.


/SAPAPO/TSPOSVAL – Contains the Values


During a direct upload it is very important to read the 'created on/by' and 'changed on/by' fields in the table.








Here you have the Like ID (GUID) assigned to the like profile.
The Program should link the LIKE ID with the LIKE GUID from the table
/SAPAPO/T445LIKK and write in to the mass assignment table.


  1. Use FM /SAPAPO/TS_PLOB_LIST_GET to read CVC values from POS.
  2. Map the Like ID from the /SAPAPO/T445LIKK Table while generating the
    mass assignment profile





Example of the screens of the programs:















Please note that you should run these programs in the following sequence.





Well, let's first see the mystery - you currently have the below content in the planning book, which you have downloaded to a file:

Figure 1


In the meantime, let's assume you have changed the Prod1 value to 60, thus making the total 120.

Figure 2


Ideally, you download the file to make some changes and then upload it. Let's assume you didn't change anything for this example (for easy understanding) and start uploading the values from the file to the planning book. You expect the values in Figure 1 to get updated in the planning book. But what you end up with is something like the below:


This is the mystery I referred to, and yes - you will understand this and will be able to resolve it in a short time.


The fact is: the behavior is correct - note 1404581 explains this - and let me put the note's explanation in a simple way. The sequence of the behavior, as per the note, is:


  1. The total value is first compared between the file and the planning book, and in case of any discrepancy - the value of the file updates the internal table which finally updates the values to the database for the 'Total' value.
  2. Considering the new 'Total' value, the internal dis-aggregation to the product level (considering our example) happens as per the current situation of the planning book (and of course considering the calculation type of the key-figure; don't get into that here - understand it as 'pro-rata' for now). These dis-aggregated values are stored in temporary internal tables but not in the database.
  3. The comparison now happens for the detailed level between the file and the current planning book values. The values are as well updated to this internal table.


Once these three activities are completed, the internal table is finally committed and the values are updated in the data base. It is this sequence which creates the confusion for the consultants. Let's understand this pictorially:

Points 1, 2: The file has 100 as the total value while the planning book has 120. So, the change happens from 120 to 100. Since the total has now changed, the dis-aggregation to the detailed level happens considering the new value of 100 as per the previous dis-aggregated situation.



Point 3:

Current situation of planning book says Prod1 = 60, and Prod2 = 60.

File says Prod1 = 40 and Prod2 = 60.


So, the value changes for Prod1. This change still occurs in the internal calculation. And this means the total value is also impacted. The picture below shows it:


And it is these values which finally get committed, creating confusion for those who upload files to a planning book.
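The three-step sequence can be simulated in a few lines (a simplified illustration using the example figures, not the actual SAP internals):

```python
# Simulation of the note 1404581 upload sequence with the example figures.
book = {"Prod1": 60, "Prod2": 60}        # current planning book (Figure 2)
file_vals = {"Prod1": 40, "Prod2": 60}   # uploaded file (Figure 1)

book_total = sum(book.values())          # 120
file_total = sum(file_vals.values())     # 100

# Step 1: totals are compared; on discrepancy the file total wins.
internal = dict(book)
if file_total != book_total:
    # Step 2: the new total is disaggregated pro-rata over the current
    # book values -> Prod1 = 50, Prod2 = 50, held in internal tables only.
    internal = {p: v * file_total / book_total for p, v in book.items()}

# Step 3: detail lines are compared against the *current book* values (not
# the freshly disaggregated ones); only differing lines are overwritten.
for product, file_value in file_vals.items():
    if file_value != book[product]:
        internal[product] = file_value

print(internal, sum(internal.values()))
# -> Prod1 = 40, Prod2 = 50, total 90: not the 40/60/100 the planner expected
```

Prod2 matches the book (60 = 60), so its internally disaggregated value of 50 is kept, and the committed total drops to 90.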


A point to note: file upload always happens to the key-figures which are not read-only. Any read-only key-figure will not be impacted by the file upload. By default, the 'Total' in the data view is always in edit mode. But with macro functions, you can make it read-only.

Let's assume we made it read-only in the above situation. Yet, during the file upload, the 'Total' is still considered and changed. This is an SAP bug. The fix for this is note 1975441, which makes sure that read-only key-figures are not considered during the upload.

If you think through the above scenario with the 'Total' not considered during the upload, all goes well and the file upload happens as expected. In fact, what is not understood is why SAP has given us the option of overwriting only 'Total' and 'Aggregated level' during the file upload but not 'Detailed level' - had they given this option, the workaround we would have asked of our DP planners who regularly upload files is to simply select the 'Detailed level' radio button during the upload.


By the way, in case you are not able to understand what 'Aggregated Level' means - it's just that point 3 doesn't happen.

Just wanted to share the behavior of CTM – comments/corrections are welcome


I don’t want to explain more here – I just would like to show it to you in a simple manner. The only thing you need to know is: CTM goes deeper and deeper until it finds the required receipt element (say, stock) to negate the demand. If nothing is found, it ends up creating a planned order at the plant (based on your master data selection, of course).


Supply Chain:


CTM Behaviour:


But then, you may not appreciate this behavior of CTM. For example, even if there is enough stock to fulfill the demand at R2, it still searches down to PLANT1. The ideal/expected behavior would be:


search for R1; if not found, go for R2.

If both R1 and R2 fail, go for WH1, and then to WH2, and so on...
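The two search orders can be contrasted in a small sketch (a greatly simplified model with a hypothetical network and stock figures, not CTM's actual algorithm):

```python
# Simplified supply chain: each node lists its sources, plant at the bottom.
supply_chain = {
    "CUSTOMER": ["R1", "R2"],
    "R1": ["WH1"],
    "R2": ["WH2"],
    "WH1": ["PLANT1"],
    "WH2": ["PLANT1"],
}
stock = {"R1": 0, "R2": 80, "WH1": 0, "WH2": 0, "PLANT1": 0}

def depth_first(node, qty):
    """Default behavior (simplified): descend into the first source path
    until stock is found; at the plant, create a planned order."""
    if stock.get(node, 0) >= qty:
        return node, "stock"
    sources = supply_chain.get(node, [])
    if not sources:
        return node, "planned order"   # reached the plant: nothing found
    return depth_first(sources[0], qty)

def level_by_level(node, qty):
    """Expected behavior: check every node of one level before going deeper."""
    level = supply_chain.get(node, [])
    while level:
        for n in level:
            if stock.get(n, 0) >= qty:
                return n, "stock"
        level = [s for n in level for s in supply_chain.get(n, [])]
    return "PLANT1", "planned order"   # fallback at the plant

print(depth_first("CUSTOMER", 50))     # ends at PLANT1 despite stock at R2
print(level_by_level("CUSTOMER", 50))  # finds the stock at R2 immediately
```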


Basically, the expectation is: let it search node by node - why go deeper into a node? Right! You can force CTM to behave this way - all you need to do is activate the below check-box.


But then, for this to work you need to activate the corresponding business function, which you can find at the below place in SPRO - by the way, activate it at your own risk (you need to assess the impact before you do, such as how it affects the existing profiles).


Well, that's what SUM( ) sometimes does. It doesn't perform its duty correctly when you use multiple auxiliary key-figures in macro steps and then use SUM( ) on these auxiliary key-figures.


This was identified through some unexpected results in one of the big macros we had, and was reported to SAP (almost a year back). SAP, after its kind attention to the problem, has come up with a minor development, and the associated note can be seen at 1895631.


But beware: this note has only solved the problem we reported. And once we confirmed the success of the fix for the reported case, the note was released to all customers. But I cannot guarantee it beyond that - I was no longer on the project and heard of some other problems later, so: implement it at your own risk.




  1. Auxiliary tables are global tables; they store the values of auxiliary key-figures not specific to a planning object - so be ready for misunderstandings in the macro calculations when you use auxiliary key-figures.
  2. The best fix for the misunderstandings of (1) is to initialize the auxiliary values (you can do this in the macro settings [double-click on the macro and don't check "Do not Initialize Auxiliary Table"]).


Well, we maintain different calendars for different locations across the globe - this is understood. Though a group of locations may belong to the same calendar, their holidays might differ. With regular holiday procedures at different locations, each different from one another, you will have to maintain different calendars at each location - like PROD_<loc> for the production calendar and SH_<loc> for the shipping calendar.

If a holiday procedure is to be implemented at different locations, it becomes a manual activity: creating the holiday by removing the specific streams of time in all these calendars. As of now, there is no way of generating the time streams considering a particular time stream as reference (I guess so) - this means we will have to, without any choice, maintain the holidays at all these locations manually.

Let's suppose the holidays are now cancelled - this means we need to remove the holidays and regenerate the time streams as per the calendar (which has a number of years in the past/future with the calculation rule specifying which days are working days [Mon-Fri]). This becomes a bit tricky now - you will have to actually do the same job for multiple time streams!

Isn't there an easy way to do this? There's note 1130778, which says 'there is no option to auto-generate the time streams, and the solution for this is to create a custom program as specified in the solution part' - but this seems to generate the time streams for all the calendars, which is against the requirement (holidays differ between China and the US - we do not want the time streams for US locations to be disturbed for the sake of generating the time streams for China locations).

Unfortunately, no easy way has been found.

Guru Charan

CTM PDS creation

Posted by Guru Charan Sep 16, 2014

Well, if you use CTM – you will have to use the CTM PDS, but how do we create it? It’s pretty simple – because everything here is just standard functionality which we are going to tweak.


You have some handy options to tweak the CIF behaviour; and the creation of the CTM PDS, you may understand, falls under the same umbrella. Just look at the ‘PP/DS PDS Enhancements’ section in SPRO / Integration with SAP components / Integration via APO CIF / App. Specific Settings & Enhancements. Under the PP/DS section you will be able to see the BAdI where SAP provides a method for you to allow the CTM PDS to be created.





Go to SE18, give the BAdI name, create an implementation, and change the method (set the flag and activate) to allow the CTM PDS to get created!




I wonder why SAP has not provided this in a more flexible way, such as just ticking a check-box in SPRO.


By the way, you can in fact have this PDS immediately transferred to APO by activating a similar method in ECC. Note 1623443 speaks of this with fine screenshots.

Well, this doesn't happen in any standard SAP system, but we have been encountering regular errors while trying to update/delete the interchangeability groups, which say "Package xxxxxx doesn't exist".


The system checks for and creates/deletes the Package ID (in table /sapapo/heurpack) when creating/changing/deleting an Interchangeability Group (ICG). It then populates the Package ID field in the Product-Location master(PP/DS tab).

In standard SAP, the package ID is used in ICG for PP/DS only. Additionally, in standard SAP, the system doesn't allow the user to create multiple ICGs for the same product. The root cause of receiving the error message above in our system is that we have changed the system to allow users to create multiple ICGs for the same product.

Under SPRO / APO / Master Data / Product and Location IC / Consistency Checks / Maintain Validation, we have deactivated S_V18 thus allowing a product to be part of multiple ICGs.


Rather than completely deactivating this check, we could have at least allowed a 'Warning' message instead of an error in the node "Maintain Consistency profiles and Assign Validation" (S_1, set S_V18 to "Warning" instead of error) under "Consistency Checks".



As a fix for this problem, we had to implement a workaround. This was possible for us as we are not using PP/DS in our system as of now. What we did was deactivate the creation of any packages.


Under Prod/Loc INC of SPRO: choose "Application Settings / General Settings" and deactivate the PP/DS functionality so that the concept of a 'package' doesn't come into play at all.


But for this setting to be carried out successfully, we need to address some checks which SAP has provided:


  1. The planning packages from the mat1(prod/loc) should all be cleared,
  2. All the planning packages from /SAPAPO/HEURPACK and /SAPAPO/HEURPACKT should be deleted,
  3. All the current ICGs should be blocked.


Once the configuration settings are done, the blocked ICGs can be 'Released'.

You want the macro steps to roll automatically as the period changes – and I thought the requirement for this was to tick the below check-boxes, until lately when I encountered a strange (not anymore) case.


Our data view has a history of 3 years and the future of 2 years. And we didn’t maintain any ‘Planning Start’ date in the data view settings (see below). This means, the planning start date will be considered as the current time-bucket.


In one of the macro steps, I was trying to tick the above check-boxes, but with no success. Below is what I ended up with - the ticks I had made simply vanished!


After searching for the message, I found consulting note 674239. What I learnt is: these check-boxes come into use when you have a varying number of periods in the data view's history or future as we roll over (“or” because if one of them changes, the other gets auto-changed).


In the above situation – we had the first month in weeks, and the remaining 23 months in months as our time bucket profile. When we were in week-x1 – we had in total 27 periods in the future, and this changed as we rolled over.


Let’s assume we developed a macro in week-x2 – it would then have had 26 periods in the future, which becomes 25 periods the very next week because of the time bucket profile we are using. This has the possibility of producing unexpected results (not each time - it depends on the logic we are using) since the last column was the 26th period when we developed the macro but is the 25th period when we jump to week-x3.


To avoid this confusion for the system, and to let the system actually understand what the first period and the last period are – we use these check-boxes. The system then determines the first/last column during runtime, and accordingly calculates the results, thus avoiding wrong results.


Note: You can check these boxes only if you have one of the following time buckets in the user-defined periods (assume we are in week-x1).


  1. First column in the history (Month1),
  2. Last column in the history (Month18),
  3. First column in the future (Week-x1),
  4. Last column in the future (Month42).

Otherwise, get ready for /SAPAPO/MA119.

Lately, I've seen a few customers facing this dump, therefore I decided to create this blog post:


You use transaction USMM for system and license measurement.


When doing so, you receive a dump like below


Category                     ABAP Programming Error

Runtime Errors             CONVT_NO_NUMBER

Except.                       CX_SY_CONVERSION_NO_NUMBER

ABAP Program            /SAPAPO/SAPLOO_TS_PLOB

Application Component SCM-APO-FCS


which has the following keyword suggestions for note search:






In order to solve the dump, implement SAP Note


1864055 /SAPAPO/DP_AUDIT function module dumps


and retest the behavior.

The Product Interchangeability functionality in APO SNP provides the ability to plan the discontinuation of products. Products are substituted or replaced with another product for various business purposes, which can be addressed through the product interchangeability functionality.


Possible business scenarios:


  • A product is planned to be replaced with a technically improved product.
  • A defective product needs to be discontinued and switched to a non-defective, good quality product
  • There are multiple products which are similar in their technical properties and can be substituted with one another
  • Temporary replacement of a product to promote another product


Product interchangeability function helps the organizations to achieve the following:


  • Better discontinuation planning and execution
  • Optimizing material inventory and availability
  • Minimizing scrap


Product interchangeability in supply network planning is used to transfer demand of a product to be discontinued on to a successor product or to use existing stock of a current product to meet demand for the successor product. APO SNP supports the following product interchangeability methods:


  • Product discontinuation
  • Supersession chain
  • Form-fit-function (FFF) class


1.  Supersession: A ---> B <---> C


- Product A is forward interchangeable with product B. Products B and C have a forward and backward relationship and are fully interchangeable.


2.  Discontinuation A ---> B


- Product A is replaced by product B


Supersession and discontinuation use parameters such as the substitution valid-from <date> and the use-up strategies no/yes/restricted until <date>.


3.  Form-fit-function (FFF) class: This is a grouping of interchangeable parts which are identical in their technical properties (form, fit, function). The FFF class has fully interchangeable products with no validity dates. The FFF class contains at least one FFF subset, which has a leading product defined. Only the leading product is ordered/procured.

FFF class.PNG


Pre-requisites and Master Data for Product interchangeability:


  • Activation of product interchangeability in SNP global settings
  • Maintenance of interchangeability group
  • Assignment of interchangeability group to model
  • Maintenance of FFF class and FFF subset


Interchangeability group- creation



Model assignment:



FFF Class and FFF subset:



Interchangeability in SNP:

To address the product interchangeability functionality in SNP, the below points should be ensured:


  • Use SNP planning book 9ASNP_PS with data view PROD_SUBST or create your own planning book based on this standard planning book
  • The use of the above standard planning book/data view is recommended because it contains the required key figures for Substitution Demand and Substitution Receipt.
  • The standard planning book/data view also contains the required macro for calculating the stock balance
  • Activate product interchangeability in SNP global settings
  • For SNP Heuristic/ Optimizer run, ensure that the checkbox "add products from supersession chains" is selected
  • In SNP heuristics, only the location heuristic supports interchangeability. The network and multi-level heuristics do not support interchangeability.





FFF Class in SNP:


  • All demands of the FFF subset are transferred to the leading product
  • If the stock of the leading product is insufficient, stock of the other products in the FFF subset is used
  • If none of the products in the FFF subset has sufficient stock, only the leading product is procured
  • If the stocks are not sufficient to fulfill the demand, a substitution receipt is created for the product which has the demand, and a corresponding substitution demand is created for the leading product.
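The netting order above can be sketched as follows (an illustrative simplification with made-up stock figures, not the actual SNP planning logic):

```python
# Illustrative FFF-class netting: demand transfers to the leading product;
# stock of the subset members is used before procuring the leading product.
subset = ["1000-03", "1000-04", "1000-05"]     # leading product first
stock = {"1000-03": 20, "1000-04": 30, "1000-05": 0}

def plan_fff(demand):
    """Net a demand against subset stock; return the quantity still to be
    procured, which goes against the leading product."""
    remaining = demand
    for product in subset:                      # leading product's stock first
        used = min(remaining, stock[product])
        stock[product] -= used
        remaining -= used
        if remaining == 0:
            break
    return remaining

procured = plan_fff(70)
print(procured)   # 20 + 30 covered from stock, 20 procured for 1000-03
```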


Interchangeability functionality in CTM:


  • CTM does not support "full interchangeability" and supports only forward interchangeability.
  • CTM also supports FFF classes.
  • In the CTM profile, in the product interchangeability field on the special strategies tab page you can set the "Use discontinuation" or "Use FFF classes" options


Interchangeability in SNP- Integration with R/3:


The following restrictions exist with respect to transfer of SNP planning results from APO to R/3:


  • The SNP planned orders and stock transfers generated during planning can be transferred to R/3.
  • The generated SNP product substitution orders linked to these orders cannot be transferred to R/3.
  • The substitution orders can only be transferred to R/3 in PP/DS planning.
  • Deployment and TLB do not support the interchangeability functionality.


Uploading discontinuation information from R/3:


XML upload functionality can be used to upload the interchangeability or discontinuation information from R/3 to APO. XML upload can be done in the following two ways:


  • Manual upload using report /INCMD/GROUP_CREATE_VIA_UPLOAD
  • Through background job using report /INCMD/GROUP_CREATE_VIA_BATCH
  • SAP notes for more information on uploading the discontinuation data to APO are 617281 (from R/3 4.6C onwards) and 617283 (below R/3 4.6C)



Restrictions in Supersession chain:


The following restrictions exist for the maintenance of supersession chain:


  • A product can only be included in one supersession chain.
  • The base unit of measure must be the same for all products of a supersession chain.
  • A supersession chain cannot contain configurable products.
  • Parallel discontinuation, e.g. where the discontinuation of multiple products is dependent on one another, is not supported.
  • Only 1:1 relations/substitutions are supported. Complex interchangeability, e.g. where one product can be substituted by several other products, is not supported.





Scenario 1: Forward interchangeability with restricted use-up strategy


Consider the below interchangeability group with forward interchangeability and the use-up strategy 'restricted'. As per the group, the product 1000-01 needs to be substituted with product 1000-02 from 14th July, and the stock of product 1000-01 can be used up to 28th July to meet its demand; after this date, even if stock of 1000-01 exists, it should not be used to fulfill its demand - instead, distribution orders should be created.
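The restricted use-up rule can be sketched in a few lines (the year and quantities are assumptions for illustration, not the actual SNP logic):

```python
from datetime import date

# Restricted use-up: stock of the discontinued product 1000-01 may cover its
# own demand only up to the use-up date; afterwards a substitution receipt
# is created instead (year assumed for illustration).
use_up_date = date(2014, 7, 28)

def plan_demand(demand_date, qty, stock):
    """Return (qty_from_old_stock, substitution_receipt_qty) for 1000-01."""
    if demand_date <= use_up_date:
        used = min(qty, stock)
        return used, qty - used
    return 0, qty                     # stock may exist but must not be used

print(plan_demand(date(2014, 7, 20), 50, 70))   # inside the use-up window
print(plan_demand(date(2014, 8, 5), 50, 70))    # past it: substitution receipt
```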

demo-1st screen.png


Before planning situation:


In the below screenshot the stock of 70 is not used to fulfill the demand from 28th July onwards as the use-up date is 28th July in the interchangeability group.

demo-2nd screen.png


SNP planning is executed: Planning creates substitution receipt for product 1000-01.


demo-3rd screen.png


Corresponding substitution demand is created for successor product 1000-02.


demo-4th screen.png


Scenario 2: FFF classes with CTM planning


Below is the FFF subset consisting of 3 products, 1000-03, 1000-04 & 1000-05, with the leading product 1000-03.




Demands exist for the member products 1000-04 & 1000-05.


demo-5th screen.png


demo-6th screen.png


CTM planning is executed using FFF class, product interchangeability option:


demo-7th screen.png


The substitution demands are created for the leading product 1000-03.

demo-8th screen.png

demo-9th screen.png



SAP Note: 1405601- Implementation recommendations for APO 7.0 SNP, CTM and VMI


SAP Note: 1405636- Implementation recommendations for APO 7.0 MD, INT, INC


SAP Note: 617281- Migration of discontinuation data: SAP_BASIS 610 as above


SAP Note: 617283- Migration of discontinuation data: SAP_BASIS 46C and below

As we all know, SNP basically supports Make to Stock functionality. So, all the demands and supplies we have in MTS segment can be seen in SNP Planning book without any enhancement.


However, when we need to see data from other segments (Make-to-Order, or Planning without Final Assembly) in the SNP planning book, some enhancements and changes to the SNP planning area may be needed.

This document describes how to activate this functionality.


1) Use custom key figures with key figure functions 2006 and 2008 for supplies and demands, respectively, in the Make-to-Order or Planning without Final Assembly segment.

Implement the BAdI /SAPAPO/SDP_INTERACT, method GET_KEYF_SPECIALS, and set the parameter CV_KEYF_SWITCH to 3.

If you are working only with the Make-to-Order segment, you can set this parameter to 2.

If you are working only with the Planning without Final Assembly segment, set this parameter to 3.
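The parameter choice above can be summarized in a small helper. This is plain Python as a memory aid only; the values 2 and 3 come from the text, while the function itself is hypothetical:

```python
def cv_keyf_switch(make_to_order: bool, planning_wo_final_assembly: bool) -> int:
    """Pick the CV_KEYF_SWITCH value per the rules documented above."""
    if planning_wo_final_assembly:
        return 3  # planning without final assembly (alone, or combined with MTO)
    if make_to_order:
        return 2  # make-to-order segment only
    raise ValueError("no special segment in use; the BAdI change is not needed")

print(cv_keyf_switch(make_to_order=True, planning_wo_final_assembly=False))  # 2
print(cv_keyf_switch(make_to_order=True, planning_wo_final_assembly=True))   # 3
```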


2) Create and activate two custom key figures, one each for demand and supply.


3) Planning Area De-initialization


4) Add the key figure functions to the key figures in the planning area.

In change mode of the planning area, go to the Key Figs tab and click Details.

For the custom demand key figure, add key figure function 2008; for the custom supply key figure, add key figure function 2006. Also make sure you assign the correct category group to these custom key figures.
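For quick reference, the function-code assignment in this step can be written down as a lookup. The values come from the text above; the dictionary itself is just an illustration:

```python
# Key figure function codes assigned in step 4 (per the document):
KEYF_FUNCTION = {
    "custom demand key figure": 2008,  # demands in the MTO / PwFA segments
    "custom supply key figure": 2006,  # supplies in the same segments
}
print(KEYF_FUNCTION["custom supply key figure"])  # 2006
```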


5) Re-initialize the planning area.

The key figures are now available in the planning area.


6) Add these key figures to the desired planning books.

Forecast and supply in the "Planning without Final Assembly" segment can then be seen in the Forecast (ATO) and Supply (ATP) key figures, respectively.

Users face a problem when running normal copy macros in which the ROW_INPUT function is used before the copy happens.


Below are some observations we have identified regarding this case:

  • We have activated the KF lock on the planning area, with no lock on the read-only KFs; so, if user A opens the data view, the modifiable KFs of the view are locked for him.
  • As part of the macro logic, we change the attributes of one output-only KF to edit mode.
  • We have a default macro in the same view that tries to close all the KFs (set all KFs back to no-edit mode).

Suppose user A opens the view first: he locks all the modifiable KFs of the view. If user B now opens the view and executes the macro, he is actually trying to bring one read-only KF into edit mode, because the macro first brings the KF into edit mode before updating it. Note that when switching to edit mode, the data must be read again: since this KF was previously not locked for user B, the system assumes it could have been changed by another user in the meantime.


Since it is user B's macro that tries to bring the KF into edit mode, the system has to lock this KF for user B. But it is user A who holds the locks on all the modifiable KFs, as he was the first to open the data view. The consequence: the system cannot lock this KF for user B, and hence it cannot be modified. Hence the error: "KF not locked, data not saved".
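The lock conflict can be modeled with a toy lock table. This is a hypothetical Python sketch (the real planning-area lock mechanics are more involved, and the KF name ZDEMAND is made up):

```python
class DataViewLocks:
    """Toy model: the first user to open a data view owns its KF locks."""
    def __init__(self):
        self.view_owner = None

    def open_view(self, user):
        # The first opener locks all modifiable KFs of the view.
        if self.view_owner is None:
            self.view_owner = user

    def bring_kf_to_edit_mode(self, user, kf):
        # A macro must acquire the lock on a read-only KF before updating it,
        # but only the view's lock owner can hold KF locks here.
        if self.view_owner != user:
            raise RuntimeError(f"KF {kf} not locked, data not saved")

view = DataViewLocks()
view.open_view("A")                      # user A opens first, holds the locks
view.open_view("B")                      # user B opens the same view
try:
    view.bring_kf_to_edit_mode("B", "ZDEMAND")  # B's macro fails
except RuntimeError as err:
    print(err)   # prints: KF ZDEMAND not locked, data not saved
```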


Well, this is fine, but the error still appears even when only one user runs the macro. This is where the default macro comes into play. The exact sequence was:

  • I run my macro, and the target KF gets updated.
  • I then click SAVE to store the results, and the error appears.


So the error occurs while saving: the screen is regenerated, and our default macro does not allow the KF to go into edit mode.


We then came across consulting note 1649757, which recommended using CELL_INPUT instead of ROW_INPUT. Unfortunately, even this did not work and produced the error below.

After analyzing further, we understood that for CELL_INPUT to work correctly, a minor change was needed in the function module /SAPAPO/ADVF_CELL_INPUT, as specified in note 1328806. But the fact is: we are already on SCM 7.02, and this note is already implemented!


Where to go from here? We asked SAP to look into it, and they too recommended using CELL_INPUT instead of ROW_INPUT, explaining the cause of the issue as the default macro, which does not allow the KF to go into edit mode.


Finally, I tried with only the function written in the macro, with no extra arithmetic.


This gives the following:


I changed it to CELL_INPUT( 1 ) in the above macro, and the error changed accordingly.


Finally, SAP came back saying that CELL_INPUT works correctly only with key figures of type INPUT/OUTPUT; what we had were OUTPUT-type KFs. That was the problem. Thanks to SAP!


Transferring data between ERP and APO can be a tricky, error-prone activity, which is why we asked Claudio Gonzalez of SCM Connections to host an online Q&A to address the biggest questions users are facing and tips to optimize CIF performance. Check out the transcript from this Q&A session and see whether your questions were answered.

Claudio will be one of the featured speakers at SAPinsider’s SAP SCM 2014 conference in Las Vegas, April 1-4. For more info on the event visit our website at Logistics & SCM, PLM, Manufacturing, and Procurement 2014 and follow us on Twitter @InsiderSCM

Comment From Pat H.: Is CIF a standard, delivered SAP tool with ERP / APO or a separate add-on?

Claudio Gonzalez: The CIF is a standard, delivered interface; it is an integrated part of the ERP system.

Comment From Brad Antalik: If ECC and SCM were both on HANA does this eliminate the need for the CIF? If the CIF is still used how would HANA affect it?

Claudio Gonzalez: It should not affect it as of right now, as there is no dependency: CIF is an integration module independent of the underlying database. Looking ahead, there has been some thinking that with the ERP database in memory, we could see the day when the master data (and transactional) datasets for both merge, allowing a unified system with Global ATP (SCM-APO-GATP) available out of the box. The same goes for CRM and ERP integration.

Comment From Ana Parra: Can APO be based on the SAP HANA Platform? Do you have supported evidence of before and after scenarios for SAP HANA performance? Thanks!

Claudio Gonzalez: This one is a bit off the CIF topic, but to answer the question: HANA is available for SCM as of version 7.02. Here is a good link with some details on it.

Comment From Guest: Just implemented SAP TM9.0 and SAP EM9.0 with SAP Optimizer 10.0. ECC is EhP6. In past, I was able to use CIF cockpit to monitor CIF data transfer and troubleshoot any issue. What's the equivalent process/method to monitor and troubleshoot CIF related issue and data transfer in the newer versions?

Claudio Gonzalez: As far as I know, the CIF Cockpit should be available as long as you have the SCM Suite. I do recommend using the CCR report (/SAPAPO/CCR), the Queue Manager (/SAPAPO/CQ), and the CPP report (/SAPAPO/CPP) in conjunction with the CIF Cockpit to troubleshoot CIF-related issues.

Comment From Axel Völcker: Our ERP system has to provide two SAP systems with material master data. How can we set up the integration models (IMs) for the material master data?

Claudio Gonzalez: You would create two integration models. Each integration model would have a different logical system. Each logical system is tied to a specific SAP SCM system.

Comment From Pavan Kumar Bhattu: I have two questions:

1) If we CIF purchasing info records with data for two purchasing organizations, how will they be reflected in APO?

2) If we CIF a subcontracting SNP PDS, will it create any transportation lanes in APO?

Claudio Gonzalez:
1) The info record would CIF to APO, and on the external procurement relationship, the different purchasing organizations would show under the General Data tab. I am assuming the different purchasing organizations will also have different destination locations, which would then create two lanes; this is the most common scenario.

2) Yes, when you CIF a subcontracting PDS to APO, be it SNP or PP/DS, the system will create the lanes for the input components from the manufacturing location to the subcontracting location. The lanes from the subcontracting location to the manufacturing location for the finished goods are created by the PIR or contract.

Comment From AJ: Can you please further elaborate on PDS_MAINT in ECC?

Claudio Gonzalez: PDS_MAINT is a transaction on the ECC side that is used to update changes to the PDS in APO. As of SCM 7.0 EHP1, you could not make changes to the PDS directly in APO; thus, PDS_MAINT was used to make changes, such as costing, priorities, consumption, bucket offset, and so on. In EHP2 there is functionality to mass-maintain the PDS directly in APO, but it has its limitations, and it seems that if you re-send the PDS from ECC it will overwrite the changes; this needs to be verified.

Comment From Ayyapann Kaaliidos: Is there no weekly consumption mode available in R/3? We are on ECC 6 SP10, but we use the weekly consumption mode in APO. We are on SCM 7.01 SP5.

We had to handle this in a user exit only, as we have configured the CIF master data update as instant, and the CIF will overwrite the values in APO if it is not handled in a user exit. Will this functionality be part of any future R/3 versions?

Claudio Gonzalez: I have not heard of any plans to add this functionality in ECC yet. But before you go about modifying the CIF so as not to override the 'W' value in APO, try the following: ECC has a consumption mode '4' (forward/backward) that is not supported by APO. I believe if you set it to this value, it will not override the APO value. It is not pretty, but it will save you a custom change.

Comment From Mukesh Lohana: What criterion determines whether inbound queues or outbound queues are faster? We use outbound queues and transfers for the huge planning data from APO to ECC. If we change to inbound queues to transfer data from APO to ECC, then in my understanding ECC will be responsible for handling the data load. Since ECC is an execution system where a lot of activities are occurring, will the change from outbound to inbound queues slow down the execution system (ECC)? What do you suggest?

Claudio Gonzalez: Since this question is more technical than functional, let’s first quickly explain the difference between Outbound and Inbound.

- Communication method, outbound queues: the calling system sends the queues to the receiving system without taking the receiving system's load into account. No scheduling of the processes happens in the receiving system. This can lead to overloading of the receiving system, which degrades CIF performance at high data volumes.

- Communication method, inbound queues: the calling system sends the queues to the 'entrance' of the receiving system, which allows the receiving system to control its own queue load. Scheduling of the processes happens in the receiving system. In theory, therefore, this leads to better CIF performance.

Based on the above, SAP's recommendation is to use inbound queues if you have performance issues in the target system.
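The difference can be sketched with a toy model. This is hypothetical Python, not CIF internals, and the receiver capacity of 3 is an assumed value: with outbound queues the sender pushes work straight onto the receiver regardless of its load, while with inbound queues the receiver drains its own queue at its own pace.

```python
from collections import deque

RECEIVER_CAPACITY = 3  # work processes the receiver can run at once (assumed)

def outbound_push(items):
    """Sender-driven: every item starts processing on arrival."""
    in_flight = len(list(items))          # no scheduling on the receiver side
    overloaded = in_flight > RECEIVER_CAPACITY
    return in_flight, overloaded

def inbound_push(items):
    """Receiver-driven: items wait in the inbound queue until a slot is free."""
    queue = deque(items)
    in_flight = []
    while queue and len(in_flight) < RECEIVER_CAPACITY:
        in_flight.append(queue.popleft())  # receiver schedules the work itself
    return len(in_flight), len(queue)      # active now vs. waiting safely

print(outbound_push(range(10)))  # (10, True)  -> receiver overloaded
print(inbound_push(range(10)))   # (3, 7)      -> load stays under control
```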

At the end of the day, any actual performance degradation on the ECC side won't be known until the change is made and tested.

The following notes deal with how to change the communication from outbound to inbound: 388001, 388528, and 388677.

Also, I always recommend keeping note 384077 in your favorites, as it deals with how to optimize CIF communication and is updated regularly.

Comment From Brad: If transactional data goes real time to APO, what is the purpose of the batch stock, sales orders, and purchase orders CIF jobs?

Claudio Gonzalez: Regardless of whether the data goes real time or not (and by the way, I recommend real time), you need POs, sales orders, and other transactional data to integrate into APO so that your planning system has all the necessary data for accurate planning.



