Rajesh Prabhu

SPM - NW Compatibility

Posted by Rajesh Prabhu Nov 29, 2011

Lately we have been receiving queries about version/EHP support with regular frequency, and this brief post will attempt to explain it.

Here is the compatibility matrix of SPM versions with NW EHP levels:

If SPM works for you, then you have the right version of the SPM UI. If not, how do you find out which version of the SPM UI you have deployed?

Here are the steps that help you figure it out:
1) Understand what NW EHP level you are on
2) Log into the base portal URL (without the irj…)
3) Click on System Information (and log in if required)
4) In Software Components, click on "all components"
5) In the table of Software Components, scroll down to "SSAUI" to see the deployment details
6) In this example, the SPM UI version that is deployed is release 2.1, EHP 1 at SP04 – patch 1.


Hope that clarifies.

Recently I have seen quite a few requests come my way to understand the compatibility of different versions of SPM with different versions of SAP Sourcing (formerly e-Sourcing).

I will be posting more details on this soon.

For more information on SPM and SAP Sourcing integration, please check out the following resources:

1) SPM Application Help guide: link http://help.sap.com/content/bobj/sbu/docu_sbs_xsa_design.htm# >> Select Version >> Open the Application Help >> Click "Integration" chapter

2) Value Prop:
Demo Integrated Sourcing and Procurement > http://www.sap.com/solutions/sapbusinessobjects/large/enterprise-performance-management/spend-performance-management/demos/index.epx

The biggest challenge facing standard BW content is not knowing which scenario it will cater to (since most of the content can potentially cater to multiple scenarios), leading to a very bottom-up design approach where the source system data model is faithfully replicated on the BW side. This in turn results in modifications (copy and change) of the original content, sometimes up to 80% in an implementation at the customer end.

On the other hand, as part of the EPM suite, SAP Spend Performance Management (SPM) is designed top-down. This is possible because the exact end user scenario is known (which is Procurement Analytics). And the benefit for the customer is that there is hardly any customization involved (under 20%) to the SPM data model. In fact there are a large number of implementations without a single modification in place to the SPM data model. Extraction customization on the other hand is dependent on the customizations made to the source system.

But there are occasions where customizations are needed. And in this case, the flexibility of the SPM architecture comes to your aid. As you will see, there are numerous loose couplings within the application to enable rapid implementation-time customizations.

Here are some of the key 'loose' coupling points:

* The Extractor Starter Kit
* Inbound
* Data model
* Data mgmt.
* Reporting

Flexibility of Data Extraction: The SPM application has its own data extractor starter kit (explained in this wiki) which helps you get a jump start on data extraction. This is a highly customizable and configurable framework. You could add new fields to existing extractors, add custom code for each field, or add brand new extractors on this starter kit framework.

Flexibility of Data Enrichment: Data enrichment (normalization, classification and cleansing of data) enables higher-level insight into the data. These are additional services provided by SAP. Customers are also free to go with a service provider of their choice.

Flexibility of source systems: The SPM application is not tied to SAP source systems. In fact, the SPM application is architected from the ground up with multiple source systems (SAP and non-SAP) in mind and is only loosely coupled to SAP source systems.

Because of this, you can bring in data from any source system. It is best to match the fields to the SPM inbound data specifications (defined in "Master data for Spend Performance Management.xls" and "Transaction data for Spend Performance Management.xls" on the Service Marketplace) to simplify the process, or you can use the template mechanism to map the non-matching field names in SPM data mgmt.

Flexibility of data load:

i. BW DataSources vs. flat files: The SPM application allows for both BW DataSource and flat file based data loads. The choice depends on various factors: for on-premise installations, using BW DataSources for SAP source systems is the best practice. For hosted solutions, for external data (like market/commodity pricing information), or for bringing in data from non-SAP source systems, flat files are the best way.

ii. Data flow options for loading enriched data: Explained in detail in this article Spend Analytics.

iii. Data load mechanisms: Explained in detail in this article here.

iv. Custom data: Customer-specific master data and transaction data can be uploaded into the SPM application. Here is an SPM Customization Series: Loading data in Customer Defined Dimension detailing how to load customer-defined data into SPM.

v. Source system dependence config: Because of all this flexibility on the source system data, the uniqueness of records needs to be taken into proper consideration. This configuration is explained in the Upload Types in SPM blog post.

Flexibility of custom fields: SPM has about 15 custom dimensions available for use by the customer for “any” dimension that is not already included in the data model. These dimensions are obviously shared (they have to be) across various SPM objects like Invoice, Contract and Purchase Order etc….
There are also numerous custom measures (Amount type, Quantity type and Number type) provided for the individual objects.

Flexibility of Data Model: In case there is a need for extending the data model, the overall application structure allows for this kind of flexibility. You could add new master data or new DSOs and cubes and extend the existing queries, by including it in the multiprovider (0ASA_MP01) or by creating brand new queries.

Flexibility of UI coupling with BW: The BW queries are loosely coupled to the SPM UI, and this process is data driven. When the queries are extended due to the addition of new dimensions, new measures, or new objects, all that needs to happen is a refresh of metadata (triggered in the UI), and the back-end changes are picked up by the SPM UI.

Flexibility of Ad-Hoc reporting: Although SPM comes with around 140 out-of-the-box, scenario-specific reports, empowering the business user to create ad-hoc reports is a key feature of the application. And strong content management facilities within the application help advance the ad-hoc reporting capabilities.

Flexibility of linking to external applications at runtime: Any URL based external resource can be wired up to the SPM UI by simply configuring the URL and mapping the parameters. And then this becomes a related application link available from any relevant report.
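As a rough illustration of this wiring, here is a sketch of configuring a URL and mapping report parameters onto it. The base URL, parameter names, and helper function are all hypothetical, not part of the SPM product:

```python
from urllib.parse import urlencode

def build_related_link(base_url, param_map, report_context):
    """Map report context fields onto the external application's
    URL parameters and build the final related-application link."""
    params = {target: report_context[source]
              for source, target in param_map.items()
              if source in report_context}
    return base_url + "?" + urlencode(params)

# Hypothetical example: link a supplier report to an external portal.
link = build_related_link(
    "https://portal.example.com/supplier",
    {"SUPPLIER_ID": "vendor"},          # report field -> external parameter
    {"SUPPLIER_ID": "S-1001"},          # values from the current report row
)
print(link)
```

Once such a mapping is configured, the link shows up as a related application available from any relevant report.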

So as you can clearly see, despite the strong out-of-the-box offering, the SPM application is truly flexible on all fronts (from data extraction all the way to reporting) and can be tailored to fit the implementation needs!


Upload Types in SPM

Posted by Rajesh Prabhu Dec 23, 2010

All of the types/objects of data that can be uploaded into Spend Performance Management (SPM) through the SPM Data Management Tool are set up in the table OPMDM_TYPES. All of these data types are known as Upload Types in this context.

The SPM Data Mgmt. tool uses this table as its source for loading all data, whether the objects are part of the SPM product or have been added to the SPM data model as part of the implementation.

The OPMDM_TYPES table has the following fields:

* Name of the target object for data upload in the SPM data model
* Whether it is master data or transaction data (two entries allowed: "IOBJ" or "DSO")
* Name of the InfoObject if it is a master data load, or name of the DataStore if it is transaction data
* Group this entry belongs to: "TRAN" for transaction data, "MDAT" for master data
* Process chain to be triggered after data is loaded into the target (specified in column TARGET_NAME)
* Internal field, please ignore
* Defines source system dependence for master data
* Which application this entry will be used for; for SPM, use "XSA"


For a more detailed explanation of Source System dependence, go to the following link: Source System Independence considerations during data load in SPM

The SPM product ships with 44 entries in this table (for the 2.1 release). This essentially means that, out of the box, SPM supports 44 upload types.

There is a Text table supporting the internationalization labels for all of the entries in this table. The text table is OPMDM_TYPEST. This table already comes populated with entries in SPM supported languages for all 44 upload types. You need to populate the language specific labels for upload types that have been created during the implementation.




There is an additional table OPMDM_TYPES_KEY which allows for configuring additional concatenations per upload_type for uniqueness.

Hope this explains the way SPM Data Mgmt. identifies and loads data for different upload types in the data model. Please feel free to comment if you have any further questions.

The Spend Performance Management (SPM) data model already covers all the necessary dimensions for the Procurement Analytics scenarios. In addition to the Procurement specific dimensions there is a provision for 15 additional custom dimensions shared across all the SPM analytical objects (like Invoice, Contracts etc…).

But even with these built-in provisions, there can be a need for additional customer-defined dimensions. So how would you go about adding such a dimension and loading its data in SPM?

Let's take a 'make believe' example. Let's say that we want to add a Spend Quality dimension. Let's call this characteristic InfoObject ZXASPENDQ. And in this example, the characteristic does not have any attributes (although it could just as easily have them) but is enabled for texts.

Now let's create this InfoObject. Make sure to turn on the texts flag and leave the Master Data flag checked. Then activate the InfoObject (make sure to attach it to a transport if it needs to be delivered through the system landscape).

The next step is to create a test file (CSV format with a semicolon as the separator and double quotes as the escape character). SPM also supports BW DataSources for loading data on top of the flat-file based mechanism, but for our example the flat file will suffice.
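As an illustration, a test file in this format could be generated like so. This is a sketch: the column layout, the sample values, and the file name are assumptions for the hypothetical ZXASPENDQ example, not a prescribed SPM format:

```python
import csv

# Semicolon-separated, double quotes as the quote/escape character.
# The header columns (key plus short text) are an assumed layout.
rows = [
    ["ZXASPENDQ", "TXTSH"],
    ["SQ01", "High quality spend"],
    ["SQ02", "Maverick spend"],
]
with open("zxaspendq_texts.csv", "w", newline="") as f:
    writer = csv.writer(f, delimiter=";", quotechar='"',
                        quoting=csv.QUOTE_MINIMAL)
    writer.writerows(rows)
```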

To load this file in SPM, this new upload type must first be registered in the table OPMDM_TYPES. Go ahead and do so, using the following values. The target type is "IOBJ" for InfoObject ("DSO" if it's a DataStore for loading transaction data). The target name is the InfoObject technical name, and the upload group is "MDAT" for master data. Feel free to ignore the rest of the fields except for the last one; just set it to "XSA".
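To summarize the registration, here is the entry sketched as a simple mapping. The column names shown are illustrative assumptions; check the actual OPMDM_TYPES table structure in your system:

```python
# Hypothetical OPMDM_TYPES entry for the ZXASPENDQ upload type.
entry = {
    "UPLOAD_TYPE": "ZXASPENDQ",   # target object name (assumed column name)
    "TARGET_TYPE": "IOBJ",        # InfoObject ("DSO" for transaction data)
    "TARGET_NAME": "ZXASPENDQ",   # technical name of the InfoObject
    "UPLOAD_GROUP": "MDAT",       # master data group
    "APPLICATION": "XSA",         # SPM
}
print(entry)
```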

Additional details can be found at Upload Types in SPM and Source System Independence considerations during data load in SPM.

Save the new entry. Additionally, make the text entry in the text table, OPMDM_TYPEST, for the internationalized view.

And now you are set to load data for this customer-defined dimension/attribute; you can proceed to use the SPM Data Mgmt. Tool to load your data file into this InfoObject. And that's it!

(For more details on how to use the data mgmt. tool, refer to the following documentation in this location (link). Go to the 2.1 documentation and open up the Application Help >> Data Management.)

Before you start loading data into SAP Spend Performance Management (SPM), it is vital that you take a moment to understand the data in terms of the uniqueness of the technical ids of the dimensions, and think through the implications those ids might have when data from multiple source systems (with perhaps similar id number ranges) is being loaded. So what does that mean exactly?

Let’s take two cases and for simplicity, in both the cases, there are two source systems (S1 and S2):

1) Same technical id meaning the same master data record: Let's say that in both source systems there is a buyer id "B1001", and it actually means the same buyer – Jane Doe. So when buyer data from the second source system is loaded into SPM after the buyer data from the first system, and BW overwrites the master data record for Jane Doe, that's just fine (as long as the same information has been maintained in both systems; if not, then the sequence of the loads is important).
For transaction data that references the buyer master: invoice line "IV100" from S1 and invoice line "IV200" from S2 both reference "B1001" (since Jane Doe was the buyer who posted both invoices). And it is okay if measures from both invoices get aggregated for Jane; in fact, that would be the desired result.
In this case, this object, the buyer master, is source system independent.
One other way of achieving this is to explicitly flag master data from all source systems with a central (conceptual) source system built into SPM. This source system is called "Additional Files". In this case, source system dependence is turned on and, in the Upload Properties in the SPM UI, the source system is set to "Additional Files". This buyer record will then look like "B1001_XY" (where "XY" is an example of the central source system id generated within SPM).

2) Same technical id but different master data records: Now let's take the example where both systems have the same id number range for Cost Center, because of which two Cost Center records in the two systems have the same id "CC1" but actually mean two different cost centers in different locations. In this case the master data records should under no condition be overwritten, since the latest overwrite would delete the older load. Meaning that if source system S1 data gets loaded first and then data from S2, in the end there will be only one record, CC1 from S2, when in fact what's expected is that both records exist and get aggregated independently.
In this case, this object, Cost Center, is source system dependent, and flagging it appropriately will yield "CC1_S1" and "CC1_S2". Which is exactly how it should work.

The mechanism for achieving the expected results in both cases above is ensuring that the table OPMDM_TYPES is correctly set up for all of the upload types. Through this table, depending on the situation, the object can be made source system independent or source system dependent. If made source system dependent, the source system id will be concatenated with the technical id to make it globally unique. The source system dependence column is a Boolean.
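The concatenation mechanism described above can be sketched as follows. This is a conceptual model for illustration, not the actual SPM implementation:

```python
def make_global_id(technical_id, source_system, source_dependent):
    """Sketch of how a master data key is made globally unique:
    when the upload type is source system dependent, the source
    system id is concatenated onto the technical id."""
    if source_dependent:
        return f"{technical_id}_{source_system}"
    return technical_id

# Case 1: buyer master is source system independent -> records merge.
assert make_global_id("B1001", "S1", False) == make_global_id("B1001", "S2", False)

# Case 2: cost center is source system dependent -> records stay distinct.
assert make_global_id("CC1", "S1", True) == "CC1_S1"
assert make_global_id("CC1", "S2", True) == "CC1_S2"
```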

All transaction data is expected to be source system dependent; regardless of what this flag is set to, all transaction data will be made source system dependent (the setting is ignored for transaction data).

All standardized and enriched master data, with an upload type starting with C_*, is source system independent. This is because it is globally unique in nature, be it suppliers or categories.

Out of the box, the table OPMDM_TYPES ships with some default settings for all the upload types. This flag can be then turned on or off depending on the implementation needs.

I frequently get this question on improvements in SPM 2.1 over SPM 2.0 related to performance, so I thought it would be easier to compile this list and share it with you.
Here are all of the performance improvements in 2.1:

  1. SPM Simple Cache:
    The SPM simple cache, as explained in the performance article, is an application-specific cache. Pre-calculated result sets are persisted in the DB on the ABAP side when a report gets executed the first time. For every subsequent execution of the report (until the underlying data changes with a new upload), this stored result set is used and a call to BW is avoided. Other users executing this report also benefit from this cache as long as their locale and formatting settings are the same as the saved report. The table where the results get persisted is OPMDM_PRECALC_A.
  2. Memory Monitor settings:
    The flex memory consumption can be controlled by setting the memory thresholds from within the application. Follow the instructions in the note 1477454.
  3. Data streaming format:
    For the releases 1.0 and 2.0, the data streaming format was plain XML. Starting 2.1, the switch has been made from XML to JSON objects which have a high degree of compression and help reduce network latency related performance losses. And on the Java stack the processing for those JSON objects is much faster compared to XML processing.
  4. Data Volume:
    Rendering really large volumes of data (20,000 records and 4 columns, as an example) is a significant challenge in Flex. Not only is it slow to load, but oftentimes I have experienced that the Flash player simply quits. To handle this case, SPM 2.1 works around the Flash player limitations and boosts the volume that can be handled by up to 2 to 4X. This enhancement not only improves the handling of large reports but also speeds up the rendering process for smaller reports. Additionally, starting with 2.1, we have started using the new (as of Flex 3) OLAP grid feature to represent reporting data. This has an inbuilt understanding of multidimensional analysis and hence is faster at rendering a large number of records.
  5. Cap on charting for really large values:
    When there are too many rows (granular dimensions, like item level reports without filters), then charting of such data does not provide any value. In this case SPM caps the charting mechanism for really large values.
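The simple cache behavior described in item 1 can be modeled conceptually as follows. This is a Python sketch for illustration only; the actual implementation is in ABAP and persists results to OPMDM_PRECALC_A:

```python
class SimpleReportCache:
    """Conceptual model of the SPM simple cache: pre-calculated result
    sets are stored per report (keyed also by locale/formatting) and
    invalidated when new data is uploaded."""
    def __init__(self):
        self._store = {}
        self._data_version = 0

    def get(self, report_id, locale):
        hit = self._store.get((report_id, locale))
        if hit and hit["version"] == self._data_version:
            return hit["result"]          # cache hit: a call to BW is avoided
        return None                       # cache miss: would call BW

    def put(self, report_id, locale, result):
        self._store[(report_id, locale)] = {
            "version": self._data_version, "result": result}

    def on_upload(self):
        # A new data load invalidates all cached result sets.
        self._data_version += 1

cache = SimpleReportCache()
cache.put("spend_by_supplier", "en_US", [("ACME", 1200.0)])
assert cache.get("spend_by_supplier", "en_US") == [("ACME", 1200.0)]
assert cache.get("spend_by_supplier", "de_DE") is None  # different locale
cache.on_upload()
assert cache.get("spend_by_supplier", "en_US") is None  # invalidated
```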

For general performance improvement advice, refer to this article.

And as has been mentioned in other places, we have seen a tremendous improvement in Flash player performance with the 10.1 version. So always get the latest flash player (regardless of the SPM release).

Last week the Spend Performance Management BPx site was re-launched after some major updates.

The goal of the new site is to provide information, tips, and suggestions to make implementing the application easier and lower the TCO; it is relevant for both customers and implementation partners. Additionally, it provides a great platform to get involved and contribute to the growing community of the application's user base. In conjunction with the site, new content for the application is also being created. The site will be a unified gateway to all of those resources (articles, how-to guides, blogs and wikis) related to the application on SDN. You will also be able to leverage your favorite search engine or the SDN search to directly access these resources. More than 20 new SDN resources have been created recently.

There are two ways to access the site:

1) Navigation through EPM:
* Go to sdn.sap.com
* Click on BPx Community
* In the left navigation pane, open Solutions and click Enterprise Performance Management
* And then in Key Topics, click on Spend Performance Management

2) Direct link: Spend Analytics

Once you get to the site, browse around for information; specifically, in the Knowledge Center, browse through and read up on all linked resources in "Getting Started...", "SPM acquisition and Management" and "SPM Implementation Considerations".



This site will be continually updated based on your feedback. We hope that you visit the site regularly to get the latest updates during the lifecycle of your implementation and beyond.

How to improve the system performance of Spend Performance Management is a common question. I have created an article which lists out all the different pieces involved.

Here is the link to the article.

Please post your questions, comments, and requests for changes/additions to the article right here. I intend to keep the article fresh by updating it with your suggestions.

New to the Spend Performance Management application and don't know exactly how to go about blueprinting the implementation? Don't worry, it's a fairly common scenario for customers and first-time implementation partners.

I have created a quick-read article (the first of a two-part series) which will help you get a jump start with the Requirements Gathering phase of the blueprinting exercise. Link: http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/d09be88e-8974-2d10-f2b9-fe162df9a6d1

Please stay tuned for part two, which will deal with the next phase of blueprinting: planning and general implementation recommendations.

Please post your questions, comments, and requests for changes/additions to the first article right here. I intend to keep the article fresh by updating it with your suggestions.

Spend Performance Management is SAP's next-generation analytical application covering the procurement analytics and performance management space. It has been built with proven SAP technologies. The back end of the application resides on top of standard BW; that's where the data model gets deployed. The user experience is managed through the Adobe Flash based rich client user interface.

The application's reliance on standard BW allows our customers to leverage their in-house IT investments or implementation partner expertise in the long-standing BW technologies.

Here is an overview architecture diagram:

The structure of the BW content is depicted in the diagram below. There are three "data layers": the inbound layer, which is more of a staging layer; the detail layer, which stores data at source system transactional granularity, with some data enhancements through BW transformations; and the reporting layer, which has all of the Spend-specific cubes.

The source of the diagram above is the SAP help site, BI Content Documentation. Please visit it for additional details.

Over time I realized that I was responding to the same questions and queries on the Spend extractor starter kit. 

Questions like:
* What source systems does the extractor starter kit support?
* What are the different options for transferring data from source systems?
* Where can I download this starter kit?
* Where can I find additional documentation?

So I collated the most commonly asked questions and posted their answers at the following wiki location:

I will refresh the wiki with additional questions and answers as we go along.

Your suggestions/comments/questions are welcome.


A couple of years ago, needing a quick prototyping setup, I created a very basic Perl script for removing non-ASCII characters from a data file that I wanted to upload into BW. This script helped me get around those upload failures typically associated with special characters. This is especially handy if your sandbox BW installation is not Unicode enabled.

I had shared this script with colleagues and partners, who have used it for prototyping, proofs of concept (PoC), demos, etc. They found it useful. It makes my life very easy, especially for those characters which cannot be RSKC escaped. Please bear in mind that this is not a solution meant for production usage.

Assuming no knowledge of Perl, I have listed out the steps you would need to follow from start to finish. I am also assuming a Microsoft Windows based OS environment.


    1) Get perl (I downloaded the free version from this website: http://www.activestate.com/activeperl/ )
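The original Perl script is not reproduced here, but what it does can be sketched in a few lines of Python. Like the Perl original, this is a prototyping helper only, not meant for production use:

```python
def strip_non_ascii(text):
    """Keep only 7-bit ASCII characters, dropping everything else.
    Crude by design: non-ASCII characters are removed, not transliterated."""
    return "".join(ch for ch in text if ord(ch) < 128)

# Example: clean a data line before uploading it to a non-Unicode BW sandbox.
raw = "Müller GmbH;São Paulo;1200.50"
print(strip_non_ascii(raw))
```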

    After the completion of the business blueprinting phase and before starting the SAP Spend Performance Management (SPM) implementation, hardware sizing is highly recommended. The official Quick Sizer project for SPM helps you determine the right hardware sizing. Refer to note 1253768 for further explanation of how to use the Quick Sizer tool.


    Before using the tool, it helps to do some groundwork and find the data volumes for transaction and master data. Make sure you take into account multiple source systems and external data feeds (like bank feeds for expenses) based on the business scenario. The time frame for extracting the data should also be considered, based on feedback from the business users: for example, the last five years, or 2008 to the current date. Based on the industry segment and type of business, an appropriate time frame can be chosen. The best practice recommendation is to go back only to the point of relevance for analyzing spend and avoid "data overloading". For example, freight spend is heavily impacted by oil prices and suffers a lot of volatility that is not necessarily predictable, so a timeframe going back only a couple of years might suffice. Following is the list of information that needs to be available to use the SPM Quick Sizer.


    Transactional information:
    1)      Number of Invoice Lines
    2)      Number of pCard and T&E expense lines
    3)      Number of Contract Lines
    4)      Number of Scheduling Agreement Lines
    5)      Number of Purchase Order Lines
    6)      Number of Delivery Lines
    7)      Number of Budgeting and Forecasting Lines
    8)      Number of Project Lines
    9)      Number of lines for commodity/market pricing information 


    Master Data information:
    1)      Number of Item/Products
    2)      Number of Categories
    3)      Number of Suppliers
    4)      Number of Buyers
    5)      Number of Buying Organizations
    6)      Number of Cost Centers
    7)      Number of GL Accounts
    8)      Number of Site/Plants
    9)      Number of Management Orgs 


    The above list represents the majority of the sizing volume inputs that will be required. Remember to inflate the above numbers appropriately to accommodate future loads until the next hardware upgrade is planned. In addition, consider the number of concurrent users productively using the SPM application. Now, armed with the above information, log into the SPM Quick Sizer tool and follow the instructions in note 1253768 to get the recommended hardware specifications. For more information on how to use and understand the Quick Sizer tool, check out the following blog links: Efficient SAP Hardware Sizing: Quick Sizer and Quick Sizer - Getting started.
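The "inflate the numbers" groundwork can be sketched as follows. The volumes, growth rate, and planning horizon are made-up illustrative values, not SAP guidance:

```python
# Illustrative transactional line counts gathered during groundwork.
line_counts = {
    "invoice_lines": 4_000_000,
    "purchase_order_lines": 1_500_000,
    "contract_lines": 50_000,
}
total = sum(line_counts.values())

# Inflate for future loads until the next hardware upgrade:
# assumed 20% yearly growth over a 3-year horizon.
growth_rate, years = 0.20, 3
inflated = round(total * (1 + growth_rate) ** years)
print(total, inflated)
```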


    Typically, SPM can be deployed either on an existing BW instance (which includes other analytics) or in a standalone setting where it doesn't compete for system resources. Both are valid deployment scenarios. The second scenario is straightforward: take the recommendations from the Quick Sizer tool, validate them with the system administrators, and commission the machine. In the first scenario, the hardware sizing results can be used to extend the existing resource allocations. The disk space is additive to the existing configuration, but the SAPS and RAM might not be, especially if the business users of the other analytics (that SPM shares the box with) do not have the same work hours. Remember, it's the concurrent usage that typically drives the RAM and SAPS numbers.


    Happy sizing!!! 


    (Remember, additional information can be found at the SPM doc location in the Service Marketplace: link.)