Financial Management Blogs by SAP
Get financial management insights from blog posts by SAP experts. Find and share tips on how to increase efficiency, reduce risk, and optimize working capital.


In this series of blog posts we will explore several topics related to the sizing of the SAP BPC 10.0 and 10.1 (standard model) NW application.

To see the first post (Part 0: Preliminary Information), in which we present a brief recap of the standard sizing guide and methodology, please click here: http://scn.sap.com/community/epm/planning-and-consolidation-for-netweaver/blog/2015/05/18/topics-in-...

In this post (Part 1: Limitations of the Standard Sizing Guide) we will explore some of the sizing challenges raised by the flexibility of the BPC application itself.

Part 1: Limitations of the Standard Sizing Guide


The BPC application itself is extremely flexible. In addition to giving customers control over the basic implementation structure (e.g. the dimensions, the structure of their hierarchies, all data access profiles and rules, etc.), BPC provides a wealth of powerful customization options (by way of member formulas, script logic, SQE and WriteBack BAdIs, etc.). Application users, too, have a tremendous amount of flexibility in their reporting and query design. This flexibility is an important part of BPC's overall power and utility.

This flexibility, however, raises challenges for sizing. Each of these dimensions of customization affects the overall complexity of the application, and therefore calls for special consideration when sizing. Some such considerations include:

  • The level of complexity of the calculations taking place on the application server and the database.

    There is significant variability in calculation complexity, governed by the use of custom member formulas, script logic, BAdIs, etc. Even within each of these customization methods there is a large variance in complexity, e.g. a custom member formula can be anything from a simple sum to a complex nested formula spanning multiple dimensions.

    The sizing guide attempts to divide the complexity of a BPC implementation into one of three categories, but there are simply too many dimensions along which BPC can be customized for such a categorization to be exhaustive.

  • The nature and complexity of user profiles.

    The sizing guide addresses sizing in terms of user profiles, each of which captures a particular user activity in the system. The per-user demand of each profile, combined with the distribution of concurrent users across these profiles, determines the overall sizing requirements of the BPC application.

    But these profiles do not exhaustively cover the tasks that can be performed in a BPC system. Nor do they take into account the varying complexity of the tasks they do include; again, complexity is divided into only three categories. For example, the system demands of the user profile for "Run Report (EPM Add-in Client)" depend heavily on the complexity of the report being run, and this may not be adequately captured by a single user profile.

    And as we will explore in a future section, the "think time" – i.e. the time between the repetitions of a user profile's tasks – has a very important impact on the CPU requirements associated with a given user profile. The think time is baked into the user profiles described in the BPC sizing guide, and the guide provides no way to customize these values when computing the sizing.

  • Use of the BW vs. HANA MDX engines.

    For BPC 10.0 / 10.1 NW on HANA systems, the choice of an MDX engine has an impact on the sizing requirements of the application server and of the database. Opting for the BW MDX engine will result in a heavier CPU requirement in the application server, while opting for the HANA MDX engine will shift some of that computational burden to the database. Currently the sizing guide does not distinguish between these two customization scenarios.

  • The complexity of Excel-side calculations, e.g. through formatting or the use of complex VBA logic.

    The performance of a report is governed not only by its complexity from a query standpoint, but by its client-side complexity as well. Reports of significant Excel complexity (formatting, VBA usage, etc.) may see poor performance regardless of the application server and database sizing. While this is not a traditional concern of system sizing, it is a customer pain point worthy of awareness.

  • The data volume level in reporting and planning.

    The performance of reporting and planning can also be impacted by the volume of data being sent over the network. Again, this is not a problem that can be alleviated by adjusting the sizing of the client- or server-side systems, but it is a topic worth understanding when assessing the overall performance of the system.
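
To make the profile-based model from the second consideration above concrete, the calculation can be sketched numerically. All figures below are hypothetical placeholders, not values from the sizing guide (whose per-profile numbers come from SAP's internal measurements); only the structure of the calculation – a concurrency-weighted sum of per-user demand across profiles, plus a headroom factor – reflects the approach described here.

```python
# Sketch of a profile-based sizing calculation. All numbers are illustrative
# assumptions; the real per-profile figures in the BPC sizing guide are not
# publicly derivable.

# Hypothetical per-user demand (in SAPS) and assumed concurrent-user counts.
profiles = {
    # profile name: (SAPS per concurrent user, concurrent users)
    "Run Report (EPM Add-in Client)": (12.0, 80),
    "Input Data (Planning)":          (18.0, 30),
    "Run Data Manager Package":       (45.0, 5),
}

# Overall demand is the concurrency-weighted sum across all profiles.
total_saps = sum(per_user * users for per_user, users in profiles.values())
print(f"Total application-server demand: {total_saps:.0f} SAPS")

# Systems are not sized for 100% load; a target utilization adds headroom.
target_utilization = 0.65
required_capacity = total_saps / target_utilization
print(f"Required capacity at {target_utilization:.0%} utilization: "
      f"{required_capacity:.0f} SAPS")
```

Note how the result is dominated by the product of per-user demand and concurrency: a profile whose real complexity (and thus per-user demand) differs from the guide's assumed category shifts the total proportionally.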


Many of the considerations above illustrate limitations of the standard BPC sizing guide.

 

The guide itself is further hindered by the fact that the relevant environment and testing details are not publicly available. So while the guide may describe the complexity level of each category, and while it presents some general guidelines around the complexity of the reporting objects that were tested, it is difficult to assess exactly how well the template environment and objects match up to those being sized. In other words, actual customer reports could be very different from those used in the standard sizing guide. And with no visibility into the raw computations used to produce the sizing guide's tables, it is impossible to tweak individual parameters (like the think time) to further customize the sizing results.
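
To see why visibility into a parameter like think time matters, consider a simple closed-loop model (a standard approximation, not the guide's actual computation): a user who repeats a task consuming a given amount of CPU time once every (response time + think time) seconds occupies that fraction of a core. The numbers below are purely illustrative assumptions.

```python
# Illustrative think-time sensitivity (hypothetical numbers, not taken from
# the BPC sizing guide). In a closed loop, a user re-runs a task every
# (response_time + think_time) seconds, so the per-user CPU utilization is
# roughly cpu_seconds / cycle_time.

def per_user_cpu_utilization(cpu_seconds, response_time, think_time):
    """Approximate fraction of one CPU core consumed by a single looping user."""
    return cpu_seconds / (response_time + think_time)

cpu_seconds = 2.0    # assumed CPU time consumed per report execution
response_time = 5.0  # assumed end-to-end response time in seconds

for think_time in (30.0, 60.0, 300.0):
    u = per_user_cpu_utilization(cpu_seconds, response_time, think_time)
    print(f"think time {think_time:>5.0f}s -> {u:.4f} cores per user")
```

Under these assumptions, doubling the think time roughly halves the per-user CPU requirement, which is why a sizing baked around one fixed think time can substantially over- or under-size a system whose users behave differently.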

 

The BPC sizing guide is a good place to start when sizing the BPC 10.0 / 10.1 NW application, but should any of the above or similar considerations arise, it may be necessary to engage in expert sizing, i.e. sizing tailored directly to the customer's scenario.

 

In the next section we will explore the basic logic behind using a sandbox environment to estimate CPU sizing, both in terms of the number of cores and in terms of a rough SAPS measurement.