As most of you probably know, BPC NW on HANA has been available for about two years now. Initially, the focus of BPC on HANA was on accelerating reporting, which yielded some impressive results compared to BPC NW on a classic database.

It was, however, always clear that the calculations would need to be accelerated as well in order to deliver on the vision of truly agile planning and closing processes, enabling customers to react faster to changing business conditions. This becomes even more important as Big Data enters the space, promising great benefits for our existing planning processes. Consequently, recent BPC service packs have brought significant enhancements by accelerating allocations and dimension formulas through HANA. In every project, however, there will still be the need for at least one complex, sophisticated calculation that is at the heart of the process and can only be implemented with custom logic.

Customer Situation

At a customer project we faced exactly that situation: several complex calculations needed to run in default logic. One example was the detailed labor calculation, with a large number of job families, each with different rates and overtime factors, and a large number of drivers such as shifts per day, coverage per operation, etc. To complicate things further, the drivers and rates were planned at different levels of the hierarchy (e.g. some for each individual job family, others for groups of job families).

Since there were still uncertainties about the functional details of the calculation, we decided on an agile approach, going through several iterations while working closely with the end users for validation. For this, we chose Script Logic, since it was fast to write and easy to adapt throughout the iterations. Of course, we always anticipated that the Script Logic would ultimately struggle to meet the performance requirements, but once we had a calculation in place whose results matched the users' expectations, the script would still serve as a very detailed specification for the subsequent optimization tasks.

Once we got to this point, the most common approach would have been to re-implement the calculation logic in a custom ABAP BAdI, which would read the data from the cube, loop over it, query additional reference data, calculate the result, and finally save it back to the cube.

However, since we were using BPC on HANA, we wondered whether we could achieve the same result by implementing and executing the calculation directly in HANA, thereby leveraging HANA's massive parallelization and optimization while avoiding the round trip of first bringing the data up to the ABAP layer and then the result back down into the database.

Custom Native HANA Calculations with BPC

The good news is that since BPC 10.0 SP11 there has been an option to create a native HANA data model (SAP Note 1902743), originally intended to improve write-back performance and used internally, for example, for allocations. As a side effect, the native HANA model also gives us the ability to interact directly with the BPC data model in the HANA database:

  • Queries are possible on the BPC-generated OLAP views for each cube ($BPC$V$OLAP_*)
  • Write-back is possible using a BPC-generated table for each cube that is periodically merged back into the OLAP cube. $BPC$P$* is the table, while $BPC$V$OLAP_*P provides an OLAP view on top. The write-back data is stored as a delta to the main cube.
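To make this concrete, here is a minimal sketch of how the full current state of a cube can be read in plain SQL: the main OLAP view unioned with the write-back view, which holds the not-yet-merged delta. All names (a cube called ZPLAN and its dimension columns) are hypothetical and depend on your model:

    -- Current cube state = main OLAP view plus not-yet-merged write-back delta.
    -- ZPLAN and the dimension columns are invented; adapt to your model.
    SELECT "ENTITY", "CATEGORY", "TIME", "JOBFAMILY",
           SUM("SIGNEDDATA") AS "SIGNEDDATA"
    FROM (
        SELECT "ENTITY", "CATEGORY", "TIME", "JOBFAMILY", "SIGNEDDATA"
        FROM "$BPC$V$OLAP_ZPLAN"        -- generated OLAP view on the cube
        UNION ALL
        SELECT "ENTITY", "CATEGORY", "TIME", "JOBFAMILY", "SIGNEDDATA"
        FROM "$BPC$V$OLAP_ZPLANP"       -- OLAP view on the write-back delta
    ) AS u
    GROUP BY "ENTITY", "CATEGORY", "TIME", "JOBFAMILY";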

These two resources allowed us to come up with an approach: implement the calculations in native HANA SQLScript procedures, triggered through an ABAP BAdI, which in turn is (as usual) triggered via Script Logic. The ABAP BAdI also passes the calculation context (e.g. Entity, Category, Time) to the HANA procedure in order to limit the calculation's data range.
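As a sketch of this hand-off, the procedure interface could look as follows. The procedure name and the single-value context parameters are illustrative assumptions (a real context may contain multiple members per dimension, e.g. passed as a table parameter):

    -- Illustrative interface: the BAdI passes the calculation context so the
    -- procedure only touches the data range the user is working on.
    CREATE PROCEDURE "ZBPC_LABOR_CALC" (
        IN iv_entity   NVARCHAR(32),   -- e.g. 'PLANT_01'
        IN iv_category NVARCHAR(32),   -- e.g. 'PLAN'
        IN iv_time     NVARCHAR(32)    -- e.g. '2014.01'
    )
    LANGUAGE SQLSCRIPT SQL SECURITY INVOKER AS
    BEGIN
        -- Step 1: read the current cube state restricted to the context
        -- (in the actual design through a calculation view, see below).
        lt_current = SELECT "ENTITY", "CATEGORY", "TIME", "JOBFAMILY",
                            "SIGNEDDATA"
                     FROM "$BPC$V$OLAP_ZPLAN"
                     WHERE "ENTITY"   = :iv_entity
                       AND "CATEGORY" = :iv_category
                       AND "TIME"     = :iv_time;
        -- Steps 2 (calculation) and 3 (delta write-back) are sketched below.
    END;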

The HANA procedure queries the data through a calculation view that unions the BPC cube's OLAP view with the delta table, thereby including the write-back data that hasn't yet been merged back into the main table. The calculation is then performed in SQLScript, relying either directly on SQL statements or on procedural features such as loops.
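To give a flavor of the set-based style, a simplified driver-times-rate step for the labor case could look like this inside the procedure (the calculation view CV_ZPLAN_UNION, the rates table ZRATES, and all columns are invented for illustration):

    -- Intermediate results are held in table variables and combined via joins.
    lt_drivers = SELECT "ENTITY", "CATEGORY", "TIME", "JOBFAMILY",
                        "SIGNEDDATA" AS "HEADCOUNT"
                 FROM "CV_ZPLAN_UNION"   -- calc view unioning cube and delta
                 WHERE "ENTITY"   = :iv_entity
                   AND "CATEGORY" = :iv_category
                   AND "TIME"     = :iv_time;

    -- Multiply each driver by its job family's rate and overtime factor.
    lt_result = SELECT d."ENTITY", d."CATEGORY", d."TIME", d."JOBFAMILY",
                       d."HEADCOUNT" * r."RATE" * r."OT_FACTOR" AS "SIGNEDDATA"
                FROM :lt_drivers AS d
                INNER JOIN "ZRATES" AS r
                    ON r."JOBFAMILY" = d."JOBFAMILY";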

At the end, the procedure calculates a delta between the result data and the current cube data and writes it back into the BPC-generated write-back table.
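Again with invented names, a sketch of this final step: because the write-back table stores deltas, only the difference between the new result and the current cube state is inserted:

    -- Write back only the difference; $BPC$P$* stores deltas to the main cube.
    INSERT INTO "$BPC$P$ZPLAN"
        ("ENTITY", "CATEGORY", "TIME", "JOBFAMILY", "SIGNEDDATA")
    SELECT n."ENTITY", n."CATEGORY", n."TIME", n."JOBFAMILY",
           n."SIGNEDDATA" - IFNULL(c."SIGNEDDATA", 0) AS "SIGNEDDATA"
    FROM :lt_result AS n
    LEFT OUTER JOIN :lt_current AS c
        ON  c."ENTITY"    = n."ENTITY"
        AND c."CATEGORY"  = n."CATEGORY"
        AND c."TIME"      = n."TIME"
        AND c."JOBFAMILY" = n."JOBFAMILY"
    WHERE n."SIGNEDDATA" - IFNULL(c."SIGNEDDATA", 0) <> 0;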

Of course, relying on the BPC-generated data model in HANA comes with downsides: it is sensitive to technical name changes, and it effectively circumvents the BPC write-back security. The first point is addressed in the design through the use of calculation views as an additional abstraction layer, and could be solved completely by enabling technical name stabilization for the BPC model. With regard to security, it has to be noted that the calculation (including write-back) is restricted to the context the current user is working on and is authorized for. If this is still a concern, it would always be possible not to write back the data directly, but to return it to the ABAP BAdI, which could then write it through the proper BPC processes, including security checks.
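A read-only variant of the interface might then look like this (again an illustrative sketch; the inline table type in the signature assumes a sufficiently recent SQLScript release, otherwise a separate CREATE TYPE would be needed):

    -- Alternative: no direct write-back. The BAdI receives the result table
    -- and posts it through the standard BPC write-back with security checks.
    CREATE PROCEDURE "ZBPC_LABOR_CALC_RO" (
        IN  iv_entity NVARCHAR(32),
        OUT et_result TABLE ("ENTITY"     NVARCHAR(32),
                             "CATEGORY"   NVARCHAR(32),
                             "TIME"       NVARCHAR(32),
                             "JOBFAMILY"  NVARCHAR(32),
                             "SIGNEDDATA" DECIMAL(21,7))
    )
    LANGUAGE SQLSCRIPT SQL SECURITY INVOKER READS SQL DATA AS
    BEGIN
        et_result = SELECT "ENTITY", "CATEGORY", "TIME", "JOBFAMILY",
                           "SIGNEDDATA"
                    FROM "CV_ZPLAN_UNION"
                    WHERE "ENTITY" = :iv_entity;
    END;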

Benefits & Conclusion

By leveraging native HANA features for calculations, the approach provides additional performance benefits over the use of ABAP. We’ve used HANA as our primary calculation tool and have implemented different kinds of calculations, such as:

  • Data transformation and data movements from one cube to another
  • Driver-based calculations, as in the labor example described above
  • Carry-forward calculations, where the opening balance of one month is determined by the closing balance of the previous month (see the sketch below)
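As an illustration of the last pattern, a carry-forward can often be expressed set-based with a window function instead of a month-by-month loop (simplified, reading from an invented view of closing balances and assuming a HANA release with window function support):

    -- Opening balance of each month = closing balance of the previous month,
    -- computed in a single pass rather than in a loop over periods.
    SELECT "ENTITY", "ACCOUNT", "TIME",
           LAG("CLOSING_BALANCE") OVER (
               PARTITION BY "ENTITY", "ACCOUNT"
               ORDER BY "TIME"
           ) AS "OPENING_BALANCE"
    FROM "CV_CLOSING_BALANCES";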

None of our calculations ran for more than 1-2 seconds for the query, calculation, and write-back of one Entity, generating hundreds of thousands of records, and that is before any additional optimization of the SQL statements. Of course, the Script Logic and ABAP layers add a small overhead, but that overhead would exist just the same if the logic were implemented in ABAP. Compared with our initial Script Logic implementation, the performance benefits were tremendous: one batch job that had previously run for 6 minutes now completes in 5 seconds, including all overhead.

Beyond performance, an additional benefit is that the approach allows us to leverage advanced HANA functionality, such as predictive algorithms or Smart Data Access to other data sources, within our BPC planning processes. Combined with the performance improvements, this will be extremely interesting for Big Data scenarios in particular. And even though the approach can be used to speed up any long-running, complex calculation, it positions BPC on HANA as the planning tool of choice for these kinds of projects.

I’m planning to write a follow-up with more technical information and a few of the code snippets we used. In the meantime, I’d be really interested to hear your opinions on this!

_____________________________________

Obviously, everything we do here is not officially supported by SAP. Hence, just like most ABAP functionality that we commonly use to interact with BPC, it has to be re-verified as part of regression testing for every BPC update.
