
I received some encouraging feedback and a few interesting queries regarding my last post about source data aggregation (Source Data Aggregation in Bank Analyzer). In this post I try to answer them and elaborate on some of the points discussed earlier.

Let us go back to the initial discussion. The FPO is built at a financial product level, and I had raised the point that BA does not give the flexibility to build it at a higher level. The reason lies in the product history. In its initial version, BA was used for the merge scenario. The main idea of the merge scenario is that the complete local GAAP accounting environment is transferred directly from the external systems into the SDL, by importing the positions into the position object in the SDL. Only the IFRS/IAS GAAP-specific calculations are done in the PML and posted to the RDL. These postings are later merged with the position data from external accounting to arrive at the overall position. Since the scope was limited to calculating GAAP-specific key figures for financial instruments, which mostly occur at a financial product level, the FPO was built at the level of a financial transaction (FT).
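To make the merge idea a little more concrete, here is a minimal sketch in Python. It is purely illustrative: the structures and names (Position, merge_positions, the sample accounts and amounts) are my own assumptions and do not reflect Bank Analyzer's actual data model or APIs. It only shows how local GAAP positions held in the SDL and IFRS-specific delta postings from the RDL can be combined into an overall IFRS position per contract.

```python
from dataclasses import dataclass

# Hypothetical, simplified structures -- not Bank Analyzer's actual data model.
@dataclass
class Position:
    contract_id: str   # the financial transaction / product
    gl_account: str
    amount: float

# Local GAAP positions delivered by external accounting into the SDL position object
sdl_positions = [
    Position("LOAN-001", "140000", 100000.00),
    Position("LOAN-002", "140000", 250000.00),
]

# IFRS/IAS-specific delta postings calculated in the PML and stored in the RDL,
# e.g. an effective-interest or impairment adjustment per contract
rdl_ifrs_deltas = [
    Position("LOAN-001", "140000", -1200.00),
    Position("LOAN-002", "140000", 800.00),
]

def merge_positions(local_gaap, ifrs_deltas):
    """Merge local GAAP positions with IFRS-specific postings to derive
    the overall IFRS position per contract and GL account."""
    merged = {}
    for pos in local_gaap + ifrs_deltas:
        key = (pos.contract_id, pos.gl_account)
        merged[key] = merged.get(key, 0.0) + pos.amount
    return merged

if __name__ == "__main__":
    for (contract, account), amount in merge_positions(sdl_positions, rdl_ifrs_deltas).items():
        print(f"{contract} / {account}: overall IFRS position {amount:,.2f}")
```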

Later on, the scope of BA was enhanced to make it a complete subledger, and the AFI scenario was introduced. Most of us are more familiar with the AFI scenario: it caters to complete accounting without the need to fall back on transactions that have already been recorded in local GAAP. Still, the underlying architectural concept remains the same, i.e. the FPO is built at a financial product level.

For certain products like deposits or checking accounts, postings at the granularity of a FT are not needed. Moreover, the AFI process runs into performance issues due to the large number of BTs to be processed. To avoid these performance issues, SDA was introduced. And since the underlying architecture remains the same, SDA takes a workaround: it aggregates the FTs and builds an aggregation object, uses the position object in the SDL to store the aggregated BT values (similar to the merge scenario), and has any FT-level analytical results, such as accruals, delivered from an external system and aggregated for storage at the aggregation object level. Because of this workaround approach, every process in AFI needs an additional variant to process the aggregated object, and new processes are introduced to prepare the data at the aggregation object level. The result is a complex process chain.
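As a rough illustration of what such an aggregation does, here is a small Python sketch. Again, all names and field choices (BusinessTransaction, the key fields, the product types) are my own assumptions rather than BA objects; the point is only that large volumes of FT-level BTs collapse into a handful of aggregation objects keyed by coarse characteristics, which is what relieves the downstream AFI processes.

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical, simplified structure -- illustrative only, not BA's data model.
@dataclass
class BusinessTransaction:
    account_id: str    # individual checking/deposit account (FT level)
    product_type: str  # e.g. "CHECKING", "DEPOSIT"
    currency: str
    amount: float

def aggregate(bts, key_fields=("product_type", "currency")):
    """Aggregate FT-level business transactions into aggregation objects
    keyed by coarse characteristics, so downstream processing sees one
    aggregated position per key instead of one BT per account movement."""
    totals = defaultdict(float)
    for bt in bts:
        key = tuple(getattr(bt, f) for f in key_fields)
        totals[key] += bt.amount
    return dict(totals)

bts = [
    BusinessTransaction("ACC-1", "CHECKING", "EUR", 120.50),
    BusinessTransaction("ACC-2", "CHECKING", "EUR", -35.00),
    BusinessTransaction("ACC-3", "DEPOSIT",  "USD", 900.00),
]

# Many individual BTs collapse into a few aggregation objects
print(aggregate(bts))
# {('CHECKING', 'EUR'): 85.5, ('DEPOSIT', 'USD'): 900.0}
```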

Every product is built with a vision to solve a particular problem or fulfil a business need, and certain underlying assumptions are made at that time. As the product evolves, it tries to cater to broader requirements. These are easy to accomplish if they are in sync with the initial assumptions; if not, a lot of additional work results. I think SDA is an example of this.

And when I think about AFI on HANA, I see similar challenges. I can imagine that new products such as LRM on HANA or ALM are built to utilize the HANA capabilities and are in sync with the new vision. And vision is a time-dependent attribute.

What is your opinion?