
Real-time visibility into your Business Suite processes is key to operational excellence. SAP Operational Process Intelligence provides real-time visibility into your processes no matter where they run. To learn more about SAP Operational Process Intelligence, visit this space.

You can gain visibility into your Business Suite processes with SAP Operational Process Intelligence using either Process Observer or an observed process.

Note: With an observed process, SAP Operational Process Intelligence enables you to use operational data from third-party (non-SAP) back-end systems to model your business scenario. These back-end systems do not need to have process definitions, or even a "process" as a building block of their data flow. You have to extract, replicate, and manually transform this operational data into the Process Event Log format so that SAP Operational Process Intelligence can consume it in your business scenario.


Process Observer

  1. Model the process definition in the Business Suite using Process Observer.
  2. Model a business scenario in SAP Operational Process Intelligence by dragging and dropping the Process Observer process.
  3. Replicate the Process Observer data from the Business Suite to SAP HANA using a replication technology such as SLT, SDI, or Data Services.

With this approach, you need to be proficient in ABAP programming, Process Observer, and a replication technology such as SLT, SDI, or Data Services.

Observed Process

  1. Model the business scenario in SAP Operational Process Intelligence by defining the observed process.
  2. Use SDI to transform and replicate transactional data from the Business Suite into the Business Process Analytics Format (BPAF). BPAF is a standard for the interchange of process analytics data that has been adopted by SAP Operational Process Intelligence.

With this approach, you need a good understanding of the underlying tables that store the transactional data, the relationships between those tables, and SDI. This is a good approach because no change is needed in your Business Suite system.
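To make the target format concrete, the following is a minimal sketch of what a BPAF-style process event log table could look like. The table and column names are illustrative, derived from the core attributes BPAF defines (event ID, process and activity identifiers, state, timestamp); the actual schema used by SAP Operational Process Intelligence may differ.

-- Minimal, illustrative BPAF-style process event log table. Column names are
-- assumptions based on the BPAF standard's core attributes, not the exact
-- schema generated by SAP Operational Process Intelligence.
CREATE COLUMN TABLE PROCESS_EVENT_LOG (
    EVENT_ID               NVARCHAR(36) PRIMARY KEY, -- unique event identifier
    PROCESS_DEFINITION_ID  NVARCHAR(32),             -- e.g. 'ORDER_TO_CASH'
    PROCESS_INSTANCE_ID    NVARCHAR(32),             -- e.g. the sales order number
    ACTIVITY_DEFINITION_ID NVARCHAR(32),             -- e.g. 'SALES_ORDER_CREATED'
    ACTIVITY_INSTANCE_ID   NVARCHAR(32),
    STATE                  NVARCHAR(32),             -- BPAF state, e.g. 'Closed.Completed'
    EVENT_TS               TIMESTAMP                 -- when the transaction occurred
);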

This approach has also been used in the smart process application Omni Commerce Intelligence (OCI), which is available in the SAP content hub as sample content for SAP Operational Process Intelligence. This smart process application provides real-time visibility into the order-to-cash process in the Business Suite. You can download, deploy, and configure it with minimal effort using the guide in the SAP content hub.

One of the big challenges in an SAP Operational Process Intelligence implementation is transforming and replicating data from various sources into SAP HANA, which can be simplified by using SAP HANA Smart Data Integration (SDI). Hence, I will focus on how to use SDI to transform and replicate transactional data from the Business Suite into the BPAF format.


Transform and replicate transactional data from the Business Suite into the BPAF format using Smart Data Integration

Let us walk through the steps you would need to perform to achieve this goal, using the order-to-cash process in the Business Suite as an example.


Procedure

Step 1: Identify the transactions

The first step is to come up with the list of transactions that need to be captured. For the order-to-cash process, the following transactions commonly occur:

  • Creation of new sales order
  • Completion of outbound delivery
  • Completion of pick & pack
  • Completion of Post Goods Issue (PGI)
  • Completion of billing/invoice
  • Completion of payment

Note: Depending on the customization in your system, the above list may vary or be extended.

Step 2: Identify the tables

For each transaction,

  1. Identify the table that can act as the data source table, i.e., the table that shows an identifiable impact (data added, deleted, or modified) when the relevant transaction occurs. This table will act as the data source table in the hdbflowgraph. An hdbflowgraph is a type of artifact in Smart Data Integration that has to be modelled to achieve the transformation and replication into SAP HANA. To get a glimpse of what a simple hdbflowgraph looks like, refer to this article.
  2. To load the process context table, identify the table(s) that store the required contextual information. These tables may vary across transactions depending on the information that needs to be captured.
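As an illustration, the sketch below lists typical candidate source tables for the order-to-cash transactions from step 1, together with a simple probe query. The mapping is an assumption based on a standard Business Suite configuration and should be verified against your release and customization.

-- Typical candidate data source tables (assumption; verify in your system):
--   Creation of new sales order    -> VBAK (sales document header)
--   Outbound delivery, pick & pack -> LIKP / VBUK (delivery header, statuses)
--   Post Goods Issue (PGI)         -> VBUK (goods movement status)
--   Billing/invoice                -> VBRK (billing document header)
--   Payment                        -> BSAD (cleared customer line items)

-- Quick probe: inspect the latest entries in a candidate source table.
SELECT TOP 50 VBELN, ERDAT, ERZET, AUART
  FROM VBAK
 ORDER BY ERDAT DESC, ERZET DESC;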

Step 3: Model & execute the hdbflowgraph

After the tables are identified, the next step is to implement the hdbflowgraph for each transaction. The hdbflowgraph should include not only the transformation logic but also the logic to verify that the transaction of interest has occurred. The reason is that the tables in the Business Suite system do not store transactions the way they are stored in the process event log table; moreover, a source table may hold too much data to uniquely identify the occurrence of a single transaction. Hence, a direct replication will not suffice: the hdbflowgraph should also include logic to detect whether the transaction of interest has occurred, using transformation nodes such as filter, join, and lookup. Once the verification succeeds, the required data should be transformed and loaded into the process event log table.
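To make this detect-and-load pattern concrete, here is a minimal SQL sketch for the sales order creation case, assuming the illustrative PROCESS_EVENT_LOG table shown earlier. In the real implementation this logic lives in flowgraph nodes rather than a hand-written statement, and the identifier and state mappings below are assumptions.

-- Detect new standard sales orders in VBAK and load matching BPAF events.
INSERT INTO PROCESS_EVENT_LOG
    (EVENT_ID, PROCESS_DEFINITION_ID, PROCESS_INSTANCE_ID,
     ACTIVITY_DEFINITION_ID, ACTIVITY_INSTANCE_ID, STATE, EVENT_TS)
SELECT VBELN,                 -- reuse the order number as event ID (assumption)
       'ORDER_TO_CASH',       -- illustrative scenario identifier
       VBELN,                 -- the sales order number keys the process instance
       'SALES_ORDER_CREATED', -- which scenario step occurred
       VBELN,
       'Closed.Completed',    -- BPAF state for a completed step
       TO_TIMESTAMP(ERDAT || ERZET, 'YYYYMMDDHH24MISS') -- creation date + time
  FROM VBAK
 WHERE VBTYP = 'C';           -- keep sales orders only; the example later also
                              -- checks TRVOG and the configured order type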

The same hdbflowgraph that is modelled to capture a transaction can be enhanced to transform and load the process context table as well.

Once done, run the hdbflowgraph to see the transformed data appear in the process event log and process context tables.
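For a quick sanity check after a run, you could query the target tables directly; for example, against the illustrative event log table sketched earlier:

-- Verify that events arrived in the process event log.
SELECT ACTIVITY_DEFINITION_ID, STATE, COUNT(*) AS EVENT_COUNT
  FROM PROCESS_EVENT_LOG
 GROUP BY ACTIVITY_DEFINITION_ID, STATE
 ORDER BY EVENT_COUNT DESC;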


Example

Let us see how the hdbflowgraph for the "Creation of a new sales order" transaction is implemented in the Omni Commerce Intelligence (OCI) smart process application; that is, whenever a new sales order is created in the Business Suite system, the process event log table should receive a corresponding event, and the process context table the relevant contextual data.

Tables used

You can see below the list of tables used in the modelling of the hdbflowgraph for this transaction.

  • VBAK (contains the sales document header data): Acts as the data source table in the hdbflowgraph and also contains some of the contextual information, such as Customer ID and Sales Organization.
  • VBKD (contains the sales document business data): Contains the payment-terms data required to compute the scenario cycle time.
  • T052 (contains the terms of payment): Provides the payment periods for each payment term.

Implementation/Modelling logic

1. VBAK is used as the data source table because a new entry gets added to this table whenever a new sales order is created in the source system. Note, however, that this table stores other types of sales documents as well; hence a filter transform has to be applied to keep only the standard sales orders.

This is done by the following condition:

"IN_VBAK"."VBTYP"='C' AND "IN_VBAK"."TRVOG"=0 AND "IN_VBAK"."AUART"="IN_VBAK"."STANDARD_ORDER_TYPE"

where STANDARD_ORDER_TYPE is the sales document type configured in the Business Suite system for standard sales orders. This value is not hard-coded but taken as an input, because the sales document type can vary across systems.

With this step, the data necessary to load the process event log table, as well as some of the context data, is obtained.

2. To compute the scenario cycle time, which is part of the process context table, the payment term of each sales order is required.

    a. After the filter for standard sales orders is applied, the lookup transform is applied on the VBKD table to identify the payment term for each sales order.
    b. As the payment term in VBKD is only the key/code that defines the payment terms, the lookup transform is applied on the T052 table to get the payment periods for each payment term.
    c. Finally, the output of step 2.b is added to the sales order creation date to compute the scenario cycle time.

3. Finally, a filter transform is used to shape the data so that it can be fed directly into the process event log and process context tables.
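Expressed as plain SQL, the combined lookup logic from steps 2 and 3 could look like the sketch below. The use of VBKD's header-level item 000000, T052's ZTAG1 column as the payment period, and the output column names are assumptions; in the actual content, the same joins are realized with flowgraph lookup nodes.

-- Sketch: derive the scenario cycle time for each standard sales order by
-- adding the payment period (looked up via VBKD and T052) to the creation date.
SELECT v.VBELN                                  AS SALES_ORDER,
       v.KUNNR                                  AS CUSTOMER_ID,
       v.VKORG                                  AS SALES_ORG,
       k.ZTERM                                  AS PAYMENT_TERM,
       ADD_DAYS(TO_DATE(v.ERDAT, 'YYYYMMDD'),
                t.ZTAG1)                        AS SCENARIO_DUE_DATE
  FROM VBAK AS v
  LEFT JOIN VBKD AS k
    ON k.VBELN = v.VBELN
   AND k.POSNR = '000000'      -- header-level business data (assumption)
  LEFT JOIN T052 AS t
    ON t.ZTERM = k.ZTERM       -- payment term key -> payment periods
 WHERE v.VBTYP = 'C';          -- standard sales order filter as described above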

The modelled flowgraph looks similar to this: [screenshot of the modelled hdbflowgraph]

The join transform applied on VBAK and the CONFIG table filters the data from the VBAK table based on given inputs; for example, it selects only the sales orders in VBAK for client ID "X" (MANDT). Note that the STANDARD_ORDER_TYPE explained above is also stored in the CONFIG table.
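A hedged sketch of this CONFIG-driven filter in plain SQL follows; the CONFIG table layout shown here is an assumption, since the sample content defines its own structure.

-- Illustrative CONFIG table holding system-specific parameters
-- (layout is an assumption, not the sample content's actual definition).
CREATE COLUMN TABLE CONFIG (
    MANDT               NVARCHAR(3),  -- client to observe
    STANDARD_ORDER_TYPE NVARCHAR(4)   -- AUART value for standard sales orders
);

-- The join replaces hard-coded literals with configurable values.
SELECT v.VBELN, v.ERDAT, v.KUNNR, v.VKORG
  FROM VBAK   AS v
  JOIN CONFIG AS c
    ON v.MANDT = c.MANDT
 WHERE v.VBTYP = 'C'           -- document category: order
   AND v.TRVOG = '0'           -- transaction group: sales order
   AND v.AUART = c.STANDARD_ORDER_TYPE;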

A similar approach is used to model the hdbflowgraphs for the other transactions as well.

Hence, you can now drive operational excellence in your Business Suite processes using SAP Operational Process Intelligence.
