niki.scaglione2


Overview

This blog discusses the architectural models offered by SAP Process Integration and how they will be enhanced with the SAP PI 7.3 release. It also describes the architectural approach that proved necessary for a PI scenario in which real-time messages must be processed under strict time constraints.
The SAP document PI Federation – An Overview gives a clear picture of the available landscape deployment options, which can be categorized as central, distributed, or federated. It describes the advantages and disadvantages of each model, helping you choose the best trade-off between expansion and consolidation. Given the new PI 7.3 features (lower hardware requirements thanks to a single engine, a faster setup process, and the AEX), it is likely that in the coming years more and more PI architectures will favor horizontal scaling, adding PI instances (federation) or decentral Adapter Engines (distribution), rather than vertical scaling, which mainly involves hardware upgrades (RAM, CPUs, application servers).

PI Architecture models

Scenario

At the end of my blog Handling Low Latency Scenarios with PI, I mentioned a real case in which it was mandatory to move away from the central architecture model to handle real-time scenarios. In short, the scenario exchanged messages between SAP and a Java controller for pallet movement, with low latency and strict delivery time frames. The problem, discovered during stress testing, was that running heavy-load scenarios on PI (e.g. master data distribution) had a significant impact on the real-time scenario: the real-time flow could not run on the same runtime environment as master data and other flows. To picture the problem, imagine a typical master data scenario (e.g. MATMAS) in which IDocs sent from R/3 with EOIO quality of service must be forwarded to two SAP systems after message mapping, and to other legacy systems via FTP as well. If you send 5,000 or more IDocs at the same time, the average delivery time of all PI messages increases significantly, regardless of the hardware you are running on.

Solution

The architecture model chosen to address the described performance limitation had, first of all, to provide and guarantee flow segregation and isolation; indeed, this is the main reason I followed the decentral Adapter Engine setup. The idea is that real-time flows running on the decentral Adapter Engine are never affected by the load on the central instance or by any maintenance activity performed on it. Achieving this, however, required a non-standard decentral Adapter Engine setup.
What I mean is that after the initial decentral Adapter Engine setup, the stress tests showed improved performance; on the other hand, analyzing the Wily Introscope charts I discovered that the User Management response time was still quite poor, even though the real-time scenario ran on the decentral Adapter Engine and MATMAS on the central one.

UME_Stress_test

Later I found out that it is possible to decouple the User Management of the Integration Server and the decentral Adapter Engine so that they work independently; this way the decentral Adapter Engine keeps working even when the Integration Server is unavailable. During the setup phase of the decentral Adapter Engine (it is not possible afterwards!) you can choose a local User Management (UME), as clearly described in the how-to guide How To Install and Configure a Non-Central Advanced Adapter Engine with Local User Management. With local UME, the users for SOAP authentication, together with the system users, are stored in the Java database, allowing a faster logon check at runtime.

Result

After installing the decentral Adapter Engine with local UME and running the stress test again, the final figures showed a great improvement. Note, however, that if both the central instance and the decentral Adapter Engine are hosted on the same hardware there is still a small effect on overall performance, so as a final tip I suggest installing them on separate hosts.
The pictures below show the processing time for the real-time scenario running on the decentral Adapter Engine with local UME, and a turnaround time measurement taken on the application side for a pair of messages (188 ms).

Wily results

Turnaround Time measurement

Next Steps

Finally, I want to share a picture showing the differences between the current and the new SAP PI 7.3 release in terms of components:

AEX 7_3

It seems the Advanced Adapter Engine Extended (AEX) will also be available as a stand-alone version or together with a decentral Adapter Engine (planned from 7.30 SP2), offering either a fully independent design and runtime environment or only a fully independent runtime environment, as an alternative to the central instance model. I am also curious to see whether the new release provides faster delivery times or an improved priority queue mechanism to dispatch high-priority messages.

That’s all, I hope this helps you.

Scenario

I was involved in a very challenging project where I had to deal with real-time scenarios under low latency constraints. This blog assumes you are working on SAP PI 7.1 EHP1, since it enables ABAP proxy processing via the SOAP adapter. The actors involved in the scenarios are SAP EWM, SAP PI, and a Java controller for pallet movement. Once the main process starts, messages are exchanged between the actors continuously, and each message created by the sender actor must be processed by the receiver actor within a maximum time frame. Any message that misses the deadline is discarded from an application perspective and must be sent again. There is no need to persist messages, and restarting messages in case of failure is not required.
The turnaround time, measured from when a message is sent by the sender actor until its application reply message has been processed, must be less than 500 ms.
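To make the requirement concrete, here is a minimal sketch (in Python, purely illustrative; in the real scenario EWM measures this via a standard transaction) of how a turnaround time can be computed by correlating each message with its application reply:

```python
import time

sent_at = {}  # message ID -> monotonic time the request was sent

def on_send(msg_id: str) -> None:
    """Record the send timestamp of an outgoing message."""
    sent_at[msg_id] = time.monotonic()

def on_reply(msg_id: str, max_turnaround_ms: float = 500.0):
    """Return (turnaround_ms, accepted) for the application reply.

    A reply arriving outside the time frame is discarded on the
    application side and the message must be sent again.
    """
    elapsed_ms = (time.monotonic() - sent_at.pop(msg_id)) * 1000.0
    return elapsed_ms, elapsed_ms < max_turnaround_ms

on_send("MSG-001")
time.sleep(0.05)                      # the reply arrives after ~50 ms
elapsed, accepted = on_reply("MSG-001")
print(round(elapsed), accepted)
```

The point of the sketch is only that the clock covers the full round trip, request delivery plus reply processing, which is exactly what the 500 ms budget constrains.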

Landscape Overview

Solutions

I want to share all the solutions I worked on, to give a wider overview of the topic and of the technology choices made. For the tests I worked on a PI environment with oversized hardware and no additional message load. On the sender side I developed an ABAP proxy method that sends out messages after filling a payload with a single field. Messages arriving at PI are simply routed through without any payload transformation, and the communication type is Exactly Once. The goal is to process messages as fast as possible while respecting the time constraint. The Java controller was developed with Java 1.6, whose integrated web services feature allows easy and fast web service (server/client) development; this blog focuses only on how to deal with low latency scenarios and does not cover code development. The Java controller processes messages within 10-15 ms, so I assume this component introduces no relevant latency into the complete scenario. SAP EWM and SAP PI were tuned to get the best results from the involved systems. The messages exchanged in this scenario are asynchronous: EWM or the Java controller sends a message and processes the application reply without any blocking task. The sender actor expects an application reply from the receiver actor for each message, but in the meantime other types of messages can be exchanged. From a PI perspective there is one scenario from EWM to the Java controller and one from the Java controller to EWM.

Solution 1: Abap Proxy - Integration Server - Http adapter (Asynchronous)

This solution is handled by the Integration Server, and messages are processed only on the ABAP stack. The Java controller is designed to work as an HTTP proxy server, so it exposes one or more HTTP communication channels to post and get data. Unfortunately the solution was not fast enough: 1,200 ms was the average turnaround time even with the system working exclusively on this scenario. The turnaround time is measured by a standard EWM transaction as the difference between the time a message is sent from EWM and the time its application reply, sent by the Java controller, has been processed. Analyzing the performance header section of the messages, I discovered that most of the time was spent in the qRFC schedulers of both EWM and PI, i.e. the time needed to get a message from the queue and dispatch it. Properly setting a filter for queue prioritization (transaction SXMB_ADM) gave no significant improvement either.

 

Solution 2: Abap Proxy (Soap) - Advanced Adapter Engine - Soap Adapter (Asynchronous)

Solution 2 was designed to bypass the PI qRFC scheduler and work only on the Advanced Adapter Engine. For this reason the Java controller was changed into a SOAP web server application, and the HTTP adapter (ABAP) was replaced by the SOAP adapter (Java). On the EWM side, the ABAP proxy is dispatched via the SOAP adapter using the XI protocol, which is enabled as of PI 7.1 EHP1. To configure both directions (outbound and inbound) I created two Integrated Configuration objects (ICO) to enable the faster, AAE-only asynchronous processing. Unfortunately solution 2 was still not fast enough: the average turnaround time was 700 ms. To enhance performance I also followed the tuning approach described in Mike Sibler's blog Tuning the PI Messaging System Queues, which improved performance but did not remove the qRFC scheduler limitation. To summarize, the lesson learned is that the qRFC scheduler is designed to maximize throughput, not to minimize latency!
I observed that when EWM has to manage many messages, the qRFC scheduler has a serious impact on overall performance: the retention period inside a message queue is not uniform and increases significantly as the number of qRFC queues grows. The message pipeline step that contains the time spent inside a queue is DB_ENTRY_QUEUING. The picture below shows the differences in time between messages sent from EWM; it is clear that this approach could never meet the project requirement either. The only way forward was to remove further persistence steps, so I turned to a synchronous solution.

DB_ENTRY_QUEUING

 

Solution 3: Abap Proxy (Soap) - Advanced Adapter Engine - Soap Adapter (Synchronous)

The reason for choosing the synchronous approach was to reduce the number of persistence steps. A very interesting point is that, besides changing the mode attribute of the service interface in the ESR to synchronous, no development adjustments were needed on either the EWM or the Java controller side. The picture below shows two messages delivered from EWM to the Java controller and vice versa, with two response messages generated as SOAP responses.

The turnaround time with this approach is 400 ms, finally meeting the project requirements.

 

Message

Summary and Improvements

I hope this blog gives you a qualitative, general overview of this topic. I honestly don't expect PI to work as a real-time system integrator; its main purpose is to handle a huge amount of messages and deliver them within a reasonable time. The results I achieved also have to account for network delay, which in such cases can be significant. Despite network delay peaks, the solution was tested with successful results, but before adopting it as a stable productive solution there was the need to change, not the development itself, but the PI runtime components: after stress test sessions, the scenarios were moved from the central Adapter Engine to a decentral Adapter Engine. When PI has to handle a heavy load of master data messages (e.g. MATMAS) involving the Integration Server and the Advanced Adapter Engine, even the synchronous solution had to be discarded for performance reasons. In the end, the result achieved in terms of performance and stability is excellent (approx. 200 ms), but a decentral Adapter Engine setup is mandatory for this case. This topic will be described in a different blog.

Many thanks to Sergio Cipolla and Sandro Garofano for their precious support.

 

An interesting feature enabled with PI 7.1 EHP1, mentioned in the SAP document <a href="http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/20c237f1-3caf-2b10-3a83-cce9ed5fbdf3?quicklink=index&overridelayout=true" target="_blank">New Features & Benefits for Enhancement Package 1 for SAP NetWeaver Process Integration 7.1</a>, is the User-Defined Message Search. It is very interesting since it allows searching for asynchronous messages by business-relevant criteria contained in the message payload. It also reduces TCO, since a separate TREX installation is not required. After defining filters and extractors, you just need access to the Message Monitor Java application to search the payload of messages executed on both the Integration Server and the local Advanced Adapter Engine.

Let's start with the required configuration steps and some useful remarks to quickly enable this feature.

System Configuration

Try to access http://<host>:<port>/nwapi and check whether it is properly configured; in case of issues, follow the instructions in the wiki <a href="http://wiki.sdn.sap.com/wiki/display/JSTSG/%28NWA%29Problems-102" target="_blank">(NWA)Problems-102</a>. A correct configuration shows the PI overview and a list of common tasks.

XI Component

To enable cache synchronization for the defined filters and extractors, you need to create a logical port for the consumer proxy named MessageSearch, with the implemented operation SetMessageFilters. Since this is a web service published in the central Services Registry, you could also define filters and extractors from a different application.

Ws navigator

Besides this, just access transaction SOAMANAGER and follow the instructions in the Synchronizing the Cache section of the SAP help page <a href="http://help.sap.com/saphelp_nwpi711/helpdata/en/48/c85598f63335bfe10000000a42189d/frameset.htm" target="_blank">Configuring the User-Defined Message Search</a>. Keep in mind that the page also explains how to configure filters and extractors.

Soa management

 

Consumer Proxy

Web service navigator

Filters and Extractors definition

 

The main transaction to define, maintain, and synchronize filters and extractors is SXMS_LMS_CONF. In the samples I created two different filters, to check messages executed on the IS and on the local AAE. For each filter it is possible to maintain more than one extractor. The picture shows a filter with two extractors searching for the name and surname fields of a service interface using XPath expressions. Don't forget to synchronize the cache after any change to filters and extractors.
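Conceptually, each extractor applies its XPath expression to the message payload and stores the extracted value as a searchable attribute. A minimal illustration (in Python; the payload structure, field names, and attribute names are invented for the example):

```python
import xml.etree.ElementTree as ET

# Hypothetical payload of a service interface message; the real structure
# depends on your message type definition in the ESR.
payload = """<?xml version="1.0"?>
<ns0:PersonCreate xmlns:ns0="urn:demo">
  <Person>
    <Name>Niki</Name>
    <Surname>Scaglione</Surname>
  </Person>
</ns0:PersonCreate>"""

def extract(payload_xml, xpath):
    """Apply an extractor's XPath expression to a message payload."""
    root = ET.fromstring(payload_xml)
    node = root.find(xpath)
    return node.text if node is not None else None

# Two extractors, as in the filter shown above (XPaths are illustrative).
attributes = {
    "NAME": extract(payload, ".//Name"),
    "SURNAME": extract(payload, ".//Surname"),
}
print(attributes)  # {'NAME': 'Niki', 'SURNAME': 'Scaglione'}
```

At runtime PI indexes these extracted attributes, which is what makes the payload searchable from the Message Monitor without a separate TREX installation.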

 

Filter definition

After creating the extractors, flag the filter as active; you can then test it with the test extractor function. In case of issues, apply SAP <a href="https://service.sap.com/sap/support/notes/1418263" target="_blank">Note 1418263 - Transaction SXMS_LMS_CONF: Test function</a>.

Test extractor

 

Test Scenario

 

Before proceeding with the user-defined message search test, I want to point out an important detail of the PI message monitoring application. If you access SAP NetWeaver Administrator and follow the links SOA Management -> Monitoring -> PI Message Monitoring, you are accessing the message monitor application ONLY for local web services and messages processed on the local AAE, not the ones processed on the Integration Server. A detailed description with more information is available in the SAP help page Monitoring Messages (http://help.sap.com/saphelp_nwpi711/helpdata/en/48/b2d2347895307be10000000a42189b/content.htm).

Message Monitor

 

Finally, the scenarios I tested used the following technologies:

•    SOAP->PI->File (AAE)

•    IDoc->PI->Proxy (IS)

 

After running a test on PI, access the message monitor application and select the Database label of the Integration Engine component, then choose Advanced and Select User-Defined Attributes to add the attributes defined with transaction SXMS_LMS_CONF.

Pi overview

 

Soap->PI->File scenario search results

 

AAE search

 

IDoc->PI->Proxy scenario search results

 

IS search

While playing with PI 7.1 EHP1, I found an interesting feature that can easily support file-to-file transfer scenarios, especially when dealing with large files. The feature, named ChunkMode, is only available for file-to-file scenarios using the EOIO quality of service; it splits the binary sender file into fixed-size "chunks" so that big files can be processed without overloading the J2EE engine of the PI system.

Although this feature does not yet seem to be mentioned in the SAP NetWeaver PI 7.1 EHP1 documentation, the screenshot below shows the additional attributes of the File adapter metadata in PI 7.1 EHP1 that are not present in PI 7.1.

 

Adapter Metadata

 

To enable ChunkMode, first set the quality of service of the sender communication channel to Exactly Once In Order with a specific queue name, then switch the channel to advanced mode and define the Maximum Size [MB] for the chunks. No special options need to be configured on the receiver communication channel.
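The splitting principle itself is simple; the sketch below (Python, purely illustrative of the concept, not of the adapter's internal code) shows how a binary file is cut into fixed-size chunks whose order, thanks to EOIO, is preserved end to end:

```python
def split_into_chunks(data: bytes, max_chunk_size: int):
    """Split a binary file into fixed-size chunks, preserving order.

    ChunkMode works on the same principle: each chunk travels as a
    separate message over the EOIO queue, so order is guaranteed and
    the engine never has to hold the whole file in memory at once.
    """
    if max_chunk_size <= 0:
        raise ValueError("chunk size must be positive")
    return [data[i:i + max_chunk_size]
            for i in range(0, len(data), max_chunk_size)]

file_content = b"x" * 2500        # a 2,500-byte "file"
chunks = split_into_chunks(file_content, 1000)
print([len(c) for c in chunks])   # [1000, 1000, 500]
assert b"".join(chunks) == file_content  # receiver reassembles losslessly
```

Note that the channel parameter is expressed in MB, while the sketch uses bytes for readability.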

 

CC_Processing

To test the solution, create an Integrated Configuration object defining a scenario that uses the sender and receiver communication channels mentioned above. In my test, some files were placed on an FTP server and then processed successfully using EOIO with the chosen queue name, as shown in the pictures.

 

 FTP

 

 Communication Channel Detail

 

Finally, I ran some tests tuning the chunk size of the sender communication channel; below is a summary of the processing times for this qualitative test.

 

Test result

That’s all.

1. Scenario

During one of the latest projects I worked on, there was a need to send data across multiple PI 7.1 systems. The main difficulties with this kind of landscape configuration are finding and monitoring the many possible points of failure along a complex data pipeline.

Building a fully monitored solution is often not easy, which is why I worked on the problem and want to share the solution with the XI/PI community experts.

 Landscape

The goal is to handle messages sent from one or more SAP systems (section A) to one or more SAP systems (section D) through two PI systems (sections B and C) belonging to different customer departments, while providing a data validation mechanism. Data should be processed on section D only if the number of messages received for each independent flow equals the number sent from section A; in other words, the messages have to be treated as a bundle and kept together. The solution should also avoid BPM.

When dealing with these scenarios, it is critical to foresee and manage all possible points of failure during data exchange across several systems. Consider IDocs or proxy messages generated on section A getting stuck at many points because of technical and/or functional issues: you will surely waste time struggling with errors occurring along the pipeline.

An additional requirement is to minimize the interactions between the PI systems and to build a monitoring cockpit for the complete landscape, in which the flows can be monitored safely and quickly.

This blog focuses only on the sender-side solution (sections A and B), with a brief description of the receiver side.

2. Solution

The solution can handle both IDoc and ABAP proxy technologies on section A to send data out of your SAP systems, choosing queued or non-queued communication according to the specific business requirements. The idea is to make the PI system of section B responsible for starting, controlling, and alerting on the flow up to the edge with section C, where the received messages are compared with the number of messages "declared" by the PI system of section B. In section C, finally, a cockpit gathers the information and shows the results.

The connection between PI systems B and C uses the XI protocol, which connects the Integration Engines directly, bypassing the Adapter Engine step. This improves delivery performance and automatically enables, for instance, acknowledgment propagation for each message when requested. Let's look at the technical solution for each section in depth.

2.1 SAP systems (Section A)

The proposed solution currently handles ABAP proxy scenarios with queued communication and IDoc scenarios without queued communication, but you can adapt it to different scenarios. For the proxy approach, it is possible to maintain fixed or variable queue names with an additional prefix to better identify and handle the flows (such as XBQSMATMAS*). Refer here for documentation about the Proxy Runtime.

The steps to enable IDoc communication on section A are:

  • Set up the partner profile in transaction WE20 so that IDocs are collected on the system.

WE20

  • Import the simple remote-enabled custom function module available here, which selects IDocs for a given message type, receiver partner number, sender partner number, and receiver port. The standard report RSEOUT00 is then called to dispatch them, and finally the number of successfully processed IDocs is returned.

To enable proxy communication:

  • After writing your ABAP code for handling outbound proxy messages with queued communication, access transaction SMQR and deregister the given queue name. This way, the generated messages get stuck in the inbound queue (transaction SMQ2) and can be found in transaction SXMB_MONI with status "Scheduled".

XBQS

 

Scheduled Messages

2.2 PI system (Section B)

As stated before, the idea is to give the PI system (B) the control: checking the system availability of the SAP systems (A) and of the second PI (C) before the transmission starts, verifying the outcome of the previous process flow run, triggering the message sending from A, updating the transmission details on C, and also providing the monitoring features described here.

After a bit of debugging on the ABAP side, I found that the standard function module to activate qRFC queues is TRFC_QIN_ACTIVATE, which is called to trigger the processing of proxy messages. I also had to write a simple function module, named Z_IDOC_CONTROLM, to handle message activation for IDocs on each source system (A) involved.

So I wrote a report that executes the following steps:

  • It first checks that the sender and receiver system connections are working, using function module RFC_WALK_THRU_TEST, which also performs authorization checks (better than RFC_PING).
  • It calls a remote function on PI system (C) that returns the result of the previous run for a given flow (identified by sending system, outbound interface name, receiving system, inbound interface name). From a logical perspective, the remote function indicates whether there is a blocked or not yet "approved" flow. If there is a blocked flow, the report stops and raises an alert, because this means an error occurred along the pipeline and the previously "declared" messages probably did not reach PI system (C).
  • Only if the previous check was successful and there are no blocked or unapproved messages, the function module TRFC_QIN_ACTIVATE or Z_IDOC_CONTROLM is called remotely on SAP system (A) to start message processing, returning the number of messages successfully processed.
  • The last step calls a remote function on PI system (C) that updates a table with the interface details (sending system, outbound interface name, starting time, starting date, number of messages processed, receiving system, inbound interface name) to be checked and approved.
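Put together, the report's control flow can be sketched as follows (in Python for illustration only; the real implementation is an ABAP report, and all helper names here are hypothetical stand-ins for the RFC calls described above):

```python
# Illustrative sketch of the trigger report's logic on PI (B). In reality
# the steps are RFC calls: RFC_WALK_THRU_TEST, a custom check on PI (C),
# TRFC_QIN_ACTIVATE / Z_IDOC_CONTROLM on system (A), and a custom update
# on PI (C). Every function name below is a made-up placeholder.

def run_control_report(check_connection, previous_run_approved,
                       activate_messages, declare_to_receiver, raise_alert):
    """Execute the four steps of the trigger report for one flow."""
    # Step 1: verify that sender (A) and receiver (C) are reachable.
    if not check_connection():
        raise_alert("connection check failed")
        return None
    # Step 2: stop if the previous bundle was not approved on PI (C).
    if not previous_run_approved():
        raise_alert("previous flow blocked or not yet approved")
        return None
    # Step 3: trigger message processing on SAP system (A).
    processed = activate_messages()
    # Step 4: declare the bundle size to PI (C) for later validation.
    declare_to_receiver(processed)
    return processed

alerts = []
count = run_control_report(
    check_connection=lambda: True,
    previous_run_approved=lambda: True,
    activate_messages=lambda: 42,       # e.g. 42 IDocs dispatched by RSEOUT00
    declare_to_receiver=lambda n: None,
    raise_alert=alerts.append,
)
print(count, alerts)  # 42 []
```

The key design point is that steps 3 and 4 only run when step 2 confirms the previous bundle was fully approved, which is what keeps each bundle isolated across the pipeline.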

For a deeper explanation of the report and to download the ABAP code, click the link here.

After creating the report, you have to schedule a job for each interface involved in the scenario, filling in the parameters that identify the flow together with the alert category to be raised in case of failure. Take care to set a proper value for Max Queues Activation Time.
Proxy and IDoc Input Parameters

Input Parameters - Proxy

Input Parameters - IDoc

The receiver side of the scenario is out of scope for this blog. From a functional perspective, a job on system (C) reads the table filled with the flow details and verifies that the "declared" number of messages equals the number received and processed on the system. If no errors occur, a flag is set on the table, meaning a new message package can be received and processed. The results of the checks are collected in a quick-view cockpit, achieving a fully monitored solution.
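The receiver-side check boils down to a declared-versus-received comparison per flow; a minimal sketch (Python for illustration; the real job is ABAP and the flow names and field names are invented):

```python
# Minimal sketch of the receiver-side validation job on PI system (C);
# each row mirrors an entry of the flow-details table updated by the
# trigger report on PI (B). All identifiers are illustrative.

def validate_bundle(declared: int, received: int) -> bool:
    """Approve a flow only when the whole declared bundle arrived."""
    return declared == received

flows = [
    {"flow": "MATMAS_A_to_D", "declared": 42, "received": 42},
    {"flow": "ORDERS_A_to_D", "declared": 10, "received": 7},
]
for row in flows:
    row["approved"] = validate_bundle(row["declared"], row["received"])

print([(r["flow"], r["approved"]) for r in flows])
# [('MATMAS_A_to_D', True), ('ORDERS_A_to_D', False)]
```

An unapproved flow stays blocked, which is exactly the condition the trigger report on PI (B) checks before starting the next bundle.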

A successful test scenario is shown below, together with a fault case in which an alert is generated and a mail is forwarded to the monitoring group.

Success case:

Report

 

SXMB_MONI

Report result

Fault case:

Mail RWB

 

Mail

 

To summarize, the steps needed to extend this approach to new flows are:

  • Create a new entry in the table on PI system (C).
  • Deregister the queue name (proxy) or set up the partner profile (IDoc) on SAP system (A).
  • Create a variant of report ZBRGP_TRIGGER_MSG on PI system (B) and schedule the job according to the business needs.

3. Conclusion

The described scenario is currently adopted by one of my customers and handles proxy and IDoc messages, approximately 85,000 messages exchanged per month, without any kind of problem.

Using the proxy approach with the solution built here, there is also the chance to enforce a fixed-time process execution. As mentioned in the SAP documentation for function TRFC_QIN_ACTIVATE ("If you set the import parameter MAXTIME to a value not equal to 0 (0 = unrestricted; the call returns once the queue is empty), the qRFC manager only activates the queue within the specified time. If the time runs out while the last LUW is being processed, the call returns after the last LUW"), it is possible to calculate the end time for processing messages on PI. This is certainly not the ideal solution, since playing with time is never a good approach in distributed programming, but one of my customer's requirements was to send the messages generated on section A within a fixed time slot, neither earlier nor later.

With few restrictions, this approach provides a fixed, phased delivery process.

A limitation of this approach is that the validating job on PI system (C) needs a minimum amount of time to update and approve all the different flows; call this time t_update (approx. 2 minutes). The triggering job for each flow on PI system (B) must respect this limit. If t_i is the starting time of flow i and t_MAXTIME is the value of the report's input parameter Max Queues Activation Time, then to avoid a job error: t_(i+1) > t_i + t_update + t_MAXTIME.
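A quick worked example of the constraint (Python, with illustrative values; the real t_update and t_MAXTIME depend on your landscape):

```python
# Worked example of the scheduling constraint
#   t(i+1) > t(i) + t_update + t_MAXTIME
# with illustrative values; times are in minutes from midnight.

T_UPDATE = 2      # approval time of the validating job on PI (C), approx.
T_MAXTIME = 5     # Max Queues Activation Time parameter of the report

def earliest_next_start(t_i: int) -> int:
    """Earliest start time for flow i+1 that avoids a job error."""
    return t_i + T_UPDATE + T_MAXTIME

t_flow1 = 8 * 60                       # flow 1 starts at 08:00
t_flow2 = earliest_next_start(t_flow1)
print(divmod(t_flow2, 60))  # (8, 7) -> flow 2 may start no earlier than 08:07
```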

In other words, to avoid errors on the scheduled jobs, the next job for each independent flow must be scheduled to start only after the previous job has finished, plus the flow-approval time on system C.

 

Finally, I wish to thank Sergio Cipolla for his great support.