Before we can build a generic integration architecture that will handle our existing and future bank payment interfaces, we need to inventory the objects that already exist. That also means looking at how the banking interface is currently run. My basic assumption is that I'm coming into a situation where an extract process already exists.

I won't get into any of the specifics, but on most projects and sites I've worked at, custom extract programs were written to pull the data required by the bank from the following tables:

    • PAYR: Payment Medium File
    • REGUH: Settlement data from payment program
    • REGUP: Processed items from payment program

The extract programs have been either custom, with one program per extract, or based on standard SAP payment extraction reports such as RFCHKE00, which has the logic you need to pull data for every bank set up in your system as a house bank. It even writes an ASCII file to the application server.
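
To make that concrete, here's a minimal sketch of the kind of selection these custom extracts perform, assuming a payment run identified by run date and run ID (LAUFD/LAUFI). It's illustrative only; a real program would also read PAYR for check details and apply whatever filtering and formatting the bank's spec demands.

```abap
REPORT zbank_extract_sketch.

" Illustrative sketch of a custom bank extract: pull the payment run
" headers from REGUH and the cleared items from REGUP for one run.
PARAMETERS: p_laufd TYPE reguh-laufd OBLIGATORY,   " payment run date
            p_laufi TYPE reguh-laufi OBLIGATORY,   " payment run ID
            p_zbukr TYPE reguh-zbukr OBLIGATORY.   " paying company code

DATA: gt_reguh TYPE STANDARD TABLE OF reguh,
      gt_regup TYPE STANDARD TABLE OF regup.

START-OF-SELECTION.

  " Payment headers: one row per payment document (vendor, house bank, amount).
  " XVORL = space excludes proposal runs.
  SELECT * FROM reguh INTO TABLE gt_reguh
    WHERE laufd = p_laufd
      AND laufi = p_laufi
      AND xvorl = space
      AND zbukr = p_zbukr.

  " Paid items: the open items cleared by each payment document.
  IF gt_reguh IS NOT INITIAL.
    SELECT * FROM regup INTO TABLE gt_regup
      FOR ALL ENTRIES IN gt_reguh
      WHERE laufd = gt_reguh-laufd
        AND laufi = gt_reguh-laufi
        AND xvorl = gt_reguh-xvorl
        AND zbukr = gt_reguh-zbukr
        AND vblnr = gt_reguh-vblnr.
  ENDIF.

  " From here the custom programs format header and item records to the
  " bank's published spec and write the file to the application server
  " with OPEN DATASET / TRANSFER / CLOSE DATASET.
```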

But RFCHKE00 exports its data in only one format, defined in the data dictionary by structures DTACHKH and DTACHKP: the standard check extract header and item data delivered by SAP.

These structures hold most of what you need for most of your bank interfaces, but as we noted in my last posting, Bank Payment Interfaces: A Practical Approach to Integration Thinking, each bank has its own custom payment reconciliation interface that it's been using for years. And, of course, the bank tells its customers what to do and how to send their files.

One approach is to export the DTACHKH and DTACHKP data and then map it to your bank's format. But not everything is there for every payment. If you're making payments to American Express, for example, they'll want a reconciliation payment file that includes the account numbers being paid. Card account numbers are not included in DTACHKH or DTACHKP.
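
As a sketch of the kind of enrichment that forces a custom step, one common source for that missing number is the vendor master field "our account number with the vendor" (LFB1-EIKTO). That's an assumption, not a rule; where the card account actually lives is site-specific.

```abap
" Sketch only: where DTACHKH/DTACHKP lack a field the bank insists on,
" the extract has to fetch it itself. LFB1-EIKTO ("our account number
" with the vendor") is one plausible source for a card account number;
" your site may store it elsewhere.
DATA: gs_reguh TYPE reguh,          " one header row from the REGUH selection
      lv_eikto TYPE lfb1-eikto.

SELECT SINGLE eikto FROM lfb1 INTO lv_eikto
  WHERE lifnr = gs_reguh-lifnr      " vendor paid by this payment document
    AND bukrs = gs_reguh-zbukr.     " assuming the paying company code

" lv_eikto is then written into the bank-specific item record alongside
" the fields that do come from DTACHKP.
```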

In addition, most banks want a summary segment that totals the payments and provides a count of item segments; some don't even have header records. So it's not unusual to write a custom report that provides the bank-specific data missing from these structures, along with the summary segments most banks require.
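
Continuing the same illustrative extract, the trailer logic is usually just a total and a count taken over the records already selected. The layout of the summary record itself is whatever the bank's spec says, and sign handling of the amounts is also bank-specific.

```abap
" Sketch of a summary (trailer) segment: total payment amount and item count,
" built from the REGUH/REGUP tables selected above.
DATA: lv_total TYPE reguh-rbetr,    " running total of payment amounts
      lv_items TYPE i,              " count of item segments written
      ls_reguh TYPE reguh.

LOOP AT gt_reguh INTO ls_reguh.
  lv_total = lv_total + ls_reguh-rbetr.   " amount per payment document
ENDLOOP.
lv_items = lines( gt_regup ).             " one item segment per cleared item

" The trailer record is formatted to the bank's spec and appended as the
" last line of the extract file, after the item segments.
```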

I Prefer IDocs


My preference would be to use an IDoc for these payment interfaces and then add whatever you need for your particular bank in the EDI map. SAP provides message types REMADV and PAYEXT, both linked to basic type PEXR2002, and a standard ABAP program, RFFOEDI1, that generates an IDoc with the bank payment extract data.

But you don't always get what you need for your particular banks using these standard programs. So custom extract programs are almost always written that generate an ASCII file matching the bank's published specifications.

That's the situation I faced wherever I had to do this work. There was already an existing custom extract program for at least one bank, based on RFCHKE00, with code to trigger a script at the operating system level that kicked off an SFTP push, and with no way of knowing whether or not the transmission succeeded.

The ideal, of course, is to be able to design a generic approach during an implementation project: identify payment extract requirements for all banks during the blueprint phase and then build a generic architecture that works for them all.

But the real world is messy and you don't always have that luxury, particularly if you find yourself at a site that already has a production SAP system that may not have been optimally designed and built. At that point, you have to make do with what's available to you.

So we have a custom version of RFCHKE00 that extracts data for one bank. It generates a file according to that bank's specifications and sends the file by triggering a script that calls the SFTP push at the operating system level.
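
For context, the trigger in a program like that typically calls the standard function module SXPG_COMMAND_EXECUTE to run an external command, maintained in transaction SM69, that wraps the SFTP script. The command name and file path below are hypothetical, and the sketch also shows the limitation: even a clean return only tells you the script started.

```abap
" Sketch of the OS-level trigger: run an external command (defined in SM69)
" that wraps the SFTP push script. ZSFTP_PUSH and the path are hypothetical.
DATA: lt_protocol TYPE STANDARD TABLE OF btcxpm.   " command output log

CALL FUNCTION 'SXPG_COMMAND_EXECUTE'
  EXPORTING
    commandname           = 'ZSFTP_PUSH'
    additional_parameters = '/interfaces/bank/out/payments.txt'
  TABLES
    exec_protocol         = lt_protocol
  EXCEPTIONS
    no_permission         = 1
    command_not_found     = 2
    program_start_error   = 3
    OTHERS                = 4.

IF sy-subrc <> 0.
  " The command could not be started at all.
ELSE.
  " Even here, all we know is that the script ran. Whether the SFTP
  " transmission actually succeeded is invisible to SAP.
ENDIF.
```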

What else do we have? We're an EDI shop using Sterling Integrator as middleware. And we've invested considerable time building a generic architecture that's integrated with SAP and posts the results of EDI processing milestones to the control and status records of all outbound IDocs, such as invoices and delivery orders.

The Starting Point


This EDI architecture includes the following elements that we can use in our new bank interface approach:

    • An RFC Destination in SAP that connects to a receiving business process model (BP) in SI through JCo and the registered program.
    • A custom look-up table in SI that maps SAP to EDI data.
    • A data-driven approach to EDI interface processing.

For outbound interfaces, the look-up table identifies the EDI envelopes to be applied to each outbound IDoc during translation. The envelope in turn identifies and calls the translation map that builds the EDI interchange.

The look-up table is read by a JDBC service with an SQL query keyed on SAP partner number, IDoc message type and basic type, direction, and so on. It returns EDI sender and receiver partner IDs, which are used with the IDoc basic type to identify the envelope. The query also returns such useful information as the communications BP to call after translation for AS2 or FTP, and any partner ID that may be required by the communications protocol.

This look-up table supports an approach to EDI integration that segregates communications from translation in separate BPs and that uses only one generic communications BP for each protocol ... AS2, FTP, or JCo RFC into or out of SAP. The specifics of any particular interface are passed to the BP at runtime through XPath statements from correlations data or from our look-up table.

Each BP can therefore figure out the what, the who, and the where of each IDoc and EDI message as it passes through its processing stream from the data that's available to it at runtime. There's no hard-coding here.


So we have a starting point. We'll use the RFC Destination and the receiving BP to send the bank extracts from SAP to SI. The look-up table will then route the bank extract to a new processing stream that will provide translation or communications services and will identify the communications BP that will call the SFTP push.

This is a data-driven approach that allows us to design and build a generic architecture that will recognize each extract at run time and decide dynamically, from the data available to it, what needs to be done next.

We'll need to add a few objects in SAP and SI to make it all work together but we'll also be able to develop a checklist of what needs to be done each time we plug a new bank into the architecture.

Before we get there, we'll fire up Visio and do a little design work. We'll figure out what it's going to look like and take it from there ... next posting.
