TomVanDoo
Active Contributor

I touched briefly on the data integrity issues that come with mobile scenarios on SAP ERP software in my rant about the Apple-SAP hype. I've been working as an SAP software architect for quite some years now, and I've encountered first-hand how badly SAP ERP copes with multiple client applications working on the same business objects.

Inherent flaw in SAP Software design

The SAP data model is built around record-based information. When a user wants to change an object, he locks the record, makes his changes, saves the object (which updates the entire record) and then releases the lock.
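To make the consequence concrete, here is a minimal sketch of that edit cycle in plain Python (nothing SAP-specific, all names invented for illustration): one lock per object, the whole record overwritten on save, and a second editor simply rejected until the lock is released.

    import threading

    records = {"SO-4711": {"qty": 10, "plant": "1000", "note": ""}}   # illustrative record
    locks = {"SO-4711": threading.Lock()}                             # one enqueue-style lock per object

    def edit(object_id, new_values):
        lock = locks[object_id]
        if not lock.acquire(blocking=False):
            # Someone else holds the lock; all we can do is tell the user to wait.
            raise RuntimeError(object_id + " is locked by another user")
        try:
            record = dict(records[object_id])    # work on a private copy
            record.update(new_values)
            records[object_id] = record          # save writes back the ENTIRE record
        finally:
            lock.release()

    edit("SO-4711", {"qty": 12})   # succeeds; a concurrent edit() would have been rejected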

In the meantime, no one else can edit the object. This can become really frustrating when someone opens an object, leaves his PC, and doesn't come back for a couple of hours.

It gets worse when a customer has created a custom transaction to interact with the same business object, but ignores the locking mechanism. Mayhem ensues. An update on the object doesn't just change a few fields, it updates the entire record. What is even worse is that neither user is aware that someone else is editing the same object, or which changes were made, because the SAP system was set up as a request-fulfilment system. In other words, the client requests data from the server. The server never sends data to the client of its own accord. (Similar to web servers.)

Suppose the customer now also introduces web apps, APIs and mobile apps, with potentially even offline stores that synchronize an hour after the fact... Universe explodes. Business users expect the data in the system to reflect the data that the user just entered. But this isn't the case anymore. The state of the database is no longer consolidated immediately. Due to synchronizations, offline stores and concurrent changes, the best we can offer is an eventually consolidated state. Sure, the data might not be correct now, but give it five more minutes...

Do note that these sync issues only arise in the case of updates and deletes, never for reads and creates, which is exactly why so many mobile apps only do the latter two.

This isn't a new issue. It's a fundamental flaw in the software design of SAP ERP, CRM, SRM and whatnot... The only system that doesn't suffer this flaw is SAP BW. That's because BW doesn't treat an object as a record, but as a source state with a bunch of change events, eventually consolidating into the correct state.
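As a rough illustration of that BW principle (the field names and timestamps below are made up), the current state of an object can be seen as a fold of change events over an initial state:

    from functools import reduce

    begin_state = {"qty": 10, "plant": "1000"}
    change_events = [
        {"timestamp": "2016-05-20T12:00:00Z", "changes": {"qty": 12}},
        {"timestamp": "2016-05-20T12:05:00Z", "changes": {"plant": "2000"}},
    ]

    def apply_event(state, event):
        # Each event only carries the fields it changed, never the whole record.
        return {**state, **event["changes"]}

    current_state = reduce(apply_event,
                           sorted(change_events, key=lambda e: e["timestamp"]),
                           begin_state)
    print(current_state)   # {'qty': 12, 'plant': '2000'}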

So, I want to apply that principle to APIs on top of SAP ERP software (be it a SOAP service, an OData service, an RFC or an IDoc, whatever...). To do so, I came up with a theoretical model (I haven't tested it yet in practice).

Online

So, without much further ado, I bring you someUpdateService:

The idea here is that any remote online application will first subscribe to a changeBroadcaster for the business object which it opens. (For read or for edit, it doesn't matter. In fact, forget about the read/edit distinction. There's only open for edit.) This changeBroadcaster can be a WebSocket handler (which also exists in modern SAP NetWeaver ABAP stacks).
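As a sketch of the client side (the URL, channel naming and payload format are my own assumptions, not a real SAP endpoint), the app would open a WebSocket for the object it is editing and merge any change notification it receives into its local copy:

    import asyncio
    import json
    import websockets   # pip install websockets

    async def watch_object(bor_type, bor_id, local_copy):
        # Hypothetical channel keyed on object type and object ID.
        url = "wss://gateway.example.com/broadcast/" + bor_type + "/" + bor_id
        async with websockets.connect(url) as ws:
            async for raw in ws:
                notification = json.loads(raw)
                # Only the changed fields arrive; merge them and tell the user.
                local_copy.update(notification["changed_fields"])
                print("Changed elsewhere:", notification["changed_fields"])

    # asyncio.run(watch_object("BUS2032", "0000004711", {"qty": 10}))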

When the remote client wishes to update the business object, it triggers a web service. The web service posts the execution request to an internal queue and sends back an acknowledgement. The execution logic could be an RFC, which is posted to a bgRFC queue, for example. The queue ID would be a combination of object type and object ID (for example, BORTYPE and BORID).
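Here is a minimal sketch of that update service, with the queueing reduced to an in-memory structure (no real bgRFC API is used; queue and field names are purely illustrative): the service only records the request on a per-object queue and immediately acknowledges, while the actual update runs later.

    import queue
    import uuid

    queues = {}   # one FIFO queue per business object

    def handle_update_request(bor_type, bor_id, changed_fields, client_utc_timestamp):
        queue_id = bor_type + "/" + bor_id               # queue keyed on object type + object ID
        q = queues.setdefault(queue_id, queue.Queue())
        message = {
            "message_id": str(uuid.uuid4()),
            "changed_fields": changed_fields,            # deltas only, see the Offline section
            "client_utc_timestamp": client_utc_timestamp,
        }
        q.put(message)                                   # processed whenever the system decides
        return {"status": "QUEUED", "message_id": message["message_id"]}   # acknowledgement only

    ack = handle_update_request("BUS2032", "0000004711", {"qty": 12}, "2016-05-20T12:30:00Z")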

Why a queue, you may ask yourself.

  1. SAP can decide when to process messages on a queue, depending on the system load
  2. if the update fails (object locked, anyone?), the message remains on the queue
  3. multiple messages can be kept on the queue, in sequence
  4. queues can be automatically reprocessed
  5. queues can also be visualised (Business client sidepanel for pending changes anyone? With BORTYPE and BORID tags for example?)

When, finally, the update message is successfully processed from the queue, the last step in the execution logic (or a user exit on save) must post a change notification onto the broadcaster, ideally with the fields that have changed.
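A small sketch of that notification step (the broadcaster here is just an in-memory list of callbacks standing in for the WebSocket handler; all names are invented): once the queued update has been applied, every subscriber for that object is told which fields changed.

    subscribers = {}   # queue_id -> callbacks registered by online clients

    def subscribe(queue_id, callback):
        subscribers.setdefault(queue_id, []).append(callback)

    def notify_change(bor_type, bor_id, changed_fields):
        # Called as the last step of the execution logic, or from a user exit on save.
        queue_id = bor_type + "/" + bor_id
        for callback in subscribers.get(queue_id, []):
            callback({"bor_type": bor_type, "bor_id": bor_id, "changed_fields": changed_fields})

    subscribe("BUS2032/0000004711", lambda n: print("UI refresh:", n["changed_fields"]))
    notify_change("BUS2032", "0000004711", {"qty": 12})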

All subscribed clients will then receive the changes, update their UI accordingly and notify the user that things have changed (again, maybe an event stream in a side panel?).

Offline

For offline applications with background sync, there's a catch.

First of all, a background sync doesn't have to subscribe to the data change broadcaster.

Secondly (and more importantly), there's an issue with the sequencing on the queue. It is possible that a synced update contains changes that precede a change executed online on the object, which would cause data entered at a later stage to be overwritten. That's where we really hit the "inherent flaw in SAP ERP software design" hard.

The way to overcome that is by applying the principles of BW to ERP. One thing we need to agree upon up front is that the business object in the SAP tables must be the aggregated result of the begin state plus all changes made, because it's this object that will be displayed in standard transactions and used for reads by other web services.

Having agreed on that, we can have a closer look at the queueing mechanism and the updates. When an update on the queue has been processed successfully, we shouldn't just remove it from the queue. Rather, we should mark it as successful, so that the next update can be processed, but keep it on the queue for a certain amount of time (for example, 24 hours). The queue should be sequenced based on the UTC timestamp given by the client.

If a new message arrives on the queue, we shouldn't just process that message, but also reprocess all messages that come after it relative to the UTC timestamp.

So, in the case where an update was done online at 12:30:00 and a synced update from 12:15:00 comes in afterwards, we first process the synced update from 12:15 and then reprocess the already processed update of 12:30, just to make sure that we don't overwrite newer data with older data.
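Here is a hedged sketch of that replay rule (again plain Python with made-up data, not how a bgRFC queue actually works): processed messages stay on the queue, and when a late message arrives with an earlier client timestamp, it and every already-processed message after it are (re)applied in timestamp order.

    def add_and_replay(object_queue, new_message, apply_update):
        """object_queue: list of messages with 'client_utc_timestamp', 'changed_fields', 'processed'."""
        object_queue.append({**new_message, "processed": False})
        object_queue.sort(key=lambda m: m["client_utc_timestamp"])
        # Everything from the new message onwards must be (re)applied, oldest first.
        for message in object_queue:
            if message["client_utc_timestamp"] >= new_message["client_utc_timestamp"]:
                apply_update(message["changed_fields"])   # field-level update, never the whole record
                message["processed"] = True

    # An online update at 12:30 is already processed, then a 12:15 sync arrives afterwards.
    state = {"qty": 10, "note": ""}
    q = [{"client_utc_timestamp": "2016-05-20T12:30:00Z",
          "changed_fields": {"qty": 12}, "processed": True}]
    add_and_replay(q,
                   {"client_utc_timestamp": "2016-05-20T12:15:00Z",
                    "changed_fields": {"note": "call customer"}},
                   lambda fields: state.update(fields))
    print(state)   # {'qty': 12, 'note': 'call customer'} -- the later online change survives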

This only makes sense if an update doesn't change the entire record, but rather only updates the affected fields.

To do so, it is important that the client only sends the changed fields in the update request. On the SAP side, you can then inspect the metadata of the incoming request and determine which fields changed. Only these fields should then be passed to the execution logic as a parameter.
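A tiny sketch of that delta rule from the client's perspective (field names invented): compare the edited copy against the copy that was originally read, and send only the fields that actually differ.

    def changed_fields(original, edited):
        return {field: value for field, value in edited.items() if original.get(field) != value}

    original = {"qty": 10, "plant": "1000", "note": ""}
    edited   = {"qty": 12, "plant": "1000", "note": ""}
    payload = changed_fields(original, edited)   # {'qty': 12} -- all that goes in the update request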

WAT

This is, in other words, quite a bit of work to get a reliable synchronization system. So the next time your customer starts bargaining over the price of their mobile app ("what, it's just an app; even my son builds iOS apps..."), point out that it's not the app that matters. It's the API and the sync framework that make it expensive.

PS: I've deliberately created this as a document, so that I and others can amend it in the future once the idea has been tested and refined.
