When playing around with Galaxy's Process Composer design time tooling, sooner rather than later you will stumble across "subflow" activities. Subflows are essentially other processes that are invoked from a "parent" process. As such, subflows offer a straightforward way of reusing existing functionality, thus greatly reducing modeling effort for recurring process fragments (within and across processes). Apart from that, subflows allow for hierarchically structuring complex processes into different levels of granularity. In the example below, a subflow "Maintain Business Partner in ERP" encapsulates a possibly complex procedure of creating or updating a business partner master data record in an SAP ERP system, thus abstracting from those process details in the invoking parent process(es):

The latter also comes in handy for top-down process decomposition, where business analysts start with coarse-grained value chains that abstract from implementation details at lower levels, which are initially treated as "black boxes". And finally, Galaxy offers a fairly elegant supportability feature through "patchability" and (de-)activation of subflows. That is, a process that acts as a subflow for (multiple) parent process(es) may be patched without also having to go all the way through patching, building, and deploying all affected parent processes. Instead, parent processes will automatically (even for running parent process instances) incorporate the new "patched" version of the subflow. Versioning and patchability are part of Galaxy's process lifecycle management concept, which I have described in further detail in a separate article.

Preserve Interface Stability

A parent process spawns a new subflow instance by passing data to its Web Service-like interface. That is, the subflow's start and end events' (common) WSDL portType and operation jointly determine that interface. As subflows are independent artifacts that may even reside in a different Development Component (DC, NWDI lingo for a project), a parent process' view onto the subflow is confined to its interface. As a result, the only way to spawn a subflow instance and to pass data into (out of) a subflow is by mapping parent process context data onto (from) the "request" ("response") message of this WSDL operation. Passing data in and out of a subflow happens through the input and output mappings of the subflow activity:
 
 
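To get a rough feel for that contract, here is the subflow's Web Service-like interface sketched in plain Java terms. All names are hypothetical and merely stand in for the WSDL portType, operation, and message types shared by the subflow's start and end events:

```java
// Conceptual sketch of the subflow's interface (hypothetical names):
// the parent can only interact with the subflow through this contract.
public interface MaintainBusinessPartnerSubflow {

    // The parent's input mapping populates the request message; the output
    // mapping copies the response message back into the parent's context.
    MaintainPartnerResponse maintainBusinessPartner(MaintainPartnerRequest request)
            throws MaintainPartnerFault; // a fault raised via an error end event
}

// Placeholder types standing in for the WSDL request/response message types.
class MaintainPartnerRequest  { String partnerId; String name; }
class MaintainPartnerResponse { boolean created; String erpKey; }
class MaintainPartnerFault extends Exception { }
```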
In order to easily exchange a concrete subflow version at runtime, parent process and subflow are loosely coupled through the interface only. Consequently, changing the interface (e.g., by altering the request type) will break this coupling and lead to undesirable effects. For an illustration, please have a look at the schematic illustration below of a parent process (P) invoking a subflow (S):

 
The parent process P resides in some DC1; the subflow S is from DC2 and exposes an interface I which is referenced from both P and S. When S is patched into S' (without changing its interface I) and deployed onto the process server, both newly started and even already running instances of P will automatically incorporate S', because S and S' share the same interface. When the interface itself is later changed, implicitly yielding a new version I', a re-build of DC2 will take care of incorporating this change into S', which becomes S'' (changes in referenced artifacts automatically result in a new version of the referencing artifact). At runtime, S' (having the interface I) will be de-activated, and S'' with the new interface I' will become the newly active version of that process. As P resides in a separate DC, it is not automatically re-built and re-deployed alongside S''. Instances of P will fail to invoke the (now inactive) subflow S', because the latest (active) version S'' exposes a different interface (I') which is not triggered from P. Even if DC1 is manually re-built (resulting in a new version P') and re-deployed, already running instances of P will still be affected and, thus, fail to invoke S'' (or any older version thereof).
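To make the breakage tangible, here is the interface change from I to I' expressed in plain Java terms. All names are hypothetical; this is only an analogy for the WSDL-level change, not how the engine represents interfaces internally:

```java
// Interface I, referenced by both the parent P and the subflow S/S':
interface PartnerSubflowV1 {
    Response maintain(Request request);
}

// Interface I': the request message type was altered. S'' is re-built against
// I', but P still invokes the old operation shape of I -- until DC1 is re-built
// and re-deployed (and old instances of P have drained), invocations fail.
interface PartnerSubflowV2 {
    Response maintain(RequestV2 request);
}

class Request   { String partnerId; }
class RequestV2 { String partnerId; String mandatoryNewField; } // breaking change
class Response  { boolean ok; }
```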

Recommendation: Be cautious whenever changing process interfaces. Make sure that when deploying a process with an altered interface, no parent process instances operating on the old interface are still running. Also check where parent processes are located and be sure to re-build and re-deploy the respective DCs alongside the altered subflow interface.
 

It ain't over 'til it's over 

You like BPMN, do you? We trust you do, because we just love its way of flexibly modeling processes. And don't let anyone tell you that this does not come in handy for capturing real-life business processes. In particular, it helps avoid model redundancies and attracts business (i.e., "non-IT") people to really model processes (and not just draw them with their presentation software of choice). But then again, there are those places where you need to think twice to really understand all the implications that this brings along. In How to avoid modeling errors in Netweaver BPM? Part 1: Gateway fun! of this blog series, I have already featured some of these cases, and subflow completion is just another one. For an illustration, please have a look at the process below:

As you know, BPMN allows for mixing different gateway types or even avoiding block structures entirely. In the example process, the end event ("Response") is effectively triggered twice. As long as this is a top-level process, the behavior is crystal clear: only when the second token hits the end event will the process be complete.

But what if this process was invoked as a subflow? In this case, continuing the parent process is asynchronously de-coupled from completing the subflow instance. That is, the first token that arrives at the end event will craft a "response" document which not only passes (returns) data to the parent process but also continues the parent process instance (i.e., passes a token to the downstream flow). Any other tokens hitting the subflow's end event will simply be swallowed and won't continue to exist in the parent process. At the same time, the subflow will actually continue to run until either all of its tokens have reached the end event or its parent process has itself completed. In the latter case, all subflows will be recursively terminated as well.
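This "first token wins" behavior can be sketched in plain Java with a CompletableFuture. This is purely a conceptual analogy (not a NetWeaver API): the first branch to complete the future delivers the response and resumes the parent; a later completion attempt is swallowed, even though the branch itself keeps running:

```java
import java.util.concurrent.CompletableFuture;

// Conceptual sketch: the parent resumes as soon as the FIRST token reaches
// the subflow's end event; later tokens are swallowed.
public class SubflowCompletionDemo {
    public static void main(String[] args) throws Exception {
        CompletableFuture<String> response = new CompletableFuture<>();

        Thread branch1 = new Thread(() -> {
            sleep(100);
            // first token: crafts the response and resumes the parent
            response.complete("response from first token");
        });
        Thread branch2 = new Thread(() -> {
            sleep(300);
            // second token: complete() returns false, the value is swallowed;
            // the subflow instance itself keeps running until this branch ends
            boolean delivered = response.complete("response from second token");
            System.out.println("second token delivered? " + delivered); // false
        });
        branch1.start();
        branch2.start();

        // "parent process": resumes on the first completion only
        System.out.println("parent resumes with: " + response.get());
        branch1.join();
        branch2.join();
    }

    static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```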

You may wonder why we have decided to implement this apparently complicated completion behavior of subflows. The answer is twofold: First of all, subflows (like other activities) constitute, from an outside perspective, synchronous calls, either returning a response (x)or a fault, but never multiple responses or both a response and a fault. That just would not make sense and would probably confuse most people. Also bear in mind that processes must behave consistently, no matter whether they are exposed as subflows or as plain Web Service calls. Secondly, we do acknowledge that there are situations where it is useful to have the subflow instance continue to run while the parent process resumes its execution on the subflow activity's outbound edge. Think of intra-subflow post-processing or cleanup operations that aren't really mission-critical (but time-consuming):
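One loose analogy for this pattern, under the same disclaimer that the names are hypothetical: the pending cleanup behaves like a Java daemon thread, which keeps running after the main work is done but is discarded as soon as the "parent" completes:

```java
// Conceptual sketch: non-critical post-processing inside a subflow. The parent
// resumes as soon as the response is produced; the subflow's remaining cleanup
// runs on, but dies with the parent -- much like a daemon thread dies with the JVM.
public class PostProcessingDemo {
    public static void main(String[] args) {
        Thread cleanup = new Thread(() -> {
            try {
                Thread.sleep(5_000);              // time-consuming, non-critical work
                System.out.println("cleanup finished");
            } catch (InterruptedException e) { /* terminated with the parent */ }
        });
        cleanup.setDaemon(true);                  // discarded when the "parent" ends
        cleanup.start();

        System.out.println("parent resumed with response");
        // main ends here: the pending cleanup is dropped, never blocking the parent
    }
}
```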

Also mind that synchronizing concurrent flows may be a tricky business if multiple tokens reside on the same control flow branch. For instance, have a look at the subflow below:
In this example, "Activity 4" is supposed to be executed twice, once for each parallel branch. Placing "Activity 4" onto both branches (behind "Activity 2" and "Activity 3") has the same effect but introduces redundant duplication. Altogether, our recommendation in this regard is as follows.

Recommendation: Synchronizing concurrently existing tokens on different control flow branches is generally a good idea before hitting an intra-subflow end event. If that is hard to achieve, you may do without, thus leveraging the "Discriminator"-like behavior of subflow completion (the first token gets "through", the others are swallowed). But never place anything mission-critical onto the affected control flow branches. A subflow may be implicitly terminated at any time (when the parent process completes).
 
As said, from the calling parent process' perspective, subflows are supposed to behave like synchronous calls, either returning a response or a fault document. While subflows may internally spawn concurrent and asynchronous flows, their outside appearance must still be perfectly synchronous. As a result, flows returning both a fault and (optionally) a response should be avoided:
The example flow non-deterministically first returns a response (x)or a fault document, followed by another response or fault. From the calling (parent) process perspective, this is clearly invalid, as a subflow is expected to either return a response or a fault. The initially received response (fault) continues the outer flow on the regular outbound flow (boundary event flow):

Please note that after a response (fault) has been returned (raised) from a subflow, no subsequent fault (response) may be received on the respective boundary event flow (regular outbound flow), yielding the following recommendation:
 
Recommendation: When raising exceptions from subflows, always make sure to "throw" them before a response is returned (i.e., before a token has hit an end event). That also applies to nested activities which potentially throw exceptions that "bubble up" the stack.
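To round this off, the exclusive response-xor-fault outcome can again be sketched with a CompletableFuture. As before, this is a conceptual analogy with hypothetical names, not the actual engine implementation:

```java
import java.util.concurrent.CompletableFuture;

// Conceptual sketch: a subflow's outcome is exclusive -- response XOR fault.
// Whichever token hits an end event first "wins"; a later fault (or response)
// can no longer be delivered to the parent.
public class ResponseXorFaultDemo {
    public static void main(String[] args) {
        CompletableFuture<String> outcome = new CompletableFuture<>();

        // First token reaches a regular end event: the response is returned
        // and the parent resumes on the subflow activity's outbound edge.
        outcome.complete("response");

        // A later error end event: the fault is swallowed -- the parent's
        // boundary event flow never sees it. Hence: throw BEFORE responding.
        boolean faultDelivered =
                outcome.completeExceptionally(new RuntimeException("fault"));
        System.out.println("fault delivered to parent? " + faultDelivered); // false
    }
}
```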