

Not so long ago, I began working on a project which was very set in its "ASAP" ways of capturing documentation. In practice, this means that new requirements trickle down in waterfall fashion from business analysts to developers and ultimately to the poor devils tasked with maintaining the system after the project is concluded. After having participated in all phases of this process over the course of about 12 years, I really got to thinking about whether or not the status quo is working. In other words, what sort of value are we getting by compiling all these documents? What follows is a series of observations I have made concerning these issues during my time working on SAP projects.

 

Observation #1: Throwing Requirements Over the Wall Doesn't Work

In Computer Science/Software Engineering courses, the software development lifecycle (SDLC) is portrayed as an iterative cycle in which developers play an active role in all phases of development (see below). However, it has been my experience that most SAP projects tend to wait to bring developers in until the design phase, if not later. Here, the thought is that business analysts will capture requirements in functional specifications, throw them over the wall to developers, and everything will proceed like clockwork.

[Figure: SDLC - Software Development Life Cycle]

There are several problems that I see with this approach:

  1. A lack of up-front dialog eliminates the possibility for both sides to find common ground in terms of vocabulary, functionality, and so on. This, to me, explains why most developers struggle to comprehend complex functional specifications: because the business analyst doesn't know how to write to his/her audience.
  2. Developers perform best whenever they understand the business domain they're working in. While it's certainly possible to code certain requirements without such knowledge, the quality of the product will almost always suffer. This is particularly noticeable in the long term as wrong assumptions lead to brittle code that is difficult to maintain and/or enhance.
  3. Without a developer presence, business analysts tend to make questionable technical assumptions which can trickle down into the implementation phase if there is not strong development leadership in place to catch them. By that point, it is often too late. One extreme case of this was a project in which all of the development objects were scoped out and estimated before the design phase got started in earnest. Here, the analysts had worked in silos to identify all of the interfaces for the project, coming up with a comprehensive list which contained many duplicates. Alas, by the time the interface team got their hands on them, any opportunities for consolidation/reuse/SOA had long since sailed away.
  4. Finally, excluding the developers eliminates opportunities for innovation that are beyond the technical grasp of business analysts.

 

The bottom line here is that developers should have some level of participation during the requirements analysis phase. Maybe they don't attend every meeting, but at least some presence makes a big difference.

 

Observation #2: Organizing Documentation by RICEF Category is a Mistake

A common blueprinting exercise for many SAP projects is to identify all of the RICEF development objects and parcel out the work from there. While this may work OK from a project perspective, I find that it leads to much fragmentation when it comes to documentation gathering. This is particularly evident for "enhancement" requirements which may span multiple development objects (e.g. add a new field to a database table, extend an IDoc to bring it in, add the field to a WDA screen, etc.). Frequently, I see such enhancement requirements documented within a series of enhancement specs. In the long run, this raises the following question: where do developers go to find documentation concerning a particular function within the system?

 

To put this into perspective, allow me to paint a picture of a defect I was tasked with solving some years ago. The defect in question was related to a series of enhancements applied to the PR-->PO conversion process. In this particular project, there was no master document which described this process from end to end. Instead, what little documentation existed about this process was scattered across various enhancement specs (some of which were not PR-PO related at first glance). Not finding what I needed in the documentation, I decided to go into the BAdI code and see if I could make sense of things from there. Ultimately, what I discovered was that the defect was introduced by one enhancement that was ignorant of another. The right hand quite simply had no idea what the left hand was doing.

 

While this is a rather extreme case, there are several takeaways to consider from all this:

 

  1. Documents should be assigned to an organizational structure based on meaningful elements such as business unit, business objects, etc. as opposed to fleeting project constructs which will eventually become obsolete.
  2. If documentation is not well organized, then it is likely that other documents will come along and render certain aspects of a document obsolete. For instance, in the PR-->PO example above, there should have been one comprehensive set of documentation that was consistently updated over the lifespan of the project.
  3. In order for documentation to have value, it must be accessible to the people who need to use it. If developers can never find what they're looking for, then all those documents will do nothing more than clog up some file share out on the network, never to be read again.

 

As is the case with any writing exercise, it is very important when writing documentation to know who your audience is. What is it that they need to know? How do we ensure that they can find what they're looking for? How do we make sure it is up-to-date and reliable? These are the questions we must be asking ourselves.

 

Observation #3: Fluff Makes Things Worse with Tech Specs

I frequently encounter tech specs which span 10-20+ pages and yet don't really say anything useful. Basically, they just contain lots of boilerplate sections, regurgitated content from the functional spec, and so on. In general, I find the following types of content to be pointless when it comes to capturing technical documentation:

 

  • Audit trails which contain information about CTS requests. So long as the pertinent development objects are identified in the tech spec, any competent developer should be able to track down transport import history in the system. To force developers to go back and annotate this information after the fact is a waste of time.
  • Pseudo code that isn't pseudo code. Here, I'm talking about those sections of the document where developers literally copy-and-paste ABAP code into the document in order to satisfy some project-level requirement that states that every section of the tech spec must be filled in. Pseudo code is useful for describing complex algorithms, etc. By definition, it uses a simplified syntax which makes it much easier for technical and non-technical readers alike to digest. If readers want to see how the code works, they should read the code. After all, it is the one artifact that is most likely to be kept up-to-date.
  • Flow charts which go on for pages at a time. Such charts should be broken up into smaller pieces. Or, better yet, refactored into a better modeling notation such as UML or FMC.
  • Extensive narrative text. As the saying goes, a picture's worth a thousand words. I find that I can convey the majority of my meaning in a tech spec with a handful of UML diagrams: a package diagram to identify the key development objects, a class diagram to describe their relationships, and sequence/activity diagrams to illustrate behaviors. Naturally, some narrative text is called for in order to explain key aspects of the diagrams, etc. On the other hand, an extensive play-by-play narrative of the code wastes everyone's time. If developers want to dig down to that level of detail, they should go straight to the source.
  • Regurgitated and redundant reference materials. For example, I recently encountered an interface tech spec which contained pages and pages of reference material providing an executive summary of what SAP NetWeaver PI is and how it works. To me, this goes back to understanding who your audience is. The assumption going in for a PI interface tech spec has got to be that the reader has some clue as to what PI is and how to use it. If they don't, then your tech spec is not the right place to acquire such knowledge. Any time spent compiling this information in each and every tech spec is a waste. This also goes for tech specs which copy-and-paste the functional spec before proceeding. Here, a simple hyperlink should suffice. Plus, it makes sure that the reader always finds the latest and greatest information straight from the source.
  • Tutorial-like instructions for routine development tasks. Once again, the point of a tech spec is to provide a meaningful artifact for maintenance staff to consult from time to time while supporting the system long term. Here, our objective is clear: we need to provide these readers with information to understand why the code was written the way it was. If readers don't know how to open up Transaction SE80, then they have much bigger problems on their hands. Some tutorial-like instructions are warranted for particularly complex tasks, but we should not be providing click-by-click instructions for routine tasks such as creating a function module, etc.

 

Sometimes, less is more, and more is less. This goes beyond simple laziness; it's about effective writing. A concise tech spec that is to the point will be much easier to consume than a bloated document containing redundant information that the reader has to sift through. Plus, from a project perspective, you have to define realistic expectations for developers tasked with maintaining these monstrosities. Some of these "fluff" sections are excellent candidates for getting out of sync during the course of a project.

 

Observation #4: Unit Test Documents are Fine, but Automated Unit Tests are Better

The great thing about unit tests in general is that they force developers to come up with test cases to ensure that their development objects work like they're supposed to. For many projects, this exercise culminates in a unit test document which is then used as a guide for manually testing code changes throughout the object's lifecycle. While this is a worthy goal, its effectiveness is limited by the due diligence of the developers who maintain these documents. Here, it is easy to fake results and sneak shoddy objects past this QC gate so that testing becomes the problem of an unsuspecting business analyst.

 

To me, if you're going to go to the trouble of creating these artifacts, you might as well go the extra mile and create automated unit tests in ABAP Unit, etc. That way, unit tests become more than just a piece of paper; they become a tangible construct that can be used to automate a unit test at the click of a button. This saves testing time and ensures more predictable results from test to test. Plus, it takes a lot of the guesswork out of interpreting test results for future developers who may not be as familiar with the technical underpinnings of a particular development object.
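To make this a bit more concrete, here's a minimal sketch of what such a test might look like. This is purely an illustration: the class under test (zcl_order_calculator) is a hypothetical stand-in, and on older releases the risk level is specified via pseudo-comments rather than the RISK LEVEL/DURATION additions shown here.

CLASS ltc_order_calculator DEFINITION FOR TESTING
  RISK LEVEL HARMLESS DURATION SHORT.
  PRIVATE SECTION.
    METHODS test_get_total FOR TESTING.
ENDCLASS.

CLASS ltc_order_calculator IMPLEMENTATION.
  METHOD test_get_total.
    DATA lo_calc TYPE REF TO zcl_order_calculator.
    CREATE OBJECT lo_calc.
    "The assertion is re-checked automatically every time the test runs:
    cl_aunit_assert=>assert_equals(
      act = lo_calc->get_total( )
      exp = 0
      msg = 'A new order should have a zero total' ).
  ENDMETHOD.
ENDCLASS.

Once a test class like this is attached to a development object, the whole suite can be executed from the ABAP Workbench with a single keystroke, which is a far cry from walking through a test document by hand.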

 

Observation #5: Good ABAP Code is the Best Documentation

At the end of the day, the definitive source on any development effort is going to be the source code. After all, it's the most likely artifact to get consistently updated. Given this, it's important that the code be maintained in such a way that it is highly readable. Here are a few tips for ensuring readability in your code:

 

  • Variables should have meaningful names that are intuitive to users.
  • Good modularization techniques should help break complex requirements up into smaller chunks that are easier to understand. Being an OO aficionado, I find tremendous value in modeling problems using classes which assume certain responsibilities within the problem domain. Such anthropomorphism makes it easier to translate between the human world and the code world. Method calls on such classes read like statements we would make in normal conversation. For example, IF po->is_locked( ) EQ abap_true. reads a whole lot cleaner than SELECT flag from EKKO...
  • It almost goes without saying that comments are highly important. Here though, I'd go a step further and say that good comments are important. Redundant or obvious comments are a waste of time.
  • Indentation and white space are important. It should be easy to group sections of code visually for the reader.
  • Don't be afraid to rip out portions of a module if the code becomes obsolete. There's nothing worse than having to scroll through pages of code that has been commented out over time. That's what version history is for. There are plenty of other places for developers to go back and find out why something changed.
  • Data structures are important. I often find programs that define pages and pages worth of global variables that should have been encapsulated into a structure or object type. Such grouping makes the variable uses more intuitive and the code more concise.
  • A class-based exception concept is preferred to having a series of subroutine calls followed by IF statements checking the values of flags. Such logical units of work should be wrapped in a TRY statement to separate the exception handling logic from the main programming logic (see the sketch after this list).
  • Modules should have high cohesion, which is to say that they should do one thing and one thing only.
  • The ABAP Workbench and CTS provide excellent facilities for documenting function modules, classes, methods, transports, and more. There's no better place to provide documentation for a particular module than right there. After all, as a developer, my first inclination is always going to be to go right to the source. Plus, with the CTS, it's easy to make sure that this documentation gets transported reliably along with the rest of the code.
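To illustrate the exception handling point from the list above, here is a minimal sketch; zcl_po and zcx_po_locked are hypothetical classes used purely for illustration:

DATA lo_po TYPE REF TO zcl_po.

TRY.
    CREATE OBJECT lo_po.
    lo_po->release( ).          "May raise zcx_po_locked
    lo_po->send_to_vendor( ).
  CATCH zcx_po_locked.
    "All of the error handling lives here, cleanly separated from
    "the main processing logic above:
    MESSAGE 'Purchase order is locked' TYPE 'I'.
ENDTRY.

Contrast this with the flag-checking style, where every subroutine call has to be followed by an IF block before the main logic can proceed.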

 

More than any other artifact, the code is the one place where good documentation practices can't slide.

 

Conclusions

Before I conclude this blog post, I would be remiss if I didn't say that I think documentation is very important. As a developer, I feel that it is my responsibility to leave behind a series of artifacts which accurately portray the design and implementation decisions that went into my development process. In my experience, these artifacts can take on many different forms depending on the context. As such, I think it important for projects to provide developers with the flexibility to produce something that is meaningful as opposed to a series of boilerplate documents which aren't worth the paper they're printed on.

 

Having worked in other software disciplines, I find other methodologies such as RUP much more effective than the ones used on the typical SAP project. I say this not to point the finger at SAP, since SAP has long since moved on from the waterfall approach typical of ASAP projects of the 1990s. More often than not, these outdated practices are carryovers from consulting practices/development shops that have "always done it this way". These days, I don't think that's going to cut it anymore.

Last fall, my 10-year-old son expressed some interest in learning how to program games. So I, being the CS nerd that I am, was thrilled and immediately went to work trying to find good learning resources to help get him started. Eventually, I settled on a book entitled Hello World! Computer Programming for Kids and Other Beginners (you can read about it here). As the name suggests, this book is geared toward kids looking to get started with programming for the first time. Though there are several books out there that purport to do this, I thought there were two things that really set this book apart:

 

  1. The book is co-authored by the author's 12-year-old son. So, you get the insight of a young developer learning how to program for the first time. In particular, the book contains many sidebars which document specific pain points encountered during the learning process.
  2. It uses Python as its language of choice for teaching introductory programming.

 

Given my natural prejudices when it comes to scripting languages, I was a little skeptical about the selection of Python as a first programming language. What little I had seen of it had given me horrible flashbacks to my days of doing CGI scripting in Perl in the late 1990s. Back then, scripting languages just seemed like controlled chaos: no typed variables, weird and cryptic syntax, and a certain amount of terseness that just went against everything I had ever learned about programming in school. Still, if you look at what the young whippersnappers of this generation are learning in schools, you'll find that scripting languages like Python are towards the top of the list. So either a whole generation of developers has it wrong (probably), or maybe it's me that needs to broaden my horizons. So, I decided we'd give it a shot.

 

So what's all this got to do with ABAP you might ask? Well, during the course of our journey, I discovered some things about scripting languages in general and Python in particular that really got me to thinking about the way we perform day-to-day tasks using traditional enterprise programming languages such as ABAP and Java. So what follows is an opinion piece which documents some of the lessons I learned while coming up to speed with Python.

 

Lesson 1: Dynamic Typing Ain't That Bad

Though I've programmed in many languages over the years, Java has always been my first love (sorry ABAP). And it was in Java that I really began to embrace the notion of static typing. If you're not familiar with this concept, then a brief description is in order.

 

When we talk about types in a programming language, we're talking about artificial constructs which provide an abstraction on top of some section of memory. For example, the primitive int (integer) type in Java carves out 4 bytes in memory to store a 32-bit signed, two's complement integer. Similarly, other data types such as float, double, or char in Java or I, F, P, and C in ABAP map an abstract data type onto a series of bits in memory. To the computer, it's 1's and 0's as usual; to us, we have an intuitive construct which can be used to model the data we encounter in the real world.

 

As practitioners of a given language, we normally remain blissfully unaware of such bookkeeping, relying on the runtime environment to take care of the low-level bit-twiddling details. Language designers, on the other hand, care about these details a great deal. In particular, they are interested in defining a scheme for determining when and where to apply a particular abstraction (type). Such mapping schemes can usually be classified into two broad categories:

 

  • Static Typing
    • With static typing, the types of variables must be declared up front at compile time.
    • If a variable is created without a type, a syntax error will occur and the code won't compile.
    • Similarly, if a variable of a given type is statically assigned a value which is outside the boundaries of that type, the compiler will catch that too. Of course, there are limits to what can be checked at compile time. After all, the compiler can't predict how a poorly written loop might cause overflow in a variable assignment, etc.
    • In addition to the efficiencies it offers to compiler implementations, static typing is geared towards preventing developers from hanging themselves with type mismatches and the like.
  • Dynamic Typing
    • With dynamic typing, a variable is not assigned a type until runtime, when it is first assigned a value.
    • This is made possible by VM/interpreter implementations which are designed to allocate just about everything on the fly.
    • Since there are no compile-time restrictions on type declarations, it is possible that some type mismatch errors won't be caught until runtime.

 

As you may have guessed, both ABAP and Java employ static typing. So, whenever we define a variable in one of these languages, we must assign it two things:

 

  • A name
  • A specific type

 

For example, if we wanted to define a variable to hold a floating point number in Java, we would need to define it using a syntax like the following:

 

float pi = 3.14159f;

 

With ABAP, we probably end up with a syntax like this:

 

DATA pi TYPE p DECIMALS 5.
pi = '3.14159'.

 

Conversely, the equivalent variable declaration in Python looks like this:

 

pi = 3.14159

 

As you can see, Python does not require a type declaration up front. So what, you say? Well, besides saving several keystrokes (or many if it's a complex data structure), the dynamic approach is much more flexible in the long run. For example, think about what would happen if at some point we needed to increase the precision of our PI variable. In the ABAP/Java examples, we would probably have to go back and touch up the code to choose a wider data type. With Python, no code changes are required; the interpreter will simply carve out a larger space in memory as needed.

 

In his article, Scripting: Higher-Level Programming for the 21st Century, John Ousterhout puts this into perspective: "...scripting languages are designed for gluing: they assume the existence of a set of powerful components and are intended primarily for connecting components together. System programming languages (e.g. C) are strongly typed to help manage complexity, while scripting languages are typeless to simplify connections between components and provide rapid application development."

 

As I progressed further and further with Python, I found that I didn't really miss the formal type declarations like I thought I would. That's not to say that I didn't encounter a runtime error here or there. But the thing is, I encounter those kinds of issues in my day-to-day ABAP work, too. So, at the end of the day, I had to ask myself a fundamental question: what is static typing truly buying me other than a lot more keystrokes? As much as I have been a strong proponent for static typing over the years, this is a question I found difficult to answer with anything other than "because...".

 

Lesson 2: Internal Tables Could Use a Facelift

One of the things I like about Python is its rich set of built-in collection types: lists, tuples, sets, and dictionaries. These collection types are quite feature rich and flexible in their use. Sure, you can accomplish all the same things with internal tables in ABAP, but the Python way of doing things is a whole lot easier. From a usage perspective, we have the option of working with these collections in two different ways:

 

  1. We can perform operations using the rich set of API methods provided with Python collection types just as we would with collection types in Java (e.g. those in the java.util package).
  2. Python also allows us to perform certain operations on these objects using built-in operators (e.g. [ ], +, etc.).

 

To put these advantages into perspective, let's take a look at a side-by-side comparison between ABAP code and Python code. The other day, I was tasked with enhancing a simple workflow report in ABAP that provides statistics about agents assigned to specific workflow items. As I read through the code, I found the selection logic to be pretty typical of most reports:

 

  1. First, the report fetched the work item information into an internal table.
  2. Then, for each work item record in the internal table, additional information about the assigned agent (e.g. agent name, duty, etc.) was fetched and aggregated into a report output table.

 

In order to improve performance and avoid the dreaded "SELECT within a LOOP", the developer had built a temporary table which contained the super set of agents assigned to the work items. That way, the agent information only had to be selected once as opposed to over and over again within the loop. From an ABAP perspective, the set generation process looked something like this:


LOOP AT lt_work_items ASSIGNING <ls_work_item>.
  READ TABLE lt_agents ASSIGNING <ls_agent>
       WITH KEY wi_aagent = <ls_work_item>-wi_aagent.
  IF sy-subrc NE 0.
    APPEND INITIAL LINE TO lt_agents ASSIGNING <ls_agent>.
    <ls_agent>-wi_aagent = <ls_work_item>-wi_aagent.
  ENDIF.
ENDLOOP.

 

Though this is pretty simple code, I would draw your attention to the number of lines of code it takes to perform a simple task such as building the LT_AGENTS superset (and we didn't even include the type definitions, data declarations, and so on). Now, while there are arguably better ways of performing this task in ABAP (the somewhat obscure COLLECT statement comes to mind; see the sketch at the end of this lesson), this copy idiom is fairly typical of a lot of the ABAP code I see out there. With that in mind, let's look at the Python way of doing this. Here, if we structure our collection types correctly, we can achieve the same task using a single line of code:

 

# Assuming wi_dict is a dictionary type with key:value pairs of the form
# {Work Item: Agent ID}...
agent_set = set(wi_dict.values())

 

In this case, we simply collect the list of agents from the wi_dict dictionary object and then pass it to the set collection's constructor method. Since the set type automatically filters out duplicates, we can perform the task in one fell swoop. Of course, that's just one of many operations that are made easier using Python collections. Overall, I found that it was much easier to create custom data structures and perform all manner of operations on them in Python as opposed to ABAP (and Java too, for that matter).
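Incidentally, the COLLECT alternative alluded to above trims the ABAP version down quite a bit as well. Here's a rough sketch; it assumes LT_AGENTS has a character-only line type so that COLLECT behaves as a pure deduplicator:

DATA ls_agent LIKE LINE OF lt_agents.

LOOP AT lt_work_items ASSIGNING <ls_work_item>.
  ls_agent-wi_aagent = <ls_work_item>-wi_aagent.
  "COLLECT only appends the line if no line with the same key exists:
  COLLECT ls_agent INTO lt_agents.
ENDLOOP.

Still, even at its most concise, the ABAP version can't quite compete with the Python one-liner. This leads into my next lesson learned.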

 

Lesson 3: ABAP Would Taste Sweeter with Some Syntactic Sugar

The first time I looked at an ABAP program, my initial reaction was how much it looked like COBOL, a language often chastised for its verbosity. 12 years and a case of carpal tunnel syndrome later, things haven't really changed all that much on this front. Sure, there have been a lot of enhancements to the language, but there are still many trivial tasks that seem to take more lines of code than they should. For example, look at the following piece of sample code written in Python:

 

import os, glob

[f for f in glob.glob('*.xml') if os.stat(f).st_size > 6000]

 

This complex expression is called a list comprehension, and can be interpreted as "list the set of XML files in the current working directory that are larger than 6,000 bytes". In ABAP, we'd have to call a function to retrieve an internal table of files in the target directory, loop through each file, and apply the predicate logic after the fact. They both achieve the same thing, but I can get there quicker with Python.
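For comparison's sake, here is a rough sketch of what that might look like in ABAP. It assumes the standard function module EPS2_GET_DIRECTORY_LISTING (which reads server-side directories) returns NAME and SIZE fields in its DIR_LIST table; the directory path is just a placeholder:

DATA lt_files TYPE STANDARD TABLE OF eps2fili.
FIELD-SYMBOLS <ls_file> TYPE eps2fili.

CALL FUNCTION 'EPS2_GET_DIRECTORY_LISTING'
  EXPORTING
    iv_dir_name = '/usr/sap/trans/data'   "Placeholder directory
    file_mask   = '*.xml'
  TABLES
    dir_list    = lt_files
  EXCEPTIONS
    OTHERS      = 1.

IF sy-subrc EQ 0.
  "Apply the predicate logic after the fact:
  LOOP AT lt_files ASSIGNING <ls_file> WHERE size > 6000.
    WRITE: / <ls_file>-name.
  ENDLOOP.
ENDIF.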

 

Lesson 4: Less Fluff = Improved Readability

As I mentioned earlier, I certainly had my doubts about using Python as a learning language. However, I was surprised at how quickly my son was able to pick it up. After spending just a little time with it, he seemed to have no trouble reading sample code and tweaking it to create simple games. Ultimately, I think this comes down to the fact that Python has so little fluff in it that it's really easy to zero in on what a particular piece of code is trying to do. Compare this with the 30K line ABAP report which contains 2-3 pages full of nothing more than type/variable declarations. Sometimes less is more, and I think Python and scripting languages in general got this part of language design right.

 

Lesson 5: Everything Works Better If You Get the Core Right

As I've begun branching out with my Python programming, I've started looking at how to perform common IT-related tasks such as XML parsing, Web service calls, string processing, and so on. While working with these APIs, I noticed a common trend: no matter the technology, most everything can be achieved using basic Python built-in types. For example, when parsing XML, I don't have to familiarize myself with 10-20 interfaces (yes, I'm looking at you, iXML). Instead, elements are stored in lists, attributes are stored in dictionaries, and it's basic Python programming as per usual.

 

I liken this to the Unix OS architecture where everything's a stream. Once you establish this foundation, everything just seems to flow better. Of course, every new technology is going to present a learning curve, but as long as the core remains the same, it is much easier to come up to speed with all the rest.

 

Conclusions

If you've made it this far through my ramblings, then you might be wondering what conclusions can be drawn from all this. After all, SAP's not likely to re-purpose ABAP as a scripting language anytime soon. Still, languages have a way of borrowing features from one another (see ABAP Objects), so it's possible we'll see ABAP loosen up a little bit more in the coming years. Also, with the advent of VM implementations such as Jython, it's possible to mix-and-match languages to solve particular types of problems.

 

On a more personal level, I found it interesting to see how the next generation of developers are being taught to program. Clearly things have changed, and sometimes change is good. Like it or not, a good majority of next generation cloud-based applications are being built using these languages. Indeed, at Google, Python is right up there as a first-class citizen with Java in the enterprise realm. Suffice it to say that the dynamic programming hippies are here to stay, so lock up your daughters and hold your statically-typed variables close at hand. :-)

The other day I was troubleshooting a Floorplan Manager (FPM) application and needed to debug around an event triggered in a dialog window based on a GUIBB (Generic User Interface Building Block). Normally, when faced with this task, I use the familiar "More Field Help" function in the Web Dynpro runtime environment to locate the component configuration for the GUIBB, open it up in the Web Dynpro component configurator tool, and then look at the component configuration properties to identify the GUIBB's feeder class. Then, I can set breakpoints in the feeder class methods to debug the functionality using the ABAP Debugger as per usual.

[Screenshot: Component configuration properties showing the GUIBB's feeder class]

 

While this is normally a pretty mundane task, I ran into a little bit of a challenge this time around: the Web Dynpro component configurator tool short dumped due to a coding error in an SAP component every time I tried to open up the target component configuration. So, without being able to determine the feeder class via the configurator tool, I needed to come up with an alternative method for locating the feeder class. What follows is one potential workaround for dealing with this kind of problem.

 

Workaround Steps to Locate the Feeder Class for a GUIBB Manually

 

  1. Open up the Data Browser tool (Transaction SE16) and display table WDY_CONFIG_DATA.
  2. Plug the GUIBB's component configuration ID into the CONFIG_ID field and hit the Execute button.
  3. In the table record, copy the hex-binary content stored in the XCONTENT field to your clipboard. This hex-binary content includes an XML document which contains additional metadata about the component configuration.
  4. There are many ways to decode the hex-binary content, but one simple way is to open up a browser window and navigate to http://www.string-functions.com/hex-string.aspx. This website provides a handy hex-binary to string converter tool as shown below. Here, notice that the feeder class is embedded within the <FEEDER> element.

[Screenshot: Decoded XML showing the feeder class embedded within the <FEEDER> element]
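If you'd rather not paste system content into a third-party website, you can also decode the XCONTENT field directly in the system. Here is a minimal sketch using the standard conversion class cl_abap_conv_in_ce; the configuration ID below is a hypothetical placeholder:

DATA: ls_config TYPE wdy_config_data,
      lo_conv   TYPE REF TO cl_abap_conv_in_ce,
      lv_xml    TYPE string,
      lv_feeder TYPE string.

SELECT SINGLE * FROM wdy_config_data INTO ls_config
  WHERE config_id = 'ZMY_GUIBB_CONFIG'.   "Hypothetical configuration ID

"Convert the hex-binary XCONTENT into a readable XML string:
lo_conv = cl_abap_conv_in_ce=>create( input    = ls_config-xcontent
                                      encoding = 'UTF-8' ).
lo_conv->read( IMPORTING data = lv_xml ).

"Pull the feeder class out of the <FEEDER> element:
FIND REGEX '<FEEDER>([^<]*)</FEEDER>' IN lv_xml SUBMATCHES lv_feeder.
WRITE: / lv_feeder.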

Hopefully you'll never need to use this, but if you do run into this problem, I hope this helps.

Despite all of the advances in Web service and proxy technologies over the course of the past few years, I still find that many customers prefer to use the tried-and-true ALE/IDoc technology stack. Consequently, a frequent administrative headache is the upload of IDoc metadata using Transaction IDX2. In this blog, I will demonstrate a simple report program that can be used to automate this task.

What is IDoc Metadata, Anyway?

If you haven't worked with the PI IDoc adapter before, then a brief introduction is in order. As you know, all messages that flow through the PI Integration Engine pipeline are encoded using XML. Thus, besides providing raw connectivity, adapters such as the IDoc adapter also must perform the necessary transformations of messages so that they can be routed through the pipeline. In the case of the IDoc adapter, the raw IDoc data (you can see how the IDoc data is encoded by looking at the signature of function module IDOC_INBOUND_ASYNCHRONOUS) must be transformed into XML. Since the raw IDoc data does not provide information about segment field names, etc., this metadata must be imported at configuration time in order to enable the IDoc adapter to perform the XML transformation in an efficient manner.

From a configuration perspective, all this happens in two transactions:

    1. In Transaction IDX1, you create an IDoc Adapter Port which essentially provides the IDoc adapter with an RFC destination that can be used to introspect the IDoc metadata from the backend SAP ALE system.
    2. In Transaction IDX2, you can import IDoc types using the aforementioned IDoc adapter port. Here, you can import standard IDoc types, custom IDoc types, or even extended types.

    If you're dealing with a handful of IDocs, then perhaps this isn't such a concern. However, if you're dealing with 10s or 100s of IDocs and a multitude of PI systems, then this process can become tedious in a hurry.

    Automating the Upload Process

    Now, technically speaking, the IDoc adapter is smart enough to utilize the IDoc port definition to dynamically load and cache IDoc metadata on the fly. However, what it won't do is detect changes to custom IDocs/extensions. Furthermore, if you have scenarios during cutover which block RFC communications, not having the IDoc metadata on hand can lead to unexpected results. The report below can be used to automate the initial upload process or execute a kill-and-fill to pull in the latest and greatest changes. In reading through the comments, you can see that it essentially takes two inputs: the IDoc adapter port defined in IDX1 and a CSV file from your frontend workstation that defines the IDoc types to import. Here, you just need to create a two-column CSV file containing the IDoc type in column 1 and the extension type (if any) in column 2.
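    For example, a file that imports one standard IDoc type and one extended type might look like this (the extension name below is just a made-up placeholder):

    MATMAS05,
    ORDERS05,ZORDERS05_EXT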

    REPORT zidx_idoc_load_metadata.

    *&---------------------------------------------------------------------*
    *& Local Class Definitions                                             *
    *&---------------------------------------------------------------------*
    CLASS lcl_report DEFINITION CREATE PRIVATE.
      PUBLIC SECTION.
        CLASS-METHODS:
          "Used in the selection screen definition:
          get_frontend_filename CHANGING ch_file TYPE string,

          "Public static method for running the report:
          execute IMPORTING im_idoc_types_file TYPE string
                            im_idoc_port       TYPE idx_port.

      PRIVATE SECTION.
        "Class-Local Type Declarations:
        TYPES: BEGIN OF ty_idoc_type,
                 idoc_type TYPE string,
                 ext_type  TYPE string,
               END OF ty_idoc_type,

               ty_idoc_type_tab TYPE STANDARD TABLE OF ty_idoc_type.

        "Instance Attribute Declarations:
        DATA: idoc_port  TYPE idx_port,
              idoc_types TYPE ty_idoc_type_tab.

        "Private helper methods:
        METHODS:
          constructor IMPORTING im_idoc_port TYPE idx_port,
          upload_idoc_types IMPORTING im_idoc_types_file TYPE string
                            RAISING   cx_sy_file_io,
          import_idoc_metadata,
          remove_idoc_metadata IMPORTING im_idoc_type TYPE string.
    ENDCLASS.

    CLASS lcl_report IMPLEMENTATION.
      METHOD get_frontend_filename.
        "Local Data Declarations:
        DATA: lt_files       TYPE filetable,
              lv_retcode     TYPE i,
              lv_user_action TYPE i.
        FIELD-SYMBOLS: ...

      "... (the remainder of this method, along with the implementations
      "     of execute, constructor, upload_idoc_types, and
      "     import_idoc_metadata, is omitted here) ...

      METHOD remove_idoc_metadata.
        "...
          idoc_port
          WITH idoctyp = im_idoc_type
          AND RETURN.
      ENDMETHOD.                 " METHOD remove_idoc_metadata
    ENDCLASS.

    *&---------------------------------------------------------------------*
    *& Selection Screen Definition                                         *
    *&---------------------------------------------------------------------*
    PARAMETERS: p_idxprt TYPE idx_port OBLIGATORY,
                p_ifile  TYPE string LOWER CASE OBLIGATORY.

    *&---------------------------------------------------------------------*
    *& AT SELECTION-SCREEN Event Module                                    *
    *&---------------------------------------------------------------------*
    AT SELECTION-SCREEN ON VALUE-REQUEST FOR p_ifile.
      CALL METHOD lcl_report=>get_frontend_filename
        CHANGING
          ch_file = p_ifile.

    *&---------------------------------------------------------------------*
    *& START-OF-SELECTION Event Module                                     *
    *&---------------------------------------------------------------------*
    START-OF-SELECTION.
      CALL METHOD lcl_report=>execute
        EXPORTING
          im_idoc_port       = p_idxprt
          im_idoc_types_file = p_ifile.

    Final Thoughts

     

    I hope you'll find this simple report program useful. Please feel free to try it out, modify it, or do with it what you will. If you have any questions, please feel free to contact me. Also, if you are interested in learning more about SAP NetWeaver PI development, then I would encourage you to check out my new book: SAP NetWeaver Process Integration: A Developer's Guide. And if you're more of an e-book kind of person, be on the lookout for the Kindle release of this book coming in the next few days.

    During the course of my day-to-day PI developments, I frequently encounter scenarios in which I need to create an async-to-sync bridge in order to connect to some 3rd-party Web service. Here, for example, I might receive an asynchronous IDoc on one hand, and need to turn around and call a synchronous Web service on the other hand. Generally speaking, there are two ways of building this bridge using PI:

    • You can interject an integration process to broker the synchronous call and deal with the results.
    • You can use the JMS adapter module-based bridge described here.

    While both of these scenarios are viable options, they are not without their drawbacks. For a simple point-to-point call, the interjection of an integration process adds a lot of unneeded overhead. The JMS adapter module solution performs much more favorably, but can be difficult to work with if messages need to be delivered reliably (and perhaps in order). To get around these limitations, I bent the rules a little bit and devised a custom adapter module that allows you to configure the synchronous Web service call using asynchronous message processing semantics. Sufficiently confused? Don't worry, this blog entry describes the architecture of this solution.

    Architectural Overview

    If you've worked with the SAP standard async-to-sync bridge, then you know that it interjects a couple of adapter modules into a JMS sender channel definition in order to implement a synchronous call behind the scenes and then forward the result on asynchronously to some final destination (e.g. a JMS queue, IDoc, etc.).

    The architecture of my solution is similar, but it deviates in the way it executes the synchronous call. With SAP's solution, the synchronous call is brokered through the Integration Server. The advantage of this approach is that it allows you to leverage any adapter type when performing the synchronous call. The downside is that you must process the message using a QOS of "Best Effort". In my solution, I have tailored the adapter module to facilitate the Web service call internally and then replace the request message payload with the results message payload. Thus, from a configuration perspective, you're essentially configuring an asynchronous scenario that forwards the results of the Web service call to some receiver interface.

    To demonstrate how this works, imagine that you are receiving an ORDERS05 IDoc message out of some source SAP ECC system and forwarding it on to some downstream 3rd-party system using SOAP Web services. You then want to take the results of this Web service call and forward the message back to the source SAP ECC system. The configuration steps are as follows:

    1. You configure an asynchronous scenario which takes receipt of the IDoc message, converts it into a SOAP request message (minus the SOAP envelope), and then places the result on a JMS queue.
    2. Then, you configure a JMS sender channel using the custom adapter module to pick up the request message, forward it on to the 3rd-party system, and then swap the payload such that the results of the Web service call are forwarded on to the Integration Server.
    3. From there, you have a normal asynchronous scenario in which you are sending the response message asynchronously back to the source SAP ECC system.

    As you can see, PI really doesn't know much about the synchronous call. To the Integration Server, it's business as usual. Of course, the adapter module is smart enough to react to communication failures and SOAP faults. If this occurs, an exception is raised and the message gets rolled back. That way, messages can be delivered with a QOS of "Exactly Once" or "Exactly Once In Order".

    Implementing the Adapter Module

    You can find the complete source code for the custom adapter module here. For step-by-step instructions for building the EAR file and deploying it, I highly recommend William Li's article on the SDN entitled Developing Adapter User-Module to Encrypt XML Elements in Process Integration 7.1. As you can see, the code itself utilizes the Java standard SAAJ API as well as the JDOM and Commons Codec libraries. Beyond that, the code should seem fairly straightforward. In essence, its job is to:

    1. Extract the values of relevant configuration parameters which determine the target URL to call, the SOAP action of the target operation, and basic authentication tokens (if needed).
    2. Extract the request message payload, build a SOAP envelope, and call the target Web service operation.
    3. Validate the results and raise an exception to rollback the message as necessary.
    4. And finally, if the request was successful, overlay the request message with the response message.

    Naturally, you can tweak this behavior for any custom message processing requirements that you might have.

    Configuration in the Integration Directory

    Once the adapter module is deployed, you can configure it by selecting the Module tab in your JMS sender channel. The figure below demonstrates how this works. The Module Name field must contain the JNDI name assigned to the adapter module when it is deployed. Other than that, the configuration is very straightforward.

    [Screenshot: Communication Channel Configuration]

    Final Thoughts

    I hope that you'll find this utility module useful for those scenarios that just don't quite fit in with the default SAP behavior. Also, if you like what you see above, allow me to offer a shameless plug for my new book SAP NetWeaver Process Integration: A Developer's Guide.

    If you're an SAP NetWeaver PI developer looking to take your skills to the next level, or if you're a novice developer eager to get started with PI, then perhaps you'll indulge me as I offer a shameless plug for my new book: SAP NetWeaver® Process Integration: A Developer's Guide.

    [Image: Book cover]

    Why this book is different

    This book was written by developers, for developers. Each topic is covered with a balanced approach that combines conceptual theory with practical examples. Along the way, you’ll find plenty of illustrations and code samples that will help you get started right away with your own developments using SAP NetWeaver PI 7.1 (and Ehp 1).

    Within the book, you’ll find detailed information about core development topics in SAP NetWeaver PI as well as some complementary Internet-based technologies that go hand-in-hand with modern interface development. Specific topics include:

    • Conceptual and architectural overview
    • Enterprise Services Builder and the Enterprise Services Repository
    • SOA/service-design concepts
    • Service interface development
    • Message mapping development
    • Integration Processes
    • Integration Builder and Integration Directory
    • Integration Server and the Integration Engine
    • Advanced Adapter Engine
    • Monitoring tools and techniques
    • Internet-based technologies such as XML, XML Schema, SOAP, and WSDL
    • And much, much more

    Tour of the Book

    One of our goals in writing this book was not to simply compile a bunch of loosely related information together into a reference manual. While this book can certainly be used as a reference in your day-to-day work, we also hope that you’ll find each chapter to be an interesting read in and of itself. Though each of the chapters is designed to be self-contained, we think you’ll get the most out of your experience by reading the book in order as each chapter builds upon previous concepts. The chapters are organized as follows:

     

    • Chapter 1: Foundations

      In this first chapter, we attempt to lay some groundwork by describing what SAP NetWeaver PI is, where it came from, and what value it brings to the table in the world of enterprise software development. As such, this chapter is a microcosm for the entire book.

    • Chapter 2: Working with XML

      For developers new to the world of interfacing in the Internet age, this chapter will introduce you to the eXtensible Markup Language (XML) and some of its surrounding technologies such as XML Schema.

    • Chapter 3: The Web Services Technology Stack

      If you’re not familiar with Web service technologies such as SOAP, WSDL, and UDDI, then this chapter will provide you with a gentle overview to bring you up to speed. Having an understanding of these concepts is important for being able to comprehend how interfaces are defined and processed within the SAP NetWeaver PI runtime environment.

    • Chapter 4: Getting Started with the ESR

      In this chapter, we begin getting our hands dirty with the PI design time environment. Here, you will learn how to organize and manipulate SOA assets within the Enterprise Services Repository (ESR).

    • Chapter 5: Service Design Concepts

      This chapter sets the stage for service interface development by introducing you to some SAP and industry-best practices for designing and modeling business processes in the SOA context. Along the way, you will become familiar with some SOA modeling tools that can be used to visualize various aspects of a business process at different levels of abstraction.

    • Chapter 6: Service Interface Development

      In this chapter, you learn about the various approaches to service development supported by SAP NetWeaver PI. Here, you’ll learn how to develop custom services from scratch, or leverage pre-existing services.

    • Chapter 7: Mapping Development

      This chapter introduces you to some of the basics of mapping development in SAP NetWeaver PI. In particular, we will show you how to implement graphical message mappings and import custom mapping programs written in Java and XSLT.

    • Chapter 8: Advanced Mapping Development

      This chapter picks up where Chapter 7 left off by showing you some advanced mapping development concepts. Here, you will learn how to define and configure operation mappings, perform value mappings and mapping lookups, and much more.

    • Chapter 9: Integration Processes

      In this chapter, we will show you how to implement sophisticated message processing requirements using integration processes. As you’ll see, these workflow-like components can be used to implement stateful processing, conditional logic, and much more.

    • Chapter 10: Working with the Integration Builder

      This chapter introduces the Integration Builder tool which is used to define configuration objects within the Integration Directory.

    • Chapter 11: Collaboration Profiles

      In this chapter, you will learn how collaboration profiles are used to model the endpoint systems that will participate in collaborative business processes.

    • Chapter 12: Integration Server Configuration

      This chapter shows you how to configure collaborative business processes for execution within the Integration Server, which is an ABAP-based runtime component of SAP NetWeaver PI. Here, you will learn how to define logical routing rules and some of the other configuration-time objects used to influence the behavior of the messaging components at runtime.

    • Chapter 13: Advanced Configuration Concepts

      In this chapter, we will introduce you to some advanced communication variants that are supported in version 7.1 of SAP NetWeaver PI. Here, you will learn how to configure local processing within the Advanced Adapter Engine (AAE) as well as point-to-point scenarios between SAP-based Web service runtime environments.

    • Chapter 14: Process Integration Monitoring

      This final chapter presents some of the various monitoring tools provided with SAP NetWeaver PI. Here, we’ll show you how these tools can be used to monitor the flow of messages, the health of messaging components, and so on.

    • Appendix A: Proxy Programming Concepts

      In this appendix, you’ll learn about proxy programming concepts. Specifically, we’ll show you how to develop proxy objects in ABAP that can communicate with the PI Integration Server using the native XI protocol.

    • Appendix B: Enhancing Enterprise Services Provided by SAP

      This appendix demonstrates techniques for enhancing enterprise services provided by SAP.

    • Appendix C: Collecting Mapping Requirements

      In this appendix, we’ll provide you with some tips for collecting mapping requirements from the various stakeholders involved in a collaborative business process.

    Where can I find out more?

    If you are interested in learning more about the book and what it has to offer, check out the introduction and table of contents available here. You can also download the first chapter here. Finally, if you have specific questions, you can also e-mail me at james.wood@bowdarkconsulting.com.

    Right now, the book is available for sale online at https://www.createspace.com/3555638. You can also find a Kindle Edition online at http://www.amazon.com/SAP-NetWeaver-Process-Integration-ebook/dp/B0054S3JNS/ref=sr_1_2?ie=UTF8&qid=1307711749&sr=8-2.

    If you've ever worked with adapter modules in SAP NetWeaver PI, then you already know how powerful they can be when it comes to enhancing the functionality of Java-based adapters. In this blog, I'll show you how adapter modules can be used to dynamically derive the queue ID (also referred to as the sequence ID) for EOIO message processing.

    Getting Started

    For those of you that may not be familiar with adapter modules, a brief introduction is in order. In many respects, you can think of an adapter module as a type of user exit that can be used to enhance the functionality of a Java-based adapter. From a technical perspective, adapter modules are implemented as stateless session Enterprise JavaBeans (EJBs). Here, in addition to implementing the typical javax.ejb.SessionBean interface, adapter modules must also implement the com.sap.aii.af.lib.mp.module.Module interface prescribed by SAP. Such standardization makes it possible to seamlessly interject adapter modules into the processing sequence for adapters. On the development side of things, you're simply tasked with implementing the process() method defined by the Module interface.

    If all this seems a little hazy, then an excellent resource is William Li's article Developing Adapter User-Module to Encrypt XML Elements in Process Integration 7.1. There, you will find a step-by-step introduction to adapter modules, their development, deployment, etc. For now, we'll assume that the basic framework is in place and move on to take a look at the Java code required to perform the dynamic queue derivation.

    Dynamic Queue Derivation

    As you're probably already aware, the term EOIO stands for Exactly Once In Order. In terms of message processing, what we're saying is that each incoming message should be processed exactly once, and in the order that it is received. To guarantee sequencing, messages are serialized through FIFO queues (where FIFO stands for "First In, First Out").

    The problem with many Java-based adapters is that the queue names are statically assigned at configuration time instead of at runtime. For example, in the screenshot below, you can see how a JMS sender channel is configured for EOIO processing. In this case, the queue name is statically defined at configuration time within the Integration Directory. In some cases, this may not be granular enough for your use case. For instance, you may want to derive the EOIO queue names in terms of some key identifier in the inbound message (e.g. a document number).

    [Screenshot: JMS sender channel configured for EOIO processing with a statically defined queue name]

    To get around this limitation, you can write an adapter module to override the queue ID value at runtime. The code itself is surprisingly simple:

    import javax.ejb.SessionBean;

    import com.sap.aii.af.lib.mp.module.Module;
    import com.sap.aii.af.lib.mp.module.ModuleContext;
    import com.sap.aii.af.lib.mp.module.ModuleData;
    import com.sap.aii.af.lib.mp.module.ModuleException;
    import com.sap.engine.interfaces.messaging.api.Message;

    public class MyAdapterModule implements SessionBean, Module
    {
      // The standard SessionBean lifecycle methods (ejbCreate(),
      // ejbRemove(), and so on) are omitted here for brevity.

      public ModuleData process(ModuleContext moduleCtx, ModuleData moduleData)
        throws ModuleException
      {
        try
        {
          // Extract the input message from the module data:
          Message msg = (Message) moduleData.getPrincipalData();

          // Use the Message instance to parse the incoming XML;
          // for example, if the incoming message is a sales order,
          // then we might derive the queue ID using the sales order number:
          String queueId = "...";   // Derived from the message content

          // Once the queue ID is derived, set it using the setSequenceId()
          // method:
          msg.setSequenceId(queueId);
          moduleData.setPrincipalData(msg);

          return moduleData;
        }
        catch (Exception ex)
        {
          // Raising a ModuleException triggers a rollback of the message:
          throw new ModuleException(ex.getMessage(), ex);
        }
      }
    }

    Looking at the code above, you can see that the only thing required to dynamically set the queue ID is a call to the setSequenceId() method defined in interface com.sap.engine.interfaces.messaging.api.Message. The queue ID itself can be derived based upon the content of the incoming message. For example, we could write a SAX handler class to quickly parse through the incoming message looking for a document number or some other key identifier that helps us derive the sequence for EOIO message processing.

    Once the adapter module is complete and deployed on the AS Java, you can interject it into the processing sequence for a sender channel, like the OrderMessageRouterBean shown below. That's all there is to it.

    [Screenshot: Module configuration showing the OrderMessageRouterBean in the sender channel's processing sequence]

    Closing Thoughts

    While this technique is powerful, you should be careful not to add too much additional overhead to your process. If you're familiar with SAX-based XML parsing, then I highly recommend that you use it to keep the memory footprint low and reduce the overall runtime.

    With all of the hype surrounding Web services these days, you don't hear much about message-oriented middleware (MOM) anymore. This is a shame, because messaging solutions really do have a lot to offer in terms of reliability and scalability. But that's another lesson for another day.

    Assuming you've already bought into all that MOM has to offer, then you're probably at least familiar with the Java™ Message Service (JMS) on some level. JMS is a vendor-neutral API that can be used to interact with messaging systems. As a Java™ developer, this means that you can write a piece of code to interact with IBM's WebSphere MQ and then turn around and use that same piece of code to process messages on SAP's JMS provider. This is analogous to the use of other generic APIs such as JDBC, JavaMail, and JNDI.

    While the JMS API defines rich semantics for processing messages, it doesn't have a lot to say about how to carry out certain administrative tasks. Frequently, these operations are performed using proprietary extensions and/or JMX MBeans. But what if you just want a simple administrative console to perform simple operations like browsing a queue, adding messages/removing messages, and so on?

    I've been toying with a solution for this off and on for quite some time. After giving the open-source HermesJMS tool an extended look, I ultimately decided that I needed something a little more specialized for what I was trying to accomplish. In this blog, I will present a custom Web Dynpro-based management tool that can be used to perform simple administrative tasks on the default SAP JMS provider.

    The Solution

    The solution was developed using Web Dynpro for Java on an SAP NetWeaver AS Java 7.1 stack. The core functionality is implemented exclusively using the JMS and JNDI APIs, so no proprietary extensions are required to install this solution. The following figure shows the main selection screen area where you can search for JMS queues that you wish to manage. The results table allows you to view messages, create sample messages, and remove messages.

    image

    Searching for Queues

    If you can't remember the name of the queue you want to manage, you can browse to it using the advanced search capabilities built into the tool. This search help lets you search for queues using regular expression syntax (see the figure below).

    image

    Creating Messages

    You can click on the Create button to create a new message. Here, you will be prompted with an input mask that allows you to create a TextMessage. This could include a sample XML snippet, or some other text payload of your choosing. As the source code is provided, you could conceivably develop support for other message types as necessary.

    image

    Viewing Messages

    You can view a JMS message by selecting it in the table and clicking on the View Message button. The figure below shows what the screen looks like. In addition to the JMS message ID and timestamp, you can also see custom header properties that were added to the message.

    Viewing a JMS Message

    Removing Messages

    To remove a message, simply select it in the table and click on the Remove Message button. One thing to keep in mind here is that this function will only work if there are no registered listeners (e.g. MDBs, and so on) on the queue in question; otherwise the operation will fail. This is because JMS does not provide API methods for removing messages. Therefore, the only portable way of achieving this is to create an ad hoc consumer that uses message selectors to filter on the selected message IDs.

    Purging the Queue

    You can think of the Purge Queue button as a mass deletion operation that is designed to remove all messages from the queue. This function is also subject to the same constraints as the Remove Message function.

    Security

    As you can imagine, many of these operations are powerful and must be secured against novice users who could potentially wreak havoc on the system. To guard against this, a series of UME actions were defined to protect access to the more sensitive functions. The figure below shows the creation of a sample UME role called JMSAdministrator that contains these actions. You can mix and match these actions in different roles as needed.

    image

    Where can I go to get it?

    The final solution is bundled as an SCA archive and available for download here. I hope that this will be of some use to you, and thanks for reading.

    Overview

    If you're like me and enjoy getting hands-on writing code, then perhaps you'll indulge me as I offer a shameless plug for my new SAP Press book entitled ABAP Cookbook. In this book, I have compiled a series of code examples that I think you might find useful in your day-to-day development activities.

    One of my goals in writing this book was to provide more than just a random compilation of code examples. After all, there's a wealth of these kinds of examples on SDN. When I purchase a book, I want to really dive in and learn something. Therefore, rather than just presenting the code examples, each chapter introduces a topic so that the code is presented in context. Along the way, I try to describe best practices that can be applied to your own development efforts.

    As an added bonus, this book builds a code library that contains various utilities that I hope you'll find useful. This code library is maintained at the book's companion site at http://www.bowdarkconsulting.com/books/abapcookbook. Here, I also welcome feedback and will post errata as needed.

    Topics

    As you can imagine, this book covers a wide range of ABAP-related programming topics including:

    • String processing with regular expressions
    • Dynamic and reflective programming
    • Unicode
    • File processing
    • ABAP Object Services
    • XML processing
    • Web programming with the ICF
    • Web Services
    • E-Mail
    • Security
    • Shared Memory Objects
    • Parallel processing
    • And much more

    Where Can I Get One? :-)

    The book is available online in print and electronic form at http://www.sap-press.com/products/ABAP-Cookbook.html. I hope that you enjoy reading it as much as I enjoyed writing it.

    ABAP Cookbook 

    In my previous blog, Exploring OO Design Patterns: The Strategy Pattern, we considered how the Strategy pattern could be used to dynamically select the appropriate solution to a problem at runtime. Obviously, one of the prerequisites for implementing this pattern is having pre-existing knowledge about the types of problems that could arise and their corresponding solutions. However, sometimes we don't know how to solve a problem until we examine it more closely. In this blog, we'll explore the use of the Chain-of-Responsibility design pattern and show you how it can be used to implement solutions to these kinds of problems.

    The Chain-of-Responsibility Pattern: Defined

    The Chain-of-Responsibility pattern is often used to handle various types of requests or events. Here, whenever an incoming request is received, we may not know how to handle it until we examine the details of the request at length. Rather than defining one big monolithic request handler module, we would prefer to keep the various handler solutions separate so that they can vary independently. Then, we can weave the individual request handlers into a composite solution by chaining them together. In this way, an incoming request is forwarded along the chain until a request handler is found that can process the request. Normally, the process stops after the request handler is found. On the other hand, in some cases, we might want to let the request continue down the chain so that each handler module can add additional functionality to the request handling process. The figure below demonstrates how a request is processed by a chain of request handlers.

    Chain-of-Responsibility Process Flow

    Now that you have a feel for how the Chain-of-Responsibility pattern works, let’s look at a more formalized definition. In the Gang of Four book, the Chain-of-Responsibility pattern is defined as follows:

    Avoid coupling the sender of a request to its receiver by giving more than one object a chance to handle the request. Chain the receiving objects and pass the request along the chain until an object handles it.

    To implement a generic chain of modules, each module must conform to the same interface. The UML class diagram below demonstrates how this works with a generic handler interface called Handler. Each of the modules in the chain implements this interface, providing module-specific functionality in the overridden handleRequest() method. At runtime, each of these modules can be invoked polymorphically.

    UML Class Diagram for Chain-of-Responsibility Pattern
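
    To make this discussion more concrete, the following minimal sketch shows what such a generic handler contract might look like in ABAP Objects. The names LIF_HANDLER and LCL_BASE_HANDLER are purely illustrative, and the request is modeled as a simple string:

    INTERFACE lif_handler.
      METHODS: set_next
                 IMPORTING im_handler TYPE REF TO lif_handler,
               handle_request
                 IMPORTING im_request TYPE string.
    ENDINTERFACE.

    CLASS lcl_base_handler DEFINITION ABSTRACT.
      PUBLIC SECTION.
        INTERFACES: lif_handler.
      PROTECTED SECTION.
        DATA: next TYPE REF TO lif_handler.
    ENDCLASS.

    CLASS lcl_base_handler IMPLEMENTATION.
      METHOD lif_handler~set_next.
        next = im_handler.
      ENDMETHOD.

      METHOD lif_handler~handle_request.
        "By default, simply forward the request to the next
        "handler in the chain (if there is one):
        IF next IS BOUND.
          next->handle_request( im_request ).
        ENDIF.
      ENDMETHOD.
    ENDCLASS.

    Concrete handlers would then inherit from LCL_BASE_HANDLER, redefine the handle_request() method to process the requests they recognize, and fall back on the superclass implementation to pass everything else along the chain.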

    Case Study: Implementing Handler Lists in the ICF

    Sometimes, abstract concepts like design patterns are best explained with an example. If you’ve had an opportunity to work with some of the Web-based technologies of the AS ABAP such as BSPs, Web Dynpro, etc., then you are probably somewhat familiar with the Internet Communication Framework (ICF). The ICF is an object-oriented framework that simplifies the way that you interact with the module that puts the Web in the SAP Web AS: the Internet Communication Manager (ICM). Whenever you host Web applications on the AS ABAP, the entry point into these applications is realized in the form of an ICF service node. ICF service nodes define basic information about a Web application such as the request path, authentication details, etc. In addition, ICF service nodes also allow you to configure a handler list. This handler list implements the core functionality of the Web application, allowing you to respond to HTTP requests programmatically.

    To put all this into perspective, let’s consider the ICF service node that implements the BSP runtime environment. ICF service nodes are maintained in Transaction SICF. The figure below shows the initial screen of this transaction. Here, you can click on the Execute button to maintain service nodes.

    Transaction SICF

    On the subsequent maintenance screen, you can see that ICF service nodes are organized hierarchically underneath virtual hosts, each of which represents a Web host serving the application. As you can see in the figure below, the BSP runtime environment is nested underneath the default host at the following path: sap --> bc --> bsp.

     

    BSP Runtime Environment Node

     

    To view/edit the bsp service node, simply double-click it. The figure below shows the handler list for the bsp service node. Here, a single handler class called CL_HTTP_EXT_BSP has been configured. This class (and all ICF handler nodes) implements the IF_HTTP_EXTENSION interface.

     

    Handler List

     

    The UML class diagram below shows how the IF_HTTP_EXTENSION interface is defined. As you can see, this interface is defined in much the same way as the Handler interface illustrated earlier. Here, concrete subclasses can implement the proper request handling functionality in the HANDLE_REQUEST() method. This method has access to the details of the request, as well as a reference to an object that can be used to generate the appropriate response. In addition, the IF_HTTP_EXTENSION interface also defines a public instance attribute called FLOW_RC. This attribute allows a handler module to pass back a return code to the surrounding ICF framework. Based upon the value of this return code, the framework can determine whether or not to call the next handler module in the handler list.

     

    IF_HTTP_EXTENSION

    As I mentioned earlier, it is sometimes useful to allow a request to continue down the chain so that other handler modules can process the request further. For example, imagine that you have a custom logging requirement for any incoming BSP request. Here, rather than implement that logging requirement in each BSP application, you could interject a handler module before the CL_HTTP_EXT_BSP handler in the handler list so that you could implement the logging requirement centrally. In this case, you don’t want the custom module to take the place of the BSP handler, you just want to implement some value-add functionality ahead of it. These same concepts also apply to SOAP toolkits that may need to implement special SOAP header processing, etc.
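
    To illustrate, here is a minimal sketch of what such a pass-through logging handler might look like. The class name ZCL_BSP_REQUEST_LOGGER is hypothetical, and the actual logging logic is left as a comment:

    CLASS zcl_bsp_request_logger DEFINITION.
      PUBLIC SECTION.
        INTERFACES: if_http_extension.
    ENDCLASS.

    CLASS zcl_bsp_request_logger IMPLEMENTATION.
      METHOD if_http_extension~handle_request.
        "Log the details of the incoming request here, e.g. via
        "server->request->get_header_field( '~request_uri' )...

        "Tell the ICF framework to continue down the handler list
        "so that CL_HTTP_EXT_BSP can still process the request:
        if_http_extension~flow_rc =
          if_http_extension=>co_flow_ok_others_mand.
      ENDMETHOD.
    ENDCLASS.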

     

    Summary

     

    Hopefully by now you have a feel for how to implement the Chain-of-Responsibility pattern in your own development projects. In my next blog, we'll look at another way of implementing this kind of behavior using the Decorator pattern.

    These days, it is getting harder and harder to ignore the object-oriented side of ABAP. If you are one of the many developers who have embraced this change, then perhaps you might be looking for ways to take your skills to the next level. Regardless of the discipline, one of the best ways to master a trade is to look at how senior practitioners go about solving particular problems. However, if you don’t have an OO guru sitting in the cubicle beside you, you might be wondering where you can go to learn these trade secrets. Fortunately, as it turns out, many of these best practices have been documented and left behind by some of the masters of the trade in the form of design patterns. In this blog series, I will introduce you to design patterns and show you how to use them to improve your designs.

    What are Design Patterns?

    The term “design pattern” was coined by Christopher Alexander, a noted architect whose book A Pattern Language: Towns, Buildings, Construction helped inspire software architects to document design approaches in terms of patterns. In his book, Alexander notes that “Each pattern describes a problem which occurs over and over again in our environment, and then describes the core of the solution to that problem, in such a way that you can use this solution a million times over, without ever doing it the same way twice”. Whether these solutions are fashioned from brick and mortar or from classes and interfaces doesn’t really matter. The point is that we can use this approach to document a solution to a problem in context.

    The design pattern concept was first introduced to the software community in the classic software engineering text Design Patterns: Elements of Reusable Object-Oriented Software (colloquially known as the “Gang of Four Book”). In this book, four leading software engineers attempted to catalog a set of patterns that were informally known throughout the community. In an effort to bring some structure to the text, they documented these patterns in four parts:

    1. Each pattern was assigned an informal name that could be used to identify and discuss the pattern throughout the community.
    2. Next, the problem that the pattern addressed was described in context.
    3. Once the problem was clearly defined, a general solution was described in terms of the classes/interfaces that make up the design, as well as the relationships, responsibilities, and collaborations between those classes. Here, you will not find a solution that can be cut-and-pasted into your program, but rather a design approach that can be adapted to fit the particular problem at hand.
    4. Finally, they documented trade-offs associated with applying the design. This information includes design alternatives, consequences of using the design, etc.

    Throughout the course of this blog series, I will introduce you to some of the more common design patterns documented in the Gang of Four book. To illustrate these solutions, I will try to show you how they might be used to solve particular kinds of problems in ABAP. So, without further ado, let's begin our journey by looking at the Strategy pattern.

    Classifying Behavior

    In some respects, the OO analysis and design (OOAD) process bears similarities to the classification process used by biologists to organize plants and animals. Here, we attempt to make sense out of a set of functional requirements by classifying the objects that make up a given problem space. A common tactic used to identify these classes is to read through a functional specification and underline all of the nouns. This approach, though flawed to a certain degree, is useful in taking a first pass through a set of requirements. On the other hand, it is easy to get carried away if you don't have an open mind.

    Technically speaking, a noun is a person, place, thing, or idea. Oftentimes, developers get hung up on those first three definitions, looking only at concrete object types such as business partners, plants, sales transactions, etc. However, this narrowed perspective limits you when modeling real world phenomena. After all, there are a lot of requirements that don’t fit into such neat little packages. For instance, what about algorithms? How do we classify those?

    Normally, algorithms are classified as a type of behavior and associated with one or more classes in the form of methods. However, in certain cases, this design approach violates a core OO design principle in that it reduces the cohesiveness of the class(es) in question. Here, the term “cohesiveness” refers to how strongly related and focused the various responsibilities of the class are. In other words, does the algorithm fit within the class? Or should it stand on its own?

    To answer these questions, let’s take a look at an example from the sales and distribution domain. Let’s imagine that you want to develop a sales contract application. As you would expect, one of the core classes in this application would be a "Contract" class. Of course, there are many different types of contracts. There are fixed price contracts, cost-plus contracts, basic ordering agreements, etc. At first, you might be inclined to model these differences by developing a class hierarchy like the one depicted in the UML class diagram below.


    Traditional Inheritance-based Solution

    Looking at the UML class diagram above, you can see some problems with this approach. In this simple diagram, we have only shown a subset of the classes that might be derived. In reality, the class hierarchy might end up extending several levels down depending upon how many different contract types the application needs to support. This kind of fine-grained specialization results in designs that are brittle and hard to maintain.

    Deriving the Strategy Pattern

    So, you might ask, how should we proceed differently? Well, before we investigate alternative designs, let’s look at a few common themes in the Gang of Four book (I’m paraphrasing here):

    1. Consider what should be variable in your design.
    2. Encapsulate the concept that varies.
    3. Favor composition over inheritance.

    Looking back at our sales contracts example, let’s see if these concepts can help guide us towards a better solution:

    1. First of all, we need to figure out what is variable in our design. Now, much of this depends upon our functional requirements. However, for the sake of this discussion, let’s assume that the only difference between these contract types is in how their overall value is calculated.
    2. The next step is to encapsulate the concept that varies. Here, we need to encapsulate the algorithm(s) that calculate a contract’s value.
    3. Finally, once we have properly encapsulated the value calculations, we can graft them back into our Contract class hierarchy using composition. This gives us lots of flexibility as there are many ways we may want to integrate these two concepts. Indeed, we can even swap solutions out at runtime in a plug-and-play fashion.

    When you put all this together, you arrive at the Strategy pattern. In the Gang of Four book, the definition of the Strategy pattern is as follows:

       Define a family of algorithms, encapsulate each one, and make them interchangeable.
       Strategy lets the algorithm vary independently from clients that use it.

    The UML class diagram below shows a revised design architecture for our contracts example using the Strategy pattern. As you can see, we have factored out the various contract-specific calculations into a separate class hierarchy underneath the abstract base class ContractValuation. Within this hierarchy, we can reuse existing algorithms to derive specialized value calculations such as the one required for a cost-plus incentive contract. In this way, the Contract class can vary independently from the various calculations (which may change more frequently). Also, by splitting the calculations out, we give ourselves more room to maneuver. This makes it possible to mix-and-match valuations at runtime, re-use calculation logic in other areas of our application, etc.

    Strategy Pattern-based Solution
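
    A skeletal ABAP Objects rendering of this design might look something like the following. The class and method names are illustrative, and the calculation logic itself is stubbed out:

    CLASS lcl_contract_valuation DEFINITION ABSTRACT.
      PUBLIC SECTION.
        TYPES: ty_amount TYPE p LENGTH 15 DECIMALS 2.
        METHODS: calculate_value ABSTRACT
                   RETURNING VALUE(re_value) TYPE ty_amount.
    ENDCLASS.

    CLASS lcl_fixed_price_valuation DEFINITION
          INHERITING FROM lcl_contract_valuation.
      PUBLIC SECTION.
        METHODS: calculate_value REDEFINITION.
    ENDCLASS.

    CLASS lcl_fixed_price_valuation IMPLEMENTATION.
      METHOD calculate_value.
        "Fixed-price-specific calculation goes here...
        re_value = 0.
      ENDMETHOD.
    ENDCLASS.

    CLASS lcl_contract DEFINITION.
      PUBLIC SECTION.
        METHODS: set_valuation
                   IMPORTING im_valuation
                     TYPE REF TO lcl_contract_valuation,
                 get_value
                   RETURNING VALUE(re_value)
                     TYPE lcl_contract_valuation=>ty_amount.
      PRIVATE SECTION.
        DATA: valuation TYPE REF TO lcl_contract_valuation.
    ENDCLASS.

    CLASS lcl_contract IMPLEMENTATION.
      METHOD set_valuation.
        valuation = im_valuation.
      ENDMETHOD.

      METHOD get_value.
        "Delegate to whichever valuation strategy is currently
        "plugged in:
        re_value = valuation->calculate_value( ).
      ENDMETHOD.
    ENDCLASS.

    With this arrangement, swapping valuation strategies at runtime is simply a matter of calling set_valuation() with an instance of a different subclass.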

    Summary

    Hopefully by now you have an appreciation for the Strategy pattern. Generally speaking, you will want to use the Strategy pattern when you come up with a design in which you have many related classes that differ only in their behavior. Other warning signs that might indicate a fit for the Strategy pattern include situations where you find yourself writing a lot of CASE statements or doing a lot of "copy-and-paste" coding. In an upcoming blog entry, I will show you how you can blend the Strategy pattern with another pattern to simplify the integration process.

    Throughout the course of this blog series, we have covered the basic cornerstones of object-oriented programming including:

    • Classes and Objects
    • Encapsulation
    • Inheritance
    • Polymorphism

    In this final installment of the series, we will expand upon the concepts of inheritance and polymorphism by introducing you to interfaces.

    Why do we need interfaces?

    In my previous blog, OO Programming with ABAP Objects: Polymorphism, we explored the notion of interface inheritance and showed you how to use it to implement polymorphic designs. Here, the basic premise is that since a subclass inherits the public interface of its superclass, you can invoke methods on an instance of a subclass in exactly the same way that you call them using an instance of the superclass. As we learned, you can leverage this functionality to develop generic methods that can work with an instance of the superclass or any of its subclasses. This is all fine and well when you're working with classes that fit neatly into a particular inheritance model. But what happens when you want to plug in functionality from a class that already has an inheritance relationship defined?

    In some programming languages, it is possible to define multiple inheritance relationships in which a given class can inherit from several parent classes. For instance, in the UML class diagram below, class D inherits from classes B and C. Though the concept of multiple inheritance may sound appealing on a conceptual level, it can cause some serious problems on an implementation level. Looking at the UML class diagram below, consider the inheritance of method "someMethod()" for class D. Here, let's assume that classes B and C have overridden the default implementation of this method from class A. Based on this, which implementation of method "someMethod()" does class D inherit: the one from class B or class C? In object-oriented programming parlance, this conundrum is referred to as the diamond problem.

    Diamond Problem

    Rather than try and tackle multiple inheritance issues such as the diamond problem, the designers of the ABAP Objects language elected to adopt a single inheritance model. This implies that a class can only inherit from a single parent class. This does not mean, however, that you cannot implement multiple inheritance in ABAP. Rather, you simply must go about defining it in a different way using interfaces.

    In order to explain the concept of interfaces, it is helpful to see an example of how they are used in code. Consider the LIF_COMPARABLE interface shown below. This interface defines a single method called "compare_to()" that returns an integer indicating whether or not an object is less than, greater than, or equal to another object of the same type. As you can see, we have only defined the method here; there is no implementation provided. Indeed, you are not even allowed to provide implementations within an interface definition.

    INTERFACE lif_comparable.
      METHODS:
        compare_to IMPORTING im_object TYPE REF TO object
                   RETURNING VALUE(re_result) TYPE i.
    ENDINTERFACE.

    Looking at the definition of the LIF_COMPARABLE interface above, you might be wondering why we would want to bother defining an interface. After all, they don't do anything particularly exciting. Still, much like classes, it does encapsulate a unique concept: comparability. Comparability is a feature that we would like to implement in a number of different classes. In fact, defining comparability in a common interface enables us to develop generic algorithms for sorting objects, etc. The question is how. Since many of the classes we want to implement this with likely have pre-existing inheritance relationships, we can't define the comparison functionality in a common superclass. However, we can model this functionality in an interface.

    Taking our comparability example a step further, let's imagine that we want to define a sort order for a set of customer objects of type LCL_CUSTOMER. For the purposes of our discussion, let's assume that class LCL_CUSTOMER inherits from a base business partner class called LCL_PARTNER. In order to assume the comparability feature, LCL_CUSTOMER also implements the LIF_COMPARABLE interface as shown below.

    CLASS lcl_customer DEFINITION
                       INHERITING FROM lcl_partner.
      PUBLIC SECTION.
        INTERFACES: lif_comparable.
        "Other declarations here...
    ENDCLASS.

    CLASS lcl_customer IMPLEMENTATION.
      METHOD lif_comparable~compare_to.
        "Implement comparison logic here...
      ENDMETHOD.
    ENDCLASS.

    Looking at the code excerpt above, you can see that we have split class LCL_CUSTOMER into two dimensions: a customer is a partner; but it is also comparable. This means that we can substitute instances of class LCL_CUSTOMER anywhere that the interface type LIF_COMPARABLE is used.
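
    For example, a client program that is only interested in comparability can work with the customer through an interface reference (assuming here that LCL_CUSTOMER can be instantiated without constructor parameters):

    DATA: lr_comparable TYPE REF TO lif_comparable,
          lr_customer   TYPE REF TO lcl_customer,
          lr_other      TYPE REF TO lcl_customer,
          lv_result     TYPE i.

    CREATE OBJECT lr_customer.
    CREATE OBJECT lr_other.

    "An LCL_CUSTOMER instance can stand in anywhere that the
    "LIF_COMPARABLE interface type is expected:
    lr_comparable = lr_customer.
    lv_result = lr_comparable->compare_to( lr_other ).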

    Now that you have a feel for how interfaces are used, let's attempt to define interfaces a little more formally. An interface is an abstraction that defines a model (or prototype) of a particular entity or concept. As you saw above, you don't define any kind of implementation for an interface; that is left up to implementing classes. Once a class implements an interface, it fulfills the requirements of an inheritance relationship with the interface. In this way, you can implement multiple inheritance using interfaces. Indeed, classes are free to implement as many interfaces as they wish.

    How can I use interfaces in my own designs?

    Hopefully by now you can see the power of interfaces, but you may be unsure of how to use them in your own designs. In these early stages of development, it is helpful to look around and see how others are making use of interfaces. In particular, we can look to see how SAP uses interfaces in various development objects. Some of the more common places where interfaces are used extensively by SAP include:

    • The iXML Library used to parse XML in ABAP.
    • The ABAP Object Services framework that enables object-relational persistence models.
    • The Web Dynpro for ABAP (WDA) context API.
    • Business Add-Ins (BAdIs)
    • The Internet Communication Framework (ICF) used to send and receive HTTP request messages.

    If you have ever worked with BAdIs before, then perhaps you may have interacted with interfaces without even realizing it. BAdIs are a type of customer enhancement in which customers can implement a "user exit" that supplements core behavior with custom functionality. The screenshot below shows the definition of a BAdI called "CTS_REQUEST_CHECK". This BAdI is used to validate a Change and Transport System (i.e. CTS) transport request at various important milestones. On the Interface tab, notice the interface name "IF_EX_CTS_REQUEST_CHECK". This interface defines the methods "check_before_creation()", etc. shown below. Whenever we create a BAdI implementation for "CTS_REQUEST_CHECK", the system will generate a class that implements this interface behind the scenes. At runtime, the CTS system will invoke these methods polymorphically to implement the desired user exit behavior.

    BAdI Definition for CTS_REQUEST_CHECK

    The BAdI example above provides us with some useful observations about interfaces:

    1. Interfaces are particularly well suited to modeling behavior. In other words, while classes are often representations of nouns, interfaces can often be used to supplement these core entities with different types of behavior, etc.
    2. Interfaces make it possible to implement polymorphism in a lot of different ways. For example, if a pre-existing class contained functionality to handle CTS request milestones, then we could implement the IF_EX_CTS_REQUEST_CHECK interface in that class rather than reinventing the wheel.
    3. Interfaces allow you to further separate an API from its underlying implementation.

    Based on these observations, we would offer the following rule of thumb:  when developing your OO designs, try and represent your core API using interfaces. This step helps ensure that your design remains flexible over time. An excellent example of this is the iXML Library used to process XML in ABAP. The only concrete class provided in the iXML Library is the CL_IXML factory class - everything else is interface-driven. This abstraction makes it possible for SAP to neatly swap XML parser implementations behind the scenes without anyone knowing the difference. Similarly, if your core API is represented using interfaces, you have much more flexibility at the implementation layer. Over time, you'll thank yourself for putting in the effort up front.
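
    For instance, a typical interaction with the iXML Library never touches a concrete parser class; after the initial factory call, everything is typed with interfaces:

    DATA: lo_ixml TYPE REF TO if_ixml,
          lo_doc  TYPE REF TO if_ixml_document.

    "CL_IXML is the lone concrete entry point; from here on out,
    "we only ever work with interface references:
    lo_ixml = cl_ixml=>create( ).
    lo_doc  = lo_ixml->create_document( ).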

    An excellent resource for coming up to speed with interfaces is the classic software engineering text Design Patterns: Elements of Reusable Object-Oriented Software (Addison-Wesley, 1994). This book allows you to enter the mind of object-oriented pioneers who have documented many useful OO design patterns in an easy-to-read catalog-based format. Digging into these designs, you'll see how interfaces can be used to implement certain types of flexibility that simply cannot be realized with basic inheritance. You'll especially appreciate the ABAP Objects implementation when you see how the authors struggle to implement certain functionality in C++ (which does not support interfaces).

    Closing Thoughts and Next Steps

    I hope that you have enjoyed this blog series as much as I have enjoyed writing it. Thanks to everyone for their kind and useful feedback; it is much appreciated. In many ways, this series barely scratches the surface of the possibilities of OO programming. If you are interested in learning more, might I offer a shameless plug for my book Object-Oriented Programming with ABAP Objects (SAP Press, 2009). Here, I cover these topics (and more) in much greater depth. Best of luck with your object-oriented designs!

    Object-Oriented Programming with ABAP Objects

    In my previous blog, OO Programming with ABAP Objects: Inheritance, we learned about inheritance relationships. As you may recall, the term inheritance is used to describe a specialization relationship between related classes in a given problem domain. Here, rather than reinvent the wheel, we define a new class in terms of a pre-existing one. The new class (or subclass) is said to inherit from the existing class (or parent class). Most of the time, when people talk about inheritance, they focus their attention on code reuse and the concept of implementation inheritance. Implementation inheritance is all about reducing redundant code by leveraging inherited components to implement new requirements rather than starting all over from scratch.

    One aspect of inheritance relationships that sometimes gets swept under the rug is the fact that subclasses also inherit the interface of their parent classes. This type of inheritance is described using the term interface inheritance. Interface inheritance makes it possible to use instances of classes in an inheritance hierarchy interchangeably – a concept that is referred to as polymorphism. In this blog, we will explore the idea of polymorphism and show you how to use it to develop highly flexible code.

    What is Polymorphism?

    As you may recall from my last blog, OO Programming with ABAP Objects: Inheritance, one of the litmus tests for identifying inheritance relationships is to ask yourself whether or not a prospective subclass fits into an “Is-A” relationship with a given parent class. For example, a circle is a type of shape, so defining a “Circle” class in terms of an abstract “Shape” class makes sense. Looking beyond the obvious benefits of reusing any implementation provided in the “Shape” class, let’s think about what this relationship means from an interface perspective. Since the “Circle” class inherits all of the public attributes/methods of the “Shape” class, we can interface with instances of this class in the exact same way that we interface with instances of the “Shape” class. In other words, if the “Shape” class defines a public method called “draw()”, then so does the “Circle” class. Therefore, the code required to call this method on instances of either class is exactly the same even if the underlying implementation is very different.

    The term polymorphism literally means “many forms”. From an object-oriented perspective, the term is used to describe a language feature that allows you to use instances of classes belonging to the same inheritance hierarchy interchangeably. This idea is perhaps best explained with an example. Getting back to our “Shape” discussion, let’s think about how we might design a shape drawing program. One possible implementation of the shape drawing program would be to create a class that defines methods like “drawCircle()”, “drawSquare()”, etc. Another approach would be to define a generic method called “draw()” that uses conditional statements to branch the logic out to modules that are used to draw a circle, square, etc. In either case, there is work involved whenever a new shape is added to the mix. Ideally, we would like to decouple the drawing program from our shape hierarchy so that the two can vary independently. We can achieve this kind of design using polymorphism.

    In a polymorphic design, we can create a generic method called “draw()” in our drawing program that receives an instance of the “Shape” class as a parameter. Since subclasses of “Shape” inherit its interface, we can pass any kind of shape to the “draw()” method and it can turn around and use that shape instance’s “draw()” method to draw the shape on the screen. In this way, the drawing program is completely ignorant of the type of shape it is handling; it simply delegates the drawing task to the shape instance. This is as it should be since the Shape class already knows how to draw itself. Furthermore, as new shapes are introduced into the mix, no changes would be required to the drawing program so long as these new shapes inherit from the abstract “Shape” class.
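
    Sketched out in ABAP Objects terms, the drawing program boils down to a single generic method. Here, we assume an abstract ZCL_SHAPE class with a public draw() method (the same class used in the casting examples below):

    CLASS lcl_drawing_program DEFINITION.
      PUBLIC SECTION.
        METHODS: draw
                   IMPORTING im_shape TYPE REF TO zcl_shape.
    ENDCLASS.

    CLASS lcl_drawing_program IMPLEMENTATION.
      METHOD draw.
        "The program neither knows nor cares which kind of shape
        "it received; dynamic binding selects the right
        "implementation at runtime:
        im_shape->draw( ).
      ENDMETHOD.
    ENDCLASS.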

    This generic approach to programming is often described using the term design by interface. The basic concept here is to adopt a component-based architecture where each component clearly defines the services (i.e. interface) they provide. These interfaces make it easy for components to be weaved together into larger assemblies. Here, notice that we haven’t said anything about how these components are implemented. As long as the components implement the services described in their interface – it really doesn’t matter how they are implemented. From an object-oriented perspective, this means that we can swap out a given object for another so long as they share the same interface. Of course, in order to do so, we need to be able to perform type casts and dynamic method calls.

    Type Casting and Dynamic Binding

    Most of the time, whenever we talk about the type of an object reference variable in ABAP Objects, we are talking about its static type. The static type of an object reference variable is the class type used to define the reference variable:

    DATA: lr_oref TYPE REF TO zcl_shape.

    An object reference variable also has a dynamic type associated with it. The dynamic type of an object reference variable is the type of the current object instance that it refers to. Normally, the static and dynamic type of an object reference variable will be the same. However, it is technically possible for an object reference variable to point to an object that is not an instance of the class type used to define the object reference. For example, notice how we are assigning an instance of the ZCL_CIRCLE subclass to the lr_shape object reference variable (whose static type is the parent class ZCL_SHAPE) in the code excerpt below.

    DATA: lr_shape  TYPE REF TO zcl_shape,
          lr_circle TYPE REF TO zcl_circle.

    CREATE OBJECT lr_circle.
    lr_shape = lr_circle.

    This kind of assignment is not possible without a type cast. Of course, you can’t perform a type cast using just any class; the source and target object reference variables must be compatible (e.g., their static types must belong to the same inheritance hierarchy). In the example above, once the assignment is completed, the dynamic type of the lr_shape reference variable will be the ZCL_CIRCLE class. Therefore, at runtime, when a method call such as “lr_shape->draw( )” is performed, the ABAP runtime environment will use the dynamic type information to bind the method call with the implementation provided in the ZCL_CIRCLE class.

    The type cast above is classified as a narrowing cast (or upcast) as we are narrowing the access scope of the referenced ZCL_CIRCLE object to the components defined in the ZCL_SHAPE superclass. It is also possible to perform a widening cast (or downcast) like this:

    DATA: lr_shape  TYPE REF TO zcl_shape,
          lr_circle TYPE REF TO zcl_circle.
    CREATE OBJECT lr_shape TYPE zcl_circle.
    lr_circle ?= lr_shape.

    In this case, we are using the TYPE addition to the CREATE OBJECT statement to create an instance of class ZCL_CIRCLE and assign its reference to the lr_shape object reference variable. Then, we use the casting operator (“?=”) to perform a widening cast when we assign the lr_shape object reference to lr_circle. The casting operator is something of a precaution in many respects as widening casts can be dangerous. For instance, in this contrived example, we know that we are assigning an instance of ZCL_CIRCLE to an object reference variable of that type. On the other hand, if the source object reference were a method parameter, we can’t be sure that this is the case. After all, someone could pass in a square to the method and cause all kinds of problems since class ZCL_CIRCLE may well define circle-specific methods that cannot be executed against an instance of class ZCL_SQUARE.
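
    For this reason, it is usually wise to wrap widening casts in a TRY/CATCH block so that an incompatible instance doesn't short-circuit the program. In the sketch below, ZCL_SQUARE is assumed to be another subclass of ZCL_SHAPE:

    DATA: lr_shape  TYPE REF TO zcl_shape,
          lr_circle TYPE REF TO zcl_circle.

    CREATE OBJECT lr_shape TYPE zcl_square.

    TRY.
        "This widening cast fails at runtime since the referenced
        "object is a square, not a circle:
        lr_circle ?= lr_shape.
      CATCH cx_sy_move_cast_error.
        "React gracefully to the incompatible assignment here...
    ENDTRY.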

    Implementing Polymorphism in ABAP

    Now that you have a feel for how type casting works in ABAP Objects, let’s see how to use it to implement a polymorphic design in ABAP. The example code below defines a simple report called ZPOLYTEST that defines an abstract base class called LCL_ANIMAL and two concrete subclasses: LCL_CAT and LCL_DOG. These classes are used to implement a model of the old “See-n-Say” toys manufactured by Mattel, Inc. This model is realized in the form of a local class called LCL_SEE_AND_SAY. If you have never played with a See-n-Say before, its interface is very simple. In the center of the toy is a wheel with pictures of various animals. When a child positions the lever next to a given animal, the toy will produce the sound of that animal. In order to make the See-n-Say generic, we define the interface of the “play()” method to receive an instance of class LCL_ANIMAL. However, in the START-OF-SELECTION event, you’ll notice that we create instances of LCL_CAT and LCL_DOG and pass them to the See-n-Say. Here, we didn’t have to perform an explicit type cast as narrowing type casts are performed implicitly in method calls. Furthermore, since the LCL_CAT and LCL_DOG classes inherit the methods “get_type()” and “speak()” from class LCL_ANIMAL, we can use instances of them in the LCL_SEE_AND_SAY generically.

    REPORT zpolytest.

    CLASS lcl_animal DEFINITION ABSTRACT.
      PUBLIC SECTION.
        METHODS: get_type ABSTRACT,
                 speak ABSTRACT.
    ENDCLASS.

    CLASS lcl_cat DEFINITION
                  INHERITING FROM lcl_animal.
      PUBLIC SECTION.
        METHODS: get_type REDEFINITION,
                 speak REDEFINITION.
    ENDCLASS.

    CLASS lcl_cat IMPLEMENTATION.
      METHOD get_type.
        WRITE: 'Cat'.
      ENDMETHOD.

      METHOD speak.
        WRITE: 'Meow'.
      ENDMETHOD.
    ENDCLASS.

    CLASS lcl_dog DEFINITION
                  INHERITING FROM lcl_animal.
      PUBLIC SECTION.
        METHODS: get_type REDEFINITION,
                 speak REDEFINITION.
    ENDCLASS.

    CLASS lcl_dog IMPLEMENTATION.
      METHOD get_type.
        WRITE: 'Dog'.
      ENDMETHOD.

      METHOD speak.
        WRITE: 'Bark'.
      ENDMETHOD.
    ENDCLASS.

    CLASS lcl_see_and_say DEFINITION.
      PUBLIC SECTION.
        CLASS-METHODS:
          play IMPORTING im_animal
                    TYPE REF TO lcl_animal.
    ENDCLASS.

    CLASS lcl_see_and_say IMPLEMENTATION.
      METHOD play.
        WRITE: 'The'.
        CALL METHOD im_animal->get_type.
        WRITE: 'says'.
        CALL METHOD im_animal->speak.
      ENDMETHOD.
    ENDCLASS.

    START-OF-SELECTION.
      DATA: lr_cat TYPE REF TO lcl_cat,
            lr_dog TYPE REF TO lcl_dog.

      CREATE OBJECT lr_cat.
      CREATE OBJECT lr_dog.

      CALL METHOD lcl_see_and_say=>play
        EXPORTING
          im_animal = lr_cat.
      NEW-LINE.
      CALL METHOD lcl_see_and_say=>play
        EXPORTING
          im_animal = lr_dog.

    As mentioned earlier, one of the primary advantages of using polymorphism in a design like this is that we can easily plug in additional animals without having to change anything in class LCL_SEE_AND_SAY. For instance, if we want to add a pig to the See-n-Say, we just create a class LCL_PIG that inherits from LCL_ANIMAL and then we can start passing instances of this class to the “play()” method of class LCL_SEE_AND_SAY.
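
    For example, a minimal LCL_PIG class might look like this:

    CLASS lcl_pig DEFINITION
                  INHERITING FROM lcl_animal.
      PUBLIC SECTION.
        METHODS: get_type REDEFINITION,
                 speak REDEFINITION.
    ENDCLASS.

    CLASS lcl_pig IMPLEMENTATION.
      METHOD get_type.
        WRITE: 'Pig'.
      ENDMETHOD.

      METHOD speak.
        WRITE: 'Oink'.
      ENDMETHOD.
    ENDCLASS.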

    Conclusions and Next Steps

    Hopefully by now you are starting to see the benefit of implementing object-oriented designs. In many respects, polymorphism represents one of the major payoffs for investing the time to create an object-oriented design. However, as you have seen, polymorphism doesn't happen by accident. In order to get there, you need to pay careful attention to the definition of a class' public interface, make good use of encapsulation techniques, and model your class relationships correctly.

    If the concept of polymorphism seems familiar, it could be that you’ve seen examples of this in other areas of SAP. Perhaps the most obvious example here would be with “Business Add-Ins” (or BAdIs). In my next blog, we will look at how BAdIs use interfaces to implement polymorphic designs. Interfaces are an important part of any object-oriented developer’s toolbag; making it possible to extend a class into different dimensions.

    In my OO Programming with ABAP Objects: Encapsulation blog entry, we continued our discussion on OOP by showing how visibility sections could be used to hide implementation details of a class. If you are new to OOP, you might be wondering why you would want to go to such lengths to encapsulate your code. After all, don't we want our software to be open these days? However, the use of implementation hiding techniques does not imply that software must be closed off completely. Rather, we just want to establish some healthy boundaries so that we can give the software some structure. This structure helps the software to gracefully adapt to inevitable changes within a particular area without affecting the overall integrity of the software as a whole.

    In this blog entry, I will introduce another core concept of OOP called inheritance. Inheritance describes a relationship between related classes within a particular problem domain. Here, you will see that the use of good encapsulation techniques enables you to expand and enhance the functionality of your applications without having to modify pre-existing classes. In my next blog entry, we will see how these relationships can be exploited using polymorphism.

    Generalization and Specialization

    During the Object-Oriented Analysis & Design (or OOAD) process, we evaluate real world phenomena and try to simulate the problem domain using classes of objects. Frequently, this classification process goes through several iterations before we get it right. For instance, our first pass through the requirements might generate an OO design with some very basic classes. As we dig deeper, we focus in on determining the roles and responsibilities of each class. Along the way, our classes evolve to become more specialized.

    In an ideal world, this analysis process would take place in a vacuum, allowing us to completely refine our object model before we implement it. Unfortunately, most of us do not have this luxury as we are subject to tight deadlines and limited budgets. Typically, we must draw a line in the sand and build the best software we can given the constraints laid before us. In the past, such hasty development has made it very difficult to adapt the software to implement new functionality, etc. Here, developers would have to decide whether or not they felt like an enhancement could be implemented without jeopardizing the existing production code. If the answer to that question was no, they were forced to cut their losses and try to salvage as much of the code as they could using the "copy-and-paste" approach to building new development objects. Both of these approaches are fraught with risks. Early object-oriented researchers recognized that there had to be a better way of extending software.

    When you think about it, an enhancement extends or specializes a portion of the system in some way. In an OO system, this implies that we want to enhance or extend certain functionality within one or more classes. Here, we don't really want to modify the existing class(es). Rather, we just want to expand them to handle more specialized requirements, etc. One way to implement this kind of specialization in object-oriented languages is through inheritance.

    Inheritance defines a relationship between two classes; the original class is called the superclass (or parent class) and the extended class is called the subclass (or child class). In an inheritance relationship, a subclass inherits the components from its superclass (e.g. attributes, methods, etc.). Subclasses can then build on these existing components to implement additional functionality. When thinking about inheritance, it is important to realize that the relationship is not transient in nature. In other words, a subclass is not just a copy or clone of its superclass. For instance, if you change the functionality in a method of a superclass, that change is reflected in its subclasses (except in the case of overridden methods - more on these in a moment). However, the converse is not true; changes to a subclass are not reflected in its superclass.

    In OO parlance, an inheritance relationship is known as an "Is-A" relationship. To explain this relationship, let's consider an example where we have a superclass called "Animal" and a subclass called "Cat". From a code perspective, the "Cat" class inherits the components of the "Animal" superclass. Therefore, as a client looking to use instances of these classes, I see no difference between them. In other words, if the "Animal" superclass defines a method called "eat( )", I can call that same method on an instance of class "Cat". Thus, class "Cat" is an "Animal". This relationship leads to some interesting dynamic programming capabilities that we'll get into in my next blog.

    Defining Inheritance Relationships in ABAP Objects

    At this point, you're probably ready to dispense with all the theory and get into some live code examples. In the code sample below, you'll see that it is very easy to define an inheritance relationship between two classes.

    CLASS lcl_parent DEFINITION.
      PUBLIC SECTION.
        METHODS: a,
                 b.

      PRIVATE SECTION.
        DATA: c TYPE i.
    ENDCLASS.

    CLASS lcl_child DEFINITION
          INHERITING FROM lcl_parent.
      PUBLIC SECTION.
        METHODS: a REDEFINITION,
                 d.

      PRIVATE SECTION.
        DATA: e TYPE string.
    ENDCLASS.

    As you can see in the example above, you can define an inheritance relationship in a class using the INHERITING FROM addition of the CLASS DEFINITION statement. In the example, class "lcl_child" is a subclass of class "lcl_parent". Therefore, "lcl_child" inherits all of the components defined in class "lcl_parent". However, not all of these components are directly accessible in class "lcl_child". Any component defined in the PRIVATE SECTION of "lcl_parent" cannot be accessed in "lcl_child". However, like any other client of class "lcl_parent", "lcl_child" can access these private components through defined "getter" methods, etc. Sometimes, you may want to share access to a component of a parent class with its subclasses without opening up access completely. In this case, you can define components in the PROTECTED SECTION. This visibility section allows you to define components that can be accessed in a given class and any of its subclasses only.

    Once the inheritance relationship is defined, you can access a subclass' inherited public components in the same way you would access them in the parent class. Another thing you might notice in the definition of class "lcl_child" is the REDEFINITION addition added to method "a()". The REDEFINITION addition implies that you want to redefine the way that method "a()" is implemented in the "lcl_child" subclass. Keep in mind that these redefinitions only reshape the code in the IMPLEMENTATION part of the class definition. In other words, you cannot change the interface of the method, etc. - otherwise, you would violate the "is-a" relationship principle. Inside the redefinition of method "a()" in class "lcl_child", you can reuse the implementation of the superclass using the "super" pseudoreference variable like this: super->a( ). You can think of the super pseudoreference as a sort of built-in reference variable to an instance of the subclass' superclass.
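
    Putting this into practice, the implementation part of class "lcl_child" might look something like this:

    CLASS lcl_child IMPLEMENTATION.
      METHOD a.
        "Reuse the inherited behavior first...
        super->a( ).
        "...and then layer on the child-specific logic here.
      ENDMETHOD.

      METHOD d.
        "Implementation of the new method d() goes here...
      ENDMETHOD.
    ENDCLASS.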

    Some Final Thoughts

    Inheritance relationships can be very powerful, allowing you to reuse software components and improve productivity. However, it is important not to get carried away with trying to define complex inheritance hierarchies, etc. Indeed, many top OO researchers advise against the use of inheritance in certain design contexts. The bottom line is that there are places where inheritance works, and places it doesn't.

    Another important idea to consider is that inheritance is about more than reuse - it's about building relationships. One nice thing about these relationships is that you can define inheritance hierarchies where you have a family of classes that are interchangeable. You can then design your programs generically using plug-and-play techniques - something we'll learn about in my next blog.

    In my OO Programming with ABAP Objects: Classes and Objects blog entry, I introduced the concept of classes and objects, showing you how to create and use both in ABAP Objects. If this is your first exposure to OO programming, you might be wondering what's so great about it. After all, on the surface, a class looks a lot like a function group or subroutine pool. In this blog, we will dig deeper to see where classes differentiate themselves from procedural concepts.

    What's Wrong with the Procedural Approach?

    One common misconception about OO programming is that it is different from procedural programming in every way - not true. There are many important lessons to be taken from the procedural approach. However, there are certain limitations of this approach that influenced early researchers in their design of the OO paradigm. These limitations are best described with an example. Let's imagine that you want to build a code library to make it easier to work with dates. To do so, you create a function group called ZDATE_API that contains various function modules to manipulate and display dates.

    From a data perspective, you have a couple of options. Function groups allow you to define group-specific data objects that can be utilized within function modules (similar to the use of attributes in methods). However, in practice, such data objects can be difficult to use. This is because it is not possible to create "instances" of function groups. For example, in the ZDATE_API function group, I might define a global data object of type SCALS_DATE to keep track of the date being manipulated by the function modules. However, if I need to keep track of multiple dates in my program (e.g. created on date, changed on date, document date, etc.), I need to build an internal table to keep track of each date "instance". I also need to keep track of the key to this table externally - otherwise I have no way of identifying a particular date instance. This limitation causes most developers to keep track of their data objects outside of the function group. If you think back to the last time you tried to call a BAPI and all the data objects you had to define beforehand, you'll appreciate what I mean. We'll explore some of the implications of this approach to data in a moment.

    Assuming that we elected to keep track of data separately, let's look at what a function module might look like in our ZDATE_API function group. For the purposes of this discussion, we'll keep it simple and just look at a function module used to set the "day" value of the date:

    FUNCTION z_date_set_day.
      * Local Interface: IMPORTING VALUE(iv_day) TYPE i
      *                  CHANGING cs_date TYPE scals_date
      *                  EXCEPTIONS invalid_date
      DATA: lv_month_end TYPE i. "Last Day of Month

      CASE cs_date-month.
        WHEN 1.
          lv_month_end = 31.
        WHEN 2.
          ...
      ENDCASE.

      IF iv_day LT 1 OR iv_day GT lv_month_end.
        RAISE invalid_date.
      ELSE.
        cs_date-day = iv_day.
      ENDIF.
    ENDFUNCTION.

    This contrived example simply ensures that we initialize the "day" value of a date to a proper value. Clearly, there are probably better ways to implement something like this, but the point is that we have defined some business rules inside of a function module that is part of an API designed to simplify the way that we work with dates.

    Now that we have our function group in place, let's imagine that you are asked to troubleshoot a program that is using your function group to display dates in various formats but it is outputting them incorrectly (e.g., 02/31/2009). At first, you might think that there is a problem with Z_DATE_SET_DAY as the day value is incorrect. However, after further review, you discover that the invalid assignment was made in the program itself. After all, there's nothing stopping a program from changing the day value of a local variable directly. To that program, the day component of the SCALS_DATE structure is nothing more than a 2 digit numeric character with a valid range of 00-99 - the semantic meaning of the day value is defined within the confines of our ZDATE_API function group.

    Beyond the issue of data corruption, think about how clumsy the typical function group is. The separation of data and behavior in the ZDATE_API function group limits the usefulness of the abstraction, making the use of the API awkward as we have to pass the SCALS_DATE object back and forth between function calls. This becomes something more than a nuisance when the library expands. For example, think of the impact of expanding this API to support timestamps. A better approach would be to hide this data such that callers don't have to worry about it.

    Hiding the Implementation

    In programming terms, a function group like ZDATE_API is an abstract data type (or ADT). As the term suggests, an ADT abstracts a particular concept into an easy-to-use data type. Ideally, the creation of a date API would imply that we no longer need to worry about how dates work. Rather, we can leverage the hard work (and testing) that went into the creation of the date API and focus in on other important tasks. However, this is difficult to do if the ADT is not well encapsulated. The term encapsulate implies that we're combining something into an enclosure (or capsule). In the case of an ADT, we're grouping data and behavior together. Moreover, encapsulation also suggests that we are protecting these resources from external tampering. Initially, most programmers balk at this, preferring to have complete control over all parts of the code. The problem with this is that taking control of any code also implies that you assume some of the risk for ensuring that it operates correctly. In our date example, look at how problematic it was to allow external programs to modify the date structure. Ideally, we would prefer that any modifications to this structure pass through business rule checkpoints to make sure that data is not corrupted.

    Encapsulation is a good engineering practice used in many disciplines. For instance, you don't have to know how a car works in order to drive it. Of course, it does help if you have power steering, automatic transmission, etc. These features represent the "interface" that users utilize to interact with the car. ADTs also have an interface (namely, the signatures of their function modules, methods, etc.). Good interface design should make it easy to use an API without diminishing any of its capabilities. Another advantage of this engineering approach is that parts become more interchangeable. For example, imagine that a car manufacturer decides to redesign their fuel injector to improve performance. As long as the new fuel injector has the same interface (e.g. the same "hookup"), the manufacturer can swap the parts and nobody's the wiser. In my next two blogs, I'll show how inheritance and polymorphism allow you to do some powerful things with your OO programs.

    Hopefully by now you agree that it is a good idea to group data and behavior together in a class. This, by itself, does not mean that a class is encapsulated. Remember, to achieve this, we must also place a protective capsule around the resources. OO languages such as ABAP Objects allow you to define component visibilities using access specifiers. The following class shows how to define these component visibilities:

    CLASS lcl_visible DEFINITION.
      PUBLIC SECTION.
        DATA: x TYPE i. "Visible everywhere
      PROTECTED SECTION.
        DATA: y TYPE i. "Visible within the class and its subclasses
      PRIVATE SECTION.
        DATA: z TYPE i. "Visible only within the class itself
    ENDCLASS.

    As you can see, the components of class lcl_visible are partitioned into three distinct sections: the PUBLIC SECTION, the PROTECTED SECTION, and the PRIVATE SECTION. Components defined in the PUBLIC SECTION are visible everywhere. Components defined in the PRIVATE SECTION are only visible within the class itself. Thus, the only place that you can access the attribute "z" would be inside of an instance method of class lcl_visible. We'll talk about the PROTECTED SECTION when we talk about inheritance.
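    As a quick illustration (a sketch assuming a client program that holds a reference to lcl_visible), the compiler enforces these visibilities at the point of access:

    DATA: lr_visible TYPE REF TO lcl_visible.

    CREATE OBJECT lr_visible.
    lr_visible->x = 1.    "OK: x is public
    "lr_visible->y = 2.   "Syntax error: y is protected
    "lr_visible->z = 3.   "Syntax error: z is private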

    In this way, we can reproduce our date API in a class like this:

    CLASS lcl_date DEFINITION.
      PUBLIC SECTION.
        METHODS: set_month IMPORTING im_month TYPE i
                           EXCEPTIONS invalid_date,
                 set_day   IMPORTING im_day TYPE i
                           EXCEPTIONS invalid_date,
                 set_year  IMPORTING im_year TYPE i
                           EXCEPTIONS invalid_date.
      PRIVATE SECTION.
        DATA: date TYPE scals_date.
    ENDCLASS.

    CLASS lcl_date IMPLEMENTATION.
      METHOD set_month.
        "Implementation of method set_month goes here...
      ENDMETHOD.

      METHOD set_day.
        DATA: lv_month_end TYPE i. "Last day of the month

        "Determine the number of days in the current month
        CASE date-month.
          WHEN 1 OR 3 OR 5 OR 7 OR 8 OR 10 OR 12.
            lv_month_end = 31.
          WHEN 4 OR 6 OR 9 OR 11.
            lv_month_end = 30.
          WHEN 2.
            lv_month_end = 28. "Simplified; leap years are ignored here
        ENDCASE.

        IF im_day LT 1 OR im_day GT lv_month_end.
          RAISE invalid_date.
        ELSE.
          date-day = im_day.
        ENDIF.
      ENDMETHOD.

      METHOD set_year.
        "Implementation of method set_year goes here...
      ENDMETHOD.
    ENDCLASS.

    Notice that the "date" attribute is defined in the PRIVATE SECTION of the class. Now, any access to the "date" attribute must go through the public instance methods. This ensures that an external program cannot accidentally (or purposefully) modify the value incorrectly. It also makes the date API easier to use, as client programs now only need to define dates like this:

    DATA: lr_date TYPE REF TO lcl_date.

    With a simple API like this, it's not such a big deal. However, consider how you might implement an API for working with SAP Business Partners, etc. Having the objects keep track of all that data simplifies API use considerably.
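    To round out the picture, here is a rough sketch of how a client program might use the class (the literal values are illustrative). Note how the invalid 02/31 date from our earlier troubleshooting scenario is now caught at the checkpoint:

    DATA: lr_date TYPE REF TO lcl_date.

    CREATE OBJECT lr_date.

    CALL METHOD lr_date->set_month
      EXPORTING
        im_month     = 2
      EXCEPTIONS
        invalid_date = 1.

    "February 31st is rejected by the business rules in SET_DAY
    CALL METHOD lr_date->set_day
      EXPORTING
        im_day       = 31
      EXCEPTIONS
        invalid_date = 1.
    IF sy-subrc NE 0.
      "Handle the invalid date...
    ENDIF.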

    Reflections

    As you have seen in this blog, encapsulation is a good engineering practice that you can use to develop quality reusable class libraries. I emphasize reusable here to demonstrate an important point. One of the most common reasons why a library is not reused is that it has too many dependencies. Developing classes using implementation hiding techniques forces you into a mindset whereby classes begin to take on a certain amount of autonomy. In other words, you start to ask yourself questions like "What data does an object of my class need in order to do its job?". Once you have figured this out, you design your interface in such a way as to provide the class with only what it needs, reducing unnecessary dependencies along the way. The fewer dependencies a class has, the less likely things are to go wrong. And, as we will see in my next blog, it also allows us to expand our libraries in interesting ways without jeopardizing code that has been proven to work.
