Not so long ago, I began working on a project which was very set in its "ASAP" ways of capturing documentation. This implies that new requirements trickle their way down in waterfall fashion from business analysts to developers and ultimately to the poor devils tasked with maintaining the system after the project is concluded. After having participated in all phases of this process over the course of about 12 years, I really got to thinking about whether or not the status quo is working. In other words, what sort of value are we getting by compiling all these documents? What follows are a series of observations I have made concerning these issues during my time working on SAP projects.
Observation #1: Throwing Requirements Over the Wall Doesn't Work
In Computer Science/Software Engineering courses, the software development lifecycle (SDLC) is portrayed as an iterative cycle in which developers play an active role in all phases of development (see below). However, it has been my experience that most SAP projects tend to wait to bring developers in until the design phase, if not later. Here, the thought is that business analysts will capture requirements in functional specifications, throw them over the wall to developers, and everything will proceed like clockwork.
There are several problems that I see with this approach:
- A lack of up-front dialog eliminates the possibility for both sides to find common ground in terms of vocabulary, functionality, and so on. This, to me, explains why so many developers struggle to comprehend complex functional specifications: the business analyst doesn't know how to write for his/her audience.
- Developers perform best whenever they understand the business domain they're working in. While it's certainly possible to code certain requirements without such knowledge, the quality of the product will almost always suffer. This is particularly noticeable in the long term as wrong assumptions lead to brittle code that is difficult to maintain and/or enhance.
- Without a developer presence, business analysts tend to make questionable technical assumptions which can trickle down into the implementation phase if there is not strong development leadership in place to catch them. And here, it is often too late. One extreme case of this was a project in which all of the development objects were scoped out and estimated before the design phase got started in earnest. Here, the analysts had worked in silos to identify all of the interfaces for the project, coming up with a comprehensive list which contained many duplicates. Alas, by the time the interface team got their hands on them, any opportunities for consolidation/reuse/SOA had long since sailed away.
- Finally, excluding the developers eliminates opportunities for innovation that are beyond the technical grasp of business analysts.
The bottom line here is that developers should have some level of participation during the requirements analysis phase. Maybe they don't attend every meeting, but at least some presence makes a big difference.
Observation #2: Organizing Documentation by RICEF Category is a Mistake
A common blueprinting exercise for many SAP projects is to identify all of the RICEF development objects and parcel out the work from there. While this may work OK from a project-management perspective, I find that it leads to considerable fragmentation when it comes to documentation gathering. This is particularly evident for "enhancement" requirements which may span multiple development objects (e.g. add a new field to a database table, extend an IDoc to bring it in, add the field to a WDA screen, etc.). Frequently, I see such enhancement requirements documented within a series of enhancement specs. In the long run, this raises the following question: where do developers go to find documentation concerning a particular function within the system?
To put this into perspective, allow me to paint a picture of a defect I was tasked with solving some years ago. The defect in question was related to a series of enhancements applied to the PR-->PO conversion process. In this particular project, there was no master document which described this process end to end. Instead, what little documentation existed about this process was scattered across various enhancement specs (some of which were not PR-PO related at first glance). Not finding what I needed in the documentation, I decided to go into the BAdI code and see if I could make sense of things from there. Ultimately, what I discovered was that the defect was introduced by one enhancement that was ignorant of another. Literally, the right hand had no idea what the left hand was doing.
While this is a rather extreme case, there are several take-aways to consider from all this:
- Documents should be assigned to an organizational structure based on meaningful elements such as business unit, business objects, etc. as opposed to fleeting project constructs which will eventually become obsolete.
- If documentation is not well organized, then it is likely that other documents will come along and render certain aspects of a document obsolete. For instance, in the PR-->PO example above, there should have been one comprehensive set of documentation that was consistently updated over the lifespan of the project.
- In order for documentation to have value, it must be accessible to the people who need to use it. If developers can never find what they're looking for, then all those documents will do nothing more than clog up some file share out on the network, never to be read again.
As is the case with any writing exercise, it is very important when writing documentation to know who your audience is. What is it that they need to know? How do we ensure that they can find what they're looking for? How do we make sure it is up-to-date and reliable? These are the questions we must be asking ourselves.
Observation #3: Fluff Makes Tech Specs Worse
I frequently encounter a lot of tech specs which span 10-20+ pages and yet don't really say anything useful. Basically, they just contain lots of boilerplate sections, regurgitated content from the functional spec, and so on. In general, I find the following types of content to be pointless when it comes to capturing technical documentation:
- Audit trails which contain information about CTS requests. So long as the pertinent development objects are identified in the tech spec, any competent developer should be able to track down transport import history in the system. To force developers to go back and annotate this information after the fact is a waste of time.
- Pseudo code that isn't pseudo code. Here, I'm talking about those sections of the document where developers literally copy-and-paste ABAP code into the document in order to satisfy some project-level requirement that states that every section of the tech spec must be filled in. Pseudo code is useful for describing complex algorithms, etc. By definition, it uses a simplified syntax which makes it much easier for technical and non-technical readers alike to follow. If readers want to see how the code works, they should read the code. After all, it is the one artifact that is most likely to be kept up-to-date.
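To make the distinction concrete, here is a sketch of what genuine pseudo code might look like for a hypothetical duplicate-invoice check (the process and the steps are purely illustrative, not taken from any real spec):

```text
FOR EACH incoming invoice:
    look up prior invoices for the same vendor
    IF a prior invoice exists with the same reference number AND amount:
        flag the incoming invoice as a potential duplicate
        route it to the AP clerk's work list
    ELSE:
        post the invoice normally
```

Notice that this says nothing about internal tables, field symbols, or SELECT statements. Any reader, technical or not, can follow the algorithm, which is the whole point.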
- Flow charts which go on for pages at a time. Such charts should be broken up into smaller pieces. Or, better yet, refactored into a better modeling notation such as UML or FMC.
- Extensive narrative text. As the saying goes, a picture's worth a thousand words. I find that I can convey most of the meaning in a tech spec with a handful of UML diagrams: a package diagram to identify the key development objects, a class diagram to describe their relationships, and sequence/activity diagrams to illustrate behaviors. Naturally, some narrative text is called for in order to explain key aspects of the diagrams, etc. On the other hand, an extensive play-by-play narrative of the code wastes everyone's time. If developers want to dig down to that level of detail, they should go straight to the source.
- Regurgitated and redundant reference materials. For example, I recently encountered an interface tech spec which contained pages and pages of reference material providing an executive summary of what SAP NetWeaver PI is and how it works. To me, this goes back to understanding who your audience is. The assumption going in for a PI interface tech spec has got to be that the reader has some clue as to what PI is and how to use it. If they don't, then your tech spec is not the right place to acquire such knowledge. Any time spent compiling this information in each and every tech spec is a waste. This also goes for tech specs which copy-and-paste the functional spec before proceeding. Here, a simple hyperlink should suffice. Plus, it makes sure that the reader always finds the latest and greatest information straight from the source.
- Tutorial-like instructions for routine development tasks. Once again, the point of a tech spec is to provide a meaningful artifact for maintenance staff to consult from time to time while supporting the system long term. Here, our objective is clear: we need to provide these readers with information to understand why the code was written the way it was. If readers don't know how to open up Transaction SE80, then they have much bigger problems on their hands. Some tutorial-like instructions are warranted for particularly complex tasks, but we should not be providing click-by-click instructions for routine tasks such as creating a function module, etc.
Sometimes, less is more, and more is less. This goes beyond simple laziness; it's about effective writing. A concise tech spec that is to the point will be much easier to consume than a bloated document containing redundant information that the reader has to sift through. Plus, from a project perspective, you have to define realistic expectations for developers tasked with maintaining these monstrosities. Some of these "fluff" sections are excellent candidates for getting out of sync during the course of a project.
Observation #4: Unit Test Documents are Fine, but Automated Unit Tests are Better
The great thing about unit tests in general is that they force developers to come up with test cases to ensure that their development objects work like they're supposed to. For many projects, this exercise culminates in a unit test document which is then used as a guide for manually testing code changes throughout the object's lifecycle. While this is a worthy goal, its effectiveness is limited by the due diligence of the developers who maintain them. Here, it is easy to fake results and sneak shoddy objects past this QC gate so that testing becomes the problem of an unsuspecting business analyst.
To me, if you're going to go to the trouble of creating these artifacts, you might as well go the extra mile and create automated unit tests in ABAP Unit, etc. That way, unit tests become more than just a piece of paper; they become a tangible construct that can be used to automate a unit test at the click of a button. This saves testing time and ensures more predictable results from test to test. Plus, it takes a lot of the guesswork out of interpreting test results for future developers who may not be as familiar with the technical underpinnings of a particular development object.
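As a rough sketch of what this looks like in practice, consider a minimal ABAP Unit test class. The class under test (zcl_purchase_order), its constructor parameter, and the method names here are all hypothetical; the FOR TESTING syntax and the cl_abap_unit_assert assertion class are standard ABAP Unit:

```
CLASS ltc_purchase_order DEFINITION FOR TESTING
  DURATION SHORT RISK LEVEL HARMLESS.
  PRIVATE SECTION.
    METHODS locked_po_is_detected FOR TESTING.
ENDCLASS.

CLASS ltc_purchase_order IMPLEMENTATION.
  METHOD locked_po_is_detected.
    " Hypothetical class under test; assumes a constructor
    " that accepts a PO number.
    DATA lo_po TYPE REF TO zcl_purchase_order.
    CREATE OBJECT lo_po
      EXPORTING iv_ebeln = '4500000001'.

    " Verify that a locked PO is recognized as such.
    cl_abap_unit_assert=>assert_equals(
      act = lo_po->is_locked( )
      exp = abap_true
      msg = 'Locked PO was not detected' ).
  ENDMETHOD.
ENDCLASS.
```

Once a local test class like this is attached to the production object, the whole suite runs at the click of a button, and the pass/fail results leave nothing open to interpretation.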
Observation #5: Good ABAP Code is the Best Documentation
At the end of the day, the definitive source on any development effort is going to be the source code. After all, it's the most likely artifact to get consistently updated. Given this, it's important that the code be maintained in such a way that it is highly readable. Here are a few tips for ensuring readability in your code:
- Variables should have meaningful names that make their purpose immediately clear to readers.
- Good modularization techniques should help break complex requirements up into smaller chunks that are easier to understand. Being an OO aficionado, I find tremendous value in modeling problems using classes which assume certain responsibilities within the problem domain. Such anthropomorphism makes it easier to translate between the human world and the code world. Method calls on such classes read like statements we would make in normal conversation. For example,
IF po->is_locked( ) EQ abap_true.

reads a whole lot cleaner than

SELECT flag from EKKO...
- It almost goes without saying that comments are highly important. Here though, I'd go a step further and say that good comments are important. Redundant or obvious comments are a waste of time.
- Indentation and white space are important. It should be easy to group sections of code visually for the reader.
- Don't be afraid to rip out portions of a module if the code becomes obsolete. There's nothing worse than having to scroll through pages of code that has been commented out over time. That's what version history is for. There are plenty of other places for developers to go back and find out why something changed.
- Data structures are important. I often find programs that define pages and pages worth of global variables that should have been encapsulated into a structure or object type. Such grouping makes the variable uses more intuitive and the code more concise.
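A minimal illustration of this point (the field grouping and the gs_po_header name are hypothetical, but the types are standard Data Dictionary elements):

```
" Scattered globals that force the reader to guess how they relate:
DATA: gv_ebeln TYPE ebeln,
      gv_bukrs TYPE bukrs,
      gv_lifnr TYPE lifnr.

" The same data grouped into one structure, so the relationship
" is explicit and the fields travel together through the program:
DATA: BEGIN OF gs_po_header,
        ebeln TYPE ebeln,   " PO number
        bukrs TYPE bukrs,   " Company code
        lifnr TYPE lifnr,   " Vendor
      END OF gs_po_header.
```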
- A class-based exception concept is preferred to having a series of subroutine calls followed by IF statements checking the values of flags. Such logical units of work should be wrapped in a TRY statement to separate the exception handling logic from the main programming logic.
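As a sketch, the two styles might compare as follows. The exception class zcx_po_locked and the method names are hypothetical; the TRY/CATCH construct and the get_text( ) method inherited from CX_ROOT are standard:

```
" Flag-checking style: error handling entangled with the main logic.
PERFORM release_po CHANGING gv_rc.
IF gv_rc <> 0.
  " handle error...
ENDIF.

" Class-based style: the happy path reads straight through,
" with error handling collected in one place at the end.
DATA: lx_error TYPE REF TO zcx_po_locked,
      lv_text  TYPE string.
TRY.
    lo_po->release( ).
    lo_po->send_to_vendor( ).
  CATCH zcx_po_locked INTO lx_error.
    lv_text = lx_error->get_text( ).
    MESSAGE lv_text TYPE 'E'.
ENDTRY.
```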
- Modules should have high cohesion, which is to say that they should do one thing and one thing only.
- The ABAP Workbench and CTS provide excellent facilities for documenting function modules, classes, methods, transports, and more. There's no better place to put documentation for a particular module than right there. After all, as a developer, my first inclination is always going to be to go right to the source. Plus, with CTS, it's easy to make sure that this documentation gets transported along with the rest of the code reliably.
More than any other artifact, the code is the one place where good documentation practices can't slide.
Before I conclude this blog post, I would be remiss if I didn't say that I think documentation is very important. As a developer, I feel that it is my responsibility to leave behind a series of artifacts which accurately portray the design and implementation decisions that went into my development process. In my experience, these artifacts can take on many different forms depending on the context. As such, I think it important for projects to provide developers with the flexibility to produce something that is meaningful as opposed to a series of boilerplate documents which aren't worth the paper they're printed on.
Having worked in other software disciplines, I find other methodologies such as RUP much more effective than the ones used on the typical SAP project. I say this not to point the finger at SAP, since SAP has long since moved on from the waterfall approach typical of ASAP projects of the 1990s. More often than not, these outdated practices are carryovers from consulting practices/development shops that have "always done it this way". These days, I don't think that's going to cut it anymore.