trond.stroemme


Object Orientation and Performance - Mission Impossible?


Why is OO perceived as incompatible with high performance? I asked myself that question after reading one of the comments on the blog ABAP Trilemma, where the author discusses the weighted relevance of Security, Performance and Design. The comment related to the well-known "truth" (some would call it Conventional Wisdom) that Object Orientation and Performance do not go well together.

I posted a brief reply to this comment, but realized this is too important a subject to be left alone. Hence this (hopefully) provocative rant.

The two tongues of ABAP

The ABAP world is unique in the sense that it spans the cleft between the two main design paradigms in computer programming: procedural and object-oriented. To my knowledge, no other language has such a schizophrenic group of followers as ABAP. On one end, there are the "Object Evangelists", a mix of converted old-timers who have seen the light and the younger generation of developers who never bothered with any of the stone-age languages in the first place. On the other side, we have the "Procedural Protestants", firmly rooted in the lore of COBOL, PL/1 and other legacies of the largely IBM-mainframe way of doing things - the origins of SAP.

The comment focused on the well-known "fact" that Object Orientation does not mix well with high performance. Good design (and strangely, everyone - regardless of camp affiliation - seems to agree that this means object-oriented programming) implicitly means we'll have to sacrifice something on the performance side. If it looks nice, it won't run well.

Should we believe this? Honestly, I don't think so. Not anymore. Let's look at what the above "truth" implies. I will do this by postulating a few theses of my own.


1. Applications dealing with high data volumes can be made less complex with OO.

Most of the applications dealing with high data volumes tend to be fairly straightforward, conceptually. They read data, process it, then provide some form of output or data update. This should actually be a good reason for dealing with them in an OO-centric way. The data access itself usually comes down to relatively simple select statements (I'll deal with that later). What complicates matters is the actual manipulation of the data, and it's my humble opinion that this is far better handled by architecting a proper OO development. Defining and using business objects and their methods will simplify the application, as opposed to writing a (perceived) streamlined classical ABAP, complete with subroutines and calls to function modules. The OO-based application will be easier to read and maintain, and should not bog down your system perceptibly. Even if it does, there are other techniques you can use, such as Shared Memory Objects, parallel processing (qRFC's) and so on. None of these techniques are tied to classical, procedural ABAP.


2. Performance issues are largely related to (persistent) data access.

Most of the performance-related issues with large data volumes relate to the DB access itself. This is largely independent of the application design. Considerations such as intelligent table access via proper indexes, reading a limited set of fields, proper use of indexed internal tables and so on can just as easily be applied within an OO context. If the report or application is going to be run repeatedly with varying selection criteria or processing parameters, one should consider building a shared memory object to hold the desired data, as opposed to repeatedly reading large data volumes from database tables. Again, splitting the processing using tRFC's or the newer qRFC technique should be investigated. In any case, the DB access issues are not related to how the application itself is designed.
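To make the point concrete, here is a minimal sketch (the class, method and variable names are my own, not from any real application) showing that the usual DB-access rules - a narrow field list, a selection supported by the primary key, a keyed internal table - carry over unchanged into a class:

```abap
class lcl_doc_reader definition.
  public section.
    types: begin of t_doc,
             bukrs type bukrs,
             belnr type belnr_d,
             gjahr type gjahr,
           end of t_doc,
*          A sorted table gives us fast keyed reads later on
           tt_doc type sorted table of t_doc
                  with unique key bukrs belnr gjahr.
    methods read_headers
      importing iv_bukrs       type bukrs
      returning value(rt_docs) type tt_doc.
endclass.

class lcl_doc_reader implementation.
  method read_headers.
*   Read only the fields we need, supported by the primary index -
*   exactly the same DB considerations as in a procedural report
    select bukrs belnr gjahr
      from bkpf
      into table rt_docs
      where bukrs = iv_bukrs.
  endmethod.
endclass.
```

Whether this select lives in a FORM routine or in a method makes no difference whatsoever to the database.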


3. What about the other aspects of "performance"?

There are different flavours of "performance". What about reuse? Ease of maintenance? Total cost of ownership for procedural vs OO? I'd much rather maintain and enhance a (well-written and properly structured) OO application than a 5000-line procedural monster. Anytime. Also, I've never seen an ABAP application that has not been modified at a later point in time. In more than 95% of the cases, the modifications have been done by a developer other than the creator of the original application. Usually from a different service provider, as well. I believe this speaks for itself.


4. The "OO is not performance-friendly" statement is used as an excuse to avoid having to learn OO.

Lastly, and maybe most importantly, I believe the "OO is not performance-friendly" mantra is - to a large extent - put forth by those developers/managers/devco's who are not too comfortable with the OO world themselves, in order to avoid having to deal with it. It's convenient to chuck the whole OO discussion down the drain by using a cheap argument. Also, I believe that most of the ABAP'ers dealing with performance-related developments are still very firmly rooted in the procedural world. No offense intended.


Conclusion

Sure, there are cases where a short, 50-line procedural ABAP does the trick and can be written in 5 minutes - as opposed to doing a full OO design approach. But these are exceptions. Normally, you don't get assignments like this. Your standard development project spans weeks, if not months, and involves whole teams of programmers, not just you.

This last statement also shows the OO-induced shift towards a more community-centric way of working (think Scrum, Agile, XP). Speaking in OO is, to some extent, a social experience. Your team is developing a community of entities (objects) that work together to achieve a common goal. Creating a behemoth of a procedural beast is more often than not a fairly lonely task, undertaken in splendid isolation in a cubicle or office at the end of the corridor. As we did in the past.

Imagine this:

You've been working long and hard on a very important development. You've created your classes, WDA's, programs, function modules, and the related DD elements, domains, table types and structures.

Everything looks fine. Your transport is ready to go. You release it to the Q system, and everything still looks fine. Users are happy, and so are you. You get the green light. The transport makes it to production. You leave the office, happy that yet another accomplishment is done.

Next morning...

...you arrive at the office. You've barely had time to gulp down those first few drops of caffeine before your mailbox starts to inflate. Or the phone rings. Or both. -Hey, we have a situation. In Prod. And it seems to be, uh, your fault.

Or your development, more precisely.

What??

MY development? Can't be. It was tested. Double checked. It's watertight.

Except we're not talking functional issues. The thing failed to generate. It won't work. Why? Because one of your tables refers to domain ZZ_WHATEVER, and ZZ_WHATEVER doesn't seem to be included in the transports...

Uh. Right. OK. Um...

Turns out ZZ_WHATEVER is on another transport, created by your colleague Bob, in the next cubicle. Now good old Bob created ZZ_WHATEVER for one of his own projects, and it looked so good you decided to use it for one of your own table fields.

No use re-inventing the wheel, right?

Problem is...

...although Bob's transports have gone all the way to Q, they're still in test phase... so that nice domain of his, ZZ_WHATEVER, is not in Prod. Which is why your table fails to generate.

I guess this situation is familiar to many of you. Every once in a while, a transport released to a downstream system "misses" one or two vital components. This can happen for several reasons:

  • you have created a new development that re-uses an existing class or DD element, which has not been previously transported
  • you want to import the development into a freshly installed system (without using SAPlink), and you have manually compiled a transport containing all the parts, but forgotten one or two pieces
  • an emergency fix has been implemented that relies on some other custom development objects, and you didn’t think of adding these to the current transport (or maybe they are locked in other transports)

The solution (well, one solution, anyway)

  image

Whatever the reason, it would be nice to have a tool to analyze your transport before releasing it, just to check for inconsistencies. This is where the Transport Consistency Analyzer comes in handy. It will do the following:

  1. create a list of all objects in the transport
  2. scan all code lines (programs, includes, functions (including the whole function group), classes)
  3. iteratively scan all objects referred to or used in the master object list (down to the n’th level) to obtain a complete list of code objects
  4. scan the final, complete list of objects and compile an exhaustive list of all DD objects, functions, programs, includes, classes etc. that are used or referred to
  5. provide the opportunity to add the missing pieces to your original transport – or create a new one, automatically
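To give an idea of what step 1 amounts to under the hood: a transport request's object list lives in table E071, so the master list can be gathered with a simple select (this fragment is my own sketch, not actual TCA code, and the request number is made up):

```abap
data: lt_objects type standard table of e071,
      lv_trkorr  type trkorr value 'DEVK900123'.  "example request only

* Every object recorded in a transport request has a row in E071,
* identified by program ID (PGMID), object type (OBJECT) and name (OBJ_NAME)
select * from e071
  into table lt_objects
  where trkorr = lv_trkorr.
```

The interesting (and hard) part is steps 2-4: chasing the references of each of these objects down to the n'th level.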

image

How & Where?

I started working on this tool more than 2 years ago. It was originally a small(ish) report, but has been properly "Object-Oriented" along the way. Now, I've decided to set up a project on Code Exchange for it. The TCA, in its current version (which I humbly call 0.9), already does all of the above. It is currently in "test phase", and is being continuously worked on. I am providing it as a Code Exchange project because I believe it already has a certain value, and would welcome inputs and comments – as well as any corrections that the community might find it worthwhile to point out!

Please note that as this is still a preliminary version, there might be bugs and issues. I take no responsibility for any flaws or incorrect results, or their possible consequences, and would like to encourage ABAP geeks out there to examine the results closely before relying on the output of the program. Nevertheless, I believe the tool provides decent functionality as of now, and that it has a certain “business value”.

Here's a direct link: Transport Consistency Analyzer

Collaborative projects like Code Exchange are one of the many areas where SDN/SCN proves its undisputed value to the SAP community. From its humble beginnings, it is now becoming a vibrant scene for developers all around the world. Just one more reason to get involved and contribute!

Intro

Parallel processing is not a new concept, but one that is regularly overlooked when it comes to increasing ABAP performance. Why?

Your SAP system will (normally) have more than one process available at any given time. Still, most of us insist on using just one of them. This is a bit like a manufacturer relying on only one truck to bring the products from his plant to the shopping malls, when there's a whole fleet of trucks just standing by!

Not only that, but most SAP systems span more than one application server, each with a range of (hopefully) available processes. So, what are we waiting for?

h2. OK, so my program takes forever to execute. How can I put it on steroids?

In this blog, I'll show a practical example of dealing with one of the dreaded father-and-son relations in the wonderful world of SAP FI: accounting documents, tables BKPF and BSEG. Prowling your way through these tables can really take its toll, both on the system itself and on your patience. Not to mention that of the customer breathing down your neck.

 

There are numerous other blogs and papers about parallel processing using RFC's on SDN, and one of the best (if not the best) is Thorsten Franz' blog Calling Function Modules in Parallel Universes.

I actually suggest you start there for some very interesting background info on the merits (and pitfalls) of using RFC's. There's also a link to Horst Keller's blog series and the official SAP documentation. In addition, the excellent book "ABAP Cookbook" from SAP Press (by James Wood) outlines the principles of using asynchronous RFCs for parallel processing (as well as providing loads of other cool stuff for enhancing your ABAP skill set!) A highly recommended read.

Using asynchronous RFC's without caution is not recommended, and you risk bogging down the system as well as running into errors that can be difficult to resolve. However, if you do decide to use parallel processing, the following might be a good starting point. I'll try to keep things simple and explain every step along the way.

h2. The sample program

We'll create a program to display info from BKPF/BSEG. The program will read all entries from BKPF (feel free to introduce your own selection criteria here, such as company code and/or fiscal year), and then retrieve all related entries from BSEG. This second step will be done by calling a function module via RFC, repeatedly, in parallel. We will try to balance the workload based on the number of documents in the tables, and the available processes on our SAP application servers.
Finally, we will examine the runtime analysis of the program and compare to a standard single process execution.


The test program basically consists of the following steps:

 

    • Call function module SPBT_INITIALIZE to find out how many available processes we can use
    • Split the number of documents into handy packages and call an RFC a number of times in parallel
    • Wait for the results from the called RFC's and merge them all back into the final report

The program is fairly straightforward. It reads BKPF, tries to split the retrieved BKPF entries into nice "packages" based on the key fields BUKRS and GJAHR. These are the two main parameters for our RFC-enabled function module - we're building range tables for them in order to facilitate our work. The idea is to pass these two as ranges to the RFC-enabled function reading BSEG, so that the number of documents passed to each call of the function is more or less consistent. Since the number of financial documents will vary with company codes and fiscal years, we cannot ensure a 100% even workload, but this is just an example.


Based on the available resources (the number of processes across all application servers, which we find using function SPBT_INITIALIZE), we then start to kick off calls to the RFC-enabled function module. This is done a number of times in parallel, using the CALL FUNCTION ... STARTING NEW TASK ... PERFORMING ... ON END OF TASK statement. By using this feature, we ensure that the calling program executes a specific form whenever control is passed back from the RFC "bubble" to the calling program (common sense states you should use object orientation, and thus specify a method, but for our example I find the classical procedural program better for illustration purposes).

 

What happens when the aRFC finishes is the following:

    1. Control is passed back to the calling program
    2. The form (or method) specified in the CALL FUNCTION statement (addition PERFORMING ... ON END OF TASK) is called. This form enables you to retrieve any returning or exporting parameters from the called RFC, and use them in the main processing.
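Stripped of all packaging logic, the calling pattern boils down to this skeleton (the function module and parameter names here are placeholders, not the ones used in the full program further below):

```abap
type-pools: abap.

data: gv_done   type abap_bool,
      gv_result type string.         "placeholder for the FM's result

start-of-selection.
* Fire-and-forget: control returns to this program immediately
  call function 'Z_ANY_RFC_FM'       "placeholder - must be remote-enabled
    starting new task 'T001'
    destination in group default
    performing receive_result on end of task
    exceptions
      communication_failure = 1
      system_failure        = 2
      resource_failure      = 3.

* Without a WAIT, the program could end before the callback ever runs
  wait until gv_done = abap_true.
  write : / gv_result.

*----------------------------------------------------------------------*
* Called automatically when the task finishes
*----------------------------------------------------------------------*
form receive_result using p_taskname type clike.
  receive results from function 'Z_ANY_RFC_FM'
    importing re_result = gv_result. "parameter name is a placeholder
  gv_done = abap_true.
endform.
```

Note the WAIT UNTIL on a flag set inside the callback form - that is what keeps the main program alive until the asynchronous results have arrived.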

 

By splitting the workload into sizeable chunks, we can execute a multitude of workloads simultaneously, thereby reducing the total execution time to a fraction of the time traditionally used. In my example, I was able to run this report in less than 5% of the time it took running it in one single process.


The program and function module are presented below. I've done my best to insert comments in order to explain what's going on, and hope you can use this as a template.

The function module has been created as an RFC function (check the Remote-Enabled Module box on the Attributes tab). Besides this, there's nothing special about it.

*&---------------------------------------------------------------------*
*& Report  ZTST_CALL_ASYNC_RFC
*&---------------------------------------------------------------------*
*& A small test program for calling function modules a number of times
*& in parallel.
*&
*& The program calls the RFC-enabled function ZTST_READ_BSEG - which
*& reads entries from table BSEG and calculates totals
*&
*& A log table, ZTST_RFC_LOG, is used for logging the progress, both
*& from within this program and the function itself.
*&
*& The program will launch the function on all available app servers.
*&---------------------------------------------------------------------*

 

report  ztst_call_async_rfc.

type-pools: abap.

types: begin of t_bkpf,
         bukrs type bukrs,
         gjahr type gjahr,
         belnr type belnr_d,
       end of t_bkpf,
       tab_bkpf type table of t_bkpf.

types: begin of t_bseg,
         bukrs type bukrs,
         gjahr type gjahr,
         dmbtr type dmbtr,
       end of t_bseg.

types: begin of t_stat,
         bukrs type bukrs,
         gjahr type gjahr,
         count type i,
       end of t_stat.

types: begin of t_tasklist,
         taskname(4) type c,
         rfcdest     like rfcsi-rfcdest,
         rfchost     like rfcsi-rfchost,
         result      type char50,
       end of t_tasklist.

data: lv_max_processes              type i,
      lv_free_processes             type i,
      lv_number_of_processes_in_use type i,
      lv_started_rfc_calls          type i value 0,
      lv_finished_rfc_calls         type i value 0,
      lv_exception_flag(1)          type c,
      lv_taskname(4)                type n value '0001',
      lt_tasklist                   type table of t_tasklist,
      lv_index                      type i,
      lt_bkpf                       type tab_bkpf,
      lv_lines_in_bkpf              type i value 0,
      lv_records_pr_rfc             type i value 0,
      lv_loop_pass                  type i value 0,
      lv_gjahr                      type gjahr,
      lv_belnr_start                type belnr_d,
      lv_belnr_end                  type belnr_d,
      lt_results_total              type ztst_bseg_results_tt,
      lv_total_so_far               type i,
      lv_bukrs_range                type ztst_bukrs_range,
      lt_bukrs_range                type ztst_bukrs_range_tt,
      lv_gjahr_range                type ztst_gjahr_range,
      lt_gjahr_range                type ztst_gjahr_range_tt,
      lv_bseg                       type t_bseg,
      lt_results                    type ztst_bseg_results_tt,
      lv_stat                       type t_stat,
      lt_stat                       type table of t_stat,
      lv_sum                        type i.

field-symbols: <fs_bkpf>     type t_bkpf,
               <fs_stat>     type t_stat,
               <fs_tasklist> type t_tasklist,
               <fs_result>   type ztst_bseg_results_s.

parameters: p_para as checkbox default 'X'.

 

 

start-of-selection.

  perform initial_selection.

* The parameter P_PARA allows you to run in sequential mode for
* comparison purposes!
  if p_para = abap_true.

* Start by retrieving number of maximum and available processes
    call function 'SPBT_INITIALIZE'
      exporting
        group_name                     = ''
      importing
        max_pbt_wps                    = lv_max_processes
        free_pbt_wps                   = lv_free_processes
      exceptions
        invalid_group_name             = 1
        internal_error                 = 2
        pbt_env_already_initialized    = 3
        currently_no_resources_avail   = 4
        no_pbt_resources_found         = 5
        cant_init_different_pbt_groups = 6
        others                         = 7.

    if sy-subrc <> 0.
      message id sy-msgid type sy-msgty number sy-msgno
              with sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
    else.
      write : / 'Max processes: ', 50 lv_max_processes right-justified.
      write : / 'Free processes: ', 50 lv_free_processes right-justified.
      uline.

* Clear the log table of old entries
      delete from ztst_rfc_log where taskname <> '0000'.
      delete from ztst_rfc_log where taskname = '0000'.

      lv_records_pr_rfc = lv_lines_in_bkpf / 1000.
      write : / 'Number of estimated RFC calls: ',
              50 lv_records_pr_rfc right-justified.
      uline.

      sort lt_bkpf.
      move 0 to lv_index.

 

 

* Accumulate total number of BKPF per BUKRS and GJAHR
      loop at lt_bkpf assigning <fs_bkpf>.
        clear lv_stat.
        lv_stat-bukrs = <fs_bkpf>-bukrs.
        lv_stat-gjahr = <fs_bkpf>-gjahr.
        lv_stat-count = 1.
        collect lv_stat into lt_stat.
      endloop.
      loop at lt_stat assigning <fs_stat>.
        write : / <fs_stat>-bukrs, 15 <fs_stat>-gjahr,
                50 <fs_stat>-count right-justified.
      endloop.
      uline.

      skip 2.
      write : / 'Log messages during execution:'.
      uline.

 

 

* Main loop - here, we loop at LT_STAT, which contains number of documents per company code
* and year (BUKRS and GJAHR). We use these figures to build ranges for BUKRS and GJAHR, which
* are then used when calling the RFC function module.
* If you do not need to programmatically calculate the number of entries for each RFC call,
* for instance when processing a table sequentially, you can replace this logic with something
* simpler, i.e. loop at table, call RFC for each 1000 entries.
      loop at lt_stat assigning <fs_stat>.
        if ( <fs_stat>-count + lv_total_so_far ) < 1500.
* Means we have previous entries in ranges, but < 1500 total (including the current record).
* We add current record to previous ranges and run all of them.
          move 'I'  to lv_bukrs_range-sign.
          move 'EQ' to lv_bukrs_range-option.
          move <fs_stat>-bukrs to lv_bukrs_range-low.
          append lv_bukrs_range to lt_bukrs_range.
          move 'I'  to lv_gjahr_range-sign.
          move 'EQ' to lv_gjahr_range-option.
          move <fs_stat>-gjahr to lv_gjahr_range-low.
          append lv_gjahr_range to lt_gjahr_range.
          add <fs_stat>-count to lv_total_so_far.
          if lv_total_so_far > 1000.
            write : / 'Calling RFC for accumulated ranges',
                    60 lv_total_so_far right-justified.
            perform call_rfc.
            add 1 to lv_loop_pass.
            refresh lt_bukrs_range.
            refresh lt_gjahr_range.
            move 0 to lv_total_so_far.
          endif.
        else.
          if lv_total_so_far > 0.
* If anything at all in previous ranges, run for these first & flush
            write : / 'Calling RFC for previous range',
                    60 lv_total_so_far right-justified.
            perform call_rfc. " with previous ranges only
            add 1 to lv_loop_pass.
            refresh lt_bukrs_range.
            refresh lt_gjahr_range.
            move 0 to lv_total_so_far.
          endif.

* Now, run RFC for BUKRS/GJAHR current record (which has more than 1000 in count)
          move 'I'  to lv_bukrs_range-sign.
          move 'EQ' to lv_bukrs_range-option.
          move <fs_stat>-bukrs to lv_bukrs_range-low.
          append lv_bukrs_range to lt_bukrs_range.
          move 'I'  to lv_gjahr_range-sign.
          move 'EQ' to lv_gjahr_range-option.
          move <fs_stat>-gjahr to lv_gjahr_range-low.
          append lv_gjahr_range to lt_gjahr_range.
          perform call_rfc.
          add 1 to lv_loop_pass.
          refresh lt_bukrs_range.
          refresh lt_gjahr_range.
        endif.
      endloop.

    endif.

 

 

* That's it! We've called our RFC a number of times (hopefully more than one).
* Now, all that remains is to wait until all RFC's have finished.
    wait until lv_finished_rfc_calls = lv_loop_pass.

* Write the contents of TASKLIST to show which servers were used and how things went...
    skip 2.
    write : / 'Result of RFC calls:'.
    uline.
    loop at lt_tasklist assigning <fs_tasklist>.
      write : / <fs_tasklist>-taskname, 10 <fs_tasklist>-rfcdest,
              40 <fs_tasklist>-result.
    endloop.
    skip.

 

 

 

 

* Booooring... sequential mode - for performance comparison only (try runtime analysis on each logic)
  else.

    loop at lt_bkpf assigning <fs_bkpf>.

      select bukrs gjahr dmbtr from bseg into lv_bseg
             where bukrs = <fs_bkpf>-bukrs
             and   belnr = <fs_bkpf>-belnr
             and   gjahr = <fs_bkpf>-gjahr.
        divide lv_bseg-dmbtr by 1000000.  " To avoid overflow when summing up...
        collect lv_bseg into lt_results_total.
      endselect.
    endloop.

  endif.

 

 

* Final touch: print the results of all our efforts.
  sort lt_results_total.

  skip 2.
  write : / 'Results from BSEG:'.
  uline.
  write : / 'Company', 10 'Year', 25 'Amount in mill.'.
  loop at lt_results_total assigning <fs_result>.
    write : / <fs_result>-bukrs, 10 <fs_result>-gjahr,
            25 <fs_result>-dmbtr right-justified.
  endloop.

 

 

*&---------------------------------------------------------------------*
*&      Form  initial_selection
*&---------------------------------------------------------------------*
*       Read the document headers from BKPF
*----------------------------------------------------------------------*
form initial_selection.

  select bukrs belnr gjahr from bkpf
    into corresponding fields of table lt_bkpf.

  describe table lt_bkpf lines lv_lines_in_bkpf.
  write : / 'Number of records in BKPF:',
          50 lv_lines_in_bkpf right-justified.

endform.                    "initial_selection

 

*&---------------------------------------------------------------------*
*&      Form  call_rfc
*&---------------------------------------------------------------------*
*       Call the RFC-enabled function module in a new task
*----------------------------------------------------------------------*
form call_rfc.

  add 1 to lv_number_of_processes_in_use.

* Note that it might not be a good idea to use ALL free processes, such as here.
* Doing so might cause minor inconveniences for other users, or nasty phone calls
* from Basis....
* A better idea would be to reduce lv_free_processes by, say, 5 before starting.

  if lv_number_of_processes_in_use > lv_free_processes.
    write : / 'Waiting; number of processes > ', lv_number_of_processes_in_use.
    wait until lv_number_of_processes_in_use < lv_free_processes.
    write : / 'Waiting over, number of processes =', lv_number_of_processes_in_use.
  endif.

  call function 'ZTST_READ_FINANCIAL_DOCS'
    starting new task lv_taskname
    destination in group default
    performing receive_results_from_rfc on end of task
    exporting
      im_taskname           = lv_taskname
      im_bukrs              = lt_bukrs_range
      im_gjahr              = lt_gjahr_range
* Note that we are not using the IMPORTING parameter here;
* instead it's used when doing RECEIVE RESULTS in form RECEIVE_RESULTS_FROM_RFC
    exceptions
      communication_failure = 1
      system_failure        = 2
      resource_failure      = 3.

  case sy-subrc.

    when 0.
      write : / 'Started new task, task name ', lv_taskname.

      append initial line to lt_tasklist assigning <fs_tasklist>.
      <fs_tasklist>-taskname = lv_taskname.

* Retrieve the name of the server
      call function 'SPBT_GET_PP_DESTINATION'
        importing
          rfcdest = <fs_tasklist>-rfcdest
        exceptions
          others  = 1.

      lv_started_rfc_calls = lv_started_rfc_calls + 1.
      add 1 to lv_taskname.

    when 1 or 2.           "Communications failure
* This could mean an app server is unavailable; no real need to handle this situation in most cases.
* (Subsequent calls to the same server will fail, but the FM should run nicely on all available servers).

    when 3.                "No available dialog processes right now - wait!
      if lv_exception_flag = space.
        lv_exception_flag = 'X'.
        write : / 'No more processes available, waiting...'.
        wait until lv_finished_rfc_calls >= lv_started_rfc_calls.
      else.                "Second attempt
        write : / 'Still no more processes available, waiting...'.
        wait until lv_finished_rfc_calls >= lv_started_rfc_calls.

        if sy-subrc = 0.   "Wait successful - processing continues
          clear lv_exception_flag.
        else.              "Wait failed - something is wrong with RFC processing. Aborting.
          write : / 'No RFC calls completed - aborting processing!'.
          exit.
        endif.
      endif.
  endcase.

endform.                    "call_rfc
*&---------------------------------------------------------------------*
*&      Form  RECEIVE_RESULTS_FROM_RFC
*&---------------------------------------------------------------------*
*       Called when we return from the aRFC.
*----------------------------------------------------------------------*
*      -->VALUE(P_TASKNAME)  Name of the finished task
*----------------------------------------------------------------------*
form receive_results_from_rfc using value(p_taskname).

* Note: WRITE statements will not work in this form!

  data lv_netwr type netwr_ap.
  data lv_netwr_num(18) type n.
  data lt_results type ztst_bseg_results_tt.

  lv_number_of_processes_in_use = lv_number_of_processes_in_use - 1.

* Update the TASKLIST table, which is used for logging
  read table lt_tasklist with key taskname = p_taskname
       assigning <fs_tasklist>.
  if sy-subrc = 0.
    if <fs_tasklist>-result is initial.
      <fs_tasklist>-result = 'Task completed'.
    else.
      <fs_tasklist>-result = 'Error in task execution'.
    endif.
  endif.

* Receive the results from the RFC
  receive results from function 'ZTST_READ_FINANCIAL_DOCS'
    importing re_results = lt_results.      " <--- receiving the result from the RFC!

* Loop at partial results; include in our totals table
  loop at lt_results assigning <fs_result>.
    collect <fs_result> into lt_results_total.
  endloop.

  add 1 to lv_finished_rfc_calls.

endform.                    "RECEIVE_RESULTS_FROM_RFC

function ztst_read_financial_docs.
*"----------------------------------------------------------------------
*"*"Local Interface:
*"  IMPORTING
*"     VALUE(IM_TASKNAME) TYPE  NUMC4
*"     VALUE(IM_BUKRS) TYPE  ZTST_BUKRS_RANGE_TT
*"     VALUE(IM_GJAHR) TYPE  ZTST_GJAHR_RANGE_TT
*"  EXPORTING
*"     VALUE(RE_RESULTS) TYPE  ZTST_BSEG_RESULTS_TT
*"  CHANGING
*"     VALUE(CH_NETWR) TYPE  NETWR_AP OPTIONAL
*"----------------------------------------------------------------------

 

 

* This is a function module used by program ZTST_CALL_ASYNC_RFC
* for demo purposes. The idea is to show how to take long-processing
* programs and split them up into parallel processes, by calling
* RFC's asynchronously. The calling program will call this function
* module a number of times, then collect the results and process them.

  types: begin of t_bseg,
           bukrs type bukrs,
           gjahr type gjahr,
           dmbtr type dmbtr,
         end of t_bseg.

  data lt_bseg type table of t_bseg.
  data lv_rfc_log type ztst_rfc_log.
  data lv_filename type string.
  data lv_result type ztst_bseg_results_s.

  field-symbols <fs_bseg> type t_bseg.

* Read the requested documents and accumulate the amounts
  select bukrs gjahr dmbtr from bseg
    into table lt_bseg
    where bukrs in im_bukrs
    and   gjahr in im_gjahr.

  loop at lt_bseg assigning <fs_bseg>.
    move-corresponding <fs_bseg> to lv_result.
    divide lv_result-dmbtr by 1000000.    "To avoid overflow when summing up
    collect lv_result into re_results.
  endloop.

endfunction.

h2. Structure for the exporting parameter (table)

h2. Checking the system load

During program run, you can check your RFC's with transaction SM66, which shows all processes across the application servers of your system.

h2. Final word: run time analysis

Try using the run time analysis on the program, both when selecting parallel mode and when un-checking P_PARA (which causes a normal select within the main program). The runtime analysis won't show the additional load of the RFC modules running in parallel, but the total program execution time is far lower - and this, after all, is the main point of splitting a workload into separate parallel tasks.

Running in sequential mode (no parallel RFC's):

!https://weblogs.sdn.sap.com/weblogs/images/252050646/img5.GIF|height=275|alt=Sequential run|width=617|src=https://weblogs.sdn.sap.com/weblogs/images/252050646/img5.GIF!

Running in parallel mode:

!https://weblogs.sdn.sap.com/weblogs/images/252050646/img6.GIF|height=199|alt=With parallel processing|width=626|src=https://weblogs.sdn.sap.com/weblogs/images/252050646/img6.GIF!

As you can see, the run time is dramatically reduced. The total system load may amount to the same (and should actually be slightly higher, given the overhead of the separate RFC's), but it's the total execution time that counts. Here, the execution time with asynchronous RFC's is roughly 10% of that of a classical "all-in-one" process.

The above tests were run in a system with approximately 35,000 entries in BKPF, and 100,000 in BSEG.

h2. Words of warning (again)


A few words on transactional RFC's: there are situations where you cannot use this technique. Commits cannot be performed inside an RFC - this would conflict with the session in which the main program is running. You can find more info on these topics by checking SAP Help for RFC programming. However, for the larger part of processor-intensive developments, it is a technique that is sadly overlooked. I recommend everyone give it a try, provided they follow the guidelines provided by SAP on the topic.

h2. Additional reading:

In addition to the blogs and resources mentioned at the start, the following is worth checking out:

 

Horst Keller: Application Server, what Application Server?

 

Rumour has it that there will, from WAS version 8.0, be three different categories of ABAP classes: Upper, Middle, and Lower. What follows is an exclusive heads-up regarding these revolutionary new features of ABAP!

 

Upper classes:

  • Upper classes can invoke methods of middle and lower classes (but only call favours of other upper classes).
  • Upper classes are always abstract.
  • Upper classes are allowed to call a specific default method of lower classes: Abuse.
  • Upper classes normally refuse to be friends with most middle, and all lower classes. This will result in compilation errors (for middle class friendships) and system dumps (for lower class friendships).
  • Upper classes will occasionally try to allocate a higher amount of system resources for their own use - as long as this is not noticed by the dispatcher. Whenever attention is brought to this point, the upper class in question will instantly re-model itself or automatically re-implement itself in a different application server with more generous system resources.
  • The attributes of the upper classes are generally more beautiful and well-shaped than those of the middle and lower classes.
  • Some of the methods of upper classes are completely empty.

 

Middle classes:

  • Middle classes can only invoke methods of other middle or lower classes.
  • Middle classes usually have a wide range of methods, but are somewhat more demanding on system resources than lower classes (see below).
  • Middle classes are relatively stable compared with lower (and upper) classes, especially during times of system stability. In an unstable system, however, they will be more prone to failing.
  • Middle classes will perform noticeably better when co-existing in an application with upper classes (especially when their methods are called by upper classes). Less so if they are combined with lower classes, whereby they become more likely not to respond.

 

Lower classes:

  • Lower classes can only call methods of other lower classes, never those of middle or upper classes. If a lower class tries to call the methods of an upper class, the upper class will ignore it. Lower classes calling methods of a middle class may result in the middle class throwing an exception.
  • Lower classes cannot even exist in an application containing upper classes. Such applications will not compile.
  • (The concept of a sub-type of lower classes called the "working class" was abandoned early on, after a high percentage of such implementations were shown not to respond to their own methods.)
  • Lower classes generally have very limited sets of methods, but more (and larger) attributes than upper or middle classes. Besides, the attributes of the lower classes are generally uglier, since they usually only inherit from other lower classes (never from upper classes). In general, as the system grows, the functionality of most lower classes will become less desirable and the classes will eventually make themselves redundant.

 

Epilogue:

More and more applications now rely on a specific type of interface implemented via so-called Outsourced Classes: classes that are implemented in an entirely different application server, and only called upon when needed. This application server usually runs in a different time zone.

Search helps and value lists in WDA, and how to program them, are a recurring theme in the WDA forum. I have tried to put together some tips and how-tos related to the subject in this blog series. Note that this is not an exhaustive tutorial – if anyone has complementary info, they're very welcome to comment (or take the blogs further!). To warm up, I will start by describing how to create a simple OVS search help and tailor it to your needs. In my opinion, an OVS is the easiest way to hack a search help into fitting specific criteria, some of which can be determined at run-time. In the next blog, I will discuss how to generate dynamic value sets for dropdown lists, both in stand-alone fields and inside a table.

A brief overview of search helps

Basically, there are four types of search helps: Automatic, where you use the built-in search help of the DD element; Dictionary, where you specify a particular DD search help (for instance, your own); OVS, where you use component WDR_OVS and tailor it to suit your needs; and finally freely programmed search helps. In this starter blog, I'll concentrate on the first three, with specific focus on OVS.

The Automatic and Dictionary search helps are fairly simple to use. If your context field is typed on a DD element which has a search help, or whose domain has a value help (check table), the F4 key will bring up the related search help. This means you do not have to do anything explicitly in order to get the job done:

[Screenshot: sample F4 DD-based search help on airline codes (field SFLIGHT-CARRID)]

The third search help type, and the focus of this blog, is the OVS – Object Value Selector. You use it via the WDR_OVS component. This pre-generates most of the code for you, and all you have to worry about is programming the select logic for the value list (similar to creating a search help user exit).
The OVS search help example

OVS – the Object Value Selector – is used via the WDR_OVS component. It gives you a general framework for retrieving the values to display, which you can then modify to suit your needs (a bit like creating a search help user exit).

Start by including the WDR_OVS component in your WDA's list of used components:

[Screenshot: WDR_OVS in the used components list]

Then, in the view, create a reference to the same component (related to the Interfacecontroller):

[Screenshot: component usage reference in the view]

In the view's context, specify OVS as the search help type and use the name of your OVS component usage:

[Screenshot: context attribute with OVS as search help type]

Finally, the last part of the configuration work: create an event handler in the view's method list by selecting the OVS event from the (grey) Event column. Give your method a name as well:

[Screenshot: event handler method OVS_AIRPLANE]

Now for the fun part. Double-clicking on the OVS_AIRPLANE method reveals that a wizard has already been at work. Most of the code has in fact been pre-generated for you – all you have to do is fill in the parts needed to tailor the OVS search help to your specific needs. We want to find all airplane types that have at least the minimum number of seats entered by the user in the search help. Here's the code, after I made my own modifications to the pre-generated OVS template:

  method ovs_airplane.

* Structure for search help input
    types:
      begin of lty_stru_input,
*       add fields for the display of your search input here
        minimum_seats type s_seatsmax,
      end of lty_stru_input.

* Structure for result list from search help
    types:
      begin of lty_stru_list,
*       add fields for the selection list here
        planetype type s_planetye,
      end of lty_stru_list.

    data: ls_search_input  type lty_stru_input,
          lt_select_list   type standard table of lty_stru_list,
          ls_text          type wdr_name_value,
          lt_label_texts   type wdr_name_value_list,
          lt_column_texts  type wdr_name_value_list,
          lv_window_title  type string,
          lv_group_header  type string,
          lv_table_header  type string.

    field-symbols: <ls_query_params> type lty_stru_input,
                   <ls_selection>    type lty_stru_list.

    case ovs_callback_object->phase_indicator.

      when if_wd_ovs=>co_phase_0.  "configuration phase, may be omitted

*       Set texts for search help box
        ls_text-name  = `MINIMUM_SEATS`.  "must match a field name of search
        ls_text-value = `Minimum number of seats`. "wd_assist->get_text( `001` ).
        insert ls_text into table lt_label_texts.

        lv_window_title = 'Airplanes with more seats than...'.
*       lv_group_header = wd_assist->get_text( `004` ).
*       lv_table_header = wd_assist->get_text( `005` ).

        ovs_callback_object->set_configuration(
                  label_texts  = lt_label_texts
                  column_texts = lt_column_texts
                  group_header = lv_group_header
                  window_title = lv_window_title
                  table_header = lv_table_header
                  col_count    = 2
                  row_count    = 20 ).

      when if_wd_ovs=>co_phase_1.  "set search structure and defaults
        ovs_callback_object->context_element->get_static_attributes(
            importing static_attributes = ls_search_input ).
        ovs_callback_object->set_input_structure(
            input = ls_search_input ).

      when if_wd_ovs=>co_phase_2.  "select value list based on user input
        if ovs_callback_object->query_parameters is not bound.
******** TODO exception handling
        endif.
        assign ovs_callback_object->query_parameters->*
                               to <ls_query_params>.
        if not <ls_query_params> is assigned.
******** TODO exception handling
        endif.

*       Here is where we select plane types based on our seats requirement!
        select planetype from saplane
          into corresponding fields of table lt_select_list
          where seatsmax > <ls_query_params>-minimum_seats.

        ovs_callback_object->set_output_table( output = lt_select_list ).

      when if_wd_ovs=>co_phase_3.  "apply the user's selection
        if ovs_callback_object->selection is not bound.
******** TODO exception handling
        endif.
        assign ovs_callback_object->selection->* to <ls_selection>.
        if <ls_selection> is assigned.
          ovs_callback_object->context_element->set_attribute(
              name  = `PLANETYPE`
              value = <ls_selection>-planetype ).
        endif.

    endcase.

  endmethod.

Lty_stru_input is the input structure of the search help. Here, we define which search fields should be present. In our example, there is only one: the minimum number of seats the user requires for the airplane types they search for. Adding more fields here lets you refine the search help.

Lty_stru_list is the structure of the resulting list of values. Normally you would define at least two fields here, namely the search value and its accompanying text. There's nothing wrong with displaying more than two columns, if you want the list of values to contain more detailed info.

Resulting search help:

[Screenshot: search help input]

Search result:

[Screenshot: search result]
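To illustrate the point about multi-column value lists, here is a sketch of how the result structure and its selection could be extended with a second, descriptive column. The SAPLANE-PRODUCER field (data element S_PRODUCER) is an assumption based on the standard flight model; verify the field names against your own system.

```abap
* Sketch only: a two-column value list for the OVS example above.
* Assumes the flight-model table SAPLANE with fields PLANETYPE,
* PRODUCER and SEATSMAX - adjust to your own table if needed.
types:
  begin of lty_stru_list,
    planetype type s_planetye,   "the search value itself
    producer  type s_producer,   "descriptive second column
  end of lty_stru_list.

* ...and the matching selection in phase co_phase_2:
select planetype producer
  from saplane
  into corresponding fields of table lt_select_list
  where seatsmax > <ls_query_params>-minimum_seats.
```

If you add columns this way, remember that col_count in the set_configuration call should match the number of columns you want displayed.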

Scrum is gaining support across the IT community as a long-awaited and logical evolution from the hierarchical development methodologies of the previous decades. The reasons for its popularity are simple: it empowers the users, the very group of people who will have to live and interact with the results of the system delivery. By enabling this group to actively participate in the design and building of the systems, we allow them to re-define and shape their own reality. It's all about power to the people, as opposed to the former methodologies, where a small group of technocrats imposed their own interpretations of what users "needed", then spent months (or years) trying to come up with complex solutions that, more often than not, failed to meet major expectations or even lacked substantial functionality.

Scrum listens to rapidly changing demands, delivers functionality that works, and adapts solutions whenever the situation calls for it. So why not try it in other areas?

Having worked in Switzerland for more than a decade, I have found one aspect of the country particularly appealing: the principle of direct democracy. Switzerland is blissfully devoid of that pyramidal structure you find almost anywhere else; a small elite, usually consisting of a few hundred people, defining and enforcing laws and political directions. Instead, the country has a tradition of public votes at all levels of government, ensuring that the electorate can not only choose their preferred "rulers" but also participate in the outcome of important political decisions.

Swiss politicians are functionaries more than rulers. If the people, or "users", signal a thumbs down, the political course is altered accordingly. Based on specific numbers of collected signatures, the "users" can request additional "features", or cancel existing ones. When the (tax) bill arrives, they can rest relatively assured that the amounts they pay reflect the will of the majority, not that of a remote elite with sometimes obscure motives.

This bears a striking resemblance to Scrum. In a well-performing Scrum project, developers and users come together to find common ground, and to review progress and project direction. Features that are not needed, or not considered worth paying for, are scrapped or rejected. On the other hand, if the users decide on added or altered functionality, these new requests are taken into account (costs and time allowing). The final product can differ considerably from the early sketches, based on this continuous feedback and consensus-driven input from the major stakeholders - the users.

On the other hand, political top-down systems, or what our rulers love to call "representative democracies", closely resemble the hierarchical development methodologies of the past. Political parties set their directions in stone, finely honing their manifestos, with little or no room for dissent. Once elected, they may of course deviate from their own ideologies, usually fuelled by a mix of real-world constraints they failed to anticipate and the need for political concessions in order to stay in power, but one thing remains sure: a government never asks its people for any kind of advice. Once election day is over, the government sets about implementing its own ideas of what it perceives the "users" as needing. Not once are said users allowed to actively provide corrective feedback or input in order to alter political decisions, or even asked their opinion on specific issues - even when the outcome has huge national or local impact.

The result is a vicious circle: a set of decisions that more often than not fail to resonate with the majority of the population, and a tendency among the electorate to subsequently deride and despise the enforcing politicians. People are faced with a multitude of regulations, rules and laws they fail to grasp and never asked for in the first place, as well as costly projects that benefit only small sub-groups of the electorate, instead of being given the opportunity to provide continuous feedback and change the features of our "system". Because of this, we despise and loathe the political figureheads for their lack of competence, their lack of common sense, and their inability to connect with us.

Switzerland got it right from the beginning. For more than 700 years, this country has actively pursued Scrum as the fundamental basis of its political system. Decisions are implemented based on consensus, and the people provide a constant flow of feedback. Granted, this sometimes slows down the implementation, but at the end of the day, the "users" can rest fairly assured that the bill they're footing is a reflection of what they asked for, not the result of a group of government bureaucrats with a fixed idea of what suits them best.

Strange, then, that even as Scrum gains popularity across the development community, only the Swiss persist in using it as a political ideology.

Time for a revolution, anyone?
