Imagine the following situation: An important customer had been using your program without complaints until last week, when he had to process a very large contract with 20.000 positions. The customer was very disappointed, because this contract took 15 hours to process. The problem was a complete surprise to you because you expected a processing time of less than 2 hours from an extrapolation you made earlier with contracts of 100 and 1.000 positions. What happened and how can you avoid such problems in the future? 

=> This blog will explain why this behavior is possible and why a normal performance analysis is insufficient to identify such a problem. It will also present a new method for identifying such problems in test systems.

1. Runtimes of Different Contract Sizes

To understand the above problem better, let us look at detailed measurements of the uncorrected program that were executed for different contract sizes n between 1 and 100.000 positions. After the problem was solved, the measurements were repeated, giving the data for the corrected program. The results are listed below.

From these detailed measurements of the uncorrected program it becomes clear that up to n=1.000 one could conclude that the program scales linearly and that a linear extrapolation is reasonable. An extrapolation to n=20.000 positions gives a runtime of 259s*20 = 5.200s, or roughly one and a half hours. But the reality is dramatically different: the program needs about 10 times longer than expected, i.e. about 14 hours. The data for the corrected program is, of course, only available after an analysis was carried out and the problem was removed. You can see in the table above that the corrected program is even faster than what we expected from the linear extrapolation, in this example by a factor of 1.5.

=> Now we want to understand the reasons for this behavior. Then we would like to find an analysis method that tells us whether a linear extrapolation can be applied or not. If a linear extrapolation is not applicable, we would like to know how the problem can be identified and solved.

2. Scaling Behavior of ABAP Coding

To understand the problems of extrapolating the runtime of a program, we need to look at how the operations in a program depend on the amount of processed data. For example, some operations always need the same runtime and are completely independent of the amount of application data. Other operations process all the application data; their runtime is therefore proportional to the amount of data.

All in all, there are the following dependencies of the runtime t on the amount of application data n, where O(x) means "order of x":

  • t = O(1): Operations that do not depend on the amount of data.
  • t = O(n): Operations with a direct dependence on the amount of processed data. Examples are the complete processing of an internal table of size n, or operations that are called once per position.
  • t = O(n*n): Operations with a quadratic dependence on the amount of processed data. They appear, for example, when nested loops over internal tables of application data are processed (a short ABAP sketch follows after this list).
  • Higher-order dependencies are possible, but not relevant.
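
To make these orders more concrete, here is a minimal ABAP sketch; the table lt_pos and the operations on it are invented purely for illustration:

    " Illustrative only: the same internal table processed with different orders
    DATA: lt_pos   TYPE STANDARD TABLE OF i,
          lv_lines TYPE i,
          lv_sum   TYPE i.

    FIELD-SYMBOLS: <lv_outer> TYPE i,
                   <lv_inner> TYPE i.

    " t = O(1): independent of the table size
    DESCRIBE TABLE lt_pos LINES lv_lines.

    " t = O(n): one full pass over the table
    LOOP AT lt_pos ASSIGNING <lv_outer>.
      lv_sum = lv_sum + <lv_outer>.
    ENDLOOP.

    " t = O(n*n): nested loops over the same application data,
    " the inner block runs n times for each of the n outer lines
    LOOP AT lt_pos ASSIGNING <lv_outer>.
      LOOP AT lt_pos ASSIGNING <lv_inner>.
        " compare <lv_outer> and <lv_inner> here
      ENDLOOP.
    ENDLOOP.

The runtime of the nested loop grows with the square of the table size, even though each single inner operation is cheap.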

If you think about this a little more, you will notice that logarithmic dependencies also play an important role in ABAP programs: a read on an internal table with a binary search has a logarithmic dependence on the table size, and a sort of an internal table has a linear times logarithmic dependence. So in principle there are even more dependencies of the runtime t on the amount of application data n:

  • t = O(log n): Operations with a logarithmic dependence appear if the application data are processed with binary searches.
  • t = O(n*log n): Operations with nested loops where the inner operation is a binary search, or sorts of tables (see the sketch after this list).
  • Higher-order dependencies.
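
The logarithmic cases can be sketched in the same way; the table lt_keys below is again invented for illustration:

    " Illustrative only: logarithmic contributions on a sorted table
    DATA: lt_keys  TYPE STANDARD TABLE OF i,
          lv_key   TYPE i,
          lv_found TYPE i.

    " t = O(n*log n): sorting the table
    SORT lt_keys.

    " t = O(log n): a single binary search on the sorted table
    READ TABLE lt_keys INTO lv_found
         WITH KEY table_line = lv_key BINARY SEARCH.

    " t = O(n*log n): n loop passes, each with a binary search
    LOOP AT lt_keys INTO lv_key.
      READ TABLE lt_keys INTO lv_found
           WITH KEY table_line = lv_key BINARY SEARCH.
    ENDLOOP.

On a sorted table the binary search keeps the inner operation at log(n), so the whole loop stays at n*log(n) instead of n*n.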

However, in practice, with measurement inaccuracies and limited amounts of data and statistics, it is very difficult to detect the logarithmic dependence; O(n) and O(n*log n), for example, lie very close together. We will take the logarithmic factors into account by using wider limits.

We can describe the total runtime T as a sum of contributions with these dependencies:

 T (n) = A * 1 + B * n + C * n*n + … (1)

The two figures below show the complete behavior of our illustration example: figure 1 uses linear scales and figure 2 uses logarithmic scales. As you can see, logarithmic scales are better suited to displaying scaling behavior. Both figures display the same lines:

  • The constant and linear contributions are shown in black.
  • The red line shows the small quadratic contribution.
  • The blue lines are the totals, the solid one includes the quadratic contribution (uncorrected version), the dashed line does not include it (corrected version).

The prefactors A, B, and C in formula 1 are chosen so that they fit our example. The factor C for the quadratic term is very small, meaning there are only a few quadratic operations. They contribute nearly nothing as long as n is small. This is the problematic case for an analysis; test cases with large quadratic contributions are easy to analyze, because their nonlinearity can hardly be overlooked and becomes obvious even in rather small tests.
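
To see how a small quadratic prefactor stays hidden, the fragment below evaluates formula 1 with purely hypothetical prefactors (the values of lc_a, lc_b and lc_c are invented for illustration and are not the measured prefactors of this example):

    " Hypothetical prefactors, chosen only to illustrate the masking effect
    CONSTANTS: lc_a TYPE f VALUE '5.0',     " constant part in seconds
               lc_b TYPE f VALUE '0.25',    " seconds per position
               lc_c TYPE f VALUE '0.0001'.  " seconds per squared position

    DATA: lt_sizes TYPE STANDARD TABLE OF i,
          lv_n     TYPE i,
          lv_quad  TYPE f,
          lv_total TYPE f,
          lv_share TYPE f.

    APPEND: 100 TO lt_sizes, 1000 TO lt_sizes, 20000 TO lt_sizes.

    LOOP AT lt_sizes INTO lv_n.
      lv_quad  = lc_c * lv_n * lv_n.
      lv_total = lc_a + lc_b * lv_n + lv_quad.
      lv_share = lv_quad / lv_total.
      WRITE: / 'n =', lv_n, 'T(n) =', lv_total, 'quadratic share =', lv_share.
    ENDLOOP.

With these made-up numbers the quadratic term contributes only about 3% of the total runtime at n=100, but almost 90% at n=20.000, which is exactly the masking pattern described above.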

Figure 1: Behavior of example with linear scales

Figure 2: Behavior of example with logarithmic scales

=> The parts of a program can have different dependencies on the amount of processed data n. The relationship can be independent of n, proportional to n, or even quadratic in n. Now let us discuss the quadratic dependence in more detail.

3. Acceptable Scaling Behavior in Business Applications

If there is a quadratic dependence on the amount of application data n, then the processing of larger amounts of data becomes extremely slow. Therefore, it is important to check whether operations with quadratic dependencies are necessary at all.

A task which appears in many programs is the comparison of two internal tables, both containing a number of entries that scales with n. The solution starts with a loop over table itab1, which must be fully processed, giving a linear contribution. But normally it is not necessary to check all lines of the second table itab2 for each line of table itab1; usually only one or a few corresponding lines of itab2 must be checked per line of itab1. The total number of checks is therefore not of the order n*n, but of the order n. The comparison of two tables can be solved by an operation scaling with n*log(n) if the corresponding line can be found efficiently; for example, a READ or a LOOP AT ... WHERE on a sorted table scales with log(n).
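
A minimal sketch of this pattern; the structure ty_pos and the tables lt_contract and lt_check are invented for illustration and stand in for itab1 and itab2:

    " Illustrative only: comparing two tables in O(n*log n) instead of O(n*n)
    TYPES: BEGIN OF ty_pos,
             posnr TYPE i,
             value TYPE p LENGTH 15 DECIMALS 2,
           END OF ty_pos.

    DATA: lt_contract TYPE STANDARD TABLE OF ty_pos,
          lt_check    TYPE SORTED TABLE OF ty_pos WITH UNIQUE KEY posnr,
          ls_pos      TYPE ty_pos,
          ls_check    TYPE ty_pos.

    " O(n): every line of the first table is processed exactly once
    LOOP AT lt_contract INTO ls_pos.
      " O(log n): key access on the sorted table replaces a full scan of lt_check
      READ TABLE lt_check INTO ls_check WITH TABLE KEY posnr = ls_pos-posnr.
      IF sy-subrc = 0 AND ls_check-value <> ls_pos-value.
        " handle the deviation for this position
      ENDIF.
    ENDLOOP.

Declaring lt_check as a sorted table makes the READ an optimized key access (a binary search), so the whole comparison scales with n*log(n).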

Of course there are tasks which require quadratic coding, for example some optimization tasks. However, tasks of this type are very rare in business applications and usually it will be known that a special task can only be solved by nonlinear coding. In all other cases, nonlinear coding must be avoided.

=> For most business transactions the highest dependence of the runtime on the application data is n*log(n). Quadratic orders are exceptions and can be considered as programming bugs in most cases.

4. Analysis with Large Amounts of Data

How was the problem solved in the customer system? As can be seen from the figures, the quadratic contribution dominates the whole runtime for large n; at n=20.000 it accounts for about 93% of the total runtime. If debugging is started at random from the work process overview (SM50), the probability that it stops at the code position containing the quadratic operation is therefore 93%. Once the coding is identified, it is usually quite easy to find a solution. Fortunately, only one code position was quadratic, so the solution to the customer's problem was found quite quickly.

Can we use this so-called Monte Carlo debugging, i.e. starting the debugger at random moments, to find coding that consumes large portions of the runtime?

  • It is quite convenient and simple if large enough test cases are available.
  • It requires test cases of a size similar to the largest customer use cases.
  • Large test cases are often not realizable, because it can be difficult to generate consistent test data. Also the hardware of test systems, especially the memory, might not support very large test runs.
  • It is not that easy to identify more than one nonlinear code position per test run.

Therefore, realistic tests with really large amounts of data are often not an option. We need a way to identify nonlinear coding more efficiently with rather small amounts of test data.

=> While a problem with nonlinear coding can be identified quite easily in the customer system when very large amounts of data are processed, this is not a suitable method for finding nonlinear coding in a test system. A method is needed that works with small amounts of test data.

5. Detailed Runtime Measurements to Identify Nonlinear Coding

We have seen that the total runtime is insufficient to identify nonlinearities. We would like to measure much smaller runtime parts, i.e.

        T (n) = t_1 (n) + t_2 (n) + t_3 (n) + ...

If these t_i(n) are small enough, then we can assume that the runtime of each operation has one specific scaling dependency, i.e. constant, linear or quadratic. They are not mixtures of dependencies like the total runtime T (see formula 1), i.e.

        t_i (n) = a_i * f_i (n)    where   f_i (n) = 1, n or n*n    (2)

To detect nonlinearity, we execute two runtime measurements with different values for n, i.e. n1 and n2. We know that it is not sufficient to compare the total runtimes T(n); instead, all the individual runtimes t_i(n) must be compared. From their ratios it is possible to identify the scaling dependence, i.e.

        Ratio_i (n2,n1) = t_i (n2) / t_i (n1)

From the ratio it is very easy to deduce the n-dependence of the operation. The measured ratios will show nearly all possible values between 0 and infinity; therefore we define intervals that separate the different scaling behaviors. This is most easily done if the factor n2/n1 is 10: then a ratio close to 1 indicates a constant part, a ratio close to 10 a linear part, and a ratio close to 100 a quadratic part. The factor 2 in the intervals takes the measurement variability and the logarithmic dependencies into account.
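
A sketch of such a classification is shown below; the interval boundaries (a factor of 2 around the expected ratios 1, 10 and 100) and the runtimes are assumptions made for this illustration, not the exact limits of the original analysis:

    " Illustrative classification of one pair of detailed runtimes, n2 = 10 * n1
    DATA: lv_t1    TYPE f VALUE '0.8',    " t_i(n1) in milliseconds (made up)
          lv_t2    TYPE f VALUE '76.0',   " t_i(n2) in milliseconds (made up)
          lv_ratio TYPE f,
          lv_class TYPE string.

    lv_ratio = lv_t2 / lv_t1.

    IF lv_ratio < 2.
      lv_class = 'constant, t = O(1)'.
    ELSEIF lv_ratio >= 5 AND lv_ratio < 20.
      lv_class = 'linear, t = O(n) or O(n*log n)'.
    ELSEIF lv_ratio >= 50.
      lv_class = 'quadratic or worse - candidate for nonlinear coding'.
    ELSE.
      lv_class = 'unclear - repeat the measurement'.
    ENDIF.

    WRITE: / 'Ratio =', lv_ratio, lv_class.

In this made-up example the ratio is 95, which falls into the quadratic interval and marks the code position as a candidate for nonlinear coding.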

=> If we can measure the runtime of small operations, then the scaling can be determined by a comparison of these small runtimes coming from two measurements with different amounts of data.

6. ABAP Runtime Analysis Offers Detailed Measurements

The ABAP Runtime Analysis (SE30) measures runtimes with a very fine granularity, down to individual statements and calls, so we can assume that these measured runtimes show only simple n-dependencies (as in formula 2).

The only thing still missing is a tool that automatically compares thousands of detailed runtimes t_i(n). For every t_i(n1) from measurement 1 it has to find the corresponding time t_i(n2) from measurement 2. Such a tool is available and will be discussed in a forthcoming paper.

=> The ABAP Runtime Analysis offers detailed measurements. Together with a small tool it allows the detailed analysis of programs with respect to nonlinear coding. The guarantee that a program scales linearly is one of the most important prerequisites for a good performance.

 

 
