
Welcome to the third post in my blog series on the SQL Monitor. Last time I showed you how to use the SQL Monitor to find and analyze your system’s most time-consuming requests – that means business processes. In addition to this request-based approach, it can also be useful to perform the analysis on the level of individual SQL statements. This is because a single SQL statement may be triggered by several different requests. Hence, if such a statement performs particularly poorly, it can affect multiple business processes.

 

One of the most common reasons for poor performance is reading unnecessary database records. Therefore, today we will scan our demo system for SQL statements that are expensive due to the huge amount of data they access. In particular, we want to answer the following questions:

 

  1. Which SQL statements have read or modified the most database records?
  2. Which requests trigger the top SQL statement in terms of accessed database records?
  3. How can we improve this statement’s performance?

 

Question 1

 

"Which SQL statements have read or modified the most database records?"

 

As before, the starting point of our analysis is the SQL Monitor’s data display transaction SQLMD. To answer the first question, we need an overview of the SQL statements executed in our system, ordered by the total number of accessed database records. The first requirement – grouping the data by SQL statement – can easily be met by selecting aggregation by source position on the selection screen (section “Aggregation”). Establishing the appropriate ordering, however, requires some manual steps on our NetWeaver 7.40 SP3 demo system, since in this release there is no option to sort the data by the total number of accessed database records in section “Order By” (a corresponding option is planned for NetWeaver 7.40 SP5). The workaround is to clear the field “Maximal Number of Records”, which causes all available monitoring records to be displayed. Afterwards we can manually sort the result list in the ALV.

 

Note that if you have a large number of monitoring records in your system, SQLMD might take a substantial amount of time to build the result list. In that case you may be able to speed things up by entering a high number (for instance 10,000) in the field “Maximal Number of Records” instead of clearing it completely.

 

Altogether, the configured selection screen looks like this:

Capture1.PNG

Screenshot 1: Configuration of the selection screen in transaction SQLMD.

 

After hitting F8 we are presented with the list of all SQL statements that were executed in our system. To obtain the desired ordering, all we need to do is locate the column “Total Records” and sort it in descending order. A word of advice: be careful not to confuse the columns “Records” and “Total Records”. As explained in my last post, the former denotes the number of available detail records while the latter indicates the number of accessed database records. For our demo system the final result is depicted in the following screenshot.

Capture2.PNG

Screenshot 2: List of SQL statements ordered by the total number of accessed database records.

 

This list provides an overview of the SQL statements that have caused a large amount of data to be transferred between the database and the application server. As you can see, all of these statements are located in custom code. Each of the two dominating statements at the top of the list has accessed more than a billion database records in total. Hence, question number one is answered.

 

Question 2

 

"Which requests trigger the top SQL statement in terms of accessed database records?"

 

Turning to the second question, we now focus on the statement at the very top of the list, which is located in include ZSQLM_TEST11 at line 34. We want to know which request entry points have caused this statement to be executed. If you have worked through my previous post, you might already guess that this information is just a single click away. Remember the column “Records”? As explained in that post, this is a generic column denoting the number of available detail records. Since we have chosen to aggregate the data by source position, in our case the detail records are the requests that have triggered the SQL statement. To drill down into these detail records, all you have to do is click the hotspot link in the “Records” column. It couldn’t be any easier, could it?

 

For the SQL statement at the top of the list, the “Records” column indicates that there are three different driving requests – that means business processes. In particular, clicking the hotspot link takes us to the following list:

Capture3.PNG

Screenshot 3: List of request entry points that have caused the top SQL statement to be executed.

 

As you can see, all of the driving entry points caused our statement to access an average of about 25,000 database records per execution (column “Mean Recs.”). The majority of executions – and thereby the majority of accessed database records – were caused by the report ZSQLM_TEST11. In addition, the statement was also triggered by an RFC module and a transaction, both of which are almost negligible due to their low number of executions. Thus, if we optimized our top statement’s performance, the request that would benefit most is the report ZSQLM_TEST11. These observations answer the second of our questions.

 

Question 3

 

"How can we improve this statement’s performance?"

 

Focusing on the last question, let us now turn back to the SQL statement itself and think about how we could speed it up. Your first impulse might be to investigate the code, but hold on a second – before digging through the ABAP sources, the SQL Monitor may already provide you with valuable insights!

 

When dealing with SQL statements that process an immense number of database records, it is advisable to check the maximum and minimum number of accessed records in the columns “Max. Records” and “Min. Records”, respectively. As you can see from the second screenshot, for our top statement the maximum amounts to 75,000 records while the minimum is 0. Moreover, the third screenshot indicates that the statement accessed the database table ZSQLM_TEST_USERS (column “Table Names”). Checking the table contents with transaction SE16, we find that it contains exactly 75,000 records.

Capture5.PNG

Screenshot 4: Number of records contained in the table accessed by the top SQL statement.

 

Hence, our top statement sometimes accessed the entire contents of the database table while at other times it accessed nothing at all. This is very suspicious; in particular, the fact that at times the whole table content was accessed suggests that our top statement may involve an empty FOR ALL ENTRIES table. To check this assumption we can navigate to the source code by double-clicking either the top statement itself in the overview (second screenshot) or one of its driving requests in the detail view (third screenshot).

Capture4.PNG

Screenshot 5: Source code for the top statement.

 

Just as suspected, the statement is a SELECT with a FOR ALL ENTRIES clause and no prior check of the FOR ALL ENTRIES table’s contents. When the FOR ALL ENTRIES table is empty, the SELECT returns all records contained in the database table. What’s particularly important is that an empty FOR ALL ENTRIES table invalidates the WHERE clause completely, even if it contains additional conditions. In the majority of cases this is not what the developer intended, especially when the database table contains even more records than in our case. To put it plainly, this is not just a statement with room for performance optimization – it’s a bug. To fix it, all you need to do is wrap the SELECT in an IF statement to make sure it is never executed with an empty FOR ALL ENTRIES table. If you really do want to read all database records when the FOR ALL ENTRIES table is empty, add an ELSE branch and use a plain SELECT without any WHERE clause. This makes your code much more robust and readable and, finally, answers the third and last question.
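The fix described above can be sketched as follows. Note that this is only an illustration, not the actual code from the demo system: the driver table lt_keys, its field user_id and the result table lt_users are assumed names (only the database table ZSQLM_TEST_USERS appears in the monitoring data).

```abap
" Guard the FOR ALL ENTRIES SELECT against an empty driver table.
" lt_keys, user_id and lt_users are hypothetical names for illustration.
IF lt_keys IS NOT INITIAL.
  SELECT * FROM zsqlm_test_users
    INTO TABLE lt_users
    FOR ALL ENTRIES IN lt_keys
    WHERE user_id = lt_keys-user_id.
ELSE.
  " Only if reading the whole table really is the intended behavior
  " for an empty driver table; otherwise simply leave lt_users empty.
  SELECT * FROM zsqlm_test_users
    INTO TABLE lt_users.
ENDIF.
```

With the IF guard in place, an empty lt_keys can no longer silently disable the WHERE clause and pull all 75,000 records from the table.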

 

Wrap-Up

Stepping through today’s scenario, I showed you how to analyze the SQL Monitor data starting from the SQL statement level. For this purpose we generated a list of all SQL statements executed in our system and sorted the results by the total number of accessed database records. Focusing on the top statement we used a simple one-click drill-down operation to obtain the list of requests (business processes) that caused the statement to be executed. Furthermore, we leveraged the monitoring data to reveal that the top statement involves a potentially empty FOR ALL ENTRIES table.

 

One final remark: In view of SAP HANA, sorting the SQL Monitor data by the total number of accessed database records can also be useful when using aggregation by request. This allows you to locate data-intensive requests which might significantly benefit from a code push-down onto the database.

 

That’s it for today. If you still feel like walking through another hands-on scenario (I hope you do), don’t miss my next post in ABAP for SAP HANA.

 

Couldn't find the next blog post? Here it is: Hands-On with the SQL Monitor Part 3: Analysis of Individual Database Tables
