andreas.vogel


Have you ever asked the question "Who uses all these Z* reports? Are they used at all?" Well, ST03N could give you the answer, but sometimes you want to get the result in a more condensed format. So you implement your own report that utilizes the aggregated statistical records of ST03N to calculate your own metrics and reports.

You may have noticed my last blog, How to read ST03N datasets from DB, where the API for NW2004s was presented. This blog is about NW2004 and earlier releases.

The function SAPWL_WORKLOAD_GET_STATISTIC provides almost all the tables used by ST03N. This API is available for SAP_BASIS 640 (NW2004) and earlier releases (with small changes). With a small program you can read the data and extract whatever you are interested in. The code snippet [1] shows you the basic steps.

Let's have a short look at the interface of the function module. Before you can use this function you need to know the parameters.

PERIODTYPE specifies the type of the period you want to analyze. Valid values are D for days, W for weeks, and M for months.

STARTDATE is a date. Specify the day you are interested in.

Parameter HOSTID is not used any more. You can ignore this parameter.

INSTANCE is the name of an application server. Use the name TOTAL if you want to get the data of all application servers.

ONLY_APPLICATION_STATISTIC is a flag. Specify an X if you just want to read the table APPLICATION_STATISTIC.
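Putting these parameters together, a call could look like the following sketch. It reads only the table APPLICATION_STATISTIC, as described above; the line type SAPWLAPSTA is an assumption from memory, so verify the function's actual interface in transaction SE37.

```abap
* Sketch only: read one day of application statistics for all
* application servers. The line type SAPWLAPSTA is an assumption -
* verify the function's interface in SE37.
DATA lt_appstat TYPE STANDARD TABLE OF sapwlapsta.

CALL FUNCTION 'SAPWL_WORKLOAD_GET_STATISTIC'
  EXPORTING
    periodtype                 = 'D'       " daily period
    startdate                  = sy-datum  " today
    instance                   = 'TOTAL'   " all application servers
    only_application_statistic = 'X'       " just APPLICATION_STATISTIC
  TABLES
    application_statistic      = lt_appstat
  EXCEPTIONS
    OTHERS                     = 1.

IF sy-subrc <> 0.
  WRITE: / 'No workload data found for', sy-datum.
ENDIF.
```

Leaving ONLY_APPLICATION_STATISTIC empty and supplying more TABLES parameters gives you the other datasets as well.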

The function returns many different tables. You can view all these data using ST03N. The lower left tree of ST03N titled Analysis Views shows the available datasets (see the following screenshot).

image

The first entry of the tree, Workload Overview, unfortunately needs special treatment. To retrieve this data from the database you need to call another function, SAPWL_WORKLOAD_GET_SUMMARY.

The importing parameters are the same as for SAPWL_WORKLOAD_GET_STATISTIC.
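A call might therefore be sketched like this; note that the name of the TABLES parameter (SUMMARY) and its line type are assumptions on my part, so please check the interface in SE37.

```abap
* Sketch only: read the Workload Overview data. The TABLES parameter
* name SUMMARY and the line type SAPWLSERV are assumptions -
* verify the actual interface in SE37.
DATA lt_summary TYPE STANDARD TABLE OF sapwlserv.

CALL FUNCTION 'SAPWL_WORKLOAD_GET_SUMMARY'
  EXPORTING
    periodtype = 'D'       " daily period
    startdate  = sy-datum  " today
    instance   = 'TOTAL'   " all application servers
  TABLES
    summary    = lt_summary
  EXCEPTIONS
    OTHERS     = 1.
```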

Now you are able to implement a program that reads the ST03N datasets. But one question remains: before you can read the datasets you need to know which ones are available. How can you read the contents of the workload database? The answer is easy: SAPWL_WORKLOAD_GET_DIRECTORY is the function that gives you the table of contents. ST03N itself uses this function to build its directory tree. An example is shown by the following screenshot.
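A directory read could be sketched as follows; the TABLES parameter name DIRECTORY and its line type are assumptions, so check SE37 for the actual interface.

```abap
* Sketch only: list which datasets are available in the workload
* database. Parameter and type names are assumptions - verify in SE37.
DATA: lt_directory TYPE STANDARD TABLE OF sapwldirec,
      ls_dir       LIKE LINE OF lt_directory.

CALL FUNCTION 'SAPWL_WORKLOAD_GET_DIRECTORY'
  TABLES
    directory = lt_directory
  EXCEPTIONS
    OTHERS    = 1.

LOOP AT lt_directory INTO ls_dir.
  " each entry should name instance, period type, and period start -
  " i.e. the valid input combinations for SAPWL_WORKLOAD_GET_STATISTIC
ENDLOOP.
```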

image

Related Weblogs and Articles

[1] Code Snippet: Which Z* reports are used in NW2004

[2] Statistical Records Part 1: Inside STAD

[3] Statistical Records Part 2: RFC Statistics

[4] How to read ST03N datasets from DB

[5] NW2004s Workload Statistics Collector: Implementing a BAdI as user exit

Have you ever asked the question "Who uses all these Z* reports? Are they used at all?" Well, ST03N could give you the answer, but sometimes you might want the result in a more condensed format. So you implement your own report that utilizes the aggregated statistical records of ST03N to calculate your own metrics and reports.

With NetWeaver 2004s (SAP_BASIS 700) a new API is available. The function SWNC_COLLECTOR_GET_AGGREGATES provides all the tables used by ST03N. With a small program you can read the data and extract whatever you are interested in. The code snippet shows you the basic steps.

Let's have a short look at the interface of the function module. Before you can use this function you need to know the parameters. You might also read the documentation provided with the function in the ABAP Workbench.

The parameter COMPONENT is used to specify the application server. Use the name TOTAL if you want to get the data of all application servers.

The parameter ASSIGNDSYS is optional. Leave it empty, or specify the SYSID of your system.

PERIODTYPE specifies the type of the period you want to analyze. Valid values are D for days, W for weeks, and M for months.

PERIODSTRT is a date. Specify the day you are interested in.

The parameter SUMMARY_ONLY is optional. Specify an X if you just want to read the table TASKTYPE.

Parameter FACTOR is optional. The default value is 1000, which scales all time metrics from microseconds to milliseconds. Leave it as it is.
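With these parameters a minimal call could be sketched as follows. The TABLES parameter USERTCODE is described below; its line type SWNCAGGUSERTCODE is an assumption, so check the function's interface and documentation in SE37.

```abap
* Sketch only: read one day of aggregated data via the NW2004s API.
* The line type SWNCAGGUSERTCODE is an assumption - verify in SE37.
DATA lt_usertcode TYPE STANDARD TABLE OF swncaggusertcode.

CALL FUNCTION 'SWNC_COLLECTOR_GET_AGGREGATES'
  EXPORTING
    component  = 'TOTAL'    " all application servers
    periodtype = 'D'        " daily period
    periodstrt = sy-datum   " today
  TABLES
    usertcode  = lt_usertcode
  EXCEPTIONS
    OTHERS     = 1.

IF sy-subrc <> 0.
  WRITE: / 'No workload data found for', sy-datum.
ENDIF.
```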

Table USERTCODE
This table has four key fields: TASKTYPE, ACCOUNT, MANDT, and ENTRY_ID. TASKTYPE is a byte which can be decoded using CL_SWNC_COLLECTOR_INFO=>TRANSLATE_TASKTYPE, and ENTRY_ID contains the transaction code or a report name. The code snippet shows how this can be decoded. As data fields the table contains metrics like response time, CPU time, and many others.
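The decoding step could be sketched like this; the exact signature of TRANSLATE_TASKTYPE and the line type SWNCAGGUSERTCODE are assumptions, so check class CL_SWNC_COLLECTOR_INFO in SE24 before using it.

```abap
* Sketch only: decode the TASKTYPE byte of each USERTCODE entry.
* The method signature and the line type are assumptions - check SE24.
DATA: lt_usertcode TYPE STANDARD TABLE OF swncaggusertcode,
      ls_entry     LIKE LINE OF lt_usertcode,
      lv_ttext     TYPE string.

* lt_usertcode is filled by SWNC_COLLECTOR_GET_AGGREGATES

LOOP AT lt_usertcode INTO ls_entry.
  lv_ttext = cl_swnc_collector_info=>translate_tasktype(
               ls_entry-tasktype ).
  WRITE: / lv_ttext, ls_entry-entry_id.
ENDLOOP.
```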
Authorization
The ST03N data contains user names. You will need the profile S_TOOLS_EX to see all the user names; otherwise the user names will be encrypted.
Related Weblogs

[1] Code snippet: Which Z* reports are used?

[2] Statistical Records Part 1: Inside STAD

[3] Statistical Records Part 2: RFC Statistics

[4] NW2004s Workload Statistics Collector: Implementing a BAdI as user exit

My blog Statistical Records Part 1: Inside STAD [1] introduced the statistical records of an ABAP application server. This time I would like to talk about the internal structure of the statistical records. As an example the RFC records are introduced. This blog is about SAP_BASIS 700, which comes with NetWeaver 2004s. The overall concept described here has not changed very much since release 4.6C, but details could be different.

Structure of Statistical Records

First let's have a look at the structure of the statistical records. A statistical record consists of several parts. The first part is the so-called main record where we find information like transaction code and program name, start and end time, response time, etc. This main record is part of every statistical record.

Additionally we may find other optional parts. Whenever database operations have been performed we will get a database subrecord for this dialog step. If stored procedures were executed on the database we will get a subrecord for DB procedures. If a report has been executed as a batch job we will get a batch subrecord, telling us e. g. the job name and job ID. The following picture illustrates the structure of a statistical record.

image

All these subrecords are optional, and the SAP kernel will generate these on demand. This saves a lot of disk space in the file system and speeds up writing and reading the statistics file. Here is the complete list of subrecords:

  • batch subrecords
  • database subrecords
  • db table subrecords
  • db procedure subrecords
  • spool print subrecords
  • spool activity subrecords
  • ADM message subrecords
  • client information subrecords
  • RFC client subrecords
  • RFC server subrecords
  • RFC client destination subrecords
  • RFC server destination subrecords
  • http client subrecords
  • http server subrecords
  • http client destination subrecords
  • http server destination subrecords
  • smtp client subrecords
  • smtp server subrecords
  • smtp client destination subrecords
  • smtp server destination subrecords
  • VMC subrecords
  • ESI subrecords (SAP_BASIS 710)
  • ESI destination subrecords (SAP_BASIS 710)

Don't get confused; we will not discuss all of them in detail now. Instead let's have a closer look at the RFC subrecords.

What are RFC statistics records?

The RFC statistical records can tell us some details of the RFC activity of a program. If a program performs RFC calls then you will get RFC client and RFC client destination records. An RFC server process receives RFC calls and will write RFC server and RFC server destination records. If the RFC server process itself performs RFC calls then it is a client and a server simultaneously. In this case you will see all four different RFC subrecords within one statistical record. STAD displays the RFC overview, if RFC subrecords are present.

RFC Client
Let's start with the RFC client records. The following picture is a STAD screenshot showing the overview for an RFC client process.

image

We can see that 30 RFC calls have been performed over (at least) 5 connections to 5 destinations for this dialog step. The calling time for the client was 7.212 ms in total, while the remote execution time (for all servers) was 6.688 ms. The calling time includes codepage conversions and network transport. The remote execution time is measured by the RFC server and includes authority check and function execution only. 11.330 bytes have been sent to the servers, and 14.225 bytes have been received. The idle time is the time between two subsequent RFC calls for an open connection. There are two highlighted fields on this screen: Connections and Calls. You may click on these fields to display more details. Let's continue with the calls, which will display the RFC client records.

An RFC client record describes exactly one single RFC call. The most important information is the function name, the execution time of this function, and the amount of data transported in both directions. The following screenshot of STAD shows an RFC client record.

image

This screen shows an RFC call of a function called START_TCOLL_REPORT. The Call number is 6, indicating that this call is the 6th call over this particular RFC connection. The Connection ID of this connection is displayed (a GUID) as well as the name of the destination. The calling time and the remote execution time are given, and a few hundred bytes have been transferred between client and server. The time for data transfer (data send time and data receive time) is less than 1 ms. The field communication step shows that sending and receiving happened during the same dialog step which should be the normal situation for a synchronous RFC. For asynchronous RFC sending and receiving may happen during different dialog steps.

Of course it is not possible to log each individual RFC call in this way. The amount of data would be too large to handle. Instead the SAP kernel logs the five most expensive RFC client records only, all others are discarded. Expensive means "expensive in terms of execution time". The profile parameter stat/rfcrec controls the maximum number of RFC subrecords (default value = 5). Increasing this value is dangerous, because the file system will be flooded with statistical records. And it affects the system performance, because all these statistical records have to be processed by the SAP workload collector (which will be the topic of another blog).

Instead of logging every single RFC call, the RFC client information is summarized, and for each RFC connection an RFC client destination record is written. From this record we can see how many RFC calls were performed for this particular connection, how many bytes have been transported in total, and of course the total execution time. Again the maximum number of RFC client destination records is limited by the profile parameter stat/rfcrec. If a client performs RFC calls across more than 5 connections, then only the 5 most expensive connections (in terms of execution time) are logged. The following STAD screenshot shows an RFC client destination record.

image

This screen shows the details for a particular RFC connection. We can see the connection ID, the type of RFC (synchronous), the types of the local and the partner engines (R/3 System in both cases), and the combination of user name and client number used for logon. Destination is the name of the RFC destination (see transaction SM59). Instance is the name of the local application server from where this call was issued, and IP address is the local IP address. From the remote application server we see Partner instance, Partner IP address, and Partner release (not all RFC servers provide this information, but an SAP application server does). In total six calls have been performed for this particular connection during this dialog step, and the total calling time was 2624 ms.

RFC Server

To complete the picture, the following STAD screenshot depicts the RFC server records. Again we start with the overview.

image

This screen shows the overview for an RFC server process during a dialog step. One incoming RFC call from one connection has been received. The RFC server has received 12.415.680 bytes, and the server execution time (authority check and function module execution) was 711 ms, while the calling time (including code page conversions and network transfer) was 6.656 ms.

Let's go to the details of that particular RFC call (click on the hotspot behind the field Calls). The following screen is very similar to the one of an RFC client. We can see the name of the executed function module, the connection ID, and the destination name NONE (an RFC call within the same application server) as well as the performance metrics.

image

The next screenshot shows the RFC server destination record. It shows the details of the RFC connection and the summary of all RFC calls for this connection. We can see that this connection has been used for an asynchronous call without a response. The details of this screen are similar to the RFC client destination record (see above).

image

Please note that the lifetime of an RFC connection is independent of a dialog step. Depending on the scenario and the implementation, an RFC connection may be used during several subsequent dialog steps of an application or program. The RFC client/server destination records are created in the context of a dialog step and will always show the activity for a connection during this dialog step.

There are some details of an RFC server process which are important to know. Let's assume an RFC client performs several calls to an RFC server. If the connection is not closed explicitly then it will be used for all these calls. But when will the RFC server process write a statistical record? After each RFC call? No, that's not the case. The SAP kernel will write a statistical record after the roll-out of the RFC server process from the work process, and the roll-out will occur if no subsequent RFC call is received within a timeframe of 500 ms. At that point the data for the statistical record is collected, and this includes the RFC subrecords, of course. If subsequent RFC calls follow within 500 ms then the context of the RFC server will not be rolled out from the work process. This is a performance optimization, because it avoids frequent roll-out and roll-in which would increase the RFC execution time. Please note that the overall response time of an RFC server process may be 500 ms larger than the calling time. The overall response time is shown by STAD on the main list, while the calling time is shown with the RFC details (see the RFC server destination record).

The design principle of client/server single records and client/server destination records has also been used in a similar way for the http and smtp subrecords. Check it out, STAD will display all of them.

To learn more about RFC please read the interesting article by Masoud Aghadavoodi Jolfaei and Eduard Neuwirt [3]: Master the five remote function call (RFC) types in ABAP.

If you want to learn more about SAP performance optimization the following book might be interesting for you: T. Schneider, SAP Performance Optimization Guide, 4th ed., SAP PRESS, ISBN 978-1-59229-069-7

Related Weblogs and Articles

[1] Statistical Records Part 1: Inside STAD

[2] NW2004s Workload Statistics Collector: Implementing a BAdI as user exit

[3] Master the five remote function call (RFC) types in ABAP

What are statistical records?

 

The ABAP statistical records are very useful if you want to know what's going on in your system. They are a technical logging feature implemented in the SAP kernel, completely independent of the applications. Each dialog step is recorded, and information like user and program name, response time, CPU time, and more is logged. This information is very useful when you want to analyze the performance (or a performance bottleneck) of your SAP system. From the statistical records you can get a hint where you should continue with a detailed analysis: CPU or database performance, memory issues, analysis of a particular application, and more.

 

How are statistical records created?

 

Statistical records are created by the work processes (WP) of an ABAP application server. At the end of a dialog step the WP collects the necessary information like user ID, program name, start time and end time of the dialog step, calculates metrics like response and DB time, and stores all this together in a so-called statistical record. This record is then stored in a shared buffer. All WPs of an ABAP application server share the same buffer, and the statistical records are stored in the buffer chronologically.


When the buffer is full it is flushed to a file in the file system. Each SAP application server has its own statistics files.


The following screenshot illustrates the architecture.

image

 

Where are they stored?

 

The SAP kernel does not use one single file for the statistical records. Each hour a new file is created where the statistical records are stored. This keeps the file size small and allows a flexible reorganization of the files (see below).


 

The files are stored in a directory like /usr/sap/<SysID>/<Instance Directory>/DATA. Try transaction AL11 to display the DIR_DATA directory. You'll find one file named stat and many files named statnnn (where nnn is a number). These are the statistics files. Don't try to display these files; they contain binary data only.

How are the stat files reorganized?

 

The kernel itself maintains the number of statistics files. The profile parameter stat/max_files controls the maximum number of statistics files. Each hour a new file is created and the oldest file is deleted automatically. Make sure you have enough disk space available for all the statistics files. Otherwise the syslog (transaction SM21) shows the following message:


 

> File: /usr/sap/BCE/D26/data/stat
> Failed to write to file, error 0028
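The parameter is set in the instance profile; a fragment might look like this (the value 48 is purely illustrative, keeping two days of hourly files):

```
# instance profile fragment (illustrative value)
stat/max_files = 48
```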

 

How can I access statistical records?

 

Using transaction STAD you can access the statistics files and display the statistical records. Here is a screenshot of the start screen of STAD.

image

Just specify date, time, and length of the interesting interval and hit the enter key. If necessary you can narrow your search by specifying values for the filter parameters:

image

 

If you select the button Server selection you can select the application servers you want to analyze. By default the statistical records of all servers are collected by STAD. Make sure that the checkbox Include statistics from memory is checked. This will force a buffer flush (see above) before the statistical records are read from the file system.


Now hit the Enter key. STAD presents the result as a simple list.

 

image

 

Each line represents one dialog step. You can see the starting time of that dialog step, the application server name, and the program or transaction name, respectively.


For each dialog step several metrics are shown, e.g. response time, CPU time, and DB time (scroll to the right to see these columns).

 

image

 

Please note: you need the authorization profile S_TOOLS_EX to see all user names. Without this profile you will see only your own user name. All other user names are hidden, and STAD displays "-?-" instead.


There is one thing I would like to point out here. We have specified an interval starting from 10:04:00, but the list shows a statistical record from 09:59:47. Is this an error? No, because 09:59:47 is the start time of the dialog step. The response time of this dialog step is 621 seconds, so the step ended at 10:10:08. At that time the statistical record was created and written to the shared buffer. And 10:10:08 is definitely inside the interval we have specified for STAD. So, if you are looking for the statistical record of a long running batch job then you need to know when this job ended (e.g. from the job log). In STAD specify an appropriate interval around this point of time. STAD will display the records sorted by their start time.


If you double-click one line you will see the details of this statistical record. All metrics are shown. You will have to scroll down a few pages to see all of them. The following example just shows a small part of it.

 

image

 

 

The section is titled Analysis of time in work process. The executed program was SAPWL_ST03N. The dialog step was started at 15:24:24 and ended at 15:24:30. The response time was 6087 ms, while CPU time was 210 ms.

This information helps if you want to analyze the performance of a single dialog step. You can see why the response time is larger than 6 seconds. CPU consumption is not an issue in this example, but DB time is very high. So this could be the point where an optimization could start.

 

Summary

 

 

You can see that STAD shows many technical details. All metrics are measured during the execution of the dialog step, and they describe what really happened. This makes the statistical records a valuable source of information about the performance of an application server. The statistical records are used to calculate the aggregations shown in ST03N, and they are used for SAP services like GoingLive and EarlyWatch Alert.

 

 
