
Why should you measure AccAD performance improvements?

After you have successfully finished planning and setting up accelerated application delivery (AccAD), you want to test how it delivers your applications to the remote office. You not only want to see whether the applications are delivered at all, but also to measure how much AccAD improves access for your remote users. Reliable and meaningful measurements are the key to observing the improvements in response times over the WAN. Moreover, they help you to compare AccAD properly to alternatives in your landscape (e.g. direct WAN access or non-SAP accelerators) and, in the end, to convince stakeholders and management of your future direction.

As simple as this may sound, this step of measuring the actual improvements has proved to be critical and error-prone in many projects. There are some potential traps that you might face.

Planning your Tests

First of all, we strongly recommend establishing a detailed test plan and nominating people who are responsible for the test measurements. This is a mandatory step to ensure identical test cases and test execution, and it is the basis for comparable results. After you have defined a test scenario and the corresponding test documentation, you might want to create a dedicated test user and role setup with static roles and permissions. With this approach you can reuse the test scenario for further tests in the future; for example, you might want to run the same test with competing products several months later in order to compare the solutions. Otherwise, depending on the speed of changes, test cases have to be redefined too often.

Guidelines and Tips 

In addition, we would like to share some guidelines and insights that we learned in Pilot and Ramp-Up projects for accelerated application delivery:

  • Scenarios: Define all relevant scenarios in advance so that you can configure AccAD accordingly and produce comparable results. This includes the important applications that you would like to deliver and the baselines you compare against (e.g. Local Area Network, non-SAP application delivery tools, or WAN accelerators). This ensures that the measurements eventually provide meaningful results and insights.
  • Network Conditions: Ensure that the network conditions of the scenarios are known so that they can be compared. Note down the bandwidth, latency, and packet loss values for all remote offices that you are going to include in your measurements. Moreover, if you simulate the network conditions with WAN emulation tools, make sure that they apply to requests and responses, i.e. to both directions of communication.
  • Latency or RTT: Clarify whether you measure the one-way latency or the round-trip time (RTT). Latency is only the one-way delay between user and application, whereas the round-trip time covers the path from the user's client to the application server and back to the client. It is therefore highly important to know whether you use latency or RTT in order to really compare your results later on (see the measurement sketch after this list).
  • LAN baseline: Provide direct LAN measurements as a baseline. Here you can see whether applications show performance issues already in the LAN environment, in which case tuning of the application itself may be necessary. In general, WAN access over AccAD can at best approach LAN performance (with some overhead, of course).
  • LAN + AccAD: Provide measurements with AccAD in the LAN (between a LAN user and the application) to ensure smooth operations and to test the new component without adding network conditions yet. You can already see whether any offloading effects apply in your landscape.
  • Identical conditions for all scenarios: Carefully test all scenarios for LAN, direct WAN, AccAD, and potential other acceleration technologies under the same conditions (cache empty vs. filled, same transactions); otherwise the results may not reflect the real effects. This is especially important if different people conduct the tests in the LAN and WAN environments.
  • Series of Measurements: Warm up the AccAD cache! Only a series of measurements (e.g. 10-20 runs) can provide reliable results, because only then is the cache filled and has the compression algorithm adapted to the incoming traffic (it is a learning algorithm). See the series sketch after this list.
  • Browser Cache: Measure scenarios with a filled and an empty browser cache to reflect different situations (the Monday-morning scenario vs. the middle of the week). Even if only certain scenarios are relevant for your business, we recommend measuring both to get meaningful results and to avoid over- or underestimating the AccAD improvements.
  • AccAD Cache: For application-aware acceleration (e.g. Knowledge Management or SAP Learning Solution), include measurements with an empty AccAD cache (only compression effective) and with a filled AccAD cache, e.g. filled by another user (compression and web caching in AccAD effective).
  • Testing Periods: Avoid testing in periods with high and/or varying system or network load (e.g. network peak times, backend batch loads, and so on).
  • Archive: Archive detailed HTTP traces (e.g. HttpWatch) for all measurements so that you can investigate potential mismatches or issues in later stages. This is especially useful if you would like to open a customer message to SAP when issues occur in specific scenarios.
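
To make the latency/RTT distinction above concrete, here is a minimal measurement sketch in Python, assuming only the standard library and a hypothetical test URL (not part of this blog). Note that timing an HTTP request measures the full application round trip including server processing time, which is the end-user response time you want to compare across scenarios; the pure network RTT is what tools like ping report.

    import time
    import urllib.request

    TEST_URL = "http://accad-frontend.example.com/sap/bc/test"  # hypothetical URL

    def measure_rtt(url):
        # Time one full request/response cycle: client -> server -> client.
        # This includes server processing, i.e. the end-user response time.
        start = time.perf_counter()
        with urllib.request.urlopen(url) as response:
            response.read()  # consume the body so the full response is timed
        return time.perf_counter() - start

    rtt = measure_rtt(TEST_URL)
    print("RTT: %.1f ms (one-way latency is roughly RTT / 2)" % (rtt * 1000))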
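Building on the same kind of timed request, the following sketch shows a warmed-up series of measurements as recommended above. The warm-up and run counts are example values, not prescribed ones.

    import statistics
    import time
    import urllib.request

    TEST_URL = "http://accad-frontend.example.com/sap/bc/test"  # hypothetical URL
    WARMUP_RUNS = 5     # discarded: fills the caches, lets compression adapt
    MEASURED_RUNS = 15  # kept: the series you actually report

    def timed_request(url):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as response:
            response.read()
        return time.perf_counter() - start

    for _ in range(WARMUP_RUNS):
        timed_request(TEST_URL)  # warm-up runs, results ignored

    series = [timed_request(TEST_URL) for _ in range(MEASURED_RUNS)]
    print("avg %.1f ms, stdev %.1f ms over %d runs" % (
        statistics.mean(series) * 1000,
        statistics.stdev(series) * 1000,
        len(series)))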

Template for your Results 

In order to help you capture your measurements, we have attached an Excel file to this blog. The first worksheet illustrates the test environment and documents the test conditions. After this, you will find a sheet for the average measurements and a separate sheet that details the series of measurements. Moreover, we attached a checklist that summarizes the considerations explained in the text above. Of course, this file is only an example template; for your project you should replace the values and adjust the tables to reflect your landscape.
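
If you prefer to capture the raw series programmatically before transferring it into the attached template, the following sketch shows one possible way using plain CSV files, assuming only Python's standard library. The scenario names and timing values are dummy placeholders for illustration, not measured results.

    import csv
    import statistics

    # One tuple per single measurement: (scenario, run, response time in seconds).
    # Dummy placeholder values; replace with your own measured series.
    series = [
        ("WAN direct", 1, 8.4), ("WAN direct", 2, 8.1),
        ("WAN + AccAD", 1, 2.3), ("WAN + AccAD", 2, 2.1),
    ]

    with open("series.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["scenario", "run", "response_time_s"])
        writer.writerows(series)

    # Aggregate the series into one average per scenario, as in the averages sheet.
    averages = {}
    for scenario, _, seconds in series:
        averages.setdefault(scenario, []).append(seconds)

    with open("averages.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["scenario", "avg_response_time_s"])
        for scenario, values in sorted(averages.items()):
            writer.writerow([scenario, round(statistics.mean(values), 2)])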

As mentioned before, we recommend creating a detailed test plan in addition to this Excel file. This ensures that all involved testers perform the same steps when measuring the improvements.