Your general benchmark suite should be based on real functions used in the end application, but at
the same time should not rely on user input, as this can make measurements difficult. Any variability in input times or any other part of the application should either be eliminated from the
benchmarks or precisely identified and specified within the performance targets. There may be variability, but it must be controlled and reproducible.
1.6.3 The Benchmark Harness
There are tools for testing applications in various ways.[2] These tools focus mostly on testing the robustness of the application, but as long as they measure and report times, they can also be used for performance testing. However, because their focus tends to be on robustness testing, many tools interfere with the application's performance, and you may not find a tool you can use adequately or cost-effectively. If you cannot find an acceptable tool, the alternative is to build your own harness.

[2] You can search the Web for java+perf+test to find performance-testing tools. In addition, some Java profilers are listed in Chapter 15.
Your benchmark harness can be as simple as a class that sets some values and then starts the main() method of your application. A slightly more sophisticated harness might turn on logging and timestamp all output for later analysis. GUI-run applications need a more complex harness and require either an alternative way to execute the graphical functionality without going through the GUI (which may depend on whether your design can support this), or a screen event capture and playback tool (several such tools exist[3]). In any case, the most important requirement is that your harness correctly reproduces user activity and data input and output. Normally, whatever regression-testing apparatus you have (and presumably are already using) can be adapted to form a benchmark harness.

[3] JDK 1.3 introduced a new java.awt.Robot class, which provides for generating native system-input events, primarily to support automated testing of Java GUIs.
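As a concrete illustration, here is a minimal harness sketch. The class, property, and file names are purely illustrative, and runApplication() is a placeholder for a call to your application's main() method:

import java.util.Date;

public class SimpleBenchmarkHarness {
    public static void main(String[] args) throws Exception {
        // Fix any configurable values so every run starts from the same state.
        System.setProperty("app.config", "benchmark.properties"); // illustrative

        long start = System.currentTimeMillis();
        System.out.println("START " + new Date(start));

        // Drive the application exactly as a normal launch would.
        runApplication(new String[] {"benchmark-data.txt"}); // illustrative input

        long end = System.currentTimeMillis();
        System.out.println("END   " + new Date(end));
        System.out.println("Elapsed (ms): " + (end - start));
    }

    static void runApplication(String[] appArgs) {
        // Placeholder: in a real harness this would be, e.g., a call to your
        // application's main() method.
    }
}

Timestamping the harness output in this way also makes it easy to correlate the harness log with any logging the application itself produces.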
The benchmark harness should not test the quality or robustness of the system. Operations should be normal: startup, shutdown, noninterrupted functionality. The harness should support the different
configurations your application operates under, and any randomized inputs should be controlled; but note that the random sequence used in tests should be reproducible. You should use a realistic
amount of randomized data and input. It is helpful if the benchmark harness includes support for logging statistics and easily allows new tests to be added. The harness should be able to reproduce
and simulate all user input, including GUI input, and should test the system across all scales of intended use, up to the maximum numbers of users, objects, throughputs, etc. You should also
validate your benchmarks, checking some of the values against actual clock time to ensure that no systematic or random bias has crept into the benchmark harness.
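One simple way to keep randomized input controlled and reproducible, sketched here with an illustrative seed value and data shape, is to drive all data generation from a fixed seed that is recorded alongside the benchmark results:

import java.util.Random;

public class ReproducibleInput {
    // Fixed seed, recorded with the results so any run can be repeated exactly.
    private static final long SEED = 19990101L;

    public static String[] generateRequests(int count) {
        Random random = new Random(SEED);
        String[] requests = new String[count];
        for (int i = 0; i < count; i++) {
            // Realistically varied data, but the same sequence on every run.
            requests[i] = "customer-" + random.nextInt(10000);
        }
        return requests;
    }
}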
For the multiuser case, the benchmark harness must be able to simulate multiple users working, including variations in user access and execution patterns. Without this support for variations in
activity, the multiuser tests inevitably miss many bottlenecks that occur in actual deployment and, conversely, hit artificial bottlenecks that never occur in deployment, wasting
time and resources. It is critical in multiuser and distributed applications that the benchmark harness correctly reproduces user-activity variations, delays, and data flows.
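A sketch of this kind of simulation follows: each thread stands in for one user, and a per-user random seed gives varied but reproducible think times between operations. The performUserOperation() method is a hypothetical placeholder for your own client-side calls.

import java.util.Random;

public class MultiUserSimulation {
    public static void main(String[] args) throws InterruptedException {
        int users = Integer.parseInt(args[0]);
        Thread[] threads = new Thread[users];
        for (int i = 0; i < users; i++) {
            final int userId = i;
            threads[i] = new Thread(new Runnable() {
                public void run() {
                    // Per-user seed: varied activity, but reproducible.
                    Random random = new Random(userId);
                    for (int op = 0; op < 100; op++) {
                        performUserOperation(userId, op);
                        try {
                            // Randomized think time between 0.5 and 5.5 seconds.
                            Thread.sleep(500 + random.nextInt(5000));
                        } catch (InterruptedException e) {
                            return;
                        }
                    }
                }
            });
            threads[i].start();
        }
        for (int i = 0; i < users; i++) {
            threads[i].join();
        }
    }

    static void performUserOperation(int userId, int op) {
        // Placeholder for the real client request or GUI-driven action.
    }
}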
1.6.4 Taking Measurements
Each run of your benchmarks needs to be under conditions that are as identical as possible; otherwise it becomes difficult to pinpoint why something is running faster or slower than in
another test. The benchmarks should be run multiple times, and the full list of results retained, not just the average and deviation or the ranged percentages. Also note the time of day that benchmarks
are being run and any special conditions that apply (e.g., weekend or after hours in the office).
Sometimes the variation can give you useful information. It is essential that you always run an initial benchmark to precisely determine the initial times. This is important because, together with
your targets, the initial benchmarks specify how far you need to go and highlight how much you have achieved when you finish tuning.
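A sketch of this measurement discipline: run the benchmark several times, keep every individual timing, and record when each run happened. Here runBenchmark() is a hypothetical placeholder for whatever the harness actually executes.

import java.util.Date;

public class MeasurementRunner {
    public static void main(String[] args) {
        int runs = 10;
        long[] results = new long[runs];
        for (int i = 0; i < runs; i++) {
            long start = System.currentTimeMillis();
            runBenchmark();
            results[i] = System.currentTimeMillis() - start;
            // Note when the run happened as well as how long it took.
            System.out.println(new Date() + "  run " + i + ": " + results[i] + " ms");
        }
        // The full list of results is retained; averages and deviations
        // can always be derived from it later.
    }

    static void runBenchmark() {
        // Placeholder for the real benchmarked operation.
    }
}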
It is more important to run all benchmarks under the same conditions than to achieve the end-user environment for those benchmarks, though you should try to target the expected environment. It is
possible to switch environments by running all benchmarks on an identical implementation of the application in two environments, thus rebasing your measurements. But this can be problematic: it
requires detailed analysis, because different environments usually have different relative performance between functions; thus your initial benchmarks could be relatively skewed compared
with the current measurements.
Each set of changes (and preferably each individual change) should be followed by a run of benchmarks to precisely identify improvements or degradations in performance across all functions. A particular optimization may improve the performance of some functions while at the same time degrading the performance of others, and obviously you need to know this. Each set of changes should be driven by identifying exactly which bottleneck is to be improved and how much of a speedup is expected. Using this methodology rigorously provides a precise target for your effort.
You need to verify that any particular change does improve performance. It is tempting to change something small that you are sure will give an obvious improvement, without bothering to measure the performance change for that modification because it's too much trouble to keep running tests. But you could easily be wrong. Jon Bentley once discovered that eliminating code from some simple loops can actually slow them down.[4] If a change does not improve performance, you should revert to the previous version.

[4] "Code Tuning in Context" by Jon Bentley, Dr. Dobb's Journal, May 1999. An empty loop in C ran slower than one that contained an integer increment operation.
The benchmark suite should not interfere with the application. Be on the lookout for artificial performance problems caused by the benchmarks themselves. This is very common if no thought is given to normal variation in usage. A typical situation might be benchmarking multiuser systems with a lack of user simulation (e.g., user delays not simulated, causing much higher throughput than would ever be seen; user data variation not simulated, causing all tests to try to use the same data at the same time; activities artificially synchronized, giving bursts of activity and inactivity; etc.). Be careful not to measure artificial situations, such as full caches with exactly the data needed for the test (e.g., running the test multiple times sequentially without clearing caches between runs). There is little point in performing tests that hit only the cache, unless this is the type of work the users will always perform.
When tuning, you need to alter any benchmarks that are quick (under five seconds) so that the code applicable to the benchmark is tested repeatedly in a loop to get a more consistent measure of where
any problems lie. By comparing timings of the looped version with a single-run test, you can sometimes identify whether caches and startup effects are altering times in any significant way.
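The following sketch shows the idea: time the operation once (including any first-time costs), then time it repeated in a loop and compare the per-iteration average. The quickOperation() method is a hypothetical placeholder for the fast code path being measured.

public class LoopedBenchmark {
    public static void main(String[] args) {
        // Single run, including any startup or class-loading costs.
        long start = System.currentTimeMillis();
        quickOperation();
        long single = System.currentTimeMillis() - start;

        // Looped run: enough repetitions to give a measurable elapsed time.
        int iterations = 10000;
        start = System.currentTimeMillis();
        for (int i = 0; i < iterations; i++) {
            quickOperation();
        }
        long looped = System.currentTimeMillis() - start;

        System.out.println("Single run: " + single + " ms");
        System.out.println("Looped average over " + iterations + " iterations: "
            + ((double) looped / iterations) + " ms");
    }

    static void quickOperation() {
        // Placeholder for the fast code path being measured.
    }
}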
Optimizing code can introduce new bugs, so the application should be tested during the optimization phase. A particular optimization should not be considered valid until the application
using that optimization's code path has passed quality assessment.
Optimizations should also be completely documented. It is often useful to retain the previous code
in comments for maintenance purposes, especially as some kinds of optimized code can be more difficult to understand and therefore to maintain.
It is typically better and easier to tune multiuser applications in single-user mode first. Many multiuser applications can obtain 90% of their final tuned performance if you tune in single-user mode and then identify and tune just a few major multiuser bottlenecks (which are typically a sort of give-and-take between single-user performance and general system throughput). Occasionally,
though, there will be serious conflicts that are revealed only during multiuser testing, such as transaction conflicts that can slow an application to a crawl. These may require a redesign or
rearchitecting of the application. For this reason, some basic multiuser tests should be run as early as possible to flush out potential multiuser-specific performance problems.
Tuning distributed applications requires access to the data being transferred across the various parts of the application. At the lowest level, this can be a packet sniffer on the network or server machine.
One step up from this is to wrap all the external communication points of the application so that you can record all data transfers. Relay servers are also useful. These are small applications that just re-
route data between two communication points. Most useful of all is a trace or debug mode in the communications layer that allows you to examine the higher-level calls and communication
between distributed parts.
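As a sketch of such a relay server (the host name and port numbers are illustrative, and it handles only a single connection for simplicity): it accepts a client connection, connects to the real server, and copies bytes in both directions while logging how much data flows and when.

import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class RelayServer {
    public static void main(String[] args) throws Exception {
        int listenPort = 9000;            // illustrative
        String targetHost = "realserver"; // illustrative
        int targetPort = 8000;            // illustrative

        ServerSocket server = new ServerSocket(listenPort);
        Socket client = server.accept();
        Socket target = new Socket(targetHost, targetPort);

        // One copying thread per direction, each logging the bytes it relays.
        pump("client->server", client.getInputStream(), target.getOutputStream());
        pump("server->client", target.getInputStream(), client.getOutputStream());
    }

    static void pump(final String label, final InputStream in, final OutputStream out) {
        new Thread(new Runnable() {
            public void run() {
                byte[] buffer = new byte[4096];
                long total = 0;
                try {
                    int read;
                    while ((read = in.read(buffer)) != -1) {
                        out.write(buffer, 0, read);
                        out.flush();
                        total += read;
                        System.out.println(System.currentTimeMillis() + " " + label
                            + ": " + total + " bytes so far");
                    }
                } catch (Exception e) {
                    // Connection closed; stop relaying this direction.
                }
            }
        }).start();
    }
}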
1.7 What to Measure