With operating-system monitoring tools, you can see the system stall temporarily when the operating system issues a disk-cache flush because many files have been closed in quick succession. If you use a single packed file for all classes and resources, you avoid this potential performance hit.
It is possible to further improve classloading times by packing the classes into the ZIP/JAR file in the order in which they are loaded by the application. You can determine the loading order by running the application with the -verbose option, but note that this ordering is fragile: slight changes in the application can easily alter the loading order of classes. A further extension to this idea is to include your own classloader that opens the ZIP/JAR file itself and reads in all files sequentially, loading them into memory immediately. Perhaps the final version of this performance-improvement route is to dispense with the ZIP/JAR filesystem: it is quicker to load the files if they are concatenated together in one big file, with a header at the start of the file giving the offsets and names of the contained files. This is similar to the ZIP filesystem, but it is better if you read the header in one block, and read in and load the files directly rather than going through the java.util.zip classes. One further optimization to this classloading tactic is to start the classloader running in a separate low-priority thread immediately after VM startup.
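The following is a minimal sketch of the intermediate idea described above: a classloader that reads every class in the packed JAR into memory sequentially, kicked off in a low-priority daemon thread right after VM startup. The class name, the buffer size, and the fallback behavior are illustrative choices rather than details from the text, and a production version would also handle classes requested before preloading completes.

import java.io.*;
import java.util.*;
import java.util.jar.*;

public class Preloader extends ClassLoader implements Runnable {
    private final String jarPath;
    private final Map classBytes = new HashMap();   // class name -> byte[]

    public Preloader(String jarPath) {
        this.jarPath = jarPath;
    }

    // Runs in the background thread: read the whole packed file sequentially,
    // caching every class file's bytes in memory.
    public void run() {
        try {
            JarInputStream in = new JarInputStream(
                    new BufferedInputStream(new FileInputStream(jarPath)));
            byte[] chunk = new byte[8192];
            JarEntry entry;
            while ((entry = in.getNextJarEntry()) != null) {
                if (!entry.getName().endsWith(".class"))
                    continue;
                ByteArrayOutputStream buf = new ByteArrayOutputStream();
                int n;
                while ((n = in.read(chunk)) != -1)
                    buf.write(chunk, 0, n);
                String name = entry.getName()
                        .substring(0, entry.getName().length() - 6)
                        .replace('/', '.');
                synchronized (classBytes) {
                    classBytes.put(name, buf.toByteArray());
                }
            }
            in.close();
        } catch (IOException e) {
            // If preloading fails, classes simply load through the normal path.
        }
    }

    // Defines classes from the preloaded bytes; a real version would read the
    // entry on demand if it has not been preloaded yet.
    protected Class findClass(String name) throws ClassNotFoundException {
        byte[] b;
        synchronized (classBytes) {
            b = (byte[]) classBytes.get(name);
        }
        if (b == null)
            throw new ClassNotFoundException(name);
        return defineClass(name, b, 0, b.length);
    }

    // Call immediately after VM startup.
    public static Preloader start(String jarPath) {
        Preloader loader = new Preloader(jarPath);
        Thread t = new Thread(loader, "class-preloader");
        t.setPriority(Thread.MIN_PRIORITY);
        t.setDaemon(true);
        t.start();
        return loader;
    }
}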
3.9 Performance Checklist
Many of these suggestions apply only after a bottleneck has been identified:
• Test your benchmarks on each version of Java available to you (classes, compiler, and VM) to identify any performance improvements.
o Test performance using the target VM or best-practice VMs.
o Include some tests of the garbage collector appropriate to your application, so that you can identify changes that minimize the cost of garbage collection in your application.
o Run your application with both the -verbosegc option and full application tracing turned on to see when the garbage collector kicks in and what it is doing.
o Vary the -Xmx and -Xms option values to determine the optimal memory sizes for your application.
o Avoid using the VM options that are detrimental to performance.
• Replace generic classes with more specific implementations dedicated to the data type being manipulated, e.g., implement a LongVector to hold longs rather than use a Vector object with Long wrappers (a sketch of such a class follows this checklist).
o Extend collection classes to access internal arrays for queries on the class.
o Replace collection objects with arrays where the collection object is a bottleneck.
• Try various compilers. Look for compilers targeted at optimizing performance: these provide the cheapest significant speedup that applies across all runtime environments.
o Use the -O option, but always check that it does not produce slower code.
o Identify the optimizations a compiler is capable of so that you do not negate those optimizations.
o Use a decompiler to determine precisely the optimizations generated by a particular compiler.
o Consider using a preprocessor to apply some standard compiler optimizations more precisely.
o Remember that an optimizing compiler can only optimize algorithms, not change them. A better algorithm is usually faster than an optimized slow algorithm.
o Include optimizing compilers from the early stages of development.
o Make sure that the deployed classes have been compiled with the correct compilers.
• Make sure that any loops using native method calls are converted so that the native call includes the loop instead of running the loop in Java. Any loop-iteration parameters should be passed to the native call.
• Deliver classes in uncompressed format in ZIP or JAR files, unless network download is significant, in which case the files should be compressed.
• Use a customized classloader running in a separate thread to load class files.
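As promised above, here is a minimal sketch of the kind of type-specific collection the checklist refers to. It stores long values directly in a growable array, so no Long wrapper object is created per element as it would be with a java.util.Vector; the class name LongVector and the growth policy are illustrative only.

public class LongVector {
    private long[] data;
    private int size;

    public LongVector() {
        this(16);
    }

    public LongVector(int initialCapacity) {
        data = new long[initialCapacity];
    }

    public int size() {
        return size;
    }

    // Returns the element directly as a primitive: no unwrapping needed.
    public long get(int index) {
        if (index >= size)
            throw new ArrayIndexOutOfBoundsException(index);
        return data[index];
    }

    // Appends a primitive long: no wrapper object is created.
    public void add(long value) {
        if (size == data.length) {
            long[] bigger = new long[data.length * 2];
            System.arraycopy(data, 0, bigger, 0, size);
            data = bigger;
        }
        data[size++] = value;
    }
}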
Chapter 4. Object Creation
The biggest difference between time and space is that you can't reuse time. —Merrick Furst
"I thought that I didn't need to worry about memory allocation. Java is supposed to handle all that for me." This is a common perception, which is both true and false. Java handles low-level memory allocation and deallocation and comes with a garbage collector. Further, it prevents access to these low-level memory-handling routines, making the memory safe. So memory access should not cause corruption of data in other objects or in the running application, which is potentially the most serious problem that can occur with memory access violations. In a C or C++ program, problems of illegal pointer manipulations can be a major headache (e.g., deleting memory more than once, runaway pointers, bad casts). They are very difficult to track down and are likely to occur when changes are made to existing code. Java deals with all these possible problems and, at worst, will throw an exception immediately if memory is incorrectly accessed.
However, Java does not prevent you from using excessive amounts of memory, nor from cycling through too much memory (e.g., creating and dereferencing many objects). Contrary to popular opinion, you can get memory leaks by holding on to objects without releasing references. This stops the garbage collector from reclaiming those objects, resulting in increasing amounts of memory being used.[1] In addition, Java does not provide for large numbers of objects to be created simultaneously (as you could do in C by allocating a large buffer), which eliminates one powerful technique for optimizing object creation.
[1] Ethan Henry and Ed Lycklama have written a nice article discussing Java memory leaks in the February 2000 issue of Dr. Dobb's Journal. This article is available online from the Dr. Dobb's web site, http://www.ddj.com.
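Here is a small illustrative example of the kind of unintentional object retention described above (the class and method names are hypothetical): a long-lived registry keeps references to objects the application has otherwise finished with, so the garbage collector can never reclaim them until they are explicitly removed.

import java.util.HashMap;
import java.util.Map;

public class ListenerRegistry {
    // Lives for the whole run of the application.
    private static final Map LISTENERS = new HashMap();

    public static void register(Object source, Object listener) {
        // The registry now holds a reference to the listener, so the garbage
        // collector cannot reclaim it even if the rest of the application
        // has discarded every other reference to it.
        LISTENERS.put(source, listener);
    }

    // The "leak" is cured only if the application remembers to release
    // the reference when the listener is no longer needed.
    public static void unregister(Object source) {
        LISTENERS.remove(source);
    }
}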
Creating objects costs time and CPU effort for an application. Garbage collection and memory recycling cost more time and CPU effort. The difference in object usage between two algorithms can have a huge effect on performance. In Chapter 5, I cover algorithms for appending basic data types to StringBuffer objects. These can be an order of magnitude faster than some of the conversions supplied with Java. A significant portion of the speedup is obtained by avoiding extra temporary objects used and discarded during the data conversions.[2]
[2] Up to Java 1.3. Data-conversion performance is targeted by JavaSoft, however, so some of the data conversions may speed up after 1.3.
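To give a flavor of the kind of conversion Chapter 5 covers, the sketch below appends an int to a StringBuffer digit by digit instead of going through an intermediate String (as buf.append(Integer.toString(i)) would), so no temporary object is created per call. This is only an illustration of the principle, not the algorithm from Chapter 5; handling of Integer.MIN_VALUE is omitted to keep it short.

public class IntAppender {
    public static StringBuffer append(StringBuffer buf, int i) {
        if (i < 0) {
            buf.append('-');
            i = -i;
        }
        // Find the largest power of ten not greater than i.
        int divisor = 1;
        while (i / divisor >= 10)
            divisor *= 10;
        // Emit digits from the most significant downwards.
        while (divisor > 0) {
            buf.append((char) ('0' + (i / divisor)));
            i %= divisor;
            divisor /= 10;
        }
        return buf;
    }
}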
Here are a few general guidelines for using object memory efficiently:
• Avoid creating objects in frequently used routines. Because these routines are called frequently, you will likely be creating objects frequently, and consequently adding heavily to the overall burden of object cycling. By rewriting such routines to avoid creating objects, possibly by passing in reusable objects as parameters, you can decrease object cycling (see the sketch below).
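A brief sketch of the reusable-parameter idea, assuming a hypothetical Sprite class whose size is queried inside a frequently called drawing loop: the no-argument version allocates a new Dimension on every call, while the second version fills in an object the caller supplies and reuses.

import java.awt.Dimension;

public class Sprite {
    private int width;
    private int height;

    // Creates a new Dimension on every call: if this is queried inside a
    // paint or animation loop, it adds steadily to object cycling.
    public Dimension getSize() {
        return new Dimension(width, height);
    }

    // Fills in a caller-supplied Dimension instead: no allocation per call,
    // because the caller keeps and reuses one instance.
    public Dimension getSize(Dimension reuse) {
        reuse.width = width;
        reuse.height = height;
        return reuse;
    }
}

The caller allocates one Dimension outside the loop and passes it in on each iteration, so the routine itself creates no objects however often it is called.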