Prior to JDK 1.2, the -O option used with the Sun compiler did inline methods across classes, even if they were not compiled in the same compilation pass. This behavior led to bugs.[8]
From JDK 1.2, the -O option no longer inlines methods across classes, even if they are compiled in the same compilation pass.[8] Primarily, methods that accessed private or protected variables were incorrectly inlined into other classes, leading to runtime authorization exceptions.
Unfortunately, there is no way to specify directly which methods should be inlined; you must rely on the compiler's internal workings. I guess that in the future, some compiler vendors will provide a mechanism that supports specifying which methods to inline, along with other preprocessor options. In the meantime, you can implement a preprocessor or use an existing one if you require tighter control. Opportunities for inlining often occur inside bottlenecks, especially in loops, as discussed previously. Selective inlining by hand can give an order-of-magnitude speedup for some bottlenecks, and no speedup at all in others.
The speedup obtained purely from inlining is usually only a few percent: 5% is fairly common. Some optimizing compilers are very aggressive about inlining code. They apply techniques such as analyzing the entire program to alter and eliminate method calls in order to identify methods that can be coerced into being statically bound. Then these identified methods are inlined as much as possible according to the compiler's analysis. This technique has been shown to give a 50% speedup to some applications. Another inlining technique is that used by the HotSpot runtime, which aggressively inlines code after a bottleneck has been identified.
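To illustrate inlining by hand, here is a made-up sketch (the class and field names are invented for the example): the first loop pays a method-call overhead on every iteration, while the second copies the accessor's body directly into the loop. Note that this is only possible where access modifiers permit the caller to reach the field.

```java
// Hypothetical example of selective inlining by hand. The accessor call in
// sumWithAccessor() is replaced by direct field access in sumInlined().
public class InlineByHand {
    static class Point {
        int x; // package-private, so the caller can inline the access
        int getX() { return x; }
    }

    // Original: one method call per iteration.
    static long sumWithAccessor(Point[] points) {
        long sum = 0;
        for (int i = 0; i < points.length; i++)
            sum += points[i].getX();
        return sum;
    }

    // Hand-inlined: the accessor body is copied into the loop.
    static long sumInlined(Point[] points) {
        long sum = 0;
        for (int i = 0; i < points.length; i++)
            sum += points[i].x;
        return sum;
    }

    public static void main(String[] args) {
        Point[] pts = new Point[1000];
        for (int i = 0; i < pts.length; i++) {
            pts[i] = new Point();
            pts[i].x = i;
        }
        // Both versions compute the same result; only the call overhead differs.
        System.out.println(sumWithAccessor(pts) == sumInlined(pts));
    }
}
```

Whether this buys anything depends on the VM: a JIT or HotSpot runtime may well perform the same inlining automatically, which is why hand-inlining should be reserved for measured bottlenecks.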
3.5.3 Performance Effects From Runtime Options
Some runtime options can help your application to run faster. These include:
• Options that allow the VM to have a bigger footprint. -Xmx (-mx in 1.1) is the main one, which allows a larger heap space; but see the comments in the following paragraph.
• -noverify, which eliminates the overhead of verifying classes at classload time (not available from 1.2).
Some options are detrimental to the application performance. These include:
• The -Xrunhprof option, which makes applications run 10% to 1000% slower (-prof in 1.1).
• Removing the JIT compiler (done with -Djava.compiler=NONE in JDK 1.2 and the -nojit option in 1.1).
• -debug, which runs a slower VM with debugging enabled.
Increasing the maximum heap size beyond the default of 16 MB usually improves performance for applications that can use the extra space. However, there is a tradeoff in higher space-management costs to the VM (object table access, garbage collections, etc.), and at some point there is no longer any benefit in increasing the maximum heap size. Increasing the heap size can actually make garbage collection take longer, as it needs to examine more objects and a larger space. Up to now, I have found no better method than trial and error to determine the optimal maximum heap size for any particular application.
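A minimal sketch of that trial-and-error process (the class name and the workload are made up for illustration): time a representative workload, run the program under several maximum heap settings, and compare the reported times.

```java
// Hypothetical harness for trial-and-error heap sizing. Replace
// runWorkload() with your application's own bottleneck code.
import java.util.ArrayList;
import java.util.List;

public class HeapTrial {
    // Stand-in workload: many small allocations kept live in a list.
    static int runWorkload() {
        List<int[]> chunks = new ArrayList<int[]>();
        for (int i = 0; i < 1000; i++)
            chunks.add(new int[1024]); // roughly 4 KB per chunk
        return chunks.size();
    }

    public static void main(String[] args) {
        // Report the heap ceiling this VM was started with.
        System.out.println("Max heap bytes: " + Runtime.getRuntime().maxMemory());
        long start = System.currentTimeMillis();
        int n = runWorkload();
        long elapsed = System.currentTimeMillis() - start;
        System.out.println(n + " allocations in " + elapsed + " ms");
    }
}
```

Run it as, for example, java -Xmx32m HeapTrial, then again with -Xmx64m and -Xmx128m, and keep the smallest setting beyond which the times stop improving.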
Beware of accidentally using VM options that are detrimental to performance. I once had a customer who saw a sudden 40% decrease in performance during tests. Their performance harness had a configuration file that set up how the VM could be run, and this was accidentally set to include the -prof option on the standard tests as well as on the profiling tests. That was the cause of the sudden performance decrease, but it was not discovered until time had been wasted checking software versions, system configurations, and other things.
3.6 Compile to Native Machine Code
If you know the target environments of your application, you have the option of compiling your Java application to a machine-code executable. A variety of these compilers is already available for various target platforms, and the list continues to grow. Check the computer magazines or follow the compiler links on good Java web sites; see also the compilers listed in Chapter 15. These compilers can often work directly from the bytecode (i.e., the .class files) without the source code, so any third-party classes and beans you use can normally be included. If you follow this option, a standard technique to remain multiplatform is to start the application from a batch file that checks the platform and installs (or even starts) the application binary appropriate for that platform, falling back to the standard Java runtime if no binary is available. Of course, the batch file also needs to be multiplatform, but then you could build it in Java.
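Such a Java-based "batch file" might be sketched as follows. The binary names and the selection logic are hypothetical placeholders, not part of any real deployment:

```java
// Hypothetical launcher: pick a platform-specific binary, fall back to the
// standard Java runtime when no native build exists for this platform.
import java.io.File;

public class Launcher {
    // Choose a native executable name from the os.name system property.
    // The names "myapp.exe" and "myapp" are invented for the example.
    static String binaryFor(String osName) {
        String os = osName.toLowerCase();
        if (os.contains("win"))
            return "myapp.exe";
        if (os.contains("linux") || os.contains("mac"))
            return "myapp";
        return null; // no native build for this platform
    }

    public static void main(String[] args) throws Exception {
        String binary = binaryFor(System.getProperty("os.name"));
        if (binary != null && new File(binary).exists()) {
            // Hand control to the natively compiled executable.
            new ProcessBuilder(binary).inheritIO().start().waitFor();
        } else {
            // Fall back to running the pure-Java version in this VM.
            System.out.println("No native binary; using the Java runtime");
        }
    }
}
```

The fallback branch keeps the application usable on any platform with a Java runtime, which is the point of the technique.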
But prepare to be disappointed with the performance of a natively compiled executable compared to the latest JIT-enabled runtime VMs. The compiled executable still needs to handle garbage
collection, threads, exceptions, etc., all within the confines of the executable. These runtime features of Java do not necessarily compile efficiently into an executable. The performance of the
executable may well depend on how much effort the compiler vendor has made in making those Java features run efficiently in the context of a natively compiled executable. The latest adaptive
VMs have been shown to run some applications faster than running the equivalent natively compiled executable.
Advocates of the compile-to-native-executable approach feel that the compiler optimizations will improve with time, so that this approach will ultimately deliver the fastest applications. Luckily, this is a win-win situation for the performance of Java applications: try out both approaches if appropriate to you, and choose the one that works best.
There are also several translators that convert Java programs into C. I mention these translators only for completeness, as I have not tried any of them. They presumably enable you to use a standard C compiler to compile to a variety of target platforms. However, most source code-to-source code translations between programming languages are suboptimal and do not usually generate fast code.
3.7 Native Method Calls
For that extra zing in your application (but probably not applet), try out calls to native code. Wave goodbye to 100% Pure Java certification, and say hello to added complexity in your development environment and deployment procedure. If you are already in this situation for reasons other than performance tuning, there is little overhead in taking this route in your project.
A couple of examples I've seen where native method calls were used for performance reasons were intensive number-crunching for a scientific application, and parsing large amounts of data in restricted time. In these and other cases, the runtime application environment at the time could not reach the required speed using Java. I should note that the latter parsing problem would now be able to run fast enough in pure Java, but the original application was built with quite an early version of Java. In addition, some number crunchers find that the latest Java runtimes and optimizing compilers give them sufficient performance in Java without resorting to any native calls.[9]
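One way to hedge the native-call bet is to keep a pure-Java fallback alongside the native method. The sketch below is hypothetical throughout: the library name "nativeparser" and the parsing task are invented for illustration, and the native method is never actually implemented here.

```java
// Hypothetical sketch: use a native parser when its JNI library is present,
// otherwise fall back to pure Java. "nativeparser" is a placeholder name.
public class FastParser {
    private static boolean nativeAvailable;
    static {
        try {
            System.loadLibrary("nativeparser"); // hypothetical JNI library
            nativeAvailable = true;
        } catch (UnsatisfiedLinkError e) {
            nativeAvailable = false; // library not installed on this platform
        }
    }

    private static native int parseNative(String data);

    // Pure-Java fallback: here, just count comma-separated fields.
    static int parseJava(String data) {
        if (data.isEmpty())
            return 0;
        return data.split(",", -1).length;
    }

    public static int parse(String data) {
        return nativeAvailable ? parseNative(data) : parseJava(data);
    }

    public static void main(String[] args) {
        System.out.println(FastParser.parse("a,b,c")); // 3 via the fallback
    }
}
```

With this structure, the application still runs everywhere; the native path is an optimization that switches itself on only where the library is deployed.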