1.5.1 Threading to Appear Quicker
A lot of time, measured in CPU cycles, passes while the user is reacting to the application interface. This time can be used to anticipate what the user wants to do, using a low-priority background thread, so that precalculated results are ready to assist the user immediately. This makes an application appear blazingly fast.

Similarly, ensuring that your application remains responsive to the user, even while it is executing some other function, makes it seem fast and responsive. For example, I always find that when starting up an application, applications that draw themselves on screen quickly and respond to repaint requests even while still initializing (you can test this by putting the window in the background and then bringing it to the foreground) give the impression of being much faster than applications that seem to be chugging away unresponsively. Starting different word-processing applications with a large file to open can be instructive, especially if the file is on the network or a slow removable disk. Some behave very nicely, responding almost immediately while the file is still loading; others hang unresponsively, with windows only partially refreshed, until the file is loaded; still others don't even fully paint themselves until the file has finished loading. This illustrates what can happen if you do not use threads appropriately.

In Java, the key to making an application responsive is multithreading. Use threads to ensure that any particular service is available and unblocked when needed. Of course, this can be difficult to program correctly and to manage. Handling interthread communication with maximal responsiveness and minimal bugs is a complex task, but it does tend to make for a very snappy application.
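As a rough illustration, here is a minimal sketch of anticipating the user with a low-priority background thread. The class and method names (Anticipator, precalculate, getResult) are illustrative only, not taken from any particular library; the point is simply that the expensive work runs at minimum priority while the user is idle, and the result is picked up instantly if it is already available.

// Sketch: precalculate a likely-needed result on a low-priority background
// thread while the main (or event) thread stays free to respond to the user.
public class Anticipator {
    private volatile String precalculated;      // result visible across threads

    public void startPrecalculating() {
        Thread worker = new Thread(() -> {
            precalculated = precalculate();     // runs while the user is idle
        });
        worker.setPriority(Thread.MIN_PRIORITY); // don't compete with the UI
        worker.setDaemon(true);                  // don't block application exit
        worker.start();
    }

    private String precalculate() {
        // Placeholder for the expensive work we expect the user to ask for.
        return "expensive result";
    }

    public String getResult() {
        // If the background thread finished in time, the answer is instant;
        // otherwise fall back to computing it on demand.
        String result = precalculated;
        return (result != null) ? result : precalculate();
    }
}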
1.5.2 Streaming to Appear Quicker

When you display the results of some activity on the screen, there is often more information than can fit on a single screen. For example, a request to list all the details of all the files in a particular large directory may not fit on one display screen. The usual way to handle this is to show as much as will fit on a single screen and indicate with a scrollbar that more items are available. Other applications or other kinds of information may use a "more" button or some other way of indicating how to display or move on to the extra information. In these cases, you initially need to display only a partial result of the activity.

This tactic can work very much in your favor. For activities that take too long, and for which some of the results can be returned more quickly than others, it is certainly possible to show the first set of results while continuing to compile more results in the background. This gives the user an apparently much quicker response than if you were to wait for all the results to be available before displaying them.

This situation is often the case for distributed applications. A well-known example is again found in web browsers, which display the initial screenful of a page as soon as it is available, without waiting for the whole page to be downloaded. The general case is any long activity that can provide its results as a stream, so that they can be accessed a few at a time. For distributed applications, sending all the data is often what takes a long time; in this case, you can build streaming into the application by sending one screenful of data at a time. Also, bear in mind that when there is a really large amount of data to display, the user often views only some of it and aborts, so be sure to build in the ability to stop the stream and reclaim its resources at any time.
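Here is one possible sketch of this technique using a producer thread and a blocking queue; the class name, the screenful size, and the row data are all assumptions for illustration, and the producer loop stands in for whatever long-running query or download actually supplies the data. The consumer shows the next screenful as soon as those rows arrive, and the user can abort at any time so the stream stops producing.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch: deliver results one "screenful" at a time instead of waiting
// for the complete result set to be assembled.
public class StreamingResults {
    private static final int SCREENFUL = 25;            // rows per screen (assumed)
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    private volatile boolean cancelled = false;

    public void start() {
        Thread producer = new Thread(() -> {
            for (int i = 0; i < 10_000 && !cancelled; i++) {
                queue.offer("row " + i);                 // results trickle in
            }
        });
        producer.setDaemon(true);
        producer.start();
    }

    // Display the next screenful as soon as those rows are available,
    // without waiting for the remaining rows.
    public void showNextScreenful() throws InterruptedException {
        for (int i = 0; i < SCREENFUL && !cancelled; i++) {
            System.out.println(queue.take());
        }
    }

    // The user aborted: stop producing so resources can be reclaimed.
    public void cancel() {
        cancelled = true;
    }
}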