
In the case of the bank example, this solution doesn't really cause a problem. If we serialize only instances of Account that haven't been active for a while, the risks of locking out a client who wants to access her money are minimal. However, in other applications, using serialization for persistence can lead to serious problems. In practice, serialization is fine for client applications. It's quite easy to design data objects so they are fast to serialize and involve passing small amounts of information over the wire. Furthermore, most clients are single-threaded anyway; since the serialization algorithm's data-corruption problems occur only when multiple threads are running, they rarely occur on the client side. But if you use serialization for persistence, for logging, or to pass state between servers, you need to be careful. The rules are simple:

• Make serialization fast. Limit the number of instances that can be reached by the serialization algorithm from any serializable instance.

• Make serialization safe. Make sure that the objects being serialized are locked while the serialization algorithm is running.
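As a minimal sketch of both rules, consider a hypothetical data object; the class and field names here are illustrative, not the bank example's actual code. The large audit trail is marked transient so the serialization algorithm never traverses the graph of objects it reaches, and writeObject is synchronized so the instance is locked while its fields are written.

import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

// Sketch only: the field names are illustrative, not the bank example's code.
public class Account implements Serializable {

    private String owner;
    private long balanceInCents;

    // Rule 1 (fast): the audit trail could reach a large graph of other objects,
    // so it is transient and the serialization algorithm never visits it.
    private transient List<String> auditTrail = new ArrayList<>();

    // Rule 2 (safe): synchronizing writeObject locks this instance while its
    // fields are written, provided the methods that mutate those fields also
    // synchronize on the instance.
    private synchronized void writeObject(ObjectOutputStream out) throws IOException {
        out.defaultWriteObject();
    }

    private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException {
        in.defaultReadObject();
        auditTrail = new ArrayList<>();  // transient fields deserialize as null
    }
}

Note that a synchronized writeObject accomplishes nothing unless the methods that change the serialized fields acquire the same lock; the point of the rule is that serialization and mutation must be mutually exclusive.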

12.2.10 Use Threading to Reduce Response-Time Variance

A typical remote method invocation embodies three distinct types of code:

• Resource allocation code
• Actual requested functionality
• Cleanup code

One of the key observations about threads is that, to a large extent, they allow us to isolate these three types of code in different threads. The upcoming pool example shows how this is done. What I want to emphasize here is that when we move functionality into worker threads, we not only get a more robust server, we also get a more responsive and predictable client. The less we do inside any given client method invocation, the more predictable the outcome will be.

For example, consider what we did in our printer example. We moved the actual printing into a separate thread. The client threads, which used to print the document, now simply drop off an object in a container and return. Every client thread does the same things in the same order, and none of them ever blocks while waiting for resources. Similarly, when we implemented logging as a background thread, one of the major gains was that client threads didn't have to wait for a resource to become available. We turned a variable-length operation (waiting for the log file) into a faster, fixed-length operation (putting an object into a container). The servers aren't necessarily faster or more efficient as a result of these transformations, but the client application feels faster and has a much more uniform response time.
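To make the logging case concrete, here is a minimal sketch of a background logger; the class name and the "server.log" file name are my own, not code from the earlier example. Client threads perform only the fast, fixed-length operation of inserting a record into a queue, while a single daemon thread performs the variable-length file I/O.

import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch only: the class name and log-file path are illustrative.
public class BackgroundLogger {

    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    public BackgroundLogger() {
        Thread worker = new Thread(this::drainQueue, "background-logger");
        worker.setDaemon(true);
        worker.start();
    }

    // Called from client threads: a fast, fixed-length operation that never
    // waits for the log file, just drops the record into the container.
    public void log(String message) {
        queue.offer(message);
    }

    // The variable-length work (waiting for and writing to the file) happens
    // here, in the single worker thread.
    private void drainQueue() {
        try (PrintWriter out = new PrintWriter(new FileWriter("server.log", true), true)) {
            while (true) {
                out.println(queue.take());
            }
        } catch (IOException | InterruptedException e) {
            Thread.currentThread().interrupt();  // shut the worker down on failure or interruption
        }
    }
}

Because the queue is unbounded, offer never blocks; the only thread that ever waits on the file is the worker.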

12.2.11 Limit the Number of Objects a Thread Touches

This next tip is almost an emergent property of the previous ones. If you use containers to mediate thread communication, use background threads to perform resource allocation and cleanup tasks, and carefully limit the number of objects that serialization will visit, you'll notice something else happening: each thread will only ever visit a small number of objects. There's a slight exception to this. Since RMI reuses the same thread across multiple client requests, that thread may eventually wind up visiting many instances. But during any given remote method invocation, it will visit only a few.

This is an important and useful consequence. It's so important and so useful that it deserves to be a design guideline of its own, instead of just a consequence of the others. If all of your threads touch only a few objects, your code will be much easier to debug, and you can reason about thread-interaction problems such as deadlock. If, on the other hand, a given thread can execute any method in your code, you'll have a hard time predicting when thread-interaction problems will occur and which threads caused a particular problem.
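As an illustrative sketch of what this looks like in practice (the names below are mine, not the book's printer code), the worker thread's working set is just the queue and the single request it is currently processing, and each client thread touches only the queue. Reasoning about deadlock therefore involves a very small, explicit set of objects.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch only: the interface and class names are illustrative.
public class PrintWorker implements Runnable {

    // A minimal stand-in for whatever a real print request would look like.
    public interface PrintRequest {
        void print();
    }

    private final BlockingQueue<PrintRequest> requests = new LinkedBlockingQueue<>();

    // Client threads touch only the queue: drop off the request and return.
    public void submit(PrintRequest request) {
        requests.offer(request);
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                // The only objects this thread ever touches: the queue and one request.
                PrintRequest next = requests.take();
                next.print();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();  // preserve interrupt status and exit
            }
        }
    }
}

A server would start one thread on an instance of PrintWorker; the remote methods that clients call simply invoke submit and return.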

12.2.12 Acquire Locks in a Fixed Order