
carefully limit the number of objects that serialization will visit, you'll notice something else happening: each thread will only ever visit a small number of objects. There's a slight exception to this. Since RMI reuses the same thread across multiple client requests, that thread may eventually wind up visiting many instances. But during any given remote method invocation, it will only visit a few. This is an important and useful consequence. It's so important and so useful that it deserves to be a design guideline of its own instead of just a consequence of the others. If all of your threads touch only a few objects, your code will be much easier to debug, and you can reason about thread-interaction problems such as deadlock. If, on the other hand, a given thread can execute any method in your code, you'll have a hard time predicting when thread-interaction problems will occur and which threads caused a particular problem.

12.2.12 Acquire Locks in a Fixed Order

Recall that deadlock is a situation in which a set of threads acquire locks in such a way that none of them can continue processing. The simplest example requires two threads and two locks and involves the following sequence of actions:

1. Thread 1 acquires Lock 1.
2. Thread 2 acquires Lock 2.
3. Thread 1 attempts to acquire Lock 2 and blocks because Thread 2 has Lock 2.
4. Thread 2 attempts to acquire Lock 1 and blocks because Thread 1 has Lock 1.

The basic idea behind ordering locks is simple: deadlock depends on locks being acquired in different orders. In our example scenario, Thread 1 acquires Lock 1 and then Lock 2, while Thread 2 acquires Lock 2 and then Lock 1. If both threads instead acquire the locks in the same order, deadlock doesn't occur. Instead, one of the threads blocks until the other thread completes, as in the following variant scenario:

1. Thread 1 acquires Lock 1.
2. Thread 2 attempts to acquire Lock 1 and blocks because Thread 1 has Lock 1.
3. Thread 1 attempts to acquire Lock 2 and succeeds. After a while, it relinquishes both locks.
4. Thread 2 then acquires Lock 1 and continues.

Of course, it's often very difficult to define a global order on all the locks. If a large number of objects are being locked, and if the codebase is large enough, enforcing a global ordering turns out to be a difficult task. Often, however, you can impose an ordering on the types of locks. A rule as simple as "Synchronize on instances of class X before synchronizing on instances of class Y" often has the same effect as imposing a global ordering on all instances. For example, "Synchronize on instances of Logfile before synchronizing on instances of Account."
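The fixed-order rule can be sketched in Java. This is a minimal illustration, not code from this book: the `withBoth` helper and the use of `System.identityHashCode` as an ordering key are assumptions, and a production version would need a deterministic tie-breaker for the rare case where two distinct objects share an identity hash code.

```java
public class OrderedLocks {

    // Acquire both locks in a canonical order, regardless of the order
    // in which the caller passed them. Every thread that goes through
    // this helper therefore acquires the locks in the same order.
    static void withBoth(Object a, Object b, Runnable action) {
        // Assumption: order by identity hash code. Ties (distinct objects
        // with equal hash codes) are rare but would need a real tie-breaker.
        Object first = System.identityHashCode(a) <= System.identityHashCode(b) ? a : b;
        Object second = (first == a) ? b : a;
        synchronized (first) {
            synchronized (second) {
                action.run();
            }
        }
    }

    // Two threads each request the locks in opposite textual order;
    // withBoth normalizes the order, so the demo terminates instead of
    // deadlocking, and returns the total number of protected increments.
    static int demo() {
        Object lock1 = new Object();
        Object lock2 = new Object();
        int[] counter = {0};
        Thread t1 = new Thread(() -> {
            for (int i = 0; i < 1000; i++)
                withBoth(lock1, lock2, () -> counter[0]++);
        });
        Thread t2 = new Thread(() -> {
            for (int i = 0; i < 1000; i++)
                withBoth(lock2, lock1, () -> counter[0]++);  // reversed arguments
        });
        t1.start();
        t2.start();
        try {
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return counter[0];
    }
}
```

Without the normalization inside `withBoth`, the reversed arguments in the second thread would be exactly the two-lock scenario above.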

12.2.13 Use Worker Threads to Prevent Deadlocks

Another common trick to prevent deadlocks is to use worker threads to reduce the number of locks any given thread holds. We've already briefly discussed worker threads. Among other examples, we discussed log files. Our example began with a single thread that both received and handled a request and, in the course of doing so, logged information to the log file. We transformed this into two threads: one thread received the request and encapsulated it in an object that was dropped off in a container; the second, worker, thread pulled these objects from the container and registered them in a log.

The main reason I gave for doing this was to prevent clients from blocking each other, or waiting for an external device to become available. Logging is something that can often be done independently of the actual client request. But there's another benefit. In the first scenario, a single thread holds two distinct types of locks: locks associated with the actual client request and incidental locks associated with the logging mechanism. Using a logging thread changes this in a very significant way: the only lock associated with logging that the request thread ever acquires is the lock associated with the container used to communicate with the worker thread.

Moreover, the lock associated with the container doesn't count when you're reasoning about deadlock. This is because the container lock has the following two properties:

• If another lock is acquired before the container lock, that other lock will be released after the container lock is released.

• No locks are acquired between when the container lock is acquired and when the container lock is released.

Any lock with these two properties cannot add a deadlock to a system. It can block threads, as the container lock does in order to ensure data integrity, but it cannot deadlock threads.
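The container-and-worker arrangement can be sketched with `java.util.concurrent.BlockingQueue`, which plays the role of the container. The class and method names here (`LogWorker`, `log`, `shutdown`) are hypothetical, and an in-memory list stands in for the real log file; the point is that request threads only ever touch the queue's internal lock, never the locks guarding the log itself.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class LogWorker {
    // Unique sentinel object used to tell the worker to stop;
    // compared by identity so it can't collide with a real entry.
    private static final String STOP = new String("__STOP__");

    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    private final List<String> sink = new ArrayList<>();  // stands in for the real log file
    private final Thread worker;

    public LogWorker() {
        worker = new Thread(() -> {
            try {
                while (true) {
                    String entry = queue.take();     // container lock: acquired and released here
                    if (entry == STOP) return;
                    synchronized (sink) {            // logging lock: held only by the worker thread
                        sink.add(entry);
                    }
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start();
    }

    // Called by request threads. The only lock involved is the queue's
    // internal lock, which is released before this method returns.
    public void log(String entry) {
        queue.add(entry);
    }

    // Drains the worker and returns a snapshot of what was logged.
    public List<String> shutdown() {
        queue.add(STOP);
        try {
            worker.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        synchronized (sink) {
            return new ArrayList<>(sink);
        }
    }
}
```

Note that `log` satisfies both properties from the list above: it acquires no other lock while holding the queue's lock, and it releases the queue's lock before returning to request-handling code.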

12.3 Pools: An Extended Example

At this point, you've read some 70 or so pages of reasonably plausible material on threading. But, speaking from personal experience, it's almost impossible to read 70 pages of material on threading and actually understand all of it on the first reading. The point of this section is to introduce a fairly complex and sophisticated piece of code that involves threading. If you understand this example, how it works, and why it's built the way it is, then you've got a reasonable grasp of this material and of how to use threads in your applications. If, however, this seems like an incredibly byzantine and opaque piece of code, then you may want to reread those 70 pages or grab a copy of one of the references.

12.3.1 The Idea of a Pool

Pooling is an important idiom in designing scalable applications. The central idea is that there is a resource, encapsulated by an object, with the following characteristics: • It is difficult to create the resource, or doing so consumes other scarce resources.