4.6 Special Methods of Heuristic Search
Finishing our considerations concerning search methods, let us come back to the first heuristic method discussed in this chapter, i.e., hill climbing. The main idea of this method consists of expanding a tree in the direction of those states whose heuristic function value is most promising. Let us assume that a problem is described with the help of a solution space (X₁, X₂) (instead of an abstract model of a problem), which is typical for optimization problems. Thus, the solution space is the domain of the problem. Then, let us assume that for points (X₁, X₂) of this space the values of the heuristic function h(X₁, X₂) are known and are defined as shown in Fig. 4.11. Our goal is to climb, using the hill-climbing method, to the summit of the high hill situated in the middle of the area. 28 If we start our search at the base of this hill, we will conquer the summit, i.e., we will find a solution. However, if we start at the base of the lower hill situated in the bottom-left subarea, then we will climb this hill and never leave it. 29 This means, however, that we will not find the optimum solution. We find ourselves in a similar situation if we land in a flat area (plateau). Then, we gain no benefit from the heuristic information and we never reach a solution.
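As an illustration, the following minimal Python sketch shows greedy hill climbing on a two-hill landscape analogous to Fig. 4.11; the functions h and neighbors and all numeric values are illustrative assumptions introduced here, not part of the original text.

def hill_climbing(start, neighbors, h, max_steps=1000):
    # Greedy ascent: repeatedly move to the best neighbor as long as it improves h.
    # neighbors(x) returns candidate states near x; h(x) is the value to be maximized.
    current = start
    for _ in range(max_steps):
        candidates = neighbors(current)
        if not candidates:
            break
        best = max(candidates, key=h)
        if h(best) <= h(current):
            # No improving neighbor: a (possibly local) maximum or a plateau.
            break
        current = best
    return current

# A two-hill landscape over integer points (X1, X2), analogous to Fig. 4.11.
def h(p):
    x1, x2 = p
    return max(10 - (x1 - 5)**2 - (x2 - 5)**2,   # high hill (global maximum at (5, 5))
               4 - (x1 - 1)**2 - (x2 - 1)**2)    # low hill (local maximum at (1, 1))

def neighbors(p):
    x1, x2 = p
    return [(x1 + dx, x2 + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

print(hill_climbing((0, 0), neighbors, h))   # gets trapped on the low hill: (1, 1)
print(hill_climbing((8, 8), neighbors, h))   # reaches the global maximum: (5, 5)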
In order to avoid the situation in which we find a local extremum (minimum/maximum) instead of the global one, 30 many special heuristic search methods have been constructed. Let us introduce the most important ones.
The simulated annealing method was introduced by Scott Kirkpatrick, 31 C. Daniel Gelatt and Mario P. Vecchi [159] in 1983.
28 In a mathematical formulation, we seek a global maximum in the solution space.
29 As we move away from the summit of this hill, the values of the heuristic function decrease.
30 In practice, finding a local extremum means finding some solution which is not satisfactory.
31 Scott Kirkpatrick—a professor of physics and computer science (MIT, Berkeley, Hebrew University, IBM Research, etc.). He is the author of many patents in the areas of applying statistical physics in computer science, distributed computing, and computer methods in physics.
Fig. 4.11 Potential plateau problems that can appear during “hill climbing”: the plot of h(X₁, X₂) over the (X₁, X₂) plane shows a global maximum, a local maximum, and a plateau
In order to avoid getting stuck in a local extremum, the interesting physical phenomenon of annealing a metal (or glass) is used. In order to improve the properties of a material (e.g., its ductility or its hardness), it is heated above a critical temperature and then it is cooled in a controlled way (usually slowly). Heating results in “unsticking” atoms from their initial positions. (When they are in their initial positions, the whole system is in a local minimum of internal energy.) After “unsticking”, atoms drift in a random way through states of high energy. If we cooled the material quickly, the microstructure would “get stuck” in a random state. (This would mean reaching a local minimum of the internal energy of the system.) However, if we cool the material in a controlled, slow way, the internal energy of the system reaches the global minimum. 32
The Kirkpatrick method simulates the process described above. The internal energy E of a system corresponds to the heuristic function f, and the temperature T is a parameter used for controlling the algorithm. From the current (temporary) solution i, a “rival” solution j is generated randomly from its neighborhood. If the value of the “rival” solution E(j) is not worse than that of the current solution (i.e., E(j) ≤ E(i), since we look for the global minimum), then it is accepted. If not, then it can still be accepted, however with some probability. 33 Thus, moving from a better state to a worse state is possible in this method, which allows us to leave a local extremum. As we have mentioned, the parameter T (temperature) is used for controlling the algorithm. At the beginning, T is large and the probability of accepting a worse “rival” solution is relatively high, which allows us to leave local minima. In succeeding steps of the algorithm “the system is cooled”, i.e., the value of T decreases. Thus, the more stable the situation is, the lower the probability of choosing a worse “rival” solution.
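A minimal Python sketch of this scheme is given below, under two common simplifying assumptions introduced here: geometric cooling T ← αT and the acceptance probability exp(−ΔE/T) for a worse solution (the Boltzmann factor with the constant k absorbed into T); the parameter values and the example energy function are illustrative, not taken from the original text.

import math
import random

def simulated_annealing(start, neighbor, E, T0=10.0, alpha=0.99, steps=5000):
    # Minimizes E. A worse "rival" solution j may still be accepted,
    # with probability exp(-(E(j) - E(i)) / T), which shrinks as T is lowered.
    current, T = start, T0
    for _ in range(steps):
        rival = neighbor(current)        # random solution from the neighborhood of the current one
        delta = E(rival) - E(current)
        if delta <= 0 or random.random() < math.exp(-delta / T):
            current = rival              # accept a better solution, or a worse one with some probability
        T *= alpha                       # "cool the system": lower the temperature
    return current

# Example: a one-dimensional energy landscape with many local minima.
E = lambda x: x * x + 10.0 * math.sin(3.0 * x)
neighbor = lambda x: x + random.uniform(-0.5, 0.5)
print(simulated_annealing(random.uniform(-10.0, 10.0), neighbor, E))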
32 This improves the properties of the material.
33 This probability is determined according to the Boltzmann distribution, which describes the distribution of energy among particles in thermal equilibrium: P = exp(−ΔE_ij/(kT)), where ΔE_ij = E(j) − E(i), and k is the Boltzmann constant.
The tabu search method was introduced by Fred Glover 34 in 1986 [109]. In this method the current solution is always replaced by the best solution in its neighborhood (even if it is worse). Additionally, a solution which has already been “visited” is forbidden for some time (it receives the tabu status). A visited solution is added to a short tabu list, and a newly added solution replaces the oldest one on the list. 35 The search process finishes after a fixed number of steps.
There are many modifications of tabu search. The method is often combined with other heuristic methods; combining it with evolutionary computing gives especially good results. Evolutionary computing, which can be treated as a considerable extension of heuristic methods, is discussed in the next chapter.
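Returning to the basic tabu scheme described above, the following minimal Python sketch illustrates it; it assumes a fixed-length list of recently visited solutions and omits refinements of Glover's method (e.g., aspiration criteria), and all names and parameter values are illustrative assumptions.

from collections import deque

def tabu_search(start, neighbors, E, tabu_size=7, steps=100):
    # The current solution is always replaced by the best non-tabu solution
    # in its neighborhood (even if it is worse); recently visited solutions
    # are kept on a short, fixed-length tabu list.
    current = best = start
    tabu = deque([start], maxlen=tabu_size)    # a new entry pushes out the oldest one
    for _ in range(steps):
        candidates = [s for s in neighbors(current) if s not in tabu]
        if not candidates:
            break
        current = min(candidates, key=E)       # best neighbor, tabu solutions excluded
        tabu.append(current)
        if E(current) < E(best):
            best = current                     # remember the best solution found so far
    return best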
Bibliographical Note
Search methods are among the earliest methods of Artificial Intelligence. Therefore,
they are described in fundamental monographs concerning the whole area of AI [189, 211, 241, 256, 261, 262, 273, 315].
The foundations of constructing heuristic search strategies are discussed in [221]. In the area of CSP, the monograph [305] is the classic one. For constraint programming, the book [8] is recommended.
34 Fred W. Glover—a professor of computer science, mathematics, and management science at the University of Colorado. An adviser at Exxon, General Electric, General Motors, Texas Instruments,
etc.
35 The tabu list works as a FIFO queue (First-In-First-Out): when a new solution is added, the oldest one is removed.