Hopfield Neural Networks
Hopfield neural networks are very different from back propagation networks (covered later in Section 7.4) because the training data contains only input examples, unlike back propagation networks, which are trained to associate desired output patterns with input patterns. Internally, the operation of Hopfield neural networks is also very different from that of back propagation networks. We use Hopfield neural networks to introduce the subject of neural nets because they are easy to simulate with a program, and they can also be very useful in practical applications.
The inputs to Hopfield networks can be of any dimensionality. Hopfield networks are often shown with a two-dimensional input field and are demonstrated recognizing characters, pictures of faces, etc. However, we lose no generality by implementing a Hopfield neural network toolkit with one-dimensional inputs because a two-dimensional image can be represented by an equivalent one-dimensional array.
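To make this concrete, here is a minimal sketch of flattening a two-dimensional binary pattern into a one-dimensional array; the class and method names, and the choice of bipolar (+1/-1) activation values, are illustrative assumptions for this example rather than part of a specific toolkit API:

    public class PatternUtil {
        // Flatten a 2D binary image into the 1D array form used in
        // this example, mapping pixels to bipolar activations.
        public static float[] flatten(int[][] image) {
            int rows = image.length;
            int cols = image[0].length;
            float[] result = new float[rows * cols];
            for (int r = 0; r < rows; r++) {
                for (int c = 0; c < cols; c++) {
                    // pixel value 1 becomes +1, pixel value 0 becomes -1
                    result[r * cols + c] = image[r][c] == 1 ? +1.0f : -1.0f;
                }
            }
            return result;
        }
    }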
How do Hopfield networks work? A simple analogy will help. The trained connection weights in a neural network represent a high-dimensional space. This space is folded and convoluted, with local minima representing areas around training input patterns. For a moment, visualize this very high-dimensional space as just being the three-dimensional space inside a room. The floor of this room is a convoluted and curved surface. If you pick up a basketball and bounce it around the room, it will settle at a low point in this curved and convoluted floor. Now, consider that the space of input values is a two-dimensional grid a foot above the floor. Any new input is equivalent to a point defined in horizontal coordinates; if we drop our basketball from a position above an input grid point, the basketball will tend to roll downhill into a local gravitational minimum. The shape of the curved and convoluted floor is a calculated function of a set of training input vectors. After the "floor has been trained" with a set of input vectors, the operation of dropping the basketball from an input grid point is equivalent to mapping a new input into the training example that is closest to this new input using a neural network.
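This analogy maps fairly directly onto code. The following is a minimal Hopfield network sketch, assuming bipolar (+1/-1) patterns, Hebbian training to shape the "floor," and asynchronous updates for recall; the class and method names are illustrative assumptions, not necessarily those of the toolkit developed in this book:

    public class Hopfield {
        private final int size;           // number of neurons
        private final float[][] weights;  // symmetric weights, zero diagonal

        public Hopfield(int size) {
            this.size = size;
            this.weights = new float[size][size];
        }

        // Hebbian training: each stored pattern deepens its own "dip
        // in the floor" by reinforcing correlated neuron activations.
        public void train(float[] pattern) {
            for (int i = 0; i < size; i++) {
                for (int j = 0; j < size; j++) {
                    if (i != j) {
                        weights[i][j] += pattern[i] * pattern[j];
                    }
                }
            }
        }

        // Recall: update neurons in place until the state stops
        // changing -- the "basketball" settling into a local minimum.
        public float[] recall(float[] input, int maxIterations) {
            float[] state = input.clone();
            for (int iter = 0; iter < maxIterations; iter++) {
                boolean changed = false;
                for (int i = 0; i < size; i++) {
                    float sum = 0.0f;
                    for (int j = 0; j < size; j++) {
                        sum += weights[i][j] * state[j];
                    }
                    float next = sum >= 0.0f ? +1.0f : -1.0f;
                    if (next != state[i]) {
                        state[i] = next;
                        changed = true;
                    }
                }
                if (!changed) break; // converged to a stable pattern
            }
            return state;
        }
    }

Training this sketch with a few flattened patterns and then calling recall on a noisy copy of one of them should return the closest stored pattern: the code equivalent of dropping the basketball and letting it settle.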
A common technique in training and using neural networks is to add noise to training data and weights. In the basketball analogy, this is equivalent to "shaking the room" so that the basketball finds a good minimum to settle into rather than a poor local minimum. We use this technique later when implementing back propagation networks. The weights of back propagation networks are also best visualized as defining a very high-dimensional space with a manifold that is very convoluted near areas of local minima. These local minima are centered near the coordinates defined by each input vector.
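As a rough illustration of "shaking the room" in code, the following sketch perturbs a weight matrix with small uniform random noise; the helper class and the noiseScale parameter are assumptions made for this example:

    import java.util.Random;

    public class NoiseUtil {
        private static final Random random = new Random();

        // Add a uniform random value in [-noiseScale, +noiseScale) to
        // every weight, nudging the network out of poor local minima.
        public static void addNoise(float[][] weights, float noiseScale) {
            for (int i = 0; i < weights.length; i++) {
                for (int j = 0; j < weights[i].length; j++) {
                    weights[i][j] += (random.nextFloat() * 2.0f - 1.0f) * noiseScale;
                }
            }
        }
    }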