Java Classes for Hopfield Neural Networks
We will look at the code and discuss the algorithms for storing and recalling patterns at the same time. In a Hopfield neural network simulation, every neuron is connected to every other neuron.
Consider a pair of neurons indexed by i and j. There is a weight W_{i,j} between these neurons that corresponds in the code to the array element weight[i][j]. We can define the energy between the associations of these two neurons as:

energy[i][j] = -weight[i][j] * activation[i] * activation[j]
In the Hopfield neural network simulator, we store activations (i.e., the input values) as floating-point numbers that get clamped to -1 for “off” or +1 for “on”. In the energy equation, we consider an activation that is not clamped to a value of one to be zero. This energy is like “gravitational potential energy” using a basketball court analogy: think of a basketball court with an overlaid 2D grid, where different grid cells on the floor are at different heights representing energy levels. As you throw a basketball onto the court, the ball bounces around and finally stops in a low grid cell near the place where you threw it; that is, it settles at a locally low energy level. Hopfield networks function in much the same way: when shown a pattern, the network attempts to settle in a local minimum energy point as defined by a previously seen training example.
When training a network with a new input, we are looking for a low energy point near the new input vector. The total energy is the sum of the above equation over all i,j.
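To make this total energy calculation concrete, here is a short hypothetical helper method; it is not part of the Hopfield class developed below, and the names weights, inputCells, and numInputs refer to the class fields introduced in the constructor:

// Hypothetical helper for illustration only: sums the pairwise term
// -weight[i][j] * activation[i] * activation[j] over every pair of neurons.
private float totalEnergy() {
  float energy = 0.0f;
  for (int i=0; i<numInputs; i++) {
    for (int j=0; j<numInputs; j++) {
      energy += -weights[i][j] * inputCells[i] * inputCells[j];
    }
  }
  return energy;
}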
The class constructor allocates storage for input values, temporary storage, and a two-dimensional array to store weights:
public Hopfield(int numInputs) {
  this.numInputs = numInputs;
  weights = new float[numInputs][numInputs];
  inputCells = new float[numInputs];
  tempStorage = new float[numInputs];
}

Remember that this model is general purpose: multi-dimensional inputs can be converted to an equivalent one-dimensional array. The method addTrainingData is used to store an input data array for later training. All input values get clamped to an “off” or “on” value by the utility method adjustInput. The utility method truncate truncates floating-point values to an integer value.
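The code for these small utility methods is not reproduced in this section; a minimal sketch of what they might look like, assuming a java.util.Vector field named trainingData that collects the training patterns, is:

// Assumed implementations of the utility methods described above;
// the original source file may differ in detail.
public void addTrainingData(float [] data) {
  trainingData.addElement(data);       // save a pattern for later training
}
private float adjustInput(float x) {   // clamp an input to -1 ("off") or +1 ("on")
  if (x < 0.0f) return -1.0f;
  return 1.0f;
}
private float truncate(float x) {      // drop the fractional part of the value
  return (float)((int) x);
}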
The utility method deltaEnergy has one argument: an index into the input vector. The class variable tempStorage is set during training to be the sum of a row of trained weights. So, the method deltaEnergy returns a measure of the energy difference between the input vector in the current input cells and the training input examples:
private float deltaEnergy(int index) {
  // dot product of one row of the weight matrix with the current input cells:
  float temp = 0.0f;
  for (int j=0; j<numInputs; j++) {
    temp += weights[index][j] * inputCells[j];
  }
  // compare against the stored row sum computed during training:
  return 2.0f * temp - tempStorage[index];
}
The method train is used to set the two-dimensional weight array and the one-dimensional tempStorage array in which each element is the sum of the corresponding row in the two-dimensional weight array:
public void train() {
  for (int j=1; j<numInputs; j++) {            // loop over pairs of neurons (i < j)
    for (int i=0; i<j; i++) {
      for (int n=0; n<trainingData.size(); n++) {  // loop over training patterns
        float [] data = (float []) trainingData.elementAt(n);
        float temp1 = adjustInput(data[i]) * adjustInput(data[j]);
        float temp = truncate(temp1 + weights[j][i]);
        weights[i][j] = weights[j][i] = temp;  // keep the weight matrix symmetric
      }
    }
  }
  for (int i=0; i<numInputs; i++) {
    tempStorage[i] = 0.0f;
    for (int j=0; j<i; j++) {
      tempStorage[i] += weights[i][j];         // row sums used by deltaEnergy
    }
  }
}

Once the arrays weights and tempStorage are defined, it is simple to recall an original input pattern from a similar test pattern:
public float [] recall(float [] pattern, int numIterations) {
  for (int i=0; i<numInputs; i++) {
    inputCells[i] = pattern[i];                // start from the test pattern
  }
  for (int ii=0; ii<numIterations; ii++) {
    for (int i=0; i<numInputs; i++) {
      if (deltaEnergy(i) > 0.0f) {
        inputCells[i] = 1.0f;
      } else {
        inputCells[i] = 0.0f;
      }
    }
  }
  return inputCells;
}
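To show how these pieces fit together, here is a small hypothetical test program; the network size, training patterns, and iteration count are made-up values for illustration and are not taken from the original text:

// Hypothetical usage sketch of the Hopfield class described above.
public class HopfieldExample {
  public static void main(String[] args) {
    Hopfield net = new Hopfield(10);
    // Three training patterns; element values are -1 ("off") or +1 ("on"):
    net.addTrainingData(new float[] { 1, 1, 1, -1, -1, -1, -1, -1, -1, -1 });
    net.addTrainingData(new float[] { -1, -1, -1, 1, 1, 1, -1, -1, -1, -1 });
    net.addTrainingData(new float[] { -1, -1, -1, -1, -1, -1, -1, 1, 1, 1 });
    net.train();
    // Recall from a noisy copy of the first pattern (one element flipped):
    float[] noisy = { 1, 1, -1, -1, -1, -1, -1, -1, -1, -1 };
    float[] recalled = net.recall(noisy, 5);
    for (float v : recalled) System.out.print(v + " ");
    System.out.println();
  }
}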
                