According to Shepherd and Koch (1990), as cited in Haykin (1994), the human brain has more than 10 billion neurons and 60 trillion synapses, or connections between neurons. Even though it is relatively slower than computer systems built from nanotechnology silicon gates, it can perform highly complex, nonlinear, and parallel tasks such as pattern recognition and perception faster and far better than the best computing system humans have ever created.
Figure 1: Neuron or nerve cell.
Because neural networks are good at recognition tasks, early researchers such as McCulloch and Pitts, Grossberg, and Minsky tried to model the nervous processing unit so that its mechanism could be emulated in computing systems. From this perspective, we define an Artificial Neural Network (ANN), usually called simply a Neural Network (NN), as an emulation of the human nervous system performing information processing. Its characteristic ability is to acquire new knowledge through a successful learning process and store it in its information storage, namely its synaptic weights. In more detail, an NN is a generalization of mathematical models of human cognition based on the following assumptions (Fausett, 1994):
• information processing occurs at many simple elements called neurons,
• signals are passed between neurons over connection links,
• each connection link has an associated connection weight which multiplies the signal transmitted,
• each neuron applies an activation function, which is usually nonlinear, to its net input to determine its output signal.
Figure 2: A mathematical model of a neuron.
In the NN model, a neuron takes a set of inputs, $x_1, x_2, \ldots, x_m$, along with a set of connections (also called links or synapses) characterized by weights, $w_{k1}, w_{k2}, \ldots, w_{km}$. The summing junction, $\sum$, sums up the input signals that are amplified by the connection weights. The activation function, $\varphi$, limits the net output to allowable values. The architecture of the NN model is depicted in Figure 2.
The general mathematical equations for neural information processing are given in Equation 1 for the input summing process to obtain $v_k$,

$$v_k = \sum_{j=1}^{m} w_{kj} x_j \qquad (1)$$

and Equation 2 for producing the NN output, $y_k$,

$$y_k = \varphi(v_k) \qquad (2)$$
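As a concrete illustration of Equations 1 and 2, the minimal Python sketch below computes a single neuron's output; the function name, input values, weights, and the choice of a logistic sigmoid activation are assumptions made only for this example.

```python
import math

def neuron_output(x, w, phi):
    """Compute y_k = phi(v_k) for a single neuron k (Equations 1 and 2)."""
    v = sum(w_j * x_j for w_j, x_j in zip(w, x))  # Equation 1: summing junction
    return phi(v)                                  # Equation 2: activation function

# Example with three inputs and a logistic sigmoid activation (values illustrative).
sigmoid = lambda v: 1.0 / (1.0 + math.exp(-v))
y_k = neuron_output(x=[0.5, 1.0, -0.3], w=[0.4, -0.2, 0.7], phi=sigmoid)
```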
2.2. Neural Network Learning Model Taxonomy
According to Haykin's (1994) taxonomy, there are three NN learning models.
• Supervised. The essence of this paradigm is the availability of an external supervisor, so there is an input-output relation from which to find the minimum disagreement between the NN outputs and the examples given by the supervisor.
• Unsupervised or Self-Organized. In this learning paradigm, there is no external teacher and there are no examples for the NN to learn. Instead, the NN performs a competitive learning rule in which the winning neuron is entitled to keep the input in its memory (a one-step sketch of such a rule follows this list).
• Reinforcement Learning. This is the on-line learning of an input-output mapping through a process of trial and error designed to maximize a scalar performance index called the reinforcement signal.
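To make the unsupervised paradigm concrete, the sketch below shows one winner-take-all competitive-learning step in Python; it is a generic illustration, and the distance measure, learning rate, and function name are assumptions rather than a specific algorithm from the cited sources.

```python
def competitive_step(x, weights, lr=0.1):
    """One winner-take-all step: the neuron whose weight vector is closest
    to the input wins and moves its weights toward that input."""
    dists = [sum((wi - xi) ** 2 for wi, xi in zip(w, x)) for w in weights]
    winner = dists.index(min(dists))                  # competition
    weights[winner] = [wi + lr * (xi - wi)            # winner keeps the input
                       for wi, xi in zip(weights[winner], x)]
    return winner

# Example: two cluster neurons competing for a 2-D input (values illustrative).
w = [[0.2, 0.8], [0.9, 0.1]]
win = competitive_step([1.0, 0.0], w)  # neuron 1 wins and moves toward the input
```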
2.3. Supervised Neural Network – Back Propagation Network
The BPN was developed to cope with the limitations of single-layer NNs. The BPN is actually a feedforward NN trained by backpropagation, which means the error signals are propagated in the reverse direction. The primary aim of NN training is to achieve a balance between the ability to respond correctly to the input patterns used for training (memorization) and the ability to give reasonable responses to inputs similar to those used in training (generalization).
Training the NN with the backpropagation mechanism involves three steps (a minimal sketch of these steps follows this list):
• feedforwarding the input training patterns to the NN input layer,
• calculating the NN outputs and backpropagating the associated error,
• adjusting the connection weights to minimize the error.
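The following minimal Python sketch illustrates these three steps for a one-hidden-layer network with sigmoid activations. It is a simplified illustration under assumed initialization and learning rate, not Fausett's full algorithm, and the XOR task is used only as an example.

```python
import math
import random

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def train_bpn(patterns, n_in, n_hidden, n_out, lr=0.5, epochs=5000):
    """One-hidden-layer BPN trained with the three backpropagation steps."""
    random.seed(0)
    # Each weight row carries an extra bias weight (the trailing +1 column).
    W1 = [[random.uniform(-0.5, 0.5) for _ in range(n_in + 1)] for _ in range(n_hidden)]
    W2 = [[random.uniform(-0.5, 0.5) for _ in range(n_hidden + 1)] for _ in range(n_out)]
    for _ in range(epochs):
        for x, t in patterns:
            # Step 1: feedforward the input pattern through the layers.
            h = [sigmoid(sum(w * a for w, a in zip(row, x + [1.0]))) for row in W1]
            y = [sigmoid(sum(w * a for w, a in zip(row, h + [1.0]))) for row in W2]
            # Step 2: backpropagate the associated error (sigmoid derivative y(1-y)).
            d_out = [(tk - yk) * yk * (1.0 - yk) for tk, yk in zip(t, y)]
            d_hid = [hj * (1.0 - hj) * sum(d_out[k] * W2[k][j] for k in range(n_out))
                     for j, hj in enumerate(h)]
            # Step 3: adjust the connection weights to minimize the error.
            for k in range(n_out):
                for j, a in enumerate(h + [1.0]):
                    W2[k][j] += lr * d_out[k] * a
            for j in range(n_hidden):
                for i, a in enumerate(x + [1.0]):
                    W1[j][i] += lr * d_hid[j] * a
    return W1, W2

# Example: learning XOR, a task single-layer NNs cannot solve.
xor = [([0.0, 0.0], [0.0]), ([0.0, 1.0], [1.0]),
       ([1.0, 0.0], [1.0]), ([1.0, 1.0], [0.0])]
W1, W2 = train_bpn(xor, n_in=2, n_hidden=4, n_out=1)
```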
After passing the training session, the NN uses the final connection weights when recognizing the input patterns given to it. As is common in a multilayer NN, the BPN has three layers, namely the input layer, the hidden layer, and the output layer. The number of neurons in the input and output layers depends on the input pattern and the output target. The number of hidden layers depends on the particular application, but commonly one hidden layer is sufficient for many applications. The architecture of the BPN is depicted in Figure 3.
Figure 3: The architecture of the BPN model (Priyanto et al., 2008).
1) Training Algorithm. Refer to Fausett (1994) for the detailed BPN training algorithm.
2) Activation Function. There are five common NN activation functions, namely:
• the identity function, $f(x) = x$,
• the binary step function with threshold θ, which produces binary output 0 or 1,
• the binary sigmoid (logistic sigmoid),
• the bipolar sigmoid,
• the hyperbolic tangent.
The sigmoid function and the hyperbolic tangent are the most common activation functions for training an NN with the backpropagation mechanism; minimal definitions of these functions are sketched below.
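For reference, the sketch below gives minimal Python definitions of these activation functions; the function names are illustrative, and the identity function and hyperbolic tangent appear only as comments since they are one-liners.

```python
import math

def binary_step(v, theta=0.0):
    """Binary step with threshold theta: 1 if v >= theta, else 0."""
    return 1.0 if v >= theta else 0.0

def binary_sigmoid(v):
    """Logistic sigmoid: output in (0, 1); derivative is f(v) * (1 - f(v))."""
    return 1.0 / (1.0 + math.exp(-v))

def bipolar_sigmoid(v):
    """Bipolar sigmoid: output in (-1, 1); derivative is 0.5 * (1 + f(v)) * (1 - f(v))."""
    return 2.0 / (1.0 + math.exp(-v)) - 1.0

# The identity function is f(v) = v; the hyperbolic tangent is math.tanh(v),
# which, like the bipolar sigmoid, has output in (-1, 1).
```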
2.4. Unsupervised Neural Network – Adaptive Resonance Theory
Figure 4: ART1-NN architecture (Sumari et al., 2008a).
The ART-NN, as presented in Figure 4, is designed to facilitate control over the degree of similarity of patterns placed in the same cluster, and it overcomes the stability-plasticity problem faced by other NNs. The ART1-NN is designed to cluster binary input vectors. It has two layers: the F1 layer, which is divided into two sublayers, F1(a) as the input part and F1(b) as the interface part, and the F2 cluster layer, along with a reset unit that is used to control the degree of similarity of patterns placed in the same cluster unit. The F1 and F2 layers are connected by two groups of weight paths, the bottom-up weights and the top-down weights. To control the learning process, some complementary units are also included in this NN.
For performing pattern matching, ART is provided with a parameter called the vigilance parameter, ρ, with a value in the range 0 < ρ ≤ 1. Higher values of ρ are applied for the training session, while lower values are applied for the operating session.
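As a minimal sketch of this matching mechanism, and assuming binary input vectors as in ART1, the vigilance test performed with the reset unit can be illustrated as follows; the function name and example values are hypothetical.

```python
def vigilance_test(x, w_topdown, rho):
    """ART1-style similarity check: accept the winning cluster only if the
    match ratio |x AND w| / |x| is at least the vigilance parameter rho."""
    x_norm = sum(x)                                        # |x| for a binary vector
    match = sum(xi & wi for xi, wi in zip(x, w_topdown))   # |x AND w|
    return x_norm > 0 and match / x_norm >= rho

# Example: the match ratio is 2/3, which fails rho = 0.8, so the reset unit
# would inhibit this cluster and the search would continue.
accepted = vigilance_test([1, 1, 0, 1], [1, 0, 0, 1], rho=0.8)
```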
The ART1-NN architecture consists of two parts, as depicted in Figure 4.
• Computation Units. These consist of the F1 layer (input part and interface part), the F2 layer, and the reset unit. The F2 layer is also called the competitive layer.
• Complementary Units. These units provide a mechanism so that the computation carried out by the ART1 algorithm can be done using NN principles. They are also called gain control units. For the ART1-NN algorithm in detail, refer to Skapura (1991).
2.5. Aircraft Radar Cross Section and Speed