12.4 BASIC CONCEPT OF NEURAL COMPUTING

Neural computing is a problem-solving methodology that attempts to mimic how our brains function. It is one of several successful approaches to machine learning. The resulting model from neural computing is often called an artificial neural network (ANN) or a neural network (NN).

The study of neural networks began from a single neuron, called a perceptron, which is capable of converting a set of inputs to its outputs. After some initial success, the perceptron model fell into disfavor because it was unable to solve the simple XOR problem. A dramatic resurgence came in the late 1980s, when the multiple-layer architecture was proposed and proved powerful in solving complex problems. This renewed interest developed because of the need for brainlike information processing, advances in computer technology, and progress in neuroscience toward better understanding of the mechanisms of the brain. Potential commercial and military applications with large payoffs also motivated (and funded) this research.
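
The perceptron's XOR limitation can be made concrete with a short sketch (an illustrative aside, not from the text; the `perceptron` and `find_weights` names are assumptions). A single perceptron is a linear threshold unit, so a brute-force search over weights finds a solution for AND but none for XOR:

```python
# Illustrative sketch: a single perceptron is a linear threshold unit,
# so no choice of weights can reproduce XOR (it is not linearly separable).
from itertools import product

def perceptron(x1, x2, w1, w2, bias):
    """Step-activation perceptron: fires (1) if the weighted sum exceeds 0."""
    return 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0

def find_weights(target):
    """Brute-force search over a small weight grid for an exact fit."""
    grid = [w / 2 for w in range(-4, 5)]          # -2.0 .. 2.0 in 0.5 steps
    for w1, w2, bias in product(grid, repeat=3):
        if all(perceptron(x1, x2, w1, w2, bias) == target(x1, x2)
               for x1, x2 in product([0, 1], repeat=2)):
            return (w1, w2, bias)
    return None                                   # no linear separator found

AND = lambda a, b: a & b
XOR = lambda a, b: a ^ b

print(find_weights(AND))   # prints some valid weight triple
print(find_weights(XOR))   # prints None: XOR defeats a single perceptron
```

Solving XOR requires the multiple-layer architecture mentioned above, in which hidden neurons let the network carve out nonlinear decision boundaries.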

BIOLOGICAL AND ARTIFICIAL NEURAL NETWORKS

The human brain is composed of special cells called neurons. These cells do not die when a human is injured (all other cells reproduce to replace themselves and then die). This phenomenon may explain why we retain information. Information storage spans sets of neurons. The estimated number of neurons in a human brain is 50 billion to 150 billion, of which there are more than 100 different kinds. Neurons are partitioned into groups called networks. Each network contains several thousand highly interconnected neurons. Thus, the brain can be viewed as a collection of neural networks.

The ability to learn and react to changes in our environment requires intelligence. Thinking and intelligent behavior are controlled by the brain and the central nervous system. People who suffer brain damage have difficulty learning and reacting to changing environments. Even so, undamaged parts of the brain can often compensate with new learning.

A portion of a network composed of two cells is shown in Figure 12.4. The cell itself includes a nucleus (at the center). To the left of cell 1, the dendrites provide input signals to the cell. To the right, the axon sends output signals to cell 2 via the axon terminals. These axon terminals merge with the dendrites of cell 2. Signals can be transmitted unchanged, or they can be altered by synapses. A synapse is able to increase or decrease the strength of the connection from neuron to neuron and cause excitation or inhibition of a subsequent neuron. This is where information is stored.

An artificial neural network model emulates a biological neural network. Neural computing actually uses a very limited set of concepts from biological neural systems (see AIS in Focus 12.4). It is more of an analogy to the human brain than an accurate model of it. Neural concepts are usually implemented as software simulations of the massively parallel processes that involve processing elements (also called artificial neurons or neurodes) interconnected in a network architecture. The artificial neuron receives inputs analogous to the electrochemical impulses the dendrites of biological neurons receive from other neurons. The output of the artificial neuron corresponds to signals sent out from a biological neuron over its axon. These artificial signals can be changed by weights in a manner similar to the physical changes that occur in the synapses (see Figure 12.5).
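
A processing element of this kind can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not the chapter's notation: the `processing_element` name and the sigmoid activation are choices made for this sketch.

```python
# Minimal sketch of one processing element (artificial neuron):
# a weighted sum of input signals, squashed by a sigmoid activation.
import math

def processing_element(inputs, weights, bias=0.0):
    """Combine inputs via 'synaptic' weights, then apply a sigmoid."""
    net = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-net))   # output signal in (0, 1)

# Three input signals arriving on the 'dendrites', each scaled by a weight
output = processing_element([0.9, 0.1, 0.4], [0.6, -0.3, 0.8])
print(round(output, 3))
```

The weights play the role of the synapses: making a weight larger or more negative strengthens the excitatory or inhibitory effect of that input on the neuron's output, and learning amounts to adjusting those weights.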

It is important to recognize that artificial neural networks were originally proposed to model the human brain's activities. The human brain has much more com-
