
computes the error of the nodes in the output layer, denoted δk, and then adjusts the weights vjk:

δk = yk (1 − yk)(tk − yk) .......... (6)
vjk = vjk + β · δk · hj .......... (7)

where β is the momentum constant (learning rate) and tk is the target value.

5. Compute the errors of the nodes in the hidden layer, denoted τj, and adjust the weights wij:

τj = hj (1 − hj) ∑k δk · vjk .......... (8)
wij = wij + β · τj · xi .......... (9)

6. Move to the next training pattern and repeat from step 2. The learning process stops when the yk are close enough to tk. Termination can be based on the error E; for instance, learning stops when E < 0.0001:

E = ½ ∑p ∑k (ykp − tkp)² .......... (10)

where tkp is the target value at output node k for the p-th pattern of the training set, and ykp is the prediction value at output node k for the p-th pattern. The trained neural network can then be used to predict the target t by presenting input values x to the input layer.
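The update rules in equations (6)–(9) and the per-pattern error of equation (10) can be sketched as a single training step; the sigmoid activation, the network sizes, and all function names here are illustrative assumptions, not from the text:

```python
import math

def sigmoid(z):
    # logistic activation; assumed here, the text does not name the activation
    return 1.0 / (1.0 + math.exp(-z))

def backprop_step(x, t, w, v, beta):
    """One training step for a single pattern.
    x: inputs, t: targets, beta: learning rate,
    w[i][j]: input-to-hidden weights, v[j][k]: hidden-to-output weights."""
    n_hidden = len(v)
    n_out = len(v[0])
    # forward pass through the hidden and output layers
    h = [sigmoid(sum(x[i] * w[i][j] for i in range(len(x))))
         for j in range(n_hidden)]
    y = [sigmoid(sum(h[j] * v[j][k] for j in range(n_hidden)))
         for k in range(n_out)]
    # eq. (6): error of the output nodes
    delta = [y[k] * (1 - y[k]) * (t[k] - y[k]) for k in range(n_out)]
    # eq. (8): error of the hidden nodes
    tau = [h[j] * (1 - h[j]) * sum(delta[k] * v[j][k] for k in range(n_out))
           for j in range(n_hidden)]
    # eq. (7): adjust hidden-to-output weights
    for j in range(n_hidden):
        for k in range(n_out):
            v[j][k] += beta * delta[k] * h[j]
    # eq. (9): adjust input-to-hidden weights
    for i in range(len(x)):
        for j in range(n_hidden):
            w[i][j] += beta * tau[j] * x[i]
    # contribution of this pattern to the error E of eq. (10)
    return 0.5 * sum((y[k] - t[k]) ** 2 for k in range(n_out))
```

Repeating this step over the training set and stopping once the accumulated E falls below the chosen threshold (e.g. 0.0001) gives the full training loop described above.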

2.4.2.3 Back Propagation Neural Network Classification Method

During the last decade, researchers have applied neural networks to land cover classification using remote sensing satellite data. Many researchers have used neural networks in their studies (Sadly, 1998):
a. Chang (1994) used a dynamic learning neural network for remote sensing applications, obtained good results, and concluded that the neural network is a feasible classifier for very large volume images.
b. A similar study was carried out by Bischof (1992), who showed that the neural network outperforms the maximum likelihood method.
c. Yoshida (1994) proposed a neural network classification method for remotely sensed data analysis in order to improve neighborhood relations between pixels and decrease the error probability of pattern classification, and obtained a more realistic and less noisy result than a conventional statistical method.

A back propagation neural network usually includes an input layer, one or several hidden layers, and an output layer, as the biological neural network does. The three types of layers are (Fu, 1994):
- The input layer: the nodes that encode the instance presented to the network for processing. For example, each input unit may be designated by an attribute value possessed by the instance.
- The hidden layer: the nodes that are not directly observable and hence hidden. They provide the nonlinearities of the network.
- The output layer: the nodes that encode the possible concept or value to be assigned to the instance under consideration. For example, each output unit may represent a class of objects.

The processing neurons in each layer are called processing units, or simply units. The units of the input layer, hidden layers and output layer are called input units, hidden units and output units, respectively. The direction of information transmission is from the input layer to the output layer.
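The three-layer structure described above amounts to a simple feed-forward pass from input units through hidden units to output units. A minimal sketch, in which the layer sizes, weight layout, and sigmoid activation are illustrative assumptions rather than details from the text:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Feed-forward pass through one hidden layer. w_ih[j] holds the input
# weights of hidden unit j; w_ho[k] holds the hidden weights of output
# unit k. Sizes are illustrative, e.g. 4 input units (spectral bands)
# and 2 output units (one per land cover class).
def forward(x, w_ih, w_ho):
    hidden = [sigmoid(sum(wi * xi for wi, xi in zip(weights, x)))
              for weights in w_ih]
    output = [sigmoid(sum(wo * hj for wo, hj in zip(weights, hidden)))
              for weights in w_ho]
    return output
```

Information flows strictly forward here; only during training (described next) does the error travel backward through the same connections.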
The neural network must be trained in advance before it can be used for discriminant analysis. The purpose of the training is to adjust the association strengths, or weight coefficients, between the neurons. The criterion of the training is to minimize the error between the computed output vector and the known target vector of the training patterns. The training process transmits this error backward through the network, adjusting the weights among the units of the output layer, the hidden layer and the input layer; that is why this kind of network is called a back propagation neural network (Zhou, 1997).

The purpose of classification is to automatically categorize image pixels into classes based on land cover type. The back propagation neural network is a supervised classification method that was developed by Rumelhart et al. in 1986. It consists of several layers of neurons: an input layer, one or more hidden layers, and an output layer. Each layer consists of many neurons.
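Once trained, the supervised classifier assigns each pixel to the class whose output unit responds most strongly. A minimal sketch of that decision step; the class names and output values are hypothetical, since the text does not name specific land cover classes:

```python
# Pick the land cover class of the strongest output unit. The class
# list below is hypothetical; the text names no specific classes.
def classify(outputs, class_names):
    best = max(range(len(outputs)), key=lambda k: outputs[k])
    return class_names[best]

classes = ["water", "forest", "urban"]
label = classify([0.12, 0.81, 0.27], classes)  # the "forest" unit wins here
```

Applying this to every pixel's network output produces the classified land cover map.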

2.4.3 Classification Accuracy Assessment