4.2.1.1 Classification Image with Back Propagation Neural Network
Classification using the back propagation neural network model consists of several steps:
1. Creating Data
This function creates and initializes a new back propagation neural network segment, which can then be trained to recognize classes with the back propagation neural network training program. Two types of data are used in this process: spatial data in the form of a Landsat 7 ETM+ image, and training-area data in vector form. The vector data were made to match exactly the training data used for the maximum likelihood method, with a pixel size of 30 x 30 meters. The model has three parameters: the number of input units, the number of hidden units, and the number of samples. After several repetitions, the best combination was found to be 6 inputs, 5 hidden units, and 500 samples, as shown in Figure 4.10. The inputs come from the per-band layers of the image, and the number of samples is the number of pixels used for training.
Figure 4.10. Creates Data for Neural Network
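As a hedged illustration only (not the software's actual API), the structure described above, 6 inputs (one per band), 5 hidden units, and a set of 500 training pixels, could be initialized in plain NumPy as follows; all function and variable names here are hypothetical, and the class count is an assumed value:

```python
import numpy as np

def create_bpnn(n_inputs=6, n_hidden=5, n_classes=4, seed=0):
    """Initialize weights for a one-hidden-layer back propagation network.

    n_inputs  -- one input unit per image band (6 bands used here)
    n_hidden  -- hidden units (5 found best in this research)
    n_classes -- number of land-cover classes (assumed value for the sketch)
    """
    rng = np.random.default_rng(seed)
    # Small random starting weights; +1 row in each layer for the bias term.
    w_hidden = rng.uniform(-0.5, 0.5, size=(n_inputs + 1, n_hidden))
    w_output = rng.uniform(-0.5, 0.5, size=(n_hidden + 1, n_classes))
    return w_hidden, w_output

# 500 training samples drawn from pixels inside the training areas
# (random stand-ins here for real band values scaled to [0, 1]).
rng = np.random.default_rng(1)
X = rng.random((500, 6))
w_hidden, w_output = create_bpnn()
print(w_hidden.shape, w_output.shape)  # (7, 5) (6, 4)
```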
2. Training
This command trains the back propagation neural network to learn the input patterns of interest. The momentum rate optionally specifies a momentum between 0.0 and 1.0 for training, and the learning rate optionally specifies a learning rate between 0.0 and 1.0. The learning and momentum rates affect how quickly the neural network stabilizes. The momentum rate can be used to speed up learning: a high momentum rate of 0.9 trains with larger steps than a lower rate of 0.1. The momentum term helps reduce oscillation between iterations and allows a higher learning rate to be specified without the risk of non-convergence.
For this research, the momentum rate used was 0.9 and the learning rate was 0.1. The training process requires five parameters: momentum rate, learning rate, maximum total error, maximum individual error, and maximum number of iterations. After several runs, the combination obtained for training was momentum rate 0.9, learning rate 0.1, maximum total error 0.1, maximum individual error 0.1, and maximum number of iterations 1000. The settings are shown in Figure 4.11.
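The training procedure above, gradient descent with a momentum term, stopping once the total error drops below the maximum total error or the iteration limit is reached, can be sketched in NumPy as follows. This is an illustrative re-implementation under stated assumptions (sigmoid activations, one-hot targets), not the software's actual routine:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_bpnn(X, T, n_hidden=5, lr=0.1, momentum=0.9,
               max_total_error=0.1, max_iter=1000, seed=0):
    """Back propagation training with a momentum term (illustrative sketch).

    Stops when the total error falls below max_total_error or after
    max_iter iterations -- the stopping rule described in the text.
    """
    rng = np.random.default_rng(seed)
    Xb = np.hstack([X, np.ones((len(X), 1))])           # bias column
    W1 = rng.uniform(-0.5, 0.5, (Xb.shape[1], n_hidden))
    W2 = rng.uniform(-0.5, 0.5, (n_hidden + 1, T.shape[1]))
    dW1 = np.zeros_like(W1); dW2 = np.zeros_like(W2)    # momentum buffers
    total_error = np.inf
    for it in range(max_iter):
        H = sigmoid(Xb @ W1)                            # forward pass
        Hb = np.hstack([H, np.ones((len(H), 1))])
        Y = sigmoid(Hb @ W2)
        err = T - Y
        total_error = 0.5 * np.sum(err ** 2) / len(X)
        if total_error <= max_total_error:
            break
        # Backward pass (sigmoid derivative = y * (1 - y)).
        delta_out = err * Y * (1 - Y)
        delta_hid = (delta_out @ W2[:-1].T) * H * (1 - H)
        # New step = learning-rate gradient step + momentum * previous step.
        dW2 = lr * (Hb.T @ delta_out) + momentum * dW2
        dW1 = lr * (Xb.T @ delta_hid) + momentum * dW1
        W2 += dW2; W1 += dW1
    return W1, W2, total_error, it

# Tiny synthetic example: 2 bands, 2 classes, one-hot targets.
rng = np.random.default_rng(1)
X = rng.random((40, 2))
T = np.eye(2)[(X[:, 0] > X[:, 1]).astype(int)]
W1, W2, e, it = train_bpnn(X, T)
print(it, round(e, 3))
```

A higher momentum carries a fraction of the previous weight change into the current one, which is what damps oscillation between iterations as the text describes.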
Figure 4.11. Training Data for Neural Network
3. Classify
This function classifies multispectral imagery using a back propagation neural network. The classification can be restricted to pixels under a specified rectangular window; if no window is specified, every pixel in the image is classified. The classification model has three parameters: null class, most-likely-class images, and resample mode. The best combination for this research was null class yes, most-likely-class images 1, and resample mode nearest. The resample mode should match that used for the maximum likelihood method, which is nearest neighbour.
Figure 4.12. Classify Process for Neural Network
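The per-pixel classification step can be sketched as a forward pass through the trained network, taking the most likely class for each pixel and assigning a null class where no class activation is strong. This is a simplified sketch with hypothetical names and an assumed threshold, not the software's own classify function:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def classify(image, W1, W2, null_threshold=0.5):
    """Classify an (rows, cols, bands) image with trained weights W1, W2.

    Each pixel receives the index of its most likely class; pixels whose
    best activation falls below null_threshold get the null class (0).
    """
    rows, cols, bands = image.shape
    X = image.reshape(-1, bands)
    Xb = np.hstack([X, np.ones((len(X), 1))])
    Hb = np.hstack([sigmoid(Xb @ W1), np.ones((len(X), 1))])
    Y = sigmoid(Hb @ W2)
    labels = Y.argmax(axis=1) + 1                 # classes numbered from 1
    labels[Y.max(axis=1) < null_threshold] = 0    # null class
    return labels.reshape(rows, cols)

# Tiny demo with random weights standing in for a trained network.
rng = np.random.default_rng(0)
img = rng.random((4, 4, 6))                       # 4x4 pixels, 6 bands
W1 = rng.uniform(-0.5, 0.5, (7, 5))
W2 = rng.uniform(-0.5, 0.5, (6, 3))
result = classify(img, W1, W2)
print(result.shape)  # (4, 4)
```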
4. Report
This function generates a report for the specified neural network segment. The report shows the iteration process until the expected error is reached. If the expected maximum total error cannot be reached, the iterations still run, but the reported maximum total error remains equal to the lowest error achieved.
In order to acquire a better result, the process was repeated several times, changing the number of samples from 100 to 5000. In addition, the number of iterations was also varied, from 100 up to 1100 iterations.
From these repetitions it can be seen that the more samples are used in the learning process, the larger the total error becomes. All the sample sizes described in the table above were run using 1000 iterations. When the number of iterations was increased to 1100, the maximum total error obtained did not change significantly. For example, with 500 samples and 1000 iterations the maximum total error was 0.449; when the iterations were increased to 1100, the error was still 0.449. The conclusion is that the maximum total error achievable by the model with 500 samples is 0.449, and this error does not change even when the number of iterations is increased to 1100. As the number of samples grows, the maximum total error moves progressively further from the expected value of 0.1 (Figure 4.11).
From the explanation above, it can be concluded that the model stops the learning process in the training area once the maximum total error or the expected number of iterations has been reached. As noted before, the back propagation neural network method is a non-parametric classification, which means that the classification process uses a weighting for each pixel inside the training area instead of a statistical algorithm. Pixels were acquired randomly inside the training area; the more pixels are used, the more pixels there are with high values.
This also causes the maximum individual error to become larger.
The accuracy of the classification results under different numbers of samples can be measured using an error matrix. The error matrix was obtained by tabulating the classification result against the reference data. The reference data used were the vector data from the training area. The error matrix results for each number of samples can be seen in Table 4.4.
Table 4.4. Overall Accuracy for Back Propagation Neural Network Method

No.   Number of Samples   Overall Accuracy (%)
1     100                 68.62
2     500                 69.89
3     1000                70.72
4     4000                73.34
5     5000                67.54
From the table above it can be observed that, in general, the more samples used in the learning phase, the higher the accuracy of the classification result using the neural network. The calculation of overall accuracy using the error matrix can be seen in Appendix 3. Because the accuracy with 4000 samples was classified as high, this classification result will later be used for comparison with the maximum likelihood method. Nevertheless, after the number of samples was increased to 5000 pixels, the overall accuracy dropped. This occurred because the 5000 pixels used exceeded the number of pixels in the training area, which is 4000, so that several pixels outside the training area were included in the training process.
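The overall-accuracy figures in Table 4.4 come from an error (confusion) matrix comparing classified pixels against the reference data. A minimal sketch of that computation, with made-up labels rather than the actual Ciliwung data, might look like:

```python
import numpy as np

def error_matrix(reference, classified, n_classes):
    """Build an error (confusion) matrix: rows = reference, cols = classified."""
    m = np.zeros((n_classes, n_classes), dtype=int)
    for r, c in zip(reference, classified):
        m[r, c] += 1
    return m

def overall_accuracy(matrix):
    """Overall accuracy = correctly classified pixels / total pixels."""
    return matrix.trace() / matrix.sum()

# Hypothetical reference (training-area) labels vs classified labels.
ref = np.array([0, 0, 1, 1, 2, 2, 2, 0])
cls = np.array([0, 1, 1, 1, 2, 2, 0, 0])
m = error_matrix(ref, cls, 3)
print(round(100 * overall_accuracy(m), 2))  # 75.0
```

The diagonal of the matrix holds the correctly classified pixels, so the overall accuracy is simply the trace divided by the total pixel count.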
The classification result for 4000 samples can be seen in Figure 4.13 below.
Figure 4.13. The Classification Result of Ciliwung Watershed Using Back
Propagation Neural Network Classification Method
4.3 Classification Accuracy Assessment