CHAPTER 4
ANALYSIS AND DESIGN

4.1 Analysis
Before entering the Backpropagation process, the data is normalized. The steps taken are reading the contents of the master data, transforming the data into the 0-1 range, and writing the CSV file that will be used in the Backpropagation process, both for learning and for testing.
Illustration 4.1: Flowchart Backpropagation Learning Process

In the learning process flowchart above, there are three processes in Backpropagation, namely feed forward, feed backward, and weight update. Before entering the first process in Backpropagation, the steps taken are reading the contents of the learning data from the CSV file, then determining the values of the learning rate, maximum epoch, and maximum error that will be used in the calculation process. The next step is generating random weights, ranging from -1 to 1, that are used to calculate the values in the hidden layer and the output layer.
In the feed forward process, the node values of the hidden layer and the output layer are calculated. Using the six input values and the weights between the input layer and the hidden layer, the values of the hidden layer are calculated with the sigmoid activation function. The hidden layer values and the weights between the hidden layer and the output layer are then used to calculate the values of the output layer with the same activation function. Next, the error of the calculated output is measured using MSE (Mean Squared Error). If the error and epoch values have not reached the limits specified in the initial step, the calculation continues with the feed backward process, in which the weight changes are calculated; the old weights are then updated with the new weights. The calculations are repeated continuously until the epoch and the maximum error value reach the specified limits.
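The learning flow described above (feed forward, feed backward, weight update, looped until the maximum epoch or maximum error is reached) can be sketched in Python. This is a minimal single-hidden-layer illustration under assumed function and variable names, not the project's actual code:

```python
import math
import random

def sigmoid(x):
    # Logistic activation used for both the hidden and the output layer
    return 1.0 / (1.0 + math.exp(-x))

def train(data, targets, n_in, n_hidden, n_out,
          lr=0.2, max_epoch=1000, max_error=0.01):
    """Sketch of the learning loop: feed forward, feed backward, weight update."""
    # Generate random weights in [-1, 1]; row 0 of each matrix holds the bias weights
    V = [[random.uniform(-1, 1) for _ in range(n_hidden)] for _ in range(n_in + 1)]
    W = [[random.uniform(-1, 1) for _ in range(n_out)] for _ in range(n_hidden + 1)]
    for epoch in range(max_epoch):
        mse = 0.0
        for x, t in zip(data, targets):
            # Feed forward: hidden values, then output values
            z = [sigmoid(V[0][j] + sum(x[i] * V[i + 1][j] for i in range(n_in)))
                 for j in range(n_hidden)]
            y = [sigmoid(W[0][k] + sum(z[j] * W[j + 1][k] for j in range(n_hidden)))
                 for k in range(n_out)]
            mse += sum((t[k] - y[k]) ** 2 for k in range(n_out)) / n_out
            # Feed backward: error terms for the output and hidden layers
            dk = [(t[k] - y[k]) * y[k] * (1 - y[k]) for k in range(n_out)]
            dj = [sum(dk[k] * W[j + 1][k] for k in range(n_out)) * z[j] * (1 - z[j])
                  for j in range(n_hidden)]
            # Weight update: old weight plus weight change
            for k in range(n_out):
                W[0][k] += lr * dk[k]
                for j in range(n_hidden):
                    W[j + 1][k] += lr * dk[k] * z[j]
            for j in range(n_hidden):
                V[0][j] += lr * dj[j]
                for i in range(n_in):
                    V[i + 1][j] += lr * dj[j] * x[i]
        if mse / len(data) <= max_error:
            break
    return V, W
```

The stopping test compares the mean MSE over the learning data against the maximum error, matching the flowchart's two exit conditions.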
Illustration 4.2: Flowchart Backpropagation Testing Process
In the testing flowchart above, the testing process is done by reading the contents of the CSV file that contains the test data. The data is then calculated by the Backpropagation process with the optimal weights obtained from the learning process. Finally, the error percentages for 1, 2, and 3 hidden layers using Backpropagation are displayed, and the classification results are stored in a CSV file.
4.2 Design
4.2.1 Learning Process
1. The first step is to determine the architecture of the Backpropagation network. This project uses an input layer with six nodes; one, two, and three hidden layers with four nodes in each hidden layer; and an output layer with three nodes.
Illustration 4.3: Architecture Backpropagation
Where:
X1 = temperature
X2 = pressure
X3 = humidity
X4 = wind
X5 = rain
X6 = clouds
2. Determine the coefficient of the learning rate, the maximum epoch, and the maximum error.
3. Read the master data. The weather classes are encoded as three-bit targets: Clear = 001, Clouds = 010, Rain = 100.

Table 4.1: Example Learning Data
Temp      Press   Humidity   Wind    Rain     Clouds   Weather
304.396   973     47         …       …        56       Rain
295.21    974     100        1       2.085    48       Rain
299.354   974     94         1       4.1425   32       Clouds
296.021   977     98         …       …        8        Clear
294.308   975     95         …       …        68       Clouds

4. Normalize the data using the formula:

x' = 0.8(x - a) / (b - a) + 0.1

Where:
x' = the value after normalization
x = the value to be processed
a = the minimum of the data
b = the maximum of the data

Table 4.2: Normalized Learning Data
Temp     Press    Humidity   Wind     Rain     Clouds   Weather
0.9000   0.1000   0.1000     …        …        0.7400   Rain
0.1715   0.3000   0.9000     0.9000   0.5026   0.6333   Rain
0.5001   0.3000   0.8094     0.9000   0.9000   0.4200   Clouds
0.2358   0.9000   0.8698     …        …        0.1000   Clear
0.1000   …        …          …        …        0.9000   Clouds
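The normalization formula can be sketched as a small Python helper. The temperature minimum and maximum used below are taken from the example rows of Table 4.1 and are assumptions about the actual master data:

```python
def normalize(x, a, b):
    # Min-max normalization into [0.1, 0.9]: x' = 0.8(x - a)/(b - a) + 0.1
    return 0.8 * (x - a) / (b - a) + 0.1

# Temperature 296.021 with assumed minimum 294.308 and maximum 304.396:
print(round(normalize(296.021, 294.308, 304.396), 4))  # 0.2358, matching Table 4.2
```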
5. Initialize the weight values by generating small random numbers between -1 and 1.
Table 4.3: Example Input-Hidden Weight
      Z1     Z2     Z3     Z4
1     0.2    0.1    0.3    0.4
X1    0.1    0.3    0.1    0.2
X2    -0.4   0.1    0.1    -0.2
X3    0.3    0.1    0.5    0.3
X4    0.2    0.3    0.1    0.1
X5    0.1    0.2    0.4    0.1
X6    0.1    0.2    0.2    0.2

Table 4.4: Example Hidden-Output Weight
      Y1     Y2     Y3
1     0.1    0.3    0.1
Z1    0.1    0.1    0.1
Z2    0.3    0.2    0.2
Z3    0.2    0.4    0.2
Z4    0.1    0.1    0.3
6. Calculate the weighted sum of the input values and the weights between the input layer and the hidden layer:

z_inj = V_0j + Σ (i=1..n) X_i V_ij

Where:
z_inj = the weighted input signal of hidden node j
V_0j = bias weight between input layer and hidden layer
X_i = input value
V_ij = weight between input layer and hidden layer

Example:
z_in1 = 0.2 + (0.1 × 0.1) + (-0.4 × 0.8921) + (0.3 × 0.644) + (0.2 × 0.1) + (0.1 × 0.1) + (0.1 × 0.1) = 0.08636
z_in2 = 0.1 + (0.3 × 0.1) + (0.1 × 0.8921) + (0.1 × 0.644) + (0.3 × 0.1) + (0.2 × 0.1) + (0.2 × 0.1) = 0.35361
z_in3 = 0.3 + (0.1 × 0.1) + (0.1 × 0.8921) + (0.5 × 0.644) + (0.1 × 0.1) + (0.4 × 0.1) + (0.2 × 0.1) = 0.79121
z_in4 = 0.4 + (0.2 × 0.1) + (-0.2 × 0.8921) + (0.3 × 0.644) + (0.1 × 0.1) + (0.1 × 0.1) + (0.2 × 0.1) = 0.47478

7. Calculate the hidden node values using the sigmoid activation function:

Z_j = f(z_inj) = 1 / (1 + e^(-z_inj))

Where:
Z_j = hidden node value
z_inj = the weighted input signal of hidden node j

Example:
Z_1 = 1 / (1 + e^(-0.08636)) = 0.5215
Z_2 = 1 / (1 + e^(-0.35361)) = 0.5874
Z_3 = 1 / (1 + e^(-0.79121)) = 0.6880
Z_4 = 1 / (1 + e^(-0.47478)) = 0.6165
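Steps 6 and 7 can be reproduced with a short script. The weight layout (bias weights stored in row 0) is an assumption about how Table 4.3 is held in memory:

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

# Normalized input row from the worked example (X1..X6)
x = [0.1, 0.8921, 0.644, 0.1, 0.1, 0.1]

# Input-to-hidden weights from Table 4.3: V[0] is the bias row,
# V[i] holds the weights from input Xi to hidden nodes Z1..Z4
V = [
    [0.2, 0.1, 0.3, 0.4],    # bias
    [0.1, 0.3, 0.1, 0.2],    # X1
    [-0.4, 0.1, 0.1, -0.2],  # X2
    [0.3, 0.1, 0.5, 0.3],    # X3
    [0.2, 0.3, 0.1, 0.1],    # X4
    [0.1, 0.2, 0.4, 0.1],    # X5
    [0.1, 0.2, 0.2, 0.2],    # X6
]

z_in = [V[0][j] + sum(x[i] * V[i + 1][j] for i in range(6)) for j in range(4)]
Z = [sigmoid(v) for v in z_in]
print([round(v, 5) for v in z_in])  # [0.08636, 0.35361, 0.79121, 0.47478]
print([round(v, 4) for v in Z])     # [0.5216, 0.5875, 0.6881, 0.6165] (the text truncates to 0.5215, 0.5874, 0.6880)
```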
8. Calculate the weighted sum of the hidden values and the weights between the hidden layer and the output layer:

y_ink = W_0k + Σ (j=1..n) Z_j W_jk

Where:
y_ink = the weighted input signal of output node k
W_0k = bias weight between hidden layer and output layer
Z_j = hidden node value
W_jk = weight between hidden layer and output layer

Example:
y_in1 = 0.1 + (0.1 × 0.5215) + (0.3 × 0.5874) + (0.2 × 0.6880) + (0.1 × 0.6165) = 0.52762
y_in2 = 0.3 + (0.1 × 0.5215) + (0.2 × 0.5874) + (0.4 × 0.6880) + (0.1 × 0.6165) = 0.80648
y_in3 = 0.1 + (0.1 × 0.5215) + (0.2 × 0.5874) + (0.2 × 0.6880) + (0.3 × 0.6165) = 0.59218

9. Calculate the output values using the sigmoid activation function:

Y_k = f(y_ink) = 1 / (1 + e^(-y_ink))

Where:
Y_k = output node value

Example:
Y_1 = 1 / (1 + e^(-0.52762)) = 0.6289
Y_2 = 1 / (1 + e^(-0.80648)) = 0.6913
Y_3 = 1 / (1 + e^(-0.59218)) = 0.6438
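Steps 8 and 9 can be checked the same way, feeding the rounded hidden values from step 7 through the Table 4.4 weights (the bias row at index 0 is again an assumed layout):

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

# Hidden node values Z1..Z4 from step 7
Z = [0.5215, 0.5874, 0.6880, 0.6165]

# Hidden-to-output weights from Table 4.4: W[0] is the bias row,
# W[j] holds the weights from Zj to outputs Y1..Y3
W = [
    [0.1, 0.3, 0.1],  # bias
    [0.1, 0.1, 0.1],  # Z1
    [0.3, 0.2, 0.2],  # Z2
    [0.2, 0.4, 0.2],  # Z3
    [0.1, 0.1, 0.3],  # Z4
]

y_in = [W[0][k] + sum(Z[j] * W[j + 1][k] for j in range(4)) for k in range(3)]
Y = [sigmoid(v) for v in y_in]
print([round(v, 5) for v in y_in])  # [0.52762, 0.80648, 0.59218]
print([round(v, 4) for v in Y])     # [0.6289, 0.6914, 0.6439] (the text truncates to 0.6289, 0.6913, 0.6438)
```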
10. Calculate the weight changes between the output layer and the hidden layer:

δ_k = (t_k - Y_k) f'(y_ink) = (t_k - Y_k) Y_k (1 - Y_k)
ΔW_jk = α δ_k Z_j

Where:
δ_k = error value propagated back to the hidden nodes
t_k = target output
Y_k = output node value
α = learning rate value
Z_j = hidden node value
ΔW_jk = weight change between output layer and hidden layer

Example:
δ_1 = (0 - 0.6289) × 0.6289 × (1 - 0.6289) = -0.1467
δ_2 = (0 - 0.6913) × 0.6913 × (1 - 0.6913) = -0.1475
δ_3 = (0 - 0.6438) × 0.6438 × (1 - 0.6438) = -0.1476
ΔW_10 = 0.2 × -0.1467 × 1 = -0.0293
ΔW_11 = 0.2 × -0.1467 × 0.5215 = -0.0152
ΔW_12 = 0.2 × -0.1467 × 0.5874 = -0.0170
ΔW_13 = 0.2 × -0.1467 × 0.6880 = -0.0201
ΔW_14 = 0.2 × -0.1467 × 0.6165 = -0.0179
ΔW_20 = 0.2 × -0.1475 × 1 = -0.0295
ΔW_21 = 0.2 × -0.1475 × 0.5215 = -0.0152
ΔW_22 = 0.2 × -0.1475 × 0.5874 = -0.0171
ΔW_23 = 0.2 × -0.1475 × 0.6880 = -0.0202
ΔW_24 = 0.2 × -0.1475 × 0.6165 = -0.0180
ΔW_30 = 0.2 × -0.1476 × 1 = -0.0295
ΔW_31 = 0.2 × -0.1476 × 0.5215 = -0.0153
ΔW_32 = 0.2 × -0.1476 × 0.5874 = -0.0171
ΔW_33 = 0.2 × -0.1476 × 0.6880 = -0.0202
ΔW_34 = 0.2 × -0.1476 × 0.6165 = -0.0180
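Step 10 can be verified numerically. The rounded Y and Z values from the earlier steps are reused here, so the last digits can differ slightly from the text:

```python
lr = 0.2                               # learning rate alpha
t = [0, 0, 0]                          # target outputs used in the worked example
Y = [0.6289, 0.6913, 0.6438]           # output values from step 9
Z = [0.5215, 0.5874, 0.6880, 0.6165]   # hidden values from step 7

# delta_k = (t_k - Y_k) * Y_k * (1 - Y_k)
delta = [(t[k] - Y[k]) * Y[k] * (1 - Y[k]) for k in range(3)]

# dW[k] holds the changes for the bias (input 1) and Z1..Z4 feeding output k
dW = [[lr * delta[k] * z for z in [1.0] + Z] for k in range(3)]

print([round(d, 4) for d in delta])  # [-0.1468, -0.1475, -0.1476] (the text truncates the first to -0.1467)
```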
11. Calculate the weight changes between the hidden layer and the input layer:

δ_inj = Σ (k=1..m) δ_k W_jk
δ_j = δ_inj f'(z_inj) = δ_inj Z_j (1 - Z_j)
ΔV_ij = α δ_j X_i

Where:
δ_inj = the sum of the delta inputs to hidden node j from the output nodes
δ_j = error value propagated back to the input nodes
ΔV_ij = weight change between hidden layer and input layer
X_i = input node value

Example:
δ_in1 = (-0.1467 × 0.1) + (-0.1475 × 0.1) + (-0.1476 × 0.1) = -0.0441
δ_in2 = (-0.1467 × 0.3) + (-0.1475 × 0.2) + (-0.1476 × 0.2) = -0.1030
δ_in3 = (-0.1467 × 0.2) + (-0.1475 × 0.4) + (-0.1476 × 0.2) = -0.1178
δ_in4 = (-0.1467 × 0.1) + (-0.1475 × 0.1) + (-0.1476 × 0.3) = -0.0737
δ_1 = -0.0441 × 0.5215 × (1 - 0.5215) = -0.0110
δ_2 = -0.1030 × 0.5874 × (1 - 0.5874) = -0.0250
δ_3 = -0.1178 × 0.6880 × (1 - 0.6880) = -0.0253
δ_4 = -0.0737 × 0.6165 × (1 - 0.6165) = -0.0175
ΔV_10 = 0.2 × -0.0110 × 1 = -0.0022
ΔV_11 = 0.2 × -0.0110 × 0.1 = -0.0002
ΔV_12 = 0.2 × -0.0110 × 0.8921 = -0.0019
ΔV_13 = 0.2 × -0.0110 × 0.644 = -0.0014
ΔV_14 = 0.2 × -0.0110 × 0.1 = -0.0002
ΔV_15 = 0.2 × -0.0110 × 0.1 = -0.0002
ΔV_16 = 0.2 × -0.0110 × 0.1 = -0.0002
ΔV_20 = 0.2 × -0.0250 × 1 = -0.0050
ΔV_21 = 0.2 × -0.0250 × 0.1 = -0.0005
ΔV_22 = 0.2 × -0.0250 × 0.8921 = -0.0044
ΔV_23 = 0.2 × -0.0250 × 0.644 = -0.0032
ΔV_24 = 0.2 × -0.0250 × 0.1 = -0.0005
ΔV_25 = 0.2 × -0.0250 × 0.1 = -0.0005
ΔV_26 = 0.2 × -0.0250 × 0.1 = -0.0005
ΔV_30 = 0.2 × -0.0253 × 1 = -0.0050
ΔV_31 = 0.2 × -0.0253 × 0.1 = -0.0005
ΔV_32 = 0.2 × -0.0253 × 0.8921 = -0.0045
ΔV_33 = 0.2 × -0.0253 × 0.644 = -0.0032
ΔV_34 = 0.2 × -0.0253 × 0.1 = -0.0005
ΔV_35 = 0.2 × -0.0253 × 0.1 = -0.0005
ΔV_36 = 0.2 × -0.0253 × 0.1 = -0.0005
ΔV_40 = 0.2 × -0.0175 × 1 = -0.0035
ΔV_41 = 0.2 × -0.0175 × 0.1 = -0.0003
ΔV_42 = 0.2 × -0.0175 × 0.8921 = -0.0031
ΔV_43 = 0.2 × -0.0175 × 0.644 = -0.0022
ΔV_44 = 0.2 × -0.0175 × 0.1 = -0.0003
ΔV_45 = 0.2 × -0.0175 × 0.1 = -0.0003
ΔV_46 = 0.2 × -0.0175 × 0.1 = -0.0003
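Step 11 admits the same kind of check. The delta values from step 10 and the Table 4.4 weights are reused here, rounded, so small last-digit differences from the text are expected:

```python
lr = 0.2
x = [0.1, 0.8921, 0.644, 0.1, 0.1, 0.1]   # normalized inputs X1..X6
Z = [0.5215, 0.5874, 0.6880, 0.6165]      # hidden values from step 7
d_out = [-0.1467, -0.1475, -0.1476]       # delta_k from step 10

# W[j][k]: weight from hidden node Z(j+1) to output Y(k+1), Table 4.4 without the bias row
W = [
    [0.1, 0.1, 0.1],
    [0.3, 0.2, 0.2],
    [0.2, 0.4, 0.2],
    [0.1, 0.1, 0.3],
]

# delta_inj = sum_k delta_k * W_jk
d_in = [sum(d_out[k] * W[j][k] for k in range(3)) for j in range(4)]
# delta_j = delta_inj * Z_j * (1 - Z_j)
d_hid = [d_in[j] * Z[j] * (1 - Z[j]) for j in range(4)]
# dV[j] holds the changes for the bias (input 1) and X1..X6 feeding hidden node j
dV = [[lr * d_hid[j] * xi for xi in [1.0] + x] for j in range(4)]
```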
12. Calculate the new weights from the old weights and the weight changes:

W_jk(new) = W_jk(old) + ΔW_jk
V_ij(new) = V_ij(old) + ΔV_ij

Where:
W_jk = weight between hidden layer and output layer
V_ij = weight between input layer and hidden layer
ΔW_jk = the weight change between hidden layer and output layer
ΔV_ij = the weight change between input layer and hidden layer

Example:
W_10 = 0.1 + (-0.0293) = 0.0707
W_11 = 0.1 + (-0.0152) = 0.0848
W_12 = 0.3 + (-0.0170) = 0.283
W_13 = 0.2 + (-0.0201) = 0.1799
W_14 = 0.1 + (-0.0179) = 0.0821
W_20 = 0.3 + (-0.0295) = 0.2705
W_21 = 0.1 + (-0.0152) = 0.0848
W_22 = 0.2 + (-0.0171) = 0.1829
W_23 = 0.4 + (-0.0202) = 0.3798
W_24 = 0.1 + (-0.0180) = 0.082
W_30 = 0.1 + (-0.0295) = 0.0705
W_31 = 0.1 + (-0.0153) = 0.0847
W_32 = 0.2 + (-0.0171) = 0.1829
W_33 = 0.2 + (-0.0202) = 0.1798
W_34 = 0.3 + (-0.0180) = 0.282

Table 4.5: New Hidden-Output Weight
      Y1       Y2       Y3
1     0.0707   0.2705   0.0705
Z1    0.0848   0.0848   0.0847
Z2    0.283    0.1829   0.1829
Z3    0.1799   0.3798   0.1798
Z4    0.0821   0.082    0.282

V_10 = 0.2 + (-0.0022) = 0.1978
V_11 = 0.1 + (-0.0002) = 0.0998
V_12 = -0.4 + (-0.0019) = -0.4019
V_13 = 0.3 + (-0.0014) = 0.2986
V_14 = 0.2 + (-0.0002) = 0.1998
V_15 = 0.1 + (-0.0002) = 0.0998
V_16 = 0.1 + (-0.0002) = 0.0998
V_20 = 0.1 + (-0.0050) = 0.095
V_21 = 0.3 + (-0.0005) = 0.2995
V_22 = 0.1 + (-0.0044) = 0.0956
V_23 = 0.1 + (-0.0032) = 0.0968
V_24 = 0.3 + (-0.0005) = 0.2995
V_25 = 0.2 + (-0.0005) = 0.1995
V_26 = 0.2 + (-0.0005) = 0.1995
V_30 = 0.3 + (-0.0050) = 0.295
V_31 = 0.1 + (-0.0005) = 0.0995
V_32 = 0.1 + (-0.0045) = 0.0955
V_33 = 0.5 + (-0.0032) = 0.4968
V_34 = 0.1 + (-0.0005) = 0.0995
V_35 = 0.4 + (-0.0005) = 0.3995
V_36 = 0.2 + (-0.0005) = 0.1995
V_40 = 0.4 + (-0.0035) = 0.3965
V_41 = 0.2 + (-0.0003) = 0.1997
V_42 = -0.2 + (-0.0031) = -0.2031
V_43 = 0.3 + (-0.0022) = 0.2978
V_44 = 0.1 + (-0.0003) = 0.0997
V_45 = 0.1 + (-0.0003) = 0.0997
V_46 = 0.2 + (-0.0003) = 0.1997

Table 4.6: New Input-Hidden Weight
      Z1        Z2       Z3       Z4
1     0.1978    0.095    0.295    0.3965
X1    0.0998    0.2995   0.0995   0.1997
X2    -0.4019   0.0956   0.0955   -0.2031
X3    0.2986    0.0968   0.4968   0.2978
X4    0.1998    0.2995   0.0995   0.0997
X5    0.0998    0.1995   0.3995   0.0997
X6    0.0998    0.1995   0.1995   0.1997
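The update rule of step 12 is a plain element-wise addition. This snippet rebuilds the first rows of Table 4.5 from the Table 4.4 weights and the step 10 changes:

```python
# Old hidden-to-output weights (Table 4.4): rows are bias, Z1..Z4; columns Y1..Y3
W_old = [
    [0.1, 0.3, 0.1],
    [0.1, 0.1, 0.1],
    [0.3, 0.2, 0.2],
    [0.2, 0.4, 0.2],
    [0.1, 0.1, 0.3],
]
# Weight changes from step 10, arranged the same way
dW = [
    [-0.0293, -0.0295, -0.0295],
    [-0.0152, -0.0152, -0.0153],
    [-0.0170, -0.0171, -0.0171],
    [-0.0201, -0.0202, -0.0202],
    [-0.0179, -0.0180, -0.0180],
]
# W_new = W_old + delta_W, element-wise
W_new = [[round(w + d, 4) for w, d in zip(rw, rd)] for rw, rd in zip(W_old, dW)]
print(W_new[0])  # [0.0707, 0.2705, 0.0705] -- first row of Table 4.5
```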
4.2.2 Testing Process
Testing is done by performing steps 1-9 of the learning process, using the optimal weights obtained from the learning process.
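The testing step can be sketched as a forward pass with the learned weights. Decoding the three outputs by taking the largest one, with Y1, Y2, Y3 read as the Rain, Clouds, Clear bits of the 100/010/001 encoding, is an assumption, since the text does not state a decoding rule; the weights below are the initial ones from Tables 4.3 and 4.4, used only to make the example self-contained:

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def classify(x, V, W):
    """Forward pass (steps 6-9); V and W carry the bias weights in row 0."""
    z = [sigmoid(V[0][j] + sum(xi * V[i + 1][j] for i, xi in enumerate(x)))
         for j in range(len(V[0]))]
    y = [sigmoid(W[0][k] + sum(zj * W[j + 1][k] for j, zj in enumerate(z)))
         for k in range(len(W[0]))]
    # Assumed decoding: the largest output wins; Y1/Y2/Y3 = Rain/Clouds/Clear bits
    labels = ["Rain", "Clouds", "Clear"]
    return labels[max(range(len(y)), key=y.__getitem__)]

V = [[0.2, 0.1, 0.3, 0.4], [0.1, 0.3, 0.1, 0.2], [-0.4, 0.1, 0.1, -0.2],
     [0.3, 0.1, 0.5, 0.3], [0.2, 0.3, 0.1, 0.1], [0.1, 0.2, 0.4, 0.1],
     [0.1, 0.2, 0.2, 0.2]]
W = [[0.1, 0.3, 0.1], [0.1, 0.1, 0.1], [0.3, 0.2, 0.2],
     [0.2, 0.4, 0.2], [0.1, 0.1, 0.3]]

print(classify([0.1, 0.8921, 0.644, 0.1, 0.1, 0.1], V, W))  # Clouds (Y2 is the largest output here)
```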