M.F.V. Ruslau, B.S.S. Ulama
SWUP
MA.67
adopt the techniques introduced by Boonkiatpong and Sinthupinyo (2011). Classification accuracy does not increase significantly; however, the network integration method helps to reduce errors and improve accuracy, especially for large and heterogeneous data sets.
2. Materials and methods
2.1 Back propagation model
Back propagation is a learning technique that adjusts the weights in a neural network by propagating weight changes backward through the network. It is essentially a gradient descent method that minimizes the total squared error of the output computed by the network (Cherkassky & Mulier, 2007). Back propagation is one of the simplest and most general methods for the supervised training of multilayer neural networks, in which neurons are connected to one another by weighted connections. Back propagation has become popular for classification problems in recent years, and the algorithm can be extended to multi-class classification. A multi-class problem is similar to a binary one: the network structure is built, the weights are randomized, and the output units are computed. The difference lies in how the output is calculated in the multi-category case, where each output is coded in binary during back propagation learning: if an observation belongs to a class, that class's output node takes the value 1, while every other class takes the value 0.
Figure 1. Backpropagation architecture for multi classification category.
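The 1/0 output coding described above is one-hot target encoding. A minimal sketch (the function name and use of NumPy are our own, not the paper's):

```python
import numpy as np

def one_hot_targets(labels, n_classes):
    """Encode class labels as binary target vectors: the node for the
    true class takes the value 1, every other class node takes 0."""
    targets = np.zeros((len(labels), n_classes))
    targets[np.arange(len(labels)), labels] = 1
    return targets

# e.g. four poverty-status categories labelled 0..3:
# each row of the result has a single 1 in the true class's position
print(one_hot_targets([2, 0, 3], 4))
```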
The architecture consists of one input layer, one hidden layer, and one output layer. In this case, X_i is the i-th input variable, W_ik is the weight from the i-th input node to the k-th node in the hidden layer, Z_k is the k-th node in the hidden layer, Y_j is the j-th output node (the number of output nodes equals the number of categories), and V_kj is the weight from the k-th node in the hidden layer to the j-th node in the output layer.
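The forward pass through this architecture can be sketched as follows (a minimal illustration in the paper's notation; the sigmoid activation, omission of bias terms, and the layer sizes are our assumptions, not stated in the text):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def forward(x, W, V):
    """One forward pass through a single-hidden-layer network.
    x : input vector (X_i)
    W : input-to-hidden weights (W_ik)
    V : hidden-to-output weights (V_kj)
    Returns hidden activations Z_k and outputs Y_j."""
    z = sigmoid(x @ W)   # Z_k, hidden layer
    y = sigmoid(z @ V)   # Y_j, one output node per category
    return z, y

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 5))   # 16 inputs, 5 hidden nodes (illustrative sizes)
V = rng.normal(size=(5, 4))    # 4 output categories
z, y = forward(rng.random(16), W, V)
print(y.shape)  # one activation per category
```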
2.2 Implementation procedure
In this study, we predict the class of an outcome variable. The response consists of four categories: the response variable is poverty status. This situation is identical to the multi-class learning process, i.e., the binary classification process is repeated n times. The predictor and response variables are scaled to [0, 1] before being entered into the network.
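The [0, 1] scaling is standard min-max rescaling, applied per variable. A minimal sketch (the function name is ours):

```python
import numpy as np

def minmax_scale(X):
    """Rescale each column (variable) of X to the [0, 1] interval."""
    lo = X.min(axis=0)
    hi = X.max(axis=0)
    return (X - lo) / (hi - lo)

X = np.array([[2.0, 10.0],
              [4.0, 30.0],
              [6.0, 20.0]])
print(minmax_scale(X))  # each column now spans exactly [0, 1]
```

In practice the minimum and maximum from the training data should be reused when scaling new observations, so that train and test inputs share the same scale.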
Our experiment works with a large dataset of poor households: 65,658 households with 16 predictor variables, consisting of one ordinal variable, three interval variables, and twelve nominal variables. We divide the original training set N into n sub-datasets, N_1 to N_n. Each sub-dataset is trained with the same back propagation network structure with a single hidden layer. The number of nodes in the hidden layer was selected by training with different numbers of hidden nodes; the results are shown in Figure 2. We then collected the weights from each node of the lowest-error network and used these weights to initialize training on the original dataset with the same structure.
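The sub-dataset procedure above can be sketched schematically as follows. This is our own reading of the method, not the paper's code; `train_fn` and `error_fn` are hypothetical placeholders for the actual back propagation training routine and error measure, which the text does not specify at this level of detail:

```python
import numpy as np

def pick_initial_weights(X, y, n_subsets, train_fn, error_fn):
    """Split the training set into n sub-datasets, train the same
    network structure on each, and keep the weights of the network
    with the lowest error, to be used as initial weights when
    training on the full original dataset."""
    rng = np.random.default_rng(0)
    subsets = np.array_split(rng.permutation(len(X)), n_subsets)
    best_w, best_err = None, np.inf
    for sub in subsets:
        w = train_fn(X[sub], y[sub])       # train on this sub-dataset
        err = error_fn(w, X[sub], y[sub])  # its training error
        if err < best_err:
            best_w, best_err = w, err
    return best_w
```

The returned weights would then initialize one final training run over the full dataset with the same single-hidden-layer structure.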
3. Results and discussion