The number of training sets is M, and the i-th class has $M_i$ training sets. The i-th training image is represented by $A_i$, the mean image of the i-th class by $\bar{A}_i$, and the mean image of all training sets by $\bar{A}$. The 2D-LDA scatter matrices can be calculated as:
$$S_W^R = \sum_{i=1}^{c}\left(A_i - \bar{A}_i\right) R\, R^T \left(A_i - \bar{A}_i\right)^T \qquad (6)$$
$$S_B^R = \sum_{i=1}^{c} M_i \left(\bar{A}_i - \bar{A}\right) R\, R^T \left(\bar{A}_i - \bar{A}\right)^T \qquad (7)$$
The initialization value of R can be defined as an identity matrix of dimension k, where k represents the number of eigenfaces used. The results of Equations 6 and 7 can be used to compute the covariance as follows:
$$C_R = \left(S_W^R\right)^{-1} S_B^R \qquad (8)$$
The result of Equation 8 can be utilized to calculate the eigenvectors. These eigenvectors are used as the initialization value of L:
$$L = \left[R_1, R_2, R_3, \ldots, R_m\right]$$
To compute the value of $C_L$, it is necessary to first calculate the values of $S_W^L$ and $S_B^L$:
$$S_W^L = \sum_{i=1}^{c}\left(A_i - \bar{A}_i\right)^T L\, L^T \left(A_i - \bar{A}_i\right) \qquad (9)$$
$$S_B^L = \sum_{i=1}^{c} M_i \left(\bar{A}_i - \bar{A}\right)^T L\, L^T \left(\bar{A}_i - \bar{A}\right) \qquad (10)$$
The value of $C_L$ can then be computed by using:
$$C_L = \left(S_W^L\right)^{-1} S_B^L \qquad (11)$$
The value of R can be updated by using the eigenvectors of Equation 11:
$$R = \left[L_1, L_2, L_3, \ldots, L_m\right] \qquad (12)$$
The values of L and R are used to obtain the projection matrices. However, this method has a limitation: the values of L and R depend on the number of iterations. The time complexity of computing the feature extraction matrices L and R is $O(n^3)$.
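For concreteness, a minimal NumPy sketch of this iterative scheme (Equations 6-12) is given below. The function name two_directional_2dlda, the input layout (an M x h x w image array with integer class labels), the use of a pseudo-inverse, and the fixed iteration count are assumptions of this sketch, not details taken from the paper.

```python
import numpy as np

def two_directional_2dlda(images, labels, k, n_iter=5):
    """Illustrative sketch of iterative 2D-LDA (Equations 6-12).

    images : array of shape (M, h, w), one matrix per training image
    labels : class label of each training image
    k      : number of eigenvectors kept for L and R (number of eigenfaces)
    """
    classes = np.unique(labels)
    A_bar = images.mean(axis=0)                      # mean image of all training sets
    class_means = {c: images[labels == c].mean(axis=0) for c in classes}

    h, w = images.shape[1:]
    R = np.eye(w, k)                                 # R initialised as an identity-like matrix with k columns

    for _ in range(n_iter):
        # Equations 6-7: scatter matrices projected through R
        SwR = np.zeros((h, h))
        SbR = np.zeros((h, h))
        for c in classes:
            Ac, Mi = class_means[c], np.sum(labels == c)
            for A in images[labels == c]:
                D = A - Ac
                SwR += D @ R @ R.T @ D.T
            Dc = Ac - A_bar
            SbR += Mi * (Dc @ R @ R.T @ Dc.T)
        # Equation 8: C_R = (S_W^R)^(-1) S_B^R; its leading eigenvectors give L
        eigval, eigvec = np.linalg.eig(np.linalg.pinv(SwR) @ SbR)
        L = eigvec[:, np.argsort(-eigval.real)[:k]].real

        # Equations 9-10: scatter matrices projected through L
        SwL = np.zeros((w, w))
        SbL = np.zeros((w, w))
        for c in classes:
            Ac, Mi = class_means[c], np.sum(labels == c)
            for A in images[labels == c]:
                D = A - Ac
                SwL += D.T @ L @ L.T @ D
            Dc = Ac - A_bar
            SbL += Mi * (Dc.T @ L @ L.T @ Dc)
        # Equation 11: C_L = (S_W^L)^(-1) S_B^L; its eigenvectors update R (Equation 12)
        eigval, eigvec = np.linalg.eig(np.linalg.pinv(SwL) @ SbL)
        R = eigvec[:, np.argsort(-eigval.real)[:k]].real

    return L, R                                      # features of an image A: L.T @ A @ R
```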
4. Proposed Method
The main idea of the proposed method is to modify 2D-LDA. The training set is not converted into a one-dimensional vector. The average of the training set is computed for each class, and the average over all classes is also computed. The covariance of the training set can be computed by using the following equation:
$$C = \left(S_{W2D}\left(S_{W2D}\right)^T\right)^{-1}\left(S_{B2D}\left(S_{B2D}\right)^T\right) \qquad (13)$$
In this proposed method, the zero mean of the training sets has been modified by multiplying $S_{W2D}$ by its transpose, followed by multiplying $S_{B2D}$ by its transpose, as seen in the following equation:
$$z = S_{W2D}\left(S_{W2D}\right)^T S_{B2D}\left(S_{B2D}\right)^T \qquad (14)$$
In this case, the values of $S_{W2D}$ and $S_{B2D}$ can be calculated by using the following equations:
$$S_{W2D} = \sum_{i=1}^{c}\left(A_i - \bar{A}_i\right)^T\left(A_i - \bar{A}_i\right) \qquad (15)$$
$$S_{B2D} = \sum_{i=1}^{c} M_i \left(\bar{A}_i - \bar{A}\right)^T\left(\bar{A}_i - \bar{A}\right) \qquad (16)$$
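As an illustration only, the two-dimensional scatter matrices of Equations 15 and 16, together with the covariance of Equation 13 as reconstructed above, might be computed as in the following sketch; the identifiers scatter_2d, covariance_2d, images, and labels are assumptions of this sketch rather than notation from the paper.

```python
import numpy as np

def scatter_2d(images, labels):
    """Sketch of Equations 15-16: within-class and between-class scatter
    of the two-dimensional images, computed without reshaping into 1D vectors."""
    classes = np.unique(labels)
    A_bar = images.mean(axis=0)                        # mean image of all training sets
    w = images.shape[2]
    Sw2d = np.zeros((w, w))
    Sb2d = np.zeros((w, w))
    for c in classes:
        Ac = images[labels == c].mean(axis=0)          # mean image of class c
        Mi = np.sum(labels == c)
        for A in images[labels == c]:
            Sw2d += (A - Ac).T @ (A - Ac)              # Equation 15
        Sb2d += Mi * ((Ac - A_bar).T @ (Ac - A_bar))   # Equation 16
    return Sw2d, Sb2d

def covariance_2d(Sw2d, Sb2d):
    """Covariance of Equation 13, as reconstructed here:
    C = (S_W2D S_W2D^T)^(-1) (S_B2D S_B2D^T)."""
    return np.linalg.pinv(Sw2d @ Sw2d.T) @ (Sb2d @ Sb2d.T)
```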
$S_{W2D}$ and $S_{B2D}$ represent the within-class and between-class scatter of the two-dimensional matrices; they can be obtained from the training sets without transformation. The time complexity of obtaining the values of $S_{W2D}$ and $S_{B2D}$ is $O(n^2)$. The result of Equation 13 is used to compute the eigenvectors. The eigenvectors are used as the optimal projection matrix, whereas the weight of a training set can be achieved by using the following equation:
$$W_{2D} = A_i\, V \qquad (17)$$
where $V$ denotes the optimal projection matrix formed by the eigenvectors of Equation 13. The eigenvectors of Equation 13 are also used to obtain the weights of the testing sets. The results of Equation 17 are used to measure the similarity of the weights between the training and testing sets. The features extracted by the proposed method are two-dimensional matrices; the feature extraction result of a training set has the same size as the original image. To achieve a high recognition rate, it is necessary to choose the dominant features. The most dominant feature corresponds to the largest eigenvalue. If the number of features chosen is d, then the number of vector elements is dh, where h represents the image height.
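A minimal sketch of the projection of Equation 17 together with the selection of the d most dominant features (those tied to the largest eigenvalues) could look as follows; the names project_and_select, C, and d are introduced here for illustration and are not taken from the paper.

```python
import numpy as np

def project_and_select(A, C, d):
    """Sketch of Equation 17 plus dominant-feature selection.

    A : a single image matrix of shape (h, w)
    C : covariance matrix from Equation 13, shape (w, w)
    d : number of dominant features (eigenvectors) kept
    """
    eigval, eigvec = np.linalg.eig(C)
    order = np.argsort(-eigval.real)     # largest eigenvalues first
    V = eigvec[:, order[:d]].real        # optimal projection matrix (w x d)
    W2d = A @ V                          # weight of the image (Equation 17), shape (h, d)
    return W2d.flatten()                 # d*h feature elements, h = image height
```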
To determine the class of a testing set, it is necessary to measure the similarity between the weights of the training and testing sets. In this research, four methods were used to measure the similarity, including the Euclidean distance $D_1$ and the Manhattan distance $D_2$, as seen in the following equations:
$$D_1 = \sqrt{\sum_{j=1}^{H}\left(W_{2D,j}^{Train} - W_{2D,j}^{Test}\right)^2} \qquad (18)$$
$$D_2 = \sum_{j=1}^{h}\left|W_{2D,j}^{Train} - W_{2D,j}^{Test}\right| \qquad (19)$$
The final decision of the similarity measurement is the smallest value of the result of each equation.
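The two similarity measures shown here (Equations 18 and 19) and the final decision rule (the smallest distance wins) can be sketched as below; the names euclidean_d1, manhattan_d2, and classify are assumptions of this sketch.

```python
import numpy as np

def euclidean_d1(w_train, w_test):
    """Equation 18: Euclidean distance between two flattened weight vectors."""
    return np.sqrt(np.sum((w_train - w_test) ** 2))

def manhattan_d2(w_train, w_test):
    """Equation 19: Manhattan distance between two flattened weight vectors."""
    return np.sum(np.abs(w_train - w_test))

def classify(train_weights, train_labels, w_test, distance=euclidean_d1):
    """Final decision: assign the label of the training weight with the
    smallest distance to the testing weight."""
    distances = [distance(w, w_test) for w in train_weights]
    return train_labels[int(np.argmin(distances))]
```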
5. Experimental Results and Analysis