
INDOMS, 14 April 2012

3.3 Penalty Method with a Noncoercive Objective Function

x f  Percentages of protein are assumed to follow a perturbed continuous function x f   and its optimization follows the Lagrangian ,   x L  for  P ,i.e ~ , 1 x g x f x L m i i i            . m = the number of constraints. Applied this construction to the studied problem, one yields t B Y x t B Y x L 2 1 2 ,                             . 2 1 2 2 2 t B Y t B Y t B Y                         Since ,       x L , we obtain  t ;  Y ;  B .; 2   . Since  B  Y , to   2 1 2   B Y B   = 0. One deduces 2 B  = 0 which is absolutely wrong since  B and   . Thus we have shown there exists no practical true optimizer to satisfy ,       x L . Therefore numerical methods in fmincon.m and fminunc.m fail. Up to now, we are dealing with optimizations that require solving of linear and nonlinear systems. There are several modern optimization algorithms avoiding these: such as anneling simulation, ant colony algorithms and genetic algorithm,particle swarm algorithm where their algorithms are inspired by nature behaviours. One may refer to Rao.S.S.2009, Brownlee, J. 2011 for beginners on engineering optimization. The ant colony algorithm has been developed by some authors by renewing the steps of evaporation in this algorithm. Additionally, there exists no requirement of convexity of the objective function for using these algorithms and one should not solve of gradient function to find its optimizers. In the next section, one also requires linear algebra in statistics when we are working with data analysis.

IV. LINEAR ALGEBRA ON MULTIVARIATE ANALYSIS

4.1 Discriminant Analysis (Parhusip and Natangku, 2011)

One of the problems in multivariate analysis is to discriminate data into groups; we discuss only two groups here. We have a data set containing various Indonesian foods, for which the protein, fat, carbohydrate and calcium content of each food has been observed. Each food item is to be classified into one of two groups. The classification rule says that a sample \(x\) is assigned to group \(\pi_1\) if inequality (1) is satisfied, i.e.

\[ (\bar{x}_1 - \bar{x}_2)' S_{pooled}^{-1} x - \tfrac{1}{2} (\bar{x}_1 - \bar{x}_2)' S_{pooled}^{-1} (\bar{x}_1 + \bar{x}_2) \geq 0, \qquad (1) \]

with

\[ S_{pooled} = \frac{(n_1 - 1) S_1 + (n_2 - 1) S_2}{n_1 + n_2 - 2} \]

the pooled covariance matrix, a linear combination of the covariance matrix of the first group, denoted \(S_1\), and the covariance matrix of the second group, denoted \(S_2\). Formula (1) is obtained from a study of the expected cost of misclassification; the discussion is omitted here for simplicity (see Johnson and Wichern, 2007, for details).

There are 50 types of food, classified into two groups. The first group is characterized as food containing mostly protein and fat, rather than the calcium and carbohydrate that characterize the second group. Our result, obtained by employing Eq. (1), is depicted in Figure 6. An upward vertical bar in the histogram denotes that the food is in the first group. If a bar is too small, close to zero, then the food cannot be discriminated significantly into one of the groups.

Figure 6. Classification of 50 foods into 2 groups (only the first 25 are shown). The values on the vertical axis are obtained from the left-hand side of Equation (1).

We expect that Figure 6 is now usable to a user. For instance, the first food, called 'Bandeng Presto', contains more protein and fat than carbohydrate and calcium. Due to space limitations, other applications of linear algebra in multivariate analysis cannot be shown.
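The classification rule (1) can be sketched in a few lines of NumPy. The two groups below are made-up 4-dimensional samples (protein, fat, carbohydrate, calcium) drawn around well-separated means; the paper's 50-food data set is not reproduced here.

```python
import numpy as np

# Made-up data: group 1 is protein/fat rich, group 2 is carb/calcium rich.
rng = np.random.default_rng(1)
group1 = rng.normal([20, 15, 5, 3], 1.0, size=(10, 4))
group2 = rng.normal([5, 3, 40, 20], 1.0, size=(12, 4))

x1, x2 = group1.mean(axis=0), group2.mean(axis=0)
n1, n2 = len(group1), len(group2)
S1 = np.cov(group1, rowvar=False)
S2 = np.cov(group2, rowvar=False)
# Pooled covariance: S_pooled = [(n1-1) S1 + (n2-1) S2] / (n1 + n2 - 2).
S_pooled = ((n1 - 1) * S1 + (n2 - 1) * S2) / (n1 + n2 - 2)

def score(x):
    """Left-hand side of Eq. (1); x is assigned to group 1 when score >= 0."""
    d = np.linalg.solve(S_pooled, x1 - x2)   # S_pooled^{-1} (x1 - x2)
    return d @ x - 0.5 * d @ (x1 + x2)

print(score(group1[0]) >= 0, score(group2[0]) >= 0)
```

Plotting the scores of all samples as a bar chart reproduces the kind of figure described above: bars above zero fall in group 1, bars below zero in group 2, and bars near zero are not discriminated significantly.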
The joint density of multivariate variables, the maximum likelihood estimation of the mean and covariance matrix, and factor analysis all require the inverse of the covariance matrix to exist. Dealing with linear combinations of random multivariables requires a good deal of general linear algebra for a formal analysis; see Johnson and Wichern (1997).
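The requirement that the covariance inverse exist is easy to illustrate numerically: with fewer observations than variables the sample covariance matrix is rank-deficient, so any density or likelihood formula involving its inverse breaks down. The data below are arbitrary illustrations, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
X_ok = rng.normal(size=(50, 4))    # n = 50 observations of p = 4 variables
X_bad = rng.normal(size=(3, 4))    # n = 3 < p = 4: too few observations

S_ok = np.cov(X_ok, rowvar=False)
S_bad = np.cov(X_bad, rowvar=False)

# With n > p and continuous data, S_ok has full rank p, so its inverse exists.
rank_ok = np.linalg.matrix_rank(S_ok)
# Centering removes one degree of freedom, so rank(S_bad) <= n - 1 = 2 < p:
# S_bad is singular and cannot be inverted.
rank_bad = np.linalg.matrix_rank(S_bad)
print(rank_ok, rank_bad)
```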

V. LINEAR ALGEBRA ON A SYSTEM OF DIFFERENTIAL EQUATIONS