3. The Eigenvector

From linear algebra we know that for a given matrix A and a given vector b, the equation Ax = b has a unique solution if the inverse A^(-1) exists, which is the case if and only if |A| ≠ 0. The fact that a homogeneous system of linear equations, Ax = 0, has a non-zero solution if |A| = 0 plays an important role in the analysis of the judgment matrix of the AHP.

The matrix of paired comparisons in the AHP leads to the condition Aw = λ_max w, or (A - λ_max I)w = 0, a homogeneous system in the matrix A - λ_max I. A nonzero solution implies that the determinant |A - λ_max I| is equal to zero. But this determinant is an nth-degree polynomial in λ_max, where n is the order of A. This polynomial is equal to zero when λ_max is a root of the equation obtained by setting the determinant equal to zero, known as the characteristic equation of A. Such a root is known as a characteristic root or eigenvalue of the matrix A. Thus when λ_max is an eigenvalue of A, the solution vector w is not identically equal to zero. Consider the homogeneous system Aw = λw. The characteristic polynomial of an n by n matrix A has n zeros, λ_1, λ_2, ..., λ_n. In the following we use the vector e = (1, 1, ..., 1)^T. All other vectors are column vectors.
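To make the eigenvalue formulation concrete, the following sketch (an assumed numerical illustration, not part of the original text) builds a perfectly consistent 3 by 3 judgment matrix from known weights, a_ij = w_i / w_j, and checks that λ_max equals n and that the normalized principal eigenvector recovers the weights; NumPy is used only for convenience.

    # Assumed illustration: a perfectly consistent 3x3 judgment matrix built
    # from weights w = (0.6, 0.3, 0.1), so that a_ij = w_i / w_j.
    # For such a matrix Aw = n*w, hence lambda_max = n and the normalized
    # principal eigenvector is w itself.
    import numpy as np

    w = np.array([0.6, 0.3, 0.1])
    A = np.outer(w, 1.0 / w)                 # a_ij = w_i / w_j

    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)                 # index of the largest eigenvalue
    v = vecs[:, k].real
    v = v / v.sum()                          # normalize to sum to one

    print(vals.real[k])                      # ~3.0, the order n of the matrix
    print(v)                                 # ~[0.6, 0.3, 0.1], the original weights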

Theorem 3.1 If A > 0, w_1 is its principal eigenvector corresponding to the maximum eigenvalue λ_1, λ_i ≠ λ_j for all i ≠ j, and w_i is the right eigenvector corresponding to λ_i, then

lim_{k→∞} A^k e / (e^T A^k e) = c w_1

where c is some constant.

Proof Because w_1, ..., w_n are linearly independent, we have:

e = a_1 w_1 + ... + a_n w_n

where a_i, i = 1, ..., n, are constants. On multiplying both sides on the left by A^k we have:

A^k e = a_1 λ_1^k w_1 + ... + a_n λ_n^k w_n = λ_1^k [ a_1 w_1 + a_2 (λ_2/λ_1)^k w_2 + ... + a_n (λ_n/λ_1)^k w_n ]

and again multiplying on the left by e^T we have:

e^T A^k e = λ_1^k [ b_1 + b_2 (λ_2/λ_1)^k + ... + b_n (λ_n/λ_1)^k ],   where b_i = a_i e^T w_i.

Because λ_1 is the eigenvalue of maximum modulus, (λ_i/λ_1)^k → 0 as k → ∞ for i = 2, ..., n, and therefore

A^k e / (e^T A^k e) → a_1 w_1 / b_1 as k → ∞.

Since w_1 > 0, b_1 ≠ 0, and the theorem follows on putting c = a_1 / b_1.
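The limit in Theorem 3.1 is easy to check numerically. The sketch below (an assumed illustration, not part of the original) applies the normalized product A^k e / (e^T A^k e) to a random positive matrix and compares the result with the principal eigenvector obtained from a library routine.

    # Assumed illustration: A^k e / (e^T A^k e) converges to the normalized
    # principal eigenvector of a positive matrix.
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.uniform(1.0, 9.0, size=(4, 4))   # an arbitrary positive matrix
    x = np.ones(4)                           # start from e = (1, ..., 1)^T

    for _ in range(50):
        x = A @ x
        x = x / x.sum()                      # equivalent to dividing by e^T A^k e

    vals, vecs = np.linalg.eig(A)
    v = vecs[:, np.argmax(vals.real)].real
    v = v / v.sum()
    print(np.allclose(x, v))                 # True, up to numerical tolerance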

The proof of this theorem can be generalized to a nonnegative matrix, some power of which is positive. Because of its central relevance we need the following:

Definition - A matrix is irreducible if it cannot be decomposed (by permuting its rows and the corresponding columns) into the form

[ A  0 ]
[ B  C ]

where A and C are square matrices and 0 is the zero matrix.

Definition - A nonnegative, irreducible matrix A is primitive if and only if there is an integer m ≥ 1 such that A^m > 0. Otherwise it is called imprimitive.

The graph of a primitive matrix has a path of length m between any two vertices. From the work of Frobenius (1912), Perron (1907), and Wielandt (1950), we know that a nonnegative, irreducible matrix A is primitive if and only if A has a unique characteristic root of maximum modulus, and this root has multiplicity 1.

Theorem 3.2 For a primitive matrix A

lim_{k→∞} A^k e / ‖A^k‖ = c w,   ‖A‖ ≡ e^T A e

where c is a constant and w is the eigenvector corresponding to λ_max = λ_1.

The actual computation of the principal eigenvector in Expert Choice is based on Theorem 3.1. It says that the normalized row sums of the limiting power of a primitive matrix (and hence also of a positive matrix) give the desired eigenvector. Thus a short computational way to obtain this vector is to raise the matrix to powers. Fast convergence is obtained by successively squaring the matrix. The row sums are calculated and normalized. The computation is stopped when the difference between these sums in two consecutive calculations of the power is smaller than a prescribed value.
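A minimal sketch of this procedure follows (an assumed illustration; the tolerance, the cap on the number of squarings, and the sample judgment matrix are arbitrary choices, not values taken from Expert Choice):

    # Sketch of the squaring procedure described above.
    import numpy as np

    def principal_eigenvector(A, tol=1e-10, max_squarings=50):
        """Estimate the principal eigenvector of a positive matrix by
        successively squaring it and normalizing the row sums."""
        A = np.asarray(A, dtype=float)
        prev = None
        for _ in range(max_squarings):
            row_sums = A.sum(axis=1)             # the vector A^k e
            w = row_sums / row_sums.sum()        # normalize to sum to one
            if prev is not None and np.abs(w - prev).max() < tol:
                return w
            prev = w
            A = A @ A                            # square the matrix: A^k -> A^(2k)
            A = A / A.max()                      # rescale to avoid overflow
        return w

    # Example judgment matrix (a_ij estimates w_i / w_j); the values are made up.
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3., 1.0, 2.0],
                  [1/5., 1/2., 1.0]])
    print(principal_eigenvector(A))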

There are literally infinitely many ways to estimate the ratio w_i / w_j from the matrix (a_ij). But we have already shown that our formulation, with its particular emphasis on consistency, leads to an eigenvalue problem.

What is an easy way to get a good approximation to the priorities? Multiply the elements in each row together and take the nth root, where n is the number of elements. Then normalize the column of numbers thus obtained by dividing each entry by the sum of all entries. Alternatively, normalize the elements in each column of the judgment matrix and then average over each row. We would like to caution the reader that for real applications one should use only the eigenvector derivation procedure, because it can be shown that the approximations described above can lead to rank reversal in spite of their closeness to the eigenvector.
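Both approximations are straightforward to compute. The sketch below (an assumed illustration; the sample judgment matrix is made up) returns the row geometric-mean estimate and the column-normalization / row-average estimate; per the caution above, these serve only as quick checks on the eigenvector solution.

    # Sketch of the two approximations described above.
    import numpy as np

    def geometric_mean_priorities(A):
        """Multiply the elements of each row, take the n-th root, normalize."""
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        g = A.prod(axis=1) ** (1.0 / n)
        return g / g.sum()

    def column_average_priorities(A):
        """Normalize each column of the judgment matrix, then average over each row."""
        A = np.asarray(A, dtype=float)
        return (A / A.sum(axis=0)).mean(axis=1)

    A = np.array([[1.0, 3.0, 5.0],
                  [1/3., 1.0, 2.0],
                  [1/5., 1/2., 1.0]])
    print(geometric_mean_priorities(A))
    print(column_average_priorities(A))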