EIGENVALUES AND EIGENVECTORS
4.1 Linear transformations with diagonal matrix representations
Let T: V → V be a linear transformation on a finite-dimensional linear space V. Those properties of T which are independent of any coordinate system (basis) for V are called intrinsic properties of T. They are shared by all the matrix representations of T. If a basis can be chosen so that the resulting matrix has a particularly simple form, it may be possible to detect some of the intrinsic properties directly from the matrix representation.
Among the simplest types of matrices are the diagonal matrices. Therefore we might ask whether every linear transformation has a diagonal matrix representation. In Chapter 2 we treated the problem of finding a diagonal matrix representation of a linear transformation T: V → W, where dim V = n and dim W = m. In Theorem 2.14 we proved that there always exists a basis (e_1, ..., e_n) for V and a basis (w_1, ..., w_m) for W such that the matrix of T relative to this pair of bases is a diagonal matrix. In particular, if W = V the matrix will be a square diagonal matrix. The new feature now is that we want to use the same basis for both V and W. With this restriction it is not always possible to find a diagonal matrix representation for T. We turn, then, to the problem of determining which transformations do have a diagonal matrix representation.

Notation:
If A = (a_ij) is a diagonal matrix, we write A = diag (a_11, a_22, ..., a_nn).
It is easy to give a necessary and sufficient condition for a linear transformation to have a diagonal matrix representation.
THEOREM 4.1. Given a linear transformation T: V → V, where dim V = n. If T has a diagonal matrix representation, then there exist an independent set of elements u_1, ..., u_n in V and a corresponding set of scalars λ_1, ..., λ_n such that

(4.1)    T(u_k) = λ_k u_k    for k = 1, 2, ..., n.

Conversely, if there is an independent set u_1, ..., u_n in V and a corresponding set of scalars λ_1, ..., λ_n satisfying (4.1), then the matrix A = diag (λ_1, ..., λ_n) is a representation of T relative to the basis (u_1, ..., u_n).
Proof.
Assume first that T has a diagonal matrix representation A = diag (λ_1, ..., λ_n) relative to some basis (e_1, ..., e_n). The action of T on the basis elements is given by the formula

    T(e_k) = Σ_{i=1}^n a_ik e_i = a_kk e_k,

since a_ik = 0 for i ≠ k. This proves (4.1) with u_k = e_k and λ_k = a_kk.
Now suppose independent elements u_1, ..., u_n and scalars λ_1, ..., λ_n exist satisfying (4.1). Since u_1, ..., u_n are independent they form a basis for V. If we define a_kk = λ_k and a_ik = 0 for i ≠ k, then the matrix A = diag (λ_1, ..., λ_n) is a diagonal matrix which represents T relative to the basis (u_1, ..., u_n).

Thus the problem of finding a diagonal matrix representation of a linear transformation has been transformed to another problem, that of finding independent elements u_1, ..., u_n and scalars λ_1, ..., λ_n to satisfy (4.1). Elements u_k and scalars λ_k satisfying (4.1) are called eigenvectors and eigenvalues of T, respectively. In the next section we study eigenvectors and eigenvalues† in a more general setting.
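The content of Theorem 4.1 is easy to check numerically. The sketch below (in Python; the diagonal matrix is an arbitrary illustrative choice, not one from the text) verifies that for A = diag (5, −2, 7) each unit coordinate vector e_k satisfies equation (4.1) with eigenvalue equal to the kth diagonal entry.

```python
# Numerical illustration of Theorem 4.1 (illustrative matrix, not from the text):
# if A = diag(l_1, ..., l_n), each basis vector e_k satisfies A e_k = l_k e_k.

def mat_vec(A, x):
    """Multiply an n x n matrix (list of rows) by a vector."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def scale(c, x):
    return [c * xi for xi in x]

eigenvalues = [5, -2, 7]          # arbitrary choice of diagonal entries
n = len(eigenvalues)
A = [[eigenvalues[i] if i == j else 0 for j in range(n)] for i in range(n)]

# Each unit coordinate vector e_k satisfies equation (4.1): A e_k = l_k e_k.
for k in range(n):
    e_k = [1 if i == k else 0 for i in range(n)]
    assert mat_vec(A, e_k) == scale(eigenvalues[k], e_k)
```

Conversely, writing a matrix in a basis of such vectors reproduces the diagonal form, which is exactly the second half of the theorem.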
4.2 Eigenvectors and eigenvalues of a linear transformation
In this discussion V denotes a linear space and S denotes a subspace of V. The spaces S and V are not required to be finite-dimensional.
DEFINITION. Let T: S → V be a linear transformation of S into V. A scalar λ is called an eigenvalue of T if there is a nonzero element x in S such that

(4.2)    T(x) = λx.

The element x is called an eigenvector of T belonging to λ. The scalar λ is called the eigenvalue of T corresponding to x.
There is exactly one eigenvalue corresponding to a given eigenvector x. In fact, if we have T(x) = λx and T(x) = μx for some λ and μ, then λx = μx, so (λ − μ)x = 0. Since x ≠ 0 this implies λ = μ.
Note: Although Equation (4.2) always holds for x = 0 and any scalar λ, the definition excludes 0 as an eigenvector. One reason for this prejudice against 0 is to have exactly one eigenvalue associated with a given eigenvector x.
The following examples illustrate the meaning of these concepts.
EXAMPLE 1. Multiplication by a fixed scalar. Let T: S → V be the linear transformation defined by the equation T(x) = cx for each x in S, where c is a fixed scalar. In this example every nonzero element of S is an eigenvector belonging to the scalar c.
† The words eigenvector and eigenvalue are partial translations of the German words Eigenvektor and Eigenwert, respectively. Some authors use the terms characteristic vector or proper vector as synonyms for eigenvector. Eigenvalues are also called characteristic values, proper values, or latent roots.
EXAMPLE 2. The eigenspace E(λ) consisting of all x such that T(x) = λx. Let T: S → V be a linear transformation having an eigenvalue λ. Let E(λ) be the set of all elements x in S such that T(x) = λx. This set contains the zero element 0 and all eigenvectors belonging to λ. It is easy to prove that E(λ) is a subspace of S, because if x and y are in E(λ) we have

    T(ax + by) = aT(x) + bT(y) = aλx + bλy = λ(ax + by)

for all scalars a and b. Hence ax + by is in E(λ), so E(λ) is a subspace. The space E(λ) is called the eigenspace corresponding to λ. It may be finite- or infinite-dimensional. If E(λ) is finite-dimensional then dim E(λ) ≥ 1, since E(λ) contains at least one nonzero element x corresponding to λ.
EXAMPLE 3. Existence of zero eigenvalues. If an eigenvector exists it cannot be zero, by definition. However, the zero scalar can be an eigenvalue. In fact, if 0 is an eigenvalue for x then T(x) = 0x = 0, so x is in the null space of T. Conversely, if the null space of T contains any nonzero elements then each of these is an eigenvector with eigenvalue 0. In general, E(0) is the null space of T.
EXAMPLE 4. Reflection in the xy-plane. Let S = V = V_3(R) and let T be a reflection in the xy-plane. That is, let T act on the basis vectors i, j, k as follows: T(i) = i, T(j) = j, T(k) = −k. Every nonzero vector in the xy-plane is an eigenvector with eigenvalue 1. The remaining eigenvectors are those of the form ck, where c ≠ 0; each of them has eigenvalue −1.
EXAMPLE 5. Rotation of the plane through a fixed angle α. This example is of special interest because it shows that the existence of eigenvectors may depend on the underlying field of scalars. The plane can be regarded as a linear space in two different ways: (1) as a 2-dimensional real linear space, V = V_2(R), with two basis elements (1, 0) and (0, 1), and with real numbers as scalars; or (2) as a 1-dimensional complex linear space, V = V_1(C), with one basis element 1, and complex numbers as scalars.

Consider the second interpretation first. Each element z ≠ 0 of V_1(C) can be expressed in polar form, z = re^{iθ}. If T rotates z through an angle α, then T(z) = re^{i(θ+α)} = e^{iα}z. Thus, each z ≠ 0 is an eigenvector with eigenvalue λ = e^{iα}. Note that this eigenvalue is not real unless α is an integer multiple of π.

Now consider the plane as a real linear space, V_2(R). Since the scalars of V_2(R) are real numbers, the rotation T has real eigenvalues only if α is an integer multiple of π. In other words, if α is not an integer multiple of π then T has no real eigenvalues and hence no eigenvectors. Thus the existence of eigenvectors and eigenvalues may depend on the choice of scalars for V.
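In coordinates, the rotation of Example 5 has the matrix R = ((cos α, −sin α), (sin α, cos α)), whose characteristic polynomial is λ² − 2(cos α)λ + 1. The Python sketch below (the function names are ours, and the quadratic formula is used as an illustrative shortcut) confirms that the roots are e^{iα} and e^{−iα}, and that they are real only when α is an integer multiple of π.

```python
# Eigenvalues of the rotation matrix R = [[cos a, -sin a], [sin a, cos a]]:
# roots of the characteristic polynomial l^2 - 2*cos(a)*l + 1.
import cmath
import math

def rotation_eigenvalues(alpha):
    """Roots of l^2 - 2*cos(alpha)*l + 1 via the quadratic formula."""
    b = -2 * math.cos(alpha)       # coefficient of l
    disc = cmath.sqrt(b * b - 4)   # discriminant (negative unless cos(alpha) = +-1)
    return ((-b + disc) / 2, (-b - disc) / 2)

l1, l2 = rotation_eigenvalues(math.pi / 3)
# The roots agree with e^{i alpha} and e^{-i alpha}, hence are not real here.
assert abs(l1 - cmath.exp(1j * math.pi / 3)) < 1e-9
assert abs(l2 - cmath.exp(-1j * math.pi / 3)) < 1e-9

# For alpha a multiple of pi the eigenvalues are real (both equal to -1 here).
r1, r2 = rotation_eigenvalues(math.pi)
assert abs(r1.imag) < 1e-9 and abs(r2.imag) < 1e-9
```

This is the matrix counterpart of the two interpretations above: over C the eigenvalues always exist, while over R they exist only for the degenerate rotations.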
EXAMPLE 6. The differentiation operator. Let V be the linear space of all real functions f having derivatives of every order on a given open interval. Let D be the linear transformation which maps each f onto its derivative, D(f) = f′. The eigenvectors of D are those nonzero functions f satisfying an equation of the form

    f′ = λf

for some real λ. This is a first order linear differential equation. All its solutions are given by the formula

    f(x) = ce^{λx},

where c is an arbitrary real constant. Therefore the eigenvectors of D are all exponential functions f(x) = ce^{λx} with c ≠ 0. The eigenvalue corresponding to f(x) = ce^{λx} is λ. In examples like this one, where V is a function space, the eigenvectors are called eigenfunctions.

EXAMPLE 7. The integration operator. Let V be the linear space of all real functions f
continuous on a finite interval [a, b]. If f is in V, define g = T(f) to be the function given by

    g(x) = ∫_a^x f(t) dt    if a ≤ x ≤ b.
The eigenfunctions of T (if any exist) are those nonzero f satisfying an equation of the form

(4.3)    ∫_a^x f(t) dt = λf(x)

for some real λ. If an eigenfunction exists we may differentiate this equation to obtain the relation f(x) = λf′(x), from which we find f(x) = ce^{x/λ}, provided λ ≠ 0 and c ≠ 0. (If λ = 0, Equation (4.3) forces f = 0, so 0 is not an eigenvalue.) In other words, the only candidates for eigenfunctions are the exponential functions f(x) = ce^{x/λ} with c ≠ 0. However, if we put x = a in (4.3) we obtain

    0 = λf(a) = λce^{a/λ}.

Since e^{a/λ} is never zero, the equation T(f) = λf cannot be satisfied with a nonzero c, so T has no eigenfunctions and no eigenvalues.

EXAMPLE 8. The subspace spanned by an eigenvector. Let T: S → V be a linear transformation having an eigenvalue λ. Let x be an eigenvector belonging to λ and let L(x) be the subspace spanned by x, that is, the set of all scalar multiples of x. It is easy to show that T maps L(x) into itself. In fact, if y = cx we have

    T(y) = T(cx) = cT(x) = c(λx) = λ(cx) = λy.

If c ≠ 0 then y ≠ 0, so every nonzero element y of L(x) is also an eigenvector belonging to λ. A subspace U of S is called invariant under T if T maps each element of U onto an element of U. We have just shown that the subspace spanned by an eigenvector is invariant under T.
4.3 Linear independence of eigenvectors corresponding to distinct eigenvalues
One of the most important properties of eigenvalues is described in the next theorem. As before, S denotes a subspace of a linear space V.

THEOREM 4.2. Let u_1, ..., u_k be eigenvectors of a linear transformation T: S → V, and assume that the corresponding eigenvalues λ_1, ..., λ_k are distinct. Then the eigenvectors u_1, ..., u_k are independent.
Proof. The proof is by induction on k. The result is trivial when k = 1. Assume, then, that it has been proved for every set of k − 1 eigenvectors. Let u_1, ..., u_k be k eigenvectors belonging to distinct eigenvalues λ_1, ..., λ_k, and assume that scalars c_1, ..., c_k exist such that

(4.4)    Σ_{i=1}^k c_i u_i = 0.

Applying T to both members of (4.4) and using the fact that T(u_i) = λ_i u_i, we find

(4.5)    Σ_{i=1}^k c_i λ_i u_i = 0.

Multiplying (4.4) by λ_k and subtracting from (4.5) we obtain the equation

    Σ_{i=1}^{k−1} c_i (λ_i − λ_k) u_i = 0.

But since u_1, ..., u_{k−1} are independent we must have c_i(λ_i − λ_k) = 0 for each i = 1, 2, ..., k − 1. Since the eigenvalues are distinct we have λ_i ≠ λ_k for i ≠ k, so c_i = 0 for i = 1, 2, ..., k − 1. From (4.4) we then see that c_k u_k = 0, so c_k = 0 since u_k ≠ 0. Hence the eigenvectors u_1, ..., u_k are independent.
Note that Theorem 4.2 would not be true if the zero element were allowed to be an eigenvector. This is another reason for excluding 0 as an eigenvector.
Warning: The converse of Theorem 4.2 does not hold. That is, if T has independent eigenvectors u_1, ..., u_k, then the corresponding eigenvalues λ_1, ..., λ_k need not be distinct. For example, if T is the identity transformation, T(x) = x for all x, then every x ≠ 0 is an eigenvector but there is only one eigenvalue, λ = 1.
Theorem 4.2 has important consequences for the finite-dimensional case.
THEOREM 4.3. If dim V = n, every linear transformation T: V → V has at most n distinct eigenvalues. If T has exactly n distinct eigenvalues, then the corresponding eigenvectors form a basis for V, and the matrix of T relative to this basis is a diagonal matrix with the eigenvalues as diagonal entries.
Proof. If there were n + 1 distinct eigenvalues then, by Theorem 4.2, V would contain n + 1 independent elements. This is not possible since dim V = n. The second assertion follows from Theorems 4.1 and 4.2.
Note: Theorem 4.3 tells us that the existence of n distinct eigenvalues is a sufficient condition for T to have a diagonal matrix representation. This condition is not necessary. There exist linear transformations with fewer than n distinct eigenvalues that can be represented by diagonal matrices. The identity transformation is an example. All its eigenvalues are equal to 1, but it can be represented by the identity matrix. Theorem 4.1 tells us that the existence of n independent eigenvectors is necessary and sufficient for T to have a diagonal matrix representation.
4.4 Exercises
1. (a) If T has an eigenvalue λ, prove that aT has the eigenvalue aλ.
   (b) If x is an eigenvector for both T_1 and T_2, prove that x is also an eigenvector for aT_1 + bT_2. How are the eigenvalues related?
2. Assume T: V → V has an eigenvector x belonging to an eigenvalue λ. Prove that x is an eigenvector of T² belonging to λ² and, more generally, an eigenvector of Tⁿ belonging to λⁿ. Then use the result of Exercise 1 to show that if P is a polynomial, then x is an eigenvector of P(T) belonging to P(λ).
3. Consider the plane as a real linear space, V = V_2(R), and let T be a rotation of V through an angle of π/2 radians. Although T has no eigenvectors, prove that every nonzero vector is an eigenvector for T².
4. If T: V → V has the property that T² has a nonnegative eigenvalue λ², prove that at least one of λ or −λ is an eigenvalue for T. [Hint: T² − λ²I = (T − λI)(T + λI).]
5. Let V be the linear space of all real functions differentiable on (0, 1). If f is in V, define g = T(f) to mean that g(t) = tf′(t) for all t in (0, 1). Prove that every real λ is an eigenvalue for T, and determine the eigenfunctions corresponding to λ.
6. Let V be the linear space of all real polynomials p(x) of degree ≤ n. If p is in V, define q = T(p) to mean that q(t) = p(t + 1) for all real t. Prove that T has only the eigenvalue 1. What are the eigenfunctions belonging to this eigenvalue?
7. Let V be the linear space of all functions continuous on (−∞, +∞) and such that the integral ∫_{−∞}^x f(t) dt exists for all real x. If f is in V, let g = T(f) be defined by the equation

    g(x) = ∫_{−∞}^x f(t) dt.

Prove that every positive λ is an eigenvalue for T and determine the eigenfunctions corresponding to λ.
8. Let V be the linear space of all functions continuous on (−∞, +∞) and such that the integral ∫_{−∞}^x t f(t) dt exists for all real x. If f is in V, let g = T(f) be defined by the equation

    g(x) = ∫_{−∞}^x t f(t) dt.

Prove that every negative λ is an eigenvalue for T and determine the eigenfunctions corresponding to λ.
9. Let V be the real linear space of all real functions continuous on the interval [0, π]. Let S be the subspace of all functions f which have a continuous second derivative in [0, π] and which also satisfy the boundary conditions f(0) = f(π) = 0. Let T: S → V be the linear transformation which maps each f onto its second derivative, T(f) = f″. Prove that the eigenvalues of T are the numbers of the form −n², where n = 1, 2, ..., and that the eigenfunctions corresponding to −n² are f(t) = c sin nt, where c ≠ 0.
10. Let V be the linear space of all real convergent sequences {x_n}. Define T: V → V as follows: If x = {x_n} is a convergent sequence with limit a, let T(x) = {y_n}, where y_n = a − x_n for n ≥ 1. Prove that T has only two eigenvalues, λ = 0 and λ = −1, and determine the eigenvectors belonging to each such λ.
11. Assume that a linear transformation T has two eigenvectors x and y belonging to distinct eigenvalues λ and μ. If ax + by is an eigenvector of T, prove that a = 0 or b = 0.
12. Let T: S → V be a linear transformation such that every nonzero element of S is an eigenvector. Prove that there exists a scalar c such that T(x) = cx for all x in S. In other words, the only transformations with this property are scalar multiples of the identity.
4.5 The finite-dimensional case. Characteristic polynomials
If dim V = n, the problem of finding the eigenvalues of a linear transformation T: V → V can be solved with the help of determinants. We wish to find those scalars λ such that the equation T(x) = λx has a solution x with x ≠ 0. The equation T(x) = λx can be written in the form

    (λI − T)(x) = 0,

where I is the identity transformation. If we let T_λ = λI − T, then λ is an eigenvalue if and only if the equation

(4.6)    T_λ(x) = 0

has a nonzero solution x, in which case T_λ is not invertible (because of Theorem 2.10). Therefore, by Theorem 2.20, a nonzero solution of (4.6) exists if and only if the matrix of T_λ is singular. If A is a matrix representation for T, then λI − A is a matrix representation for T_λ. By Theorem 3.13, the matrix λI − A is singular if and only if det (λI − A) = 0. Thus, if λ is an eigenvalue for T it satisfies the equation

(4.7)    det (λI − A) = 0.

Conversely, any λ in the underlying field of scalars which satisfies (4.7) is an eigenvalue. This suggests that we should study the determinant det (λI − A) as a function of λ.
THEOREM 4.4. If A is any n × n matrix and if I is the n × n identity matrix, then the function f defined by the equation

    f(λ) = det (λI − A)

is a polynomial in λ of degree n. Moreover, the term of highest degree is λ^n, and the constant term is f(0) = det (−A) = (−1)^n det A.
Proof. The statement f(0) = det (−A) follows at once from the definition of f. We prove that f is a polynomial of degree n only for the case n ≤ 3. The proof in the general case can be given by induction and is left as an exercise. (See Exercise 9 in Section 4.8.)

For n = 1 the determinant is the linear polynomial f(λ) = λ − a_11. For n = 2 we have

    det (λI − A) = (λ − a_11)(λ − a_22) − a_12 a_21,

a quadratic polynomial in λ. For n = 3, expanding det (λI − A) by cofactors along its first row, we have

    det (λI − A) = (λ − a_11){(λ − a_22)(λ − a_33) − a_23 a_32}
                   + a_12{−a_21(λ − a_33) − a_23 a_31}
                   − a_13{a_21 a_32 + a_31(λ − a_22)}.

The last two terms are linear polynomials in λ. The first term is a cubic polynomial, the term of highest degree being λ³.
DEFINITION. If A is an n × n matrix, the determinant

    f(λ) = det (λI − A)

is called the characteristic polynomial of A.
The roots of the characteristic polynomial of A are complex numbers, some of which may be real. If we let F denote either the real field R or the complex field C, we have the following theorem.
THEOREM 4.5. Let T: V → V be a linear transformation, where V has scalars in F and dim V = n. Let A be a matrix representation of T. Then the set of eigenvalues of T consists of those roots of the characteristic polynomial of A which lie in F.
Proof. The discussion preceding Theorem 4.4 shows that every eigenvalue of T satisfies the equation det (λI − A) = 0, and that any root of the characteristic polynomial of A which lies in F is an eigenvalue of T.
The matrix A depends on the choice of basis for V, but the eigenvalues of T were defined without reference to a basis. Therefore, the set of roots of the characteristic polynomial of A must be independent of the choice of basis. More than this is true. In a later section we shall prove that the characteristic polynomial itself is independent of the choice of basis. We turn now to the problem of actually calculating the eigenvalues and eigenvectors in the finite-dimensional case.
4.6 Calculation of eigenvalues and eigenvectors in the finite-dimensional case
In the finite-dimensional case the eigenvalues and eigenvectors of a linear transformation T are also called eigenvalues and eigenvectors of each matrix representation of T. Thus, the eigenvalues of a square matrix A are the roots of the characteristic polynomial f(λ) = det (λI − A). The eigenvectors corresponding to an eigenvalue λ are those nonzero vectors X = (x_1, ..., x_n), regarded as n × 1 column matrices, satisfying the matrix equation

    AX = λX,    or    (λI − A)X = O.

This is a system of n linear equations for the components x_1, ..., x_n. Once we know λ we can obtain the eigenvectors by solving this system. We illustrate with three examples that exhibit different features.
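For 2 × 2 matrices the procedure just described can be carried out mechanically: the characteristic polynomial is λ² − (tr A)λ + det A, and an eigenvector is read off from a row of λI − A. The Python sketch below follows exactly these two steps; the helper names are ours and the example matrix is illustrative, not one from the text (real roots are assumed).

```python
# Sketch of the two-step procedure for a 2 x 2 matrix with real eigenvalues:
# (1) find the roots of det(lI - A); (2) solve (lI - A)X = 0 for each root.
import math

def eigen_2x2(A):
    """Roots of l^2 - (tr A) l + det A via the quadratic formula (real case)."""
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    disc = math.sqrt(tr * tr - 4 * det)
    return ((tr + disc) / 2, (tr - disc) / 2)

def eigenvector_2x2(A, l):
    """A nonzero solution of (lI - A)X = 0 (assumes A is not l times I)."""
    a, b = l - A[0][0], -A[0][1]   # first row of lI - A
    if a == 0 and b == 0:          # first row vanishes; use the second row
        a, b = -A[1][0], l - A[1][1]
    return (b, -a)                  # (b, -a) satisfies a*x + b*y = 0

A = [[2, 1],
     [1, 2]]
l1, l2 = eigen_2x2(A)              # roots of l^2 - 4l + 3, namely 3 and 1
assert (l1, l2) == (3.0, 1.0)
for l in (l1, l2):
    x, y = eigenvector_2x2(A, l)
    # Check the defining equation A X = l X componentwise.
    assert abs(A[0][0]*x + A[0][1]*y - l*x) < 1e-12
    assert abs(A[1][0]*x + A[1][1]*y - l*y) < 1e-12
```

The 3 × 3 examples that follow proceed the same way, except that the linear system for each eigenvector must be solved by elimination.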
EXAMPLE 1. A matrix with all its eigenvalues distinct. The matrix A has the characteristic polynomial

    det (λI − A) = (λ − 1)(λ − 2)(λ − 3),

so there are three distinct eigenvalues: 1, 2, and 3. To find the eigenvectors corresponding to λ = 1 we solve the system AX = X. Adding the first and third equations we find x_3 = 0, and all three equations then reduce to x_1 − x_2 = 0. Thus the eigenvectors corresponding to λ = 1 are X = t(1, 1, 0), where t is any nonzero scalar. By similar calculations we find the eigenvectors corresponding to λ = 2 and to λ = 3. Since the eigenvalues are distinct, the corresponding eigenvectors are independent. The results can be summarized in tabular form as follows. In the third column we have listed the dimension of the eigenspace E(λ).

    Eigenvalue λ        Eigenvectors            dim E(λ)
        1               t(1, 1, 0), t ≠ 0           1
        2               . . .                       1
        3               . . .                       1
EXAMPLE 2. A matrix with repeated eigenvalues. The matrix
A has the characteristic polynomial

    det (λI − A) = (λ − 2)²(λ − 4).

The eigenvalues are 2, 2, and 4. (We list the eigenvalue 2 twice to emphasize that it is a double root of the characteristic polynomial.) To find the eigenvectors corresponding to λ = 2 we solve the system AX = 2X. Its solutions are X = t(−1, 1, 1), where t ≠ 0, so dim E(2) = 1 even though 2 is a double root. Similarly we find the eigenvectors corresponding to the eigenvalue λ = 4; they also span a one-dimensional eigenspace. The results can be summarized as follows:

    Eigenvalue λ        Eigenvectors            dim E(λ)
        2               t(−1, 1, 1), t ≠ 0          1
        4               . . .                       1

EXAMPLE 3. Another matrix with repeated eigenvalues. The matrix
    A = ( 3  2  2 )
        ( 2  3  2 )
        ( 2  2  3 )

has the characteristic polynomial det (λI − A) = (λ − 1)²(λ − 7). When λ = 7 the system AX = 7X becomes

    −2x_1 +  x_2 +  x_3 = 0,
     x_1 − 2x_2 +  x_3 = 0,
     x_1 +  x_2 − 2x_3 = 0.

This has the solution x_1 = x_2 = x_3, so the eigenvectors corresponding to λ = 7 are X = t(1, 1, 1), where t ≠ 0. For the eigenvalue λ = 1 the system AX = X consists of the equation

    x_1 + x_2 + x_3 = 0

repeated three times. To solve this equation we may take x_1 = a, x_2 = b, where a and b are arbitrary, and then take x_3 = −a − b. Thus every eigenvector corresponding to λ = 1 has the form

    X = (a, b, −a − b) = a(1, 0, −1) + b(0, 1, −1),

where a ≠ 0 or b ≠ 0. This means that the vectors (1, 0, −1) and (0, 1, −1) form a basis for E(1). Hence dim E(λ) = 2 when λ = 1. The results can be summarized as follows:

    Eigenvalue λ        Eigenvectors                                dim E(λ)
        7               t(1, 1, 1), t ≠ 0                               1
        1               a(1, 0, −1) + b(0, 1, −1), a, b not both 0      2

Note that in this example there are three independent eigenvectors but only two distinct eigenvalues.
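A direct check of the phenomenon in Example 3 (a double eigenvalue whose eigenspace is two-dimensional) can be made by verifying AX = λX explicitly. The symmetric matrix used below is an illustrative instance of this behavior.

```python
# A matrix with eigenvalues 7, 1, 1, where the double eigenvalue 1 has a
# two-dimensional eigenspace spanned by (1, 0, -1) and (0, 1, -1).

def mat_vec3(A, x):
    """Multiply a 3 x 3 matrix (list of rows) by a vector."""
    return [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]

A = [[3, 2, 2],
     [2, 3, 2],
     [2, 2, 3]]

# l = 7 with eigenvector (1, 1, 1):
assert mat_vec3(A, [1, 1, 1]) == [7, 7, 7]

# l = 1 with two independent eigenvectors:
assert mat_vec3(A, [1, 0, -1]) == [1, 0, -1]
assert mat_vec3(A, [0, 1, -1]) == [0, 1, -1]

# Any combination a(1,0,-1) + b(0,1,-1) is again an eigenvector for l = 1,
# confirming that E(1) is a two-dimensional eigenspace.
a, b = 2, -5
v = [a, b, -a - b]
assert mat_vec3(A, v) == v
```

Here three independent eigenvectors exist despite there being only two distinct eigenvalues, which is exactly the situation noted at the end of the example.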
4.7 Trace of a matrix
Let f(λ) = det (λI − A) be the characteristic polynomial of an n × n matrix A. We denote the n roots of f(λ) by λ_1, ..., λ_n, with each root written as often as its multiplicity indicates. Then we have the factorization

    f(λ) = (λ − λ_1)(λ − λ_2) · · · (λ − λ_n).

We can also write f(λ) in decreasing powers of λ as follows,

    f(λ) = λ^n + c_{n−1} λ^{n−1} + · · · + c_1 λ + c_0.

Comparing this with the factored form we find that the constant term c_0 and the coefficient of λ^{n−1} are given by the formulas

    c_0 = (−1)^n λ_1 λ_2 · · · λ_n    and    c_{n−1} = −(λ_1 + λ_2 + · · · + λ_n).

Since we also have c_0 = (−1)^n det A, we see that

    λ_1 λ_2 · · · λ_n = det A.

That is, the product of the roots of the characteristic polynomial of A is equal to the determinant of A.
The sum of the roots of f(λ) is called the trace of A, denoted by tr A. Thus, by definition,

    tr A = Σ_{k=1}^n λ_k.

The coefficient of λ^{n−1} is given by c_{n−1} = −tr A. We can also compute this coefficient from the determinant form of f(λ), and we find that

    c_{n−1} = −Σ_{k=1}^n a_kk.
(A proof of this formula is requested in Exercise 12 of Section 4.8.) The two formulas for c_{n−1} show that

    tr A = Σ_{k=1}^n a_kk.

That is, the trace of A is also equal to the sum of the diagonal elements of A. Since the sum of the diagonal elements is easy to compute, it can be used as a numerical check in calculations of eigenvalues. Further properties of the trace are described in the next set of exercises.
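The numerical check just mentioned is easy to automate. The Python sketch below (illustrative 2 × 2 matrix, not one from the text) computes the roots of λ² − (tr A)λ + det A and confirms that their sum is the trace and their product is the determinant.

```python
# Checking the two facts derived above for an arbitrary 2 x 2 example:
# sum of the roots = tr A (sum of diagonal entries), product of the roots = det A.
import math

A = [[4, 1],
     [2, 3]]
trace = A[0][0] + A[1][1]                        # 7
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]      # 10

# Roots of the characteristic polynomial l^2 - (tr A) l + det A:
disc = math.sqrt(trace * trace - 4 * det)
l1, l2 = (trace + disc) / 2, (trace - disc) / 2  # 5 and 2

assert l1 + l2 == trace      # sum of the roots equals the trace
assert l1 * l2 == det        # product of the roots equals the determinant
```

In hand calculations the same comparison (eigenvalue sum against diagonal sum) quickly exposes arithmetic slips.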
4.8 Exercises
Determine the eigenvalues and eigenvectors of each of the matrices in Exercises 1 through 3. Also, for each eigenvalue λ compute the dimension of the eigenspace E(λ).

1.  ( 1  a )
    ( b  1 ) ,    a > 0, b > 0.
4. The Pauli spin matrices occur in the quantum mechanical theory of electron spin and are named in honor of the physicist Wolfgang Pauli (1900-1958). Verify that they all have eigenvalues 1 and −1. Then determine all 2 × 2 matrices with complex entries having the two eigenvalues 1 and −1.
5. Determine all 2 x 2 matrices with real entries whose eigenvalues are (a) real and distinct, (b) real and equal, (c) complex conjugates.
6. Determine a, b, c, d, e, f, given that the vectors (1, . . .), (1, 0, . . .), and (1, 1, 0) are eigenvectors of the given matrix.
7. Calculate the eigenvalues and eigenvectors of each of the following matrices. Also, compute the dimension of the eigenspace E(λ) for each eigenvalue λ.
8. Calculate the eigenvalues and eigenvectors of each of the five given matrices. These are called Dirac matrices in honor of Paul A. M. Dirac, the English physicist. They occur in the solution of the relativistic wave equation in quantum mechanics.
9. If A and B are n × n matrices, with B a diagonal matrix, prove (by induction) that the determinant f(λ) = det (λB − A) is a polynomial in λ with f(0) = (−1)^n det A, and with the coefficient of λ^n equal to the product of the diagonal entries of B.
10. Prove that a square matrix A and its transpose have the same characteristic polynomial.
11. If A and B are n × n matrices, with A nonsingular, prove that AB and BA have the same set of eigenvalues. Note: It can be shown that AB and BA have the same characteristic polynomial, even if A is singular, but you are not required to prove this.
12. Let A be an n × n matrix with characteristic polynomial f(λ). Prove (by induction) that the coefficient of λ^{n−1} in f(λ) is −tr A.
13. Let A and B be n × n matrices with det A = det B and tr A = tr B. Prove that A and B have the same characteristic polynomial if n = 2, but that this need not be the case if n > 2.
14. Prove each of the following statements about the trace.
    (a) tr (A + B) = tr A + tr B.
    (b) tr (cA) = c tr A.
    (c) tr (AB) = tr (BA).
    (d) tr Aᵗ = tr A.
4.9 Matrices representing the same linear transformation. Similar matrices
In this section we prove that two different matrix representations of a linear transformation have the same characteristic polynomial. To do this we investigate more closely the relation between matrices which represent the same transformation.
Let us recall how matrix representations are defined. Suppose T: V → W is a linear mapping of an n-dimensional space V into an m-dimensional space W. Let (e_1, ..., e_n) and (w_1, ..., w_m) be ordered bases for V and W, respectively. The matrix representation of T relative to this choice of bases is the m × n matrix whose columns consist of the components of T(e_1), ..., T(e_n) relative to the basis (w_1, ..., w_m). Different matrix representations arise from different choices of the bases.

We consider now the case in which W = V, and we assume that the same ordered basis (e_1, ..., e_n) is used for both V and W. Let A = (a_ik) be the matrix of T relative to this basis. This means that we have

(4.8)    T(e_k) = Σ_{i=1}^n a_ik e_i    for k = 1, 2, ..., n.
Now choose another ordered basis (u_1, ..., u_n) for both V and W, and let B = (b_ij) be the matrix of T relative to this new basis. Then we have

(4.9)    T(u_j) = Σ_{i=1}^n b_ij u_i    for j = 1, 2, ..., n.

Since each u_j is in the space spanned by e_1, ..., e_n, we can write

(4.10)    u_j = Σ_{i=1}^n c_ij e_i    for j = 1, 2, ..., n,

for some set of scalars c_ij. The n × n matrix C = (c_ij) determined by these scalars is nonsingular because it represents a linear transformation which maps a basis of V onto another basis of V. Applying T to both members of (4.10) we also have the equations

(4.11)    T(u_j) = Σ_{i=1}^n c_ij T(e_i).
The systems of equations in (4.8) through (4.11) can be written more simply in matrix form by introducing matrices with vector entries. Let

    E = [e_1, ..., e_n]    and    U = [u_1, ..., u_n]

be 1 × n row matrices whose entries are the basis elements in question. Then the set of equations in (4.10) can be written as a single matrix equation,

(4.12)    U = EC.

Similarly, if we introduce E′ = [T(e_1), ..., T(e_n)] and U′ = [T(u_1), ..., T(u_n)], Equations (4.8), (4.9), and (4.11) become, respectively,

(4.13)    E′ = EA,    U′ = UB,    U′ = E′C.
From (4.12) we have E = UC⁻¹. To find the relation between A and B we express U′ in two ways in terms of U. From (4.13) we have U′ = UB, and also

    U′ = E′C = EAC = UC⁻¹AC.

Hence UB = UC⁻¹AC. Each entry in this equation is a linear combination of the basis vectors u_1, ..., u_n. Since the u_i are independent we must have

    B = C⁻¹AC.
Thus, we have proved the following theorem.

THEOREM 4.6. If two n × n matrices A and B represent the same linear transformation T, then there is a nonsingular matrix C such that

    B = C⁻¹AC.

Moreover, if A is the matrix of T relative to a basis E = [e_1, ..., e_n] and B is the matrix of T relative to a basis U = [u_1, ..., u_n], then for C we can take the nonsingular matrix relating the two bases according to the matrix equation U = EC.
The converse of Theorem 4.6 is also true.

THEOREM 4.7. Let A and B be two n × n matrices related by an equation of the form B = C⁻¹AC, where C is a nonsingular n × n matrix. Then A and B represent the same linear transformation.
Proof. Choose a basis E = [e_1, ..., e_n] for an n-dimensional space V. Let u_1, ..., u_n be the vectors determined by the equations

(4.14)    u_j = Σ_{i=1}^n c_ij e_i    for j = 1, 2, ..., n,

where the scalars c_ij are the entries of C. Since C is nonsingular it represents an invertible linear transformation, so U = [u_1, ..., u_n] is also a basis for V, and we have U = EC.
Let T be the linear transformation having the matrix representation A relative to the basis E, and let S be the transformation having the matrix representation B relative to the basis U. Then we have

(4.15)    T(e_k) = Σ_{i=1}^n a_ik e_i    for k = 1, 2, ..., n,

and

(4.16)    S(u_j) = Σ_{i=1}^n b_ij u_i    for j = 1, 2, ..., n.

We shall prove that S = T by showing that T(u_j) = S(u_j) for each j.
Equations (4.15) and (4.16) can be written more simply in matrix form as follows,

    E′ = EA    and    U′ = UB,

where E′ = [T(e_1), ..., T(e_n)] and U′ = [S(u_1), ..., S(u_n)]. Applying T to the equations in (4.14) we also obtain the relation

    [T(u_1), ..., T(u_n)] = E′C = EAC.

But from U = EC we have

    U′ = UB = ECB = ECC⁻¹AC = EAC,

which shows that T(u_j) = S(u_j) for each j. Therefore T(x) = S(x) for each x in V, so T = S. In other words, the matrices A and B represent the same linear transformation.
DEFINITION. Two n × n matrices A and B are called similar if there is a nonsingular matrix C such that B = C⁻¹AC.
Theorems 4.6 and 4.7 can be combined to give us

THEOREM 4.8. Two n × n matrices are similar if and only if they represent the same linear transformation.
Similar matrices share many properties. For example, they have the same determinant, since

    det (C⁻¹AC) = (det C⁻¹)(det A)(det C) = det A.

This property gives us the following theorem.
THEOREM 4.9. Similar matrices have the same characteristic polynomial and therefore the same eigenvalues.

Proof. If A and B are similar there is a nonsingular matrix C such that B = C⁻¹AC. Therefore we have

    λI − B = λI − C⁻¹AC = C⁻¹(λI − A)C.

This shows that λI − B and λI − A are similar, so det (λI − B) = det (λI − A).
Theorems 4.8 and 4.9 together show that all matrix representations of a given linear transformation T have the same characteristic polynomial. This polynomial is also called the characteristic polynomial of T.
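Theorem 4.9 can be spot-checked numerically: for 2 × 2 matrices the characteristic polynomial is λ² − (tr A)λ + det A, so it suffices to verify that B = C⁻¹AC has the same trace and determinant as A. A sketch in Python, using exact rational arithmetic and matrices chosen only for illustration:

```python
# Verifying that similar 2 x 2 matrices share trace and determinant,
# hence the same characteristic polynomial l^2 - (tr A) l + det A.
from fractions import Fraction as F

def mat_mul(X, Y):
    """Product of two 2 x 2 matrices given as lists of rows."""
    return [[X[i][0]*Y[0][j] + X[i][1]*Y[1][j] for j in range(2)] for i in range(2)]

def inverse2(C):
    """Inverse of a nonsingular 2 x 2 matrix (exact, via the adjugate)."""
    d = C[0][0]*C[1][1] - C[0][1]*C[1][0]
    return [[ C[1][1]/d, -C[0][1]/d],
            [-C[1][0]/d,  C[0][0]/d]]

A = [[F(4), F(1)], [F(2), F(3)]]
C = [[F(1), F(2)], [F(1), F(3)]]          # nonsingular: det C = 1
B = mat_mul(mat_mul(inverse2(C), A), C)   # B = C^{-1} A C

tr = lambda M: M[0][0] + M[1][1]
det = lambda M: M[0][0]*M[1][1] - M[0][1]*M[1][0]

assert tr(B) == tr(A)    # same coefficient of l in the characteristic polynomial
assert det(B) == det(A)  # same constant term
```

Since a 2 × 2 characteristic polynomial is determined by the trace and the determinant, these two equalities are exactly the statement of the theorem in this case.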
The next theorem is a combination of Theorems 4.5, 4.2, and 4.6. In Theorem 4.10, F denotes either the real field R or the complex field C.
THEOREM 4.10. Let T: V → V be a linear transformation, where V has scalars in F, and dim V = n. Assume that the characteristic polynomial of T has n distinct roots λ_1, ..., λ_n in F. Then we have:
(a) The corresponding eigenvectors u_1, ..., u_n form a basis for V.
(b) The matrix of T relative to the ordered basis U = [u_1, ..., u_n] is the diagonal matrix Λ having the eigenvalues as diagonal entries:

    Λ = diag (λ_1, ..., λ_n).

(c) If A is the matrix of T relative to another basis E = [e_1, ..., e_n], then

    Λ = C⁻¹AC,

where C is the nonsingular matrix relating the two bases by the equation U = EC.
Proof. By Theorem 4.5 each root λ_i is an eigenvalue. Since there are n distinct roots, Theorem 4.2 tells us that the corresponding eigenvectors u_1, ..., u_n are independent. Hence they form a basis for V. This proves (a). Since T(u_k) = λ_k u_k, the matrix of T relative to U is the diagonal matrix Λ, which proves (b). To prove (c) we use Theorem 4.6.
Note: The nonsingular matrix C in Theorem 4.10 is called a diagonalizing matrix. If (e_1, ..., e_n) is the basis of unit coordinate vectors, then the equation U = EC in Theorem 4.10 shows that the kth column of C consists of the components of the eigenvector u_k relative to (e_1, ..., e_n).
If the eigenvalues of A are distinct, then A is similar to a diagonal matrix. If the eigenvalues are not distinct, then A still might be similar to a diagonal matrix. This will happen if and only if there are k independent eigenvectors corresponding to each eigenvalue of multiplicity k. Examples occur in the next set of exercises.
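The Note above can be illustrated in coordinates: if the columns of C are independent eigenvectors of A, then C⁻¹AC is the diagonal matrix of eigenvalues. A Python sketch with an illustrative 2 × 2 matrix (not one from the text):

```python
# Diagonalization check: with eigenvector columns in C, C^{-1} A C = diag(l_1, l_2).
from fractions import Fraction as F

def mat_mul(X, Y):
    """Product of two square matrices given as lists of rows."""
    n = len(X)
    return [[sum(X[i][k]*Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

A = [[F(4), F(1)],
     [F(2), F(3)]]            # eigenvalues 5 and 2, eigenvectors (1, 1) and (1, -2)
C = [[F(1), F(1)],
     [F(1), F(-2)]]           # columns of C are the eigenvectors

d = C[0][0]*C[1][1] - C[0][1]*C[1][0]     # det C = -3, so C is nonsingular
C_inv = [[ C[1][1]/d, -C[0][1]/d],
         [-C[1][0]/d,  C[0][0]/d]]

D = mat_mul(mat_mul(C_inv, A), C)
assert D == [[F(5), F(0)], [F(0), F(2)]]  # diag(5, 2), eigenvalues on the diagonal
```

The order of the eigenvalues along the diagonal of C⁻¹AC matches the order in which the eigenvectors are placed as columns of C.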
4.10 Exercises
1. Prove that the given matrices have the same eigenvalues but are not similar.
2. In each case find a nonsingular matrix C such that C⁻¹AC is a diagonal matrix, or explain why no such C exists.
3. Three bases in the plane are given. With respect to these bases a point has components (x_1, x_2), (y_1, y_2), and (z_1, z_2), respectively. Suppose that (y_1, y_2) = A(x_1, x_2), (z_1, z_2) = B(y_1, y_2), and (z_1, z_2) = C(x_1, x_2), where A, B, C are 2 × 2 matrices. Express C in terms of A and B.
4. In each case, show that the eigenvalues of A are not distinct but that A has three independent eigenvectors. Find a nonsingular matrix C such that C⁻¹AC is a diagonal matrix.
5. Show that none of the following matrices is similar to a diagonal matrix, but that each is similar to a triangular matrix whose diagonal entries are all equal to an eigenvalue λ.
6. Determine the eigenvalues and eigenvectors of the matrix

    ( 0  1  0 )
    ( 0  0  1 )
    ( 1 −3  3 )

and thereby show that it is not similar to a diagonal matrix.
7. (a) Prove that a square matrix A is nonsingular if and only if 0 is not an eigenvalue of A.
   (b) If A is nonsingular, prove that the eigenvalues of A⁻¹ are the reciprocals of the eigenvalues of A.
8. Given an n × n matrix A with real entries such that A² = −I. Prove the following statements about A.
   (a) A is nonsingular.
   (b) n is even.
   (c) A has no real eigenvalues.
   (d) det A = 1.