
9. SPECIAL MATRICES AND FORMULAS

In this section we want to discuss various terms used in work with matrices, and prove some important formulas. First we list for reference needed definitions and facts about matrices.

There are several special matrices which are related to a given matrix A. We outline in (9.1) what these matrices are called, what notations are used for them, and how we get them from A.

(9.1) Name of matrix; notations for it; how to get it from A:

- Transpose of A, or A transpose: written A^T, A′, or A_t. Interchange rows and columns in A.
- Complex conjugate of A: written Ā or A*. Take the complex conjugate of each element.
- Transpose conjugate, Hermitian conjugate, adjoint (Problem 9), or Hermitian adjoint: written A† (A dagger). Take the complex conjugate of each element and transpose.
- Inverse of A: written A^{-1}. See Formula (6.13).
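For readers who like to experiment, each of the matrices in (9.1) is one line in NumPy. The sketch below is not part of the text; the sample matrix A is chosen arbitrarily for the demonstration.

```python
import numpy as np

# An arbitrary 2x2 complex matrix, chosen only for illustration.
A = np.array([[1 + 2j, 3],
              [0, 4 - 1j]])

A_T = A.T                  # transpose: interchange rows and columns
A_bar = A.conj()           # complex conjugate of each element
A_dag = A.conj().T         # transpose conjugate (Hermitian conjugate, A dagger)
A_inv = np.linalg.inv(A)   # inverse, cf. Formula (6.13)

# Check the defining property of the inverse: A A^{-1} = I.
assert np.allclose(A @ A_inv, np.eye(2))
```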

There is another set of names for special types of matrices. In (9.2), we list these and their definitions for reference.


(9.2) A matrix is called

- real if A = Ā,
- symmetric if A = A^T, A real (matrix = its transpose),
- skew-symmetric or antisymmetric if A = −A^T, A real,
- orthogonal if A^{-1} = A^T, A real (inverse = transpose),
- pure imaginary if A = −Ā,
- Hermitian if A = A† (matrix = its transpose conjugate),
- anti-Hermitian if A = −A†,
- unitary if A^{-1} = A† (inverse = transpose conjugate),
- normal if AA† = A†A (A and A† commute).
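The conditions in (9.2) are easy to test numerically. Here is a sketch; the predicate names (is_symmetric and so on) and the tolerance tol are our own choices for illustration, not standard library functions.

```python
import numpy as np

# Predicates for the definitions in (9.2); `tol` absorbs floating-point error.
def is_symmetric(A, tol=1e-12): return np.allclose(A, A.T, atol=tol)
def is_antisymmetric(A, tol=1e-12): return np.allclose(A, -A.T, atol=tol)
def is_orthogonal(A, tol=1e-12): return np.allclose(A.T @ A, np.eye(len(A)), atol=tol)
def is_hermitian(A, tol=1e-12): return np.allclose(A, A.conj().T, atol=tol)
def is_unitary(A, tol=1e-12): return np.allclose(A.conj().T @ A, np.eye(len(A)), atol=tol)
def is_normal(A, tol=1e-12):
    Ad = A.conj().T  # the transpose conjugate A dagger
    return np.allclose(A @ Ad, Ad @ A, atol=tol)

# A rotation matrix is a standard example of an orthogonal matrix.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
assert is_orthogonal(R) and is_normal(R)
```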

Now let's consider some examples and proofs using these terms.

Index Notation We are going to need index notation in some of our work below, so for reference we restate the rule in (6.2b) for matrix multiplication:

(9.3) (AB)_{ij} = Σ_k A_{ik} B_{kj}.

Study carefully the index notation for "row times column" multiplication. To find the element in row i and column j of the product matrix AB, we multiply row i of A times column j of B. Note that the k's (the sum is over k) are next to each other in (9.3). If we should happen to have Σ_k B_{kj} A_{ik}, we should rewrite it as Σ_k A_{ik} B_{kj} (with the k's next to each other) to recognize it as an element of the matrix AB (not BA). We will see an example of this in (9.10) below.
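To make the index gymnastics concrete, here is a NumPy sketch (not from the text) that carries out (9.3) with explicit loops; matmul_index is a made-up name for illustration.

```python
import numpy as np

def matmul_index(A, B):
    """Compute (AB)_ij = sum over k of A_ik B_kj by explicit loops, as in (9.3)."""
    m, n = A.shape[0], B.shape[1]
    C = np.zeros((m, n), dtype=np.result_type(A, B))
    for i in range(m):                    # row i of A ...
        for j in range(n):                # ... times column j of B
            for k in range(A.shape[1]):   # k is the summed index
                C[i, j] += A[i, k] * B[k, j]
    return C

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6.], [7., 8.]])
assert np.allclose(matmul_index(A, B), A @ B)  # agrees with the built-in product
# Writing the factors as B_kj A_ik reorders the same numbers, so the sum
# is still (AB)_ij; the matrix BA is genuinely different in general.
assert not np.allclose(A @ B, B @ A)
```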

Kronecker δ The Kronecker δ is defined by

(9.4) δ_{ij} = 1 if i = j, and δ_{ij} = 0 if i ≠ j.

For example, δ_{11} = 1, δ_{12} = 0, δ_{22} = 1, δ_{31} = 0, and so on. In this notation a unit matrix is one whose elements are δ_{ij}, and we can write

(9.5) I = (δ_{ij}).


(Also see Chapter 10, Section 5.) The Kronecker δ notation is useful for other purposes. For example, since (for positive integers m and n)

(9.6a) ∫_{−π}^{π} sin mx sin nx dx = 0 if m ≠ n, and = π if m = n,

we can write

(9.6b) ∫_{−π}^{π} sin mx sin nx dx = π δ_{nm}.

This is the same as (9.6a) because δ_{nm} = 0 if m ≠ n, and δ_{nm} = 1 if m = n. Using the Kronecker δ, we can give a formal proof that for any matrix M and a conformable unit matrix I, the product of I and M is just M. Using index notation and equations (9.3) and (9.4), we have

(9.7) (IM)_{ij} = Σ_k δ_{ik} M_{kj} = M_{ij}, or IM = M,

since δ_{ik} = 0 unless k = i.
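A short numerical check of (9.4), (9.5), and (9.7); the helper delta is our own name for the Kronecker δ.

```python
import numpy as np

def delta(i, j):
    """Kronecker delta of (9.4)."""
    return 1 if i == j else 0

n = 3
# I = (delta_ij) as in (9.5); it matches NumPy's built-in unit matrix.
I = np.array([[delta(i, j) for j in range(n)] for i in range(n)])
assert np.array_equal(I, np.eye(n, dtype=int))

M = np.arange(9).reshape(3, 3)   # any conformable matrix
assert np.array_equal(I @ M, M)  # IM = M, eq. (9.7)
```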

More Useful Theorems Let's use index notation to prove the associative law for matrix multiplication, that is,

(9.8)

A(BC) = (AB)C = ABC.

First we write (BC)_{kj} = Σ_l B_{kl} C_{lj}. Then we have

(9.9) [A(BC)]_{ij} = Σ_k A_{ik} (BC)_{kj} = Σ_k A_{ik} Σ_l B_{kl} C_{lj} = Σ_{kl} A_{ik} B_{kl} C_{lj} = (ABC)_{ij},

which is the index notation for A(BC) = ABC as in (9.8). We can prove (AB)C = ABC in a similar way (Problem 1).
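A quick numerical spot-check of the associative law (9.8) on random matrices (a check, of course, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))

# (9.8): the grouping does not matter, only the order of the factors.
assert np.allclose(A @ (B @ C), (A @ B) @ C)
```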

In formulas we may want the transpose of the product of two matrices. First note that A^T_{ik} = A_{ki} [see (2.1) or (9.1)]. Then

(9.10) (AB)^T_{ik} = (AB)_{ki} = Σ_j A_{kj} B_{ji} = Σ_j B^T_{ij} A^T_{jk} = (B^T A^T)_{ik}, or

(AB)^T = B^T A^T.

The theorem applies to a product of any number of matrices (see Problem 8b). For example,

(9.11) (ABCD)^T = D^T C^T B^T A^T.

The transpose of a product of matrices is equal to the product of the transposes in reverse order.
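A numerical spot-check of (9.10) and (9.11), again on random matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
A, B, C, D = (rng.standard_normal((3, 3)) for _ in range(4))

assert np.allclose((A @ B).T, B.T @ A.T)                      # (9.10)
assert np.allclose((A @ B @ C @ D).T, D.T @ C.T @ B.T @ A.T)  # (9.11)
```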


A similar theorem is true for the inverse of a product (see Section 6, Problem 18):

(9.12) (ABCD)^{-1} = D^{-1} C^{-1} B^{-1} A^{-1}.

The inverse of a product of matrices is equal to the product of the inverses in reverse order.
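And likewise for (9.12); random matrices are invertible with probability 1, so the seeded example below works.

```python
import numpy as np

rng = np.random.default_rng(2)
A, B, C, D = (rng.standard_normal((3, 3)) for _ in range(4))
inv = np.linalg.inv

# (9.12): inverse of a product = product of inverses in reverse order.
assert np.allclose(inv(A @ B @ C @ D), inv(D) @ inv(C) @ inv(B) @ inv(A))
```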

Trace of a Matrix The trace (or spur) of a square matrix A (written Tr A) is the sum of the elements on the main diagonal. Thus the trace of an n by n unit matrix is n, and the trace of the matrix M in (6.10) is 6. It is a theorem that the trace of a product of matrices is not changed by permuting them in cyclic order. For example,

(9.13) Tr(ABC) = Tr(BCA) = Tr(CAB).

We can prove this as follows:

Tr(ABC) = Σ_i (ABC)_{ii} = Σ_{ijk} A_{ij} B_{jk} C_{ki} = Σ_{ijk} B_{jk} C_{ki} A_{ij} = Tr(BCA) = Σ_{ijk} C_{ki} A_{ij} B_{jk} = Tr(CAB).
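A numerical check of (9.13), including the warning below that a non-cyclic reordering changes the trace in general:

```python
import numpy as np

rng = np.random.default_rng(3)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))
tr = np.trace

# Cyclic permutations leave the trace unchanged ...
assert np.isclose(tr(A @ B @ C), tr(B @ C @ A))
assert np.isclose(tr(A @ B @ C), tr(C @ A @ B))
# ... but swapping two factors (a non-cyclic reordering) does not.
assert not np.isclose(tr(A @ B @ C), tr(A @ C @ B))
```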

Warning: Tr(ABC) is not equal to Tr(ACB) in general.

Theorem: If H is a Hermitian matrix, then U = e^{iH} is a unitary matrix. (This is an important relation in quantum mechanics.) By (9.2) we need to prove that U† = U^{-1} if H† = H. First, e^{iH} e^{−iH} = e^{iH−iH} since H commutes with itself (see Problem 6.29). But this is e^0, which is the unit matrix [see Section 6], so U^{-1} = e^{−iH}. To find U† = (e^{iH})†, we expand U = e^{iH} in a power series to get U = Σ_k (iH)^k / k! and then take the transpose conjugate. To do this we just need to realize that the transpose of a sum of matrices is the sum of the transposes, and that the transpose of a power of a matrix, say (M^n)^T, is equal to (M^T)^n (Problem 9.21). Also recall from Chapter 2 that you find the complex conjugate of an expression by changing the signs of all the i's. This means that (iH)† = −iH† = −iH since H is Hermitian. Then summing the series we get U† = e^{−iH}, which is just what we found for U^{-1} above. Thus U† = U^{-1}, so U is a unitary matrix. (Also see Problem 11.61.)
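The theorem can also be checked numerically. The sketch below (assuming SciPy is available) builds a Hermitian H from an arbitrary complex matrix X, since X + X† is always Hermitian, and uses SciPy's matrix exponential expm for e^{iH}.

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential, computed from its power series

rng = np.random.default_rng(4)
X = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
H = X + X.conj().T                    # X + X dagger is Hermitian
assert np.allclose(H, H.conj().T)

U = expm(1j * H)                      # U = e^{iH}
# U dagger should equal U^{-1}, i.e. U is unitary.
assert np.allclose(U.conj().T @ U, np.eye(3))
assert np.allclose(U.conj().T, np.linalg.inv(U))
```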

PROBLEMS, SECTION 9

1. Use index notation as in (9.9) to prove the second part of the associative law for matrix multiplication: (AB)C = ABC.

2. Use index notation to prove the distributive law for matrix multiplication, namely: A(B + C) = AB + AC.


3. Given the following matrix, find the transpose, the inverse, the complex conjugate, and the transpose conjugate of A. Verify that AA^{-1} = A^{-1}A = the unit matrix.

4. Repeat Problem 3 given

A = [matrix lost in extraction; recoverable entries include 2i, −1, and −i]

5. Show that the product AA^T is a symmetric matrix.

6. Give numerical examples of: a symmetric matrix; a skew-symmetric matrix; a real matrix; a pure imaginary matrix.

7. Write each of the items in the second column of (9.2) in index notation.

8. (a) Prove that (AB)† = B†A†. Hint: See (9.10).

(b) Verify (9.11), that is, show that (9.10) applies to a product of any number of matrices. Hint: Use (9.10) and (9.8).

9. In (9.1) we have defined the adjoint of a matrix as the transpose conjugate. This is the usual definition except in algebra where the adjoint is defined as the transposed matrix of cofactors [see (6.13)]. Show that the two definitions are the same for a unitary matrix with determinant = +1.

10. Show that if a matrix is orthogonal and its determinant is +1, then each element of the matrix is equal to its own cofactor. Hint: Use (6.13) and the definition of an orthogonal matrix.

11. Show that a real Hermitian matrix is symmetric. Show that a real unitary matrix is orthogonal. Note: Thus we see that Hermitian is the complex analogue of symmetric, and unitary is the complex analogue of orthogonal. (See Section 11.)

12. Show that the definition of a Hermitian matrix (A = A†) can be written a_{ij} = ā_{ji} (that is, the diagonal elements are real and the other elements have the property that a_{12} = ā_{21}, etc.). Construct an example of a Hermitian matrix.

13. Show that the following matrix is a unitary matrix.

A = [matrix lost in extraction; one recoverable entry is (1 + i√3)/4]

14. Use (9.11) and (9.12) to simplify (AB C), (C MC), and (AH)(AHA)(HA 1) 1. [The superscripts in these three expressions were lost in extraction.]

15. (a) Show that the Pauli spin matrices (Problem 6.6) are Hermitian.

(b) Show that the Pauli spin matrices satisfy the Jacobi identity [A, [B, C]] + [B, [C, A]] + [C, [A, B]] = 0, where [A, B] is the commutator of A and B [see (6.3)].

(c) Generalize (b) to prove the Jacobi identity for any (conformable) matrices A, B, C. Also see Chapter 6, Problem 3.14.

16. Let C_{ij} = (−1)^{i+j} M_{ij} be the cofactor of element a_{ij} in the determinant A. Show that the statement of Laplace's development and the statement of Problem 3.8 can be combined in the equations

Σ_j a_{ij} C_{kj} = δ_{ik} · det A, or Σ_i a_{ij} C_{ik} = δ_{jk} · det A.


17. (a) Show that if A and B are symmetric, then AB is not symmetric unless A and B commute.

(b) Show that a product of orthogonal matrices is orthogonal.

(c) Show that if A and B are Hermitian, then AB is not Hermitian unless A and B commute.

(d) Show that a product of unitary matrices is unitary.

18. If A and B are symmetric matrices, show that their commutator is antisymmetric [see equation (6.3)].

19. (a) Prove that Tr(AB) = Tr(BA). Hint: See proof of (9.13).

(b) Prove that Tr(ABC) = Tr(CAB).

(c) If S is a symmetric matrix and A is an antisymmetric matrix, show that Tr(SA) = 0. Hint: Consider Tr[(SA)^T] and prove that Tr(SA) = −Tr(SA).

20. Show that the determinant of a unitary matrix is a complex number with absolute value = 1. Hint: See proof of equation (7.11).

21. Show that the transpose of a sum of matrices is equal to the sum of the transposes. Also show that (M^n)^T = (M^T)^n. Hint: Use (9.11) and (9.8).

22. Show that a unitary matrix is a normal matrix, that is, that it commutes with its transpose conjugate [see (9.2)]. Also show that orthogonal, symmetric, antisymmetric, Hermitian, and anti-Hermitian matrices are normal.

23. Show that the following matrices are Hermitian whether A is Hermitian or not: AA†, A + A†, i(A − A†).

24. Show that an orthogonal transformation preserves the length of vectors. Hint: If r is the column matrix of vector r [see (6.10)], write out r^T r to show that it is the square of the length of r. Similarly R^T R = |R|², and you want to show that |R|² = |r|², that is, R^T R = r^T r, if R = Mr and M is orthogonal. Use (9.11).

25. (a) Show that the inverse of an orthogonal matrix is orthogonal. Hint: Let A = O^{-1}; from (9.2), write the condition for O to be orthogonal and show that A satisfies it.

(b) Show that the inverse of a unitary matrix is unitary. See hint in (a).

(c) If H is Hermitian and U is unitary, show that U^{-1}HU is Hermitian.