Section 6.9 The Polar Decomposition Theorem
The polar decomposition is a decomposition of a linear transformation A : V → U into the product of a Hermitian linear transformation and a unitary linear transformation. The Hermitian linear transformation is positive semidefinite and, depending upon the properties of A, the unitary linear transformation may not be unique. The details of this rough description will be made clear in this section. One of the applications of the polar decomposition theorem is the case where A : V → U is one to one and dim V = dim U. In other words, when A : V → U is one to one and onto, thus invertible. This is the case that arises when one studies the kinematics of strain for continuous materials. The formal statement of the polar decomposition theorem in this case is
Theorem 6.9.1: A one to one onto linear transformation A : V → U has a unique multiplicative decomposition

A = RV    (6.9.1)

where R : V → U is unitary and V : V → V is Hermitian and positive definite.
Proof: The proof utilizes a construction similar to that used in Section 6.8 for the singular value decomposition. Given a one to one onto linear transformation A : V → U, we can construct a Hermitian linear transformation C : V → V by the definition

C = A*A    (6.9.2)
By the same argument that produced (6.8.8),

⟨Cv, v⟩ = ⟨A*Av, v⟩ = ⟨Av, Av⟩ = ‖Av‖² ≥ 0    (6.9.3)
Because A : V → U is one to one and onto, the kernel K(A) contains only the zero vector. As a result, ‖Av‖² > 0 for all nonzero vectors v ∈ V. Therefore, the Hermitian linear transformation C : V → V is positive definite. As a positive definite Hermitian linear transformation, C : V → V has the spectral representation
C = ∑_{j=1}^{N} λ_j v_j ⊗ v_j    (6.9.4)
where the positive numbers λ₁, λ₂, …, λ_N are the eigenvalues of C and {v₁, v₂, …, v_N} is an orthonormal basis for V consisting of eigenvectors of C. The representation (6.9.4) does not assume the eigenvalues are distinct. If they are, the tensor products in (6.9.4) represent the projections into the characteristic subspaces of C. It is useful to note that we can apply (6.6.31) to the expression (6.9.4) and obtain
Chap. 6 • ADDITIONAL TOPICS EIGENVALUE PROBLEMS
f(C) = ∑_{j=1}^{N} f(λ_j) v_j ⊗ v_j    (6.9.5)
We can also apply the definition (6.6.31) and define the linear transformation V : V → V by

V = C^{1/2} = ∑_{j=1}^{N} √λ_j v_j ⊗ v_j    (6.9.6)

where, by convention, we have used the positive square root of each eigenvalue. It follows from (6.9.6) that

V⁻¹ = ∑_{j=1}^{N} (1/√λ_j) v_j ⊗ v_j    (6.9.7)
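The constructions (6.9.6) and (6.9.7) amount to diagonalizing C, taking the positive square root (or its reciprocal) of each eigenvalue, and reassembling. A minimal numerical sketch with numpy, using an arbitrary illustrative positive definite matrix (not one from the text):

```python
import numpy as np

# An arbitrary positive definite symmetric matrix for illustration.
C = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# Spectral decomposition C = sum_j lam_j v_j (x) v_j, as in (6.9.4);
# the columns of Q are the orthonormal eigenvectors v_j.
lam, Q = np.linalg.eigh(C)

# V = C^{1/2} = sum_j sqrt(lam_j) v_j (x) v_j, as in (6.9.6).
V = Q @ np.diag(np.sqrt(lam)) @ Q.T

# V^{-1} = sum_j (1/sqrt(lam_j)) v_j (x) v_j, as in (6.9.7).
V_inv = Q @ np.diag(1.0 / np.sqrt(lam)) @ Q.T

assert np.allclose(V @ V, C)              # V is the positive square root of C
assert np.allclose(V @ V_inv, np.eye(2))  # (6.9.7) inverts (6.9.6)
```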
Equation (6.9.6) provides one of the two linear transformations in the decomposition (6.9.1). The next formal step is to define the linear transformation R : V → U by the formula

R = AV⁻¹    (6.9.8)

Because A is invertible, R as defined by (6.9.8) is also invertible. If we can establish that R is unitary, we will have established (6.9.1). We shall establish that R is unitary by showing that it obeys (4.10.14), repeated,

R*R = I_V    (6.9.9)
The definition (6.9.8) yields

R*R = (AV⁻¹)*(AV⁻¹) = V⁻¹A*AV⁻¹ = V⁻¹CV⁻¹ = V⁻¹V²V⁻¹ = I_V    (6.9.10)

where (4.9.10), (6.9.2), (6.9.6), and the fact that V⁻¹ is Hermitian have been used. The uniqueness of the decomposition (6.9.1) is a consequence of (6.9.2), (6.9.6) and (6.9.8).
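Because the proof is constructive, it translates directly into an algorithm: form C = A*A, take the spectral square root V = C^{1/2}, and set R = AV⁻¹. A hedged numpy sketch (the matrix A below is an arbitrary invertible example, not one from the text):

```python
import numpy as np

def polar_decomposition(A):
    """Return (R, V) with A = R V, following Theorem 6.9.1.
    Assumes A is square and invertible; V is Hermitian positive
    definite and R is unitary (orthogonal in the real case)."""
    C = A.conj().T @ A                           # C = A*A, eq. (6.9.2)
    lam, Q = np.linalg.eigh(C)                   # spectral form, eq. (6.9.4)
    V = Q @ np.diag(np.sqrt(lam)) @ Q.conj().T   # V = C^{1/2}, eq. (6.9.6)
    R = A @ np.linalg.inv(V)                     # R = A V^{-1}, eq. (6.9.8)
    return R, V

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])                       # illustrative invertible matrix
R, V = polar_decomposition(A)
assert np.allclose(R.conj().T @ R, np.eye(2))    # R*R = I, eq. (6.9.9)
assert np.allclose(R @ V, A)                     # A = R V, eq. (6.9.1)
```

SciPy exposes the same factorization as scipy.linalg.polar, which computes it from the singular value decomposition; the sketch above follows the proof instead.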
A corollary to Theorem 6.9.1 is that A : V → U also has the decomposition

A = UR    (6.9.11)

where U : U → U is a positive definite Hermitian linear transformation. Equation (6.9.11) results if we simply define U by the formula
U = RVR*    (6.9.12)
It readily follows from this definition that

B = U²    (6.9.13)

where B : U → U is the positive definite Hermitian linear transformation defined by

B = AA*    (6.9.14)
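The corollary can be checked numerically as well: with the right polar factors R and V in hand, U = RVR* from (6.9.12) satisfies both A = UR and U² = AA*. A small numpy sketch with an illustrative invertible matrix (not from the text):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])                 # illustrative invertible matrix

# Right polar factors A = R V, built as in Theorem 6.9.1.
lam, Q = np.linalg.eigh(A.T @ A)
V = Q @ np.diag(np.sqrt(lam)) @ Q.T
R = A @ np.linalg.inv(V)

# Left factor U = R V R*, eq. (6.9.12); here * is just the transpose.
U = R @ V @ R.T

assert np.allclose(U @ R, A)               # A = U R, eq. (6.9.11)
assert np.allclose(U @ U, A @ A.T)         # U^2 = B = A A*, eqs. (6.9.13)-(6.9.14)
```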
It is possible to show that

U = ∑_{j=1}^{N} √λ_j u_j ⊗ u_j    (6.9.15)

where {u₁, u₂, …, u_N} is an orthonormal basis of U consisting of eigenvectors of B. The eigenvectors u₁, u₂, …, u_N are related to the eigenvectors v₁, v₂, …, v_N by the formula

u_j = Rv_j    (6.9.16)

which follows from

Av_j = RVv_j = R(√λ_j v_j) = √λ_j Rv_j

where (6.9.1) and (6.9.6) have been used. Equation (6.9.16) can also be established from (6.9.12), (6.9.6), (6.9.15) and (6.7.24).
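The relation u_j = Rv_j in (6.9.16) says that R maps the orthonormal eigenvectors of C = A*A onto orthonormal eigenvectors of B = AA*, with the same eigenvalues. A brief numpy check on an illustrative matrix (not from the text):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])                 # illustrative invertible matrix

# Eigenvectors v_j of C = A*A (columns of Qv) and the factor R = A V^{-1}.
lam, Qv = np.linalg.eigh(A.T @ A)
V = Qv @ np.diag(np.sqrt(lam)) @ Qv.T
R = A @ np.linalg.inv(V)

U_vecs = R @ Qv                            # u_j = R v_j, eq. (6.9.16)
B = A @ A.T                                # B = A A*, eq. (6.9.14)
for j in range(2):
    u = U_vecs[:, j]
    # Each u_j is an eigenvector of B with the same eigenvalue lam_j.
    assert np.allclose(B @ u, lam[j] * u)
```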
Example 6.9.1: As an illustration of the polar decomposition theorem, consider the linear transformation A : V → V introduced in Example 5.3.1. The definition of this linear transformation is given in equation (5.3.1), repeated,

Ae₁ = e₁ + e₂ − 4e₃
Ae₂ = 2e₁ + 4e₃    (6.9.17)
Ae₃ = e₁ + e₂ + 5e₃
where {e₁, e₂, e₃} is a basis for V. As explained in Example 4.5.1, the matrix of A with respect to this basis is

M(A, e, e) = |  1   2   1 |
             |  1   0   1 |
             | −4   4   5 |    (6.9.18)

The linear transformation C defined by (6.9.2) has the matrix

M(C, e, e) = M(A, e, e)ᵀ M(A, e, e) = |  18  −14  −18 |
                                      | −14   20   22 |
                                      | −18   22   27 |    (6.9.19)
The eigenvalues λ₁, λ₂, λ₃ and the corresponding orthonormal eigenvectors of the matrix (6.9.19) can be computed numerically, where the notation introduced in equation (5.3.23) has been used to label the eigenvectors. The spectral form of (6.9.19), which follows from (6.9.4), is

M(C, e, e) = λ₁ M(v₁ ⊗ v₁, e, e) + λ₂ M(v₂ ⊗ v₂, e, e) + λ₃ M(v₃ ⊗ v₃, e, e)    (6.9.22)
where (6.7.8) has been used to determine the matrix representation of the tensor products in (6.9.4). From (6.9.6) and (6.9.22), it follows that
M(V, e, e) = √λ₁ M(v₁ ⊗ v₁, e, e) + √λ₂ M(v₂ ⊗ v₂, e, e) + √λ₃ M(v₃ ⊗ v₃, e, e)    (6.9.23)
Finally, the matrix of the orthogonal linear transformation R : V → V is, from (6.9.8),

M(R, e, e) = M(A, e, e) M(V, e, e)⁻¹    (6.9.24)
Therefore, the polar decomposition (6.9.1) is given by (6.9.18), (6.9.24) and (6.9.23). If we utilize (6.9.12) and (6.9.24) it follows that
M(U, e, e) = M(R, e, e) M(V, e, e) M(Rᵀ, e, e)    (6.9.25)
Equation (6.9.25) creates a small problem because the components of the linear transformation Rᵀ with respect to the basis {e₁, e₂, e₃} are given by (4.9.24) specialized to the case of a real vector space V and a linear transformation V → V. Equation (4.9.24) requires knowledge of the matrix of inner products formed from the basis {e₁, e₂, e₃}. Fortunately, we do not need to utilize (4.9.24) in this case because R is orthogonal and, from (4.10.15),

Rᵀ = R⁻¹    (6.9.26)

and from (3.5.42)

M(Rᵀ, e, e) = M(R⁻¹, e, e) = M(R, e, e)⁻¹    (6.9.27)
Equation (6.9.27) allows (6.9.25) to be written

M(U, e, e) = M(R, e, e) M(V, e, e) M(R, e, e)⁻¹    (6.9.28)

As a result of (6.9.28) and (6.9.24),
M(U, e, e) is the 3 × 3 positive definite symmetric matrix (6.9.29) obtained by carrying out the products in (6.9.28) numerically.
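Example 6.9.1 can be verified numerically. The matrix below is reconstructed from (6.9.17) and (6.9.18); the signs are inferred from the surrounding computations and should be treated as an assumption about the original example:

```python
import numpy as np

# Matrix of A from (6.9.18); the signs are a reconstruction (assumption).
A = np.array([[ 1.0, 2.0, 1.0],
              [ 1.0, 0.0, 1.0],
              [-4.0, 4.0, 5.0]])

C = A.T @ A                                     # eq. (6.9.2)
assert np.allclose(C, [[ 18.0, -14.0, -18.0],
                       [-14.0,  20.0,  22.0],
                       [-18.0,  22.0,  27.0]])  # matches (6.9.19)

lam, Q = np.linalg.eigh(C)
V = Q @ np.diag(np.sqrt(lam)) @ Q.T             # eq. (6.9.6)
R = A @ np.linalg.inv(V)                        # eq. (6.9.8)

assert np.allclose(R.T @ R, np.eye(3))          # R is orthogonal
assert np.allclose(R @ V, A)                    # the polar decomposition (6.9.1)
```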
Example 6.9.2: Consider the linear transformation A : V → V whose matrix with respect to an orthonormal basis {i₁, i₂, i₃, i₄} is the 4 × 4 complex matrix M(A, i, i) of equation (6.9.30), with entries such as 2 + 3i, 1 + 2i, and 5i. The linear transformation C defined by (6.9.2) has the matrix

M(C, i, i) = M(A*, i, i) M(A, i, i)    (6.9.31)

where, because the basis is orthonormal, M(A*, i, i) is the conjugate transpose of M(A, i, i).
The eigenvalues (6.9.32) and the corresponding orthonormal eigenvectors (6.9.33) of the matrix (6.9.31) can be computed numerically,
where the notation introduced in equation (5.3.23) has been used to label the eigenvectors. The spectral form of (6.9.33) is given by (6.9.4). From (6.9.6) and (6.9.33), it follows that
M(V, i, i) = √λ₁ M(v₁ ⊗ v₁, i, i) + √λ₂ M(v₂ ⊗ v₂, i, i) + √λ₃ M(v₃ ⊗ v₃, i, i) + √λ₄ M(v₄ ⊗ v₄, i, i)    (6.9.34)

which is a 4 × 4 positive definite Hermitian matrix evaluated numerically.
Finally, the matrix of the unitary linear transformation R : V → V is, from (6.9.8),

M(R, i, i) = M(A, i, i) M(V, i, i)⁻¹    (6.9.35)

which is again evaluated numerically.
Therefore, the polar decomposition (6.9.1) is given by (6.9.30), (6.9.34) and (6.9.35). If we utilize (6.9.12) it follows that

M(U, i, i) = M(R, i, i) M(V, i, i) M(R*, i, i) = M(R, i, i) M(V, i, i) M(R, i, i)⁻¹    (6.9.36)

because the basis is orthonormal and R is unitary, so that M(R*, i, i) is both the conjugate transpose and the inverse of M(R, i, i).
As a result of (6.9.34) and (6.9.35), the 4 × 4 positive definite Hermitian matrix M(U, i, i) is obtained by carrying out the products in (6.9.36) numerically.
The proof of the polar decomposition theorem, as shown by the above, involves a construction that is very similar to that used for the singular value decomposition theorem of Section 6.8. It is the singular value decomposition theorem that generalizes the polar decomposition theorem. Our next discussion will return to the singular value decomposition theorem, which will be used to reprove and generalize the polar decomposition theorem above. The generalization is that we will not assume that the linear transformation A : V → U is one to one and onto. The result will be a polar decomposition theorem similar in form to the one above except that the linear transformation R : V → U is not unique. We begin this discussion by summarizing the results of Section 6.8. If we are given a linear transformation A : V → U, it has the representation (6.8.46)

A = ∑_{p=1}^{R} √λ_p u_p ⊗ v_p    (6.9.38)
where
* R = dim R(A),
* λ₁, λ₂, …, λ_R, 0, 0, …, 0 are the eigenvalues of A*A : V → V, the last N − R of which are zero,
* {v₁, v₂, …, v_R, v_{R+1}, …, v_N} is an orthonormal basis of V consisting of eigenvectors of A*A, and