ON ANGLES BETWEEN SUBSPACES OF INNER PRODUCT SPACES
HENDRA GUNAWAN AND OKI NESWAN
Abstract. We discuss the notion of angles between two subspaces of an inner product space, as introduced by Risteski and Trenčevski [11] and Gunawan, Neswan and Setya-Budhi [7], and study its connection with the so-called canonical angles. Our approach uses only elementary calculus and linear algebra.

1. Introduction

The notion of angles between two subspaces of a Euclidean space R^n has attracted many researchers since the 1950's (see, for instance, [1, 2, 3]). These angles are known to statisticians and numerical analysts as canonical or principal angles. A few results on canonical angles can be found in, for example, [4, 8, 10, 12]. Recently, Risteski and Trenčevski [11] introduced an alternative definition of angles between two subspaces of R^n and explained their connection with canonical angles.

Let U = span{u_1, ..., u_p} and V = span{v_1, ..., v_q} be p- and q-dimensional subspaces of R^n, where {u_1, ..., u_p} and {v_1, ..., v_q} are orthonormal, with 1 ≤ p ≤ q. Then, as is suggested by Risteski and Trenčevski [11], one can define the angle θ between the two subspaces U and V by

    cos² θ := det(M^T M),

where M := [⟨u_i, v_k⟩] is a q × p matrix, M^T is its transpose, and ⟨·,·⟩ denotes the usual inner product on R^n. (Actually, Risteski and Trenčevski gave a more general formula for cos θ, but their formula is erroneous and was corrected by Gunawan, Neswan and Setya-Budhi in [7].)
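Concretely, if orthonormal bases for U and V are stacked as columns of matrices, the definition above is straightforward to compute. A minimal sketch in NumPy (the helper name `subspace_angle_cos` and the example subspaces are illustrative choices, not from the paper):

```python
import numpy as np

def subspace_angle_cos(U, V):
    """cos(theta) between span(U) and span(V), via cos^2(theta) = det(M^T M),
    where M[k, i] = <u_i, v_k>. Columns of U (n x p) and V (n x q) are
    assumed orthonormal, with p <= q."""
    M = V.T @ U                       # q x p matrix of inner products
    return np.sqrt(np.linalg.det(M.T @ M))

# Example: the plane spanned by (1,0,1)/sqrt(2) and (0,1,0) vs. the xy-plane.
U = np.linalg.qr(np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]]))[0]
V = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
c = subspace_angle_cos(U, V)
print(c)  # 1/sqrt(2), i.e. theta = 45 degrees
```

Note that the value depends only on the subspaces, not on the particular orthonormal bases chosen, since an orthonormal change of basis multiplies det(M^T M) by determinants of orthogonal matrices.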
2000 Mathematics Subject Classification. 15A03, 51N20, 15A21, 65F30.
Key words and phrases. canonical angles, angle between subspaces.
The research is supported by QUE-Project V (2003) Math-ITB.

Now let θ_1 ≤ θ_2 ≤ ··· ≤ θ_p be the canonical angles between U and V, which are defined recursively by

    cos θ_1 := max_{u ∈ U, ‖u‖=1} max_{v ∈ V, ‖v‖=1} ⟨u, v⟩ = ⟨u_1, v_1⟩,

    cos θ_{i+1} := max_{u ∈ U_i, ‖u‖=1} max_{v ∈ V_i, ‖v‖=1} ⟨u, v⟩ = ⟨u_{i+1}, v_{i+1}⟩,

where U_i is the orthogonal complement of u_i relative to U_{i−1} and V_i is the orthogonal complement of v_i relative to V_{i−1} (with U_0 = U and V_0 = V). Then we have

Theorem 1 (Risteski & Trenčevski). cos² θ = ∏_{i=1}^p cos² θ_i.
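Theorem 1 is easy to check numerically: when both bases are orthonormal, the cosines of the canonical angles are precisely the singular values of M (this is the computational route of Björck and Golub [2]). A sketch, using randomly generated subspaces as illustrative input:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 7, 3, 4

# Random orthonormal bases for U (n x p) and V (n x q) via QR.
U = np.linalg.qr(rng.standard_normal((n, p)))[0]
V = np.linalg.qr(rng.standard_normal((n, q)))[0]

M = V.T @ U                                    # q x p, M[k, i] = <u_i, v_k>
cosines = np.linalg.svd(M, compute_uv=False)   # cosines of canonical angles
lhs = np.linalg.det(M.T @ M)                   # cos^2(theta)
rhs = np.prod(cosines ** 2)                    # product of cos^2(theta_i)
print(lhs, rhs)                                # the two values agree
```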
The proof of Theorem 1 uses matrix-theoretic arguments (involving the notions of orthogonal, positive definite, symmetric, and diagonalizable matrices). In this note, we present an alternative approach towards Theorem 1. For the case p = 2, we reprove Theorem 1 by using only elementary calculus. For the general case, we prove a statement similar to Theorem 1 by using elementary calculus and linear algebra. As in [7], we replace R^n by an inner product space of arbitrary dimension. Related results may be found in [9].
2. Main results

In this section, let (X, ⟨·,·⟩) be an inner product space of dimension 2 or higher (possibly infinite), and let U = span{u_1, ..., u_p} and V = span{v_1, ..., v_q} be two subspaces of X with 1 ≤ p ≤ q < ∞. Assuming that {u_1, ..., u_p} and {v_1, ..., v_q} are orthonormal, let θ be the angle between U and V, given by

    cos² θ := det(M^T M),

with M := [⟨u_i, v_k⟩] being a q × p matrix.
2.1. The case p = 2. For p = 2, we have the following reformulation of Theorem 1.

Fact 2. cos θ = (max_{u ∈ U, ‖u‖=1} max_{v ∈ V, ‖v‖=1} ⟨u, v⟩) · (min_{u ∈ U, ‖u‖=1} max_{v ∈ V, ‖v‖=1} ⟨u, v⟩).
Proof. First note that, for each u ∈ X, ⟨u, v⟩ is maximized on {v ∈ V : ‖v‖ = 1} at v = proj_V u / ‖proj_V u‖, and

    max_{v ∈ V, ‖v‖=1} ⟨u, v⟩ = ⟨u, proj_V u / ‖proj_V u‖⟩ = ‖proj_V u‖.

Here of course proj_V u = ∑_{k=1}^q ⟨u, v_k⟩ v_k denotes the orthogonal projection of u on V. Now consider the function f(u) := ‖proj_V u‖, u ∈ U, ‖u‖ = 1. To find its extreme values, we examine the function g := f², given by

    g(u) = ‖proj_V u‖² = ∑_{k=1}^q ⟨u, v_k⟩²,   u ∈ U, ‖u‖ = 1.
Writing u = (cos t) u_1 + (sin t) u_2, we can view g as a function of t ∈ [0, π], given by

    g(t) = ∑_{k=1}^q [cos t ⟨u_1, v_k⟩ + sin t ⟨u_2, v_k⟩]².

Expanding the series and using trigonometric identities, we can rewrite g as

    g(t) = ½ [C + √(A² + B²) cos(2t − α)],

where

    A = ∑_{k=1}^q [⟨u_1, v_k⟩² − ⟨u_2, v_k⟩²],
    B = 2 ∑_{k=1}^q ⟨u_1, v_k⟩ ⟨u_2, v_k⟩,
    C = ∑_{k=1}^q [⟨u_1, v_k⟩² + ⟨u_2, v_k⟩²],

and tan α = B/A. From this expression, we see that g has the maximum value m := (C + √(A² + B²))/2 and the minimum value n := (C − √(A² + B²))/2. (Note particularly that n is nonnegative.) Accordingly, the extreme values of f are

    max_{u ∈ U, ‖u‖=1} max_{v ∈ V, ‖v‖=1} ⟨u, v⟩ = √m   and   min_{u ∈ U, ‖u‖=1} max_{v ∈ V, ‖v‖=1} ⟨u, v⟩ = √n.
Now we observe that

    mn = [C² − (A² + B²)] / 4
       = ∑_{k=1}^q ⟨u_1, v_k⟩² · ∑_{k=1}^q ⟨u_2, v_k⟩² − [∑_{k=1}^q ⟨u_1, v_k⟩ ⟨u_2, v_k⟩]²
       = det(M^T M)
       = cos² θ.
This completes the proof. □

Geometrically, the value of cos θ, where θ is the angle between U = span{u_1, u_2} and V = span{v_1, ..., v_q} with 2 ≤ q < ∞, represents the ratio between the area of the parallelogram spanned by proj_V u_1 and proj_V u_2 and the area of the parallelogram spanned by u_1 and u_2. The area of the parallelogram spanned by two vectors x and y in X is given by [‖x‖² ‖y‖² − ⟨x, y⟩²]^{1/2} (see [6]). In particular, when {u_1, u_2} and {v_1, ..., v_q} are orthonormal, the area of the parallelogram spanned by proj_V u_1 and proj_V u_2 is

    [∑_{k=1}^q ⟨u_1, v_k⟩² · ∑_{k=1}^q ⟨u_2, v_k⟩² − (∑_{k=1}^q ⟨u_1, v_k⟩ ⟨u_2, v_k⟩)²]^{1/2},

while the area of the parallelogram spanned by u_1 and u_2 is 1.
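The quantities A, B, C above are directly computable, and the identity mn = det(M^T M) can be confirmed numerically. A sketch with randomly chosen orthonormal bases (illustrative data, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, q = 6, 3
U = np.linalg.qr(rng.standard_normal((n, 2)))[0]   # orthonormal u_1, u_2
V = np.linalg.qr(rng.standard_normal((n, q)))[0]   # orthonormal v_1, ..., v_q

a = V.T @ U[:, 0]                 # the numbers <u_1, v_k>, k = 1..q
b = V.T @ U[:, 1]                 # the numbers <u_2, v_k>, k = 1..q
A = np.sum(a ** 2 - b ** 2)
B = 2 * np.sum(a * b)
C = np.sum(a ** 2 + b ** 2)
r = np.hypot(A, B)                # sqrt(A^2 + B^2)
m, nval = (C + r) / 2, (C - r) / 2   # max and min of g = f^2

M = V.T @ U
print(m * nval, np.linalg.det(M.T @ M))   # the two values agree: mn = cos^2(theta)
```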
Our result here says that, in statistical terminology, the value of cos θ is the product of the maximum and the minimum correlations between u ∈ U and its best predictor proj_V u ∈ V. The maximum correlation is of course the cosine of the first canonical angle, while the minimum correlation is the cosine of the second canonical angle. The fact that the cosine of the last canonical angle is in general the minimum correlation between u ∈ U and proj_V u ∈ V is justified in [10].
2.2. The general case. The main ideas of the proof of Theorem 1 are that the matrix M^T M is diagonalizable, so that its determinant is equal to the product of its eigenvalues, and that the eigenvalues are the squares of the cosines of the canonical angles. The following statement is slightly less sharp than Theorem 1, but much easier to prove (in the sense that no sophisticated arguments are needed).

Theorem 3. cos θ is equal to the product of all critical values of f(u) := max_{v ∈ V, ‖v‖=1} ⟨u, v⟩, u ∈ U, ‖u‖ = 1.
Proof. For each u ∈ X, ⟨u, v⟩ is maximized on {v ∈ V : ‖v‖ = 1} at v = proj_V u / ‖proj_V u‖, and max_{v ∈ V, ‖v‖=1} ⟨u, v⟩ = ‖proj_V u‖. Now consider the function g := f², given by

    g(u) = ‖proj_V u‖²,   u ∈ U, ‖u‖ = 1.

Writing u = ∑_{i=1}^p t_i u_i, we have

    proj_V u = ∑_{k=1}^q ⟨u, v_k⟩ v_k = ∑_{k=1}^q ∑_{i=1}^p t_i ⟨u_i, v_k⟩ v_k,

and so we can view g as g(t_1, ..., t_p), given by

    g(t_1, ..., t_p) := ‖∑_{k=1}^q (∑_{i=1}^p t_i ⟨u_i, v_k⟩) v_k‖² = ∑_{k=1}^q (∑_{i=1}^p t_i ⟨u_i, v_k⟩)²,

where ∑_{i=1}^p t_i² = 1. In terms of the matrix M = [⟨u_i, v_k⟩], one may write

    g(t) = t^T M^T M t = |Mt|_q²,   |t|_p = 1,

where t = [t_1 ··· t_p]^T and |·|_p denotes the usual norm in R^p. To find the critical values of g, we use the Lagrange method, working with the function

    h_λ(t) := |Mt|_q² + λ(1 − |t|_p²),   t ∈ R^p.

For the critical values, we must have

    ∇h_λ(t) = 2M^T M t − 2λt = 0,

whence

(2.1)    M^T M t = λt.

Multiplying both sides by t^T on the left, we obtain

    λ = t^T M^T M t = g(t),

because t^T t = |t|² = 1. Hence any Lagrange multiplier λ is a critical value of g.
But (2.1) also tells us that λ must be an eigenvalue of M^T M. From elementary linear algebra, we know that λ must satisfy

    det(M^T M − λI) = 0,

which has in general p roots, λ_1 ≥ ··· ≥ λ_p ≥ 0, say. Since M^T M is diagonalizable, we conclude that

    det(M^T M) = ∏_{i=1}^p λ_i.

This proves the assertion. □
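The content of the proof — each unit eigenvector of M^T M is a critical point of g, the corresponding critical value is the eigenvalue, and the determinant is their product — can be illustrated numerically. A sketch with randomly generated subspaces as illustrative input:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, q = 8, 3, 5
U = np.linalg.qr(rng.standard_normal((n, p)))[0]
V = np.linalg.qr(rng.standard_normal((n, q)))[0]
M = V.T @ U                        # q x p, M[k, i] = <u_i, v_k>

G = M.T @ M
lam, T = np.linalg.eigh(G)         # eigenvalues and orthonormal eigenvectors
# Each unit eigenvector t satisfies (2.1), and g(t) = t^T M^T M t equals
# the corresponding eigenvalue lambda.
g_at_critical = np.array([t @ G @ t for t in T.T])
print(np.allclose(g_at_critical, lam))              # True
print(np.isclose(np.linalg.det(G), np.prod(lam)))   # True
```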
A geometric interpretation of the value of cos θ is provided below.
Fact 4. cos θ is equal to the volume of the p-dimensional parallelepiped spanned by proj_V u_i, i = 1, ..., p.

Proof. Since we are assuming that {v_1, ..., v_q} is orthonormal, we have proj_V u_i = ∑_{k=1}^q ⟨u_i, v_k⟩ v_k for each i = 1, ..., p. Thus, for i, j = 1, ..., p, we obtain

    ⟨proj_V u_i, proj_V u_j⟩ = ∑_{k=1}^q ⟨u_i, v_k⟩ ⟨u_j, v_k⟩,

whence

    det[⟨proj_V u_i, proj_V u_j⟩] = det(M^T M) = cos² θ.

But det[⟨proj_V u_i, proj_V u_j⟩] is the square of the volume of the p-dimensional parallelepiped spanned by proj_V u_i, i = 1, ..., p (see [6]). □
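Fact 4 is likewise easy to check numerically: the Gram determinant of the projected basis vectors equals det(M^T M). A sketch (random orthonormal bases as illustrative input):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, q = 7, 2, 4
U = np.linalg.qr(rng.standard_normal((n, p)))[0]
V = np.linalg.qr(rng.standard_normal((n, q)))[0]
M = V.T @ U                          # q x p, M[k, i] = <u_i, v_k>

P = V @ M                            # columns of P are proj_V u_i
gram = P.T @ P                       # Gram matrix [<proj_V u_i, proj_V u_j>]
vol = np.sqrt(np.linalg.det(gram))   # volume of the parallelepiped
cos_theta = np.sqrt(np.linalg.det(M.T @ M))
print(vol, cos_theta)                # equal, as Fact 4 asserts
```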
3. Concluding remarks

Statisticians use canonical angles as measures of dependency of one set of random variables on another. The drawback is that finding these angles between two given subspaces is rather involved (see, for example, [2, 4]). Many researchers often use only the first canonical angle for estimation purposes (see, for instance, [5]). Geometrically, however, the first canonical angle is not a good measurement for approximation (in R³, for instance, the first canonical angle between any two 2-dimensional subspaces is 0, even though the two subspaces need not coincide).
The work of Risteski and Trenčevski [11] suggests that if we multiply the cosines of the canonical angles (instead of computing each of them separately), we get the cosine of some value θ that can be considered as the 'geometrical' angle between the two given subspaces. In general, as is shown in [7], the value of cos θ represents the ratio between the volume of the parallelepiped spanned by the projections of the basis vectors of the lower-dimensional subspace on the higher-dimensional subspace and the volume of the parallelepiped spanned by the basis vectors of the lower-dimensional subspace. (Thus, the notion of angles between two subspaces developed here can be thought of as a generalization of the notion of angles in trigonometry taught at schools.) What is particularly nice here is that we have an explicit formula for cos θ in terms of the basis vectors of the two subspaces.
References

[1] T.W. Anderson, An Introduction to Multivariate Statistical Analysis, John Wiley & Sons, Inc., New York, 1958.
[2] Å. Björck and G.H. Golub, "Numerical methods for computing angles between linear subspaces", Math. Comp. 27 (1973), 579–594.
[3] C. Davis and W. Kahan, "The rotation of eigenvectors by a perturbation. III", SIAM J. Numer. Anal. 7 (1970), 1–46.
[4] Z. Drmač, "On principal angles between subspaces of Euclidean space", SIAM J. Matrix Anal. Appl. 22 (2000), 173–194 (electronic).
[5] S. Fedorov, "Angle between subspaces of analytic and antianalytic functions in weighted L² space on a boundary of a multiply connected domain", in Operator Theory, System Theory and Related Topics (Beer-Sheva/Rehovot, 1997), 229–256.
[6] F.R. Gantmacher, The Theory of Matrices, Vol. I, Chelsea Publ. Co., New York (1960), 247–256.
[7] H. Gunawan, O. Neswan and W. Setya-Budhi, "A formula for angles between two subspaces of inner product spaces", submitted.
[8] A.V. Knyazev and M.E. Argentati, "Principal angles between subspaces in an A-based scalar product: algorithms and perturbation estimates", SIAM J. Sci. Comput. 23 (2002), 2008–2040.
[9] J. Miao and A. Ben-Israel, "Product cosines of angles between subspaces", Linear Algebra Appl. 237/238 (1996), 71–81.
[10] V. Rakočević and H.K. Wimmer, "A variational characterization of canonical angles between subspaces", J. Geom. 78 (2003), 122–124.
[11] I.B. Risteski and K.G. Trenčevski, "Principal values and principal subspaces of two subspaces of vector spaces with inner product", Beiträge Algebra Geom. 42 (2001), 289–300.
[12] H.K. Wimmer, "Canonical angles of unitary spaces and perturbations of direct complements", Linear Algebra Appl. 287 (1999), 373–379.

Department of Mathematics, Bandung Institute of Technology, Bandung 40132, Indonesia
E-mail address: hgunawan, [email protected]