LINEAR SPACES

1.12 Orthogonality in a Euclidean space

DEFINITION. In a Euclidean space V, two elements x and y are called orthogonal if their inner product is zero. A subset S of V is called an orthogonal set if (x, y) = 0 for every pair of distinct elements x and y in S. An orthogonal set is called orthonormal if each of its elements has norm 1.

The zero element is orthogonal to every element of V; it is the only element orthogonal to itself. The next theorem shows a relation between orthogonality and independence.

THEOREM 1.10. In a Euclidean space V, every orthogonal set of nonzero elements is independent. In particular, in a finite-dimensional Euclidean space with dim V = n, every orthogonal set consisting of n nonzero elements is a basis for V.

Proof. Let S be an orthogonal set of nonzero elements in V, and suppose some finite linear combination of elements of S is zero, say

Σ_{i=1}^k c_i x_i = 0,

where each x_i ∈ S. Taking the inner product of each member with x_1 and using the fact that (x_1, x_i) = 0 if i ≠ 1, we find that c_1(x_1, x_1) = 0. But (x_1, x_1) ≠ 0 since x_1 ≠ 0, so c_1 = 0. Repeating the argument with x_1 replaced by x_j, we find that each c_j = 0. This proves that S is independent. If dim V = n and if S consists of n elements, Theorem 1.7(b) shows that S is a basis for V.

EXAMPLE. In the real linear space C(0, 2π) with the inner product (f, g) = ∫_0^{2π} f(x)g(x) dx, let S be the set of trigonometric functions {u_0, u_1, u_2, ...} given by

u_0(x) = 1, u_{2n−1}(x) = cos nx, u_{2n}(x) = sin nx, for n = 1, 2, ....

If m ≠ n, we have the orthogonality relations

∫_0^{2π} u_m(x) u_n(x) dx = 0,

so S is an orthogonal set. Since no member of S is the zero element, S is independent. The norm of each element of S is easily calculated. We have

(u_0, u_0) = ∫_0^{2π} dx = 2π and, for n ≥ 1,

(u_{2n−1}, u_{2n−1}) = ∫_0^{2π} cos² nx dx = π, (u_{2n}, u_{2n}) = ∫_0^{2π} sin² nx dx = π.

Therefore, ‖u_0‖ = √(2π) and ‖u_n‖ = √π for n ≥ 1. Dividing each u_n by its norm, we obtain an orthonormal set {φ_0, φ_1, φ_2, ...}, where φ_n = u_n/‖u_n‖. Thus, we have

φ_0(x) = 1/√(2π), φ_{2n−1}(x) = (cos nx)/√π, φ_{2n}(x) = (sin nx)/√π, for n ≥ 1.
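These orthogonality relations are easy to confirm numerically. The following Python sketch (our own illustration; the helper `inner` approximates the integral by the midpoint rule) checks that distinct elements of the set are orthogonal on [0, 2π] and that the norms are √(2π) and √π:

```python
import math

def inner(f, g, a=0.0, b=2 * math.pi, n=20000):
    # Midpoint-rule approximation of (f, g) = the integral of f*g over [a, b].
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) * g(a + (k + 0.5) * h) for k in range(n)) * h

# u0 = 1, u1 = cos x, u2 = sin x, u3 = cos 2x, u4 = sin 2x
u = [lambda x: 1.0,
     lambda x: math.cos(x), lambda x: math.sin(x),
     lambda x: math.cos(2 * x), lambda x: math.sin(2 * x)]

# Distinct elements are orthogonal...
for i in range(5):
    for j in range(5):
        if i != j:
            assert abs(inner(u[i], u[j])) < 1e-8

# ...and the squared norms are 2*pi for u0 and pi for the others.
assert abs(inner(u[0], u[0]) - 2 * math.pi) < 1e-8
assert abs(inner(u[1], u[1]) - math.pi) < 1e-8
```

The midpoint rule is extremely accurate here because the integrands are smooth and periodic over the full interval.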

In Section 1.14 we shall prove that every finite-dimensional Euclidean space has an orthogonal basis. The next theorem shows how to compute the components of an element relative to such a basis.

THEOREM 1.11. Let V be a Euclidean space of dimension n, and assume that S = {e_1, ..., e_n} is an orthogonal basis for V. If an element x is expressed as a linear combination of the basis elements, say

(1.8) x = Σ_{i=1}^n c_i e_i,

then its components relative to the ordered basis (e_1, ..., e_n) are given by the formulas

(1.9) c_j = (x, e_j)/(e_j, e_j) for j = 1, 2, ..., n.

In particular, if S is an orthonormal basis, each c_j is given by

(1.10) c_j = (x, e_j).

Proof. Taking the inner product of each member of (1.8) with e_j, we obtain

(x, e_j) = Σ_{i=1}^n c_i (e_i, e_j) = c_j (e_j, e_j),

since (e_i, e_j) = 0 if i ≠ j. This implies (1.9), and when (e_j, e_j) = 1, we obtain (1.10).

If {e_1, ..., e_n} is an orthonormal basis, Equation (1.8) can be written in the form

(1.11) x = Σ_{i=1}^n (x, e_i) e_i.
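Formula (1.9) can be illustrated in a concrete space. The short Python sketch below (our own example, using an orthogonal but not orthonormal basis of R³) computes the components of a vector and reconstructs the vector from them:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# An orthogonal (not orthonormal) basis of R^3.
e = [(1, 1, 0), (1, -1, 0), (0, 0, 2)]
x = (3.0, 5.0, 4.0)

# Components via (1.9): c_j = (x, e_j) / (e_j, e_j).
c = [dot(x, ej) / dot(ej, ej) for ej in e]

# Reconstructing x from its components recovers the original element.
recon = tuple(sum(cj * ej[i] for cj, ej in zip(c, e)) for i in range(3))
assert recon == x
print(c)  # [4.0, -1.0, 2.0]
```

Note that the denominators (e_j, e_j) are needed precisely because the basis is not normalized.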

The next theorem shows that in a finite-dimensional Euclidean space with an orthonormal basis, the inner product of two elements can be computed in terms of their components.

THEOREM 1.12. Let V be a Euclidean space of dimension n, and assume that {e_1, ..., e_n} is an orthonormal basis for V. Then for every pair of elements x and y in V, we have

(1.12) (x, y) = Σ_{i=1}^n (x, e_i)(e_i, y) (Parseval's formula).

In particular, when x = y, we have

(1.13) ‖x‖² = Σ_{i=1}^n |(x, e_i)|².

Proof. Taking the inner product of both members of Equation (1.11) with y and using the linearity property of the inner product, we obtain (1.12). When x = y, Equation (1.12) reduces to (1.13).

Note: Equation (1.12) is named in honor of M. A. Parseval (circa 1776-1836), who obtained this type of formula in a special function space. Equation (1.13) is a generalization of the theorem of Pythagoras.
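Parseval's formula is easy to verify numerically in R². The sketch below (our own illustration) uses the orthonormal basis obtained by rotating the standard basis through 45° and checks both (1.12) and (1.13):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

s = 1 / math.sqrt(2)
e = [(s, s), (s, -s)]            # an orthonormal basis of R^2
x, y = (2.0, 1.0), (-1.0, 3.0)

# Parseval's formula (1.12): the inner product equals the sum of
# products of corresponding components.
lhs = dot(x, y)
rhs = sum(dot(x, ei) * dot(y, ei) for ei in e)
assert abs(lhs - rhs) < 1e-12

# (1.13): the squared norm is the sum of squared components.
assert abs(dot(x, x) - sum(dot(x, ei) ** 2 for ei in e)) < 1e-12
```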

1.13 Exercises

1. Let x = (x_1, ..., x_n) and y = (y_1, ..., y_n) be arbitrary vectors in V_n. In each case, determine whether (x, y) is an inner product for V_n if (x, y) is defined by the formula given. In case (x, y) is not an inner product, tell which axioms are not satisfied.

2. Suppose we retain the first three axioms for a real inner product (symmetry, linearity, and homogeneity) but replace the fourth axiom by a new axiom (4'): (x, x) = 0 if and only if x = 0. Prove that either (x, x) > 0 for all x ≠ 0 or else (x, x) < 0 for all x ≠ 0.

[Hint: Assume (x, x) > 0 for some x ≠ 0 and (y, y) < 0 for some y ≠ 0. In the space spanned by {x, y}, find an element z ≠ 0 with (z, z) = 0.]

Prove that each of the statements in Exercises 3 through 7 is valid for all elements x and y in a real Euclidean space.

3. (x, y) = 0 if and only if ‖x + y‖ = ‖x − y‖.
4. (x, y) = 0 if and only if ‖x + y‖² = ‖x‖² + ‖y‖².
5. (x, y) = 0 if and only if ‖x + cy‖ ≥ ‖x‖ for all real c.
6. (x + y, x − y) = 0 if and only if ‖x‖ = ‖y‖.
7. If x and y are nonzero elements making an angle θ with each other, then ‖x − y‖² = ‖x‖² + ‖y‖² − 2‖x‖ ‖y‖ cos θ.


8. In the real linear space C(1, e), define an inner product by the equation

(f, g) = ∫_1^e (log x) f(x) g(x) dx.

(a) If f(x) = √x, compute ‖f‖.
(b) Find a linear polynomial g(x) = a + bx that is orthogonal to the constant function f(x) = 1.

9. In the real linear space C(−1, 1), let (f, g) = ∫_{−1}^1 f(t)g(t) dt. Consider the three functions u_1, u_2, u_3 given by

u_1(t) = 1, u_2(t) = t, u_3(t) = 1 + t.

Prove that two of them are orthogonal, two make an angle π/3 with each other, and two make an angle π/6 with each other.

10. In the linear space P_n of all real polynomials of degree ≤ n, define

(f, g) = Σ_{k=0}^n f(k) g(k).

(a) Prove that (f, g) is an inner product for P_n.
(b) Compute (f, g) when f(t) = t and g(t) = at + b.
(c) If f(t) = t, find all linear polynomials g orthogonal to f.

11. In the linear space of all real polynomials, define

(f, g) = ∫_0^∞ e^{−t} f(t) g(t) dt.

(a) Prove that this improper integral converges absolutely for all polynomials f and g.
(b) If x_m(t) = t^m and x_n(t) = t^n, prove that (x_m, x_n) = (m + n)!.
(c) Compute (f, g) when f(t) = (t + 1)² and g(t) = t² + 1.
(d) Find all linear polynomials g(t) = a + bt orthogonal to f(t) = 1 + t.

12. In the linear space of all real polynomials, determine whether or not (f, g) is an inner product if (f, g) is defined by the formula given. In case (f, g) is not an inner product, indicate which axioms are violated. In (c), f′ and g′ denote derivatives.

(a) (f, g) = f(1)g(1).
(b) (f, g) = |∫_0^1 f(t)g(t) dt|.
(c) (f, g) = ∫_0^1 f′(t)g′(t) dt.
(d) (f, g) = (∫_0^1 f(t) dt)(∫_0^1 g(t) dt).

13. Let V consist of all infinite sequences {x_n} of real numbers for which the series Σ x_n² converges. If x = {x_n} and y = {y_n} are two elements of V, define

(x, y) = Σ_{n=1}^∞ x_n y_n.

(a) Prove that this series converges absolutely. [Hint: Use the Cauchy-Schwarz inequality to estimate the sum Σ |x_n y_n|.]
(b) Prove that V is a linear space with (x, y) as an inner product.
(c) Compute (x, y) if x_n = 1/n and y_n = 1/(n + 1) for n ≥ 1.
(d) Compute (x, y) if

14. Let V be the set of all real functions f continuous on [0, +∞) and such that the integral ∫_0^∞ e^{−t} f²(t) dt converges. Define

(f, g) = ∫_0^∞ e^{−t} f(t) g(t) dt.


(a) Prove that the integral for (f, g) converges absolutely for each pair of functions f and g in V. [Hint: Use the Cauchy-Schwarz inequality to estimate the integral ∫_0^∞ e^{−t} |f(t)g(t)| dt.]
(b) Prove that V is a linear space with (f, g) as an inner product.
(c) Compute (f, g) if f(t) = e^{−t} and g(t) = t^n, where n = 0, 1, 2, ....

15. In a complex Euclidean space, prove that the inner product has the following properties for all elements x, y, and z, and all complex a and b, where the bar denotes the complex conjugate.
(a) (ax, by) = ab̄(x, y).
(b) (x, ay + bz) = ā(x, y) + b̄(x, z).

16. Prove that the following identities are valid in every Euclidean space.
(a) ‖x + y‖² = ‖x‖² + ‖y‖² + (x, y) + (y, x).
(b) ‖x + y‖² − ‖x − y‖² = 2(x, y) + 2(y, x).
(c) ‖x + y‖² + ‖x − y‖² = 2‖x‖² + 2‖y‖².

17. Prove that the space of all complex-valued functions continuous on an interval [a, b] becomes a unitary space if we define an inner product by the formula

(f, g) = ∫_a^b w(t) f(t) ḡ(t) dt,

where w is a fixed positive function, continuous on [a, b], and the bar denotes complex conjugation.

1.14 Construction of orthogonal sets. The Gram-Schmidt process

Every finite-dimensional linear space has a finite basis. If the space is Euclidean, we can always construct an orthogonal basis. This result will be deduced as a consequence of a general theorem whose proof shows how to construct orthogonal sets in any Euclidean space, finite or infinite dimensional. The construction is called the Gram-Schmidt orthogonalization process, in honor of J. P. Gram (1850-1916) and E. Schmidt (1845-1921).

THEOREM 1.13. ORTHOGONALIZATION THEOREM. Let x_1, x_2, ..., be a finite or infinite sequence of elements in a Euclidean space V, and let L(x_1, ..., x_k) denote the subspace spanned by the first k of these elements. Then there is a corresponding sequence of elements y_1, y_2, ..., in V which has the following properties for each integer k:
(a) The element y_k is orthogonal to every element in the subspace L(y_1, ..., y_{k−1}).
(b) The subspace spanned by y_1, ..., y_k is the same as that spanned by x_1, ..., x_k:

L(y_1, ..., y_k) = L(x_1, ..., x_k).

(c) The sequence y_1, y_2, ..., is unique, except for scalar factors. That is, if y′_1, y′_2, ..., is another sequence of elements in V satisfying properties (a) and (b) for all k, then for each k there is a scalar c_k such that y′_k = c_k y_k.

Proof. We construct the elements y_1, y_2, ..., by induction. To start the process, we take y_1 = x_1, so that (a) and (b) are satisfied when k = 1. Now assume we have constructed y_1, ..., y_r, so that (a) and (b) are satisfied when k = r. Then we define y_{r+1} by the equation

(1.14) y_{r+1} = x_{r+1} − Σ_{i=1}^r a_i y_i,

where the scalars a_1, ..., a_r are to be determined. For j ≤ r, the inner product of y_{r+1} with y_j is given by

(y_{r+1}, y_j) = (x_{r+1}, y_j) − Σ_{i=1}^r a_i (y_i, y_j) = (x_{r+1}, y_j) − a_j (y_j, y_j),

since (y_i, y_j) = 0 if i ≠ j. If y_j ≠ 0, we can make y_{r+1} orthogonal to y_j by taking

(1.15) a_j = (x_{r+1}, y_j)/(y_j, y_j).

If y_j = 0, then y_{r+1} is orthogonal to y_j for any choice of a_j, and in this case we choose a_j = 0. Thus, the element y_{r+1} is well defined and is orthogonal to each of the earlier elements y_1, ..., y_r. Therefore, it is orthogonal to every element in the subspace L(y_1, ..., y_r). This proves (a) when k = r + 1.

To prove (b) when k = r + 1, we must show that L(y_1, ..., y_{r+1}) = L(x_1, ..., x_{r+1}), given that L(y_1, ..., y_r) = L(x_1, ..., x_r). The first r elements y_1, ..., y_r are in L(x_1, ..., x_r), and hence they are in the larger subspace L(x_1, ..., x_{r+1}). The new element y_{r+1} given by (1.14) is a difference of two elements in L(x_1, ..., x_{r+1}), so it, too, is in L(x_1, ..., x_{r+1}). This proves that

L(y_1, ..., y_{r+1}) ⊆ L(x_1, ..., x_{r+1}).

Equation (1.14) shows that x_{r+1} is the sum of two elements in L(y_1, ..., y_{r+1}), so a similar argument gives the inclusion in the other direction:

L(x_1, ..., x_{r+1}) ⊆ L(y_1, ..., y_{r+1}).

This proves (b) when k = r + 1. Therefore both (a) and (b) are proved by induction on k.

Finally we prove (c) by induction on k. The case k = 1 is trivial. Therefore, assume (c) is true for k = r and consider the element y′_{r+1}. Because of (b), this element is in L(y_1, ..., y_{r+1}), so we can write

y′_{r+1} = Σ_{i=1}^{r+1} c_i y_i = z_r + c_{r+1} y_{r+1},

where z_r ∈ L(y_1, ..., y_r). We wish to prove that z_r = 0. By property (a), both y′_{r+1} and c_{r+1} y_{r+1} are orthogonal to z_r. Therefore, their difference, z_r, is orthogonal to itself, so z_r = 0. This completes the proof of the orthogonalization theorem.


In the foregoing construction, suppose we have y_{r+1} = 0 for some r. Then (1.14) shows that x_{r+1} is a linear combination of y_1, ..., y_r, and hence of x_1, ..., x_r, so the elements x_1, ..., x_{r+1} are dependent. In other words, if the first k elements x_1, ..., x_k are independent, then the corresponding elements y_1, ..., y_k are nonzero. In this case the coefficients a_i in (1.14) are given by (1.15), and the formulas defining y_1, ..., y_k become

(1.16) y_1 = x_1, y_{r+1} = x_{r+1} − Σ_{i=1}^r ((x_{r+1}, y_i)/(y_i, y_i)) y_i for r = 1, 2, ..., k − 1.

These formulas describe the Gram-Schmidt process for constructing an orthogonal set of nonzero elements y_1, ..., y_k which spans the same subspace as a given independent set x_1, ..., x_k. In particular, if x_1, ..., x_k is a basis for a finite-dimensional Euclidean space, then y_1, ..., y_k is an orthogonal basis for the same space. We can also convert this to an orthonormal basis by normalizing each element y_i, that is, by dividing it by its norm. Therefore, as a corollary of Theorem 1.13 we have the following.

THEOREM 1.14. Every finite-dimensional Euclidean space has an orthonormal basis.

If x and y are elements in a Euclidean space, with y ≠ 0, the element

((x, y)/(y, y)) y

is called the projection of x along y. In the Gram-Schmidt process, we construct the element y_{r+1} by subtracting from x_{r+1} the projection of x_{r+1} along each of the earlier elements y_1, ..., y_r. Figure 1.1 illustrates the construction geometrically in the vector space V_3.

FIGURE 1.1 The Gram-Schmidt process in V_3. An orthogonal set {y_1, y_2} is constructed from a given independent set {x_1, x_2}.
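Formulas (1.16) translate directly into code. The Python sketch below (our own illustration) orthogonalizes a finite list of vectors in R^n, skipping the projection onto any y_i that is zero, exactly as the proof prescribes:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(xs):
    """Orthogonalize xs using (1.16); a zero y_i is skipped (a_j = 0)."""
    ys = []
    for x in xs:
        y = list(x)
        for yi in ys:
            denom = dot(yi, yi)
            if denom > 0:
                a = dot(x, yi) / denom   # a_j = (x_{r+1}, y_j) / (y_j, y_j)
                y = [yc - a * yic for yc, yic in zip(y, yi)]
        ys.append(y)
    return ys

ys = gram_schmidt([(1, 1, 1), (1, 0, 1), (0, 1, 2)])
# Every distinct pair of the resulting vectors is orthogonal.
for i in range(3):
    for j in range(i + 1, 3):
        assert abs(dot(ys[i], ys[j])) < 1e-12
print(ys)  # [1, 1, 1], then (1/3, -2/3, 1/3), then (-1, 0, 1), up to rounding
```

Because the y_i are mutually orthogonal, projecting the original x_{r+1} (as in (1.16)) gives the same result as projecting the running remainder.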

EXAMPLE 1. In V_4, find an orthonormal basis for the subspace spanned by the three vectors x_1 = (1, −1, 1, −1), x_2 = (5, 1, 1, 1), and x_3 = (−3, −3, 1, −3).

Solution. Applying the Gram-Schmidt process, we find

y_1 = x_1 = (1, −1, 1, −1),
y_2 = x_2 − ((x_2, y_1)/(y_1, y_1)) y_1 = x_2 − y_1 = (4, 2, 0, 2),
y_3 = x_3 − ((x_3, y_1)/(y_1, y_1)) y_1 − ((x_3, y_2)/(y_2, y_2)) y_2 = x_3 − y_1 + y_2 = (0, 0, 0, 0).

Since y_3 = 0, the three vectors x_1, x_2, x_3 must be dependent. But since y_1 and y_2 are nonzero, the vectors x_1, x_2, x_3 span a subspace of dimension 2. The set {y_1, y_2} is an orthogonal basis for this subspace. Dividing each of y_1 and y_2 by its norm, we get an orthonormal basis consisting of the two vectors

y_1/‖y_1‖ = ½(1, −1, 1, −1) and y_2/‖y_2‖ = (1/√6)(2, 1, 0, 1).

EXAMPLE 2. The Legendre polynomials. In the linear space of all polynomials, with the inner product (x, y) = ∫_{−1}^1 x(t) y(t) dt, consider the infinite sequence x_0, x_1, x_2, ..., where x_n(t) = t^n. When the orthogonalization theorem is applied to this sequence, it yields another sequence of polynomials y_0, y_1, y_2, ..., first encountered by the French mathematician A. M. Legendre (1752-1833) in his work on potential theory. The first few polynomials are easily calculated by the Gram-Schmidt process. First of all, we have y_0(t) = x_0(t) = 1. Since

(y_0, y_0) = ∫_{−1}^1 dt = 2 and (x_1, y_0) = ∫_{−1}^1 t dt = 0,

we find that

y_1(t) = x_1(t) − ((x_1, y_0)/(y_0, y_0)) y_0(t) = x_1(t) = t.

Next, we use the relations

(x_2, y_0) = ∫_{−1}^1 t² dt = 2/3, (x_2, y_1) = ∫_{−1}^1 t³ dt = 0, (y_1, y_1) = ∫_{−1}^1 t² dt = 2/3,

to obtain

y_2(t) = x_2(t) − ((x_2, y_0)/(y_0, y_0)) y_0(t) − ((x_2, y_1)/(y_1, y_1)) y_1(t) = t² − 1/3.

Similarly, we find that

y_3(t) = t³ − (3/5)t, y_4(t) = t⁴ − (6/7)t² + 3/35, y_5(t) = t⁵ − (10/9)t³ + (5/21)t.

We shall encounter these polynomials again in Chapter 6 in our further study of differential equations, and we shall prove that

y_n(t) = (n!/(2n)!) (d^n/dt^n)(t² − 1)^n.

The polynomials P_n given by

P_n(t) = ((2n)!/(2^n (n!)²)) y_n(t) = (1/(2^n n!)) (d^n/dt^n)(t² − 1)^n

are known as the Legendre polynomials. The polynomials in the corresponding orthonormal sequence φ_0, φ_1, φ_2, ..., given by φ_n = y_n/‖y_n‖, are called the normalized Legendre polynomials. From the formulas for y_0, ..., y_5 given above, we find that

φ_0(t) = 1/√2, φ_1(t) = √(3/2) t, φ_2(t) = ½√(5/2) (3t² − 1), φ_3(t) = ½√(7/2) (5t³ − 3t).
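The first few Legendre polynomials can be recomputed exactly in Python, representing each polynomial by its coefficient list and carrying out the integrals with rational arithmetic (the helper names are our own):

```python
from fractions import Fraction as F

def inner(p, q):
    # (p, q) = integral of p(t)q(t) over [-1, 1], exactly;
    # p[k] is the coefficient of t^k.
    prod = [F(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            prod[i + j] += F(a) * F(b)
    # The integral of t^k over [-1, 1] is 2/(k+1) for even k and 0 for odd k.
    return sum(c * F(2, k + 1) for k, c in enumerate(prod) if k % 2 == 0)

def sub_scaled(p, a, q):
    # p - a*q, padding the shorter coefficient list with zeros.
    n = max(len(p), len(q))
    p = list(p) + [F(0)] * (n - len(p))
    q = list(q) + [F(0)] * (n - len(q))
    return [pc - a * qc for pc, qc in zip(p, q)]

# Gram-Schmidt applied to 1, t, t^2, t^3 (coefficients, lowest degree first).
xs = [[F(1)], [F(0), F(1)], [F(0), F(0), F(1)], [F(0), F(0), F(0), F(1)]]
ys = []
for x in xs:
    y = list(x)
    for yi in ys:
        y = sub_scaled(y, inner(x, yi) / inner(yi, yi), yi)
    ys.append(y)

assert ys[2] == [F(-1, 3), F(0), F(1)]        # y2(t) = t^2 - 1/3
assert ys[3] == [F(0), F(-3, 5), F(0), F(1)]  # y3(t) = t^3 - (3/5) t
```

Exact rationals avoid the rounding that makes floating-point Gram-Schmidt lose orthogonality for higher degrees.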

1.15 Orthogonal complements. Projections

Let V be a Euclidean space and let S be a finite-dimensional subspace. We wish to consider the following type of approximation problem: Given an element x in V, to determine an element in S whose distance from x is as small as possible. The distance between two elements x and y is defined to be the norm ‖x − y‖.

Before discussing this problem in its general form, we consider a special case, illustrated in Figure 1.2. Here V is the vector space V_3 and S is a two-dimensional subspace, a plane through the origin. Given x in V, the problem is to find, in the plane S, that point s nearest to x.

If x ∈ S, then clearly s = x is the solution. If x is not in S, then the nearest point s is obtained by dropping a perpendicular from x to the plane. This simple example suggests an approach to the general approximation problem and motivates the discussion that follows.

DEFINITION. Let S be a subset of a Euclidean space V. An element in V is said to be orthogonal to S if it is orthogonal to every element of S. The set of all elements orthogonal to S is denoted by S⊥ and is called "S perpendicular."

It is a simple exercise to verify that S⊥ is a subspace of V, whether or not S itself is one. In case S is a subspace, then S⊥ is called the orthogonal complement of S.

If S is a plane through the origin, as shown in Figure 1.2, then S⊥ is a line through the origin perpendicular to this plane. This example also gives a geometric interpretation for the next theorem.


FIGURE 1.2 Geometric interpretation of the orthogonal decomposition theorem in V_3.

THEOREM 1.15. ORTHOGONAL DECOMPOSITION THEOREM. Let V be a Euclidean space and let S be a finite-dimensional subspace of V. Then every element x in V can be represented uniquely as a sum of two elements, one in S and one in S⊥. That is, we have

(1.17) x = s + s⊥, where s ∈ S and s⊥ ∈ S⊥.

Moreover, the norm of x is given by the Pythagorean formula

(1.18) ‖x‖² = ‖s‖² + ‖s⊥‖².

Proof. First we prove that an orthogonal decomposition (1.17) actually exists. Since S is finite-dimensional, it has a finite orthonormal basis, say {e_1, ..., e_n}. Given x, define the elements s and s⊥ as follows:

(1.19) s = Σ_{i=1}^n (x, e_i) e_i, s⊥ = x − s.

Note that each term (x, e_i) e_i is the projection of x along e_i. The element s is the sum of the projections of x along each basis element. Since s is a linear combination of the basis elements, s lies in S. The definition of s⊥ shows that Equation (1.17) holds. To prove that s⊥ lies in S⊥, we consider the inner product of s⊥ and any basis element e_j. We have

(s⊥, e_j) = (x − s, e_j) = (x, e_j) − (s, e_j).

But from (1.19), we find that (s, e_j) = (x, e_j), so s⊥ is orthogonal to e_j. Therefore s⊥ is orthogonal to every element in S, which means that s⊥ ∈ S⊥.

Next we prove that the orthogonal decomposition (1.17) is unique. Suppose that x has two such representations, say

(1.20) x = s + s⊥ and x = t + t⊥,

where s and t are in S, and s⊥ and t⊥ are in S⊥. We wish to prove that s = t and s⊥ = t⊥. From (1.20), we have s − t = t⊥ − s⊥, so we need only prove that s − t = 0. But s − t ∈ S and t⊥ − s⊥ ∈ S⊥, so s − t is both orthogonal to t⊥ − s⊥ and equal to t⊥ − s⊥. Since the zero element is the only element orthogonal to itself, we must have s − t = 0. This shows that the decomposition is unique.

Finally, we prove that the norm of x is given by the Pythagorean formula. We have

‖x‖² = (x, x) = (s + s⊥, s + s⊥) = (s, s) + (s⊥, s⊥),

the remaining terms being zero since s and s⊥ are orthogonal. This proves (1.18).
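The decomposition (1.19) is simple to carry out in coordinates. The sketch below (our own example) takes S to be the xy-plane in R³, computes s and s⊥, and checks the Pythagorean formula (1.18):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# S = the xy-plane in R^3, with orthonormal basis {e1, e2}.
e = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
x = (3.0, 4.0, 12.0)

# (1.19): s = sum of (x, e_i) e_i, and s_perp = x - s.
s = tuple(sum(dot(x, ei) * ei[k] for ei in e) for k in range(3))
s_perp = tuple(xk - sk for xk, sk in zip(x, s))

assert all(abs(dot(s_perp, ei)) < 1e-12 for ei in e)  # s_perp lies in S-perp
pythag = dot(s, s) + dot(s_perp, s_perp)              # (1.18)
assert abs(dot(x, x) - pythag) < 1e-9
print(s, s_perp)  # (3.0, 4.0, 0.0) (0.0, 0.0, 12.0)
```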

DEFINITION. Let S be a finite-dimensional subspace of a Euclidean space V, and let {e_1, ..., e_n} be an orthonormal basis for S. If x ∈ V, the element s defined by the equation

s = Σ_{i=1}^n (x, e_i) e_i

is called the projection of x on the subspace S.

We prove next that the projection of x on S is the solution to the approximation problem stated at the beginning of this section.

1.16 Best approximation of elements in a Euclidean space by elements in a finite-dimensional subspace

THEOREM 1.16. APPROXIMATION THEOREM. Let S be a finite-dimensional subspace of a Euclidean space V, and let x be any element of V. Then the projection of x on S is nearer to x than any other element of S. That is, if s is the projection of x on S, we have

‖x − s‖ ≤ ‖x − t‖

for all t in S; the equality sign holds if and only if t = s.

Proof. By Theorem 1.15 we can write x = s + s⊥, where s ∈ S and s⊥ ∈ S⊥. Then, for any t in S, we have

x − t = (x − s) + (s − t).

Since s − t ∈ S and x − s = s⊥ ∈ S⊥, this is an orthogonal decomposition of x − t, so its norm is given by the Pythagorean formula

‖x − t‖² = ‖x − s‖² + ‖s − t‖².

But ‖s − t‖² ≥ 0, so we have ‖x − t‖² ≥ ‖x − s‖², with equality holding if and only if s = t. This completes the proof.


EXAMPLE 1. Approximation of continuous functions on [0, 2π] by trigonometric polynomials. Let V = C(0, 2π), the linear space of all real functions continuous on the interval [0, 2π], and define an inner product by the equation (f, g) = ∫_0^{2π} f(x) g(x) dx. In Section 1.12 we exhibited an orthonormal set of trigonometric functions φ_0, φ_1, φ_2, ..., where

(1.21) φ_0(x) = 1/√(2π), φ_{2k−1}(x) = (cos kx)/√π, φ_{2k}(x) = (sin kx)/√π, for k ≥ 1.

The 2n + 1 elements φ_0, φ_1, ..., φ_{2n} span a subspace S of dimension 2n + 1. The elements of S are called trigonometric polynomials.

If f ∈ C(0, 2π), let f_n denote the projection of f on the subspace S. Then we have

(1.22) f_n = Σ_{k=0}^{2n} (f, φ_k) φ_k, where (f, φ_k) = ∫_0^{2π} f(x) φ_k(x) dx.

The numbers (f, φ_k) are called Fourier coefficients of f. Using the formulas in (1.21), we can rewrite (1.22) in the form

(1.23) f_n(x) = ½a_0 + Σ_{k=1}^n (a_k cos kx + b_k sin kx),

where

a_k = (1/π) ∫_0^{2π} f(x) cos kx dx, b_k = (1/π) ∫_0^{2π} f(x) sin kx dx,

for k = 0, 1, 2, ..., n. The approximation theorem tells us that the trigonometric polynomial in (1.23) approximates f better than any other trigonometric polynomial in S, in the sense that the norm ‖f − f_n‖ is as small as possible.
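The coefficient formulas following (1.23) can be checked numerically. The Python sketch below (our own illustration, with f(x) = x on [0, 2π] and a midpoint-rule helper `inner`) computes the Fourier coefficients for n = 2 and verifies that the projection beats another trigonometric polynomial from the same subspace:

```python
import math

def inner(f, g, n=4000):
    # (f, g) = integral of f*g over [0, 2*pi], midpoint rule.
    h = 2 * math.pi / n
    return sum(f((k + 0.5) * h) * g((k + 0.5) * h) for k in range(n)) * h

f = lambda x: x  # approximate f(x) = x by a trigonometric polynomial

# Fourier coefficients from the formulas following (1.23), for n = 2.
a = [inner(f, lambda x, k=k: math.cos(k * x)) / math.pi for k in range(3)]
b = [inner(f, lambda x, k=k: math.sin(k * x)) / math.pi for k in range(3)]

def f2(x):  # the projection of f on S
    return a[0] / 2 + sum(a[k] * math.cos(k * x) + b[k] * math.sin(k * x)
                          for k in (1, 2))

def other(x):  # some other trigonometric polynomial in S
    return math.pi - math.sin(x)

def err(g):
    return math.sqrt(inner(lambda x: f(x) - g(x), lambda x: f(x) - g(x)))

assert err(f2) < err(other)  # the projection is the best approximation
print(a[0] / 2, b[1], b[2])  # approximately pi, -2.0, -1.0
```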

EXAMPLE 2. Approximation of continuous functions on [−1, 1] by polynomials of degree ≤ n. Let V = C(−1, 1), the space of real continuous functions on [−1, 1], and let (f, g) = ∫_{−1}^1 f(t) g(t) dt. The n + 1 normalized Legendre polynomials φ_0, φ_1, ..., φ_n, introduced in Section 1.14, span a subspace S of dimension n + 1 consisting of all polynomials of degree ≤ n. If f ∈ C(−1, 1), let f_n denote the projection of f on S. Then we have

f_n = Σ_{k=0}^n (f, φ_k) φ_k, where (f, φ_k) = ∫_{−1}^1 f(t) φ_k(t) dt.

This is the polynomial of degree ≤ n for which the norm ‖f − f_n‖ is smallest. For example, when f(t) = sin πt, the coefficients (f, φ_k) are given by

(f, φ_k) = ∫_{−1}^1 sin πt φ_k(t) dt.

In particular, we have (f, φ_0) = 0 and

(f, φ_1) = √(3/2) ∫_{−1}^1 t sin πt dt = √(3/2) · (2/π).

Therefore the linear polynomial f_1(t) which is nearest to sin πt on [−1, 1] is

f_1(t) = (f, φ_1) φ_1(t) = (3/π) t.

Since (f, φ_2) = 0, this is also the nearest quadratic approximation.
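This computation is easy to reproduce numerically. The sketch below (our own check, using a midpoint-rule helper `inner`) confirms the coefficient (f, φ_1) = √(3/2)·(2/π) and that the resulting linear polynomial is (3/π)t:

```python
import math

def inner(f, g, n=20000):
    # (f, g) = integral of f*g over [-1, 1], midpoint rule.
    h = 2.0 / n
    return sum(f(-1 + (k + 0.5) * h) * g(-1 + (k + 0.5) * h)
               for k in range(n)) * h

f = lambda t: math.sin(math.pi * t)
phi1 = lambda t: math.sqrt(1.5) * t   # the normalized Legendre polynomial phi_1

c1 = inner(f, phi1)
assert abs(c1 - math.sqrt(1.5) * 2 / math.pi) < 1e-6  # (f, phi_1)

# c1 * phi1(t) simplifies to (3/pi) t, the best linear approximation.
best = lambda t: 3 * t / math.pi
assert abs(c1 * phi1(0.5) - best(0.5)) < 1e-6
```

By symmetry, (f, φ_0) and (f, φ_2) vanish, since sin πt is odd while φ_0 and φ_2 are even.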

1.17 Exercises

1. In each case, find an orthonormal basis for the subspace of V_3 spanned by the given vectors.

2. In each case, find an orthonormal basis for the subspace of V_4 spanned by the given vectors.

3. In the real linear space C(0, π), with inner product (x, y) = ∫_0^π x(t) y(t) dt, let x_n(t) = cos nt for n = 0, 1, 2, .... Prove that the functions y_0, y_1, y_2, ..., given by

y_0(t) = 1/√π and y_n(t) = √(2/π) cos nt for n ≥ 1,

form an orthonormal set spanning the same subspace as x_0, x_1, x_2, ....

4. In the linear space of all real polynomials, with inner product (x, y) = ∫_0^1 x(t) y(t) dt, let x_n(t) = t^n for n = 0, 1, 2, .... Prove that the functions

y_0(t) = 1, y_1(t) = √3 (2t − 1), y_2(t) = √5 (6t² − 6t + 1)

form an orthonormal set spanning the same subspace as {x_0, x_1, x_2}.

5. Let V be the linear space of all real functions f continuous on [0, +∞) and such that the integral ∫_0^∞ e^{−t} f²(t) dt converges. Define (f, g) = ∫_0^∞ e^{−t} f(t) g(t) dt, and let y_0, y_1, y_2, ..., be the set obtained by applying the Gram-Schmidt process to x_0, x_1, x_2, ..., where x_n(t) = t^n for n ≥ 0. Prove that

y_1(t) = t − 1, y_2(t) = t² − 4t + 2, y_3(t) = t³ − 9t² + 18t − 6.

6. In the real linear space C(1, 3) with inner product (f, g) = ∫_1^3 f(x) g(x) dx, let f(x) = 1/x and show that the constant polynomial g nearest to f is g = ½ log 3. Compute ‖g − f‖² for this g.

7. In the real linear space C(0, 2) with inner product (f, g) = ∫_0^2 f(x) g(x) dx, let f(x) = e^x and show that the constant polynomial g nearest to f is g = ½(e² − 1). Compute ‖g − f‖² for this g.

8. In the real linear space C(−1, 1) with inner product (f, g) = ∫_{−1}^1 f(x) g(x) dx, let f(x) = e^x and find the linear polynomial g nearest to f. Compute ‖g − f‖² for this g.

9. In the real linear space C(0, 2π) with inner product (f, g) = ∫_0^{2π} f(x) g(x) dx, let f(x) = x. In the subspace spanned by u_0(x) = 1, u_1(x) = cos x, u_2(x) = sin x, find the trigonometric polynomial nearest to f.

10. In the linear space V of Exercise 5, let f(x) = e^{−x} and find the linear polynomial that is nearest to f.