4.6.1 Basic Formulas and Theorems
The basic formulas and theorems derived in this section rely to a great extent on the theory in Sec. 4.5. An expansion of the form of (4.6.1) can be expressed in many equivalent ways. If we set a_k = r_k sin v_k, b_k = r_k cos v_k, then using the addition theorem for the sine function we can write

    f(t) = \sum_{k=0}^{\infty} (a_k \cos k\omega t + b_k \sin k\omega t),    (4.6.2)

where a_k, b_k are real constants. Another form, which is often the most convenient, can be found with the help of Euler's formulas,

    \cos x = \frac{1}{2}(e^{ix} + e^{-ix}),  \qquad  \sin x = \frac{1}{2i}(e^{ix} - e^{-ix}),  \qquad  i = \sqrt{-1}.

Here and in what follows i denotes the imaginary unit. Then one gets

    f(t) = \sum_{k=-\infty}^{\infty} c_k e^{ik\omega t},    (4.6.3)

where

    c_0 = a_0,  \qquad  c_k = \frac{1}{2}(a_k - ib_k),  \qquad  c_{-k} = \frac{1}{2}(a_k + ib_k),  \quad  k \ge 1.    (4.6.4)

In the rest of this chapter we shall use the term Fourier series to denote an expansion of the form of (4.6.1) or (4.6.3). We shall call the partial sums of these series trigonometric polynomials. Sometimes the term spectral analysis is used to describe the above methods.
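The relations (4.6.4) are easy to check numerically. The following Python sketch (the particular coefficients a_k, b_k are arbitrary illustrative values, not from the text) builds the c_k from the a_k, b_k and verifies that the real form (4.6.2) and the complex form (4.6.3) agree:

```python
import cmath
import math

def complex_coeffs(a, b):
    """Map real coefficients a_k, b_k to complex c_k via (4.6.4):
    c_0 = a_0, c_k = (a_k - i b_k)/2, c_{-k} = (a_k + i b_k)/2."""
    c = {0: complex(a[0])}
    for k in range(1, len(a)):
        c[k] = (a[k] - 1j * b[k]) / 2
        c[-k] = (a[k] + 1j * b[k]) / 2
    return c

# Arbitrary example coefficients (hypothetical, for illustration only).
a = [0.5, 1.0, -0.3]    # a_0, a_1, a_2
b = [0.0, 0.25, 0.8]    # b_0 unused, b_1, b_2
c = complex_coeffs(a, b)
omega = 1.0

def f_real(t):
    # Real form (4.6.2): sum of a_k cos(k w t) + b_k sin(k w t).
    return sum(a[k] * math.cos(k * omega * t) + b[k] * math.sin(k * omega * t)
               for k in range(len(a)))

def f_complex(t):
    # Complex form (4.6.3): sum of c_k exp(i k w t).
    return sum(ck * cmath.exp(1j * k * omega * t) for k, ck in c.items())

# The two forms agree; the complex sum has negligible imaginary part.
for t in (0.0, 0.7, 2.4, -1.1):
    assert abs(f_complex(t) - f_real(t)) < 1e-12
```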
We shall study functions with period 2π. These are fully defined by their values on the fundamental interval [−π, π]. If a function of t has period L, then the substitution x = 2πt/L transforms the function to a function of x with period 2π. We assume that the
function can have complex values, since the complex exponential function is convenient for manipulations.
In the continuous case the inner product of two complex-valued functions f and g of period 2π is defined in the following way (the bar over g indicates complex conjugation):
    (f, g) = \int_{-\pi}^{\pi} f(x)\,\overline{g(x)}\,dx.

(It makes no difference what interval one uses, as long as it has length 2π—the value of the inner product is unchanged.) As usual the norm of the function f is defined by \|f\| = (f, f)^{1/2}. Notice that (g, f) = \overline{(f, g)}.
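For concreteness, the inner product and its conjugate symmetry can be evaluated by quadrature. This Python sketch uses the trapezoidal rule and two arbitrary trigonometric test functions (both chosen for illustration; they are not from the text):

```python
import cmath
import math

def inner(f, g, n=20000):
    """Approximate (f, g) = integral over [-pi, pi] of f(x) * conj(g(x)) dx
    by the periodic trapezoidal rule (endpoints coincide by periodicity)."""
    h = 2 * math.pi / n
    s = 0.0 + 0.0j
    for k in range(n):
        x = -math.pi + k * h
        s += f(x) * g(x).conjugate()
    return s * h

# Two arbitrary 2*pi-periodic test functions (illustrative choices).
f = lambda x: cmath.exp(1j * x) + 1.0
g = lambda x: 1.0 + 1j * cmath.exp(1j * x)

fg = inner(f, g)
gf = inner(g, f)

# Closed form: (f, g) = 2*pi*(1 - i); and (g, f) = conj((f, g)).
assert abs(fg - 2 * math.pi * (1 - 1j)) < 1e-8
assert abs(gf - fg.conjugate()) < 1e-8
```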
Theorem 4.6.1.
The following orthogonality relations hold for the functions

    \varphi_j(x) = e^{ijx},  \qquad  j = 0, \pm 1, \pm 2, \ldots:

    (\varphi_j, \varphi_k) = \begin{cases} 2\pi & \text{if } j = k, \\ 0 & \text{if } j \ne k. \end{cases}
484 Chapter 4. Interpolation and Approximation
Proof. For j \ne k,

    (\varphi_j, \varphi_k) = \int_{-\pi}^{\pi} e^{ijx} e^{-ikx}\,dx = \left[ \frac{e^{i(j-k)x}}{i(j-k)} \right]_{-\pi}^{\pi} = \frac{(-1)^{j-k} - (-1)^{j-k}}{i(j-k)} = 0,

whereby orthogonality is proved. For j = k,

    (\varphi_k, \varphi_k) = \int_{-\pi}^{\pi} e^{ikx} e^{-ikx}\,dx = \int_{-\pi}^{\pi} dx = 2\pi.
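These orthogonality relations can also be confirmed numerically; a short Python sketch (quadrature rule and index range are arbitrary choices, not prescribed by the text):

```python
import cmath
import math

def phi(j, x):
    """The basis function phi_j(x) = exp(i j x) of Theorem 4.6.1."""
    return cmath.exp(1j * j * x)

def inner(f, g, n=4096):
    """(f, g) = integral over [-pi, pi] of f * conj(g), periodic trapezoidal rule."""
    h = 2 * math.pi / n
    return sum(f(-math.pi + k * h) * g(-math.pi + k * h).conjugate()
               for k in range(n)) * h

# (phi_j, phi_k) is 2*pi when j = k and 0 otherwise.
for j in range(-3, 4):
    for k in range(-3, 4):
        val = inner(lambda x: phi(j, x), lambda x: phi(k, x))
        expect = 2 * math.pi if j == k else 0.0
        assert abs(val - expect) < 1e-9
```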
If one knows that the function f(x) has an expansion of the form

    f(x) = \sum_{j=-\infty}^{\infty} c_j \varphi_j(x),

then from Theorem 4.6.1 it follows formally that (f, \varphi_j) = 2\pi c_j, since (\varphi_j, \varphi_k) = 0 for k \ne j. Hence

    c_j = \frac{1}{2\pi}(f, \varphi_j) = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(x) e^{-ijx}\,dx.    (4.6.7)
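As a sanity check of this coefficient formula, one can evaluate the integral by quadrature for a function whose coefficients are known in closed form; a Python sketch (the test function f(x) = x and the trapezoidal rule are illustrative choices):

```python
import cmath
import math

def fourier_coeff(f, j, n=20000):
    """c_j = (1/(2*pi)) * integral over [-pi, pi] of f(x) exp(-i j x) dx,
    approximated by the composite trapezoidal rule."""
    h = 2 * math.pi / n
    s = 0.5 * (f(-math.pi) * cmath.exp(1j * j * math.pi)
               + f(math.pi) * cmath.exp(-1j * j * math.pi))
    for k in range(1, n):
        x = -math.pi + k * h
        s += f(x) * cmath.exp(-1j * j * x)
    return s * h / (2 * math.pi)

# For f(x) = x, integration by parts gives c_j = i(-1)^j / j (j != 0), c_0 = 0.
f = lambda x: x
for j in range(1, 6):
    exact = 1j * (-1) ** j / j
    assert abs(fourier_coeff(f, j) - exact) < 1e-5
assert abs(fourier_coeff(f, 0)) < 1e-10
```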
These coefficients are called Fourier coefficients; see the more general case in Theorem 4.5.13. In accordance with (4.6.4) set

    a_j = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \cos jx\,dx,  \qquad  b_j = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \sin jx\,dx.    (4.6.8)

(Notice that the factors preceding the integral are different in the expressions for c_j and for a_j, b_j, respectively.) From a generalization of Theorem 4.5.13, we also know that the error

    \Big\| f - \sum_{j=-n}^{n} k_j \varphi_j \Big\|,  \qquad  n < \infty,

4.6. Fourier Methods 485

becomes as small as possible if we choose k_j = c_j, -n \le j \le n. Theorem 4.5.14 and its
corollary, Parseval’s identity,
    2\pi \sum_{j=-\infty}^{\infty} |c_j|^2 = \int_{-\pi}^{\pi} |f(x)|^2\,dx,    (4.6.9)
are of great importance in many applications of Fourier analysis. The integral in (4.6.9) can
be interpreted as the “energy” of the function f(x).

Theorem 4.6.2 (Fourier Analysis, Continuous Case).
Assume that the function f is defined at every point in the interval [-\pi, \pi] and that f(x) is finite and piecewise continuous. Associate with f a Fourier series in the following two ways:

    f(x) \sim a_0 + \sum_{j=1}^{\infty} (a_j \cos jx + b_j \sin jx)  \quad\text{and}\quad  f(x) \sim \sum_{j=-\infty}^{\infty} c_j e^{ijx},

where the coefficients a_j, b_j, and c_j are defined by (4.6.8) in the first case and (4.6.7) in the second case. Then the partial sums of the above expansions give the best possible approximations to f(x) by trigonometric polynomials, in the least squares sense.
If f is of bounded variation and has at most a finite number of discontinuities, then the series is everywhere convergent to f(x). At a point x = a of discontinuity, f(a) equals the mean f(a) = \frac{1}{2}(f(a+) + f(a-)).

Proof. The proof of the convergence results is outside the scope of this book (see, e.g., Courant and Hilbert [83]). The rest of the assertions follow from previously made calculations in Theorem 4.6.1 and the comments following; see also the proof of Theorem 4.5.13.

The more regular a function is, the faster its Fourier series converges. The following useful result is relatively easy to prove using (4.6.7) and integrating by parts k + 1 times (cf. (3.2.8)).
Theorem 4.6.3.
If f and its derivatives up to and including order k are periodic and everywhere continuous, and if f^{(k+1)} is piecewise continuous, then

    c_j = O\big(|j|^{-(k+1)}\big),  \qquad  j \to \pm\infty.    (4.6.10)
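This decay rate can be observed numerically. The sketch below (test functions and quadrature rule are illustrative choices) compares f(x) = x, whose periodic extension has jumps so that |c_j| decays only as 1/j, with f(x) = |x|, whose extension is continuous with piecewise continuous derivative (k = 0), giving decay 1/j^2:

```python
import cmath
import math

def cj(f, j, n=40000):
    """c_j = (1/(2*pi)) * integral over [-pi, pi] of f(x) exp(-i j x) dx
    (composite trapezoidal rule)."""
    h = 2 * math.pi / n
    s = 0.5 * (f(-math.pi) * cmath.exp(1j * j * math.pi)
               + f(math.pi) * cmath.exp(-1j * j * math.pi))
    for k in range(1, n):
        x = -math.pi + k * h
        s += f(x) * cmath.exp(-1j * j * x)
    return s * h / (2 * math.pi)

# Closed forms: for f(x) = x, |c_j| = 1/j; for f(x) = |x| and odd j,
# |c_j| = 2/(pi j^2). The scaled magnitudes should be roughly constant.
for j in (5, 11, 21):                    # odd j, where both c_j are nonzero
    assert abs(abs(cj(lambda x: x, j)) * j - 1.0) < 1e-3
    assert abs(abs(cj(abs, j)) * j * j - 2 / math.pi) < 1e-3
```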
Sometimes it is convenient to separate a function f defined on [-\pi, \pi] into an even
and an odd part. We set f(x) = g(x) + h(x), where

    g(x) = \frac{1}{2}(f(x) + f(-x)),  \qquad  h(x) = \frac{1}{2}(f(x) - f(-x))  \quad  \forall x.    (4.6.11)

Then g(x) = g(-x) and h(x) = -h(-x). For both g(x) and h(x) it suffices to give the
function only on [0, π]. For the even function g(x) the sine part of the Fourier series drops
out and we have
    g(x) = a_0 + \sum_{j=1}^{\infty} a_j \cos jx,  \qquad  a_j = \frac{2}{\pi} \int_0^{\pi} g(x) \cos jx\,dx  \quad  (j \ge 1).    (4.6.12)
For h(x) the cosine part drops out and the Fourier series becomes a sine series:
    h(x) = \sum_{j=1}^{\infty} b_j \sin jx,  \qquad  b_j = \frac{2}{\pi} \int_0^{\pi} h(x) \sin jx\,dx.    (4.6.13)
The proof is left as an exercise to the reader (use the formulas for the coefficients given in (4.6.8)).
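The even/odd splitting is easy to exercise numerically. This Python sketch takes f(x) = e^x as an arbitrary test function (so g = cosh x and h = sinh x) and checks that the sine coefficients of the even part and the cosine coefficients of the odd part vanish; the trapezoidal rule here is an illustrative choice:

```python
import math

def trap(fun, lo, hi, n=20000):
    """Composite trapezoidal rule for a real integrand on [lo, hi]."""
    step = (hi - lo) / n
    s = 0.5 * (fun(lo) + fun(hi)) + sum(fun(lo + k * step) for k in range(1, n))
    return s * step

f = math.exp                           # arbitrary sample function on [-pi, pi]
g = lambda x: 0.5 * (f(x) + f(-x))     # even part per (4.6.11) (= cosh x here)
h = lambda x: 0.5 * (f(x) - f(-x))     # odd part per (4.6.11)  (= sinh x here)

# Sine coefficients of the even part and cosine coefficients of the odd
# part vanish, as claimed for (4.6.12)-(4.6.13).
for j in range(1, 4):
    bj_even = trap(lambda x: g(x) * math.sin(j * x), -math.pi, math.pi) / math.pi
    aj_odd = trap(lambda x: h(x) * math.cos(j * x), -math.pi, math.pi) / math.pi
    assert abs(bj_even) < 1e-9
    assert abs(aj_odd) < 1e-9
```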
Example 4.6.1.
Consider the rectangular wave function obtained by periodic continuation outside the interval (-\pi, \pi) of

    f(x) = \begin{cases} -1/2, & -\pi < x < 0, \\ \phantom{-}1/2, & 0 < x < \pi; \end{cases}

see Figure 4.6.1. The function is odd, so a_j = 0 for all j, and

    b_j = \frac{1}{\pi} \int_0^{\pi} \sin jx\,dx = \frac{1}{j\pi}(1 - \cos j\pi) = \begin{cases} 0, & j \text{ even}, \\ 2/(j\pi), & j \text{ odd}. \end{cases}

Hence

    f(x) = \frac{2}{\pi} \Big( \sin x + \frac{\sin 3x}{3} + \frac{\sin 5x}{5} + \cdots \Big).    (4.6.14)

Notice that the coefficients c_j decay as j^{-1}, in agreement with Theorem 4.6.3.
Figure 4.6.1. A rectangular wave.
The sum of the series is zero at the points where f has a jump discontinuity; this agrees with the fact that the sum should equal the average of the limiting values to the left and to the right of the discontinuity.
Figure 4.6.2 shows the approximations to the square wave using one, two, five, and ten terms of the series (4.6.14). As can be seen, there is a ringing effect near the discontinuities. The width and energy of this error are reduced when the number of terms in the approximation is increased. However, the height of the overshoot and undershoot near the discontinuity converges to a fixed height, which is equal to about 0.179 times the jump in the function value. This artifact is known as Gibbs' phenomenon.

Figure 4.6.2. Illustration of Gibbs' phenomenon.
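The Gibbs behavior can be reproduced directly from (4.6.14). The Python sketch below scans the partial sums on a fine grid near the jump at x = 0 (grid spacing and truncation orders are arbitrary choices); the peak value tends to (1/\pi)\int_0^{\pi} (\sin t)/t\,dt \approx 0.5895, i.e., an overshoot of about 0.09 above the limit 0.5 on each side of the jump:

```python
import math

def partial_sum(x, m):
    """Partial sum of the square-wave series (4.6.14):
    (2/pi) * sum over odd j < 2m of sin(j x)/j; the limit has jump 1 at x = 0."""
    return (2 / math.pi) * sum(math.sin(j * x) / j for j in range(1, 2 * m, 2))

# The overshoot does not die out as the number of terms grows; it only
# narrows, while its height stays near 0.5895 (Gibbs' phenomenon).
for m in (25, 100, 200):
    xs = [k * 1e-4 for k in range(1, 2000)]
    peak = max(partial_sum(x, m) for x in xs)
    assert 0.58 < peak < 0.60
```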
The Gibbs’ oscillations can be smoothed by multiplying the terms by factors which depend on m, the order of the partial sum. Let us consider the finite expansion
k =−(m−1)
Then in the smoothed expansion each term in the sum is multiplied by the Lanczos σ -factors,
m −1
sin πk/m
(see Lanczos [235, Chap. IV, Secs. 6 and 9]). Since the coefficients in the real form of the Fourier series are
a k =c k +c −k ,
b k = i(c k −c −k ),
the same σ -factor applies to them. The Gibbs’ oscillations can also be suppressed by using the epsilon algorithm. For this purpose one adds the conjugate Fourier series, applies the epsilon algorithm, and keeps only the real part of the result.