3.3.4 Approximation Formulas by Operator Methods

We shall now demonstrate how operator methods are very useful for deriving approximation formulas. For example, in order to find interpolation formulas we consider the operator expansion

\[ f(b - \gamma h) = E^{-\gamma} f(b) = (1 - \nabla)^{\gamma} f(b) = \sum_{j=0}^{\infty} \binom{\gamma}{j} (-\nabla)^j f(b). \]

The verification of the assumptions of Theorem 3.3.7 offers no difficulties, and we omit the details. Truncate the expansion before \((-\nabla)^k\). By the theorem we obtain, for every γ, an approximation formula for f(b − γh) that uses the function values f(b − jh) for j = 0 : k − 1; it is exact if f ∈ P_k and is unique in the sense of Theorem 3.3.4. We also obtain an asymptotic error estimate if f ∉ P_k, namely the first neglected term of the expansion, i.e.,

\[ \binom{\gamma}{k} (-\nabla)^k f(b) \sim \binom{\gamma}{k} (-h)^k f^{(k)}(b). \]

Note that the binomial coefficients are polynomials in the variable γ, and hence also in the variable x = b − γh.

It follows that the approximation formula yields a unique polynomial P_B ∈ P_k that solves the interpolation problem P_B(b − jh) = f(b − jh), j = 0 : k − 1 (B stands for backward). If we set x = b − γh, we obtain

\[ P_B(x) = \sum_{j=0}^{k-1} \binom{\gamma}{j} (-\nabla)^j f(b) = f(b - \gamma h) + O(h^k f^{(k)}). \]  (3.3.35)

Similarly, the interpolation polynomial P_F ∈ P_k that uses forward differences based on the values of f at a, a + h, ..., a + (k − 1)h reads, if we set x = a + θh,

\[ P_F(x) = \sum_{j=0}^{k-1} \binom{\theta}{j} \Delta^j f(a) = f(a + \theta h) + O(h^k f^{(k)}). \]  (3.3.36)

These formulas are known as Newton's interpolation formulas for constant step size, backward and forward. The generalization to variable step size will be found in Sec. 4.2.1.
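Formula (3.3.36) is easy to try out numerically. Below is a minimal Python sketch (numpy assumed; the test function and grid are illustrative only), which builds the forward-difference column and accumulates the binomial coefficients by the recurrence C(θ, j) = C(θ, j−1)(θ − j + 1)/j:

    import numpy as np

    def newton_forward(f_values, theta):
        """Evaluate Newton's forward interpolation formula (3.3.36).

        f_values : samples f(a), f(a+h), ..., f(a+(k-1)h)
        theta    : evaluation point expressed as x = a + theta*h
        Returns P_F(a + theta*h), exact for f in P_k.
        """
        d = np.array(f_values, dtype=float)    # current difference column
        result = d[0]                          # j = 0 term
        coeff = 1.0                            # binomial coefficient C(theta, j)
        for j in range(1, len(f_values)):
            d = d[1:] - d[:-1]                 # forward differences Delta^j f
            coeff *= (theta - (j - 1)) / j     # C(theta, j) via recurrence
            result += coeff * d[0]             # add C(theta, j) * Delta^j f(a)
        return result

    # Illustrative test: interpolate ln(x) on a grid starting at a = 3.0
    h, a = 0.1, 3.0
    ys = [np.log(a + i * h) for i in range(5)]
    print(newton_forward(ys, 0.37), np.log(a + 0.37 * h))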

There exists a similar expansion for central differences. Set

\[ \varphi_k(\theta) = \frac{\theta^{[k]}}{k!}, \qquad \theta^{[k]} = \theta\Bigl(\theta + \tfrac{k}{2} - 1\Bigr)\Bigl(\theta + \tfrac{k}{2} - 2\Bigr)\cdots\Bigl(\theta - \tfrac{k}{2} + 1\Bigr), \]  (3.3.37)

a central factorial power; in particular φ₂(θ) = θ²/2.

φ_j is an even function if j is even, and an odd function if j is odd. It can be shown that δ^j φ_k(θ) = φ_{k−j}(θ) and δ^j φ_k(0) = δ_{j,k} (Kronecker's delta). The functions φ_k thus bear the same relation to the operator δ as the functions θ^j/j! bear to D and the binomial coefficients \(\binom{\theta}{j}\) bear to Δ. We obtain the following expansion, analogous to Taylor's formula and Newton's forward interpolation formula; the proof is left for Problem 3.3.5 (b):

\[ \sum_{j=0}^{k-1} \varphi_j(\theta)\, \delta^j f(a) = f(a + \theta h) + O(h^k f^{(k)}). \]  (3.3.38)
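The stated relations can be checked symbolically. A small sympy sketch, under the central-factorial reading of (3.3.37) given above (our reconstruction):

    import sympy as sp

    th = sp.symbols('theta')

    def phi(k):
        """phi_k(theta) = theta^[k]/k!, theta^[k] the central factorial power
        (definition as reconstructed in (3.3.37))."""
        if k == 0:
            return sp.Integer(1)
        prod = th
        for j in range(1, k):
            prod *= th + sp.Rational(k, 2) - j
        return sp.expand(prod / sp.factorial(k))

    def delta(g):
        """Central difference operator acting on functions of theta (unit step)."""
        return sp.expand(g.subs(th, th + sp.Rational(1, 2))
                         - g.subs(th, th - sp.Rational(1, 2)))

    # delta phi_k = phi_{k-1}, and phi_k(0) = 0 for k >= 1, phi_0 = 1;
    # together these give delta^j phi_k(0) = delta_{j,k} (Kronecker's delta).
    for k in range(1, 8):
        assert sp.simplify(delta(phi(k)) - phi(k - 1)) == 0
        assert phi(k).subs(th, 0) == 0
    print("relations verified for k = 1 : 7")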

The direct practical importance of this formula is small, since δ^j f(a) cannot be expressed as a linear combination of the given data when j is odd. There are several formulas in which this drawback has been eliminated by various transformations. They were much in use before the computer age; each formula had its own group of fans. We shall derive only one of them, by a short break-neck application of the formal power series techniques.81 Note that

\[ E^{\theta} = e^{\theta hD} = \cosh \theta hD + \sinh \theta hD, \qquad \delta^2 = e^{hD} - 2 + e^{-hD}, \qquad e^{hD} - e^{-hD} = 2\mu\delta, \]

\[ \cosh \theta hD = \tfrac{1}{2}\bigl(E^{\theta} + E^{-\theta}\bigr) = \sum_{j=0}^{\infty} \varphi_{2j}(\theta)\, \delta^{2j}, \]

\[ \sinh \theta hD = \frac{1}{\theta}\,\frac{d(\cosh \theta hD)}{d(hD)} = \frac{1}{\theta} \sum_{j=1}^{\infty} \varphi_{2j}(\theta)\, \frac{d\delta^{2j}}{d\delta^{2}}\, \frac{d\delta^{2}}{d(hD)} = \frac{1}{\theta} \sum_{j=1}^{\infty} \varphi_{2j}(\theta)\, j\delta^{2(j-1)} \cdot 2\mu\delta = \sum_{j=1}^{\infty} \frac{2j}{\theta}\, \varphi_{2j}(\theta)\, \mu\delta^{2j-1}. \]

Adding these two expansions and applying E^θ f₀ = f(x₀ + θh), we obtain

\[ f(x_0 + \theta h) = f_0 + \theta\mu\delta f_0 + \frac{\theta^2}{2}\delta^2 f_0 + \sum_{j=2}^{\infty} \varphi_{2j}(\theta) \Bigl( \frac{2j}{\theta}\,\mu\delta^{2j-1} f_0 + \delta^{2j} f_0 \Bigr). \]  (3.3.39)

This is known as Stirling's interpolation formula.82 The first three terms have been taken out from the sum in order to show their simplicity and their resemblance to Taylor's formula. They yield the most practical formula for quadratic interpolation; it is easily remembered and worth remembering. An approximate error bound for this quadratic interpolation reads |0.016 δ³f| if |θ| < 1.

81 Differentiation of a formal power series with respect to an indeterminate has a purely algebraic definition. See the last part of Sec. 3.1.5.

82 James Stirling (1692–1770), British mathematician perhaps most famous for his amazing approximation to n!.
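The quadratic three-term case of (3.3.39) is short enough to spell out in code; a minimal sketch with an illustrative test function:

    import numpy as np

    def stirling_quadratic(fm1, f0, f1, theta):
        """First three terms of Stirling's formula (3.3.39):
        f(x0 + theta*h) ~ f0 + theta*(mu*delta f0) + (theta**2/2)*(delta**2 f0),
        with mu*delta f0 = (f1 - fm1)/2 and delta**2 f0 = f1 - 2*f0 + fm1."""
        mu_delta = 0.5 * (f1 - fm1)
        delta2 = f1 - 2.0 * f0 + fm1
        return f0 + theta * mu_delta + 0.5 * theta**2 * delta2

    # Illustrative test: quadratic interpolation of sin around x0 = 0.5, h = 0.1
    h, x0, theta = 0.1, 0.5, 0.3
    approx = stirling_quadratic(np.sin(x0 - h), np.sin(x0), np.sin(x0 + h), theta)
    print(approx - np.sin(x0 + theta * h))  # should stay below the 0.016*|delta^3 f| bound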

Note that

\[ \varphi_{2j}(\theta) = \frac{\theta^2(\theta^2 - 1)(\theta^2 - 4)\cdots(\theta^2 - (j-1)^2)}{(2j)!}. \]

The expansion yields a true interpolation formula if it is truncated after an even power of δ. For k = 1 you see that f₀ + θµδf₀ is not a formula for linear interpolation; it uses three data points instead of two. It is similar for all odd values of k.

Strict error bounds can be found by means of Peano's theorem, but the remainder given by Theorem 4.2.3 for Newton's general interpolation formula (which does not require equidistant data) typically gives the answer more easily. Both are typically of the form c_{k+1} f^{(k+1)}(ξ) and require a bound for a derivative of high order. The assessment of such a bound typically costs much more work than performing the interpolation at a single point.

A more practical approach is to estimate a bound for this derivative by means of a bound for the differences of the same order. (Recall the important formula (3.3.4).) This does not give a rigorous bound, but it typically yields a quite reliable error estimate, in particular if you put a moderate safety factor on top of it. There is much more to be said about the choice of step size and order; we shall return to these kinds of questions in later chapters.

You can make error estimates during the computations; sooner or later the estimated error may fail to decrease when you increase the order. You may just as well stop there and accept the most recent value as the result. This event is most likely due to the influence of irregular errors, but it can also indicate that the interpolation process is only semiconvergent.

The attainable accuracy of polynomial interpolation applied to a table with n equidistant values of an analytic function depends strongly on θ; the results are much poorer near the boundaries of the data set than near the center. This question will be illuminated in Sec. 4.7 by means of complex analysis.

Example 3.3.9.

The continuation of the difference scheme of a polynomial is a classical application of difference schemes for obtaining a smooth extrapolation of a function outside its original domain. Suppose that the values y_{n−i} = f(x_n − ih) for i = 1 : k and the backward differences ∇^j y_{n−1}, j = 1 : k − 1, are given. Recall that ∇^{k−1}y is a constant for y ∈ P_k. Consider the algorithm

\[ \nabla^{k-1} y_n = \nabla^{k-1} y_{n-1}; \qquad \nabla^{j} y_n = \nabla^{j} y_{n-1} + \nabla^{j+1} y_n, \quad j = k-2, k-3, \ldots, 1, 0, \]  (3.3.40)

where ∇⁰y_n = y_n.

It is left for Problem 3.3.2 (g) to show that the result y_n is the value at x = x_n of the interpolation polynomial determined by y_{n−i}, i = 1 : k. This is a kind of inverse use of a difference scheme; there are additions from right to left along a diagonal, instead of subtractions from left to right (see the sketch below).
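A minimal Python sketch of (3.3.40); the function name and the example data (the difference scheme of y = x² at x = 0, 1, 2) are ours:

    def extend_difference_scheme(diag, steps):
        """Continue a difference scheme by additions only, as in (3.3.40).

        diag  : [y_{n-1}, grad y_{n-1}, ..., grad^{k-1} y_{n-1}], the current
                diagonal of backward differences; grad^{k-1} y stays constant
                for y in P_k.
        steps : number of new values to generate.
        Returns the list of extrapolated values y_n, y_{n+1}, ...
        """
        d = list(map(float, diag))
        out = []
        for _ in range(steps):
            # additions from right to left along the new diagonal
            for j in range(len(d) - 2, -1, -1):
                d[j] += d[j + 1]
            out.append(d[0])
        return out

    # y = x^2 on x = 0, 1, 2: values 0, 1, 4 -> diagonal (4, 3, 2) at n-1
    print(extend_difference_scheme([4.0, 3.0, 2.0], 3))  # [9.0, 16.0, 25.0]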

This algorithm, which needs additions only, was used long ago for the production of mathematical tables, for example, of logarithms. Suppose that one knows, by means of a series expansion, a relatively complicated polynomial approximation to (say) f(x) = ln x that is accurate enough in (say) the interval [a, b], and that this has been used for the computation of k very accurate values y₀ = f(a), y₁ = f(a + h), ..., y_{k−1}, needed for starting the difference scheme. The algorithm is then used for n = k, k + 1, k + 2, ..., (b − a)/h. Only k − 1 additions are needed for each value y_n. Some analysis must have been needed for the choice of the step h to make the tables useful with (say) linear interpolation, and for the choice of k to make the basic polynomial approximation accurate enough over a substantial number of steps. The precision used was higher when the table was produced than when it was used. When x = b was reached, a new approximating polynomial was needed for continuing the computation over another interval (at least a new value of ∇^{k−1}y_n).83

The algorithm in (3.3.40) can be generalized to the case of nonequidistant data with the use of divided differences; see Sec. 4.2.1.

We now derive some central difference formulas for numerical differentiation. From the definition and from Bickley's table (Table 3.3.1),

\[ \delta \equiv E^{1/2} - E^{-1/2} = 2 \sinh \tfrac{1}{2} hD. \]  (3.3.41)

We may therefore put x = ½hD, sinh x = ½δ into the expansion (see Problem 3.1.7)

\[ x = \sinh x - \frac{1}{2}\,\frac{\sinh^3 x}{3} + \frac{1\cdot 3}{2\cdot 4}\,\frac{\sinh^5 x}{5} - \frac{1\cdot 3\cdot 5}{2\cdot 4\cdot 6}\,\frac{\sinh^7 x}{7} + \cdots, \]

with the result

\[ hD = 2\,\operatorname{arcsinh}\frac{\delta}{2} = \delta - \frac{\delta^3}{24} + \frac{3\delta^5}{640} - \frac{5\delta^7}{7168} + \frac{35\delta^9}{294{,}912} - \frac{63\delta^{11}}{2{,}883{,}584} + \cdots. \]  (3.3.42)

The verification of the assumptions of Theorem 3.3.7 follows the pattern of the proof of (3.3.23), and we omit the details. Since arcsinh z, z ∈ C, has the same singularities as its derivative (1 + z²)^{−1/2}, namely z = ±i, it follows that the expansion in (3.3.42), if ζ/2 is substituted for δ/2, converges if |ζ/2| < 1; hence ρ = 2. By squaring the above relation, we obtain

\[ (hD)^2 = \delta^2 - \frac{\delta^4}{12} + \frac{\delta^6}{90} - \frac{\delta^8}{560} + \cdots. \]  (3.3.43)

By Theorem 3.3.7, (3.3.43) holds for all polynomials. Since the first neglected nonvanishing term of (3.3.43), when the expansion is truncated after the δ¹² term and applied to f, is (asymptotically) cδ¹²f″(x₀), the formula for f″(x₀)

83 This procedure was the basis of the unfinished Difference Engine project of the great nineteenth-century British computer pioneer Charles Babbage. He abandoned it after a while in order to spend more time on his huge Analytical Engine project, which was also unfinished. He documented a lot of ideas where he was (say) 100 years ahead of his time. "Difference engines" based on Babbage's ideas were, however, constructed in Babbage's own time, by the Swedish inventors Scheutz (father and son) in 1834 and by Wiberg in 1876. They were applied to, among other things, the automatic calculation and printing of tables of logarithms; see Goldstine [159].

is exact if f″ ∈ P₁₂, i.e., if f ∈ P₁₄, although only 13 values of f(x) are used. We thus gain one degree and, in the application to functions other than polynomials, one order of accuracy, compared to what we may have expected by counting unknowns and equations only; see Theorem 3.3.4. This is typical for a problem that has a symmetry with respect to the hull of the data points.
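The coefficients in (3.3.42) and (3.3.43) can be reproduced symbolically, e.g. with sympy:

    import sympy as sp

    d = sp.symbols('delta')

    # hD = 2*arcsinh(delta/2): the series should reproduce (3.3.42)
    hD = sp.series(2 * sp.asinh(d / 2), d, 0, 12).removeO()
    print(sp.expand(hD))
    # delta - delta**3/24 + 3*delta**5/640 - 5*delta**7/7168 + 35*delta**9/294912 - ...

    # Squaring gives (3.3.43)
    print(sp.expand(sp.series((2 * sp.asinh(d / 2))**2, d, 0, 12).removeO()))
    # delta**2 - delta**4/12 + delta**6/90 - delta**8/560 + delta**10/3150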

Suppose that the values of f are given on the grid x = x₀ + nh, n an integer. Since (3.3.42) contains odd powers of δ, it cannot be used to compute f′_n on the same grid, as pointed out in the beginning of Sec. 3.3.2. This difficulty can be overcome by means of another formula given in Bickley's table, namely

\[ \mu = \bigl(1 + \delta^2/4\bigr)^{1/2}. \]  (3.3.44)

This is derived as follows. The formulas

\[ \cosh hD = 1 + \delta^2/2, \qquad \sinh hD = \mu\delta \]  (3.3.45)

follow rather directly from the definitions; the details are left for Problem 3.3.6 (a). The formula (cosh hD)² − (sinh hD)² = 1 holds also for formal power series. Hence

\[ \bigl(1 + \delta^2/2\bigr)^2 - \mu^2\delta^2 = 1, \quad \text{or} \quad \mu^2 = 1 + \delta^2/4, \]

from which the relation (3.3.44) follows. If we now multiply the right-hand side of (3.3.42) by the expansion

\[ 1 = \mu\bigl(1 + \delta^2/4\bigr)^{-1/2} = \mu\Bigl(1 - \frac{\delta^2}{8} + \frac{3\delta^4}{128} - \cdots\Bigr), \]  (3.3.46)

we obtain

\[ hD = \mu\delta\Bigl(1 - \frac{\delta^2}{6} + \frac{\delta^4}{30} - \cdots\Bigr). \]

This leads to a useful central difference formula for the first derivative (where we have used more terms than we displayed in the above derivation):

\[ f'(x_0) \approx \frac{1}{h}\,\mu\delta\Bigl(1 - \frac{\delta^2}{6} + \frac{\delta^4}{30} - \frac{\delta^6}{140} + \frac{\delta^8}{630} - \cdots\Bigr) f_0, \]  (3.3.47)

whose first term equals (f₁ − f₋₁)/(2h).

If you truncate the operator expansion in (3.3.47) after the δ^{2k} term, you obtain exactly the derivative of the interpolation polynomial of degree 2k + 1 for f(x) that is determined by the 2k + 2 values f_i, i = ±1, ±2, ..., ±(k + 1). Note that all the neglected terms in the expansion vanish when f(x) is any polynomial of degree 2k + 2, independent of the value of f₀. (Check the statements first for k = 0; you will recognize a familiar property of the parabola.) So, although we search for a formula that is exact in P_{2k+2}, we actually find a formula that is exact in P_{2k+3}.
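A hedged Python sketch of numerical differentiation via (3.3.47): k is the number of terms retained, the coefficients 1, −1/6, 1/30, −1/140 are those derived above, and the helper simply manipulates difference columns (all names are illustrative):

    import numpy as np

    # Coefficients of (3.3.47): f'(x0) ~ (1/h) * sum_j c_j * mu*delta^(2j+1) f_0
    COEFFS = [1.0, -1.0 / 6.0, 1.0 / 30.0, -1.0 / 140.0]

    def central_derivative(f, x0, h, k=2):
        """Approximate f'(x0) with k terms of the mu*delta expansion (3.3.47)."""
        # samples f_{-(k+1)}, ..., f_{k+1}
        f_vals = np.array([f(x0 + i * h) for i in range(-(k + 1), k + 2)])
        total = 0.0
        for j in range(k):
            d = f_vals.copy()
            for _ in range(2 * j + 1):        # delta^(2j+1), values on half-grid
                d = d[1:] - d[:-1]
            mu_d = 0.5 * (d[:-1] + d[1:])     # averaging operator mu
            m = len(mu_d) // 2                # value centered at x0
            total += COEFFS[j] * mu_d[m]
        return total / h

    print(central_derivative(np.log, 3.0, 0.1, k=3), 1 / 3.0)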

By the multiplication of the expansions in (3.3.43) and (3.3.46), we obtain the following formulas, which have applications in other sections:

\[ (hD)^3 = \mu\delta^3\Bigl(1 - \frac{\delta^2}{4} + \cdots\Bigr), \qquad (hD)^5 = \mu\delta^5\Bigl(1 - \frac{\delta^2}{3} + \cdots\Bigr), \qquad (hD)^7 = \mu\delta^7 + \cdots. \]  (3.3.48)

Another valuable feature typical for expansions in powers of δ² is their rapid convergence. It was mentioned earlier that ρ = 2, hence ρ² = 4 (while ρ = 1 for the backward differentiation formula). The error constants of the differentiation formulas obtained by (3.3.43) and (3.3.47) are thus relatively small.

All this is typical for symmetric approximation formulas based on central differences; see, for example, the above formula for f″(x₀), or the next example. In view of this, can we forget the forward and backward difference formulas altogether? Not quite, since one must often deal with data that are unsymmetric with respect to the point where the result is needed. For example, given f₋₁, f₀, f₁, how would you compute f′(x₁)? Asymmetry is also typical for the application to initial value problems for differential equations. In such applications, methods based on symmetric rules for differentiation or integration sometimes have inferior numerical stability properties.

We shall study the computation of f′(x₀) using the operator expansion (3.3.47). The truncation error (called R_T) can be estimated by the first neglected term, where

\[ \frac{1}{h}\,\mu\delta^{2k+1} f_0 \approx h^{2k} f^{(2k+1)}(x_0). \]

The irregular errors in the values of f(x) are of much greater importance in numerical differentiation than in interpolation and integration. Suppose that the function values have errors whose magnitude does not exceed ½U. Then the error bound on µδf₀ = ½(f₁ − f₋₁) is also equal to ½U. Similarly, one can show that the error bounds on µδ^{2k+1}f₀, for k = 1 : 3, are 1.5U, 5U, and 17.5U, respectively. Thus one gets the upper bounds U/(2h), 3U/(4h), and 11U/(12h) for the roundoff error R_{XF} with one, two, and three terms in (3.3.47).

Example 3.3.10.

Assume that k terms in the formula above are used to approximate f′(x₀), where f(x) = ln x, x₀ = 3, and U = 10⁻⁶. Then

\[ f^{(2k+1)}(3) = \frac{(2k)!}{3^{2k+1}}, \]

and for the truncation and roundoff errors we get

    k       1                2                3
    R_T     0.0123 h²        0.00329 h⁴       0.00235 h⁶
    R_XF    (1/(2h))·10⁻⁶    (3/(4h))·10⁻⁶    (11/(12h))·10⁻⁶

The plots of R_T and R_XF versus h in a log-log diagram in Figure 3.3.1 are straight lines that illustrate quantitatively the conflict between truncation and roundoff errors: the truncation error increases with h, while the effect of the irregular errors decreases. One sees how the choice of h that minimizes the sum of the bounds for the two types of error depends on U and k, and what accuracy can then be obtained. The optimal step lengths for k = 1, 2, 3 are h = 0.0344, h = 0.1869, and h = 0.3260, giving error bounds 2.91·10⁻⁵, 8.03·10⁻⁶, and 5.64·10⁻⁶, respectively. Note that the optimal error bound for k = 3 is not much better than that for k = 2.
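These numbers can be reproduced by balancing the two bounds, a_k h^{2k} = b_k U/h, which gives h = (b_k U/a_k)^{1/(2k+1)}; a small Python sketch:

    # Bounds from Example 3.3.10: R_T = a_k * h**(2k), R_XF = b_k * U / h
    a = {1: 0.0123, 2: 0.00329, 3: 0.00235}
    b = {1: 0.5, 2: 0.75, 3: 11.0 / 12.0}
    U = 1e-6

    for k in (1, 2, 3):
        # balance a*h**(2k) = b*U/h  =>  h = (b*U/a)**(1/(2k+1))
        h = (b[k] * U / a[k]) ** (1.0 / (2 * k + 1))
        bound = a[k] * h ** (2 * k) + b[k] * U / h
        print(k, h, bound)
    # reproduces h ~ 0.0344, 0.1869, 0.3260 and bounds ~ 2.9e-5, 8.0e-6, 5.6e-6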


Figure 3.3.1. Bounds for truncation error R_T and roundoff error R_XF in numerical differentiation as functions of h (U = 0.5 · 10⁻⁶).

The effect of the pure rounding errors is important, though it should not be exaggerated. Using IEEE double precision with u = 1.1·10⁻¹⁶, one can obtain the first two derivatives very accurately with the optimal choice of h. The corresponding figures are h = 2.08·10⁻⁵, h = 2.19·10⁻³, and h = 1.36·10⁻², giving the optimal error bounds 1.07·10⁻¹¹, 1.52·10⁻¹³, and 3.00·10⁻¹⁴, respectively. It is left to the reader (Problem 4.3.8) to check and modify the experiments and conclusions indicated in this example.

When a problem has a symmetry around some point x₀, you are advised to try to derive a δ²-expansion. The first step is to express the relevant operator in the form M(δ²), where the function M is analytic at the origin. To find a δ²-expansion for M(δ²) is algebraically the same thing as expanding M(z) into powers of a complex variable z. Thus the methods for the manipulation of power series mentioned in Sec. 3.1.4 and Problem 3.1.8 are available, and so is the Cauchy–FFT method. For suitably chosen r, N you evaluate

\[ M\bigl(r e^{2\pi i k/N}\bigr), \qquad k = 0 : N - 1, \]

and obtain the coefficients of the δ²-expansion by the FFT! You can therefore derive a long expansion and later truncate it as needed; you also obtain error estimates for all these truncated expansions for free. By the assumed symmetry there will be only even powers of δ in the expansion. Some computation and storage can be saved by working with F(√z) instead.
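As an illustration of the Cauchy–FFT recipe, the following numpy sketch recovers the coefficients of (3.3.43) from M(z) = (2 arcsinh(√z/2))², so that M(δ²) = (hD)²; the radius r and sample count N are ad hoc choices inside the disk of analyticity (|z| < 4 here):

    import numpy as np

    def delta2_coefficients(M, n_coeffs, r=0.5, N=64):
        """Taylor coefficients of M(z) at z = 0 by the Cauchy-FFT method."""
        z = r * np.exp(2j * np.pi * np.arange(N) / N)   # points on |z| = r
        c = np.fft.fft(M(z)) / N                        # trapezoidal Cauchy integrals
        return (c[:n_coeffs] / r ** np.arange(n_coeffs)).real

    M = lambda z: (2 * np.arcsinh(np.sqrt(z) / 2)) ** 2
    print(delta2_coefficients(M, 5))
    # ~ [0, 1, -1/12 = -0.08333, 1/90 = 0.01111, -1/560 = -0.001786]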

Suppose that you have found a truncated δ²-expansion, (say)

\[ A(\delta^2) \equiv a_1 + a_2\delta^2 + a_3\delta^4 + \cdots + a_{k+1}\delta^{2k}, \]

but you want instead an equivalent symmetric expression of the form

\[ B(E) \equiv b_1 + b_2\bigl(E + E^{-1}\bigr) + b_3\bigl(E^2 + E^{-2}\bigr) + \cdots + b_{k+1}\bigl(E^{k} + E^{-k}\bigr). \]

Note that δ² = E − 2 + E⁻¹. The transformation A(δ²) ↦ B(E) can be performed in several ways. Since it is linear, it can be expressed by a matrix multiplication of the form b = M_{k+1}a, where a, b are column vectors of the coefficients, and M_{k+1} is the (k + 1) × (k + 1) upper triangular submatrix in the northwest corner of a matrix M that

turns out to be

\[ M = \begin{pmatrix} 1 & -2 & 6 & -20 & 70 & -252 & 924 & -3432 \\ & 1 & -4 & 15 & -56 & 210 & -792 & 3003 \\ & & 1 & -6 & 28 & -120 & 495 & -2002 \\ & & & 1 & -8 & 45 & -220 & 1001 \\ & & & & 1 & -10 & 66 & -364 \\ & & & & & 1 & -12 & 91 \\ & & & & & & 1 & -14 \\ & & & & & & & 1 \end{pmatrix}. \]  (3.3.49)

This 8 × 8 matrix is sufficient for a δ²-expansion up to the term a₈δ¹⁴. Note that the matrix elements are binomial coefficients that can be generated recursively (Sec. 3.1.2); it is easy to extend M by the recurrence mentioned in the theorem below. Also note that the matrix can be looked upon as the lower part of a thinned Pascal triangle.

Theorem 3.3.10.

The elements of M are

\[ M_{i,j} = (-1)^{j-i} \binom{2j-2}{j-i}. \]  (3.3.50)

We extend the definition by setting M_{0,j} = M_{2,j}. Then the columns of M are obtained by the recurrence

\[ M_{i,j+1} = M_{i+1,j} - 2M_{i,j} + M_{i-1,j}. \]  (3.3.51)

Proof. Recall that δ = (1 − E⁻¹)E^{1/2} and put m − ν = µ. Hence

\[ \delta^{2m} = \bigl(1 - E^{-1}\bigr)^{2m} E^{m} = \sum_{\nu=0}^{2m} \binom{2m}{\nu} (-1)^{\nu} E^{m-\nu} = (-1)^m \binom{2m}{m} + \sum_{\mu=1}^{m} (-1)^{m-\mu} \binom{2m}{m-\mu} \bigl(E^{\mu} + E^{-\mu}\bigr). \]  (3.3.52)

Since

\[ \bigl(1 \;\; \delta^2 \;\; \delta^4 \;\; \ldots\bigr) = \bigl(1 \;\; (E + E^{-1}) \;\; (E^2 + E^{-2}) \;\; \ldots\bigr)\, M, \]

we have in the result of (3.3.52) an expression for column m + 1 of M. By putting j = m + 1 and i = µ + 1, we obtain (3.3.50). The proof of the recurrence is left to the reader. (Think of Pascal's triangle.)
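Theorem 3.3.10 translates directly into code. A minimal sketch (0-based indices, so the entry is (−1)^{j−i} C(2j, j−i)); as a check, it converts the first terms of (3.3.43) into the classical five-point approximation of the second derivative:

    import numpy as np
    from math import comb

    def build_M(n):
        """Matrix of Theorem 3.3.10, 0-based: M[i, j] = (-1)**(j-i) * C(2j, j-i)."""
        M = np.zeros((n, n), dtype=np.int64)
        for j in range(n):
            for i in range(j + 1):
                M[i, j] = (-1) ** (j - i) * comb(2 * j, j - i)
        return M

    def delta2_to_symmetric(a):
        """Convert A(delta^2) = a[0] + a[1]*delta^2 + ... into B(E) coefficients b,
        where B(E) = b[0] + b[1]*(E + 1/E) + b[2]*(E^2 + 1/E^2) + ..."""
        a = np.asarray(a, dtype=float)
        return build_M(len(a)) @ a

    # (hD)^2 = delta^2 - delta^4/12 + ... gives the five-point f'' formula:
    print(delta2_to_symmetric([0.0, 1.0, -1.0 / 12.0]))
    # [-2.5, 1.3333, -0.0833]: h^2 f'' ~ -5/2 f0 + 4/3 (f1+f-1) - 1/12 (f2+f-2)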

The integration operator D⁻¹ is defined by the relation

\[ \bigl(D^{-1} f\bigr)(x) = \int^{x} f(t)\, dt. \]

The lower limit is not fixed, so D⁻¹f contains an arbitrary integration constant. Note that DD⁻¹f = f, while D⁻¹Df = f + C, where C is the integration constant. A difference expression like

\[ D^{-1} f(b) - D^{-1} f(a) = \int_{a}^{b} f(t)\, dt \]

is uniquely defined. So is δD⁻¹f, but D⁻¹δf has an integration constant.

A right-hand inverse can also be defined for the operators Δ, ∇, and δ. For example,

\[ \bigl(\nabla^{-1} u\bigr)_n = \sum^{n} u_j \]

has an arbitrary summation constant (the lower summation limit is not fixed), but, for example, ∇∇⁻¹ = 1 and Δ∇⁻¹ = E∇∇⁻¹ = E are uniquely defined.

One can make the inverses unique by restricting the class of sequences (or functions). For example, if we require that

Σ_{j=0}^∞ u_j is convergent, and make the convention that (Δ⁻¹u)_n → 0 as n → ∞, then Δ⁻¹u_n = −Σ_{j=n}^∞ u_j; notice the minus sign. Also notice that this is consistent with the following formal computation:

\[ \bigl(1 + E + E^2 + \cdots\bigr) u_n = (1 - E)^{-1} u_n = -\Delta^{-1} u_n. \]

We recommend, however, some extra care with infinite expansions into powers of an operator like E, since they are not covered by Theorem 3.3.7; the finite expansion

\[ 1 + E + E^2 + \cdots + E^{n-1} = \bigl(E^{n} - 1\bigr)(E - 1)^{-1} \]  (3.3.53)

is valid, though.

In Chapter 5 we will use operator methods together with the Cauchy–FFT method for finding the Newton–Cotes’ formulas for symmetric numerical integration. Operator techniques can also be extended to functions of several variables. The basic relation is again the operator form of Taylor’s formula, which in the case of two variables reads

\[ u(x_0 + h, y_0 + k) = \exp\Bigl( h\frac{\partial}{\partial x} + k\frac{\partial}{\partial y} \Bigr)\, u(x_0, y_0). \]  (3.3.54)

