4.6.4 Convergence Acceleration of Fourier Series
The generalized Euler transformation described in Sec. 3.4.3 can be used for accelerating the convergence of Fourier series, except in the immediate vicinity of singular points. Consider a complex power series

S(z) = \sum_{n=1}^{\infty} u_n z^{n-1}.    (4.6.31)

A Fourier series that is originally of the form \sum_{n=-\infty}^{\infty} c_n e^{in\phi}, or in trigonometric form, can easily be brought to this form; see Problem 4.6.7. We consider the case

S(z) = \sum_{n=1}^{\infty} n^{-1} z^{n-1}, \qquad zS(z) = -\log(1-z) = \sum_{n=1}^{\infty} \frac{z^n}{n},
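To make the procedure concrete, here is a small Python sketch of one simple realization of the generalized Euler transformation: an iterated weighted mean M_j ← (M_{j+1} − zM_j)/(1 − z) applied to the partial sums. The function name and the choice of variant are ours, not the book's exact algorithm, and no attempt is made to reproduce the exact figures quoted below.

```python
import cmath

def euler_accelerate(terms, z):
    # Partial sums of S = sum_{n>=1} u_n z^(n-1).
    M, s = [], 0j
    for n, u in enumerate(terms, start=1):
        s += u * z ** (n - 1)
        M.append(s)
    # Column k = 0 holds the plain partial sums; each sweep forms the
    # weighted means M_j <- (M_{j+1} - z*M_j)/(1 - z), which annihilate
    # a geometric tail ~ z^j and so accelerate slowly varying u_n.
    out = [M[-1]]
    while len(M) > 1:
        M = [(M[j + 1] - z * M[j]) / (1 - z) for j in range(len(M) - 1)]
        out.append(M[-1])
    return out  # out[k] = accelerated value after k sweeps

# Model series u_n = 1/n, for which z*S(z) = -log(1 - z).
z = cmath.exp(1j * cmath.pi / 2)      # phi = pi/2, i.e. z = i
exact = -cmath.log(1 - z) / z
levels = euler_accelerate([1.0 / n for n in range(1, 21)], z)
errs = [abs(v - exact) for v in levels]
```

Choosing the best sweep count k (the quantity called kk below) gives an error many orders of magnitude below the plain partial sum; in practice a termination criterion must pick k without knowing the exact sum.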
which is typical for a power series with completely monotonic terms. (The rates of convergence are the same for almost all series of this class.) Numerical computation, essentially by the above algorithm, gave the following results. The coefficients u_j are computed in IEEE double precision arithmetic. We make the rounding errors during the computations less important by subtracting from the first row of partial sums its last element; it is, of course, added again to the final result.^159 The first table shows, for various φ, the most accurate result that can be obtained without thinning. These limits are due to the rounding errors; we can make the pure truncation error arbitrarily small by choosing N large enough.
    φ               π/2   π/3   π/4   π/8   π/12   π/180
    |error|         2·10^-16   8·10^-16   10^-14   6·10^-12   10^-9   5·10^-7   3·10^-5   2·10^-1
    N (no. terms)   20   22   18   10
    kk              6   15

Note that a rather good accuracy is also obtained for φ = π/8 and φ = π/12, where the algorithm is "unstable," since |z/(1 − z)| > 1. In this kind of computation "instability" does not mean that the algorithm is hopeless, but it shows the importance of a good termination
criterion. The question is to navigate safely between Scylla and Charybdis. For a small value such as φ = π/180, the sum is approximately 4.1 + 1.5i. The smallest error with 100
terms (or less) is 0.02; it is obtained for k = 3. Also note that kk/N increases with φ. By the application of thinning the results can often be improved considerably for
φ ≪ π, in particular for φ = π/180. Let τ be a positive integer. The thinned form of S(z) reads

S(z) = \sum_{p=1}^{\infty} u_p^* \, z^{\tau(p-1)}, \qquad u_p^* = \sum_{j=1}^{\tau} u_{j+\tau(p-1)} z^{j-1}.    (4.6.32)
The series (4.6.31) has "essentially positive" terms originally that can become "essentially alternating" by thinning. For example, if z = e^{iπ/3} and τ = 3, the series becomes an alternating series, perhaps with complex coefficients. It does not matter in the numerical work that u_p^* depends on z.
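A short sketch (with our own naming, for the model series u_n = n^{-1}) confirms both properties: the thinned series is an exact regrouping of the original one, and for z = e^{iπ/3}, τ = 3 the new ratio z^τ equals −1, so the series alternates.

```python
import cmath

def thinned_coeffs(u, z, tau, P):
    # u*_p = sum_{j=1..tau} u(j + tau*(p-1)) * z**(j-1),  p = 1..P
    return [sum(u(j + tau * (p - 1)) * z ** (j - 1) for j in range(1, tau + 1))
            for p in range(1, P + 1)]

u = lambda n: 1.0 / n                  # the model series
z = cmath.exp(1j * cmath.pi / 3)
tau, P = 3, 10
ustar = thinned_coeffs(u, z, tau, P)
zt = z ** tau                          # e^{i*pi} = -1: the thinned series alternates
thinned = sum(up * zt ** (p - 1) for p, up in enumerate(ustar, start=1))
direct = sum(u(n) * z ** (n - 1) for n in range(1, tau * P + 1))
# 'thinned' regroups exactly the first tau*P terms of the original series.
```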
We present the errors obtained for four values of the parameter τ , with different amounts of work. Compare |error|, kk, etc. with appropriate values in the table above. We
see that, by thinning, it is possible to calculate the Fourier series very accurately also for small values of φ.
159 Tricks like this can often be applied in linear computations with a slowly varying sequence of numbers. See, for example, the discussion of rounding errors in Richardson extrapolation in Sec. 3.4.6.
Roughly speaking, the optimal rate of convergence of the Euler transformation depends on z in the same way for all power series with completely monotonic coefficients, independently of the rate of convergence of the original series. The above tables from a particular example can therefore, with some safety margin, be used as a guide for the application of the Euler transformation with thinning to any series of this class.
Say that you want the sum of a series for z = e^{iφ}, φ = π/12, with relative |error| < 10^{-10}. You see in the first table that |error| = 6·10^{-12} for φ = π/3 = 4π/12 without thinning. The safety margin is, we hope, large enough. Therefore, try τ = 4. We make two tests with completely monotonic terms: u_n = n^{-1} and u_n = exp(−√n). We hope that tol = 10^{-10} is large enough to make the irregular errors relatively negligible. In
both tests the actual magnitude of the error turns out to be 4 · 10 −11 , and the total number of terms is 4 ·32 = 128. The values of errest are 6 ·10 −11 and 7 ·10 −11 ; both slightly overestimate the actual errors and are still smaller than tol.
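The experiment can be mimicked in a few lines. The iterated-weighted-mean acceleration below is our stand-in for the book's algorithm, and no attempt is made to reproduce the quoted error or errest values; it only illustrates that τ = 4 thinning makes φ = π/12 easy for u_n = n^{-1}.

```python
import cmath

def euler_accelerate(terms, z):
    # Partial sums, then repeated weighted means M_j <- (M_{j+1} - z*M_j)/(1-z).
    M, s = [], 0j
    for n, u in enumerate(terms, start=1):
        s += u * z ** (n - 1)
        M.append(s)
    out = [M[-1]]
    while len(M) > 1:
        M = [(M[j + 1] - z * M[j]) / (1 - z) for j in range(len(M) - 1)]
        out.append(M[-1])
    return out

phi = cmath.pi / 12
z = cmath.exp(1j * phi)
tau, P = 4, 32                          # 4*32 = 128 terms in all, as in the text
# Thinned coefficients u*_p for u_n = 1/n.
ustar = [sum((1.0 / (j + tau * (p - 1))) * z ** (j - 1) for j in range(1, tau + 1))
         for p in range(1, P + 1)]
zt = z ** tau                           # effective angle tau*phi = pi/3
exact = -cmath.log(1 - z) / z           # z*S(z) = -log(1 - z)
err = min(abs(v - exact) for v in euler_accelerate(ustar, zt))
```

The best accelerated value is far more accurate than the plain 128-term partial sum, in line with the behavior reported above.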