4.7.2 Discrete Convolution by FFT
The most important operation in signal processing is computing the discrete version of the convolution operator. This awkward operation in the time domain becomes very simple in the frequency domain.
Definition 4.7.3.
The convolution of two sequences f_i and g_i, i = 0 : N − 1, is conv(f, g) = (h_0, h_1, \ldots, h_{N-1})^T, where

h_j = \sum_{k=0}^{N-1} g_{j-k} f_k, \quad j = 0 : N - 1, \qquad (4.7.15)

where the sequences are extended to have period N by setting f_i = f_{i+jN}, g_i = g_{i+jN} for all integers i, j.
The discrete convolution can be used to approximate the convolution defined for continuous functions in Definition 4.6.5, in much the same way as the Fourier transform was approximated using sampled values in Sec. 4.6.6.
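To make the definition concrete, here is a minimal reference implementation (a sketch in Python/NumPy, not part of the original text) that evaluates (4.7.15) directly; it costs O(N^2) operations and serves only as a baseline for the FFT-based method developed below.

```python
import numpy as np

def circular_convolution(f, g):
    """Discrete convolution per (4.7.15): h_j = sum_k g_{j-k} f_k,
    with indices reduced modulo N (period-N extension)."""
    N = len(f)
    h = np.zeros(N, dtype=complex)
    for j in range(N):
        for k in range(N):
            h[j] += g[(j - k) % N] * f[k]   # periodic indexing of g
    return h
```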
We can write the sum in (4.7.15) as a matrix-vector multiplication h = Gf, where G is a Toeplitz matrix. Writing out components we have

\begin{pmatrix} h_0 \\ h_1 \\ h_2 \\ \vdots \\ h_{N-1} \end{pmatrix} =
\begin{pmatrix}
g_0 & g_{N-1} & g_{N-2} & \cdots & g_1 \\
g_1 & g_0 & g_{N-1} & \cdots & g_2 \\
g_2 & g_1 & g_0 & \cdots & g_3 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
g_{N-1} & g_{N-2} & g_{N-3} & \cdots & g_0
\end{pmatrix}
\begin{pmatrix} f_0 \\ f_1 \\ f_2 \\ \vdots \\ f_{N-1} \end{pmatrix}.

Note that each column in G is a cyclic down-shifted version of the previous column. Such a matrix is called a circulant matrix. We have

G = [\,g \;\; C_N g \;\; \cdots \;\; C_N^{N-1} g\,] = g_0 I + g_1 C_N + \cdots + g_{N-1} C_N^{N-1}, \qquad (4.7.16)

and C_N is the circulant permutation matrix

C_N = \begin{pmatrix}
0 & 0 & \cdots & 0 & 1 \\
1 & 0 & \cdots & 0 & 0 \\
0 & 1 & \cdots & 0 & 0 \\
\vdots & & \ddots & & \vdots \\
0 & 0 & \cdots & 1 & 0
\end{pmatrix}. \qquad (4.7.17)
The following result is easily verified.
Lemma 4.7.4.
The eigenvalues of the circulant matrix C_N in (4.7.17) are

\omega^j = e^{-2\pi i j/N}, \quad j = 0 : N - 1,

where ω is an Nth root of unity, i.e., ω^N = 1. The columns of the DFT matrix F_N,

x_j = (1, \omega^j, \ldots, \omega^{(N-1)j})^T, \quad j = 0 : N - 1,

are eigenvectors.

Since the matrix G in (4.7.16) is a polynomial in C_N it has the same set of eigenvectors, and thus G is diagonalized by the DFT matrix F_N,

G = F_N^{-1} X F_N, \quad X = \operatorname{diag}(\lambda_0, \ldots, \lambda_{N-1}), \qquad (4.7.18)

where the eigenvalues of G are

\lambda_j = (1, \omega^j, \ldots, \omega^{(N-1)j})\, g, \quad j = 0 : N - 1,

which is the FFT of the first column in G. Hence X = \operatorname{diag}(F_N g), where diag(x) denotes a diagonal matrix with diagonal elements equal to the elements in the vector x.
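The diagonalization (4.7.18) is easy to verify numerically. The sketch below (illustrative only; it forms F_N explicitly, and the helper name circulant is not from the text) builds G column by column as C_N^k g and checks that G = F_N^{-1} diag(F_N g) F_N.

```python
import numpy as np

def circulant(g):
    """Circulant matrix G = [g, C_N g, ..., C_N^(N-1) g]; each column is a
    cyclic down-shift of the previous one, cf. (4.7.16)."""
    g = np.asarray(g, dtype=complex)
    return np.column_stack([np.roll(g, k) for k in range(len(g))])

N = 8
rng = np.random.default_rng(0)
g = rng.standard_normal(N)
G = circulant(g)

# DFT matrix F_N with entries omega^(j*k), omega = exp(-2*pi*i/N)
idx = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(idx, idx) / N)

# Check the diagonalization (4.7.18): G = F_N^{-1} diag(F_N g) F_N
print(np.allclose(G, np.linalg.inv(F) @ np.diag(F @ g) @ F))   # True
```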
Theorem 4.7.5.
Let f_i and g_i, i = 0 : N − 1, be two sequences with DFTs F_N f and F_N g. Then the DFT of the convolution of f and g is (F_N f) .∗ (F_N g), where .∗ denotes the elementwise product.
Proof.
From G = F_N^{-1} \operatorname{diag}(F_N g) F_N it follows that

h = Gf = F_N^{-1} \operatorname{diag}(F_N g) F_N f = F_N^{-1} ((F_N g) .∗ (F_N f)). \qquad (4.7.19)

This shows that using the FFT algorithm the discrete convolution can be computed in O(N \log_2 N) operations as follows. First the two FFTs of f and g are computed and multiplied (pointwise) together. Then the inverse DFT of this product is computed. This is one of the most useful properties of the FFT!
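With a library FFT the O(N log N) procedure is only a few lines. The following sketch (using numpy.fft, whose forward transform uses the same sign convention e^{-2πijk/N} as the DFT here) implements (4.7.19) and compares the result with the O(N^2) sum from the definition.

```python
import numpy as np

def fft_convolution(f, g):
    """Circular convolution via (4.7.19): inverse FFT of the elementwise
    product of the FFTs of f and g; O(N log N) operations."""
    return np.fft.ifft(np.fft.fft(f) * np.fft.fft(g))

# Compare with the O(N^2) sum from the definition (4.7.15)
N = 16
rng = np.random.default_rng(1)
f, g = rng.standard_normal(N), rng.standard_normal(N)
h_direct = np.array([sum(g[(j - k) % N] * f[k] for k in range(N))
                     for j in range(N)])
print(np.allclose(fft_convolution(f, g), h_direct))   # True
```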
Using the Gentleman–Sande algorithm F_N = P_N A^T for the forward DFT and the Cooley–Tukey algorithm F_N = A P_N for the inverse DFT,

F_N^{-1} = \frac{1}{N} F_N^H = \frac{1}{N} \bar{A} P_N,

we get from (4.7.19)

h = \frac{1}{N} \bar{A} P_N ((P_N A^T f) .∗ (P_N A^T g)) = \frac{1}{N} \bar{A} ((A^T f) .∗ (A^T g)). \qquad (4.7.20)

Here the permutation drops out because the elementwise product of two identically permuted vectors equals the permuted product, and P_N^2 = I. This shows that h can be computed without the bit-reversal permutation P_N, which typically can save 10–30 percent of the overall computation time.
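The saving can be demonstrated with an explicit radix-2 implementation. The following sketch (written for this note, assuming N is a power of two; it is not the text's factorization into A and P_N, and the function names are illustrative) applies Gentleman–Sande butterflies for the forward transforms, which leave both spectra in bit-reversed order, multiplies them elementwise, and feeds the product directly into Cooley–Tukey butterflies for the inverse transform, so no bit-reversal pass is ever performed.

```python
import numpy as np

def fft_dif(x):
    """Gentleman-Sande (decimation-in-frequency) radix-2 FFT:
    natural-order input, bit-reversed-order output (i.e. P_N F_N x)."""
    x = np.array(x, dtype=complex)
    N, m = len(x), len(x)
    while m >= 2:
        half = m // 2
        w = np.exp(-2j * np.pi * np.arange(half) / m)    # twiddle factors
        for s in range(0, N, m):
            a, b = x[s:s + half].copy(), x[s + half:s + m].copy()
            x[s:s + half] = a + b
            x[s + half:s + m] = (a - b) * w
        m = half
    return x

def ifft_dit(X):
    """Cooley-Tukey (decimation-in-time) radix-2 inverse FFT:
    bit-reversed-order input, natural-order output."""
    X = np.array(X, dtype=complex)
    N, m = len(X), 2
    while m <= N:
        half = m // 2
        w = np.exp(2j * np.pi * np.arange(half) / m)     # conjugate twiddles
        for s in range(0, N, m):
            a = X[s:s + half].copy()
            b = X[s + half:s + m].copy() * w
            X[s:s + half] = a + b
            X[s + half:s + m] = a - b
        m *= 2
    return X / N

def fft_convolution_no_bitrev(f, g):
    """Convolution as in (4.7.20): both spectra are bit-reversed in the
    same way, so their elementwise product is too, and the DIT inverse
    unscrambles it -- no permutation step is needed."""
    return ifft_dit(fft_dif(f) * fft_dif(g))

# Quick check against a library FFT for N a power of two
N = 32
rng = np.random.default_rng(2)
f, g = rng.standard_normal(N), rng.standard_normal(N)
print(np.allclose(fft_convolution_no_bitrev(f, g),
                  np.fft.ifft(np.fft.fft(f) * np.fft.fft(g))))   # True
```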