4.7.3 FFTs of Real Data
Frequently the FFT of a real data vector is required. The complex FFT algorithm can still be used, but it is inefficient both in terms of storage and of operations. By using symmetries in the DFT, which correspond to the symmetries of the Fourier transform noted in Table 4.6.1,
better alternatives can be found.

Consider the DFT matrix for $N = 4$ in (4.7.8). Note that the fourth row is the conjugate of the second row. This is not a coincidence: the conjugate transpose of the DFT matrix $F_N$ can be obtained by reversing the order of its last $N - 1$ rows. Let $T_N$ be the $N \times N$ permutation matrix obtained by reversing the last $N - 1$ columns of the unit matrix $I_N$. For example,
$$
T_4 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix}.
$$
Then it holds that
$$
F_N^H = \overline{F}_N = T_N F_N = F_N T_N.
$$
We first verify that $\overline{F}_N = T_N F_N$ by observing that row $j$ of $T_N F_N$ is row $N - j$ of $F_N$, whose entries $\omega^{(N-j)k} = \omega^{-jk} = \overline{\omega^{jk}}$ are the conjugates of the entries in row $j$ of $F_N$, $1 \le j \le N - 1$. Since $F_N$ and $T_N$ are both symmetric, we also have $F_N^H = (T_N F_N)^T = F_N^T T_N^T = F_N T_N$.
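The identity above is easy to check numerically. Below is a minimal NumPy sketch (not from the text), assuming the convention $\omega = e^{-2\pi i/N}$ for the entries $\omega^{jk}$ of $F_N$:

```python
import numpy as np

N = 8
# DFT matrix F_N with entries omega^{jk}, omega = exp(-2*pi*i/N)
j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
F = np.exp(-2j * np.pi * j * k / N)

# T_N: the identity matrix with its last N-1 columns reversed
T = np.eye(N)[:, [0] + list(range(N - 1, 0, -1))]

# conj(F_N) = T_N F_N = F_N T_N, and F_N^H = conj(F_N) since F_N is symmetric
assert np.allclose(np.conj(F), T @ F)
assert np.allclose(np.conj(F), F @ T)
assert np.allclose(F.conj().T, np.conj(F))
```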
We say that a vector $y \in \mathbb{C}^N$ is conjugate even if $\bar{y} = T_N y$, and conjugate odd if $\bar{y} = -T_N y$. Suppose now that $f$ is real and $u = F_N f$. Then it follows that
$$
\bar{u} = \overline{F}_N f = T_N F_N f = T_N u,
$$
i.e., $u$ is conjugate even. If a vector $u$ of even length $N$ is conjugate even, this implies that
$$
u_j = \bar{u}_{N-j}, \quad j = 1 : N/2.
$$
In particular, $u_0$ and $u_{N/2}$ are real.
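This conjugate-even structure can be observed directly in the output of a library FFT applied to real data; a small NumPy check (NumPy's `fft` uses the $e^{-2\pi i jk/N}$ convention assumed here):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
f = rng.standard_normal(N)   # real data
u = np.fft.fft(f)            # u = F_N f

# Conjugate-even symmetry: u_j = conj(u_{N-j}) for j = 1 : N-1
assert np.allclose(u[1:], np.conj(u[:0:-1]))

# u_0 and u_{N/2} are real
assert abs(u[0].imag) < 1e-12
assert abs(u[N // 2].imag) < 1e-12
```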
For purely imaginary data $g$ and $v = F_N g$, we have
$$
\bar{v} = \overline{F}_N \bar{g} = -\overline{F}_N g = -T_N F_N g = -T_N v,
$$
i.e., $v$ is conjugate odd. Some other useful symmetry properties are given in Table 4.7.1.
We have proved the first two properties; the others are established similarly and we leave the proofs to the reader; see Problem 4.7.4.
Table 4.7.1. Useful symmetry properties of the DFT.
| Data $f$ | Definition | DFT $F_N f$ |
|---|---|---|
| real | $\bar{f} = f$ | conjugate even |
| imaginary | $\bar{f} = -f$ | conjugate odd |
| real even | $f = T_N f$ | real |
| real odd | $f = -T_N f$ | imaginary |
| conjugate even | $\bar{f} = T_N f$ | real |
| conjugate odd | $\bar{f} = -T_N f$ | imaginary |
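The real even and real odd rows of the table can be spot-checked numerically. A small sketch (not from the text), assuming NumPy's default DFT convention:

```python
import numpy as np

N = 8
# T_N: the identity matrix with its last N-1 columns reversed
T = np.eye(N)[:, [0] + list(range(N - 1, 0, -1))]

rng = np.random.default_rng(1)
f = rng.standard_normal(N)
f_even = f + T @ f   # real even: f = T_N f
f_odd = f - T @ f    # real odd:  f = -T_N f

assert np.allclose(np.fft.fft(f_even).imag, 0)  # DFT is real
assert np.allclose(np.fft.fft(f_odd).real, 0)   # DFT is purely imaginary
```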
We now outline how symmetries can be used to compute the DFTs $u = F_N f$ and $v = F_N g$ of two real functions $f$ and $g$ simultaneously. First form the complex function
f + ig and compute its DFT
$$
w = F_N(f + ig) = u + iv
$$
by any complex FFT algorithm. Multiplying by $T_N$ and using that $u$ and $v$ are conjugate even (so $T_N u = \bar{u}$ and $T_N v = \bar{v}$), we have
$$
T_N w = T_N F_N(f + ig) = T_N(u + iv) = \bar{u} + i\bar{v}.
$$
Adding and subtracting these two equations we obtain
$$
w + T_N w = (u + \bar{u}) + i(v + \bar{v}), \qquad w - T_N w = (u - \bar{u}) + i(v - \bar{v}).
$$
We can now retrieve the two DFTs from
$$
u = F_N f = \frac{1}{2}\bigl( \operatorname{Re}(w + T_N w) + i \operatorname{Im}(w - T_N w) \bigr), \qquad (4.7.22)
$$
$$
v = F_N g = \frac{1}{2}\bigl( \operatorname{Im}(w + T_N w) - i \operatorname{Re}(w - T_N w) \bigr). \qquad (4.7.23)
$$
Note that because of the conjugate even property of u and v there is no need to save the entire transforms.
The above scheme is convenient when, as for convolutions, two real transforms are involved. It can also be used to compute efficiently the DFT of a single real function of length $N = 2^p$. First express this DFT as a combination of two real FFTs of length $N/2$ corresponding to the even- and odd-numbered data points (as in (4.7.5)). Then apply the procedure above to compute these two real FFTs simultaneously.
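The retrieval formulas (4.7.22)-(4.7.23) can be sketched in a few lines of NumPy. The function name `two_real_ffts` is ours, not from the text, and NumPy's `fft` is assumed to match the DFT convention used here:

```python
import numpy as np

def two_real_ffts(f, g):
    """Compute u = F_N f and v = F_N g for real vectors f, g
    using a single complex FFT, via (4.7.22)-(4.7.23)."""
    w = np.fft.fft(f + 1j * np.asarray(g))
    # T_N w: keep component 0, reverse the last N-1 components
    Tw = np.concatenate(([w[0]], w[:0:-1]))
    u = 0.5 * ((w + Tw).real + 1j * (w - Tw).imag)
    v = 0.5 * ((w + Tw).imag - 1j * (w - Tw).real)
    return u, v

rng = np.random.default_rng(2)
f, g = rng.standard_normal(16), rng.standard_normal(16)
u, v = two_real_ffts(f, g)
assert np.allclose(u, np.fft.fft(f))
assert np.allclose(v, np.fft.fft(g))
```

One complex FFT of length $N$ thus replaces two, at the cost of $O(N)$ extra additions for the unscrambling step.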