
12.2 Application to Digital Signal Processing

The linear convolution of two signals x[n] and y[n] of lengths N and M is obtained in the frequency domain by following these three steps:

1. Compute the DFTs X[k] and Y[k] of x[n] and y[n], each of length N + M − 1.

2. Multiply these complex DFTs to get U[k] = X[k]Y[k].

3. Compute the IDFT of U[k], which corresponds to the convolution x[n] ∗ y[n].

Implementing the DFT and the IDFT with the FFT algorithm, it can be shown that the computational complexity of these three steps is much smaller than that of computing the convolution sum directly using the conv function.
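The three steps above can be sketched in a few lines. The book's own scripts use MATLAB; the sketch below uses Python with NumPy instead, and the function name fft_convolve is chosen here for illustration:

```python
import numpy as np

def fft_convolve(x, y):
    """Linear convolution of x and y via the three-step DFT procedure."""
    L = len(x) + len(y) - 1     # length of the linear convolution, N + M - 1
    X = np.fft.fft(x, L)        # step 1: length-L DFTs of both signals
    Y = np.fft.fft(y, L)
    U = X * Y                   # step 2: multiply the complex DFTs
    return np.fft.ifft(U).real  # step 3: IDFT gives x[n] * y[n] (real inputs)

x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 1.0])
print(fft_convolve(x, y))       # agrees with np.convolve(x, y)
```

Zero-padding both DFTs to length N + M − 1 is what turns the circular convolution implied by the DFT into the desired linear convolution.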

To demonstrate the efficiency of the FFT implementation, we consider the convolution of a signal with itself for increasing lengths. The signal is a sequence of ones, of length increasing from 1000 to 10,000 samples. The CPU times used by the conv function and by the FFT three-step procedure are measured and compared for each length. The CPU time used by conv is divided by 10 so that it can be plotted together with the CPU time of the FFT-based procedure computed in the following script. The results are shown in Figure 12.4.

%%%%%%%%%%%%%%%%%%%%%%%%%%%
% example 12.4 -- conv vs fft
%%%%%%%%%%%%%%%%%%%%%%%%%%%
time1 = zeros(1,10); time2 = time1;
for i = 1:10,
   NN = 1000*i; x = ones(1,NN);
   M = 2*NN - 1; t0 = cputime;
   y = conv(x,x);            % convolution using conv
   time1(i) = cputime - t0; t1 = cputime;
   X = fft(x,M); Y = X.*X; y1 = ifft(Y);  % convolution using fft
   time2(i) = cputime - t1;
   sum(y - y1)               % check conv and fft results coincide
   pause                     % check for small difference
end

FIGURE 12.4
CPU times for the fft and the conv functions when computing the convolution of sequences of ones of lengths N = 1000 to 10,000. The CPU time used by conv is divided by 10. (Axes: length of the convolution sum versus CPU time in seconds.)
■
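The timing gap can be anticipated from rough operation counts, which are an estimate added here rather than figures from the text: the direct sum needs on the order of N² multiplications for two length-N sequences, while the FFT route needs roughly three FFTs of length L = 2N − 1 (padded up to a power of 2), each costing about (L/2) log₂ L complex multiplications. A Python sketch under those assumed counts:

```python
import math

# Rough multiplication counts (an estimate, not measured data):
# direct convolution of two length-N sequences ~ N*N products;
# FFT route ~ three length-L FFTs, each ~ (L/2) * log2(L) products.
for N in (1000, 5000, 10000):
    L = 2 ** math.ceil(math.log2(2 * N - 1))  # pad 2N-1 up to a power of 2
    direct = N * N
    fft_based = 3 * (L // 2) * int(math.log2(L))
    print(N, direct, fft_based, round(direct / fft_based, 1))
```

Even for N = 1000 the estimated ratio is already well above an order of magnitude, and it grows with N, consistent with the trend in Figure 12.4.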

Gauss and the FFT

Going back to the sources used by FFT researchers, it was discovered that many well-known mathematicians had developed similar algorithms for different values of N. But it was an interesting, if not surprising, discovery that an algorithm similar to the modern FFT had been developed and used by Carl Gauss, the German mathematician, probably in 1805, predating even Fourier's work on harmonic analysis in 1807 [31]. Gauss has been called the "prince of mathematicians" for his prodigious work in so many areas of mathematics and for his dedication to his work. His motto was Pauca sed matura (few, but ripe); he would not disclose any of his work until he was fully satisfied with it. Moreover, as was customary in his time, his treatises were written in Latin using a difficult mathematical notation, which kept his results from being known or understood by modern researchers. Gauss's treatise describing the algorithm was not published in his lifetime, but appeared later in his collected works. Nevertheless, he deserves credit as the father of the FFT algorithm.

The developments leading to the FFT, as indicated by Cooley [14], point out two important concepts in numerical analysis (the first of which applies to research in other areas): (1) the divide-and-conquer approach—that is, it pays to break a problem into smaller pieces of the same structure; and (2) the asymptotic behavior of the number of operations. Cooley’s final recommendations in his paper are worth serious consideration by researchers in technical areas:

Prompt publication of significant achievements is essential.

Review of old literature can be rewarding.

Communication among mathematicians, numerical analysts, and workers in a wide range of applications can be fruitful.

Do not publish papers in neoclassic Latin.
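Cooley's divide-and-conquer point can be made concrete with a minimal recursive radix-2 FFT, sketched here in Python (an illustration added to the text, not code from it): a length-n DFT splits into two length-n/2 DFTs of the even- and odd-indexed samples, combined with n/2 twiddle-factor multiplications, which yields the n log n operation count.

```python
import cmath

def fft_radix2(x):
    """Recursive radix-2 FFT; len(x) must be a power of 2."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft_radix2(x[0::2])   # divide: two half-size subproblems
    odd = fft_radix2(x[1::2])
    out = [0j] * n
    for k in range(n // 2):      # conquer: combine with twiddle factors
        w = cmath.exp(-2j * cmath.pi * k / n)
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out
```

Each level of recursion does O(n) work and there are log₂ n levels, giving the O(n log n) asymptotic behavior that Cooley singles out as the second key concept.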