
8.18 EXTRAPOLATION OF BANDLIMITED SIGNALS

Extrapolation means extending a signal outside a known interval. Extrapolation in the spatial coordinates could improve the spectral resolution of an image, whereas frequency domain extrapolation could improve the spatial resolution. Such problems arise in power spectrum estimation, resolution of closely spaced objects in radio astronomy, radar target detection, geophysical exploration, and the like.

Analytic Continuation

A bandlimited signal f(x) can be determined completely from the knowledge of it over an arbitrary finite interval [-a, a]. This follows from the fact that a bandlimited function is an analytic function because its Taylor series

f(x + \alpha) = f(x) + \sum_{n=1}^{\infty} \frac{\alpha^n}{n!} f^{(n)}(x) \qquad (8.227)

is convergent for all x and \alpha. By letting x \in [-a, a] and x + \alpha > a, (8.227) can be used to extrapolate f(x) anywhere outside the interval [-a, a].
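As a numerical illustration of (8.227), the following sketch (not from the text; the signal f(x) = sin(wx) and all values are illustrative) extrapolates a bandlimited signal using exact derivatives, which are available here in closed form:

```python
import numpy as np
from math import factorial

def taylor_extrapolate(derivs_at_x, alpha, terms):
    # Evaluate f(x + alpha) by the Taylor series (8.227), given the
    # derivatives derivs_at_x[n] = f^(n)(x) at the expansion point x.
    return sum(derivs_at_x[n] * alpha**n / factorial(n) for n in range(terms))

# f(x) = sin(w x) is bandlimited; d^n/dx^n sin(w x) = w^n sin(w x + n pi/2),
# so its derivatives at x0 are known exactly here.
w, x0 = 0.8, 0.5
derivs = [w**n * np.sin(w * x0 + n * np.pi / 2) for n in range(30)]

# Extrapolate from x0 (inside the known interval) to x0 + 2 (outside it).
approx = taylor_extrapolate(derivs, 2.0, 30)
exact = np.sin(w * (x0 + 2.0))
```

With exact derivatives the series reproduces f far outside the interval; the point made below is that numerically estimated high-order derivatives destroy this in practice.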

Super-resolution

The foregoing ideas can also be applied to a space-limited function (i.e., f(x) = 0 for |x| > a) whose Fourier transform is given over a finite frequency band. This means, theoretically, that a finite object imaged by a diffraction-limited system can be perfectly resolved by extrapolation in the Fourier domain. Extrapolation of the spectrum of an object beyond the diffraction limit of the imaging system is called super-resolution.


Extrapolation via Prolate Spheroidal Wave Functions (PSWFs) [60]

The high-order derivatives in (8.227) are extremely sensitive to noise and truncation errors. This makes the analytic continuation method impractical for signal extrapolation. An alternative is to evaluate f(x) by the series expansion

f(x) = \sum_{n=0}^{\infty} a_n \phi_n(x), \quad \forall x \qquad (8.228)

a_n = \frac{1}{\lambda_n} \int_{-a}^{a} f(x) \phi_n(x)\, dx \qquad (8.229)

where the \phi_n(x) are called the prolate spheroidal wave functions (PSWFs). These functions are bandlimited, orthonormal over (-\infty, \infty), and complete in the class of bandlimited functions. Moreover, in the interval -a \le x \le a, the \phi_n(x) are complete and orthogonal, with \langle \phi_n, \phi_m \rangle = \lambda_n \delta(n - m), where \lambda_n > 0 is the norm \|\phi_n\|^2 over [-a, a]. Using this property in (8.228), a_n can be obtained from the knowledge of f(x) over [-a, a] via (8.229). Given the a_n in (8.228), f(x) can be extrapolated for all values of x.

In practice, we would truncate the above series to a finite but sufficient number of terms. In the presence of noise, the extrapolation error increases rapidly with the number of terms in the series (Problem 8.28). Also, the numerical computation of the PSWFs themselves is a difficult task, marred by its own truncation and round-off errors. Because of these difficulties, the preceding extrapolation algorithm is also quite impractical. However, the PSWFs remain fundamentally important for the analysis of bandlimited signals.

Extrapolation by Error Energy Reduction [61, 62]

An interesting and more practical extrapolation algorithm is based on a principle of successive energy reduction (Fig. 8.30). First the given function g(x) \triangleq g_0(x) = f(x), x \in [-a, a], is low-pass filtered by truncating its Fourier transform to zero outside the interval (-\xi_0, \xi_0). This reduces the error energy in the result f_1(x) because the signal is known to be bandlimited. To prove this, we use the Parseval formula to obtain

\int_{-\infty}^{\infty} |f(x) - g_0(x)|^2\, dx = \int_{-\infty}^{\infty} |F(\xi) - G_0(\xi)|^2\, d\xi = \int_{-\infty}^{\infty} |F(\xi) - F_1(\xi)|^2\, d\xi + \int_{|\xi| > \xi_0} |G_0(\xi)|^2\, d\xi > \int_{-\infty}^{\infty} |f(x) - f_1(x)|^2\, dx \qquad (8.230)

Now f_1(x) is bandlimited but does not match the observations over [-a, a]. The error energy is reduced once again if f_1(x) is replaced by f(x) over -a \le x \le a.

Letting \mathcal{S} denote this space-limiting operation, we obtain


Figure 8.30 Extrapolation by successive energy reduction.

Now g1 (x), not being bandlimited anymore, is low-pass filtered, and the preceding procedure is repeated. This gives the iterative algorithm

g_n(x) = g_0(x) + (\mathcal{I} - \mathcal{S}) f_n(x), \qquad n = 1, 2, \ldots

f_n(x) \triangleq \mathcal{B} g_{n-1}(x), \qquad g_0(x) = g(x) = \mathcal{S} f(x)

where \mathcal{I} is the identity operator and \mathcal{B} is the bandlimiting operator. In the limit as n \to \infty, both f_n(x) and g_n(x) converge to f(x) in the mean square sense [62]. It can be shown that this algorithm is a special case of a gradient algorithm associated with a

least squares minimization problem [65]. This algorithm is also called the method of alternating projections because the iterates are projected alternately onto the spaces of bandlimited and space-limited functions. Such algorithms are useful for solving image restoration problems that include a certain class of constraints [53, 63, 64].
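A discrete sketch of this alternation (the sampled form is often called the Papoulis-Gerchberg algorithm; the grid size, cutoff, and signal below are illustrative assumptions, with an FFT low-pass standing in for \mathcal{B} and reimposition of the known samples standing in for the space-limiting step):

```python
import numpy as np

def energy_reduction_extrapolate(z, known, cutoff, n_iter=200):
    # Alternate between bandlimiting (ideal FFT low-pass, the B step)
    # and reimposing the known samples (the space-limiting step).
    N = len(z)
    freqs = np.fft.fftfreq(N)            # normalized frequencies, cycles/sample
    keep = np.abs(freqs) <= cutoff       # passband of the ideal low-pass filter
    g = np.where(known, z, 0.0)          # g_0: observations, zero elsewhere
    for _ in range(n_iter):
        f = np.fft.ifft(np.fft.fft(g) * keep).real   # f_n = B g_{n-1}
        g = np.where(known, z, f)                    # g_n: restore known samples
    return f

# A signal exactly bandlimited on the DFT grid, known only on a central window.
N = 256
m = np.arange(N)
y = np.sin(2 * np.pi * 5 * m / N) + 0.5 * np.cos(2 * np.pi * 8 * m / N)
known = (m >= 96) & (m < 160)
y_hat = energy_reduction_extrapolate(np.where(known, y, 0.0), known, 0.05)
```

By the energy reduction argument of (8.230), each iteration does not increase the error energy relative to the first low-pass estimate f_1.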


Extrapolation of Sampled Signals [65]

For a small perturbation

\tilde{f}(x) = f(x) + \epsilon \eta(x), \qquad \epsilon \ne 0

where \eta(x) is not bandlimited, the desired analyticity of \tilde{f}(x) is lost. It is then possible to find a large number of functions that approximate f(x) very closely on the observation interval [-a, a] but differ greatly outside this interval. This situation is inevitable when one tries to implement the extrapolation algorithms digitally. Typically, the observed bandlimited function is oversampled, so it can be estimated quite accurately by interpolating the finite number of samples over [-a, a]. However, the interpolated signal cannot be bandlimited. Recognizing this difficulty, we consider extrapolation of sampled bandlimited signals. This approach

leads to more practical extrapolation algorithms.

Definitions. A sequence y(n) is called bandlimited if its Fourier transform Y(\omega), -\pi \le \omega \le \pi, satisfies the condition

Y(\omega) = 0, \qquad w_1 < |\omega| \le \pi, \quad w_1 < \pi

This implies that y(n) comes from a bandlimited signal that has been oversampled with respect to its Nyquist rate. Analogous to \mathcal{B} and \mathcal{S}, the bandlimiting and space-limiting operators, denoted by L and S, respectively, are now \infty \times \infty and (2M + 1) \times \infty matrix operators defined as

[Ly]_m \triangleq \sum_{n=-\infty}^{\infty} \frac{\sin w_1 (m - n)}{\pi (m - n)}\, y(n) \;\Rightarrow\; \mathcal{F}\{[Ly]_m\} = \begin{cases} Y(\omega), & |\omega| < w_1 \\ 0, & w_1 < |\omega| \le \pi \end{cases}

[Sy]_m = y(m), \quad -M \le m \le M \qquad (8.237)

By definition, then, L is symmetric and idempotent, that is, L^T = L and L^2 = L (repeated ideal low-pass filtering produces the same result).

The Extrapolation Problem. Let y(m) be a bandlimited sequence. We are given a set of space-limited, noise-free observations

z(m) = y(m), \quad -M \le m \le M \qquad (8.238)

Given z(m), extrapolate y(m) outside the interval [-M, M].
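These definitions can be checked numerically. The sketch below (M and w_1 are illustrative values, not from the text) forms the (2M + 1) x (2M + 1) section S L S^T of the low-pass operator, whose entries are \sin w_1 (m - n)/\pi (m - n); being a section of an orthogonal projection, it is symmetric with eigenvalues in (0, 1), and its trace equals (2M + 1) w_1/\pi:

```python
import numpy as np

def sinc_kernel(d, w1):
    # Ideal low-pass kernel sin(w1 d) / (pi d); equals w1/pi at d = 0.
    return (w1 / np.pi) * np.sinc(w1 * d / np.pi)

def section_of_L(M, w1):
    # (2M+1) x (2M+1) section S L S^T: entries sin(w1 (m-n)) / (pi (m-n)).
    m = np.arange(-M, M + 1)
    return sinc_kernel(m[:, None] - m[None, :], w1)

Lh = section_of_L(8, 0.1 * np.pi)
eigvals = np.linalg.eigvalsh(Lh)   # symmetric, so eigvalsh applies
```

The smallest eigenvalues are theoretically positive but numerically indistinguishable from zero, which is why this matrix becomes severely ill-conditioned as M grows.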

Minimum Norm Least Squares (MNLS) Extrapolation

Let z denote the (2M + 1) \times 1 vector of observations and let y denote the infinite vector of \{y(n), \forall n\}. Then z = Sy. Since y(n) is a bandlimited sequence, Ly = y, and we can write

z = SLy = Ay, \qquad A \triangleq SL \qquad (8.239)

This can be viewed as an underdetermined image restoration problem, where A represents a (2M + 1) \times \infty PSF matrix. A unique solution that is bandlimited and has minimum norm is given by


y^+ \triangleq A^T (A A^T)^{-1} z = L^T S^T (S L L^T S^T)^{-1} z = L S^T (S L S^T)^{-1} z \triangleq L S^T \hat{L}^{-1} z \qquad (8.240)

where \hat{L} \triangleq S L S^T is a (2M + 1) \times (2M + 1), positive definite, Toeplitz matrix with elements \{\sin w_1 (m - n) / \pi (m - n), -M \le m, n \le M\}. The matrix

A^+ \triangleq A^T [A A^T]^{-1} = L S^T \hat{L}^{-1} \qquad (8.241)

is the pseudoinverse of A and is called the pseudoinverse extrapolation filter. The extrapolation algorithm requires first obtaining x \triangleq \hat{L}^{-1} z and then low-pass filtering the sequence \{x(m), -M \le m \le M\} to obtain the extrapolation as

y^+(m) = \sum_{i=-M}^{M} x(i)\, \frac{\sin w_1 (m - i)}{\pi (m - i)}, \qquad |m| > M \qquad (8.242)
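A minimal numerical sketch of this algorithm (M, w_1, and the signal are illustrative; the Toeplitz matrix S L S^T is formed explicitly). Evaluating the result back on the observation window reproduces z, since applying S to the extrapolation gives (S L S^T)(S L S^T)^{-1} z = z:

```python
import numpy as np

def mnls_extrapolate(z, M, w1, m_out):
    # y+(m) = sum_i x(i) sin(w1 (m - i)) / (pi (m - i)), with x solving
    # (S L S^T) x = z, as in (8.240)-(8.242).
    i = np.arange(-M, M + 1)
    kernel = lambda d: (w1 / np.pi) * np.sinc(w1 * d / np.pi)  # sin(w1 d)/(pi d)
    L_sec = kernel(i[:, None] - i[None, :])   # (2M+1) x (2M+1) Toeplitz matrix
    x = np.linalg.solve(L_sec, z)             # time-varying FIR filtering step
    m_out = np.asarray(m_out, dtype=float)
    return kernel(m_out[:, None] - i[None, :]) @ x  # zero pad + ideal low-pass

M, w1 = 3, 0.7 * np.pi
m_obs = np.arange(-M, M + 1)
z = np.sin(0.2 * np.pi * m_obs)               # oversampled: 0.2*pi < w1
y_plus = mnls_extrapolate(z, M, w1, np.arange(-3 * M, 3 * M + 1))
```

The modest M and comparatively large w_1 keep the Toeplitz matrix well conditioned; the ill-conditioned regime is the subject of the iterative algorithms below.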

This means the MNLS extrapolator is a time-varying FIR filter [\hat{L}^{-1}]_{m, m'} followed by a zero padder (S^T) and an ideal low-pass filter (L) (Fig. 8.31).

Iterative Algorithms

Although \hat{L} is positive definite, it becomes increasingly ill-conditioned as M increases. In such instances, iterative algorithms that give a stabilized inverse of \hat{L} are useful [65]. An example is the conjugate gradient algorithm obtained by substituting A = \hat{L} and g_0 = -z into (8.136). At n = 2M + 1, let x \triangleq u_n. Then y_n \triangleq L S^T u_n converges to y^+. Whenever \hat{L} is ill-conditioned, the algorithm is terminated when \beta_n becomes small for n < 2M + 1. Compared to the energy reduction algorithm, the iterations here are performed on finite-size vectors, and only a finite number of iterations are required for convergence.
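A sketch of such an iterative solve, using a generic textbook conjugate gradient recursion (not the specific variable names of (8.136)) applied to (S L S^T) x = z; terminating when the residual becomes small yields a stabilized approximate inverse:

```python
import numpy as np

def cg_solve(A, b, max_iter=None, tol=1e-10):
    # Conjugate gradient for symmetric positive definite A; stopping when
    # the residual is small acts as a stabilized, regularized inverse.
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter or len(b)):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:     # residual small: stop before n = 2M+1
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Illustrative, well-conditioned instance of the Toeplitz sinc matrix.
M, w1 = 3, 0.7 * np.pi
i = np.arange(-M, M + 1)
kernel = lambda d: (w1 / np.pi) * np.sinc(w1 * d / np.pi)
L_sec = kernel(i[:, None] - i[None, :])
z = np.sin(0.2 * np.pi * i)
x_cg = cg_solve(L_sec, z)
```

For this well-conditioned choice the recursion effectively reaches the direct solution within 2M + 1 steps; for ill-conditioned matrices, stopping earlier trades fidelity for stability.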

Discrete Prolate Spheroidal Sequences (DPSS)

Similar to the PSWF expansion in the continuous case, it is possible to obtain the MNLS extrapolation via the expansion

y^+(m) = \sum_{k=1}^{2M+1} a_k \phi_k(m) \qquad (8.243)

Figure 8.31 MNLS extrapolation: the time-varying FIR filter [\hat{L}^{-1}]_{m, m'}, followed by zero padding (S^T) and ideal low-pass filtering (L).


where the DPSS \phi_k(m) are obtained from the orthonormal eigenvectors \psi_k of \hat{L}, that is, \hat{L} \psi_k = \lambda_k \psi_k, as

\phi_k \triangleq \frac{1}{\sqrt{\lambda_k}}\, L S^T \psi_k, \qquad k = 1, \ldots, 2M + 1 \qquad (8.244)

Like the PSWFs, the DPSS \phi_k(m) are bandlimited (that is, L \phi_k = \phi_k), complete, and orthogonal in the interval -M \le m \le M. They are complete and orthonormal in the infinite interval. Using these properties, we can obtain S \phi_k = \sqrt{\lambda_k}\, \psi_k and simplify (8.243) to give the algorithm

y^+(m) = \sum_{k=1}^{2M+1} \frac{\psi_k^T z}{\sqrt{\lambda_k}}\, \phi_k(m) \qquad (8.245)

In practice, the series summation is carried up to some K \le 2M + 1, where the neglected terms correspond to the smallest values of \lambda_k.

Mean Square Extrapolation

In the presence of additive independent noise, the observation equation becomes

z = Ay + \eta = SLy + \eta \qquad (8.246)

The best linear mean square extrapolator is then given by the Wiener filter

\hat{y} = R_y S^T [S R_y S^T + R_\eta]^{-1} z \qquad (8.247)

where R_y and R_\eta are the autocorrelation matrices of y and \eta, respectively. If the autocorrelation of y is unknown, it is convenient to assume R_y = \sigma^2 L (that is, the power spectrum of y is bandlimited and constant). Then, assuming the noise to be white with R_\eta = \sigma_\eta^2 I, we obtain

\hat{y} = L S^T \left[\hat{L} + \frac{\sigma_\eta^2}{\sigma^2} I\right]^{-1} z \qquad (8.248)

If \sigma_\eta^2 \to 0, then \hat{y} \to y^+, the MNLS extrapolation. A recursive Kalman filter implementation of (8.248) is also possible [65].
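A sketch of (8.248) with illustrative values (not the book's example): the noise-to-signal ratio \sigma_\eta^2/\sigma^2 acts as a regularization parameter on the Toeplitz matrix S L S^T, and setting it to zero recovers the MNLS extrapolator exactly:

```python
import numpy as np

def ms_extrapolate(z, M, w1, m_out, noise_to_signal):
    # Mean square extrapolator: solve [S L S^T + (sig_n^2/sig^2) I] x = z,
    # then zero pad and ideal low-pass filter, as in (8.248).
    i = np.arange(-M, M + 1)
    kernel = lambda d: (w1 / np.pi) * np.sinc(w1 * d / np.pi)
    L_sec = kernel(i[:, None] - i[None, :])
    x = np.linalg.solve(L_sec + noise_to_signal * np.eye(2 * M + 1), z)
    m_out = np.asarray(m_out, dtype=float)
    return kernel(m_out[:, None] - i[None, :]) @ x

M, w1 = 3, 0.7 * np.pi
m_obs = np.arange(-M, M + 1)
rng = np.random.default_rng(0)
z_noisy = np.sin(0.2 * np.pi * m_obs) + 0.05 * rng.standard_normal(2 * M + 1)

y_reg = ms_extrapolate(z_noisy, M, w1, m_obs, 0.05**2)   # regularized
y_mnls = ms_extrapolate(z_noisy, M, w1, m_obs, 0.0)      # reduces to MNLS
```

The regularized solution no longer interpolates the noisy observations exactly, which is what stabilizes the extrapolated region.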

Example 8.9

Figure 8.32a shows the signal y(m) = \sin(0.0792\pi m) + \sin(0.068\pi m), which is given for -8 \le m \le 8 (Fig. 8.32b) and is assumed to have a bandwidth of less than w_1 = 0.1\pi. Figures 8.32c and 8.32d show the extrapolations obtained via the iterative energy reduction and conjugate gradient algorithms. As expected, the latter algorithm has superior convergence. When the observations contain noise (13 dB below the signal power), these algorithms tend to be unstable (Fig. 8.32e), but the mean square extrapolation filter (Fig. 8.32f) improves the result. Comparison of Figures 8.32d and 8.32f shows that the extrapolated region can be severely limited due to noise.

Generalization to Two Dimensions

The foregoing extrapolation algorithms can be easily generalized to two (or higher) dimensions when the bandlimited and space-limited regions are rectangles


Figure 8.32 Comparison of extrapolation algorithms: (a) actual signal y(m); (b) observations z(m); (c), (d) extrapolated signals from the noise-free observations via the energy reduction and conjugate gradient algorithms; (e) extrapolation in the presence of noise (13 dB below the signal); (f) stabilized extrapolation in the presence of noise via the mean square extrapolation filter.


(or hyper-rectangles). Consider a two-dimensional sequence y(m, n), which is known over a finite observation window [-M, M] \times [-M, M] and bandlimited to [-w_1, w_1] \times [-w_1, w_1]. Let z(m, n) = y(m, n), -M \le m, n \le M. Then, using the operators L and S, we can write

Z = S L Y L S^T

where Z is a (2M + 1) \times (2M + 1) matrix containing the z(m, n). Defining \mathbf{z} and \mathbf{y} as the row-ordered mappings of Z and Y, \mathcal{S} \triangleq S \otimes S, and \mathcal{L} \triangleq L \otimes L, we get

\mathbf{z} = \mathcal{S} \mathcal{L} \mathbf{y} \triangleq \mathcal{A} \mathbf{y}, \qquad \mathcal{A} \triangleq \mathcal{S} \mathcal{L}

Similar to L, the two-dimensional low-pass filter matrix \mathcal{L} is symmetric and idempotent. All the foregoing one-dimensional algorithms can be recast in terms of \mathcal{S} and \mathcal{L}, from which the following two-dimensional versions follow.

MNLS extrapolation:

Y^+ = L S^T [\hat{L}^{-1} Z \hat{L}^{-1}] S L = A^+ Z A^{+T}

Conjugate Gradient Algorithm. Same as (8.137) with A_1 = A_2 \triangleq \hat{L} and G_0 \triangleq -Z. Then Y_n \triangleq L S^T U_n S L converges to Y^+ at n = 2M + 1.

Mean Square Extrapolation Filter. Assume \mathcal{R}_y = \sigma^2 \mathcal{L}, \mathcal{R}_\eta = \sigma_\eta^2 (I \otimes I). Then, with \hat{L} \triangleq S L S^T as before,

\hat{\mathbf{y}} = [(L S^T) \otimes (L S^T)]\, \hat{\mathbf{x}}, \qquad \hat{\mathbf{x}} \triangleq \left[(\hat{L} \otimes \hat{L}) + \frac{\sigma_\eta^2}{\sigma^2} (I \otimes I)\right]^{-1} \mathbf{z} \;\Rightarrow\; \hat{Y} = L S^T \hat{X} S L

Now the matrix to be inverted is (2M + 1) \times (2M + 1) block Toeplitz with basic dimension (2M + 1) \times (2M + 1). The DPSS \phi_k of (8.244) can be useful for this inversion. The two-dimensional DPSS are given by the Kronecker products \phi_k \otimes \phi_l.
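Because the operators are separable, the two-dimensional MNLS extrapolation above can be computed by applying the one-dimensional pipeline to the rows and then the columns of Z, without explicitly forming any Kronecker products (M, w_1, and the test image are illustrative assumptions):

```python
import numpy as np

M, w1 = 3, 0.7 * np.pi
i = np.arange(-M, M + 1)
kernel = lambda d: (w1 / np.pi) * np.sinc(w1 * d / np.pi)
L_sec = kernel(i[:, None] - i[None, :])          # S L S^T, (2M+1) x (2M+1)

def lowpass_rows(m_out):
    # Rows of L S^T: low-pass kernel from window samples to the output grid.
    m_out = np.asarray(m_out, dtype=float)
    return kernel(m_out[:, None] - i[None, :])

# 2-D separable observations on the (2M+1) x (2M+1) window.
mm, nn = np.meshgrid(i, i, indexing="ij")
Z = np.sin(0.2 * np.pi * mm) * np.cos(0.15 * np.pi * nn)

# Y+ = (L S^T) [L_sec^{-1} Z L_sec^{-1}] (S L): row/column processing only.
X = np.linalg.solve(L_sec, np.linalg.solve(L_sec, Z).T).T
m_out = np.arange(-2 * M, 2 * M + 1)             # extended output grid
Y_plus = lowpass_rows(m_out) @ X @ lowpass_rows(m_out).T
```

Restricting Y_plus back to the observation window reproduces Z, the 2-D analogue of the 1-D consistency property S y^+ = z.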