
v_n = Σ_{l=−q₁}^{q₂} H_l u_{n−l} + η_n + Σ_{l=−q₁}^{q₂} b_{n−l}    (8.172)

where v_n, u_n, b_n, and η_n are N × 1 vectors and b_n depends only on u(−p₂ + 1, n), ..., u(0, n), u(N + 1, n), ..., u(N + p₁, n), which are boundary elements of the nth column of u(m, n). The H_l are banded Toeplitz matrices.

Filter Formulation

Let Ψ be a fast unitary transform such that ΨH_nΨ*ᵀ is nearly diagonal for every n. From Chapter 5 we know that many sinusoidal transforms tend to diagonalize Toeplitz matrices. Therefore, defining

y_n ≜ Ψv_n,  x_n ≜ Ψu_n,  c_n ≜ Ψb_n,  ν_n ≜ Ψη_n
ΨH_nΨ*ᵀ ≈ Diag[ΨH_nΨ*ᵀ] ≜ Γ_n ≜ Diag[γ_n(k)]    (8.173)

and multiplying both sides of (8.172) by Ψ, we can reduce it to a set of scalar equations, decoupled in k, as

y_n(k) ≈ Σ_{l=−q₁}^{q₂} γ_l(k) x_{n−l}(k) + ν_n(k) + Σ_{l=−q₁}^{q₂} c_{n−l}(k),  k = 1, ..., N    (8.174)

In most situations the image background is known or can be estimated quite accurately. Hence c_n(k) can be assumed to be known and can be absorbed in y_n(k) to give the observation system for each row of the transformed vectors as

y_n(k) = Σ_{l=−q₁}^{q₂} γ_l(k) x_{n−l}(k) + ν_n(k),  k = 1, ..., N    (8.175)

Now, for each row, x_n(k) is represented by an AR model

x_n(k) = Σ_l a_l(k) x_{n−l}(k) + ε_n(k),  k = 1, ..., N    (8.176)

which together with (8.175) can be set up in the framework of Kalman filtering, as shown in Example 8.8. Alternatively, each line [y_n(k), n = 0, 1, ...] can be processed by its one-dimensional Wiener filter. This method has been found useful in adaptive filtering of noisy images (Fig. 8.25) using the cosine transform. The entire image is divided into small blocks of size N × N (typically N = 16 or 32). For each k, the spectral density S_y(ω, k) of the sequence [y_n(k), n = 0, 1, ..., N − 1] is estimated by a one-dimensional spectral estimation technique, which assumes {y_n} to be an AR sequence [8]. Given S_y(ω, k) and σ_ν², the noise power, the sequence y_n(k) is Wiener filtered to give x̂_n(k), where the filter frequency response is given by

G(ω, k) = S_x(ω, k) / [S_x(ω, k) + σ_ν²],  S_x(ω, k) ≜ S_y(ω, k) − σ_ν²    (8.177)

In practice, S_x(ω, k) is set to zero if the estimated S_y(ω, k) is less than σ_ν². Figures 8.17 and 8.18 show examples of this method, where it is called the COSAR (cosine-AR) algorithm.


[Figure 8.25 block diagram: each image block is cosine transformed; for each channel k an AR model is identified to estimate the spectrum S_y(ω, k); each channel is Wiener filtered using the noise power σ_ν²; the result is inverse transformed.]

Figure 8.25 COSAR algorithm for adaptive filtering using semicausal models.
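A minimal NumPy/SciPy sketch of this channelwise structure is given below. It is not the original implementation: the AR spectral fit of [8] is replaced by a simple periodogram estimate of S_y(ω, k), and the block size and noise power are illustrative.

import numpy as np
from scipy.fft import dct, idct, fft, ifft

def cosar_denoise(v, sigma2):
    # COSAR-style sketch: cosine transform along columns, then an
    # independent 1-D Wiener filter for each transform channel k.
    # The AR spectral fit of the original method is replaced here by
    # a periodogram estimate of S_y(w, k) for simplicity.
    N, M = v.shape
    y = dct(v, axis=0, norm='ortho')           # y_n(k): channel k, position n
    x = np.empty_like(y)
    for k in range(N):
        Yk = fft(y[k, :])                      # spectrum of channel k
        Sy = np.abs(Yk) ** 2 / M               # periodogram estimate of S_y(w, k)
        Sx = np.maximum(Sy - sigma2, 0.0)      # S_x set to 0 where S_y < noise power
        G = Sx / (Sx + sigma2 + 1e-12)         # Wiener gain (8.177)
        x[k, :] = np.real(ifft(G * Yk))
    return idct(x, axis=0, norm='ortho')       # back to the image domain

# Toy usage: a smooth 16 x 16 block plus white noise of known power.
rng = np.random.default_rng(0)
u = np.outer(np.hanning(16), np.hanning(16))
v = u + 0.05 * rng.standard_normal((16, 16))
u_hat = cosar_denoise(v, sigma2=0.05 ** 2)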

8.13 DIGITAL PROCESSING OF SPECKLE IMAGES

When monochromatic radiation is scattered from a surface whose roughness is of the order of a wavelength, interference of the waves produces a noise called speckle. Such noise is observed in images produced by coherent radiation from the microwave to visible regions of the spectrum. The presence of speckle noise in an imaging system reduces its resolution, particularly for low-contrast images. Therefore, suppression of speckle noise is an important consideration in the design of coherent imaging systems. The problem of speckle reduction is quite different from additive noise smoothing because speckle noise is not additive. Figure 8.26b shows a speckled version of the test pattern image of Fig. 8.26a.

Speckle Representation

In free space, speckle can be considered as an infinite sum of independent, identical phasors with random amplitude and phase [41, 42]. This yields a representation of its complex amplitude as

a(x, y) = a_R(x, y) + j a_I(x, y)    (8.178)

where a_R and a_I are zero mean, independent Gaussian random variables (for each x, y) with variance σ_a². The intensity field is simply

s = s(x, y) = |a(x, y)|² = a_R² + a_I²    (8.179)

which has the exponential distribution of (8.17) with mean μ_s = E[s] = σ², where σ² ≜ 2σ_a². A white noise field with these statistics is called the fully developed speckle.

For any speckle, the contrast ratio is defined as

standard deviation of s

-y =

mean value of s
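These statistics are easy to check numerically. The following NumPy sketch (illustrative, not from the text) simulates fully developed speckle; since the intensity is exponentially distributed, its standard deviation equals its mean and the measured contrast ratio should be close to 1.

import numpy as np

# Fully developed speckle: a_R, a_I zero-mean i.i.d. Gaussian, s = a_R^2 + a_I^2.
rng = np.random.default_rng(1)
sigma_a = 1.0
aR = sigma_a * rng.standard_normal(10**6)
aI = sigma_a * rng.standard_normal(10**6)
s = aR**2 + aI**2
gamma = s.std() / s.mean()
print(f"mean = {s.mean():.3f} (expect {2 * sigma_a**2}), gamma = {gamma:.3f} (expect 1)")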


Figure 8.26 Speckle images: (a) original test pattern; (b) speckled image; (c) N-look averaged image.

For fully developed speckle, γ = 1. When an object with complex amplitude distribution g(x, y) is imaged by a coherent linear system with impulse response K(x, y; x', y'), the observed image intensity can be written as

v(x, y) = |∫∫ K(x, y; x', y') g(x', y') e^{jφ(x',y')} dx' dy'|² + η(x, y)    (8.181)

where η(x, y) is the additive detector noise and φ(x, y) represents the phase distortion due to scattering. If the impulse response decays rapidly outside a region R_cell(x, y), called the resolution cell, and g(x, y) is nearly constant in this region, then [44]

v(x, y) = |g(x, y)|² |a(x, y)|² + η(x, y) = u(x, y) s(x, y) + η(x, y)    (8.182)

where

u(x, y) ≜ |g(x, y)|²,  a(x, y) ≜ ∫∫_{R_cell} K(x, y; x', y') e^{jφ(x',y')} dx' dy'    (8.183)


The u(x, y) represents the object intensity distribution (reflectance or transmittance) and s(x, y) is the speckle intensity distribution. The random field a(x, y) is Gaussian, and its autocorrelation function has support on a region twice the size of R_cell. Equation (8.182) shows that speckle appears as a multiplicative noise in the coherent imaging of low-resolution objects. Note that there will be no speckle in an ideal imaging system. A uniformly sampled speckle field with pixel spacing equal to or greater than the width of its correlation function will be uncorrelated.

Speckle Reduction [46, 47]: N-Look Method

A simple method of speckle reduction is to take several statistically independent intensity images of the object and average them (Fig. 8.26c). Assuming the detector noise to be low and writing the lth image as

v_l(x, y) = u(x, y) s_l(x, y),  l = 1, ..., N    (8.184)

the temporal average of N looks is simply

v̄_N(x, y) ≜ (1/N) Σ_{l=1}^{N} v_l(x, y) = u(x, y) s̄_N(x, y)    (8.185)

where s̄_N(x, y) is the N-look average of the speckle fields. This is also the maximum likelihood estimate of u(x, y) from [v_l(x, y), l = 1, ..., N], which yields

E[v̄_N] = u(x, y) σ²,  Var[v̄_N] = u²(x, y) σ⁴ / N    (8.186)

This gives the contrast ratio γ = 1/√N for v̄_N. Therefore, the contrast improves by a factor of √N for N-look averaging.
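The 1/√N behavior is easy to verify numerically. A NumPy sketch (illustrative values, not from the text) simulating N-look averages of unit-mean exponential speckle:

import numpy as np

# N-look averaging: the contrast ratio of the averaged speckle falls as
# 1/sqrt(N), so each quadrupling of N halves the contrast ratio.
rng = np.random.default_rng(2)
npix = 10**6
for N in (1, 4, 16):
    looks = rng.exponential(scale=1.0, size=(N, npix))   # independent speckle fields
    sbarN = looks.mean(axis=0)                           # N-look average
    gamma = sbarN.std() / sbarN.mean()
    print(f"N = {N:2d}: gamma = {gamma:.3f}, 1/sqrt(N) = {1 / np.sqrt(N):.3f}")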

Spatial Averaging of Speckle

If the available number of looks, N, is small, then it is desirable to perform some kind of spatial filtering to reduce speckle. A standard technique used in synthetic aperture radar systems (where speckle noise occurs) is to average the intensity values of several adjacent pixels. The improvement in contrast ratio for spatial averaging is consistent with the N-look method, except that there is an accompanying loss of resolution.

Homomorphic Filtering

The multiplicative nature of speckle suggests performing a logarithmic transformation on (8.185), giving

log v̄_N(x, y) = log u(x, y) + log s̄_N(x, y)    (8.187)

Defining w_N ≜ log v̄_N, z ≜ log u, and η_N ≜ log s̄_N, we get the additive noise observation model

w_N(x, y) = z(x, y) + η_N(x, y)    (8.188)

where η_N(x, y) is stationary white noise.


[Figure 8.27: (a) the algorithm: input image → log(·) → Wiener filter → exp(·) → filtered output; (b) a filtering example.]

Figure 8.27 Homomorphic filtering of speckle.

For N ≥ 2, η_N can be modeled reasonably well by a Gaussian random field [45], whose spectral density function is given by

S_{η_N}(ω₁, ω₂) = σ_{η_N}²,  σ_{η_N}² = π²/6 for N = 1,  σ_{η_N}² = Σ_{k=N}^{∞} 1/k² for N ≥ 2    (8.189)

Now z(x, y) can be easily estimated from w_N(x, y) using Wiener filtering techniques.

This gives the overall filter algorithm of Fig. 8.27, which is also called the homomorphic filter. Experimental studies have shown that the homomorphic Wiener filter performs quite well compared to linear filtering or other homomorphic linear filters [46]. Figure 8.27 shows the performance of an adaptive FIR Wiener filter used in the homomorphic mode.
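A minimal sketch of this pipeline, with scipy.signal.wiener standing in for the adaptive FIR Wiener filter of the text; the window size, the stabilizing constant eps, and the toy data are illustrative assumptions.

import numpy as np
from scipy.signal import wiener

def homomorphic_despeckle(vbar, eps=1e-6, win=5):
    # Homomorphic filter of Fig. 8.27: log, Wiener filter, exp.
    w = np.log(vbar + eps)         # w_N = z + eta_N, additive model (8.188)
    z_hat = wiener(w, mysize=win)  # estimate z from w_N
    return np.exp(z_hat)           # back to intensity

# Toy usage: piecewise-constant object multiplied by 4-look speckle.
rng = np.random.default_rng(3)
u = np.ones((64, 64))
u[16:48, 16:48] = 4.0
sbar = rng.exponential(1.0, size=(4, 64, 64)).mean(axis=0)
u_hat = homomorphic_despeckle(u * sbar)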

8.14 MAXIMUM ENTROPY RESTORATION

The inputs, outputs, and PSFs of incoherent imaging systems (the usual case) are nonnegative. Restoration algorithms based on least squares or mean square criteria do not guarantee images with nonnegative pixel values. A restoration method based on the maximum entropy criterion gives nonnegative solutions. Since entropy is a measure of uncertainty, the general argument behind this criterion is that it assumes the least about the solution and gives it the maximum freedom within the limits imposed by the constraints.


Distribution-Entropy Restoration

For an image observed as

v = ℋu + η    (8.190)

where ℋ is the PSF matrix and u and v are the object and observation arrays mapped into vectors, a maximum entropy restoration problem is to maximize

ℰ(u) ≜ −Σ_n u(n) log u(n)    (8.191)

subject to the constraint

‖v − ℋu‖² = σ²    (8.192)

where σ² > 0 is a specified quantity. Because u(n) is nonnegative and can be normalized to give Σ_n u(n) = 1, it can be treated as a probability distribution whose entropy is ℰ(u). Using the usual Lagrangian method of optimization, the solution û is given by the implicit equation

û = exp{−1 + λℋᵀ(v − ℋû)}    (8.193)

where exp{x} denotes a vector with elements exp[x(k)], k = 0, 1, ..., 1 is a vector of all 1s, and λ is a scalar Lagrange multiplier such that û satisfies the constraint of (8.192). Interestingly, a Taylor series expansion of the exponent, truncated to the first two terms, yields the constrained least squares solution

û = (ℋᵀℋ + γI)⁻¹ ℋᵀ v    (8.194)

where γ is a constant determined by λ. Note that the solution of (8.193) is guaranteed to be nonnegative. Experimental results show that this method gives sharper restorations than the least squares filters when the image contains a small number of point objects (such as in astronomy images) [48].
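The implicit equation (8.193) can be solved by successive substitution. Below is an illustrative NumPy sketch (not from the text) using a damped fixed-point iteration with a fixed, user-chosen λ; in the full method λ would be adjusted until (8.192) is met, and convergence of this simple scheme is not guaranteed for every λ.

import numpy as np

def maxent_fixed_point(H, v, lam, iters=2000, lr=0.1):
    # Damped fixed-point iteration for u = exp(-1 + lam * H^T (v - H u)).
    # lam is held fixed here; lr damps the update to aid convergence.
    u = np.full(H.shape[1], 1.0 / H.shape[1])       # uniform, normalized start
    for _ in range(iters):
        u_new = np.exp(-1.0 + lam * (H.T @ (v - H @ u)))
        u = (1 - lr) * u + lr * u_new
    return u

# Toy usage: two point sources blurred by a 1-D Gaussian PSF matrix.
n = 32
idx = np.arange(n)
H = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 2.0) ** 2)
H /= H.sum(axis=1, keepdims=True)
u_true = np.zeros(n)
u_true[8], u_true[20] = 0.7, 0.3
v = H @ u_true
u_hat = maxent_fixed_point(H, v, lam=5.0)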

A stronger restoration result is obtained by maximizing the entropy defined by (8.191) subject to the constraints

u(n) ≥ 0,  n = 0, ..., N − 1
Σ_j A(m, j) u(j) = v(m),  m = 0, ..., M − 1    (8.195)

where the A(m, j) are the elements of the PSF matrix ℋ. Now the solution is given by

û(n) = (1/e) exp[ Σ_{l=0}^{M−1} A(l, n) λ(l) ],  n = 0, ..., N − 1    (8.196)

where the λ(l) are Lagrange multipliers (also called dual variables) that maximize the functional

J(λ) ≜ Σ_{n=0}^{N−1} û(n) − Σ_{l=0}^{M−1} λ(l) v(l)    (8.197)


The above problem is now unconstrained in λ and can be solved by invoking several different algorithms from optimization theory. One example is a coordinate ascent algorithm, where the constraints are enforced one by one in cyclic iterations, giving [49]

log û_j(n) = log û_{j−1}(n) + A(m, n) log [ v(m) / Σ_{k=0}^{N−1} A(m, k) û_{j−1}(k) ]    (8.198)

where m = j modulo M and j = 0, 1, .... At the jth iteration, û_j(n) is updated for all n and a fixed m; after m = M the iterations continue cyclically, updating the constraints one at a time. Convergence to the true solution is often slow but is assured as j → ∞, provided 0 ≤ A(m, n) ≤ 1. Since the PSF is nonnegative, this condition is easily satisfied by scaling the observations appropriately.
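Written multiplicatively, (8.198) is a MART-type update. The sketch below (illustrative; the toy problem and the scaling step are assumptions) rescales A and v so that 0 ≤ A(m, n) ≤ 1 and cycles through the constraints.

import numpy as np

def maxent_coordinate_ascent(A, v, sweeps=200):
    # Cyclic enforcement of the constraints of (8.195), one per step,
    # via the multiplicative form of (8.198):
    #   u_j(n) = u_{j-1}(n) * (v(m) / (A u_{j-1})(m)) ** A(m, n)
    M, N = A.shape
    scale = A.max()
    A, v = A / scale, v / scale        # arrange 0 <= A(m, n) <= 1
    u = np.full(N, v.sum() / N)        # positive starting point
    for j in range(sweeps * M):
        m = j % M                      # cycle through the constraints
        r = v[m] / (A[m] @ u)          # constraint residual ratio
        u = u * r ** A[m]              # MART-type update
    return u

# Toy usage: recover a positive vector from an overdetermined system.
rng = np.random.default_rng(4)
A = rng.uniform(0.2, 1.0, size=(12, 8))
u_true = rng.uniform(0.1, 1.0, size=8)
v = A @ u_true
u_hat = maxent_coordinate_ascent(A, v)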

Log-Entropy Restoration

There is another maximum entropy restoration problem, which maximizes

Σ_{n=0}^{N−1} log u(n)    (8.199)

subject to the constraints of (8.195). The solution now is obtained by solving the nonlinear equations

û(n) = [ Σ_{l=0}^{M−1} A(l, n) λ(l) ]⁻¹    (8.200)

where λ(l) maximizes

J(λ) = −Σ_{n=0}^{N−1} log [ Σ_{m=0}^{M−1} A(m, n) λ(m) ] + Σ_{m=0}^{M−1} v(m) λ(m)    (8.201)

Once again, an iterative gradient or any other suitable method may be chosen to maximize (8.201). A coordinate ascent method similar to (8.198) yields the iterative solution [50]

û_{j+1}(n) = û_j(n) / [ 1 + α_j A(m, n) û_j(n) ],  m = j modulo M,  n = 0, 1, ..., N − 1    (8.202)

where α_j is determined such that the denominator term is positive and the constraint

Σ_{n=0}^{N−1} A(m, n) û_{j+1}(n) = v(m)    (8.203)

is satisfied at each iteration. This means we must solve for the positive root of the nonlinear equation

Σ_{n=0}^{N−1} A(m, n) û_j(n) / [ 1 + α_j A(m, n) û_j(n) ] = v(m)    (8.204)


As before, the convergence, although slow, is assured as j → ∞. For A(m, n) > 0, which is true for PSFs, this algorithm guarantees a positive estimate at any iteration step. The speed of convergence can be improved by going to the gradient algorithm [51]

λ_{j+1}(m) = λ_j(m) + α_j g_j(m),  j = 0, 1, ...    (8.205)

g_j(m) ≜ v(m) − Σ_{n=0}^{N−1} A(m, n) û_j(n)    (8.206)

û_j(n) = [ Σ_{m=0}^{M−1} A(m, n) λ_j(m) ]⁻¹    (8.207)

where the λ_0(m) are chosen so that û_0(n) is positive and α_j is a positive root of the equation

f(α_j) ≜ Σ_{k=0}^{N−1} G_j(k) [ Λ_j(k) + α_j G_j(k) ]⁻¹ = 0    (8.208)

G_j(k) ≜ Σ_{m=0}^{M−1} A(m, k) g_j(m),  Λ_j(k) ≜ Σ_{m=0}^{M−1} A(m, k) λ_j(m)    (8.209)

The search for α_j can be restricted to the interval [0, max_k {Λ_j(k)/G_j(k)}].
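A sketch of this gradient algorithm, using scipy.optimize.brentq to locate the positive root of (8.208); the bracketing logic and the fallback step are implementation choices not specified in the text.

import numpy as np
from scipy.optimize import brentq

def logent_gradient(A, v, sweeps=500):
    # Dual gradient iteration (8.205)-(8.209): u_j(n) = 1 / Lambda_j(n),
    # g_j(m) = v(m) - (A u_j)(m), step a_j from the root of f in (8.208).
    M, N = A.shape
    lam = np.ones(M)                   # lambda_0 chosen so u_0 > 0 (A > 0 assumed)
    for _ in range(sweeps):
        Lam = A.T @ lam                # Lambda_j(k), (8.209)
        u = 1.0 / Lam                  # u_j(n), (8.207)
        g = v - A @ u                  # gradient g_j(m), (8.206)
        G = A.T @ g                    # G_j(k), (8.209)
        f = lambda a: np.sum(G / (Lam + a * G))
        neg = G < 0
        ub = 0.999 * np.min(Lam[neg] / -G[neg]) if neg.any() else 1.0
        if f(0.0) > 0.0 > f(ub):
            a = brentq(f, 0.0, ub)     # positive root of (8.208)
        else:
            a = 0.5 * ub               # fallback step; keeps Lambda + a G > 0
        lam = lam + a * g              # (8.205)
    return 1.0 / (A.T @ lam)

# Toy usage with a strictly positive system matrix.
rng = np.random.default_rng(5)
A = rng.uniform(0.1, 1.0, size=(10, 6))
u_true = rng.uniform(0.5, 2.0, size=6)
v = A @ u_true
u_hat = logent_gradient(A, v)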

This maximum entropy problem appears often in the theory of spectral estimation (see Problem 8.26b). The foregoing algorithms remain valid in multiple dimensions if u(n) and v(m) are sequences obtained by suitable ordering of the elements of the multidimensional arrays u(i, j, ...) and v(i, j, ...), respectively.

8.15 BAYESIAN METHODS

In many imaging situations, for instance image recording by film, the observation model is nonlinear, of the form

v = f(ℋu) + η    (8.210)

where f(x) is a nonlinear function of x. The a posteriori conditional density given by Bayes' rule,

p(u|v) = p(v|u) p(u) / p(v)    (8.211)

is useful in finding different types of estimates of the random vector u from the observation vector v. The minimum mean square estimate (MMSE) of u is the mean of this density. The maximum a posteriori (MAP) and the maximum likelihood (ML) estimates are the modes of p(u|v) and p(v|u), respectively.


The MAP and ML estimates do not require the density p(v) and are therefore easier to obtain. Under the assumption of Gaussian statistics for u and η, with covariances R_u and R_η, respectively, the ML and MAP estimates can be shown to be the solutions of the following equations:

ML estimate, û_ML:

ℋᵀ𝒟 R_η⁻¹ [ v − f(ℋû_ML) ] = 0    (8.212)

where

𝒟 ≜ Diag{ f′(w_i) }    (8.213)

and the w_i are the elements of the vector w ≜ ℋû_ML.

MAP estimate, û_MAP:

û_MAP = μ_u + R_u ℋᵀ𝒟 R_η⁻¹ [ v − f(ℋû_MAP) ]    (8.214)

where μ_u is the mean of u, and 𝒟 is defined in (8.213) but now with w ≜ ℋû_MAP.

Since these equations are nonlinear, an alternative is to maximize the appropriate log densities. For example, a gradient algorithm for û_MAP is

û_{j+1} = û_j + α_j { ℋᵀ𝒟_j R_η⁻¹ [ v − f(ℋû_j) ] − R_u⁻¹ [ û_j − μ_u ] }    (8.215)

where α_j > 0 and 𝒟_j is evaluated at w_j ≜ ℋû_j.

Remarks

If the function f(x) is linear, say f(x) = x, and R_η = σ_η² I, then û_ML reduces to the least squares solution

û_ML = (ℋᵀℋ)⁻¹ ℋᵀ v    (8.216)

and the MAP estimate reduces to the Wiener filter output for zero mean noise,

û_MAP = μ_u + 𝒢 (v − μ_v)    (8.217)

where 𝒢 = (R_u⁻¹ + ℋᵀR_η⁻¹ℋ)⁻¹ ℋᵀ R_η⁻¹. In practice, μ_v may be estimated as a local average of v and μ_u = ℋ⁺ f⁻¹(μ_v), where ℋ⁺ is the generalized inverse of ℋ.
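In this linear Gaussian case (8.217) can be evaluated directly; a small NumPy example with illustrative, assumed covariances:

import numpy as np

rng = np.random.default_rng(7)
n = 16
H = np.eye(n) * 0.7 + np.eye(n, k=-1) * 0.3      # toy blur matrix
Ru = np.eye(n) * 1.0                             # assumed object covariance
Rn = np.eye(n) * 0.05                            # assumed noise covariance
u_true = rng.standard_normal(n)
v = H @ u_true + np.sqrt(0.05) * rng.standard_normal(n)
mu_u = np.zeros(n)
mu_v = H @ mu_u                                  # zero-mean noise assumed
# G = (Ru^-1 + H^T Rn^-1 H)^-1 H^T Rn^-1, as in (8.217)
G = np.linalg.solve(np.linalg.inv(Ru) + H.T @ np.linalg.inv(Rn) @ H,
                    H.T @ np.linalg.inv(Rn))
u_map = mu_u + G @ (v - mu_v)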

8.16 COORDINATE TRANSFORMATION AND GEOMETRIC CORRECTION

In many situations a geometric transformation of the image coordinates is required. An example is in the remote sensing of images via satellites, where the earth's rotation relative to the scanning geometry of the sensor generates an image on a distorted raster [55]. The problem then is to estimate a function f(x', y') given at discrete locations of (x, y), where x' = h₁(x, y), y' = h₂(x, y) describe the geometric transformation.


For example, an affine distortion can be written as

[x'; y'] = [a₁₁ a₁₂; a₂₁ a₂₂] [x; y] + [b₁; b₂]    (8.218)

In principle, the image function in (x', y') coordinates can be obtained from its values on the (x_i, y_i) grid by an appropriate interpolation method followed by resampling on the desired grid. Some commonly used algorithms for interpolation at a point Q (Fig. 8.28) from samples at P₁, P₂, P₃, and P₄ are as follows.

1. Nearest neighbor:

F(Q) = F(P_k),  k: min_i {d_i} = d_k    (8.219)

that is, P_k is the nearest neighbor of Q.

2. Linear interpolation:

F(Q) = [ F(P_j) d_k + F(P_k) d_j ] / (d_j + d_k)    (8.220)

where P_j and P_k are the two samples nearest to Q.

3. Bilinear interpolation:

F(Q) = [ F(Q₁)/d₅ + F(Q₂)/d₆ ] / [ (1/d₅) + (1/d₆) ] = [ F(Q₁) d₆ + F(Q₂) d₅ ] / (d₅ + d₆)    (8.221a)

where F(Q₁) and F(Q₂) are themselves obtained by linear interpolation along the edges P₁P₄ and P₂P₃    (8.221b)

These methods are local and require minimal computation. However, they would be inappropriate if there were significant noise in the data. Smoothing splines or global interpolation methods, which use all the available data, would then be more suitable.
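A sketch of bilinear resampling in the spirit of (8.221), interpolating along the two grid rows bracketing the point and then between the two results; the warp in the usage example is a hypothetical (h₁, h₂) pair, not one from the text.

import numpy as np

def bilinear_resample(img, x_new, y_new):
    # Bilinear interpolation: linear interpolation along the two rows
    # bracketing each point (giving F(Q1), F(Q2)), then between them.
    h, w = img.shape
    x0 = np.clip(np.floor(x_new).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(y_new).astype(int), 0, h - 2)
    dx = x_new - x0                    # fractional offsets inside the cell
    dy = y_new - y0
    top = img[y0, x0] * (1 - dx) + img[y0, x0 + 1] * dx
    bot = img[y0 + 1, x0] * (1 - dx) + img[y0 + 1, x0 + 1] * dx
    return top * (1 - dy) + bot * dy

# Toy usage: resample onto a rotated-and-shifted grid (hypothetical h1, h2).
img = np.outer(np.arange(8.0), np.ones(8))
yy, xx = np.mgrid[0:7:32j, 0:7:32j]
warped = bilinear_resample(img, 0.9 * xx + 0.1 * yy, -0.1 * xx + 0.9 * yy + 0.7)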

For many imaging systems the PSF is spatially varying in Cartesian coordinates but becomes spatially invariant in a different coordinate system, for example, in systems with spherical aberrations, coma, astigmatism, and the like [56, 57]. These and certain other distortions (such as that due to rotational motion) may be corrected by the coordinate transformation method shown in Fig. 8.29.

[Figure 8.28: interpolation geometry: the samples P₁, P₂, P₃, P₄ lie at distances d₁, ..., d₄ from Q; Q₁ and Q₂ lie on the edges P₁P₄ and P₂P₃ at distances d₅ and d₆ from Q.]

Figure 8.28 Interpolation at Q.


[Figure 8.29: block diagram: input image g(x, y), coordinate transformation, spatially invariant filtering, inverse coordinate transformation, output f(x, y).]

Figure 8.29 Spatially variant filtering by coordinate transformation.

The input image is transformed from (x, y) to (ξ, η) coordinates, where it is possible to filter by a spatially invariant system. The filter output is then inverse transformed to obtain the estimate in the original coordinates. For example, the image of an object f(r, θ) obtained by an axially symmetric imaging system with coma aberration is