
11.1 LMS ALGORITHM FOR COEFFICIENT ADJUSTMENT

Suppose we have an FIR filter with adjustable coefficients {h(k), 0 ≤ k ≤ N − 1}. Let {x(n)} denote the input sequence to the filter, and let the corresponding output be {y(n)}, where

y(n) = Σ_{k=0}^{N−1} h(k)x(n − k),  n = 0, . . . , M    (11.1)

Suppose that we also have a desired sequence {d(n)} with which we can compare the FIR filter output. Then we can form the error sequence {e(n)} by taking the difference between d(n) and y(n), that is,

e(n) = d(n) − y(n),  n = 0, . . . , M    (11.2)

The coefficients of the FIR filter will be selected to minimize the sum of squared errors. Thus we have

E = Σ_{n=0}^{M} e²(n) = Σ_{n=0}^{M} [ d(n) − Σ_{k=0}^{N−1} h(k)x(n − k) ]²    (11.3)

  = Σ_{n=0}^{M} d²(n) − 2 Σ_{k=0}^{N−1} h(k)r_dx(k) + Σ_{k=0}^{N−1} Σ_{ℓ=0}^{N−1} h(k)h(ℓ)r_xx(k − ℓ)

where, by definition,

r_dx(k) = Σ_{n=0}^{M} d(n)x(n − k),  0 ≤ k ≤ N − 1    (11.4)

r_xx(k) = Σ_{n=0}^{M} x(n)x(n + k),  0 ≤ k ≤ N − 1    (11.5)


We call {r_dx(k)} the crosscorrelation between the desired output sequence {d(n)} and the input sequence {x(n)}, and {r_xx(k)} is the autocorrelation sequence of {x(n)}.
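As an illustration, the correlation sequences (11.4) and (11.5) can be computed directly from their definitions. The following NumPy sketch (the function name is ours, and we assume the usual convention that samples outside 0 ≤ n ≤ M are taken as zero) shows the computation for short sequences.

```python
import numpy as np

def corr_seqs(x, d, N):
    """Compute r_dx(k) and r_xx(k) for 0 <= k <= N-1, per (11.4)-(11.5).
    Samples x(n-k) and x(n+k) outside the data record are treated as zero."""
    M = len(x) - 1
    r_dx = np.zeros(N)
    r_xx = np.zeros(N)
    for k in range(N):
        for n in range(M + 1):
            if n - k >= 0:                      # cross-correlation term d(n)x(n-k)
                r_dx[k] += d[n] * x[n - k]
            if n + k <= M:                      # autocorrelation term x(n)x(n+k)
                r_xx[k] += x[n] * x[n + k]
    return r_dx, r_xx
```

For example, with x = [1, 2, 3] and d = [1, 1, 1], this gives r_xx = [14, 8] and r_dx = [6, 3].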

The sum of squared errors E is a quadratic function of the FIR filter coefficients. Consequently, the minimization of E with respect to the filter coefficients {h(k)} results in a set of linear equations. By differentiating

E with respect to each of the filter coefficients, we obtain

∂E/∂h(m) = 0,  0 ≤ m ≤ N − 1    (11.6)

and hence

Σ_{k=0}^{N−1} h(k)r_xx(k − m) = r_dx(m),  0 ≤ m ≤ N − 1    (11.7)

This is the set of linear equations that yields the optimum filter coefficients. To solve the set of linear equations directly, we must first compute the autocorrelation sequence {r_xx(k)} of the input signal and the cross-correlation sequence {r_dx(k)} between the desired sequence {d(n)} and the input sequence {x(n)}.
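Since r_xx(−k) = r_xx(k) under the zero-padding convention, the system (11.7) is a symmetric Toeplitz system and can be solved with a standard linear solver. A sketch (the helper name is ours):

```python
import numpy as np

def optimal_coeffs(r_xx, r_dx):
    """Solve sum_k h(k) r_xx(k-m) = r_dx(m), 0 <= m <= N-1 (equation 11.7).
    Uses the autocorrelation symmetry r_xx(-k) = r_xx(k) to fill the
    Toeplitz matrix R with entries R[m, k] = r_xx(|k - m|)."""
    N = len(r_dx)
    R = np.array([[r_xx[abs(k - m)] for k in range(N)] for m in range(N)])
    return np.linalg.solve(R, np.asarray(r_dx))
```

For a long filter, a Levinson-Durbin recursion would exploit the Toeplitz structure more efficiently, but a direct solve makes the normal-equations structure plain.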

The LMS algorithm provides an alternative computational method for determining the optimum filter coefficients {h(k)} without explicitly computing the correlation sequences {r_xx(k)} and {r_dx(k)}. The algorithm is basically a recursive gradient (steepest-descent) method that finds the minimum of E and thus yields the set of optimum filter coefficients.

We begin with any arbitrary choice for the initial values of {h(k)}, 0 ≤ k ≤ N − 1, say {h_0(k)}. For example, we may begin with h_0(k) = 0, 0 ≤ k ≤ N − 1. Then after each new input sample x(n) enters the adaptive FIR filter, we compute the corresponding output y(n), form the error signal e(n) = d(n) − y(n), and update the filter coefficients according to the equation

h_n(k) = h_{n−1}(k) + Δ · e(n) · x(n − k),  0 ≤ k ≤ N − 1,  n = 0, 1, . . .    (11.8)

where Δ is called the step size parameter, x(n − k) is the sample of the input signal located at the kth tap of the filter at time n, and e(n)x(n − k) is an approximation (estimate) of the negative of the gradient for the kth filter coefficient. This is the LMS recursive algorithm for adjusting the filter coefficients adaptively so as to minimize the sum of squared errors E.
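The update order in (11.8) — output, then error, then coefficient update — can be made concrete with a Python/NumPy transcription (variable names are ours, not from the text):

```python
import numpy as np

def lms_filter(x, d, delta, N):
    """Adaptive FIR filtering via the LMS recursion (11.8).
    Coefficients start at zero; updates begin once N input samples exist."""
    M = len(x)
    y = np.zeros(M)
    h = np.zeros(N)                      # h_0(k) = 0
    for n in range(N - 1, M):
        x1 = x[n::-1][:N]                # x(n), x(n-1), ..., x(n-N+1)
        y[n] = h @ x1                    # filter output, per (11.1)
        e = d[n] - y[n]                  # error signal, per (11.2)
        h = h + delta * e * x1           # coefficient update, per (11.8)
    return h, y
```

Driving this with a known FIR filter's output as d(n) (the system identification setup discussed later in the chapter) makes h converge toward that filter's coefficients.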

The step size parameter Δ controls the rate of convergence of the algorithm to the optimum solution. A large value of Δ leads to large step size adjustments and thus to rapid convergence, while a small value of Δ results in slower convergence. However, if Δ is made too large, the algorithm becomes unstable. To ensure stability, Δ must be chosen [22]


to be in the range

0 < Δ < 1 / (10N P_x)    (11.9)

where N is the length of the adaptive FIR filter and P_x is the power in the input signal, which can be approximated by

P_x ≈ 1/(M + 1) Σ_{n=0}^{M} x²(n)    (11.10)

The mathematical justification of equations (11.9) and (11.10) and the proof that the LMS algorithm leads to the solution for the optimum filter coefficients is given in more advanced treatments of adaptive filters. The interested reader may refer to the books by Haykin [8] and Proakis and Manolakis [23].
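The stability bound (11.9) with the power estimate (11.10) translates directly into code. A small sketch (the function name is ours, and we assume the bound 0 < Δ < 1/(10N·P_x) cited from [22]):

```python
import numpy as np

def stable_step_bound(x, N):
    """Upper limit on the LMS step size, per (11.9)-(11.10)."""
    Px = np.mean(np.square(x))       # P_x ~= (1/(M+1)) * sum of x^2(n)
    return 1.0 / (10 * N * Px)
```

In practice Δ is often chosen well below this bound, trading convergence speed for a smaller excess mean-square error.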

11.1.1 MATLAB IMPLEMENTATION

The LMS algorithm (11.8) can easily be implemented in MATLAB. Given the input sequence {x(n)}, the desired sequence {d(n)}, the step size Δ, and the desired length of the adaptive FIR filter N, we can use (11.1), (11.2), and (11.8) to determine the adaptive filter coefficients {h(n), 0 ≤ n ≤ N − 1} recursively. This is shown in the following function, called lms.

function [h,y] = lms(x,d,delta,N)
% LMS Algorithm for Coefficient Adjustment
% ----------------------------------------
% [h,y] = lms(x,d,delta,N)
%     h = estimated FIR filter
%     y = output array y(n)
%     x = input array x(n)
%     d = desired array d(n), length must be same as x
% delta = step size
%     N = length of the FIR filter
%
M = length(x); y = zeros(1,M);
h = zeros(1,N);
for n = N:M
    x1 = x(n:-1:n-N+1);        % current filter state: x(n),...,x(n-N+1)
    y(n) = h * x1';            % filter output at time n
    e = d(n) - y(n);           % error signal
    h = h + delta*e*x1;        % LMS coefficient update (11.8)
end

In addition, the lms function provides the output {y(n)} of the adaptive filter.


We will apply the LMS algorithm to several practical applications involving adaptive filtering.