Linear Least Mean Squares Estimation Based on a Single Observation
We are interested in finding $a$ and $b$ that minimize the mean squared estimation error $E\big[(\Theta - aX - b)^2\big]$ associated with a linear estimator $aX + b$ of $\Theta$. Suppose that $a$ has already been chosen. How should we choose $b$? This is the same as choosing a constant $b$ to estimate the random variable $\Theta - aX$. By the discussion in the beginning of Section 8.3, the best choice is
$$b = E[\Theta - aX] = E[\Theta] - aE[X].$$
With this choice of $b$, it remains to minimize, with respect to $a$, the expression
$$E\big[(\Theta - aX - E[\Theta] + aE[X])^2\big].$$
We write this expression as
$$\mathrm{var}(\Theta - aX) = \sigma_\Theta^2 + a^2\sigma_X^2 + 2\,\mathrm{cov}(\Theta, -aX) = \sigma_\Theta^2 + a^2\sigma_X^2 - 2a\,\mathrm{cov}(\Theta, X),$$
where $\sigma_\Theta$ and $\sigma_X$ are the standard deviations of $\Theta$ and $X$, respectively, and
$$\mathrm{cov}(\Theta, X) = E\big[(\Theta - E[\Theta])(X - E[X])\big]$$
is the covariance of $\Theta$ and $X$. To minimize $\mathrm{var}(\Theta - aX)$ (a quadratic function of $a$), we set its derivative to zero and solve for $a$. This yields
$$a = \frac{\mathrm{cov}(\Theta, X)}{\sigma_X^2} = \rho\,\frac{\sigma_\Theta}{\sigma_X},$$
where
$$\rho = \frac{\mathrm{cov}(\Theta, X)}{\sigma_\Theta\,\sigma_X}$$
is the correlation coefficient. With this choice of $a$, the mean squared estimation error of the resulting linear estimator $\hat{\Theta}$ is given by
$$\mathrm{var}(\Theta - \hat{\Theta}) = \sigma_\Theta^2 + a^2\sigma_X^2 - 2a\,\mathrm{cov}(\Theta, X) = \sigma_\Theta^2 + \rho^2\,\frac{\sigma_\Theta^2}{\sigma_X^2}\,\sigma_X^2 - 2\rho\,\frac{\sigma_\Theta}{\sigma_X}\,\rho\,\sigma_\Theta\sigma_X = (1 - \rho^2)\,\sigma_\Theta^2.$$
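This identity is easy to check numerically. The sketch below uses a small made-up joint PMF (not from the text): it computes the optimal $a$ and $b$ from the moments, verifies that the resulting mean squared error equals $(1 - \rho^2)\sigma_\Theta^2$, and that perturbing $(a, b)$ can only make the error worse.

```python
# Toy joint pmf of (theta, x): list of (theta, x, probability).
# The numbers are illustrative only, not from the text.
pmf = [(0, 0, 0.25), (1, 0, 0.25), (1, 1, 0.25), (2, 1, 0.25)]

E_t = sum(p * t for t, x, p in pmf)                      # E[Theta]
E_x = sum(p * x for t, x, p in pmf)                      # E[X]
var_t = sum(p * (t - E_t) ** 2 for t, x, p in pmf)       # var(Theta)
var_x = sum(p * (x - E_x) ** 2 for t, x, p in pmf)       # var(X)
cov = sum(p * (t - E_t) * (x - E_x) for t, x, p in pmf)  # cov(Theta, X)
rho2 = cov ** 2 / (var_t * var_x)                        # rho squared

a = cov / var_x          # optimal slope
b = E_t - a * E_x        # optimal intercept

mse = sum(p * (t - (a * x + b)) ** 2 for t, x, p in pmf)
print(mse, (1 - rho2) * var_t)   # both equal 0.25 for this pmf

# Since the error is a quadratic in (a, b) minimized at the values above,
# any perturbation can only increase it.
for da in (-0.1, 0.1):
    for db in (-0.1, 0.1):
        worse = sum(p * (t - ((a + da) * x + (b + db))) ** 2
                    for t, x, p in pmf)
        assert worse >= mse
```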
Linear LMS Estimation Formulas
The linear LMS estimator $\hat{\Theta}$ of $\Theta$ based on $X$ is
$$\hat{\Theta} = E[\Theta] + \frac{\mathrm{cov}(\Theta, X)}{\mathrm{var}(X)}\,(X - E[X]) = E[\Theta] + \rho\,\frac{\sigma_\Theta}{\sigma_X}\,(X - E[X]),$$
where
$$\rho = \frac{\mathrm{cov}(\Theta, X)}{\sigma_\Theta\,\sigma_X}$$
is the correlation coefficient.
The resulting mean squared estimation error is equal to
$$(1 - \rho^2)\,\sigma_\Theta^2.$$
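The boxed formulas translate directly into code. Here is a minimal sketch; the function names and the moment-based interface are our own conventions, and the moments plugged in at the end are hypothetical numbers, not from the text.

```python
import math

def linear_lms(mean_t, mean_x, var_x, cov_tx):
    """Return the linear LMS estimator x -> E[Theta] + (cov/var(X)) (x - E[X])."""
    a = cov_tx / var_x
    return lambda x: mean_t + a * (x - mean_x)

def lms_error(var_t, var_x, cov_tx):
    """Mean squared error (1 - rho^2) var(Theta) of the linear LMS estimator."""
    rho = cov_tx / math.sqrt(var_t * var_x)
    return (1 - rho ** 2) * var_t

# Illustrative moments (hypothetical):
# E[Theta] = 1.0, E[X] = 0.5, var(Theta) = 0.5, var(X) = 0.25, cov = 0.25.
est = linear_lms(mean_t=1.0, mean_x=0.5, var_x=0.25, cov_tx=0.25)
print(est(1.0))                    # slope a = 1, so 1.0 + 1 * (1.0 - 0.5) = 1.5
print(lms_error(0.5, 0.25, 0.25))  # rho^2 = 0.5, so error = 0.5 * 0.5 = 0.25
```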
The formula for the linear LMS estimator only involves the means, variances, and covariance of $\Theta$ and $X$. Furthermore, it has an intuitive interpretation. Suppose, for concreteness, that the correlation coefficient $\rho$ is positive. The estimator starts with the baseline estimate $E[\Theta]$ for $\Theta$, which it then adjusts by taking into account the value of $X - E[X]$. For example, when $X$ is larger than its mean, the positive correlation between $X$ and $\Theta$ suggests that $\Theta$ is expected to be larger than its mean. Accordingly, the resulting estimate is set to a value larger than $E[\Theta]$. The value of $\rho$ also affects the quality of the estimate. When $|\rho|$ is close to 1, the two random variables are highly correlated, and knowing $X$ allows us to accurately estimate $\Theta$, resulting in a small mean squared error.
We finally note that the properties of the estimation error presented in Section 8.3 can be shown to hold when $\hat{\Theta}$ is the linear LMS estimator; see the end-of-chapter problems.
Example 8.15. We revisit the model in Examples 8.2, 8.7, and 8.12, in which Juliet is always late by an amount $X$ that is uniformly distributed over the interval $[0, \Theta]$, and $\Theta$ is a random variable with a uniform prior PDF $f_\Theta(\theta)$ over the interval $[0, 1]$. Let us derive the linear LMS estimator of $\Theta$ based on $X$.
Using the fact that $E[X \mid \Theta] = \Theta/2$ and the law of iterated expectations, the expected value of $X$ is
$$E[X] = E\big[E[X \mid \Theta]\big] = E\!\left[\frac{\Theta}{2}\right] = \frac{1}{4}.$$
Furthermore, using the law of total variance (this is the same calculation as in Example 4.17 of Chapter 4), we have
$$\mathrm{var}(X) = \frac{7}{144},$$
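These two moments can be spot-checked by simulation. The sketch below (sample size and seed are arbitrary choices of ours) draws $\Theta$ uniform on $[0, 1]$, then $X$ uniform on $[0, \Theta]$, and compares the sample mean and variance of $X$ with $1/4$ and $7/144 \approx 0.0486$.

```python
import random

random.seed(0)
n = 200_000
xs = []
for _ in range(n):
    theta = random.random()              # Theta ~ Uniform(0, 1)
    xs.append(random.uniform(0, theta))  # X | Theta ~ Uniform(0, Theta)

mean = sum(xs) / n
var = sum((x - mean) ** 2 for x in xs) / n

print(mean)  # close to 1/4
print(var)   # close to 7/144
```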