An estimator $\hat{\theta}$ is said to be consistent if for any $\varepsilon > 0$, $P(|\hat{\theta} - \theta| \ge \varepsilon) \to 0$ as $n \to \infty$. a. … What value of $K$ minimizes the mean squared error of this estimator when the population distribution is normal? [Hint: It can be shown that $E[(S^2)^2] = (n + 1)\sigma^4/(n - 1)$. In general, it is difficult to find $\hat{\theta}$ to minimize $MSE(\hat{\theta})$, which is why we look only at unbiased estimators and minimize $V(\hat{\theta})$.]
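The hint can also be explored numerically before doing any calculus. Below is a minimal Monte Carlo sketch (our illustration, not part of the exercise; the choices $n = 10$, $\sigma = 1$, the grid of $K$ values, and the replication count are all arbitrary) that estimates $MSE(K \cdot S^2)$ for normal data.

    import random
    import statistics

    # Monte Carlo estimate of MSE(K * S^2) on a grid of K values.
    # n = 10 and sigma = 1 are arbitrary illustrative choices.
    n, sigma, reps = 10, 1.0, 20000
    random.seed(1)

    s2 = [statistics.variance([random.gauss(0.0, sigma) for _ in range(n)])
          for _ in range(reps)]

    for K in [k / 100 for k in range(70, 111, 5)]:
        mse = sum((K * v - sigma**2) ** 2 for v in s2) / reps
        print(f"K = {K:.2f}   estimated MSE(K*S^2) = {mse:.4f}")

The printed grid should show the minimum occurring at a value of $K$ somewhat below 1, consistent with the hint's remark about unbiased estimators.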
35.
Let $X_1, \ldots, X_n$ be a random sample from a pdf that is symmetric about $\mu$. An estimator for $\mu$ that has been found to perform well for a variety of underlying distributions is the Hodges–Lehmann estimator. To define it, first compute for each $i \le j$ and each $j = 1, 2, \ldots, n$ the pairwise average $\bar{X}_{i,j} = (X_i + X_j)/2$. Then the estimator is $\hat{\mu} =$ the median of the $\bar{X}_{i,j}$'s. Compute the value of this estimate using the data of Exercise 44 of Chapter 1. [Hint: Construct a square table with the $x_i$'s listed on the left margin and on top. Then compute averages on and above the diagonal.]
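For computation, the pairwise averages over $i \le j$ are exactly what a combinations-with-repetition generator produces. The sketch below is ours, and the data vector is a placeholder, since the Exercise 44 data are not reproduced on this page.

    from itertools import combinations_with_replacement
    from statistics import median

    def hodges_lehmann(x):
        """Median of the pairwise averages (x_i + x_j)/2 over all i <= j."""
        return median((a + b) / 2
                      for a, b in combinations_with_replacement(x, 2))

    # Placeholder data -- substitute the values from Exercise 44 of Chapter 1.
    x = [24.3, 25.1, 26.7, 22.9, 30.4, 25.8]
    print(hodges_lehmann(x))

Note that taking $i \le j$ rather than $i < j$ means the observations themselves (the diagonal of the hint's table) are included among the averages.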
36.
When the population distribution is normal, the statistic $\text{median}\{|X_1 - \tilde{X}|, \ldots, |X_n - \tilde{X}|\}/.6745$ can be used to estimate $\sigma$. This estimator is more resistant to the effects of outliers (observations far from the bulk of the data) than is the sample standard deviation. Compute both the corresponding point estimate and $s$ for the data of Example 6.2.
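As a computational aside (ours, not part of the exercise), the statistic takes one pass once the sample median $\tilde{X}$ is in hand. The data below are placeholders, not the Example 6.2 observations.

    from statistics import median, stdev

    def robust_sigma(x):
        """median{|x_i - median(x)|} / .6745 -- a resistant estimate of sigma."""
        m = median(x)
        return median(abs(xi - m) for xi in x) / 0.6745

    # Placeholder data -- substitute the Example 6.2 observations.
    x = [29.5, 34.1, 27.8, 33.0, 30.2, 28.9, 41.7]
    print(robust_sigma(x))   # resistant estimate of sigma
    print(stdev(x))          # sample standard deviation s, for comparison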
37.
When the sample standard deviation $S$ is based on a random sample from a normal population distribution, it can be shown that

$$E(S) = \sqrt{\frac{2}{n-1}}\,\frac{\Gamma(n/2)}{\Gamma((n-1)/2)}\,\sigma$$

Use this to obtain an unbiased estimator for $\sigma$ of the form $cS$. What is $c$ when $n = 20$?
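Numerically, $c$ is just the reciprocal of the constant multiplying $\sigma$ in the displayed formula. This short check (ours) evaluates it with the gamma function.

    from math import gamma, sqrt

    def c_unbiased(n):
        """c such that E(cS) = sigma for a normal sample of size n:
        the reciprocal of sqrt(2/(n-1)) * Gamma(n/2) / Gamma((n-1)/2)."""
        return 1.0 / (sqrt(2.0 / (n - 1)) * gamma(n / 2) / gamma((n - 1) / 2))

    print(c_unbiased(20))   # approximately 1.013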
38.
Each of $n$ specimens is to be weighed twice on the same scale. Let $X_i$ and $Y_i$ denote the two observed weights for the $i$th specimen. Suppose $X_i$ and $Y_i$ are independent of one another, each normally distributed with mean value $\mu_i$ (the true weight of specimen $i$) and variance $\sigma^2$.
a. Show that the maximum likelihood estimator of $\sigma^2$ is $\hat{\sigma}^2 = \sum (X_i - Y_i)^2/(4n)$. [Hint: If $\bar{z} = (z_1 + z_2)/2$, then $\sum (z_i - \bar{z})^2 = (z_1 - z_2)^2/2$.]
b. Is the mle $\hat{\sigma}^2$ an unbiased estimator of $\sigma^2$? Find an unbiased estimator of $\sigma^2$. [Hint: For any rv $Z$, $E(Z^2) = V(Z) + [E(Z)]^2$. Apply this to $Z = X_i - Y_i$.]
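Before working the hint in part (b), a quick simulation (ours; the values of $n$, the $\mu_i$, and $\sigma$ are arbitrary) can suggest how $E(\hat{\sigma}^2)$ compares with $\sigma^2$.

    import random

    # Simulate the paired-weighing mle sigma_hat^2 = sum((X_i - Y_i)^2) / (4n)
    # and compare its long-run average with sigma^2. All values are illustrative.
    random.seed(2)
    n, sigma, reps = 8, 2.0, 20000
    mu = [10.0 + i for i in range(n)]   # arbitrary true weights

    avg = 0.0
    for _ in range(reps):
        ss = sum((random.gauss(m, sigma) - random.gauss(m, sigma)) ** 2
                 for m in mu)
        avg += ss / (4 * n)
    avg /= reps

    print(avg, sigma**2)   # a large gap here hints that the mle is biased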
Bibliography
DeGroot, Morris, and Mark Schervish, Probability and Statistics (3rd ed.), Addison-Wesley, Boston, MA, 2002. Includes an excellent discussion of both general properties and methods of point estimation; of particular interest are examples showing how general principles and methods can yield unsatisfactory estimators in particular situations.
Devore, Jay, and Kenneth Berk, Modern Mathematical Statistics with Applications, Thomson-Brooks/Cole, Belmont, CA, 2007. The exposition is a bit more comprehensive and sophisticated than that of the current book.
Efron, Bradley, and Robert Tibshirani, An Introduction to the Bootstrap, Chapman and Hall, New York, 1993. The bible of the bootstrap.
Hoaglin, David, Frederick Mosteller, and John Tukey, Understanding Robust and Exploratory Data Analysis, Wiley, New York, 1983. Contains several good chapters on robust point estimation, including one on M-estimation.
Rice, John, Mathematical Statistics and Data Analysis (3rd ed.), Thomson-Brooks/Cole, Belmont, CA, 2007. A nice blending of statistical theory and data.