Copyright 1996 Lawrence C. Marsh
The Error Term

y is a random variable composed of two parts:

I. Systematic component:  E(y) = β1 + β2x
   This is the mean of y.

II. Random component:  e = y - E(y) = y - β1 - β2x
    This is called the random error.

Together E(y) and e form the model:

y = β1 + β2x + e
Figure 3.5 The relationship among y, e and the true regression line, E(y) = β1 + β2x.
[Scatter plot: observations (x1, y1), …, (x4, y4), with the errors e1, …, e4 shown as the vertical distances from each point to the line E(y) = β1 + β2x.]
Figure 3.7a The relationship among y, ê and the fitted regression line, ŷ = b1 + b2x.
[Scatter plot: the same observations, with the residuals ê1, …, ê4 shown as the vertical distances from each point to the fitted line ŷ = b1 + b2x.]
Figure 3.7b The sum of squared residuals from any other line will be larger.
[Scatter plot: the least squares line ŷ = b1 + b2x together with an alternative line; the residuals e1, …, e4 measured from the alternative line are larger.]
Figure 3.4 Probability density functions for e and y.
[Two identical density curves: f(e) centered at 0 and f(y) centered at β1 + β2x.]
The Error Term Assumptions

1. The value of y, for each value of x, is:  y = β1 + β2x + e
2. The average value of the random error e is:  E(e) = 0
3. The variance of the random error e is:  var(e) = σ² = var(y)
4. The covariance between any pair of e's is:  cov(ei, ej) = cov(yi, yj) = 0
5. x must take at least two different values, so that x ≠ c, where c is a constant.
6. (optional) e is normally distributed with mean 0 and var(e) = σ²:  e ~ N(0, σ²)
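These assumptions can be illustrated with a small simulation; the parameter values (β1, β2, σ) and the x values below are made up purely for the sketch:

```python
import random
import statistics

# Illustrative sketch only: beta1, beta2, sigma and the x values are made-up numbers.
random.seed(0)
beta1, beta2, sigma = 3.0, 2.0, 1.5

x = [float(i % 10 + 1) for i in range(10_000)]          # x takes more than one value (assumption 5)
e = [random.gauss(0.0, sigma) for _ in x]               # e ~ N(0, sigma^2)  (assumption 6)
y = [beta1 + beta2 * xi + ei for xi, ei in zip(x, e)]   # y = beta1 + beta2*x + e  (assumption 1)

print(abs(statistics.mean(e)) < 0.05)                   # sample mean of e is near 0 (assumption 2)
print(abs(statistics.variance(e) - sigma**2) < 0.2)     # sample variance of e is near sigma^2 (assumption 3)
```

Note that var(y) = var(e) holds for each fixed value of x, so the check above compares the variance of e itself to σ².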
Unobservable Nature of the Error Term

1. Unspecified factors (explanatory variables not in the model) may be in the error term.
2. Approximation error is in the error term if the relationship between y and x is not perfectly linear.
3. Strictly unpredictable random behavior that may be unique to that observation is in the error.
Population regression values:  yt = β1 + β2xt + et
Population regression line:  E(yt|xt) = β1 + β2xt

Sample regression values:  yt = b1 + b2xt + êt
Sample regression line:  ŷt = b1 + b2xt
yt = β1 + β2xt + et

et = yt - β1 - β2xt

Minimize the error sum of squared deviations:

S(β1, β2) = Σ (yt - β1 - β2xt)²,  summed over t = 1, …, T     (3.3.4)
Minimize w.r.t. β1 and β2:

S(β1, β2) = Σ (yt - β1 - β2xt)²,  t = 1, …, T     (3.3.4)

∂S/∂β1 = -2 Σ (yt - β1 - β2xt)

∂S/∂β2 = -2 Σ xt(yt - β1 - β2xt)

Set each of these two derivatives equal to zero and solve the two equations for the two unknowns β1 and β2.
Minimize w.r.t. β1 and β2:

S(.) = Σ (yt - β1 - β2xt)²,  t = 1, …, T

[Plot: S(.) graphed against βi, a curve with its minimum at βi = bi; the slope ∂S(.)/∂βi is negative to the left of bi, equal to zero at bi, and positive to the right.]
To minimize S(.), you set the two derivatives equal to zero to get:

∂S(.)/∂β1 = -2 Σ (yt - b1 - b2xt) = 0

∂S(.)/∂β2 = -2 Σ xt(yt - b1 - b2xt) = 0

When these two terms are set to zero, β1 and β2 become b1 and b2 because they no longer represent just any values of β1 and β2, but the special values that correspond to the minimum of S(.).
-2 Σ (yt - b1 - b2xt) = 0

-2 Σ xt(yt - b1 - b2xt) = 0

Σyt - T b1 - b2 Σxt = 0

Σxtyt - b1 Σxt - b2 Σxt² = 0

T b1 + b2 Σxt = Σyt

b1 Σxt + b2 Σxt² = Σxtyt
Solve for b1 and b2 using the definitions of x̄ and ȳ:

T b1 + b2 Σxt = Σyt

b1 Σxt + b2 Σxt² = Σxtyt

b2 = (T Σxtyt - Σxt Σyt) / (T Σxt² - (Σxt)²)

b1 = ȳ - b2 x̄
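A minimal sketch of these closed-form estimates in Python; the x and y values are a small made-up data set chosen so the fit is exact, not data from the text:

```python
# Illustrative data only: y = 1 + 2x exactly, so b1 = 1 and b2 = 2.
x = [1.0, 2.0, 3.0, 4.0]
y = [3.0, 5.0, 7.0, 9.0]
T = len(x)

sx  = sum(x)                                   # Σxt
sy  = sum(y)                                   # Σyt
sxy = sum(xi * yi for xi, yi in zip(x, y))     # Σxtyt
sxx = sum(xi * xi for xi in x)                 # Σxt²

# b2 = (T Σxtyt - Σxt Σyt) / (T Σxt² - (Σxt)²)
b2 = (T * sxy - sx * sy) / (T * sxx - sx**2)
# b1 = ȳ - b2 x̄
b1 = sy / T - b2 * sx / T

print(b1, b2)  # → 1.0 2.0

# The fitted values satisfy the two normal equations:
resid = [yi - b1 - b2 * xi for xi, yi in zip(x, y)]
print(sum(resid), sum(xi * ri for xi, ri in zip(x, resid)))  # → 0.0 0.0
```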
Elasticities

η = percentage change in y / percentage change in x
  = (∆y/y) / (∆x/x)
  = (∆y/∆x)(x/y)

Using calculus, we can get the elasticity at a point:

η = lim (∆y/∆x)(x/y) = (∂y/∂x)(x/y)
    ∆x→0
Applying elasticities

E(y) = β1 + β2x

∂E(y)/∂x = β2

η = (∂E(y)/∂x) (x/E(y)) = β2 x/E(y)
Estimating elasticities

∂ŷ/∂x = b2

η̂ = (∂ŷ/∂x)(x/y) = b2 x̄/ȳ

ŷt = b1 + b2xt = 4 + 1.5 xt

x̄ = 8 = average number of years of experience
ȳ = 10 = average wage rate

η̂ = b2 x̄/ȳ = 1.5 × 8/10 = 1.2
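The elasticity at the means is a one-line computation; the numbers below are the slide's own (b2 = 1.5, x̄ = 8, ȳ = 10):

```python
# Elasticity at the point of the means: eta_hat = b2 * x_bar / y_bar
b2, x_bar, y_bar = 1.5, 8.0, 10.0
eta = b2 * x_bar / y_bar
print(eta)  # → 1.2
```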
Prediction

Estimated regression equation:

ŷt = 4 + 1.5 xt

xt = years of experience
ŷt = predicted wage rate

If xt = 2 years, then ŷt = $7.00 per hour.
If xt = 3 years, then ŷt = $8.50 per hour.
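The two predictions above follow directly from plugging xt into the estimated equation; the helper function name below is only for illustration:

```python
# Predicted wage rate from the estimated equation: y_hat = 4 + 1.5 * x
def predict_wage(years_experience):
    return 4 + 1.5 * years_experience

print(predict_wage(2))  # → 7.0
print(predict_wage(3))  # → 8.5
```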
Log-log models

ln(y) = β1 + β2 ln(x)

∂ln(y)/∂x = β2 ∂ln(x)/∂x

(1/y) ∂y/∂x = β2 (1/x)
(1/y) ∂y/∂x = β2 (1/x)

∂y/∂x = β2 (y/x)

Elasticity of y with respect to x:

η = (∂y/∂x)(x/y) = β2
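The constant-elasticity property can be checked numerically: for any point on a log-log curve, a finite-difference estimate of (∂y/∂x)(x/y) recovers β2. The parameter values below are made up for the sketch:

```python
import math

# Illustrative log-log model ln(y) = b1 + b2*ln(x); b1, b2, x0 are made-up values.
b1, b2 = 0.5, 1.8

def y(x):
    return math.exp(b1 + b2 * math.log(x))

x0 = 4.0
dx = 1e-6
dydx = (y(x0 + dx) - y(x0 - dx)) / (2 * dx)  # central finite difference for dy/dx
eta = dydx * x0 / y(x0)                      # elasticity (dy/dx)(x/y)

print(round(eta, 3))  # → 1.8, i.e. equal to b2 regardless of x0
```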
Properties of Least Squares Estimators

Chapter 4

Copyright © 1997 John Wiley & Sons, Inc. All rights reserved. Reproduction or translation of this work beyond that permitted in Section 117 of the 1976 United States Copyright Act without the express written permission of the copyright owner is unlawful. Requests for further information should be addressed to the Permissions Department, John Wiley & Sons, Inc. The purchaser may make back-up copies for his/her own use only and not for distribution or resale. The Publisher assumes no responsibility for errors, omissions, or damages caused by the use of these programs or from the use of the information contained herein.
Simple Linear Regression Model

yt = β1 + β2xt + εt

yt = household weekly food expenditures
xt = household weekly income

For a given level of xt, the expected level of food expenditures will be:

E(yt|xt) = β1 + β2xt
1. yt = β1 + β2xt + εt
2. E