
(1)

Copyright 1996 Lawrence C. Marsh

PowerPoint Slides

for

Undergraduate Econometrics

by

Lawrence C. Marsh

To accompany: Undergraduate Econometrics by R. Carter Hill, William E. Griffiths and George G. Judge. Publisher: John Wiley & Sons, 1997.


(2)

The Role of Econometrics in Economic Analysis

Chapter 1

Copyright © 1997 John Wiley & Sons, Inc. All rights reserved. Reproduction or translation of this work beyond that permitted in Section 117 of the 1976 United States Copyright Act without the express written permission of the copyright owner is unlawful. Request for further information should be addressed to the Permissions Department, John Wiley & Sons, Inc. The purchaser may make back-up copies for his/her own use only and not for distribution or resale. The Publisher assumes no responsibility for errors, omissions, or damages, caused by the use of these programs or from the use of the information contained herein.


(3)

The Role of Econometrics

Using Information:

1. Information from economic theory.
2. Information from economic data.


(4)

Understanding Economic Relationships:

federal budget, Dow-Jones Stock Index, trade deficit, Federal Reserve Discount Rate, capital gains tax, rent control laws, short-term treasury bills, power of labor unions, crime rate, inflation, unemployment, money supply


(5)

To use information effectively:

economic theory + economic data → economic decisions

*Econometrics* helps us combine economic theory and economic data.


(6)


Consumption, c, is some function of income, i :

c = f(i)

For applied econometric analysis this consumption function must be specified more precisely.


(7)

demand, qd, for an individual commodity:

qd = f( p, pc, ps, i )

p = own price; pc = price of complements; ps = price of substitutes; i = income

supply, qs, of an individual commodity:

qs = f( p, pc, pf )

p = own price; pc = price of competitive products; pf = price of factor inputs


(8)

How much?

Listing the variables in an economic relationship is not enough. For effective policy we must know the amount of change needed for a policy instrument to bring about the desired effect:

• By how much should the Federal Reserve raise interest rates to prevent inflation?

• By how much can the price of football tickets be increased and still fill the stadium?


(9)

Answering the How Much? question

Need to estimate parameters that are both:

1. unknown, and
2. unobservable


(10)

The Statistical Model

Average or systematic behavior over many individuals or many firms, not a single individual or single firm. Economists are concerned with the unemployment rate and not whether a particular individual gets a job.


(11)


The Statistical Model

Actual vs. Predicted Consumption:

Actual = systematic part + random error

Systematic part provides prediction, f(i), but actual will miss by random error, e.

Consumption, c, is a function, f, of income, i, with error, e:

c = f(i) + e


(12)


c = f(i) + e

We need to define f(i) in some way. Make consumption, c, a linear function of income, i:

f(i) = β1 + β2 i

The statistical model then becomes: c = β1 + β2 i + e
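As a quick illustration (added here, not in the original slides), a minimal Python sketch of this statistical model; β1, β2 and the error scale are assumed values, not from the text:

import numpy as np

# Simulate c = beta1 + beta2 * i + e; parameter values are illustrative assumptions.
rng = np.random.default_rng(0)
beta1, beta2 = 40.0, 0.12
income = rng.uniform(200, 1000, size=500)
e = rng.normal(0.0, 10.0, size=500)      # random error around the systematic part
consumption = beta1 + beta2 * income + e

# The systematic part f(i) is the prediction; e is the amount by which actual misses it.
print(consumption[:3])
print((beta1 + beta2 * income)[:3])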


(13)

The Econometric Model

y = β1 + β2 X2 + β3 X3 + e

• Dependent variable, y, is the focus of study (we predict or explain changes in the dependent variable).
• Explanatory variables, X2 and X3, help us explain observed changes in the dependent variable.


(14)


Statistical Models

Controlled (experimental) vs.

Uncontrolled (observational)

Uncontrolled experiment (econometrics) explaining consumption, y: price, X2, and income, X3, vary at the same time. Controlled experiment (“pure” science) explaining mass, y: pressure, X2, held constant when varying temperature, X3, and vice versa.


(15)

Econometric model

• economic model: economic variables and parameters.
• statistical model: sampling process with its parameters.
• data: observed values of the variables.


(16)

The Practice of Econometrics

• Uncertainty regarding an outcome.
• Relationships suggested by economic theory.
• Assumptions and hypotheses to be specified.
• Sampling process including functional form.
• Obtaining data for the analysis.
• Estimation rule with good statistical properties.
• Fit and test model using software package.
• Analyze and evaluate implications of the results.
• Problems suggest approaches for further research.


(17)

Note: the textbook uses the following symbol to mark sections with advanced material: “Skippy”


(18)

Some Basic Probability Concepts

Chapter 2



(19)


random variable:

A variable whose value is unknown until it is observed. The value of a random variable results from an experiment.

The term random variable implies the existence of some known or unknown probability distribution defined over the set of all possible values of that variable.

In contrast, an arbitrary variable does not have a probability distribution associated with its values.


(20)

Controlled experiment: values of explanatory variables are chosen with great care in accordance with an appropriate experimental design.

Uncontrolled experiment: values of explanatory variables consist of nonexperimental observations over which the analyst has no control.


(21)


discrete random variable:

A discrete random variable can take only a finite number of values, which can be counted using the positive integers.

Example: Prize money from the following lottery is a discrete random variable:

first prize: $1,000   second prize: $50   third prize: $5.75

since it has only four (a finite, countable number) possible outcomes:

$0.00; $5.75; $50.00; $1,000.00


(22)

continuous random variable:

A continuous random variable can take any real value (not just whole numbers) in at least one interval on the real line.

Examples: Gross national product (GNP), money supply, interest rates, price of eggs, household income, expenditure on clothing.


(23)


A discrete random variable that is restricted to two possible values (usually 0 and 1) is called a dummy variable (also, binary or indicator variable).

Dummy variables account for qualitative differences: gender (0=male, 1=female),

race (0=white, 1=nonwhite),

citizenship (0=U.S., 1=not U.S.), income class (0=poor, 1=rich).


(24)


A list of all of the possible values taken by a discrete random variable along with

their chances of occurring is called a probability function or probability density function (pdf).

die x f(x)

one dot 1 1/6

two dots 2 1/6

three dots 3 1/6

four dots 4 1/6

five dots 5 1/6

six dots 6 1/6


(25)

A discrete random variable X has pdf, f(x), which is the probability that X takes on the value x:

f(x) = P(X = x)

Therefore, 0 ≤ f(x) ≤ 1.

If X takes on the n values x1, x2, . . . , xn, then f(x1) + f(x2) + . . . + f(xn) = 1.
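A few lines of Python checking these pdf properties for the fair-die example of the previous slide (pure illustration):

from fractions import Fraction

# pdf for one roll of a fair die: f(x) = 1/6 for x = 1..6
pdf = {x: Fraction(1, 6) for x in range(1, 7)}

assert all(0 <= p <= 1 for p in pdf.values())   # each f(x) lies in [0, 1]
assert sum(pdf.values()) == 1                   # the probabilities sum to one
print(pdf[3])                                   # P(X = 3) = 1/6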


(26)

Probability, f(x), for a discrete random variable, X, can be represented by height:

[Figure: bar chart of f(x) for x = 0, 1, 2, 3, where X = number, on the Dean’s List, of three roommates; the bar heights shown are 0.2, 0.4, 0.1 and 0.3.]

(27)

A continuous random variable uses area under a curve rather than the height, f(x), to represent probability:

[Figure: density f(x) of per capita income, X, in the United States, with the points $34,000 and $55,000 marked; the red area is 0.1324 and the green area is 0.8676.]


(28)

Since a continuous random variable has an uncountably infinite number of values, the probability of any one of them occurring is zero:

P[ X = a ] = P[ a ≤ X ≤ a ] = 0

Probability is represented by area, and height alone has no area. An interval for X is needed to get an area under the curve.


(29)

The area under a curve is the integral of the equation that generates the curve:

P[ a < X < b ] = ∫_a^b f(x) dx

For continuous random variables it is the integral of f(x), and not f(x) itself, which defines the area and, therefore, the probability.


(30)

Rules of Summation

Rule 1: Σ_{i=1}^n xi = x1 + x2 + . . . + xn

Rule 2: Σ_{i=1}^n a xi = a Σ_{i=1}^n xi

Rule 3: Σ_{i=1}^n (xi + yi) = Σ_{i=1}^n xi + Σ_{i=1}^n yi

Note that summation is a linear operator, which means it operates term by term.


(31)

Rules of Summation (continued)

Rule 4: Σ_{i=1}^n (a xi + b yi) = a Σ_{i=1}^n xi + b Σ_{i=1}^n yi

Rule 5: x̄ = (1/n) Σ_{i=1}^n xi = (x1 + x2 + . . . + xn) / n

The definition of x̄ as given in Rule 5 implies the following important fact:

Σ_{i=1}^n (xi − x̄) = 0
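A quick numerical illustration of this fact, on made-up data:

import numpy as np

x = np.array([3.0, 7.0, 1.0, 9.0])   # illustrative sample
xbar = x.sum() / len(x)              # Rule 5: the sample mean

# Deviations from the mean always sum to zero (up to rounding).
print((x - xbar).sum())              # 0.0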


(32)

Rules of Summation (continued)

Rule 6: Σ_{i=1}^n f(xi) = f(x1) + f(x2) + . . . + f(xn)

Notation: Σ_x f(x) = Σ_i f(xi) = Σ_{i=1}^n f(xi)

Rule 7: Σ_{i=1}^n Σ_{j=1}^m f(xi, yj) = Σ_{i=1}^n [ f(xi, y1) + f(xi, y2) + . . . + f(xi, ym) ]

The order of summation does not matter:

Σ_{i=1}^n Σ_{j=1}^m f(xi, yj) = Σ_{j=1}^m Σ_{i=1}^n f(xi, yj)


(33)

The Mean of a Random Variable

The mean or arithmetic average of a random variable is its mathematical expectation or expected value, EX.


(34)


Expected Value

There are two entirely different, but mathematically equivalent, ways of determining the expected value:

1. Empirically:

The expected value of a random variable, X, is the average value of the random variable in an infinite number of repetitions of the experiment.

In other words, draw an infinite number of samples, and average the values of X that you get.


(35)


Expected Value

2. Analytically:

The expected value of a discrete random variable, X, is determined by weighting all the possible values of X by the corresponding probability density function values, f(x), and summing them up. In other words:

E[X] = x1 f(x1) + x2 f(x2) + . . . + xn f(xn)


(36)


In the empirical case when the

sample goes to infinity the values

of X occur with a frequency

equal to the corresponding f(x)

in the analytical expression.

As sample size goes to infinity, the

empirical and analytical methods

will produce the same value.


(37)

Empirical (sample) mean:

x̄ = (1/n) Σ_{i=1}^n xi

where n is the number of sample observations.

Analytical mean:

E[X] = Σ_{i=1}^n xi f(xi)

where n is the number of possible values of xi.

Notice how the meaning of n changes.


(38)

E[X] = Σ_{i=1}^n xi f(xi)

The expected value of X-squared:

E[X²] = Σ_{i=1}^n xi² f(xi)

The expected value of X-cubed:

E[X³] = Σ_{i=1}^n xi³ f(xi)

It is important to notice that f(xi) does not change!


(39)

EX = 0(.1) + 1(.3) + 2(.3) + 3(.2) + 4(.1) = 1.9

EX² = 0²(.1) + 1²(.3) + 2²(.3) + 3²(.2) + 4²(.1)
    = 0 + .3 + 1.2 + 1.8 + 1.6 = 4.9

EX³ = 0³(.1) + 1³(.3) + 2³(.3) + 3³(.2) + 4³(.1)
    = 0 + .3 + 2.4 + 5.4 + 6.4 = 14.5
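The same three moments computed from the pdf above, as a small Python sketch:

import numpy as np

x = np.array([0, 1, 2, 3, 4])
f = np.array([0.1, 0.3, 0.3, 0.2, 0.1])   # pdf from the slide

# E[g(X)] = sum of g(x_i) * f(x_i); f(x_i) never changes.
print((x * f).sum())      # EX   = 1.9
print((x**2 * f).sum())   # EX^2 = 4.9
print((x**3 * f).sum())   # EX^3 = 14.5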


(40)

E[g(X)] = Σ_{i=1}^n g(xi) f(xi)

If g(X) = g1(X) + g2(X), then

E[g(X)] = Σ_{i=1}^n [ g1(xi) + g2(xi) ] f(xi)

E[g(X)] = Σ_{i=1}^n g1(xi) f(xi) + Σ_{i=1}^n g2(xi) f(xi)

E[g(X)] = E[g1(X)] + E[g2(X)]


(41)

Adding and Subtracting Random Variables

E(X + Y) = E(X) + E(Y)

E(X − Y) = E(X) − E(Y)


(42)

Adding a constant to a variable will add that constant to its expected value:

E(X + a) = E(X) + a

Multiplying by a constant will multiply its expected value by that constant:

E(bX) = b E(X)


(43)

var(X) = average squared deviations around the mean of X.

var(X) = expected value of the squared deviations around the expected value of X.

var(X) = E[(X − EX)²]


(44)

var(X) = E[(X − EX)²]
       = E[X² − 2X·EX + (EX)²]
       = E(X²) − 2·EX·EX + (EX)²
       = E(X²) − 2(EX)² + (EX)²
       = E(X²) − (EX)²

So:

var(X) = E[(X − EX)²]
var(X) = E(X²) − (EX)²


(45)

The variance of a discrete random variable, X:

var(X) = Σ_{i=1}^n (xi − EX)² f(xi)

The standard deviation is the square root of the variance.


(46)

Calculate the variance for a discrete random variable, X:

xi   f(xi)   xi − EX          (xi − EX)² f(xi)
2    .1      2 − 4.3 = −2.3   5.29(.1) = .529
3    .3      3 − 4.3 = −1.3   1.69(.3) = .507
4    .1      4 − 4.3 =  −.3    .09(.1) = .009
5    .2      5 − 4.3 =   .7    .49(.2) = .098
6    .3      6 − 4.3 =  1.7   2.89(.3) = .867

EX = Σ_{i=1}^n xi f(xi) = .2 + .9 + .4 + 1.0 + 1.8 = 4.3

var(X) = Σ_{i=1}^n (xi − EX)² f(xi) = .529 + .507 + .009 + .098 + .867 = 2.01
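The table’s calculation in a few lines of Python (illustration only):

import numpy as np

x = np.array([2, 3, 4, 5, 6])
f = np.array([0.1, 0.3, 0.1, 0.2, 0.3])   # pdf from the table

ex = (x * f).sum()               # EX = 4.3
var = ((x - ex)**2 * f).sum()    # var(X) = 2.01
print(ex, var)

# Equivalent shortcut from the earlier slide: var(X) = E(X^2) - (EX)^2
print((x**2 * f).sum() - ex**2)  # also 2.01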


(47)

Z = a + cX

var(Z) = var(a + cX) = E[ ( (a + cX) − E(a + cX) )² ] = c² var(X)

var(a + cX) = c² var(X)


(48)

A joint probability density function, f(x,y), provides the probabilities associated with the joint occurrence of all of the possible pairs of X and Y.


(49)

Survey of College City, NY

joint pdf f(x,y), where X = vacation homes owned and Y = college grads in household:

            Y = 1           Y = 2
X = 0    f(0,1) = .45    f(0,2) = .15
X = 1    f(1,1) = .05    f(1,2) = .35


(50)

Calculating the expected value of functions of two random variables:

E[g(X,Y)] = Σ_i Σ_j g(xi, yj) f(xi, yj)

E(XY) = Σ_i Σ_j xi yj f(xi, yj)

E(XY) = (0)(1)(.45) + (0)(2)(.15) + (1)(1)(.05) + (1)(2)(.35) = .75


(51)

The marginal probability density functions, f(x) and f(y), for discrete random variables can be obtained by summing the f(x,y): over the values of Y to obtain f(x), and over the values of X to obtain f(y).

f(xi) = Σ_j f(xi, yj)        f(yj) = Σ_i f(xi, yj)


(52)

marginal pdfs:

                      Y = 1    Y = 2    marginal pdf for X
X = 0                  .45      .15     f(X = 0) = .60
X = 1                  .05      .35     f(X = 1) = .40
marginal pdf for Y:    .50      .50
                    f(Y = 1)  f(Y = 2)


(53)

The conditional probability density functions, of X given Y = y, f(x|y), and of Y given X = x, f(y|x), are obtained by dividing f(x,y) by f(y) to get f(x|y), and by f(x) to get f(y|x):

f(x|y) = f(x,y) / f(y)        f(y|x) = f(x,y) / f(x)


(54)

conditional pdfs (from the joint pdf .45, .15, .05, .35 and the marginals .60, .40, .50, .50):

f(Y=1|X=0) = .45/.60 = .75      f(Y=2|X=0) = .15/.60 = .25
f(Y=1|X=1) = .05/.40 = .125     f(Y=2|X=1) = .35/.40 = .875
f(X=0|Y=1) = .45/.50 = .90      f(X=1|Y=1) = .05/.50 = .10
f(X=0|Y=2) = .15/.50 = .30      f(X=1|Y=2) = .35/.50 = .70


(55)

X and Y are independent random variables if their joint pdf, f(x,y), is the product of their respective marginal pdfs, f(x) and f(y):

f(xi, yj) = f(xi) f(yj)

For independence this must hold for all pairs of i and j.


(56)

not independent:

                      Y = 1    Y = 2    marginal pdf for X
X = 0                  .45      .15     f(X = 0) = .60
X = 1                  .05      .35     f(X = 1) = .40
marginal pdf for Y:    .50      .50

The calculations below show the numbers required to have independence:

f(X=0) f(Y=1) = .50 × .60 = .30    f(X=0) f(Y=2) = .50 × .60 = .30
f(X=1) f(Y=1) = .50 × .40 = .20    f(X=1) f(Y=2) = .50 × .40 = .20

The joint probabilities do not equal these products, so X and Y are not independent.


(57)

The covariance between two random variables, X and Y, measures the linear association between them:

cov(X,Y) = E[(X − EX)(Y − EY)]

Note that variance is a special case of covariance:

cov(X,X) = var(X) = E[(X − EX)²]


(58)

cov(X,Y) = E[(X − EX)(Y − EY)]
         = E[XY − X·EY − Y·EX + EX·EY]
         = E(XY) − EX·EY − EY·EX + EX·EY
         = E(XY) − 2·EX·EY + EX·EY
         = E(XY) − EX·EY

cov(X,Y) = E[(X − EX)(Y − EY)] = E(XY) − EX·EY


(59)

covariance:

            Y = 1    Y = 2
X = 0        .45      .15     f(X = 0) = .60
X = 1        .05      .35     f(X = 1) = .40
             .50      .50

EX = 0(.60) + 1(.40) = .40
EY = 1(.50) + 2(.50) = 1.50

E(XY) = (0)(1)(.45) + (0)(2)(.15) + (1)(1)(.05) + (1)(2)(.35) = .75

cov(X,Y) = E(XY) − EX·EY = .75 − (.40)(1.50) = .75 − .60 = .15


(60)

The correlation between two random variables X and Y is their covariance divided by the square roots of their respective variances:

ρ(X,Y) = cov(X,Y) / √( var(X) var(Y) )

Correlation is a pure number falling between −1 and 1.


(61)

correlation:

            Y = 1    Y = 2
X = 0        .45      .15     f(X = 0) = .60
X = 1        .05      .35     f(X = 1) = .40
             .50      .50

EX = .40    EY = 1.50    cov(X,Y) = .15

E(X²) = 0²(.60) + 1²(.40) = .40
var(X) = E(X²) − (EX)² = .40 − (.40)² = .24

E(Y²) = 1²(.50) + 2²(.50) = .50 + 2.0 = 2.50
var(Y) = E(Y²) − (EY)² = 2.50 − (1.50)² = .25

ρ(X,Y) = cov(X,Y) / √( var(X) var(Y) ) = .15 / √( (.24)(.25) ) = .61
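A sketch that reproduces all of the College City calculations from the joint pdf (illustration, not from the text):

import numpy as np

# joint pdf f(x,y): rows X = 0,1; columns Y = 1,2
f = np.array([[0.45, 0.15],
              [0.05, 0.35]])
x = np.array([0, 1])
y = np.array([1, 2])

fx = f.sum(axis=1)                  # marginal pdf for X: [.60, .40]
fy = f.sum(axis=0)                  # marginal pdf for Y: [.50, .50]
ex, ey = (x * fx).sum(), (y * fy).sum()
exy = (np.outer(x, y) * f).sum()    # E(XY) = .75
cov = exy - ex * ey                 # .15
varx = (x**2 * fx).sum() - ex**2    # .24
vary = (y**2 * fy).sum() - ey**2    # .25
rho = cov / np.sqrt(varx * vary)    # .61
print(fx, fy, cov, round(rho, 2))

# Independence check: does f(x,y) = f(x) f(y) hold for every cell?
print(np.allclose(f, np.outer(fx, fy)))   # False: not independent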


(62)

Zero Covariance & Correlation

Independent random variables have zero covariance and, therefore, zero correlation. The converse is not true.


(63)

The expected value of the weighted sum of random variables is the sum of the expectations of the individual terms. Since expectation is a linear operator, it can be applied term by term.

E[c1X + c2Y] = c1 EX + c2 EY

In general, for random variables X1, . . . , Xn:

E[c1X1 + . . . + cnXn] = c1 EX1 + . . . + cn EXn


(64)

The variance of a weighted sum of random variables is the sum of the variances, each times the square of the weight, plus twice the covariances of all the random variables times the products of their weights.

Weighted sum of random variables:

var(c1X + c2Y) = c1² var(X) + c2² var(Y) + 2 c1 c2 cov(X,Y)

Weighted difference of random variables:

var(c1X − c2Y) = c1² var(X) + c2² var(Y) − 2 c1 c2 cov(X,Y)
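A Monte Carlo check of the weighted-difference formula; the construction of the correlated pair and the weights are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
x = rng.normal(0, 1, n)
y = 0.5 * x + rng.normal(0, 1, n)   # correlated with x by construction
c1, c2 = 2.0, 3.0                   # assumed weights

lhs = np.var(c1 * x - c2 * y)
rhs = c1**2 * np.var(x) + c2**2 * np.var(y) - 2 * c1 * c2 * np.cov(x, y)[0, 1]
print(lhs, rhs)                     # nearly equal for a large sample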


(65)

The Normal Distribution

Y ~ N(β, σ²)

f(y) = ( 1 / √(2πσ²) ) exp( −(y − β)² / (2σ²) )

[Figure: bell-shaped density f(y) centered at β.]

(66)

The Standardized Normal

Z = (y − β)/σ

Z ~ N(0, 1)

f(z) = ( 1 / √(2π) ) exp( −z² / 2 )


(67)

P[ Y > a ] = P[ (Y − β)/σ > (a − β)/σ ] = P[ Z > (a − β)/σ ]

[Figure: normal density f(y) centered at β, with the area to the right of a shaded.]


(68)

Y ~ N(β, σ²)

P[ a < Y < b ] = P[ (a − β)/σ < (Y − β)/σ < (b − β)/σ ]
             = P[ (a − β)/σ < Z < (b − β)/σ ]

[Figure: normal density f(y) centered at β, with the area between a and b shaded.]
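Standardization in code, using scipy’s normal cdf; β, σ, a and b are assumed values for illustration:

from scipy.stats import norm

beta, sigma = 100.0, 15.0   # assumed mean and standard deviation
a, b = 110.0, 130.0

# P[a < Y < b] via standardization: P[(a-beta)/sigma < Z < (b-beta)/sigma]
za, zb = (a - beta) / sigma, (b - beta) / sigma
print(norm.cdf(zb) - norm.cdf(za))

# Same answer without standardizing by hand:
print(norm.cdf(b, loc=beta, scale=sigma) - norm.cdf(a, loc=beta, scale=sigma))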


(69)

Linear combinations of jointly normally distributed random variables are themselves normally distributed.

Y1 ~ N(β1, σ1²), Y2 ~ N(β2, σ2²), . . . , Yn ~ N(βn, σn²)

W = c1Y1 + c2Y2 + . . . + cnYn

W ~ N[ E(W), var(W) ]


(70)

Chi-Square

If Z1, Z2, . . . , Zm denote m independent N(0,1) random variables, and V = Z1² + Z2² + . . . + Zm², then V ~ χ²(m).

V is chi-square with m degrees of freedom.

mean: E[V] = E[χ²(m)] = m

variance: var[V] = var[χ²(m)] = 2m


(71)

If Z ~ N(0,1) and V ~ χ²(m), and if Z and V are independent, then

t = Z / √(V/m) ~ t(m)

t is student-t with m degrees of freedom.

mean: E[t] = E[t(m)] = 0 (symmetric about zero)

variance: var[t] = var[t(m)] = m / (m − 2)


(72)

If V1 ~ χ²(m1) and V2 ~ χ²(m2), and if V1 and V2 are independent, then

F = (V1/m1) / (V2/m2) ~ F(m1, m2)

F is an F statistic with m1 numerator degrees of freedom and m2 denominator degrees of freedom.
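A simulation sketch of these constructions from standard normals (m is an assumed value):

import numpy as np

rng = np.random.default_rng(2)
m, n = 5, 1_000_000
Z = rng.normal(0, 1, size=(n, m))

V = (Z**2).sum(axis=1)      # V = Z1^2 + ... + Zm^2 ~ chi-square(m)
print(V.mean(), V.var())    # close to m = 5 and 2m = 10

z = rng.normal(0, 1, n)     # independent of V
t = z / np.sqrt(V / m)      # ~ t(m)
print(t.mean(), t.var())    # close to 0 and m/(m-2) = 5/3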


(73)

The Simple Linear Regression Model

Chapter 3



(74)

Purpose of Regression Analysis

1. Estimate a relationship among economic variables, such as y = f(x).

2. Forecast or predict the value of one variable, y, based on the value of another variable, x.


(75)


Weekly Food Expenditures

y = dollars spent each week on food items. x = consumer’s weekly income.

The relationship between x and the expected value of y , given x, might be linear:

E(y|x) = β1 + β2 x


(76)

Figure 3.1a Probability distribution f(y|x=480) of food expenditures given income x = $480. [Figure: density of y centered at µy|x=480.]


(77)

Figure 3.1b Probability distribution of food expenditures given income x = $480 and x = $800. [Figure: two densities, f(y|x=480) and f(y|x=800), centered at µy|x=480 and µy|x=800.]


(78)

Figure 3.2 The Economic Model: a linear relationship between average expenditure on food and income. [Figure: E(y|x) = β1 + β2 x plotted against income x, with intercept β1 and slope β2 = ∆E(y|x)/∆x.]


(79)

Figure 3.3 The probability density function for yt at two levels of household income, xt (Homoskedastic Case). [Figure: expenditure densities f(yt) at x1 = 480 and x2 = 800, with equal spread at each income level.]


(80)

Figure 3.3+ The variance of yt increases as household income, xt, increases (Heteroskedastic Case). [Figure: expenditure densities f(yt) at x1, x2, x3, with increasing spread.]


(81)


Assumptions of the Simple Linear Regression Model - I

1. The average value of y, given x, is given by the linear regression:

E(y) = β1 + β2x

2. For each value of x, the values of y are

distributed around their mean with variance:

var(y) = σ2

3. The values of y are uncorrelated, having zero covariance and thus no linear relationship:

cov(yi ,yj) = 0

4. The variable x must take at least two different values, so that x ≠ c, where c is a constant.


(82)

One more assumption that is often used in practice but is not required for least squares:

5. (optional) The values of y are normally distributed about their mean for each value of x:

y ~ N[(β1 + β2 x), σ²]


(83)


The Error Term

y is a random variable composed of two parts:

I. Systematic component: E(y) = β1 + β2x

This is the mean of y.

II. Random component: e = y - E(y) = y - β1 - β2x

This is called the random error.

Together E(y) and e form the model:

y = β1 + β2x + e


(84)

Figure 3.5 The relationship among y, e and the true regression line. [Figure: data points (x1,y1), . . . , (x4,y4) scattered around the line E(y) = β1 + β2 x, with vertical errors e1, e2, e3, e4.]


(85)

Figure 3.7a The relationship among y, e and the fitted regression line. [Figure: the same data points around the fitted line ŷ = b1 + b2 x, with residuals ê1, . . . , ê4 and fitted values ŷ1, . . . , ŷ4.]


(86)

Figure 3.7b The sum of squared residuals from any other line will be larger. [Figure: the fitted line ŷ = b1 + b2 x compared with another line ŷ* = b1* + b2* x and its residuals ê1*, . . . , ê4*.]


(87)

Figure 3.4 Probability density function for e and y. [Figure: the density of e centered at 0 and the density of y centered at β1 + β2 x, with identical shapes.]


(88)


The Error Term Assumptions

1. The value of y, for each value of x, is

y = β1 + β2x + e

2. The average value of the random error e is:

E(e) = 0

3. The variance of the random error e is:

var(e) = σ2 = var(y)

4. The covariance between any pair of e’s is:

cov(ei ,ej) = cov(yi ,yj) = 0

5. x must take at least two different values so that x ≠ c, where c is a constant.

6. (optional) e is normally distributed with mean 0 and var(e) = σ²: e ~ N(0, σ²)


(89)

Unobservable Nature of the Error Term

1. Unspecified factors / explanatory variables, not in the model, may be in the error term.
2. Approximation error is in the error term if the relationship between y and x is not exactly a perfectly linear relationship.
3. Strictly unpredictable random behavior that may be unique to that observation is in the error.


(90)

Population regression values:  yt = β1 + β2 xt + et

Population regression line:  E(yt|xt) = β1 + β2 xt

Sample regression values:  yt = b1 + b2 xt + êt

Sample regression line:  ŷt = b1 + b2 xt


(91)

yt = β1 + β2 xt + et

et = yt − β1 − β2 xt

Minimize the error sum of squared deviations:

S(β1, β2) = Σ_{t=1}^T ( yt − β1 − β2 xt )²    (3.3.4)


(92)

Minimize w.r.t. β1 and β2:

S(β1, β2) = Σ_{t=1}^T ( yt − β1 − β2 xt )²    (3.3.4)

∂S(.)/∂β1 = −2 Σ ( yt − β1 − β2 xt )

∂S(.)/∂β2 = −2 Σ xt ( yt − β1 − β2 xt )

Set each of these two derivatives equal to zero and solve these two equations for the two unknowns: β1 and β2.


(93)

Minimize w.r.t. β1 and β2:

S(.) = Σ_{t=1}^T ( yt − β1 − β2 xt )²

[Figure: S(.) plotted against βi, a parabola with its minimum at bi; the slope ∂S(.)/∂βi is negative to the left of bi, positive to the right, and zero at bi.]


(94)

To minimize S(.), you set the two derivatives equal to zero to get:

∂S(.)/∂β1 = −2 Σ ( yt − b1 − b2 xt ) = 0

∂S(.)/∂β2 = −2 Σ xt ( yt − b1 − b2 xt ) = 0

When these two terms are set to zero, β1 and β2 become b1 and b2 because they no longer represent just any value of β1 and β2 but the special values that correspond to the minimum of S(.).


(95)

−2 Σ ( yt − b1 − b2 xt ) = 0

−2 Σ xt ( yt − b1 − b2 xt ) = 0

Σ yt − T b1 − b2 Σ xt = 0

Σ xt yt − b1 Σ xt − b2 Σ xt² = 0

T b1 + b2 Σ xt = Σ yt

b1 Σ xt + b2 Σ xt² = Σ xt yt


(96)

Solve for b1 and b2 using the definitions of x̄ and ȳ:

T b1 + b2 Σ xt = Σ yt

b1 Σ xt + b2 Σ xt² = Σ xt yt

b2 = ( T Σ xt yt − Σ xt Σ yt ) / ( T Σ xt² − (Σ xt)² )

b1 = ȳ − b2 x̄
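These formulas in Python on made-up data; numpy’s polyfit cross-checks the result:

import numpy as np

rng = np.random.default_rng(3)
T = 40
x = rng.uniform(0, 10, T)                 # illustrative data
y = 4.0 + 1.5 * x + rng.normal(0, 1, T)   # true line echoes the slides' example

# Least squares formulas from this slide:
b2 = (T * (x * y).sum() - x.sum() * y.sum()) / (T * (x**2).sum() - x.sum()**2)
b1 = y.mean() - b2 * x.mean()
print(b1, b2)
print(np.polyfit(x, y, 1))   # same answer: [slope, intercept]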


(97)

elasticities

η = percentage change in y / percentage change in x = (∆y/y) / (∆x/x) = (∆y/∆x)(x/y)

Using calculus, we can get the elasticity at a point:

η = lim_{∆x→0} (∆y/∆x)(x/y) = (∂y/∂x)(x/y)


(98)

applying elasticities

E(y) = β1 + β2 x

∂E(y)/∂x = β2

η = ( ∂E(y)/∂x )( x / E(y) ) = β2 ( x / E(y) )

(99)

estimating elasticities

η̂ = b2 ( x̄ / ȳ )   where   ŷt = b1 + b2 xt = 4 + 1.5 xt

x̄ = 8 = average number of years of experience
ȳ = $10 = average wage rate

η̂ = b2 ( x̄ / ȳ ) = 1.5 × 8/10 = 1.2
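The elasticity estimate computed directly from the slide’s numbers:

b2 = 1.5                # estimated slope
x_bar, y_bar = 8, 10    # average experience (years) and average wage ($)

eta_hat = b2 * x_bar / y_bar
print(eta_hat)          # 1.2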


(100)

Prediction

Estimated regression equation:

ŷt = 4 + 1.5 xt

xt = years of experience
ŷt = predicted wage rate

If xt = 2 years, then ŷt = $7.00 per hour.
If xt = 3 years, then ŷt = $8.50 per hour.
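And the predictions in code (values from the slide):

def predict_wage(years):
    # Estimated regression equation from the slide: y-hat = 4 + 1.5 x
    return 4 + 1.5 * years

print(predict_wage(2))  # 7.0 dollars per hour
print(predict_wage(3))  # 8.5 dollars per hour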


(1)

Data from Surveys

The survey process has four distinct aspects:

i) identify the population of interest.
ii) design and select the sample.
iii) collect the information.
iv) data reduction, estimation and inference.


(2)

Controlled Experiments

Controlled experiments were done on these topics:

1. Labor force participation: negative income tax: guaranteed minimum income experiment.
2. National cash housing allowance experiment: impact on demand and supply of housing.
3. Health insurance: medical cost reduction: sensitivity of income groups to price change.
4. Peak-load pricing and electricity use: daily use pattern of residential customers.


(3)

Economic Data Problems

I. poor implicit experimental design
(i) collinear explanatory variables.
(ii) measurement errors.

II. specification inconsistent with theory
(i) wrong level of aggregation.
(ii) missing observations or variables.
(iii) unobserved heterogeneity.


(4)

Selecting a Topic

General tips for selecting a research topic:

• “What am I interested in?”
• Well-defined, relatively simple topic.
• Ask prof for ideas and references.
• Journal of Economic Literature (ECONLIT)
• Make sure appropriate data are available.
• Avoid extremely difficult econometrics.
• Plan your work and work your plan.


(5)

Writing an Abstract

An abstract of less than 500 words should include:

(i) concise statement of the problem.
(ii) key references to available information.
(iii) description of research design including:
    (a) economic model
    (b) statistical model
    (c) data sources
    (d) estimation, testing and prediction
(iv) contribution of the work


(6)


Research Report Format

1. Statement of the Problem.

2. Review of the Literature.

3. The Economic Model.

4. The Statistical Model.

5. The Data.

6. Estimation and Inference Procedures.

7. Empirical Results and Conclusions.

8. Possible Extensions and Limitations.

9. Acknowledgments.

10. References.