Chapter 14 THE CLASSICAL TESTS

Let the null hypothesis be represented by

Ω = {θ ∈ Θ : ϕ(θ) = 0},

where θ is the vector of parameters and ϕ(θ) = 0 are the restrictions. Consequently, the Neyman ratio test is given by:

\lambda(x) = \frac{L(\hat\theta)}{L(\tilde\theta)},

where L(θ) is the Likelihood function, \hat\theta is the unrestricted and \tilde\theta the restricted ML estimator. As ln(·) is a monotonic, strictly increasing function, an equivalent test can be based on

LR = 2\ln(\lambda(x)) = 2\left[\ell(\hat\theta) - \ell(\tilde\theta)\right],

where LR is the well known Likelihood Ratio test and \ell(\theta) is the log-likelihood function.

Using a Taylor expansion of \ell(\tilde\theta) around \hat\theta and employing the Mean Value Theorem we get:

\ell(\tilde\theta) = \ell(\hat\theta) + \frac{\partial \ell(\hat\theta)}{\partial \theta'}\left(\tilde\theta - \hat\theta\right) + \frac{1}{2}\left(\tilde\theta - \hat\theta\right)'\frac{\partial^2 \ell(\theta^*)}{\partial \theta \partial \theta'}\left(\tilde\theta - \hat\theta\right),

where \left\|\theta^* - \hat\theta\right\| \le \left\|\tilde\theta - \hat\theta\right\|. Now \frac{\partial \ell(\hat\theta)}{\partial \theta} = s(\hat\theta) = 0, due to the fact that the first order conditions are satisfied by the ML estimator \hat\theta. Consequently, the LR test is given by:

LR = -\left(\tilde\theta - \hat\theta\right)'\frac{\partial^2 \ell(\theta^*)}{\partial \theta \partial \theta'}\left(\tilde\theta - \hat\theta\right).

Now we know that \hat\theta and \tilde\theta are consistent under the null, hence \theta^* \to_p \theta_0, and consequently

LR = -\sqrt{n}\left(\tilde\theta - \hat\theta\right)'\left[\frac{1}{n}\frac{\partial^2 \ell(\theta^*)}{\partial \theta \partial \theta'}\right]\sqrt{n}\left(\tilde\theta - \hat\theta\right).

Now from assumption A5 we have

\frac{1}{n}\frac{\partial^2 \ell(\theta^*)}{\partial \theta \partial \theta'} \to_p -J(\theta_0).

We can now state the following Theorem:

Theorem 52 Under the usual assumptions and under the null hypothesis we have that

LR = 2\left[\ell(\hat\theta) - \ell(\tilde\theta)\right] \to_d \chi^2_r.

Proof: The Likelihood Ratio is written as a quadratic form in an asymptotically normal vector, with a weighting matrix of the form Z_0\left(Z_0'Z_0\right)^{-1}Z_0', which is symmetric and idempotent with rank r. Hence the quadratic form converges in distribution to a \chi^2 with r degrees of freedom. Consequently, we get the result. ∎
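To make the construction concrete, here is a minimal numerical sketch (not from the original text) of the LR test for H₀: μ = 0 in an i.i.d. N(μ, σ²) sample, with both parameters estimated by ML; the seed, scale and sample size are arbitrary assumptions:

```python
import numpy as np

# Hypothetical data generated under the null H0: mu = 0 (arbitrary seed and scale)
rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=2.0, size=500)
n = x.size

def loglik(mu, sigma2, x):
    # Gaussian log-likelihood ell(theta) of an i.i.d. sample
    return -0.5 * x.size * np.log(2 * np.pi * sigma2) - np.sum((x - mu) ** 2) / (2 * sigma2)

# Unrestricted MLE (mu_hat, s2_hat) and restricted MLE under mu = 0 (s2_til)
mu_hat = x.mean()
s2_hat = np.mean((x - mu_hat) ** 2)
s2_til = np.mean(x ** 2)

LR = 2 * (loglik(mu_hat, s2_hat, x) - loglik(0.0, s2_til, x))
print(LR)  # here r = 1, so LR is compared with the chi^2_1 critical value 3.84
```

Concentrating σ² out shows LR = n ln(σ̃²/σ̂²), which is nonnegative because σ̃² ≥ σ̂² by construction.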

The Wald test is based on the idea that, if the restrictions are correct, the vector \phi(\hat\theta) should be close to zero. Expanding \phi(\hat\theta) around \phi(\theta_0) we get:

\phi(\hat\theta) = \phi(\theta_0) + \frac{\partial \phi(\theta^*)}{\partial \theta'}\left(\hat\theta - \theta_0\right) = F(\theta^*)\left(\hat\theta - \theta_0\right),

as under the null \phi(\theta_0) = 0, where F(\theta) = \partial\phi(\theta)/\partial\theta'. Hence

\sqrt{n}\,\phi(\hat\theta) = F(\theta^*)\sqrt{n}\left(\hat\theta - \theta_0\right),

and consequently, as \theta^* \to_p \theta_0,

\sqrt{n}\,\phi(\hat\theta) = F(\theta_0)\sqrt{n}\left(\hat\theta - \theta_0\right) + o_p(1).

Furthermore, recall that

\sqrt{n}\left(\hat\theta - \theta_0\right) \to_d N\left(0, J(\theta_0)^{-1}\right).

Hence,

\sqrt{n}\,\phi(\hat\theta) \to_d N\left(0, F(\theta_0) J(\theta_0)^{-1} F(\theta_0)'\right).


Let us now consider the following quadratic:

n\,\phi(\hat\theta)'\left[F(\theta_0) J(\theta_0)^{-1} F(\theta_0)'\right]^{-1}\phi(\hat\theta),

which is the square of the Mahalanobis distance of the \sqrt{n}\,\phi(\hat\theta) vector. However, the above quantity cannot be considered a statistic, as it is a function of the unknown parameter \theta_0. The Wald test is given by the above quantity when the unknown vector of parameters \theta_0 is substituted by the ML estimator \hat\theta, i.e.

W = n\,\phi(\hat\theta)'\left[F(\hat\theta)\,\hat J^{-1} F(\hat\theta)'\right]^{-1}\phi(\hat\theta),

where \hat J = J(\hat\theta) is the estimated information matrix. In case J(\theta) does not have an explicit formula, it can be substituted by a consistent estimator, e.g. by

\hat J = -\frac{1}{n}\sum_{i=1}^{n}\frac{\partial^2 \ell_i(\hat\theta)}{\partial \theta \partial \theta'},

or by the asymptotically equivalent outer product of the scores,

\hat J = \frac{1}{n}\sum_{i=1}^{n} s_i(\hat\theta)\, s_i(\hat\theta)'.

Hence the Wald statistic is given by

W = n\,\phi(\hat\theta)'\left[F(\hat\theta)\,\hat J^{-1} F(\hat\theta)'\right]^{-1}\phi(\hat\theta).

Now we can prove the following Theorem:

Theorem 53 Under the usual regularity assumptions and the null hypothesis we have that

W = n\,\phi(\hat\theta)'\left[F(\hat\theta)\,\hat J^{-1} F(\hat\theta)'\right]^{-1}\phi(\hat\theta) \to_d \chi^2_r.

Proof: For any consistent estimator \bar\theta of \theta_0 we have that

F(\bar\theta)\,\hat J^{-1} F(\bar\theta)' \to_p F(\theta_0) J(\theta_0)^{-1} F(\theta_0)',

and the result follows. ∎
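As an illustration only (the model and its numbers are not from the text), a Wald test of H₀: λ = 1 in a hypothetical i.i.d. Exp(λ) sample, where ϕ(λ) = λ − 1, F = 1 and the per-observation information is J(λ) = 1/λ², can be sketched as:

```python
import numpy as np

# Hypothetical sample from Exp(lambda) with true rate 1, so the null holds
rng = np.random.default_rng(1)
x = rng.exponential(scale=1.0, size=400)
n = x.size

lam_hat = 1.0 / x.mean()  # MLE of the rate parameter
# W = n * phi(lam_hat)' [F J(lam_hat)^{-1} F']^{-1} phi(lam_hat)
#   = n * (lam_hat - 1)^2 / lam_hat^2, since J(lambda) = 1/lambda^2 and F = 1
W = n * (lam_hat - 1.0) ** 2 / lam_hat ** 2
print(W)  # compare with the chi^2_1 critical value 3.84
```

Only the unrestricted MLE is needed, which is the practical appeal of the Wald form.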

The Lagrange Multiplier (LM) test considers the distance from zero of the estimated Lagrange Multipliers. Recall that \tilde\lambda is asymptotically normal with mean zero and asymptotic covariance matrix \left[F(\theta_0) J(\theta_0)^{-1} F(\theta_0)'\right]^{-1}, where J(\theta_0) = E[-H(\theta_0)] is the full-sample information matrix. Consequently, the square Mahalanobis distance is

\tilde\lambda'\, F(\theta_0) J(\theta_0)^{-1} F(\theta_0)'\, \tilde\lambda.

Again, the above quantity is not a statistic as it is a function of the unknown parameter \theta_0. However, we can employ the restricted ML estimate of \theta_0 to evaluate the unknown quantities, i.e. \tilde F = F(\tilde\theta) and \tilde J = J(\tilde\theta). Hence we can prove the following:

Theorem 54 Under the usual regularity assumptions and the null hypothesis we have

LM = \tilde\lambda'\, \tilde F\, \tilde J^{-1} \tilde F'\, \tilde\lambda \to_d \chi^2_r.

Proof: Again we have that, for any consistent estimator of \theta_0, such as the restricted MLE \tilde\theta,

LM = \tilde\lambda'\, \tilde F\, \tilde J^{-1} \tilde F'\, \tilde\lambda = \left(\frac{\tilde\lambda}{\sqrt{n}}\right)' P(\theta_0) \left(\frac{\tilde\lambda}{\sqrt{n}}\right) + o_p(1),

where P(\theta_0) = \operatorname{plim}\, n\, \tilde F\, \tilde J^{-1} \tilde F', and by the asymptotic distribution of the Lagrange Multipliers we get the result. ∎

Now we have that the restricted MLE satisfies the first order conditions of the Lagrangian, i.e.

s(\tilde\theta) + \tilde F' \tilde\lambda = 0 \quad\Longrightarrow\quad \tilde F' \tilde\lambda = -s(\tilde\theta).

Consequently the LM test can be expressed as:

LM = s(\tilde\theta)'\, \tilde J^{-1} s(\tilde\theta).

Now Rao has suggested finding the score vector and the information matrix of the unrestricted model and evaluating them at the restricted MLE. In this form the LM statistic is called the efficient score statistic, as it measures the distance from zero of the score vector evaluated at the restricted MLE.
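Continuing the hypothetical Exp(λ) example (an illustration, not from the text), the efficient score form needs only the restricted estimate, which H₀: λ = 1 pins down completely:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.exponential(scale=1.0, size=400)  # hypothetical data with true rate 1
n = x.size

lam0 = 1.0                    # restricted "MLE": the null fixes lambda entirely
score = n / lam0 - x.sum()    # s(lambda) = n/lambda - sum(x_i) for Exp(lambda)
info = n / lam0 ** 2          # full-sample information J(lambda) = n/lambda^2
LM = score ** 2 / info        # LM = s(theta~)' J(theta~)^{-1} s(theta~)
print(LM)  # compare with the chi^2_1 critical value 3.84
```

This is the practical advantage of the efficient score form: no unrestricted estimation is required.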

14.1 The Linear Regression

Let us consider the classical linear regression model:

y = X\beta + u,

where y is the n × 1 vector of endogenous variables, X is the n × k matrix of weakly exogenous explanatory variables, β is the k × 1 vector of mean parameters and u is the n × 1 vector of errors. Let us call the vector of parameters θ, i.e. θ = (β′, σ²)′, a (k + 1) × 1 vector. The log-likelihood function is:

\ell(\theta) = -\frac{n}{2}\ln(2\pi) - \frac{n}{2}\ln\sigma^2 - \frac{(y - X\beta)'(y - X\beta)}{2\sigma^2}.

The first order conditions are:

\frac{\partial \ell(\theta)}{\partial \beta} = \frac{X'(y - X\beta)}{\sigma^2} = 0

and

\frac{\partial \ell(\theta)}{\partial \sigma^2} = -\frac{n}{2\sigma^2} + \frac{(y - X\beta)'(y - X\beta)}{2\sigma^4} = 0.

Solving the equations we get:

\hat\beta = (X'X)^{-1}X'y, \qquad \hat\sigma^2 = \frac{\hat u'\hat u}{n}, \qquad \hat u = y - X\hat\beta.

Notice that the MLE of β is the same as the OLS estimator, something which is not true for the MLE of σ² (the unbiased OLS variance estimator divides by n − k rather than n).
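A short simulation (with arbitrary, hypothetical design choices) confirms that the ML and OLS estimators of β coincide while the variance estimators differ only by their divisors:

```python
import numpy as np

# Hypothetical design: a constant plus two standard normal regressors
rng = np.random.default_rng(3)
n, k = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
beta = np.array([1.0, 2.0, -0.5])
y = X @ beta + rng.normal(size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)  # OLS = ML estimator of beta
u_hat = y - X @ beta_hat
sigma2_ml = u_hat @ u_hat / n                 # ML estimator (divides by n, biased)
s2_ols = u_hat @ u_hat / (n - k)              # unbiased OLS estimator (divides by n - k)
print(sigma2_ml, s2_ols)                      # sigma2_ml is always the smaller of the two
```

The residuals also satisfy X′û = 0, the OLS first order conditions, which is used below when comparing restricted and unrestricted residual sums of squares.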

The Hessian is

H(\theta) = \begin{pmatrix} -\dfrac{X'X}{\sigma^2} & -\dfrac{X'u}{\sigma^4} \\[1ex] -\dfrac{u'X}{\sigma^4} & \dfrac{n}{2\sigma^4} - \dfrac{u'u}{\sigma^6} \end{pmatrix}.

Hence the Information matrix is

J(\theta) = E\left[-H(\theta)\right] = \begin{pmatrix} \dfrac{X'X}{\sigma^2} & 0 \\[1ex] 0 & \dfrac{n}{2\sigma^4} \end{pmatrix},

and the Cramer-Rao limit is

J(\theta)^{-1} = \begin{pmatrix} \sigma^2 (X'X)^{-1} & 0 \\[1ex] 0 & \dfrac{2\sigma^4}{n} \end{pmatrix}.

Notice that under normality of the errors, the OLS estimator is asymptotically efficient.

Let us now consider r linear constraints on the parameter vector β, i.e.

\phi(\beta) = Q\beta - q = 0, \qquad (14.1)

where Q is the r × k matrix of the restrictions (with r < k) and q a known vector. Let us now form the Lagrangian, i.e.

\mathcal{L} = \ell(\theta) + \lambda'\phi(\beta) = \ell(\theta) + \phi(\beta)'\lambda = \ell(\theta) + (Q\beta - q)'\lambda,

where λ is the vector of the r Lagrange Multipliers. The first order conditions are:

\frac{\partial \mathcal{L}}{\partial \beta} = \frac{\partial \ell(\theta)}{\partial \beta} + Q'\lambda = \frac{X'(y - X\beta)}{\sigma^2} + Q'\lambda = 0, \qquad (14.2)

\frac{\partial \mathcal{L}}{\partial \sigma^2} = \frac{\partial \ell(\theta)}{\partial \sigma^2} = -\frac{n}{2\sigma^2} + \frac{(y - X\beta)'(y - X\beta)}{2\sigma^4} = 0, \qquad (14.3)

and

\frac{\partial \mathcal{L}}{\partial \lambda} = Q\beta - q = 0. \qquad (14.4)

Now from (14.2) we have that

X'y = X'X\beta - \sigma^2 Q'\lambda,

and it follows that

\beta = (X'X)^{-1}X'y + \sigma^2 (X'X)^{-1}Q'\lambda = \hat\beta + \sigma^2 (X'X)^{-1}Q'\lambda. \qquad (14.5)

It follows that

Q\beta = Q\hat\beta + \sigma^2 Q(X'X)^{-1}Q'\lambda.

Now from (14.4) we have that Qβ = q. Hence we get

\lambda = -\left[\sigma^2 Q(X'X)^{-1}Q'\right]^{-1}\left(Q\hat\beta - q\right). \qquad (14.6)

Substituting out λ from (14.5) employing the above and solving for β we get:

\tilde\beta = \hat\beta - (X'X)^{-1}Q'\left[Q(X'X)^{-1}Q'\right]^{-1}\left(Q\hat\beta - q\right).

Solving (14.3) we get that

\tilde\sigma^2 = \frac{\tilde u'\tilde u}{n}, \qquad \tilde u = y - X\tilde\beta,

and from (14.6) we get:

\tilde\lambda = -\left[Q\tilde V Q'\right]^{-1}\left(Q\hat\beta - q\right), \qquad \tilde V = \tilde\sigma^2 (X'X)^{-1}.

The above three formulae give the restricted MLEs.
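The formula for β̃ can be checked numerically; the design matrix, the restriction Q and the data below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])  # hypothetical design
y = X @ np.array([1.0, 1.0, 1.0]) + rng.normal(size=n)

Q = np.array([[0.0, 1.0, -1.0]])  # one restriction: beta_2 = beta_3
q = np.array([0.0])

XtX_inv = np.linalg.inv(X.T @ X)
b_hat = XtX_inv @ X.T @ y
# Restricted MLE: b_til = b_hat - (X'X)^{-1} Q' [Q (X'X)^{-1} Q']^{-1} (Q b_hat - q)
A = XtX_inv @ Q.T @ np.linalg.inv(Q @ XtX_inv @ Q.T)
b_til = b_hat - A @ (Q @ b_hat - q)
print(Q @ b_til)  # the restriction holds exactly at the restricted estimate
```

Note that σ² cancels out of the β̃ formula, so the restricted slope estimate does not depend on the error variance.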

Now the Wald test for the linear restrictions in (14.1) is given by

W = \left(Q\hat\beta - q\right)'\left[Q\hat V Q'\right]^{-1}\left(Q\hat\beta - q\right), \qquad \hat V = \hat\sigma^2 (X'X)^{-1}.

The restricted and unrestricted residuals are given by

\tilde u = y - X\tilde\beta = \hat u + X\left(\hat\beta - \tilde\beta\right), \qquad \hat u = y - X\hat\beta,

and consequently, as X'\hat u = 0 by the OLS first order conditions, we have that

\tilde u'\tilde u = \hat u'\hat u + \left(\hat\beta - \tilde\beta\right)'X'X\left(\hat\beta - \tilde\beta\right).

It follows that

\tilde u'\tilde u - \hat u'\hat u = \left(Q\hat\beta - q\right)'\left[Q(X'X)^{-1}Q'\right]^{-1}\left(Q\hat\beta - q\right).

Hence the Wald test is given by

W = \frac{\tilde u'\tilde u - \hat u'\hat u}{\hat\sigma^2} = n\,\frac{\tilde u'\tilde u - \hat u'\hat u}{\hat u'\hat u}.

The LR test is given by

LR = 2\left[\ell(\hat\theta) - \ell(\tilde\theta)\right] = n\ln\left(\frac{\tilde u'\tilde u}{\hat u'\hat u}\right),

and the LM test is

LM = n\,\frac{\tilde u'\tilde u - \hat u'\hat u}{\tilde u'\tilde u}.

We can now state a well known result.

Theorem 55 Under the classical assumptions of the Linear Regression Model we have that

W ≥ LR ≥ LM.

Proof: The three tests can be written as

W = n(r - 1), \qquad LR = n\ln r, \qquad LM = n\left(1 - \frac{1}{r}\right),

where r = \frac{\tilde u'\tilde u}{\hat u'\hat u} \ge 1. Now we know that \ln x \ge \frac{x - 1}{x} = 1 - \frac{1}{x} for x > 0, and the result follows by considering x = r and x = 1/r. ∎
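A quick numerical check of the inequality, using a hypothetical design and one linear restriction:

```python
import numpy as np

rng = np.random.default_rng(5)
n, k = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])  # hypothetical design
y = X @ np.array([1.0, 0.3, -0.3]) + rng.normal(size=n)
Q, q = np.array([[0.0, 1.0, 1.0]]), np.array([0.0])  # restriction: beta_2 + beta_3 = 0

XtX_inv = np.linalg.inv(X.T @ X)
b_hat = XtX_inv @ X.T @ y
b_til = b_hat - XtX_inv @ Q.T @ np.linalg.solve(Q @ XtX_inv @ Q.T, Q @ b_hat - q)

ssr_u = np.sum((y - X @ b_hat) ** 2)  # unrestricted residual sum of squares
ssr_r = np.sum((y - X @ b_til) ** 2)  # restricted residual sum of squares
r = ssr_r / ssr_u                     # r >= 1 since b_hat minimizes the SSR

W, LR, LM = n * (r - 1), n * np.log(r), n * (1 - 1 / r)
print(W >= LR >= LM)  # True for any r >= 1
```

Since r ≥ 1 always holds by construction, the ordering W ≥ LR ≥ LM is guaranteed in every finite sample, not just asymptotically.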

14.2 Autocorrelation

Apply the LM test to test the hypothesis that ρ = 0 in the following model:

y_t = x_t'\beta + u_t, \qquad u_t = \rho u_{t-1} + \varepsilon_t, \qquad \varepsilon_t \overset{i.i.d.}{\sim} N\left(0, \sigma^2\right).

Discuss the advantages of this LM test over the Wald and LR tests of this hypothesis.

First notice that from u_t = \rho u_{t-1} + \varepsilon_t we get that

E(u_t) = \rho E(u_{t-1}) + E(\varepsilon_t) = \rho E(u_{t-1}),

as E(\varepsilon_t) = 0, and for |\rho| < 1 we get that

E(u_t) - \rho E(u_{t-1}) = 0 \;\Rightarrow\; E(u_t) = 0,

as E(u_t) = E(u_{t-1}), independent of t. Furthermore,

Var(u_t) = E\left(u_t^2\right) = \rho^2 E\left(u_{t-1}^2\right) + 2\rho E\left(u_{t-1}\varepsilon_t\right) + E\left(\varepsilon_t^2\right) = \rho^2 Var(u_{t-1}) + \sigma^2,

where the first equality follows from the fact that E(u_t) = 0, and the last from the fact that

E\left(u_{t-1}\varepsilon_t\right) = E\left[u_{t-1} E\left(\varepsilon_t | I_{t-1}\right)\right] = E\left[u_{t-1} \cdot 0\right] = 0,

where I_{t-1} is the information set at time t − 1, i.e. the sigma-field generated by \{\varepsilon_{t-1}, \varepsilon_{t-2}, \ldots\}. Hence

Var(u_t) = \frac{\sigma^2}{1 - \rho^2},

as Var(u_t) = Var(u_{t-1}), independent of t.

Substituting out u_t we get

y_t = x_t'\beta + \rho u_{t-1} + \varepsilon_t,

and observing that u_{t-1} = y_{t-1} - x_{t-1}'\beta, we get

y_t = x_t'\beta + \rho\left(y_{t-1} - x_{t-1}'\beta\right) + \varepsilon_t,

where by assumption the \varepsilon_t's are i.i.d. Hence the log-likelihood function is

\ell(\theta) = -\frac{T}{2}\ln(2\pi) - \frac{T}{2}\ln\sigma^2 - \frac{1}{2\sigma^2}\sum_{t=1}^{T}\left[y_t - x_t'\beta - \rho\left(y_{t-1} - x_{t-1}'\beta\right)\right]^2,

where we assume that y_0 = 0 and x_0 = 0, as we do not have any observations for t = 0. In any case, given that |\rho| < 1, the first observation will not affect the distribution of the LM test, as it is based on asymptotic theory, i.e. T → ∞. The first order conditions are:

\frac{\partial \ell(\theta)}{\partial \beta} = \frac{1}{\sigma^2}\sum_{t=1}^{T}\varepsilon_t\left(x_t - \rho x_{t-1}\right) = 0, \qquad \frac{\partial \ell(\theta)}{\partial \rho} = \frac{1}{\sigma^2}\sum_{t=1}^{T}\varepsilon_t u_{t-1} = 0, \qquad \frac{\partial \ell(\theta)}{\partial \sigma^2} = -\frac{T}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_{t=1}^{T}\varepsilon_t^2 = 0,

where \varepsilon_t = y_t - x_t'\beta - \rho\left(y_{t-1} - x_{t-1}'\beta\right) and u_{t-1} = y_{t-1} - x_{t-1}'\beta. The second derivatives are:

\frac{\partial^2 \ell}{\partial \beta \partial \beta'} = -\frac{1}{\sigma^2}\sum_{t=1}^{T}\left(x_t - \rho x_{t-1}\right)\left(x_t - \rho x_{t-1}\right)', \qquad \frac{\partial^2 \ell}{\partial \rho^2} = -\frac{1}{\sigma^2}\sum_{t=1}^{T}u_{t-1}^2,

\frac{\partial^2 \ell}{\partial \rho \partial \sigma^2} = -\frac{1}{\sigma^4}\sum_{t=1}^{T}\varepsilon_t u_{t-1}, \qquad \frac{\partial^2 \ell}{\partial \beta \partial \sigma^2} = -\frac{1}{\sigma^4}\sum_{t=1}^{T}\varepsilon_t\left(x_t - \rho x_{t-1}\right).

Notice now that the Information Matrix J is

J(\theta) = -E\left[H(\theta)\right] = \begin{pmatrix} \frac{1}{\sigma^2}\sum_{t=1}^{T} E\left[\left(x_t - \rho x_{t-1}\right)\left(x_t - \rho x_{t-1}\right)'\right] & 0 & 0 \\ 0 & \frac{T}{1-\rho^2} & 0 \\ 0 & 0 & \frac{T}{2\sigma^4} \end{pmatrix},

as E\left(\varepsilon_t u_{t-1}\right) = 0 and E\left(u_{t-1}^2\right) = \frac{\sigma^2}{1-\rho^2}, i.e. the matrix is block diagonal between β, ρ, and σ². Consequently the LM test has the form

LM = \frac{\left[\partial \ell(\tilde\theta)/\partial \rho\right]^2}{J_{\rho\rho}(\tilde\theta)}, \qquad \frac{\partial \ell(\tilde\theta)}{\partial \rho} = \frac{1}{\tilde\sigma^2}\sum_{t=1}^{T}\tilde u_t \tilde u_{t-1}, \qquad J_{\rho\rho} = \frac{T}{1-\rho^2},

with all these quantities evaluated under the null. Hence under H_0: ρ = 0 we have that

J_{\rho\rho} = T, \qquad u_t = \varepsilon_t,

i.e. there is no autocorrelation. Consequently, we can estimate β by simple OLS, as OLS and ML result in the same estimators under the null, and σ² by the ML estimator, i.e.

\tilde\sigma^2 = \frac{1}{T}\sum_{t=1}^{T}\tilde u_t^2,

where \tilde u_t = y_t - x_t'\tilde\beta = \tilde\varepsilon_t are the OLS residuals. Hence

LM = \frac{\left(\sum_{t=1}^{T}\tilde u_t \tilde u_{t-1}\big/\tilde\sigma^2\right)^2}{T} = T\left(\sum_{t=1}^{T}\tilde u_t \tilde u_{t-1}\right)^2\left(\sum_{t=1}^{T}\tilde u_t^2\right)^{-2},

which is T times the square of the first order sample autocorrelation of the OLS residuals.
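As a sketch (the regressors, seed and sample size are arbitrary assumptions), this LM statistic requires only the OLS residuals of the restricted model:

```python
import numpy as np

rng = np.random.default_rng(6)
T = 300
X = np.column_stack([np.ones(T), rng.normal(size=T)])  # hypothetical regressors
y = X @ np.array([1.0, 0.5]) + rng.normal(size=T)      # data generated with rho = 0

u = y - X @ np.linalg.solve(X.T @ X, X.T @ y)  # OLS residuals (= ML under H0)
num = np.sum(u[1:] * u[:-1])                   # sum over t of u_t * u_{t-1}
LM = T * num ** 2 / np.sum(u ** 2) ** 2        # LM = T * (sample autocorrelation)^2
print(LM)  # compare with the chi^2_1 critical value 3.84
```

This is the practical advantage of the LM test here: unlike the Wald and LR tests, it never requires estimating the model with autocorrelated errors.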


