As noted earlier, given the assumptions of the classical linear regression model, the least-squares estimates possess some ideal or optimum properties. These properties are contained in the well-known Gauss-Markov theorem. To understand this theorem, we need to consider the best linear unbiasedness property of an estimator. The OLS estimator is said to be a best linear unbiased estimator (BLUE) if the following hold (Brooks, 2002):
1. It is linear, that is, a linear function of a random variable, such as the dependent variable Y in the regression model.
2. It is unbiased, that is, its average or expected value, $E(\hat{\beta}_2)$, is equal to the true value, $\beta_2$ (as the simulation sketch below illustrates).
3. It has minimum variance in the class of all such linear unbiased estimators; an unbiased estimator with the least variance is known as an efficient estimator.
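To make the unbiasedness property concrete, the following is a minimal Monte Carlo sketch in Python (illustrative only; the thesis itself works in EViews 5, and the true coefficients and sample sizes here are assumed). Averaged over many simulated samples, the OLS slope estimate comes out close to the true $\beta_2$.

```python
# Minimal simulation sketch (illustrative, not the thesis' data):
# across many samples, the average OLS slope estimate approaches
# the true beta_2, demonstrating unbiasedness.
import numpy as np

rng = np.random.default_rng(0)
beta1, beta2 = 1.0, 2.0   # assumed true coefficients
n, reps = 100, 5000       # assumed sample size and replication count

estimates = []
for _ in range(reps):
    x = rng.normal(size=n)
    u = rng.normal(size=n)            # homoscedastic disturbances
    y = beta1 + beta2 * x + u
    # OLS slope for a single regressor: cov(x, y) / var(x)
    b2 = np.cov(x, y, bias=True)[0, 1] / x.var()
    estimates.append(b2)

print(np.mean(estimates))  # close to 2.0, i.e., E(beta2_hat) = beta2
```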
The classical assumption tests are needed to ensure that the regression model is the best estimator, that is, BLUE. The classical assumption tests are also used to detect any violation of the classical linear model. The tests used are the Heteroscedasticity Test, the Autocorrelation Test, and the Multicollinearity Test.
a. Heteroscedasticity Test
One of the important assumptions of the classical linear regression model is that the variance of each disturbance term $u_i$, conditional on the chosen values of the explanatory variables, is some constant number equal to $\sigma^2$. This is the assumption of homoscedasticity, or equal (homo) spread (scedasticity). When the variance of each disturbance $u_i$ is not constant, however, there is heteroscedasticity. Still according to Gujarati in his book Basic Econometrics, the consequences of heteroscedasticity in the regression model are as follows:
1. The estimator produced will still be consistent, but it will no longer be efficient; that is, there exist other linear unbiased estimators with smaller variance than the estimator produced by a regression model that contains heteroscedasticity.
2. The variances estimated by the linear regression model are no longer accurate, which causes the hypothesis tests to become inaccurate.
In short, if we persist in using the usual testing procedures despite heteroscedasticity, whatever conclusions we draw or inferences we make may be very misleading. To detect whether heteroscedasticity is present in the data, the researcher will conduct a formal test using the White Test method. The reason the researcher uses White's General Heteroscedasticity Test is that it does not rely on the normality assumption and is easy to implement. The White test proceeds as follows:
Step 1. Given the data, we estimate the regression model and obtain the residuals $\hat{u}_i$:

$$Y_i = \beta_1 + \beta_2 X_{2i} + \beta_3 X_{3i} + u_i \qquad (3.13)$$

Step 2.
We then run the following auxiliary regression:
$$\hat{u}_i^2 = \alpha_1 + \alpha_2 X_{2i} + \alpha_3 X_{3i} + \alpha_4 X_{2i}^2 + \alpha_5 X_{3i}^2 + \alpha_6 X_{2i} X_{3i} + v_i \qquad (3.14)$$

Step 3.
Formulate the hypothesis test:

$H_0$: There is no heteroscedasticity
$H_a$: There is heteroscedasticity

Under the null hypothesis that there is no heteroscedasticity, it can be shown that the sample size $n$ times the $R^2$ obtained from the auxiliary regression asymptotically follows the chi-square distribution with df equal to the number of regressors (excluding the constant term) in the auxiliary regression. That is,
$$n \cdot R^2 \overset{asy}{\sim} \chi^2_{df} \qquad (3.15)$$

Step 4.
If the chi-square value obtained in (3.15) exceeds the critical chi-square value at the chosen level of significance, the conclusion is that there is heteroscedasticity. If it does not exceed the critical chi-square value, there is no heteroscedasticity, which is to say that in the auxiliary regression (3.14) all the slope coefficients are jointly zero ($\alpha_2 = \alpha_3 = \cdots = \alpha_6 = 0$).
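For reference, the same four steps can be run outside EViews. Below is a hedged Python sketch using statsmodels' `het_white`, which performs the auxiliary regression (3.14) and the $n \cdot R^2$ chi-square test (3.15) internally; the data and variable names here are assumed for illustration.

```python
# Sketch of the White test with statsmodels (illustrative data;
# the thesis itself uses EViews 5).
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_white

rng = np.random.default_rng(1)
n = 200
X2, X3 = rng.normal(size=n), rng.normal(size=n)
# Error spread grows with |X2|, so heteroscedasticity is present
y = 1.0 + 2.0 * X2 - 0.5 * X3 + rng.normal(size=n) * (1 + np.abs(X2))

# Step 1: estimate the regression (3.13) and obtain the residuals
X = sm.add_constant(np.column_stack([X2, X3]))
res = sm.OLS(y, X).fit()

# Steps 2-4: auxiliary regression (3.14) and n*R^2 statistic (3.15)
lm_stat, lm_pvalue, f_stat, f_pvalue = het_white(res.resid, X)
print(lm_stat, lm_pvalue)  # small p-value -> reject H0 of homoscedasticity
```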
If heteroscedasticity truly exists, one can use the Generalized Least Squares method or White's method, which develops heteroscedasticity-corrected standard errors. The software EViews 5 already provides White's method to overcome heteroscedasticity.
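The heteroscedasticity-corrected standard errors that EViews 5 offers have an analogue in statsmodels' robust covariance options; the following is a minimal sketch under assumed data, not the thesis' own estimation.

```python
# Sketch of White's heteroscedasticity-corrected (robust) standard
# errors (illustrative data; HC1 is one common White-type correction).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n) * (1 + np.abs(x))  # heteroscedastic

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()                   # conventional standard errors
robust = sm.OLS(y, X).fit(cov_type="HC1")  # White-corrected standard errors

print(ols.bse)     # usual SEs: unreliable under heteroscedasticity
print(robust.bse)  # robust SEs: valid for t tests despite heteroscedasticity
```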
b. Autocorrelation Test
The term autocorrelation may be defined as "correlation between members of series of observations ordered in time [as in time series data] or space [as in cross-sectional data]" (Gujarati, 2004, p. 442). The fifth assumption of the classical linear regression model assumes that such autocorrelation does not exist in the disturbances $u_j$: the classical model assumes that the disturbance term relating to any observation is not influenced by the disturbance term relating to any other observation.

As in the case of heteroscedasticity, in the presence of autocorrelation the OLS estimators are still linear, unbiased, consistent, and asymptotically normally distributed, but they are no longer efficient (i.e., minimum variance). Therefore, the usual t and F tests of significance are no longer valid and, if applied, are likely to give seriously misleading conclusions about the statistical significance of the estimated regression coefficients.
To detect autocorrelation, the writer uses the Breusch-Godfrey (BG) Test in the application software EViews 5. The BG Test, which is also known as the Lagrange Multiplier (LM) Test, involves the following hypotheses:

$H_0$: There is no autocorrelation
$H_1$: There is autocorrelation
After we run the test, we can analyze the result by comparing the value of Obs*R-squared, which is the coefficient of determination ($R^2$) from the auxiliary regression multiplied by the number of observations, and its probability with the significance level $\alpha$, as follows:

- If the probability > $\alpha$ = 5%, we fail to reject $H_0$; there is no autocorrelation.
- If the probability < $\alpha$ = 5%, we reject $H_0$; there is autocorrelation.
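As an illustration of the same procedure outside EViews, the sketch below uses statsmodels' `acorr_breusch_godfrey`; the data, the AR(1) error process, and the lag order are all assumed for the example.

```python
# Sketch of the Breusch-Godfrey (LM) test (illustrative data;
# lm_stat corresponds to EViews' Obs*R-squared).
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

rng = np.random.default_rng(3)
n = 200
x = rng.normal(size=n)
u = np.zeros(n)
for t in range(1, n):                      # AR(1) disturbances induce
    u[t] = 0.6 * u[t - 1] + rng.normal()   # autocorrelation
y = 1.0 + 2.0 * x + u

res = sm.OLS(y, sm.add_constant(x)).fit()
lm_stat, lm_pvalue, f_stat, f_pvalue = acorr_breusch_godfrey(res, nlags=2)
print(lm_stat, lm_pvalue)  # p-value < 0.05 -> reject H0 of no autocorrelation
```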
c. Multicollinearity Test
An estimator with the BLUE property is supposed to be free of multicollinearity. Since multicollinearity is essentially a sample phenomenon, arising out of the largely nonexperimental data collected in most social sciences, there is no one unique method of detecting it or measuring its strength. In this research, the writer will detect the existence of multicollinearity by using the software EViews 5. The method that will be used is the examination of partial correlations, which was developed by Farrar and Glauber. They suggested that in examining the existence of multicollinearity one should look at the partial correlation coefficients (Gujarati, 2004). Thus, in the regression of Y on $X_2$, $X_3$, and $X_4$, a finding that $R^2_{1.234}$ is very high but $r^2_{12.34}$, $r^2_{13.24}$, and $r^2_{14.23}$ are comparatively low may suggest that the variables $X_2$, $X_3$, and $X_4$ are highly intercorrelated and that at least one of these variables is superfluous.
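To show how the partial-correlation examination can be carried out in code, the sketch below computes the matrix of partial correlations from the inverse of the correlation matrix, a standard identity; the near-collinear data are assumed for illustration.

```python
# Sketch of the Farrar-Glauber-style partial-correlation check
# (illustrative data). Partial correlations come from the precision
# matrix: rho_ij = -p_ij / sqrt(p_ii * p_jj).
import numpy as np

rng = np.random.default_rng(4)
n = 500
X2 = rng.normal(size=n)
X3 = 0.9 * X2 + 0.1 * rng.normal(size=n)  # nearly collinear with X2
X4 = rng.normal(size=n)
data = np.column_stack([X2, X3, X4])

corr = np.corrcoef(data, rowvar=False)  # pairwise correlation matrix
prec = np.linalg.inv(corr)              # precision matrix

d = np.sqrt(np.diag(prec))
partial = -prec / np.outer(d, d)        # partial correlation of i and j,
np.fill_diagonal(partial, 1.0)          # holding the other variables fixed

print(partial)  # a large off-diagonal entry flags X2-X3 multicollinearity
```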
3. Hypothesis Test