H1 = fixed effect. If the Chi-square statistic exceeds the Chi-square table value, or in other words the p-value is below 0.005, we should reject the null hypothesis H0 and conclude that the fixed effect model is the suitable model to use (Winarno, 2009). The Hausman test is also available through the Eviews-6 command program.
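The decision rule above can be sketched in Python (an illustration only; the thesis itself uses Eviews-6). The degrees of freedom df = 1 and the 5 percent level are assumptions, based on each equation having a single regressor (K = 1 in Table 4.3) and the α = 5 percent level used later in this chapter:

```python
# Illustrative sketch of the Hausman decision rule, not the Eviews-6 procedure.
# df = 1 is an assumption (one regressor per equation, as in Table 4.3).
from scipy.stats import chi2

def hausman_decision(chi_sq_stat, df, alpha=0.05):
    """Reject H0 (random effects) in favour of fixed effects when p < alpha."""
    p_value = chi2.sf(chi_sq_stat, df)
    return p_value, ("fixed" if p_value < alpha else "random")

# CPI equation, Chi-Sq. statistic taken from Table 4.1
p, model = hausman_decision(7.300554, df=1)
print(round(p, 4), model)  # -> 0.0069 fixed (reproduces the p-value in Table 4.1)
```

With df = 1 the computed p-value matches the 0.0069 reported for CPI, which supports the single-regressor assumption.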
Table 4.1 Hausman Test Result
Test Summary   Chi-Sq. Statistic   p-value   Effect
CPI            7.300554            0.0069    Fixed
RER            2.667002            0.1024    Fixed
GFER           4.548718            0.0329    Fixed
GRVT           0.855584            0.3550    Fixed
GRRVT          0.857151            0.3545    Fixed
GRYPC          0.840744            0.3592    Fixed
Source: Eviews-6, 2010.
Note: fixed effect where p-value < 0.005.
4.2.3 Classical Assumption Test Analysis
a. Normality Test
The normality test is done by comparing the Jarque-Bera value with the χ²-table value. From the regression through Eviews 6.0 we find the J-B statistics shown in Table 4.2, which indicates that CPI, RER, GFER, GRVT, GRRVT, and GRYPC all have normally distributed residuals, as shown by their residual values.
[Figure 4.25 content: Durbin-Watson decision zones. Positive autocorrelation: d < 1.720; zone of indecision: 1.720 to 1.746; do not reject H0 or H1, or both: 1.746 to 2.254; zone of indecision: 2.254 to 2.280; negative autocorrelation: 2.280 to 4.]
Table 4.2 Normality Test Result
Test Summary   Df      χ²-table   Jarque-Bera   Result
CPI            9.139   23.5893    4.0928        Normal Distribution
RER            9.139   23.5893    1.1423        Normal Distribution
GFER           9.139   23.5893    0.1963        Normal Distribution
GRVT           9.139   23.5893    10.0599       Normal Distribution
GRRVT          9.139   23.5893    15.2685       Normal Distribution
GRYPC          9.139   23.5893    10.6329       Normal Distribution
Source: Eviews-6, 2010.
Note: The Jarque-Bera (JB) test measures the skewness and kurtosis of the residuals; if JB < χ²-table value, the residual distribution is normal (Gujarati, 2003).
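The JB statistic described in the note can be sketched from its standard skewness-and-kurtosis formula (Gujarati, 2003). This is a minimal illustration on synthetic residuals, since the thesis residual series are not reproduced here:

```python
# Sketch of the Jarque-Bera statistic: JB = n/6 * (S^2 + (K - 3)^2 / 4),
# where S is sample skewness and K sample kurtosis of the residuals.
# The residuals below are synthetic normal draws, not the thesis data.
import numpy as np

def jarque_bera(residuals):
    r = np.asarray(residuals, dtype=float)
    n = r.size
    m = r.mean()
    s2 = ((r - m) ** 2).mean()
    skew = ((r - m) ** 3).mean() / s2 ** 1.5
    kurt = ((r - m) ** 4).mean() / s2 ** 2
    return n / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)

rng = np.random.default_rng(0)
jb = jarque_bera(rng.normal(size=500))
# Decision rule from the note: JB below the table value -> normal distribution
print(jb < 23.5893)
```

Normal draws produce a small JB value, so the comparison against the 23.5893 table value used above returns a normal-distribution verdict.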
b. Autocorrelation Test
One of the formal tests to detect autocorrelation is the Durbin-Watson test. This test is based on the error model shown below:
Figure 4.25 Durbin-Watson Test
Note: H0: No positive autocorrelation
H1: No negative autocorrelation
Based on the Durbin-Watson test, this study found that the equations in this research generally have a high potential to be free from autocorrelation, as described in Table 4.3.
Table 4.3 Durbin-Watson Test Result
Test Summary   K   dL      dU      Dw         R²         Dw/R²    Result
CPI            1   1.720   1.746   1.744521   0.824031   2.117    Negative Autocorrelation
RER            1   1.720   1.746   1.820670   0.932277   1.953    Negative Autocorrelation
GFER           1   1.720   1.746   2.280812   0.228518   9.981    Negative Autocorrelation
GRVT           1   1.720   1.746   2.279058   0.227245   10.029   Negative Autocorrelation
GRRVT          1   1.720   1.746   2.137546   0.153866   13.892   Negative Autocorrelation
GRYPC          1   1.720   1.746   2.266261   0.058549   38.707   Negative Autocorrelation
Source: Eviews-6, 2010.
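The Durbin-Watson statistic and the decision zones of Figure 4.25 can be sketched as follows. The zone bounds dL = 1.720 and dU = 1.746 are taken from Table 4.3; the function is an illustration of the textbook rule, not the thesis's Eviews procedure:

```python
# Sketch of the Durbin-Watson statistic and its decision zones
# (dL = 1.720 and dU = 1.746 as reported in Table 4.3).
import numpy as np

def durbin_watson(residuals):
    """Dw = sum((e_t - e_{t-1})^2) / sum(e_t^2)."""
    e = np.asarray(residuals, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

def dw_zone(d, dL=1.720, dU=1.746):
    if d < dL:
        return "positive autocorrelation"
    if d < dU:
        return "zone of indecision"
    if d <= 4 - dU:          # 1.746 <= d <= 2.254
        return "do not reject H0 or H1"
    if d <= 4 - dL:          # 2.254 < d <= 2.280
        return "zone of indecision"
    return "negative autocorrelation"

print(dw_zone(2.280812))   # GFER equation -> negative autocorrelation
print(dw_zone(2.0))        # mid-range value -> do not reject H0 or H1
```

The GFER value 2.280812 lies above 4 - dL = 2.280, which places it in the negative-autocorrelation zone, consistent with its entry in Table 4.3.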
c. Heteroscedasticity Test
The purpose of the heteroscedasticity test is to find out whether all the disturbance terms have similar variances or not (Gujarati, 2003). This research study used White's Heteroscedasticity-Consistent Variances and Standard Errors. White has shown that this estimate can be performed so that asymptotically valid (i.e., large-sample) statistical inference can be made about the true parameter values. As the preceding results show, White's heteroscedasticity-corrected standard errors are considerably larger than the OLS standard errors, and therefore the estimated t values are much smaller than those obtained by OLS. On the basis of the latter, both regressors are statistically significant at the 5 percent level, whereas on the basis of White's estimators they are not. However, it should be pointed out that White's heteroscedasticity-corrected standard errors can be larger or smaller than the uncorrected standard errors (Gujarati, 2003).
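White's correction described above can be sketched with plain NumPy on synthetic heteroscedastic data. This is an illustration only: the data, sample size, and coefficients below are hypothetical, not the thesis panel:

```python
# Sketch of White's HC0 heteroscedasticity-consistent standard errors
# versus classical OLS standard errors, on synthetic data whose error
# variance grows with x. All numbers here are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
n = 200
x = rng.uniform(1.0, 10.0, size=n)
X = np.column_stack([np.ones(n), x])
y = 2.0 + 0.5 * x + rng.normal(scale=0.3 * x)   # heteroscedastic errors

# OLS estimates and residuals
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
e = y - X @ beta

# Classical OLS variance: s^2 * (X'X)^-1
s2 = e @ e / (n - X.shape[1])
se_ols = np.sqrt(np.diag(s2 * XtX_inv))

# White HC0 variance: (X'X)^-1 X' diag(e^2) X (X'X)^-1
meat = (X * (e ** 2)[:, None]).T @ X
se_white = np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv))

print(se_ols[1], se_white[1])  # compare classical vs White-corrected slope SEs
```

As Gujarati notes, the corrected standard errors can come out larger or smaller than the classical ones; the point of the sandwich form is that inference remains asymptotically valid either way.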
Table 4.4 Heteroscedasticity Test Result
Test Summary   Probability   Result
CPI            0.000000      Heteroscedasticity free
RER            0.000000      Heteroscedasticity free
GFER           0.000229      Heteroscedasticity free
GRVT           0.000060      Heteroscedasticity free
GRRVT          0.000146      Heteroscedasticity free
GRYPC          0.000000      Heteroscedasticity free
Source: Eviews-6, 2010.
Through Eviews-6, this research study examined heteroscedasticity with the Eviews-6 Equation Estimation command for White heteroscedasticity-consistent standard errors and covariance, where the result suffers from heteroscedasticity if the probability is significant; on the other hand, the result is free from heteroscedasticity if the probability is below 0.005. The heteroscedasticity test summary is described in Table 4.4.
d. Multicollinearity Test
Multicollinearity is a condition describing a linear relationship across independent variables; it can only arise when a model contains more than one independent variable. Since each equation in this research study has only one independent variable, the econometric model is free from multicollinearity.
4.2.4 Regression Statistic Test Analysis (Hypothesis Test)
a. Joint Regression Coefficient Test (F-test)
The goal of the F-test is to determine the significance of the independent variable group in influencing the dependent variable. In this research we use a 95 percent confidence level (α = 5 percent). The conclusion of the joint regression coefficient test is described in Table 4.5: the independent variable group significantly influences the dependent variable, so H0 is rejected and H1 is accepted.
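The decision rule just described is a simple threshold on the probability of the F-statistic. A minimal sketch, fed with some of the probabilities reported in Table 4.5:

```python
# Sketch of the F-test decision rule: reject H0 (no joint significance)
# when Prob(F-statistic) < alpha, with alpha = 0.05 as in the text.
def f_test_decision(prob_f, alpha=0.05):
    return "Significant" if prob_f < alpha else "Not significant"

# Probabilities taken from Table 4.5
for name, prob in [("CPI", 0.000000), ("GFER", 0.000229), ("GRVT", 0.000060)]:
    print(name, f_test_decision(prob))
```

All six probabilities in Table 4.5 fall well below 0.05, which is why every equation is labelled Significant.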
Table 4.5 Joint Regression Coefficient Test (F-test)
Test Summary   Prob (F-statistic)   Result
CPI            0.000000             Significant
RER            0.000000             Significant
GFER           0.000229             Significant
GRVT           0.000060             Significant
GRRVT          0.000146             Significant
GRYPC          0.000000             Significant
Source: Eviews-6, 2010.
b. Individual Regression Coefficient Test (t-Test)