Econometric Criteria

a. Goodness of Fit Test


b. Heteroskedasticity

Heteroskedasticity occurs when the variance of the error term is not constant, so that the Gauss-Markov assumptions are not satisfied; this is a problem commonly seen in cross-sectional data. Among the consequences of heteroskedasticity is that the variance is no longer constant, causing the estimated variances to be larger than they should be. The inflated variance makes the F-test and t-test less precise, widens the confidence intervals because of large standard errors, and can therefore lead to improper conclusions. To eliminate these problems, cross-sectional weighted regression, otherwise known as the Generalized Least Squares (GLS) method, should be applied (Nachrowi, 2006).

Table 3 Identification Framework of Autocorrelation

DW Value            | Conclusion
4-dl < DW < 4       | Reject H0, negative autocorrelation
4-du < DW < 4-dl    | Results cannot be determined
2 < DW < 4-du       | Accept H0, there is no autocorrelation
du < DW < 2         | Accept H0, there is no autocorrelation
dl < DW < du        | Results cannot be determined
0 < DW < dl         | Reject H0, positive autocorrelation
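The decision rules in Table 3 can be sketched as a small helper function. This is an illustrative sketch: the function name `interpret_dw` and the critical values dl and du used in the example are assumptions for demonstration; in practice dl and du come from a Durbin-Watson table for the given sample size and number of regressors.

```python
def interpret_dw(dw, dl, du):
    """Classify a Durbin-Watson statistic using the decision bounds
    of Table 3. dl and du are the lower and upper critical values
    from a DW table for the given n and number of regressors."""
    if 0 <= dw < dl:
        return "positive autocorrelation"      # reject H0
    if dl <= dw < du:
        return "inconclusive"                  # results cannot be determined
    if du <= dw <= 2:
        return "no autocorrelation"            # accept H0
    if 2 < dw <= 4 - du:
        return "no autocorrelation"            # accept H0
    if 4 - du < dw <= 4 - dl:
        return "inconclusive"                  # results cannot be determined
    if 4 - dl < dw <= 4:
        return "negative autocorrelation"      # reject H0
    raise ValueError("DW statistic must lie in [0, 4]")

# Illustrative critical values dl = 1.10, du = 1.54
print(interpret_dw(1.85, 1.10, 1.54))  # no autocorrelation
print(interpret_dw(0.60, 1.10, 1.54))  # positive autocorrelation
```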

c. Multicollinearity

Multicollinearity indicates a strong linear relationship between the independent variables in a multiple regression analysis. According to Gujarati (2011), the presence of multicollinearity can be recognized as follows: the signs of the coefficients are not as expected, or the R2 is high while many of the individual t-tests are not significant. In other words, a high correlation between variables (r_ij > 0.8) indicates that multicollinearity is present. Perfect multicollinearity makes it impossible to determine the least squares coefficients, and the variances and covariances of the coefficients become infinite. Multicollinearity also inflates the standard errors in the estimated equation, which widens the confidence intervals and further makes the coefficient estimates imprecise.

d. Normality

The normality test is conducted to determine whether the error term is close to a normal distribution. Normality of the error term is tested using the Jarque-Bera test, with the following hypotheses:
H0: the error term is normally distributed
H1: the error term is not normally distributed
H0 is accepted when the Jarque-Bera statistic is smaller than the chi-square critical value with 2 degrees of freedom (equivalently, when the p-value > α); H0 is rejected when the Jarque-Bera statistic exceeds that critical value (p-value < α). Normality of the data is required in multiple regression analysis because this method is a parametric analysis method, and it is assessed from the distribution of the regression residuals. Acceptance of H0 indicates that the data is normally distributed.
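The Jarque-Bera statistic itself can be computed directly from the sample skewness and kurtosis of the residuals: JB = n/6 * (S^2 + (K - 3)^2 / 4). A minimal sketch follows; the function name and the example residuals are illustrative, and 5.991 is the chi-square critical value with 2 degrees of freedom at α = 0.05.

```python
def jarque_bera(x):
    """Jarque-Bera statistic JB = n/6 * (S^2 + (K - 3)^2 / 4),
    where S is sample skewness and K is sample kurtosis."""
    n = len(x)
    mean = sum(x) / n
    m2 = sum((v - mean) ** 2 for v in x) / n
    m3 = sum((v - mean) ** 3 for v in x) / n
    m4 = sum((v - mean) ** 4 for v in x) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    return n / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)

# chi-square critical value, 2 df, alpha = 0.05
CHI2_2DF_05 = 5.991

# Illustrative residuals: roughly symmetric and mound-shaped,
# so JB should fall below the critical value (accept H0)
residuals = [-2, -1.5, -1, -0.5, -0.2, 0, 0.2, 0.5, 1, 1.5, 2]
print(jarque_bera(residuals) < CHI2_2DF_05)  # True
```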

4.4.3 Statistical Criteria

There are several tests that can be used to determine the suitability of the statistically derived regression model.

a. F-Test

The F-test is a statistical test used to determine the effect of the independent variables on the dependent variable as a whole. The first step in performing the F-test is to determine and write the hypotheses:
H0: β1 = β2 = ... = βt = 0 (no independent variable affects the dependent variable)
H1: at least one βt ≠ 0 (at least one independent variable significantly influences the dependent variable)
The decision rule is:
1. If the p-value of the F-statistic < significance level α, then reject H0 and conclude that at least one independent variable affects the dependent variable.
2. If the p-value of the F-statistic > significance level α, then accept H0 and conclude that no independent variable affects the dependent variable.
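For a single-regressor model, the overall F-statistic can be computed directly from R2 as F = (R2/k) / ((1 - R2)/(n - k - 1)) with k = 1. The sketch below illustrates this under that simplifying assumption; the function name and the example data are illustrative.

```python
def f_statistic(x, y):
    """Overall F-statistic for the simple regression y = b0 + b1*x + e,
    computed as F = (R^2 / k) / ((1 - R^2) / (n - k - 1)) with k = 1."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    b1 = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sxx
    b0 = my - b1 * mx
    ss_res = sum((b - (b0 + b1 * a)) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - my) ** 2 for b in y)
    r2 = 1 - ss_res / ss_tot
    k = 1
    return (r2 / k) / ((1 - r2) / (n - k - 1))

# Illustrative data with a strong linear relationship:
# the F-statistic is very large, so H0 would be rejected
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2, 14.1, 15.9]
print(f_statistic(x, y))
```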

b. T-Test

The t-test is a statistical test used to measure whether the parameters of the equation are individually significant, and is also known as a partial test of significance because the significance of each variable can be observed in the model. A t-test is used in this study to determine the effect of each explanatory factor on the three main exporters of natural rubber. The first step in performing a t-test is determining and writing the hypotheses:
H0: βt = 0, for t = 1, 2, 3, ..., N
H1: βt ≠ 0
If the t-statistic obtained at the significance level α is greater than the t-table value (t-statistic > t-table), then H0 is rejected. Rejection of H0 (βt = 0) implies that the variable tested significantly affects the dependent variable. Conversely, if the t-statistic is less than the t-table value (t-statistic < t-table) at the significance level α, then H0 is accepted. Accepting H0 (βt = 0) indicates that the variable tested does not significantly affect the dependent variable. A smaller α implies a further reduction of risk. The model is expected to improve with each additional independent variable that has a significant effect on the dependent variable.

c. R2 and adj-R2 Test

The R2 and the adjusted R2 (adj-R2) are used to determine whether the variables in the model can explain the variation that occurs in the dependent variable. The higher the R2 or adj-R2, the better the fit of the model. In econometric practice, the adj-R2 value is preferred to R2 because adj-R2 tends to give a better overview of the results of the regression. This is especially true when there are a large number of independent variables in the model, or when their number is close to the total number of observations (Gujarati, 2011).
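The slope t-statistic and the adj-R2 for a single-regressor model can be sketched together. This is an illustrative example: the function name and data are assumptions, and 2.447 is the two-tailed t-table value at α = 0.05 with n - 2 = 6 degrees of freedom.

```python
import math

def slope_t_and_adj_r2(x, y):
    """For y = b0 + b1*x + e: t-statistic for H0: b1 = 0,
    t = b1 / se(b1) with se(b1) = sqrt(SSR / (n - 2) / Sxx),
    and adj-R^2 = 1 - (1 - R^2) * (n - 1) / (n - k - 1), k = 1."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    b1 = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sxx
    b0 = my - b1 * mx
    ss_res = sum((b - (b0 + b1 * a)) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - my) ** 2 for b in y)
    r2 = 1 - ss_res / ss_tot
    adj_r2 = 1 - (1 - r2) * (n - 1) / (n - 2)
    se_b1 = math.sqrt(ss_res / (n - 2) / sxx)
    return b1 / se_b1, adj_r2

x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2, 14.1, 15.9]
t, adj_r2 = slope_t_and_adj_r2(x, y)
print(t > 2.447)   # t-statistic > t-table at alpha = 0.05, df = 6: reject H0
print(adj_r2)      # close to 1: the regressor explains most of the variation
```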