
b. Heteroskedasticity

Heteroskedasticity occurs when the variance of the error term is not constant, so the assumptions of the Gauss-Markov theorem are not satisfied; it is a problem commonly seen in cross-sectional data. Among its consequences, the non-constant variance causes the true variance of the estimators to be larger than estimated. The inflated variance makes the F-test and t-test less precise, since the large standard errors widen the confidence intervals, which in turn leads to improper conclusions. To eliminate these problems, cross-sectional weighted regression, otherwise known as the Generalized Least Squares (GLS) method, should be applied (Nachrowi, 2006).

Table 3. Identification framework of autocorrelation (Durbin-Watson test)

DW value            Conclusion
4-dl < DW < 4       Reject H0: negative autocorrelation
4-du < DW < 4-dl    Results cannot be determined
2 < DW < 4-du       Accept H0: no autocorrelation
du < DW < 2         Accept H0: no autocorrelation
dl < DW < du        Results cannot be determined
0 < DW < dl         Reject H0: positive autocorrelation

(dl and du denote the lower and upper critical bounds of the Durbin-Watson statistic.)
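As an illustration of these diagnostics, the following is a minimal sketch in Python with statsmodels, using simulated data in place of the trade-flow variables (all names and numbers are hypothetical). It runs the Breusch-Pagan test for heteroskedasticity (a standard detection test, not named by the source), reports the Durbin-Watson statistic to be compared against the dl/du bounds of Table 3, and applies a weighted least squares fit as a simple feasible-GLS remedy:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan
from statsmodels.stats.stattools import durbin_watson

# Simulated cross-sectional data whose error variance grows with x,
# i.e. a deliberately heteroskedastic setup (all values hypothetical).
rng = np.random.default_rng(42)
n = 200
x = rng.uniform(1, 10, n)
y = 2.0 + 1.5 * x + rng.normal(0.0, 0.5 * x)   # error scale depends on x

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()

# Breusch-Pagan test: H0 = homoskedastic errors; a small p-value
# signals the non-constant variance described above.
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(ols.resid, X)
print(f"Breusch-Pagan p-value: {lm_pvalue:.4f}")

# Durbin-Watson statistic, to be compared against the dl/du bounds
# of Table 3 (values near 2 indicate no autocorrelation).
print(f"Durbin-Watson: {durbin_watson(ols.resid):.3f}")

# Remedy: cross-sectional weighted regression (a feasible GLS),
# weighting each observation by the inverse of its error variance.
gls = sm.WLS(y, X, weights=1.0 / x**2).fit()
print(gls.params)
```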

c. Multicollinearity

Multicollinearity indicates a strong linear relationship between the independent variables in a multiple regression analysis. According to Gujarati (2011), the presence of multicollinearity can be detected as follows: the signs of the coefficients are not as expected, and the R2 is high while many of the individual t-tests are not significant. In other words, if the correlation between two explanatory variables is high (r_ij > 0.8), or if R2 < r_ij, multicollinearity is present. The presence of multicollinearity makes the least squares coefficients indeterminate, and the variances and covariances of the coefficients become infinite. Multicollinearity also produces high standard errors in the estimated equation, which widen the confidence intervals and make the coefficient values imprecise; a sketch of these checks follows after subsection d below.

d. Normality

The normality test is conducted to determine whether the error term is close to a normal distribution. The normality of the error term is examined using the Jarque-Bera test, with the following hypotheses:

H0: the error term is normally distributed
H1: the error term is not normally distributed

H0 is accepted when the Jarque-Bera statistic is smaller than the chi-squared critical value (df = 2) and the probability (p-value) is greater than α; H0 is rejected when the Jarque-Bera statistic exceeds the chi-squared critical value (df = 2) and the probability (p-value) is smaller than α. Normality is required in multiple regression analysis because it is a parametric method, and it is assessed from the distribution of the regression residuals. The acceptance of H0 indicates that the data is normally distributed.
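For the multicollinearity checks of subsection c, a hedged sketch using simulated regressors follows; the variance inflation factor (VIF) is not named by the source but is a standard companion diagnostic to the r_ij > 0.8 rule of thumb:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Simulated regressors; x2 is almost a linear copy of x1, so the
# pairwise correlation r_12 exceeds the 0.8 threshold noted above.
rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(0.0, 1.0, n)
x2 = x1 + rng.normal(0.0, 0.05, n)
x3 = rng.normal(0.0, 1.0, n)
X = pd.DataFrame({"x1": x1, "x2": x2, "x3": x3})

# Pairwise correlation matrix: entries above 0.8 flag multicollinearity.
print(X.corr().round(2))

# Variance inflation factors; VIF > 10 is a common rule of thumb.
Xc = sm.add_constant(X)
for i, name in enumerate(Xc.columns):
    print(name, round(variance_inflation_factor(Xc.values, i), 1))
```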
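And for the Jarque-Bera decision rule of subsection d, a minimal sketch, assuming a significance level α = 0.05 and simulated residuals in place of those from the fitted trade-flow model:

```python
import numpy as np
from scipy.stats import chi2
from statsmodels.stats.stattools import jarque_bera

# Residuals would normally come from the fitted regression;
# here they are simulated so the sketch is self-contained.
rng = np.random.default_rng(1)
resid = rng.normal(0.0, 1.0, 500)

jb_stat, jb_pvalue, skew, kurt = jarque_bera(resid)
critical = chi2.ppf(0.95, df=2)   # JB statistic is chi-squared with 2 df

# Accept H0 (normal errors) when JB < critical and p-value > alpha.
alpha = 0.05
print(f"JB = {jb_stat:.3f}, critical = {critical:.3f}, p = {jb_pvalue:.4f}")
print("H0 accepted: errors normal" if jb_pvalue > alpha else "H0 rejected")
```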