where R²_DV equals the R² of LSDV, R²_p is the R² of PLS, and K is the number of variables.
If the value of the F-statistic is greater than the critical value in the F-table, there is evidence for rejecting the null hypothesis, and therefore the
assumption that α is the same for all individuals can be rejected.
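This test can be sketched in a few lines. The sketch below assumes the standard Chow-type F-statistic, F = ((R²_DV − R²_p)/(N − 1)) / ((1 − R²_DV)/(NT − N − K)), where N is the number of cross-sectional units and NT the total number of observations; the numeric values are hypothetical, for illustration only.

```python
from scipy import stats

def chow_f_stat(r2_lsdv, r2_pls, n_groups, n_obs, k):
    """F-statistic comparing LSDV (fixed effects) against pooled OLS (PLS).

    Numerator df = n_groups - 1 (the individual intercepts being tested);
    denominator df = n_obs - n_groups - k (residual df of the LSDV model).
    """
    df1 = n_groups - 1
    df2 = n_obs - n_groups - k
    f = ((r2_lsdv - r2_pls) / df1) / ((1 - r2_lsdv) / df2)
    p_value = stats.f.sf(f, df1, df2)  # P(F > f) under H0
    return f, p_value

# Hypothetical R-squared values and panel dimensions
f, p = chow_f_stat(r2_lsdv=0.85, r2_pls=0.70, n_groups=10, n_obs=150, k=4)
print(f, p)
```

A p-value below the chosen significance level leads to rejecting the common-intercept assumption, favoring the fixed effects (LSDV) model.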
4.3 Model Formulation
The dependent variable used in the model is the natural rubber export volume to the destination countries. Meanwhile, the independent variables include the natural rubber production of the exporting countries, the real GDP of the destination countries, remoteness, and the real exchange rate. The gravity
equation model of natural rubber in international trade can be formulated as follows:
ln Y_ijt = β₀ + β₁ ln PROD_it + β₂ ln RGDP_jt + β₃ R_jt + β₄ ln ER_ijt + α_i + γ_j + ε_ijt

where Y_ijt is the natural rubber export value from country i to country j (US$), PROD_it is the volume of natural rubber production in country i (kg), RGDP_jt represents the real GDP of country j (US$), R_jt accounts for the remoteness of country j, ER_ijt is the currency exchange rate from country i to country j, α_i represents the dummy variable for the exporting country effect, and γ_j represents the dummy variable for the importing country effect. β₀ indicates the intercept, while β_n indicates the parameters (n = 1, 2, …, N); t is
the year, i represents the exporting countries and j represents the importing countries.
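A gravity equation of this form can be estimated with exporter and importer dummies. The sketch below uses statsmodels' formula interface on synthetic data; the variable names mirror the model, but all coefficients and data are hypothetical, chosen only to show the mechanics (the `C()` terms generate the α_i and γ_j dummies).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic panel purely for illustration
rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "exporter": rng.choice(["A", "B", "C"], n),
    "importer": rng.choice(["X", "Y", "Z"], n),
    "PROD": rng.lognormal(10, 1, n),   # production volume (kg)
    "RGDP": rng.lognormal(12, 1, n),   # importer real GDP (US$)
    "R": rng.uniform(1, 5, n),         # remoteness index
    "ER": rng.lognormal(0, 0.5, n),    # exchange rate
})
# True (hypothetical) coefficients used to generate ln Y
df["lnY"] = (1 + 0.8 * np.log(df.PROD) + 0.5 * np.log(df.RGDP)
             - 0.1 * df.R + 0.2 * np.log(df.ER)
             + rng.normal(0, 0.1, n))

# C(exporter) and C(importer) create the alpha_i and gamma_j dummies
fit = smf.ols("lnY ~ np.log(PROD) + np.log(RGDP) + R + np.log(ER)"
              " + C(exporter) + C(importer)", data=df).fit()
print(fit.params["np.log(PROD)"])
```

Because the dependent variable and most regressors enter in logs, the slope coefficients are interpreted as elasticities.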
4.4 Goodness of Fit Test
4.4.1 Economic Criteria
The economic criteria will be tested by examining the sign and the magnitude of each estimated constant and coefficient. Economic criteria require that the sign
and magnitude of the estimated coefficients are in accordance with economic theory.
4.4.2 Econometric Criteria
a. Autocorrelation
Autocorrelation is the correlation between members of a series of observations ordered by time or space (Gujarati, 2011). Autocorrelation is detected when there is a significant relationship between the estimation error of one observation and the estimation errors of other observations. Autocorrelation is a problem that generally occurs when dealing with time series data. The presence of autocorrelation results in an inefficient estimation or forecast, even though the estimator is still unbiased and consistent. Another effect is that the standard error is biased and inconsistent, so that the results of hypothesis tests become invalid. Guidelines on the Durbin–Watson (DW) statistic, which is used to detect autocorrelation, can be seen in Table 3.
b. Heteroskedasticity
Heteroskedasticity occurs when the variance of the error term is not constant, so that the Gauss–Markov assumptions are not satisfied; this is a problem that is commonly seen in cross-sectional data. Among the impacts arising from heteroskedasticity is that the estimated variance becomes larger than it should be. The inflated variance causes the hypothesis tests (F-test and t-test) to become less precise, with the confidence intervals becoming wider due to large standard errors, further resulting in improper conclusions. To eliminate these problems, cross-sectional weighted regression, otherwise known as the Generalized Least Squares (GLS) method, should be applied (Nachrowi, 2006).
Table 3 Identification Framework of Autocorrelation

DW Value              Conclusion
0 < DW < dl           Reject H0, positive autocorrelation
dl ≤ DW ≤ du          Results cannot be determined
du < DW < 2           Accept H0, there is no autocorrelation
2 < DW < 4−du         Accept H0, there is no autocorrelation
4−du ≤ DW ≤ 4−dl      Results cannot be determined
4−dl < DW < 4         Reject H0, negative autocorrelation
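Computing the DW statistic is straightforward with statsmodels. The sketch below generates AR(1) residuals with strong positive autocorrelation (ρ = 0.8, a hypothetical value) and shows that DW ≈ 2(1 − ρ) falls well below 2, in the table's positive-autocorrelation region.

```python
import numpy as np
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(2)
# AR(1) residuals with positive autocorrelation (rho = 0.8)
e = np.zeros(200)
for t in range(1, 200):
    e[t] = 0.8 * e[t - 1] + rng.normal()

dw = durbin_watson(e)  # roughly 2 * (1 - rho), so well below 2 here
print(dw)
```

In practice the statistic is computed on the regression residuals and compared against the dl/du bounds for the given sample size and number of regressors.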
c. Multicollinearity
Multicollinearity indicates a strong linear relationship between the independent variables in a multiple regression analysis. According to Gujarati
(2011), the presence of multicollinearity can be recognized as follows: the signs of the coefficients are not as expected, and the R² is high while many of the individual t-tests are not significant. In other words, if the correlation between
two independent variables is high (r_ij > 0.8), or R² < r_ij, multicollinearity is present. The presence of multicollinearity leads to the inability to determine the least
squares coefficients, with the variance and the covariance of the coefficients becoming infinite. Multicollinearity also leads to high standard errors
in the statistical equation, which causes the confidence intervals to become larger and further results in the coefficient values becoming imprecise.
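The r_ij > 0.8 rule of thumb can be checked directly from the correlation matrix of the regressors. In the synthetic sketch below, x2 is constructed to be nearly collinear with x1 (an assumption made only to trigger the flag), while x3 is independent.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 500
x1 = rng.normal(size=n)
x2 = 0.95 * x1 + 0.05 * rng.normal(size=n)  # nearly collinear with x1
x3 = rng.normal(size=n)
X = pd.DataFrame({"x1": x1, "x2": x2, "x3": x3})

corr = X.corr()
# Flag variable pairs whose |r_ij| exceeds the 0.8 rule of thumb
high = [(a, b) for a in corr for b in corr
        if a < b and abs(corr.loc[a, b]) > 0.8]
print(high)
```

Only the (x1, x2) pair should be flagged; such a finding would suggest dropping or combining one of the collinear regressors.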
d. Normality
The normality test is conducted to determine whether the error term is close to a normal distribution or not. The normality of the error term is tested by
using the Jarque–Bera test, with the following hypotheses:
H0: α = 0, the error term is normally distributed
H1: α ≠ 0, the error term is not normally distributed
The region of acceptance is Jarque–Bera < χ²(df = 2), with probability (p-value) > α, whereas the rejection region is Jarque–Bera > χ²(df = 2), with probability (p-value) < α. Normality of the data is required in multiple regression analysis because this
method is a parametric analysis method. Normality is assessed through the distribution of the regression residuals. The acceptance of H0
indicates that the data are normally distributed.
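The Jarque–Bera decision rule can be sketched with scipy. The example below contrasts hypothetical residuals drawn from a normal distribution with residuals drawn from a skewed (exponential) distribution: the skewed sample yields a much larger JB statistic and a p-value small enough to reject H0.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
normal_resid = rng.normal(size=1000)       # consistent with H0
skewed_resid = rng.exponential(size=1000)  # clearly non-normal

jb_n, p_n = stats.jarque_bera(normal_resid)
jb_s, p_s = stats.jarque_bera(skewed_resid)
print(jb_n, p_n)
print(jb_s, p_s)
```

The statistic is built from the sample skewness and excess kurtosis, so the exponential residuals (skewness 2, excess kurtosis 6) produce a JB value far inside the rejection region.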