This analysis also provides the coefficient of determination (R²), which indicates how well the independent variables explain the dependent variable. This is assessed through R Square and Adjusted R Square, the latter adjusted for the number of independent variables used in the research. The R² value is considered good if it is above 0.5, since R² ranges between 0 and 1.
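As an illustration (not part of the original analysis), R² and Adjusted R² can be computed directly from the observed and fitted values; the function below is a minimal sketch with names of my own choosing:

```python
import numpy as np

def r_squared(y, y_hat, k):
    """R^2 and Adjusted R^2 for a regression fit with k independent variables.

    R^2      = 1 - SS_res / SS_tot
    Adj. R^2 = 1 - (1 - R^2) * (n - 1) / (n - k - 1)
    """
    y, y_hat = np.asarray(y, dtype=float), np.asarray(y_hat, dtype=float)
    n = len(y)
    ss_res = np.sum((y - y_hat) ** 2)          # unexplained variation
    ss_tot = np.sum((y - y.mean()) ** 2)       # total variation
    r2 = 1.0 - ss_res / ss_tot
    adj = 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)
    return r2, adj
```

Adjusted R² is always at most R², and the penalty grows with the number of predictors k, which is why it is the preferred measure when comparing models with different numbers of independent variables.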
3. Classical Assumption Tests for Multiple Regression
A multiple linear regression model can be considered good if it satisfies the assumption that the data are normally distributed and is free from violations of the classical statistical assumptions, namely autocorrelation, multicollinearity, and heteroskedasticity.
a. Autocorrelation
Autocorrelation is the relationship that occurs between the members of a set of observations arranged in a time series, meaning that the value of a variable in the current period is correlated with its value in a coming period. One test used to detect autocorrelation is the Durbin-Watson test.
The Durbin-Watson statistic is computed as:

d = Σ (e_t − e_{t−1})² / Σ e_t²

where e_t is the residual in period t.
Hypotheses:
H0: there is no autocorrelation
Ha: there is positive or negative autocorrelation
Testing criterion: H0 is accepted if the Durbin-Watson value lies around 2 (roughly −2 < d < 2); a computed Durbin-Watson value at or near 2 indicates no autocorrelation.
To diagnose autocorrelation in a regression model, the computed Durbin-Watson (DW) value is tested against the following guidelines:
Table 3.1 Durbin-Watson Test Criteria

Durbin-Watson value     Conclusion
Less than 1.10          Autocorrelation present
1.10 to 1.54            Inconclusive
1.55 to 2.46            No autocorrelation
2.47 to 2.90            Inconclusive
More than 2.91          Autocorrelation present

Source: Muhammad Firdaus (2004:101)
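The statistic and the cut-offs in Table 3.1 can be sketched in code as follows; this is an illustrative implementation of the formula above, with function names of my own choosing, not code from the original study:

```python
import numpy as np

def durbin_watson(residuals):
    """Durbin-Watson statistic d = sum((e_t - e_{t-1})^2) / sum(e_t^2)."""
    e = np.asarray(residuals, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

def dw_conclusion(d):
    """Classify d using the thresholds of Table 3.1 (Firdaus, 2004).

    The bands follow the table literally; values in the narrow gaps
    between bands (e.g. 1.545) fall through to 'no autocorrelation'.
    """
    if d < 1.10 or d > 2.91:
        return "autocorrelation present"
    if 1.10 <= d <= 1.54 or 2.47 <= d <= 2.90:
        return "inconclusive"
    return "no autocorrelation"
```

Values of d near 0 indicate positive autocorrelation, values near 4 indicate negative autocorrelation, and values near 2 indicate none.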
b. Multicollinearity
The multicollinearity test is needed to determine whether an independent variable resembles another independent variable in the same model. Similarity between independent variables in a model causes a very strong correlation between those independent variables. In addition, detecting multicollinearity helps avoid bias when drawing conclusions about the partial effect of each independent variable on the dependent variable.
Multicollinearity in a model can be detected in several ways:
1) If the Variance Inflation Factor (VIF) is not more than 10 and the Tolerance value is not less than 0.1, the model can be said to be free from multicollinearity. Since VIF = 1/Tolerance, a VIF of 10 corresponds to a Tolerance of 1/10 = 0.1; the higher the VIF, the lower the Tolerance.
2) If the correlation coefficient between each pair of independent variables is less than 0.70, the model can be declared free from the classical multicollinearity problem.
3) If the coefficient of determination (R-Square) is above 0.60 but no independent variable has a significant effect on the dependent variable, the model is affected by multicollinearity.
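The VIF criterion in point 1 can be sketched as follows, using the standard definition VIF_j = 1/(1 − R_j²), where R_j² comes from regressing predictor j on the remaining predictors; this is an illustrative implementation, not code from the original study:

```python
import numpy as np

def vif(X):
    """Variance Inflation Factor for each column of X (n_samples x n_predictors).

    VIF_j = 1 / (1 - R_j^2); Tolerance_j = 1 / VIF_j. Columns with
    VIF > 10 (Tolerance < 0.1) indicate multicollinearity.
    """
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    out = []
    for j in range(k):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])  # intercept + other predictors
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)
        out.append(1.0 / (1.0 - r2))
    return np.array(out)
```

Independent predictors give VIF values near 1; a predictor that is nearly a linear combination of the others gives a very large VIF.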
c. Heteroskedasticity
The heteroskedasticity test examines whether the residual variance differs from one observation period to another, i.e., whether there is a relationship between the predicted values and the Studentized Deleted Residuals. A regression model whose residual variance is equal across observation periods, with no relationship between the predicted values and the Studentized Deleted Residuals, is said to be homoskedastic. Heteroskedasticity indicates that the variance is not the same for all observations; under heteroskedasticity the errors vary systematically with the size of one or more variables.
To determine whether or not heteroskedasticity is present, the writer used the following methods:
1) Inspect the scatterplot of the standardized predicted values of the dependent variable (ZPRED) against the studentized residuals (SRESID). Heteroskedasticity can be detected by checking whether a particular pattern appears in the scatterplot of SRESID against ZPRED, where the Y axis is the predicted Y and the X axis is the residual (predicted Y minus actual Y).
2) Basis of analysis: if the points form a particular pattern, such as a regular wave that widens and then narrows, heteroskedasticity is indicated. If there is no clear pattern and the points are scattered above and below zero on the Y axis, heteroskedasticity does not occur.
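The graphical check above can be complemented with a simple numeric proxy; the sketch below is my own illustrative addition (a Glejser-style check, not a method named in the text): regress the absolute residuals on the fitted values, and read a slope near zero as consistent with homoskedasticity.

```python
import numpy as np

def glejser_slope(fitted, residuals):
    """Slope of |residuals| regressed on the fitted values.

    A slope close to zero suggests the residual spread does not change
    with the predicted value (homoskedasticity); a clearly nonzero slope
    suggests heteroskedasticity.
    """
    f = np.asarray(fitted, dtype=float)
    a = np.abs(np.asarray(residuals, dtype=float))
    A = np.column_stack([np.ones_like(f), f])   # intercept + fitted values
    beta, *_ = np.linalg.lstsq(A, a, rcond=None)
    return beta[1]
```

This mirrors the visual rule: a funnel or wave pattern in the scatterplot corresponds to a nonzero slope, while a patternless cloud corresponds to a slope near zero.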
E. Operational Variable