2. Data Processing
a. Classic Assumption Test
1 Normality
The normality test is used to ensure that the data are normally distributed and that there are no extreme values that interfere with the
research results. The results of the normality test are as follows:
Figure 4.1 Normality Test
As shown in the normality plot, all of the points lie close to the diagonal line, which means all of the data used meet the
normality assumption.
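The idea behind the normality plot above can be sketched in code. The following is a minimal illustration, using only the Python standard library, of how the points of a normal P-P plot are computed; the residual values are synthetic examples, not the thesis data:

```python
# Sketch of a normal P-P plot check using only the Python standard
# library. The residuals below are illustrative, not the thesis data.
from statistics import NormalDist, mean, stdev

residuals = [-1.2, 0.4, 0.9, -0.3, 1.5, -0.8, 0.1, -0.6, 0.7, 0.2]

# Standardize the residuals.
m, s = mean(residuals), stdev(residuals)
z = sorted((r - m) / s for r in residuals)
n = len(z)

# Each P-P plot point pairs the empirical cumulative probability of an
# ordered residual with its theoretical normal probability; if the data
# are normal, the points hug the diagonal.
nd = NormalDist()
points = [((i + 0.5) / n, nd.cdf(zi)) for i, zi in enumerate(z)]

# Largest vertical distance from the diagonal (a crude summary of how
# far the points stray from the line).
max_dev = max(abs(exp - obs) for exp, obs in points)
print(round(max_dev, 3))
```

A small maximum deviation corresponds to the visual impression that the dots stay close to the diagonal line.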
2 Multicollinearity Test
The multicollinearity test aims to examine whether the regression model contains correlation among the independent variables. A
good regression model should have no correlation between independent variables. If the independent variables are correlated,
they are not orthogonal; orthogonal variables are independent variables whose pairwise correlations are equal to zero.
Multicollinearity can be detected from the tolerance and Variance Inflation Factor (VIF) values. If the tolerance value is greater
than 0.1 and the VIF value is smaller than 10, the regression model is free from multicollinearity. The tolerance and VIF values of the independent
variables in the regression model can be seen in the table below:
Table 4.8 The Result of Multicollinearity Test
Coefficients(a)

Model          Tolerance   VIF     Conclusion
1 (Constant)
  CR           .947        1.056   No Multicollinearity
  AR           .702        1.425   No Multicollinearity
  DR           .558        1.791   No Multicollinearity
  INFLATION    .545        1.835   No Multicollinearity

a. Dependent Variable: PR
The results show that the VIF values of all independent variables are smaller than 10 (1.056, 1.425, 1.791, and 1.835), which
means the variables of the study show no multicollinearity in the regression model.
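The tolerance and VIF values in Table 4.8 come from auxiliary regressions: each independent variable is regressed on the others, tolerance is 1 − R² of that regression, and VIF is its reciprocal. The sketch below illustrates this, assuming NumPy is available; the variable names follow the table, but the data are synthetic:

```python
# Sketch of tolerance and VIF via auxiliary regressions. Synthetic
# data; the names (CR, AR, DR, INFLATION) follow Table 4.8 but the
# values are illustrative, not the thesis sample.
import numpy as np

rng = np.random.default_rng(0)
n = 32
CR = rng.normal(size=n)
AR = 0.3 * CR + rng.normal(size=n)
DR = 0.5 * AR + rng.normal(size=n)
INFLATION = rng.normal(size=n)
X = np.column_stack([CR, AR, DR, INFLATION])
names = ["CR", "AR", "DR", "INFLATION"]

def vif(X, j):
    """VIF_j = 1 / (1 - R^2) from regressing column j on the others."""
    y = X[:, j]
    others = np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(len(y)), others])  # add intercept
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
    return 1.0 / (1.0 - r2)

for j, name in enumerate(names):
    v = vif(X, j)
    tol = 1.0 / v
    # Rule of thumb from the text: tolerance > 0.1 and VIF < 10.
    ok = tol > 0.1 and v < 10
    print(f"{name}: tolerance={tol:.3f}, VIF={v:.3f}, "
          f"{'No Multicollinearity' if ok else 'Multicollinearity'}")
```

Tolerance is always the reciprocal of VIF, which is why SPSS reports both columns side by side.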
3 Heteroscedasticity Test
The heteroscedasticity test aims to examine whether the regression model has unequal variance of the residuals
from one observation to another. If the variance of the residuals is constant, it is called homoscedasticity; if
it differs, it is called heteroscedasticity.
A good regression model is homoscedastic, i.e., it does not have
heteroscedasticity.
The method used to detect the presence or absence of
heteroscedasticity is to examine the scatterplot of the predicted values of the dependent variable against the
residuals. In the scatterplot below, the points spread randomly above and below zero on the Y axis, which means that
there is no heteroscedasticity in the regression model, so the regression model is appropriate for the test. This can be
seen in Figure 4.2 below:
Figure 4.2 Scatterplot Dependent Variable
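The data behind such a scatterplot are the standardized predicted values (SPSS's ZPRED) and standardized residuals (SRESID). The following sketch, assuming NumPy is available and using synthetic data unrelated to the thesis sample, shows how those quantities are computed:

```python
# Sketch of the quantities plotted in an SPSS-style heteroscedasticity
# scatterplot: standardized predicted values (ZPRED) against
# standardized residuals (SRESID). Synthetic data only.
import numpy as np

rng = np.random.default_rng(1)
n = 32
X = np.column_stack([np.ones(n), rng.normal(size=(n, 4))])
y = X @ np.array([1.0, 2.0, -1.0, 0.5, 0.3]) + rng.normal(size=n)

# Ordinary least squares fit.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ beta
resid = y - pred

zpred = (pred - pred.mean()) / pred.std(ddof=1)
sresid = (resid - resid.mean()) / resid.std(ddof=1)

# Points scattered randomly above and below zero, with no funnel or
# fan shape, suggest homoscedasticity.
above = int((sresid > 0).sum())
below = int((sresid < 0).sum())
print(above, below)
```

A roughly even split above and below zero, with no widening pattern as ZPRED grows, is the visual signature of homoscedasticity described in the text.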
4 Autocorrelation Test
The autocorrelation test is used to determine the presence or absence of the classic-assumption deviation of autocorrelation, i.e., correlation
between the residuals of one observation and those of other observations in the regression model.
Table 4.9 Autocorrelation
Model Summary(b)

Model   R        R Square   Adjusted R Square   Std. Error of the Estimate   Durbin-Watson
1       .901(a)  .811       .783                7.84257                      1.333

a. Predictors: (Constant), Inflation, Current_Ratio, Activity_Ratio, Debt_Ratio
b. Dependent Variable: Profitability_Ratio
From the output above, the Durbin-Watson (DW) value produced by the regression model is 1.333. At the 0.05 significance level,
with n = 32 observations and k = 4 (k is the number of independent variables), the table values are dL = 1.24 and dU = 1.65.
Because the DW value (1.333) lies in the area between dL and dU, the test does not produce a definitive conclusion.
c. Hypothesis Test 1