RESEARCH METHODOLOGY
Data Analysis Method
a. Normality Test
According to Hair et al. (2006), as cited in Adinugraha et al. (2007), the purpose of the normality test is to determine whether the variables in the regression model are normally distributed. The normality test is conducted to determine whether the inferential statistics to be used are parametric or non-parametric. There are two ways to test normality: graph analysis and statistical tests (Ghozali, 2011). The researcher chose two tools to test whether the data are normally distributed.
1. Graph Analysis
When using graph analysis, the normality test can be done by looking at the spread of the data points along the diagonal axis of the graph or by looking at the histogram of the residuals.
a. If the dots spread around the diagonal line and follow its direction, the regression model meets the normality assumption.
b. If the dots spread far from the diagonal line and/or do not follow its direction, the regression model does not meet the normality assumption.
2. Statistical Test
The Kolmogorov-Smirnov Z (1-Sample KS) test is used for making decisions regarding normality.
a. If the Asymp. Sig. (2-tailed) value is less than 0.05, the data are not normally distributed.
b. If the Asymp. Sig. (2-tailed) value is more than 0.05, the data are normally distributed.
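The decision rule above can be sketched with SciPy's one-sample KS test. This is an illustration on simulated residuals, not the study's data; the 0.05 cut-off follows the text.

```python
# Illustrative sketch of the 1-Sample Kolmogorov-Smirnov decision rule;
# the residuals here are simulated, not the study's sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
residuals = rng.normal(loc=0.0, scale=1.0, size=200)  # hypothetical residuals

# Test against a normal distribution with the sample's own mean and std,
# mirroring SPSS's 1-Sample KS procedure.
stat, p_value = stats.kstest(residuals, "norm",
                             args=(residuals.mean(), residuals.std(ddof=1)))

if p_value > 0.05:
    decision = "normally distributed"       # Asymp. Sig. > 0.05
else:
    decision = "not normally distributed"   # Asymp. Sig. < 0.05
print(f"KS Z = {stat:.3f}, p = {p_value:.3f}: data are {decision}")
```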
b. Multicollinearity Test
The multicollinearity test aims to test whether the regression model exhibits correlation among the independent variables (Ghozali, 2011). A good regression model should show no correlation among the independent variables. The presence or absence of multicollinearity in the regression model can be detected from the tolerance value and its counterpart, the variance inflation factor (VIF). Multicollinearity is indicated by a tolerance value below 0.10 or a VIF above 10. Both measures indicate how much each independent variable is explained by the other independent variables.
c. Heteroscedasticity Test
The heteroscedasticity test aims to test whether the variance of the residuals differs from one observation to another (Santoso, 2010). If the variance remains constant, it is called homoscedasticity; if it changes, it is called heteroscedasticity (Santoso, 2010). A good regression model is homoscedastic, i.e., free of heteroscedasticity.
In this study, heteroscedasticity is examined using the
scatter plot of the standardized predicted values (ZPRED) against the studentized residuals (SRESID). The Y-axis is the predicted value and the X-axis is the residual (predicted Y minus actual Y). The decision is made based on the following considerations:
1. If the dots form a specific, regular pattern (e.g., waving, or spreading then narrowing), heteroscedasticity occurs.
2. If there is no clear pattern and the dots spread above and below zero on the Y-axis, heteroscedasticity does not occur.
d. Autocorrelation Test
The autocorrelation test aims to determine whether, in a linear regression model, the disturbance in period t is correlated with the disturbance in the previous period, t-1 (Santoso, 2010). A good regression model is free from autocorrelation.
Autocorrelation can be detected using the Durbin-Watson (DW) test and the Breusch-Godfrey test.
Table 3.1 Durbin-Watson (DW) Test

Formula         Decision
DW < -2         Positive autocorrelation
-2 ≤ DW ≤ +2    No autocorrelation
DW > +2         Negative autocorrelation
3. Multiple Regression Analysis
Multiple regression analysis is used to test the effect of two or more independent variables on the dependent variable (Ghozali, 2011). Regression analysis is divided into two kinds: simple regression analysis, when there is only one independent variable, and multiple regression analysis, when there is more than one independent variable. Multiple regression can be assessed partially, indicated by the partial regression coefficients, or jointly, indicated by the coefficient of multiple determination (R²).
The independent variable in this research is audit committee effectiveness; the dependent variable is timeliness, which is separated into audit lag and report lag; and the control variables are financial condition, company size, and audit firm's size. The structural equation model proposed as the empirical model is as follows:
Y1 = β0 + β1X1 + β2X2 + β3X3 + β4X4 + β5X5 + β6X6 + β7X7 + ε

Where:
Y1 = Audit Lag
X1 = Audit Committee Independence
X2 = Audit Committee Expertise
X3 = Audit Committee Size
X4 = Audit Committee Meeting
X5 = Company Size
X6 = Auditor Firm's Size
X7 = Profitability
β1 = Regression coefficient of Audit Committee Independence
β2 = Regression coefficient of Audit Committee Expertise
β3 = Regression coefficient of Audit Committee Size
β4 = Regression coefficient of Audit Committee Meeting
β5 = Regression coefficient of Company Size
β6 = Regression coefficient of Auditor Firm's Size
β7 = Regression coefficient of Profitability
ε = Error term

a.
Simultaneous Regression Analysis Test (F-test)
Essentially, the F-test determines whether the independent variables simultaneously have a significant influence on the dependent variable. The independent variable in this research is audit committee effectiveness and the dependent variable is timeliness, so the F-test indicates the joint influence of the audit committee effectiveness measures on timeliness. The α used in this research is 0.05 (5%), with the following decision rule:
1. If the significance value is greater than 0.05 (5%), Ho is accepted.
2. If the significance value is less than 0.05 (5%), Ho is rejected.
b. Partial Regression Test (t-test)
The t-test basically indicates the influence of each individual independent variable on the dependent variable. The value of the t-statistic is compared with the chosen degree of confidence. The level of significance used in this test is 5% (α = 0.05). Decision-making is based on the probability values:
1. If the significance value is less than the error rate (α = 0.05), Ho1 and Ho2 are rejected.
2. If the significance value is greater than the error rate (α = 0.05), Ho1 and Ho2 are accepted.

4.
Coefficient of Determination Test (R²)
The coefficient of determination (R²)
is a statistical measure of how well the regression line approximates the real data points. By knowing the value of R², the magnitude of the contribution of the independent variables to the dependent variable can be determined. R² takes a value between zero and one.
If R² is near 0, the regression model cannot explain most of the variation in the data; in this case, the model fits the data poorly. On the other hand, if R² is near 1, the regression model explains most of the variation in the dependent variable; in other words, the model fits the data well (Sekaran and Bougie, 2010).