E. Data Analysis Techniques
Several statistical techniques were used for data processing, each serving a different purpose. These include frequency analysis, descriptive statistics, data quality tests, correlation analysis, regression analysis, the F-test, and the t-test.
1. Frequency analysis
In this research, the researchers used frequency analysis to describe the pattern of the respondents' backgrounds. The frequency analysis covers gender, age, job, experience, last position, and education.
2. Descriptive statistics
After all data were collected, they were processed and analyzed using descriptive statistics. According to Lind et al. (2005, p. 6), descriptive statistics are methods of organizing, summarizing, and presenting data in an informative way. Descriptive statistics are numbers used to summarize and describe data; the word data refers to information that has been collected from an experiment, a survey, a historical record, and so on. One important use of descriptive statistics is to summarize a collection of data in a clear and understandable way. The limitation of descriptive analysis is the risk of distorting the original data or losing important detail. Even given these limitations, descriptive statistics provide a powerful summary that enables comparisons across people or other units.
A descriptive statistic is a numerical summary of a dataset. In this research, the mean is used. There are several types of mean, but by far the most commonly used is the arithmetic mean, which is simply the sum of the measurements divided by the number of measurements. This is what people typically refer to as the average.
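The arithmetic mean described above can be sketched in a few lines of Python; the sample scores below are hypothetical and stand in for questionnaire responses (the study itself used SPSS):

```python
# Arithmetic mean: sum of the measurements divided by their count.
scores = [4, 5, 3, 4, 5, 4, 2, 5]  # hypothetical Likert-scale responses

mean = sum(scores) / len(scores)
print(mean)  # 4.0
```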
3. Quantitative Analysis
The quality of the quantitative analysis is ensured through a validity test, a reliability test, a normality test, and the classic assumption tests.
a. Validity test
The validity test is intended to measure the extent to which the variables used really measure what they should measure. Validity is tested using the Pearson correlation, i.e., by calculating the correlation between the score of each question item and the total score (Ghozali, 2001). The criterion used is as follows: if the correlation between the score of a question and the total score has a significance level below 0.05, the question is said to be valid; if the significance level is above 0.05, the question is not valid (Santoso, 2000, p. 277).
The correlation coefficient used in this study is the product moment (Sutrisno Hadi, 1991), as follows:

r_XY = [n∑XY − (∑X)(∑Y)] / √{[n∑X² − (∑X)²][n∑Y² − (∑Y)²]}

Where:
n = number of respondents
X = answer score on the question item
Y = total score of the question items
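The product-moment formula can be implemented directly; the item and total scores below are hypothetical and only illustrate the calculation:

```python
import math

def pearson_r(x, y):
    """Product-moment correlation: r = [n∑XY - ∑X∑Y] / sqrt([n∑X² - (∑X)²][n∑Y² - (∑Y)²])."""
    n = len(x)
    sum_x, sum_y = sum(x), sum(y)
    sum_xy = sum(a * b for a, b in zip(x, y))
    sum_x2 = sum(a * a for a in x)
    sum_y2 = sum(b * b for b in y)
    num = n * sum_xy - sum_x * sum_y
    den = math.sqrt((n * sum_x2 - sum_x ** 2) * (n * sum_y2 - sum_y ** 2))
    return num / den

# Hypothetical scores: one question item vs. the questionnaire total
item = [4, 5, 3, 2, 4]
total = [18, 22, 15, 11, 19]
r_val = pearson_r(item, total)
print(round(r_val, 3))  # 0.996
```

A coefficient this close to 1 would then be checked for significance before declaring the item valid.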
b. Reliability test
The reliability test was used to measure whether the variables used are free of errors and produce consistent results. With the help of SPSS, the reliability test produces a Cronbach's Alpha. If the Cronbach's Alpha is below 0.05, the reliability of the data is said to be relatively low (Singgih, 2000, p. 290). The data quality tests were run using SPSS version 17.0.
r = [k / (k − 1)] [1 − (∑σ_b² / σ_t²)]

Where:
r = reliability
k = number of questions
∑σ_b² = total variance of the question items
σ_t² = total variance

Table 3.1 Scale of Instrument Reliability
Interval Coefficient | Level of Reliability
Below 0.200 | Very Low
0.200 – 0.399 | Low
0.400 – 0.599 | Sufficient
0.600 – 0.799 | High
0.800 – 1.00 | Very High
Source: Sugiono (2005)
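Cronbach's Alpha can be sketched from the formula above; the three question columns below are hypothetical answers from four respondents, not data from this study:

```python
def variance(values):
    """Population variance of a list of scores."""
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

def cronbach_alpha(items):
    """alpha = [k/(k-1)] * [1 - (sum of item variances / total variance)].

    items: one list of scores per question, all in the same respondent order.
    """
    k = len(items)
    item_var_sum = sum(variance(q) for q in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Hypothetical answers: 3 questions, 4 respondents
q1 = [4, 5, 3, 4]
q2 = [3, 5, 2, 4]
q3 = [4, 4, 3, 5]
alpha = cronbach_alpha([q1, q2, q3])
print(round(alpha, 3))  # 0.857
```

Against Table 3.1, an alpha of 0.857 would fall in the "Very High" reliability band.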
c. Normality test
The normality test is performed to check whether the data are normally distributed, since the data were obtained directly from the first party via a questionnaire.
d. Classic Assumption Test
There are four types of classic assumption tests: the normality test, autocorrelation, heteroskedasticity, and multicollinearity.
1. Autocorrelation
The autocorrelation test examines whether, in a linear regression model, there is a correlation between the disturbance variable e_t and the previous disturbance variable e_(t−1). If such a correlation exists, the problem is called autocorrelation. To detect autocorrelation, the Durbin–Watson test can be used.
The Durbin–Watson statistic is formulated as follows:

d = ∑(e_t − e_(t−1))² / ∑e_t²
Table 3.2 Durbin–Watson Autocorrelation Measurement
Durbin–Watson | Conclusion
Less than 1.10 | Autocorrelation present
1.10 – 1.54 | No conclusion
1.55 – 2.46 | No autocorrelation
2.46 – 2.90 | No conclusion
More than 2.90 | Autocorrelation present
Source: Muhammad Firdaus (2004, p. 101)
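The Durbin–Watson statistic is straightforward to compute from regression residuals; the residuals below are hypothetical, chosen only to exercise the formula:

```python
def durbin_watson(residuals):
    """d = sum over t of (e_t - e_{t-1})^2, divided by sum of e_t^2."""
    num = sum((residuals[t] - residuals[t - 1]) ** 2
              for t in range(1, len(residuals)))
    den = sum(e ** 2 for e in residuals)
    return num / den

# Hypothetical regression residuals (alternating signs)
e = [0.5, -0.3, 0.8, -0.6, 0.2, -0.4]
d = durbin_watson(e)
print(round(d, 2))  # 3.12
```

Read against Table 3.2, a value above 2.90 like this one would signal autocorrelation; the alternating signs of the residuals are exactly what drives d above 2.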
2. Heteroskedasticity
The heteroskedasticity test examines whether, in the regression model, the variance of the residuals is unequal from one observation to another. Heteroskedasticity occurs when the variance of the disturbance is not constant, while homoskedasticity occurs when that variance is constant. A good regression model is homoskedastic.
3. Multicollinearity
The multicollinearity test examines whether the regression model contains correlations between the independent variables. In a good regression model there is no correlation between the independent variables, because if such correlation occurs the variables are essentially similar. This test helps avoid bias in the decision-making process regarding the partial effect of each independent variable on the dependent variable. To detect a multicollinearity problem, the tolerance value and the Variance Inflation Factor (VIF) can be examined.
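For the two-predictor case used in this study, VIF reduces to 1 / (1 − r²), where r is the correlation between the two predictors. The sketch below uses hypothetical predictor columns; the variable names only echo the study's variables and are not its data:

```python
import math

def pearson_r(x, y):
    """Product-moment correlation between two score lists."""
    n = len(x)
    num = n * sum(a * b for a, b in zip(x, y)) - sum(x) * sum(y)
    den = math.sqrt((n * sum(a * a for a in x) - sum(x) ** 2) *
                    (n * sum(b * b for b in y) - sum(y) ** 2))
    return num / den

def vif_two_predictors(x1, x2):
    """VIF = 1 / (1 - R^2); with two predictors, R^2 is the squared correlation."""
    r = pearson_r(x1, x2)
    return 1 / (1 - r ** 2)

# Hypothetical predictor columns (e.g., seniority and ethical-attitude scores)
x1 = [1, 2, 3, 4, 5]
x2 = [2, 1, 4, 3, 5]
v = vif_two_predictors(x1, x2)
print(round(v, 2))  # 2.78
```

A common rule of thumb treats VIF values well below 10 (with tolerance = 1/VIF well above 0.1) as showing no serious multicollinearity.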
4. Multiple Linear Regression
Regression analysis will be used to test the hypotheses formulated for this study. Two variables, seniority and ethical attitudes, were entered.
a. F- test
An F-test is any statistical test in which the test statistic has an F-distribution if the null hypothesis is true. It is most often used when comparing statistical models in order to identify the model that best fits a given set of data (Lomax, 2007).
b. t-test
A t-test is any statistical hypothesis test in which the test statistic follows a t-distribution if the null hypothesis is true. It is most commonly applied when the test statistic would follow a normal distribution if the value of a scaling term in the test statistic were known (Buonowikarto, 2009, p. 27). The t-test is used to determine the significance of each independent variable:

t = b / Sb

Sb = Se / √[∑X² − (∑X)² / n]

Se = √{[∑Y² − a∑Y − b∑XY] / (n − 2)}
Hypotheses:
H₀: there is no significant influence of the independent variable on the dependent variable
H₁: there is a significant influence of the independent variable on the dependent variable
Decision-making is based on probability: if the probability is greater than 0.05, H₀ is accepted; if the probability is less than 0.05, H₀ is rejected.
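The t-statistic for a regression slope can be computed from the sums in the formulas above. The sketch fits a simple regression y = a + bX on hypothetical data; the resulting t would then be compared against the t-distribution to obtain the probability used in the decision rule:

```python
import math

def slope_t_statistic(x, y):
    """t = b / Sb for a simple regression y = a + b*x.

    Sb = Se / sqrt(sum(X^2) - (sum X)^2 / n)
    Se = sqrt((sum(Y^2) - a*sum(Y) - b*sum(XY)) / (n - 2))
    """
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(a * b for a, b in zip(x, y))
    sx2 = sum(a * a for a in x)
    sy2 = sum(b * b for b in y)
    b = (n * sxy - sx * sy) / (n * sx2 - sx ** 2)  # least-squares slope
    a = (sy - b * sx) / n                          # least-squares intercept
    se = math.sqrt((sy2 - a * sy - b * sxy) / (n - 2))
    sb = se / math.sqrt(sx2 - sx ** 2 / n)
    return b / sb

# Hypothetical data with a strong linear trend
x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 7.8, 10.1]
t = slope_t_statistic(x, y)
print(round(t, 2))  # 33.32
```

A t-value this large corresponds to a probability far below 0.05, so H₀ would be rejected for these hypothetical data.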
F. Variables Used