Table 4.47 Customer Will Recommend Coca-Cola to Their Relatives or Friends

                      Frequency   Percentage
Strongly Disagree         1           1.7
Disagree                  3           5.0
Neutral                   4           6.7
Agree                    44          73.3
Strongly Agree            8          13.3
Total                    60         100.0
Source: Primary Data Output from SPSS 20
As shown in table 4.47 above, 1 respondent (1.7%) stated strongly disagree, 3 respondents (5%) stated disagree, 4 respondents (6.7%) stated neutral, 44 respondents (73.3%) stated agree, and 8 respondents (13.3%) stated strongly agree with the statement that the customer will recommend Coca-Cola to their relatives or friends.
c. Classical Assumption Test
1) Normality Test
a) P-P Plot

Figure 4.3 Normality Test Result
Source: Primary Data Output from SPSS 20
The normality test aims to determine whether the data for the variables used in this research are normally distributed; good research data follow a normal distribution. Normality can be assessed in several ways, one of which is to inspect the normal probability plot. When the dots spread around the diagonal line and follow its direction, the data can be said to have a normal distribution.
Based on figure 4.3, it can be seen that the plots are distributed along the diagonal line. Thus, it can be concluded that the data used in this research have a normal distribution.
b) One-Sample Kolmogorov-Smirnov

Table 4.48 Normality Test Result

One-Sample Kolmogorov-Smirnov Test
                                                   Unstandardized Residual
N                                                           60
Normal Parameters(a,b)     Mean                             0E-7
                           Std. Deviation                   2.37585420
Most Extreme Differences   Absolute                         0.136
                           Positive                         0.136
                           Negative                        -0.050
Kolmogorov-Smirnov Z                                        1.052
Asymp. Sig. (2-tailed)                                      0.218
a. Test distribution is Normal.
b. Calculated from data.
Source: Primary Data Output from SPSS 20
Based on table 4.48 above, it can be seen that the Asymp. Sig. (2-tailed) value is 0.218. Since this significance is greater than 0.05 (0.218 > 0.05), the residuals follow a normal distribution, so the regression model satisfies the normality assumption.
2) Multicollinearity Test
Table 4.49 Multicollinearity Statistics

Variable                 Collinearity Statistics
                         Tolerance      VIF
Customer Value             0.372       2.688
Customer Satisfaction      0.283       3.529
Trust in Brand             0.647       1.545

Source: Primary Data Output from SPSS 20
The multicollinearity test examines whether the independent variables in the regression model are correlated with one another; a good regression model should show no such correlation. The tolerance column shows that no independent variable has a tolerance value below 0.10, which means no independent variable is explained by the other independent variables to more than 95 percent. Likewise, the VIF column shows that no independent variable has a VIF value above 10. Thus, it can be concluded that there is no multicollinearity among the independent variables in the regression model.
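As a sanity check on the table (an illustration, not part of the original analysis), VIF is the reciprocal of tolerance, where tolerance_j = 1 − R_j² comes from regressing predictor j on the remaining predictors. The sketch below recomputes the VIF column from the tolerance column:

```python
def vif_from_tolerance(tolerance):
    """VIF_j = 1 / tolerance_j, where tolerance_j = 1 - R_j^2 and R_j^2
    is obtained by regressing predictor j on the other predictors."""
    return 1.0 / tolerance

# Tolerance values reported in Table 4.49
for tol in (0.372, 0.283, 0.647):
    vif = vif_from_tolerance(tol)
    assert tol > 0.10   # the tolerance rule of thumb used in the text
    assert vif < 10     # the VIF rule of thumb used in the text
```

The recomputed values (2.688, 3.534, 1.546) agree with the reported 2.688, 3.529, 1.545 up to rounding of the published tolerances.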
3) Heteroskedasticity Test
Figure 4.4 Heteroskedasticity
Source: Primary Data Output from SPSS 20
According to Duwi Priyatno (2012:165), a multiple linear regression is free of heteroskedasticity if:
a. there is no clear pattern
b. the points spread above and below zero on the Y axis
The heteroskedasticity test examines whether the residual variance differs from one observation period to another. If the variance is constant, the model is homoskedastic; a good model is homoskedastic, not heteroskedastic. From the scatter plot in figure 4.4 above it can be seen that the dots spread widely below and above zero, grouping on neither side alone, and form no clear pattern. Thus, it can be concluded that the data are free from the heteroskedasticity problem.

d. Multiple Linear Regression
Regression analysis is mainly used to examine the association between one or more independent variables and a dependent variable, and to predict how much influence the independent variables have on the dependent variable. The regression statistics in this study were calculated with the computer program SPSS 20 for Windows. A summary of the results of the data processing is shown in table 4.50:
Table 4.50 Result of Multiple Linear Regression

Coefficients(a)
Model                     Unstandardized Coefficients   Standardized Coefficients      t       Sig.
                              B         Std. Error              Beta
1  (Constant)               8.158         1.736                                     4.700    0.000
   Customer Value           0.284         0.066                0.440                4.324    0.000
   Customer Satisfaction    0.232         0.046                0.588                5.048    0.000
   Trust in Brand          -0.108         0.038               -0.216               -2.805    0.007
a. Dependent Variable: Customer Loyalty
Source: Primary Data Output from SPSS 20
From this result, the regression equation written with the unstandardized coefficients is as follows:

Y = 8.158 + 0.284 X1 + 0.232 X2 – 0.108 X3

Where:
Y  = Customer loyalty
X1 = Customer value variable
X2 = Customer satisfaction variable
X3 = Trust in brand variable
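For illustration only, the equation can be wrapped in a small function using the unstandardized coefficients from table 4.50, including the constant 8.158; the input scores below are hypothetical questionnaire totals, not data from the study.

```python
def predict_loyalty(x1, x2, x3):
    """Predicted customer loyalty from the unstandardized coefficients
    in Table 4.50: constant 8.158, customer value 0.284,
    customer satisfaction 0.232, trust in brand -0.108."""
    return 8.158 + 0.284 * x1 + 0.232 * x2 - 0.108 * x3

# Hypothetical questionnaire totals, for demonstration only.
y_hat = predict_loyalty(20, 25, 15)
```

The negative sign on X3 mirrors the table: holding the other predictors fixed, a higher trust-in-brand score lowers the predicted loyalty score slightly.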
1) Coefficient of Determination (R²)
Table 4.51 Coefficient of Determination

Model Summary(b)
Model      R         R Square    Adjusted R Square    Std. Error of the Estimate
1        0.886(a)     0.784           0.773                     2.439
a. Predictors: (Constant), Trust in Brand, Customer Value, Customer Satisfaction
b. Dependent Variable: Customer Loyalty
Source: Primary Data Output from SPSS 20

The correlation coefficient (R) shows the strength of the simultaneous relationship between the independent variables and the dependent variable. It ranges from 0 to 1: the closer to 1, the stronger the relationship; the closer to 0, the weaker it is (Duwi Priyatno, 2012:134). From the results shown in table 4.51, the correlation (R) between
customer value, customer satisfaction, and trust in brand on the one side and customer loyalty on the other is 0.886. This indicates a strong relationship between the independent variables and the dependent variable.
From table 4.51, the coefficient of determination (R²) computed with SPSS 20 shows an adjusted R square of 0.773. This means that 77.3% of customer loyalty as the dependent variable (Y) is explained by the independent variables customer value (X1), customer satisfaction (X2), and trust in brand (X3), while the remaining 22.7% is influenced by other variables not included in this regression analysis.
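As a cross-check (not part of the original output), the adjusted R square can be recomputed from R² = 0.784 with n = 60 respondents and k = 3 predictors; the small gap to the published 0.773 comes from rounding of R² in the table.

```python
def adjusted_r_squared(r2, n, k):
    """Adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - k - 1),
    where n is the sample size and k the number of predictors."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)

# Values from Table 4.51: R^2 = 0.784, 60 respondents, 3 predictors.
adj = adjusted_r_squared(0.784, 60, 3)   # about 0.772, vs. 0.773 reported
```

Adjusted R² penalizes each added predictor, which is why it is the preferred figure when reporting how much of the dependent variable the model explains.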
e. t-Test (Partial)