DATA AND METHODOLOGY

VIF shows how the variance of an estimator is inflated by the presence of multicollinearity. As $r_{23}^2$ approaches 1, the VIF approaches infinity: as the extent of collinearity increases, the variance of an estimator increases, and in the limit it can become infinite. As can be readily seen, if there is no collinearity between $X_2$ and $X_3$, the VIF will be 1. The speed with which variances and covariances increase can be seen with the variance-inflating factor (VIF). It may also be noted that the inverse of the VIF is called the tolerance (TOL). That is:

$$\mathrm{TOL}_j = \frac{1}{\mathrm{VIF}_j} = \left(1 - R_j^2\right) \qquad (3.10)$$

When $R_j^2 = 1$ (i.e., perfect collinearity), $\mathrm{TOL}_j = 0$; when $R_j^2 = 0$ (i.e., no collinearity whatsoever), $\mathrm{TOL}_j = 1$. Because of the intimate connection between VIF and TOL, the two can be used interchangeably. In this research, multicollinearity will be detected through auxiliary regressions. The criterion is: if the $R^2$ of the main regression equation is greater than the $R^2$ of the auxiliary regression, there is no serious multicollinearity. The F statistic for the auxiliary regression is:

$$F_i = \frac{R^2_{x_i \cdot x_1 x_2 x_3 \cdots x_k} \,/\, (k - 2)}{\left(1 - R^2_{x_i \cdot x_1 x_2 x_3 \cdots x_k}\right) / (n - k + 1)} \qquad (3.11)$$

where $R^2_{x_i \cdot x_1 x_2 x_3 \cdots x_k}$ is the $R^2$ from the regression of $x_i$ on the remaining $x$ variables, $n$ is the sample size, and $k$ is the number of explanatory variables.
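To make the VIF/TOL relationship in equation 3.10 concrete, the following is a minimal sketch in Python, assuming `numpy`, `pandas`, and `statsmodels` are available; the variable names and the simulated data are illustrative only, not the data used in this research.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Simulated regressors; x3 is built to be nearly collinear with x1.
rng = np.random.default_rng(0)
X = pd.DataFrame({"x1": rng.normal(size=100), "x2": rng.normal(size=100)})
X["x3"] = 0.9 * X["x1"] + 0.1 * rng.normal(size=100)

Xc = sm.add_constant(X)  # include the intercept when computing VIF
for j, name in enumerate(Xc.columns):
    if name == "const":
        continue
    vif = variance_inflation_factor(Xc.values, j)
    tol = 1.0 / vif  # TOL_j = 1 / VIF_j = 1 - R_j^2, eq. 3.10
    print(f"{name}: VIF = {vif:6.2f}, TOL = {tol:.3f}")
```

A VIF near 1 (TOL near 1), as expected for x2 here, indicates little collinearity, while x1 and x3 should show large VIFs and small tolerances because they were constructed to be nearly collinear.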

c. Autocorrelation Test

Autocorrelation, or serial correlation, is correlation between observations that lie close to one another. If the no-autocorrelation assumption does not hold, the OLS estimators are no longer efficient, and as a result the t test and the F test lose validity and power. Autocorrelation is defined as correlation between members of a group of observations ordered in time (as in time series data) or in space (as in cross-sectional data). Autocorrelation generally occurs in time series data and rarely in cross-sectional data: in a time series the observations follow a chronological sequence, so there is a high possibility of intercorrelation between successive observations, especially when the interval between them is short.

One of the best-known tests for detecting autocorrelation is the Durbin-Watson test. The test is based on the following error model:

$$\mu_t = \rho \mu_{t-1} + v_t \qquad (3.12)$$

Where:
$\mu_t$ = the error in period $t$
$\mu_{t-1}$ = the error in period $t-1$
$\rho$ = the lag-1 autocorrelation coefficient, measuring the correlation between the residuals of period $t$ and period $t-1$
$v_t$ = an independent error term, normally distributed with mean zero and variance $\sigma^2$

If $\rho = 0$, there is no serial correlation in the residuals, so the test uses the hypotheses:

H0: $\rho = 0$
H1: $\rho \neq 0$

The Durbin-Watson statistic is:

$$DW = \frac{\sum_{t=2}^{n} \left(\hat{\mu}_t - \hat{\mu}_{t-1}\right)^2}{\sum_{t=1}^{n} \hat{\mu}_t^2} \qquad (3.13)$$

Where:
$\hat{\mu}_t = Y_t - \hat{\beta} - \hat{\beta}_1 X_t = Y_t - \hat{Y}_t$, the residual in period $t$
$\hat{\mu}_{t-1} = Y_{t-1} - \hat{\beta} - \hat{\beta}_1 X_{t-1} = Y_{t-1} - \hat{Y}_{t-1}$, the residual in period $t-1$

Equation 3.13 can be written in the form:

$$DW = 2\left[1 - \frac{\sum \hat{\mu}_t \hat{\mu}_{t-1}}{\sum \hat{\mu}_t^2}\right] = 2(1 - \hat{\rho}) \qquad (3.14)$$

where the estimated autocorrelation coefficient is:

$$\hat{\rho} = \frac{\sum \hat{\mu}_t \hat{\mu}_{t-1}}{\sum \hat{\mu}_t^2} \qquad (3.15)$$

As mentioned before, $\rho$ is the autocorrelation coefficient, with $-1 \leq \rho \leq 1$. Based on equation 3.14 this means:
1. If the DW statistic is 2, then $\rho = 0$: there is no autocorrelation.
2. If the DW statistic is 0, then $\rho = 1$: there is perfect positive autocorrelation.
3. If the DW statistic is 4, then $\rho = -1$: there is perfect negative autocorrelation.

Figure 3.1 Durbin-Watson decision regions: positive autocorrelation (DW $< d_L$); zone of indecision ($d_L \leq$ DW $\leq d_U$); do not reject H0 ($d_U <$ DW $< 4 - d_U$); zone of indecision ($4 - d_U \leq$ DW $\leq 4 - d_L$); negative autocorrelation (DW $> 4 - d_L$).

If the assumption of the classical linear regression model that the error disturbances $\mu_i$ entering the population regression function (PRF) are random (uncorrelated) is violated, the problem of serial or autocorrelation arises. The term autocorrelation may be defined as "correlation between members of a series of observations ordered in time (as in time series data) or space (as in cross-sectional data)." In the regression context, the classical linear regression model assumes that such autocorrelation does not exist in the disturbances $\mu_i$. Symbolically,

$$E(\mu_i \mu_j) = 0, \quad i \neq j \qquad (3.16)$$

Autocorrelation can arise for several reasons, such as inertia or sluggishness of economic time series, specification bias resulting from excluding important variables from the model or using an incorrect functional form, the cobweb phenomenon, data massaging, and data transformation. As a result, it is useful to distinguish between pure autocorrelation and "induced" autocorrelation caused by one or more of the factors just discussed.
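To make the mechanics of equations 3.12 through 3.15 concrete, the sketch below simulates a regression with AR(1) errors, computes the Durbin-Watson statistic directly from equation 3.13, and compares it with the approximation $2(1 - \hat{\rho})$ from equation 3.14. This is an illustrative sketch assuming `numpy` and `statsmodels`; the simulated $\rho = 0.6$ and all variable names are assumptions, not values from this research.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

# Simulate y = 1 + 2x + mu, where mu_t = rho * mu_{t-1} + v_t  (eq. 3.12)
rng = np.random.default_rng(1)
n, rho = 200, 0.6  # rho = 0.6 is an arbitrary illustrative value
x = rng.normal(size=n)
mu = np.zeros(n)
for t in range(1, n):
    mu[t] = rho * mu[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + mu

e = sm.OLS(y, sm.add_constant(x)).fit().resid

# Durbin-Watson statistic computed directly from eq. 3.13
dw_manual = np.sum(np.diff(e) ** 2) / np.sum(e ** 2)
dw = durbin_watson(e)  # statsmodels gives the same value

# rho_hat (eq. 3.15) and the approximation DW = 2(1 - rho_hat) (eq. 3.14)
rho_hat = np.sum(e[1:] * e[:-1]) / np.sum(e ** 2)
print(f"DW = {dw:.3f} (manual: {dw_manual:.3f}); "
      f"2(1 - rho_hat) = {2 * (1 - rho_hat):.3f}")
```

Because the simulated errors are positively autocorrelated, the statistic should fall well below 2, consistent with interpretation 2 above.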
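The decision regions of Figure 3.1 can also be expressed as a small helper function. The sketch below is hypothetical: the function name and the critical values $d_L$ and $d_U$ passed to it are illustrative only, since the true bounds must be taken from a Durbin-Watson table for the actual sample size and number of regressors.

```python
def dw_decision(dw: float, d_l: float, d_u: float) -> str:
    """Classify a Durbin-Watson statistic using the regions of Figure 3.1."""
    if dw < d_l:
        return "reject H0: positive autocorrelation"
    if dw <= d_u:
        return "zone of indecision"
    if dw < 4 - d_u:
        return "do not reject H0"
    if dw <= 4 - d_l:
        return "zone of indecision"
    return "reject H0: negative autocorrelation"

# d_L and d_U below are placeholders, not values from a real DW table.
print(dw_decision(1.95, d_l=1.65, d_u=1.69))  # -> do not reject H0
```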

d. Heteroscedasticity Test