2. Types of Data
This research uses secondary data obtained from the following sources. The independent variable data on the SBI interest rate and the exchange rate of the Indonesian Rupiah to the US Dollar are obtained from Indonesia's Economic and Financial Statistics published by Bank Indonesia, while the inflation rate data are gathered from the National Statistical Bureau (BPS).
B. Research Models
This research uses multiple linear regression models as follows:

Model 1:
R_i = α + β1·IR + β2·INFLATION + β3·EXCHANGE RATE + ε (3.1)

Model 2:
R_j = α + β1·IR + β2·INFLATION + β3·EXCHANGE RATE + ε (3.2)

where:
R_i = stock return of the property and real estate sector
R_j = stock return of the consumer goods sector
α = intercept
β = regression coefficient
IR = change in the Indonesian interest rate (SBI)
INFLATION = change in the inflation rate
EXCHANGE RATE = change in the exchange rate of the Indonesian Rupiah to the US Dollar
C. Operational Variable
An operational variable is a statement of the specific dimensions and elements through which a concept becomes measurable (Sekaran, 2006). There are two kinds of variables, independent and dependent, each of which is given a specific dimension and definition below.
The operational definition of each variable is as follows:
1. Stock’s Rate of Return
In calculating a stock's rate of return, the writer uses the continuous compounding method as follows:

r_t = ln(P_t / P_{t-1}) (3.3)

where:
r_t = continuously compounded return in period t
P_t = stock price in period t
P_{t-1} = stock price in period t-1

From the formula above, a stock's rate of return is given by the change in the stock's price. The stock price data used are the monthly closing prices during December 2005 to December 2010.
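As an illustration, the continuously compounded return of equation (3.3) can be sketched in a few lines of Python. The closing prices below are hypothetical examples, not the thesis data:

```python
import numpy as np

def log_returns(prices):
    """Continuously compounded returns: r_t = ln(P_t / P_{t-1}), as in Eq. (3.3)."""
    prices = np.asarray(prices, dtype=float)
    return np.log(prices[1:] / prices[:-1])

# Hypothetical monthly closing prices (the thesis uses Dec 2005 - Dec 2010 data).
closing = [100.0, 105.0, 102.0, 110.0]
print(log_returns(closing))
```

Note that n prices yield n − 1 returns, which is why a December 2005 starting point is needed to produce returns from January 2006 onward.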
37
2. Inflation Rate
The inflation rate data used in this research are calculated from the change in the monthly inflation rate based on the Consumer Price Index (CPI). Inflation per month can be calculated as follows:

INF_t = (CPI_t − CPI_{t-1}) / CPI_{t-1} (3.4)

After calculating the monthly inflation rate, the change in the inflation rate is measured with the following formula:

INFLATION_t = (INF_t − INF_{t-1}) / INF_{t-1} (3.5)

where:
INFLATION_t = change in the inflation rate in period t
INF_t = inflation in period t
INF_{t-1} = inflation in period t-1
3. Exchange Rate
This research uses the exchange rate of the Indonesian Rupiah to the US Dollar. The exchange rate used is the mid-point between the buying and selling price.
EXRATE_t = (ER_t − ER_{t-1}) / ER_{t-1} (3.6)

where:
EXRATE_t = change in the exchange rate of the Indonesian Rupiah to the US Dollar in period t
ER_t = exchange rate of the Indonesian Rupiah to the US Dollar in period t
ER_{t-1} = exchange rate of the Indonesian Rupiah to the US Dollar in period t-1

These data are the monthly exchange rates of the Indonesian Rupiah to the US Dollar during December 2005 to December 2010, published by Bank Indonesia.
4. Interest Rate
In this research, the interest rate refers to the monthly Bank Indonesia Certificate (SBI) rate. The operational definition used in this research is the change in the SBI rate, defined as follows:
ΔIRATE_t = (IR_t − IR_{t-1}) / IR_{t-1} (3.7)

where:
ΔIRATE_t = change in the interest rate (SBI rate) in period t
IR_t = interest rate (SBI rate) in period t
IR_{t-1} = interest rate (SBI rate) in period t-1
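The change variables defined above share a common period-over-period form. A minimal sketch, assuming the change formulas (3.5)-(3.7) are read as relative changes (X_t − X_{t-1}) / X_{t-1}; the SBI values below are illustrative, not the thesis series:

```python
import numpy as np

def relative_change(series):
    """Period-over-period relative change, e.g. EXRATE_t = (ER_t - ER_{t-1}) / ER_{t-1}."""
    x = np.asarray(series, dtype=float)
    return (x[1:] - x[:-1]) / x[:-1]

# Hypothetical monthly SBI rates in percent (illustrative values only).
sbi = [12.75, 12.75, 12.50, 12.25]
print(relative_change(sbi))
```

The same helper applies unchanged to the inflation and exchange rate series, since all three operational definitions use the same formula.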
D. Data Analysis Technique
Regression analysis will be used to test the hypotheses formulated for this study. Three independent variables (inflation, interest rate, and exchange rate) were entered. Multiple regression determines the significance, direction, degree, and strength of the relationship between the dependent and independent variables (Sekaran, 2006). Multiple regression is the most sophisticated extension of correlation and is used to explore the predictive ability of a set of independent variables on a dependent variable (Pallant, 2001). Three hypotheses were generated, and each gives direction for assessing the statistical relationship between the dependent and independent variables. The conventional significance level of 5% (0.05) is used as evidence of a statistical association between the dependent and independent variables.
To obtain the best research model, the researcher must first perform several pre-tests: the normality test, the classical assumption tests (heteroscedasticity, autocorrelation, and multicollinearity tests), and the hypothesis tests.
1. Normality Test
In statistics, normality tests are used to determine whether a data set is well-modeled by a normal distribution or not, or to compute how likely an
underlying random variable is to be normally distributed. An informal approach to testing normality is to compare a histogram of the residuals to a normal probability curve. The actual distribution of the residuals (the histogram) should be bell-shaped and resemble the normal distribution.
There are several methods to detect whether data are normally distributed, including the histogram of residuals, the normal probability plot, and the Jarque-Bera test. In this research, the writer uses the method proposed by Jarque and Bera, commonly known as the Jarque-Bera test, to obtain the most accurate assessment of the data.
a. Jarque-Bera Test of Normality
The JB test of normality is an asymptotic, or large-sample, test based on the OLS residuals (Gujarati, 2004). The test first computes the skewness and kurtosis measures of the OLS residuals and uses the following test statistic:

JB = n · [S²/6 + (K − 3)²/24] (3.8)

where:
n = sample size
S = skewness coefficient
K = kurtosis coefficient

For a normally distributed variable, S = 0 and K = 3. Therefore, the JB test of normality is a test of the joint hypothesis that S and K are 0 and 3, respectively. In that case the value of the JB statistic is expected to be 0.
41
Accordingly, the hypotheses of the Jarque-Bera test are described as follows:
H0: the data are normally distributed
Ha: the data are not normally distributed
To decide whether the variable is normally distributed, one compares the value of the Jarque-Bera statistic with the critical value from the chi-square (χ²) table, as follows:
a. If the JB statistic is smaller than the χ² critical value, we do not reject H0; the data are normally distributed.
b. If the JB statistic exceeds the χ² critical value, we reject H0; the data are not normally distributed.
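The JB statistic of equation (3.8) is available directly in scipy. A short sketch on simulated residuals (the thesis itself runs this test in EViews 5 on its own OLS residuals):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
residuals = rng.normal(size=500)  # simulated stand-in for OLS residuals

# Jarque-Bera: JB = n * (S^2/6 + (K-3)^2/24), compared against chi-square(2).
res = stats.jarque_bera(residuals)
jb_stat, p_value = res.statistic, res.pvalue
print(jb_stat, p_value)
```

A p-value above the significance level means we fail to reject H0 of normality; a small p-value (equivalently, a JB statistic above the χ² critical value) rejects it.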
2. Classical Assumption Test
The Gaussian, standard, or classical linear regression model (CLRM), which is the cornerstone of most econometric theory, makes 10 assumptions underlying the Ordinary Least Squares method (Gujarati, 2004, p. 65). This research focuses on six basic assumptions in the context of the two-variable regression model.
Assumption 1: Linear regression model. The regression model is linear in the parameters.
Assumption 2: X values are fixed in repeated sampling. Values taken by the regressor X are considered fixed in repeated samples; more technically, X is assumed to be nonstochastic.
Assumption 3: Zero mean value of the disturbance u_i. Given the value of X, the mean, or expected, value of the random disturbance term u_i is zero. Technically, the conditional mean value of u_i is zero. Symbolically,

E(u_i | X_i) = 0 (3.9)

Assumption 4: Homoscedasticity, or equal variance of u_i. Given the value of X, the variance of u_i is the same for all observations; that is, the conditional variances of u_i are identical. Symbolically,

var(u_i | X_i) = σ² (3.10)

Assumption 5: No autocorrelation between the disturbances. Given any two X values, X_i and X_j (i ≠ j), the correlation between any two disturbances u_i and u_j (i ≠ j) is zero. Symbolically,

cov(u_i, u_j | X_i, X_j) = 0 (3.11)

Assumption 6: Zero covariance between u_i and X_i, or E(u_i X_i) = 0. By assumption,

cov(u_i, X_i) = E(u_i X_i) = 0 (3.12)
As noted earlier, given the assumptions of the classical linear regression model, the least-squares estimates possess some ideal or optimum properties. These properties are contained in the well-known Gauss-Markov theorem. To understand this theorem, we need to consider the best linear unbiasedness property of an estimator. The OLS estimator is said to be a best linear unbiased estimator (BLUE) if the following hold (Brooks, 2002):
1. It is linear, that is, a linear function of a random variable, such as the dependent variable Y in the regression model.
2. It is unbiased, that is, its average or expected value, E(β̂2), is equal to the true value, β2.
3. It has minimum variance in the class of all such linear unbiased estimators; an unbiased estimator with the least variance is known as an efficient estimator.
The classical assumption tests are needed to ensure that the regression model is the best estimator, or BLUE. They are also used to detect any violation of the classical linear model. The tests used are the heteroscedasticity test, the autocorrelation test, and the multicollinearity test.
a. Heteroscedasticity Test
One of the important assumptions of the classical linear regression model is that the variance of each disturbance term u_i, conditional on the chosen values of the explanatory variables, is some constant number equal to σ². This is the assumption of homoscedasticity, or equal (homo) spread (scedasticity). When the variance of each disturbance u_i is not constant, however, there is heteroscedasticity. According to Gujarati in his book Basic Econometrics, the consequences of heteroscedasticity in the regression model are as follows:
1. The estimators produced are still consistent, but they are no longer efficient: there exist other linear unbiased estimators with smaller variance than those produced by a regression model containing heteroscedasticity.
2. The estimated variances and standard errors of the coefficients are no longer accurate, which makes hypothesis testing unreliable.
In short, if we persist in using the usual testing procedures despite heteroscedasticity, whatever conclusions we draw or inferences we make may be very misleading. To detect whether heteroscedasticity is present in the data, the researcher conducts a formal test using White's method. White's general heteroscedasticity test is used because it does not rely on the normality assumption and is easy to implement. The White test proceeds as follows:
Step 1. Given the data, we estimate the regression model and obtain the residuals û_i:

Y_i = β1 + β2·X_2i + β3·X_3i + u_i (3.13)

Step 2. We then run the following auxiliary regression of the squared residuals on the regressors, their squares, and their cross products:

û_i² = α1 + α2·X_2i + α3·X_3i + α4·X_2i² + α5·X_3i² + α6·X_2i·X_3i + v_i (3.14)

Step 3. Formulate the hypotheses:
H0: there is no heteroscedasticity
Ha: there is heteroscedasticity
Under the null hypothesis that there is no heteroscedasticity, it can be shown that the sample size n times the R² obtained from the auxiliary regression asymptotically follows the chi-square distribution with degrees of freedom equal to the number of regressors (excluding the constant term) in the auxiliary regression. That is,

n·R² ~ χ²_df (3.15)

Step 4. If the chi-square value obtained in (3.15) exceeds the critical chi-square value at the chosen level of significance, the conclusion is that there is heteroscedasticity. If it does not exceed the critical value, there is no heteroscedasticity, which is to say that the slope coefficients in the auxiliary regression (3.14) are jointly zero.
If heteroscedasticity truly exists, one can use the Generalized Least Squares method or White's heteroscedasticity-corrected standard errors. The software EViews 5 already provides White's method to correct for heteroscedasticity.
46
b. Autocorrelation Test
The term autocorrelation may be defined as "correlation between members of a series of observations ordered in time [as in time series data] or space [as in cross-sectional data]" (Gujarati, 2004, p. 442). The fifth assumption of the classical linear regression model assumes that such autocorrelation does not exist in the disturbances u_i: the disturbance term relating to any observation is not influenced by the disturbance term relating to any other observation.
any other observation. As in the case of heteroscedasticity, in the presence of autocorrelation the OLS
estimators are still linear unbiased as well as consistent and asymptotically normally distributed, but they are no longer efficient i.e., minimum variance.
Therefore, the usual t and F tests of significance are no longer valid, and if applied, are likely to give seriously misleading conclusions about the statistical
significance of the estimated regression coefficients.
To detect autocorrelation, the writer uses the Breusch-Godfrey (BG) test in the software EViews 5. The BG test, also known as the Lagrange Multiplier (LM) test, involves the following hypotheses:
H0: there is no autocorrelation
Ha: there is autocorrelation
After running the test, we analyze the result by comparing the value of Obs*R-squared, which is the coefficient of determination (R-squared) multiplied by the number of observations, and its associated probability with the significance level α, as follows:
If the probability > α (5%), there is no autocorrelation, and thus we fail to reject H0.
If the probability < α (5%), there is autocorrelation, and thus we reject H0.
c. Multi-Collinearity Test
An estimator with the BLUE property is supposed to be free of multicollinearity. Since multicollinearity is essentially a sample phenomenon, arising out of the largely nonexperimental data collected in most social sciences, there is no one unique method of detecting it or measuring its strength. In this research, the writer detects the existence of multicollinearity using the software EViews 5. The method used is the examination of partial correlations, developed by Farrar and Glauber, who suggested that in examining the existence of multicollinearity one should look at the partial correlation coefficients (Gujarati, 2004). Thus, in the regression of Y on X2, X3, and X4, a finding that R²_{1.234} is very high but the partial correlations r²_{12.34}, r²_{13.24}, and r²_{14.23} are comparatively low may suggest that the variables X2, X3, and X4 are highly intercorrelated and that at least one of these variables is superfluous.
3. Hypothesis Test
a. t- test
According to Bhuono (2005), if the t-statistic exceeds the t-table value, H0 is rejected and Ha is accepted, meaning that the independent variable partially has a significant influence on the dependent variable. If the t-statistic is smaller than the t-table value, H0 is accepted and Ha is rejected, meaning that the independent variable partially has no significant influence on the dependent variable. The level of significance used is 5% (α = 0.05). Based on the theory above, the test for each hypothesis is as follows:
a. Hypothesis related to the interest rate
H0: β1 ≥ 0
Ha: β1 < 0
b. Hypothesis related to inflation
H0: β2 ≥ 0
Ha: β2 < 0
c. Hypothesis related to the exchange rate
H0: β3 ≥ 0
Ha: β3 < 0
b. F test
The function of the F-test is to examine the joint influence of all independent variables on the dependent variable. The steps of this test are:
a. Formulate the hypotheses:
H0: β1 = β2 = β3 = 0; there is no significant joint influence of the independent variables (X) on the dependent variable (Y).
Ha: at least one βi ≠ 0; there is a significant joint influence of the independent variables (X) on the dependent variable (Y).
b. Determine the significance level of 5%.
c. R-Squared (R²) Test
One of the measures of goodness of fit of a regression model is R², which is defined as:

R² = ESS / TSS = 1 − RSS / TSS (3.16)

R², thus defined, of necessity lies between 0 and 1. The closer it is to 1, the better the fit.
d. Adjusted R-Squared Test
The value of adjusted R² is always smaller than the value of R², because adjusted R² penalizes the addition of regressors. Unlike R², adjusted R² increases only if the absolute t-value of the added variable is greater than 1. The closer it is to 1, the better the fit: an adjusted R² close to 1 means that the independent variables explain almost 100% of the variance in the dependent variable.
E. Research Design
Figure 3.1 Research Framework
[Flow chart: the sampling process yields the dependent variable (sectoral market return indices) and the independent variables (macroeconomic factors: inflation, SBI rate, exchange rate); both are calculated based on the operational variable definitions; they enter the regression models (Model 1 and Model 2, equations 3.1 and 3.2), followed by the regression model tests, the hypothesis tests, the analysis, and the conclusion.]
CHAPTER 4
RESEARCH FINDINGS AND ANALYSIS
A. Brief Introduction
This chapter presents the process of data processing through statistical tools as well as the analysis of the findings. The data are processed with the statistical software EViews 5 using the multiple regression analysis model. The model is estimated using the Ordinary Least Squares (OLS) method before conducting the multiple regression analysis. To ensure that the model has the BLUE (Best Linear Unbiased Estimator) property, before testing the significance of the model we conduct the classical assumption tests, which consist of the normality test, heteroscedasticity test, multicollinearity test, and autocorrelation test. If the model passes these classical assumption tests, we assume that it has the BLUE property and are ready to test its significance. If it has not yet passed, we take remedial actions accordingly. After achieving a model with the BLUE property, the writer can interpret the results, make the analyses, and compare them with the theoretical expectations.