$$\frac{1}{\sqrt{n}}\sum_{t=2}^{n}\begin{pmatrix} Z_{t,1}-E(Z_{t,1}) \\ Z_{t,2}-E(Z_{t,2}) \end{pmatrix} \xrightarrow{d} N\big(0, V\big),$$
where $Z_{t,1} = X_{t-1}X_{t}$ and $Z_{t,2} = X_{t-1}^{2}$, the entries of the limiting covariance matrix $V$ are
$$V_{1,1} = \lim_{n\to\infty}\operatorname{Var}\!\Big(\tfrac{1}{\sqrt{n}}\textstyle\sum_{t}Z_{t,1}\Big),\qquad V_{1,2} = V_{2,1} = \lim_{n\to\infty}\operatorname{Cov}\!\Big(\tfrac{1}{\sqrt{n}}\textstyle\sum_{t}Z_{t,1},\,\tfrac{1}{\sqrt{n}}\textstyle\sum_{t}Z_{t,2}\Big),\qquad V_{2,2} = \lim_{n\to\infty}\operatorname{Var}\!\Big(\tfrac{1}{\sqrt{n}}\textstyle\sum_{t}Z_{t,2}\Big),$$
and
$$\sigma^{2} = \frac{V_{1,1} - 2\phi V_{1,2} + \phi^{2}V_{2,2}}{\big(E X_{1}^{2}\big)^{2}}$$
. Thus, by Theorem 2 we conclude that
$$\sqrt{n}\,\big(\hat{\phi}_{n} - \phi\big) = \sqrt{n}\left(\frac{\sum_{t=2}^{n}X_{t-1}X_{t}}{\sum_{t=2}^{n}X_{t-1}^{2}} - \phi\right) \xrightarrow{d} N\big(0, \sigma^{2}\big).$$
Similarly, by applying the plug-in principle, Theorem 3 gives the bootstrap version of the above result,
$$\frac{1}{\sqrt{n}}\sum_{t=2}^{n}\begin{pmatrix} Z^{*}_{t,1}-E^{*}(Z^{*}_{t,1}) \\ Z^{*}_{t,2}-E^{*}(Z^{*}_{t,2}) \end{pmatrix} \xrightarrow{d} N\big(0, \hat{V}_{n}\big)\ \text{in probability},$$
where $Z^{*}_{t,1} = X^{*}_{t-1}X^{*}_{t}$ and $Z^{*}_{t,2} = (X^{*}_{t-1})^{2}$ are computed from the bootstrap series, and
$$\sqrt{n}\,\big(\hat{\phi}^{*}_{n} - \hat{\phi}_{n}\big) = \sqrt{n}\left(\frac{\sum_{t=2}^{n}X^{*}_{t-1}X^{*}_{t}}{\sum_{t=2}^{n}\big(X^{*}_{t-1}\big)^{2}} - \hat{\phi}_{n}\right) \xrightarrow{d} N\big(0, \hat{\sigma}^{2}_{n}\big)\ \text{in probability},$$
where
$$\hat{\sigma}^{2}_{n} = \frac{\hat{V}_{1,1,n} - 2\hat{\phi}_{n}\hat{V}_{1,2,n} + \hat{\phi}^{2}_{n}\hat{V}_{2,2,n}}{\Big(\tfrac{1}{n}\sum_{t=1}^{n}X_{t}^{2}\Big)^{2}}$$
and $\hat{V}_{i,j,n}$ denotes the plug-in estimator of $V_{i,j}$.
The results obtained by these two methods lead to the same conclusion, i.e., both converge in distribution to a normal distribution, but the variances of the two limiting normal distributions are different. This difference is a reasonable property, because the two results are obtained by different approaches.
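To see what the two limit theorems assert in practice, the following minimal Python sketch (the paper's own computations were done in S-Plus) simulates many AR(1) series with a known coefficient, computes the least-squares estimate for each, and compares the Monte Carlo variance of $\sqrt{n}(\hat{\phi}_{n}-\phi)$ with the classical benchmark $1-\phi^{2}$, the value the limiting variance takes for an AR(1) model with i.i.d. innovations. The coefficient, sample size, and number of replications are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(n, phi, sigma=1.0, burn=200):
    """Simulate an AR(1) series X_t = phi*X_{t-1} + eps_t with i.i.d. N(0, sigma^2) innovations."""
    eps = rng.normal(0.0, sigma, size=n + burn)
    x = np.zeros(n + burn)
    for t in range(1, n + burn):
        x[t] = phi * x[t - 1] + eps[t]
    return x[burn:]            # discard the burn-in so the series is close to stationary

def ls_estimate(x):
    """Least-squares estimate of phi: sum X_{t-1} X_t / sum X_{t-1}^2."""
    return np.sum(x[:-1] * x[1:]) / np.sum(x[:-1] ** 2)

phi, n, reps = 0.7, 100, 5000   # illustrative values only
z = np.array([np.sqrt(n) * (ls_estimate(simulate_ar1(n, phi)) - phi) for _ in range(reps)])

print("sample variance of sqrt(n)*(phi_hat - phi):", round(z.var(), 4))
print("benchmark variance 1 - phi^2:              ", 1 - phi ** 2)
```

For moderate $n$ the two printed values are close, which is the content of the first limit theorem; the bootstrap version replaces the unknown $\phi$ and $\sigma^{2}$ by their plug-in counterparts.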
4. Results of Monte Carlo Simulations
The simulations are conducted using S-Plus, and the data sets consist of 20, 30, 50, and 100 time series observations of the exchange rate of the US dollar against the Indonesian rupiah. Let $n_{i}$, $i = 1, 2, 3, 4$, be the size of the $i$-th data set, respectively. The data are taken from the official website of Bank Indonesia, i.e., http://www.bi.go.id,
for transactions from May to August 2012. Let the value for the $t$-th transaction be $X_{t}$, regarded as a sample observation. After centering the four data sets, i.e., replacing $X_{t}$ by $X_{t} - \bar{X}$, we fit an AR(1) model
$$X_{t} = \phi X_{t-1} + \varepsilon_{t}, \qquad t = 1, 2, \ldots, n_{i}, \quad i = 1, 2, 3, 4,$$
where $\varepsilon_{t} \sim WN(0, \sigma^{2})$. For
the data of size $n_{1} = 20$, the simulation gives the estimate $\hat{\phi} = 0.7126$ with an estimated standard error $\widehat{se} = 0.1569$. The simulation shows that the larger $n$, the smaller the estimated standard error, so a larger $n$ gives a better estimate of $\phi$, as seen in Table 1.
Table 1. The estimates $\hat{\phi}^{*}$ and $\widehat{se}^{*}$ as compared to $\hat{\phi}$ and $\widehat{se}$, respectively, for various sample sizes $n$ and bootstrap sample sizes $B$.

                 B = 50    B = 200   B = 1,000  B = 2,000
 n = 20    φ̂*    0.5947    0.5937    0.6044     0.6224      (φ̂  = 0.7126)
           se*   0.1368    0.1428    0.1306     0.1295      (se = 0.1569)
 n = 30    φ̂*    0.6484    0.6223    0.6026     0.6280      (φ̂  = 0.7321)
           se*   0.1049    0.1027    0.1108     0.1185      (se = 0.1244)
 n = 50    φ̂*    0.5975    0.6051    0.5792     0.6002      (φ̂  = 0.6823)
           se*   0.1162    0.1178    0.1093     0.1103      (se = 0.1034)
 n = 100   φ̂*    0.6242    0.6104    0.6310     0.6197      (φ̂  = 0.6884)
           se*   0.0962    0.1006    0.0994     0.0986      (se = 0.0736)
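The reported standard errors $\widehat{se}$ in Table 1 are consistent with the classical large-sample variance $1-\phi^{2}$ of the least-squares estimator. Assuming, as a reading of the table rather than a statement made by the authors, that $\widehat{se} = \sqrt{(1-\hat{\phi}^{2})/n}$, the entry for $n_{1} = 20$ is recovered:
$$\widehat{se} = \sqrt{\frac{1-\hat{\phi}^{2}}{n_{1}}} = \sqrt{\frac{1-(0.7126)^{2}}{20}} \approx \sqrt{0.0246} \approx 0.157,$$
matching the reported 0.1569; the same calculation reproduces 0.1244 for $n_{2} = 30$ and 0.1034 for $n_{3} = 50$.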
The bootstrap estimator of $\hat{\phi}$ is usually denoted by $\hat{\phi}^{*}$. How accurate is $\hat{\phi}^{*}$ as an estimator of $\hat{\phi}$? To answer this question, we need the bootstrap estimated standard error, denoted by $\widehat{se}^{*}$, as a measure of statistical accuracy. To do so, we resample the data $X_{t}$ $B$ times, with $B$ ranging from 50 to 2,000, for each sample of size $n_{i}$, $i = 1, 2, 3, 4$.
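As a concrete illustration of this resampling step, the sketch below computes a bootstrap standard error $\widehat{se}^{*}$ from a single centered series with $B$ bootstrap replications. The paper only states that the data $X_{t}$ are resampled; the residual (model-based) bootstrap used here, the synthetic input series, and $B = 1{,}000$ are assumptions made for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(1)

def ls_estimate(x):
    """Least-squares estimate of the AR(1) coefficient."""
    return np.sum(x[:-1] * x[1:]) / np.sum(x[:-1] ** 2)

def bootstrap_se(x, B=1000):
    """Bootstrap standard error of phi_hat for a centered series, using a residual
    (model-based) bootstrap of the fitted AR(1) -- an assumed resampling scheme."""
    phi_hat = ls_estimate(x)
    resid = x[1:] - phi_hat * x[:-1]       # residuals of the fitted model
    resid = resid - resid.mean()           # center the residuals before resampling
    n = len(x)
    phi_star = np.empty(B)
    for b in range(B):
        eps_star = rng.choice(resid, size=n, replace=True)
        x_star = np.empty(n)
        x_star[0] = x[0]                   # start the bootstrap series at the observed value
        for t in range(1, n):
            x_star[t] = phi_hat * x_star[t - 1] + eps_star[t]
        phi_star[b] = ls_estimate(x_star)
    return phi_hat, phi_star, phi_star.std(ddof=1)

# Synthetic centered series of length 20, standing in for one of the exchange-rate samples.
x = np.zeros(20)
for t in range(1, 20):
    x[t] = 0.7 * x[t - 1] + rng.normal()
x = x - x.mean()

phi_hat, phi_star, se_star = bootstrap_se(x, B=1000)
print("phi_hat =", round(phi_hat, 4), "  bootstrap se* =", round(se_star, 4))
```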
To produce a good approximation, Davison and Hinkley [4] and Efron and Tibshirani [5] suggested using at least $B = 50$ bootstrap samples. Table 1 shows the simulation results for the various data set sizes and numbers of bootstrap samples. As we can see, increasing the number of bootstrap samples tends to yield estimates $\widehat{se}^{*}$ that are close to the estimated standard error $\widehat{se}$. For example, even for the small sample of size $n_{1} = 20$, the bootstrap shows good performance. Using $B = 50$ bootstrap samples, the resulting bootstrap standard error turned out to be 0.1368, while the estimated standard error is 0.1569; the difference between the two estimates is relatively small. Meanwhile, if we employ 1,000 and 2,000 bootstrap samples, the simulation yields $\widehat{se}^{*}$ of 0.1306 and 0.1295, respectively, versus the estimated standard error of 0.1569. This shows that the performance of the bootstrap method improves as the number of bootstrap samples increases.
Better bootstrap performance is also seen when we simulate a larger sample, as shown in Table 1. For $n_{4} = 100$ the bootstrap estimates of the standard error are 0.0962 and 0.0986 for $B = 50$ and 2,000, respectively, which may be compared with the estimated standard error of 0.0736 in Table 1.
Meanwhile, the histograms and density estimates of $\sqrt{n_{i}}\,\big(\hat{\phi}^{*}_{n_{i}} - \hat{\phi}_{n_{i}}\big)$, $i = 1, 2, 3, 4$, are presented in Fig. 1. The top row of Fig. 1 shows that the distribution of the random variable $\sqrt{n_{i}}\,\big(\hat{\phi}^{*}_{n_{i}} - \hat{\phi}_{n_{i}}\big)$ looks skewed because of the small sample sizes used, i.e., 20 and 30. Overall, Fig. 1 shows that the four resulting histograms are close to the probability density of a normal random variable. In fact, the four density estimates resemble the plot of the probability density function (pdf) of an $N\big(0, \hat{\sigma}^{2}_{n}\big)$ random variable, where
$$\hat{\sigma}^{2}_{n} = \frac{\hat{V}_{1,1,n} - 2\hat{\phi}_{n}\hat{V}_{1,2,n} + \hat{\phi}^{2}_{n}\hat{V}_{2,2,n}}{\Big(\tfrac{1}{n}\sum_{t=1}^{n}X_{t}^{2}\Big)^{2}}.$$
Again, we can see that the larger $n$, the closer the density estimate is to the pdf of an $N\big(0, \sigma^{2}\big)$ random variable. This result agrees with the results of [2] and [6].
[Fig. 1 appears here: four panels, each showing a histogram with an overlaid density estimate; vertical axes labeled "density estimate".]
Fig. 1 Histograms and plots of density estimates of 1,000 bootstrap random samples of $\sqrt{n_{i}}\,\big(\hat{\phi}^{*}_{n_{i}} - \hat{\phi}_{n_{i}}\big)$, $i = 1, 2, 3, 4$, with sample sizes $n_{1}$ = 20 (top left), $n_{2}$ = 30 (top right), $n_{3}$ = 50 (bottom left), and $n_{4}$ = 100 (bottom right).
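A display of the kind shown in Fig. 1 can be reproduced from the $B$ bootstrap replicates with a few lines of Python using matplotlib and a Gaussian kernel density estimate from scipy; the arrays `phi_star` and `phi_hat` are assumed to come from a bootstrap loop such as the one sketched earlier, not from the computations behind the actual figure.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde, norm

def plot_bootstrap_distribution(phi_star, phi_hat, n, ax):
    """Histogram and kernel density estimate of sqrt(n)*(phi_star - phi_hat),
    with a normal pdf overlaid for visual comparison."""
    z = np.sqrt(n) * (phi_star - phi_hat)
    grid = np.linspace(z.min(), z.max(), 200)
    ax.hist(z, bins=30, density=True, alpha=0.5)
    ax.plot(grid, gaussian_kde(z)(grid), label="density estimate")
    ax.plot(grid, norm.pdf(grid, scale=z.std(ddof=1)), linestyle="--", label="normal pdf")
    ax.set_xlabel("sqrt(n) * (phi* - phi_hat)")
    ax.set_ylabel("density estimate")
    ax.legend()

# Example usage with the replicates from the bootstrap sketch above:
# fig, ax = plt.subplots()
# plot_bootstrap_distribution(phi_star, phi_hat, n=20, ax=ax)
# plt.show()
```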
5. Conclusions