Paper WorldCongress Istanbul Bambang Suprihatin
Delta Method for Deriving the Consistency of Bootstrap Estimator
for Parameter of Autoregressive Model
¹Bambang Suprihatin, ²Suryo Guritno, and ³Sri Haryatmi
¹Mathematics Department, Sriwijaya University, INDONESIA
(Ph.D. student, Mathematics Department, Gadjah Mada University)
E-mail: suprihatin.b@mail.ugm.ac.id
²,³Mathematics Department, Gadjah Mada University, INDONESIA
E-mail: ²suryoguritno@ugm.ac.id, ³s_kartiko@yahoo.com
Abstract. Let {X_t, t ∈ T} be the first-order autoregressive (AR(1)) model and let X_{t_1}, X_{t_2}, …, X_{t_n} be a sample that satisfies this model, i.e. the sample follows the relation X_t = θX_{t−1} + ε_t, where {ε_t} is a zero-mean white noise process with constant variance σ². Let θ̂ be the estimator of the parameter θ. Brockwell and Davis (1991) showed that θ̂ →_p θ and √n(θ̂ − θ) →_d N(0, σ²(γ̂_n(0))⁻¹). Meanwhile, the central limit theorem asserts that the distribution of √n(X̄ − μ) converges to a normal distribution with mean 0 and variance σ² as n → ∞. In the bootstrap view, the key bootstrap principle says that the population is to the sample as the sample is to the bootstrap samples. Therefore, when we want to investigate the consistency of the bootstrap estimator of the sample mean, we investigate the distribution of √n(X̄* − X̄) in contrast to √n(X̄ − μ), where X̄* is the bootstrap version of X̄ computed from the bootstrap sample X*. Asymptotic theory of the bootstrap sample mean is useful for studying the consistency of many other statistics. Let θ̂* be the bootstrap estimator of θ̂. In this paper we study the consistency of θ̂* using the delta method. To this end, we construct a measurable map φ: ℝⁿ → ℝᵐ such that √n(θ̂* − θ̂) = √n(φ(X̄*) − φ(X̄)) →_d φ'_X̄(G) conditionally almost surely, by applying the fact that √n(X̄* − X̄) →_d G, where G is a normal distribution. We also present Monte Carlo simulations to emphasize the conclusions.
Keywords: Bootstrap, consistency, autoregressive model, delta method, Monte Carlo
simulations
1. Introduction
Studying the estimation of an unknown parameter θ involves four questions: (1) what estimator θ̂ should be used? (2) having chosen a particular θ̂, is this estimator consistent for the population parameter θ? (3) how accurate is θ̂ as an estimator of the true parameter θ? (4) most interestingly, what is the asymptotic distribution of this estimator? The bootstrap is a general methodology for answering the second and third questions, while the delta method is used to answer the last question. Consistency theory is needed to ensure that the estimator is consistent for the actual parameter, as desired.
Consider the case where the parameter θ is the population mean. The consistent estimator for θ is the sample mean θ̂ = X̄ = (1/n)∑_{i=1}^{n} X_i. The consistency theory is then extended to the consistency of the bootstrap estimator of the mean. According to the bootstrap terminology, if we want to investigate the consistency of the bootstrap estimator of the mean, we compare the distribution of √n(X̄* − X̄) with that of √n(X̄ − μ). The consistency of the bootstrap under the Kolmogorov metric is measured by

sup_x |P_F(√n(X̄ − μ) ≤ x) − P_{F_n}(√n(X̄* − X̄) ≤ x)|.   (1)
Bickel and Freedman (1981) and Singh (1981) showed that (1) converges almost surely to zero as n → ∞. The consistency of the bootstrap for the mean is a worthy tool for studying the consistency and limiting distribution of other statistics. In this paper, we study the asymptotic distribution of θ̂*, i.e. the bootstrap estimator of the parameter of the AR(1) process. Suprihatin et al. (2013) also studied the advantage of the bootstrap for estimating the median, and the results showed good accuracy.
The consistency of the bootstrap estimator of the mean is then applied to study the asymptotic distribution of θ̂*, i.e. the bootstrap estimate of the parameter of the AR(1) process, using the delta method. We describe the consistency of the bootstrap estimates of the mean and investigate the limiting distribution of θ̂*. Section 2 reviews the consistency of the bootstrap estimate of the mean under the Kolmogorov metric and describes the estimation of the autocovariance function. Section 3 deals with the asymptotic distribution of θ̂* using the delta method. Section 4 discusses the results of Monte Carlo simulations, involving bootstrap standard errors and density estimation for the mean and for θ̂*. Section 5, the last section, briefly states the conclusions of the paper.
2. Consistency of Bootstrap Estimator For Mean and Estimation of
Autocovariance Function
Let (X_1, X_2, …, X_n) be a random sample of size n from a population with common distribution F and let T(X_1, X_2, …, X_n; F) be the specified random variable or statistic of interest, possibly depending upon the unknown distribution F. Let F_n denote the empirical distribution function of X_1, X_2, …, X_n, i.e., the distribution putting probability 1/n at each of the points X_1, X_2, …, X_n. The bootstrap method is to approximate the distribution of T(X_1, X_2, …, X_n; F) under F by that of T(X*_1, X*_2, …, X*_n; F_n) under F_n, where (X*_1, X*_2, …, X*_n) denotes a bootstrap random sample of size n from F_n.
We start with the definition of consistency. Let F and G be two distribution functions on a sample space X. Let ρ(F, G) be a metric on the space of distributions on X. For X_1, X_2, …, X_n i.i.d. from F and a given functional T(X_1, X_2, …, X_n; F), let

H_n(x) = P_F(T(X_1, X_2, …, X_n; F) ≤ x),
H_Boot(x) = P_*(T(X*_1, X*_2, …, X*_n; F_n) ≤ x).

We say that the bootstrap is consistent (strongly) under ρ for T if ρ(H_n, H_Boot) →_{a.s.} 0.
Let the functional T be defined as T(X_1, X_2, …, X_n; F) = √n(X̄ − μ), where X̄ and μ are the sample mean and population mean respectively. The bootstrap version of T is T(X*_1, X*_2, …, X*_n; F_n) = √n(X̄* − X̄), where X̄* is the bootstrap sample mean. The bootstrap method is a device for estimating P_F(√n(X̄ − μ) ≤ x) by P_{F_n}(√n(X̄* − X̄) ≤ x). Singh (1981) studied the consistency of the bootstrap under the Kolmogorov metric,

K(F, G) = sup_x |F(x) − G(x)| = sup_x |P_F(√n(X̄ − μ) ≤ x) − P_{F_n}(√n(X̄* − X̄) ≤ x)|.

Meanwhile, Bickel and Freedman (1981) studied the same topic, but they used the Mallows metric. The crux result of both papers is that X̄* →_{a.s.} X̄. Suprihatin et al. (2011) emphasized this result with simulations that agree with both papers. The papers of Singh (1981) and of Bickel and Freedman (1981) have become the foundation for studying other, more complicated statistics.
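The Kolmogorov distance above can be approximated numerically: simulate the sampling distribution of √n(X̄ − μ), draw bootstrap resamples from one observed sample, and take the supremum of the difference of the two empirical CDFs over a grid. A minimal sketch in plain Python (the Exp(1) population, the grid, and the sample sizes are illustrative assumptions, not taken from the paper):

```python
import bisect
import math
import random

random.seed(0)
n, n_sims, n_boot = 100, 2000, 2000

def std_mean(sample, center):
    """sqrt(n) * (sample mean - center)."""
    return math.sqrt(len(sample)) * (sum(sample) / len(sample) - center)

# Monte Carlo approximation of P_F(sqrt(n)(Xbar - mu) <= x) for F = Exp(1), mu = 1
sims = sorted(std_mean([random.expovariate(1.0) for _ in range(n)], 1.0)
              for _ in range(n_sims))

# One observed sample; bootstrap distribution of sqrt(n)(Xbar* - Xbar)
x = [random.expovariate(1.0) for _ in range(n)]
xbar = sum(x) / n
boot = sorted(std_mean(random.choices(x, k=n), xbar) for _ in range(n_boot))

# Kolmogorov (sup) distance between the two empirical CDFs over a grid
grid = [i / 50 - 4.0 for i in range(401)]
K = max(abs(bisect.bisect_right(sims, t) / n_sims
            - bisect.bisect_right(boot, t) / n_boot) for t in grid)
print(round(K, 3))
```

For moderate n the printed distance is small, in line with the almost-sure convergence shown by Singh (1981) and Bickel and Freedman (1981).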
Suppose we have observed values X_1, X_2, …, X_n from the stationary AR(1) process. Natural estimators for the mean, the autocovariance function and the autocorrelation function of the series are

μ̂_n = X̄_n = (1/n)∑_{t=1}^{n} X_t,
γ̂_n(h) = (1/n)∑_{t=1}^{n−h} (X_{t+h} − X̄_n)(X_t − X̄_n),
ρ̂_n(h) = γ̂_n(h)/γ̂_n(0),

respectively. If the series X_t is replaced by the centered series X_t − μ_X, then the autocovariance function does not change. Therefore, in studying the asymptotic properties of the sample autocovariance function γ̂_n(h), it is no loss of generality to assume that μ_X = 0. The sample autocovariance function can be written as
γ̂_n(h) = (1/n)∑_{t=1}^{n−h} X_{t+h}X_t − X̄_n (1/n)∑_{t=1}^{n−h} X_{t+h} − X̄_n (1/n)∑_{t=1}^{n−h} X_t + ((n−h)/n)(X̄_n)².   (2)

Under some conditions (see, e.g., van der Vaart (2012)), the last three terms in (2) are of the order O_p(1/n). Thus, under the assumption that μ_X = 0, we can write (2) in the simpler form

γ̂_n(h) = (1/n)∑_{t=1}^{n−h} X_{t+h}X_t + O_p(1/n).
The asymptotic behaviour of the sequence √n(γ̂_n(h) − γ_X(h)) therefore depends only on n⁻¹∑_{t=1}^{n−h} X_{t+h}X_t. Note that changing the number of summands from n − h to n is asymptotically negligible, so that, for simplicity of notation, we can equivalently study the average

γ̃_n(h) = (1/n)∑_{t=1}^{n} X_{t+h}X_t.
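The mean-corrected estimator γ̂_n(h) and its leading term (1/n)∑_{t=1}^{n−h} X_{t+h}X_t, which differ by O_p(1/n) when μ_X = 0, can be compared directly on simulated data. A minimal sketch in plain Python (the AR(1) parameters and the helper names are illustrative):

```python
import random

def sample_acov(x, h):
    """gamma-hat_n(h): mean-corrected sample autocovariance, divisor n."""
    n = len(x)
    xbar = sum(x) / n
    return sum((x[t + h] - xbar) * (x[t] - xbar) for t in range(n - h)) / n

def raw_acov(x, h):
    """(1/n) * sum_{t=1}^{n-h} X_{t+h} X_t, the leading term when mu_X = 0."""
    n = len(x)
    return sum(x[t + h] * x[t] for t in range(n - h)) / n

# simulate a centered AR(1) series, X_t = theta X_{t-1} + eps_t
random.seed(1)
theta, n = 0.5, 5000
x = [0.0]
for _ in range(n - 1):
    x.append(theta * x[-1] + random.gauss(0.0, 1.0))

print(sample_acov(x, 1), raw_acov(x, 1))  # close: they differ by O_p(1/n)
```

For this model the population value is γ_X(1) = θ/(1 − θ²), which both estimates approach for large n.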
Both γ̂_n(h) and γ̃_n(h) are unbiased estimators of E(X_{t+h}X_t) = γ_X(h) under the condition that μ_X = 0. Their asymptotic distribution can then be derived by applying a central limit theorem to the average Ȳ_n of the variables Y_t = X_{t+h}X_t. The asymptotic variance takes the form ∑_g γ_Y(g) and in general depends on fourth-order moments of the type E(X_{t+g+h}X_{t+g}X_{t+h}X_t) as well as on the autocovariance function of the series X_t. Van der Vaart (2012) showed that, for the series Y_t = X_{t+h}X_t, this asymptotic variance can be written as

V_{h,h} = κ₄(ε)γ_X(h)² + ∑_g γ_X(g)² + ∑_g γ_X(g+h)γ_X(g−h),   (3)

where κ₄(ε) = E(ε₁⁴)/(E(ε₁²))² − 3 is the fourth cumulant of ε_t. The following theorem gives the asymptotic distribution of the sequence √n(γ̂_n(h) − γ_X(h)).
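For a Gaussian AR(1) series the fourth cumulant κ₄(ε) vanishes and γ_X(g) = θ^|g| γ_X(0), so (3) can be checked numerically. The following sketch in plain Python (parameter values and the truncation of the sum over g are illustrative assumptions) compares formula (3) for h = 0 with the simulated variance of √n(γ̂_n(0) − γ_X(0)):

```python
import random

def acov0(x):
    """gamma-hat_n(0): mean-corrected sample variance with divisor n."""
    n = len(x)
    xbar = sum(x) / n
    return sum((v - xbar) ** 2 for v in x) / n

def ar1(n, theta, rng):
    """Simulate X_t = theta X_{t-1} + eps_t with standard Gaussian noise."""
    x = [rng.gauss(0.0, 1.0)]
    for _ in range(n - 1):
        x.append(theta * x[-1] + rng.gauss(0.0, 1.0))
    return x

random.seed(4)
theta, n, reps = 0.5, 500, 400
gamma0 = 1.0 / (1.0 - theta ** 2)  # gamma_X(0) for unit-variance Gaussian noise

# formula (3) with h = 0: kappa_4(eps) = 0 for Gaussian eps, gamma_X(g) = theta^|g| gamma0
v_formula = 2 * sum((theta ** abs(g) * gamma0) ** 2 for g in range(-50, 51))

# simulated variance of sqrt(n)(gamma-hat_n(0) - gamma_X(0))
stats = [n ** 0.5 * (acov0(ar1(n, theta, random)) - gamma0) for _ in range(reps)]
m = sum(stats) / reps
v_sim = sum((s - m) ** 2 for s in stats) / (reps - 1)
print(v_formula, v_sim)  # the simulated variance is roughly V_{0,0}
```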
Theorem 1 If X_t = μ + ∑_{j=−∞}^{∞} ψ_j ε_{t−j} holds for an i.i.d. sequence ε_t with mean zero and E(ε_t⁴) < ∞, and for constants ψ_j with ∑_j |ψ_j| < ∞, then √n(γ̂_n(h) − γ_X(h)) →_d N(0, V_{h,h}).

3. Asymptotic Distribution of θ̂* Using Delta Method

Since φ is continuously differentiable at θ, for every η > 0 there is a δ > 0 such that ‖φ'_{θ̄} − φ'_θ‖ ≤ η whenever ‖θ̄ − θ‖ ≤ δ. If √n‖θ̂*_n − θ̂_n‖ ≤ M and ‖θ̂_n − θ‖ ≤ δ, then, for some θ̄_n on the segment between θ̂_n and θ̂*_n,

R_n := ‖√n(φ(θ̂*_n) − φ(θ̂_n)) − φ'_θ √n(θ̂*_n − θ̂_n)‖ = ‖(φ'_{θ̄_n} − φ'_θ)√n(θ̂*_n − θ̂_n)‖ ≤ ηM.

Fix a number ε > 0 and a large number M. For η sufficiently small to ensure that ηM < ε,

P(R_n > ε | P̂_n) ≤ P(√n‖θ̂*_n − θ̂_n‖ > M or ‖θ̂_n − θ‖ > δ | P̂_n).   (4)

Since θ̂_n →_{a.s.} θ, the right side of (4) converges almost surely to P(‖T‖ ≥ M) for every continuity point M of ‖T‖. This can be made arbitrarily small by the choice of M. Conclude that the left side of (4) converges to zero almost surely. The theorem follows by an application of Slutsky's lemma. ■
For the AR(1) process, from the Yule-Walker equation we obtain the moment estimator θ̂ = ρ̂₁, where ρ̂₁ is the lag-1 sample autocorrelation

ρ̂₁ = ∑_{t=2}^{n} X_{t−1}X_t / ∑_{t=1}^{n} X_t².   (5)
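Estimator (5) is straightforward to compute. A minimal sketch in plain Python (the simulated series and the function name are illustrative; the standard error formula √((1 − θ̂²)/n) is the asymptotic one attributed to Davison and Hinkley (2006)):

```python
import math
import random

def yule_walker_lag1(x):
    """rho-hat_1 = sum_{t=2}^n X_{t-1} X_t / sum_{t=1}^n X_t**2, as in (5)."""
    num = sum(x[t - 1] * x[t] for t in range(1, len(x)))
    den = sum(v * v for v in x)
    return num / den

# illustrative AR(1) data with known theta
random.seed(2)
theta, n = -0.448, 5000
x = [0.0]
for _ in range(n - 1):
    x.append(theta * x[-1] + random.gauss(0.0, 1.0))

theta_hat = yule_walker_lag1(x)
se_hat = math.sqrt((1 - theta_hat ** 2) / n)  # asymptotic standard error
print(theta_hat, se_hat)
```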
According to Davison and Hinkley (2006), the estimated standard error of the parameter θ is ŝe(θ̂) = √((1 − θ̂²)/n). Meanwhile, the bootstrap version of the standard error was introduced by Efron and Tibshirani (1986). In Section 4 we present results of Monte Carlo simulations comparing these two standard errors and give brief comments. In Suprihatin et al. (2012) we constructed a measurable function φ as follows. Equation (5) can be written as
ρ̂₁ = ∑_{t=2}^{n} X_{t−1}(θX_{t−1} + ε_t) / ∑_{t=1}^{n} X_t²
   = (θ∑_{t=2}^{n} X²_{t−1} + ∑_{t=2}^{n} X_{t−1}ε_t) / ∑_{t=1}^{n} X_t²
   = (θ(∑_{t=2}^{n+1} X²_{t−1} − X_n²) + ∑_{t=2}^{n} X_{t−1}ε_t) / ∑_{t=1}^{n} X_t²
   = ((θ/n)∑_{t=1}^{n} X_t² − (θ/n)X_n² + (1/n)∑_{t=2}^{n} X_{t−1}ε_t) / ((1/n)∑_{t=1}^{n} X_t²).
Brockwell and Davis (1991) have shown that ρ̂₁ is a consistent estimator of the true parameter θ = ρ₁. The Kolmogorov SLLN asserts that (1/n)∑_{t=2}^{n} X_{t−1}ε_t →_{a.s.} E(X_{t−1}ε_t). Since X_{t−1} is independent of ε_t, we have E(X_{t−1}ε_t) = 0. Hence (1/n)∑_{t=2}^{n} X_{t−1}ε_t →_{a.s.} 0. By applying Slutsky's lemma, the last display is approximated by

ρ̃₁ = (θ·\overline{X²} − (θ/n)X_n²) / \overline{X²},

where \overline{X²} = (1/n)∑_{t=1}^{n} X_t². Thus, for n → ∞ we obtain θ̂ →_{a.s.} ρ̃₁. We see that ρ̃₁ equals φ(\overline{X²}) for the function

φ(x) = (θx − (θ/n)X_n²)/x.

Since φ is continuous, it is measurable. Suppose that ρ̂₁ is based on a sample from a distribution with finite first four moments of ε_t.
By the central limit theorem and applying Theorem 1, we conclude that √n(\overline{X²} − γ_X(0)) →_d N(0, V_{0,0}), where V_{0,0} = κ₄(ε)γ_X(0)² + 2∑_g γ_X(g)², as in (3) for h = 0. The map φ is differentiable at the point γ_X(0), with derivative φ'_{γ_X(0)} = θX_n²/(nγ_X(0)²). Theorem 2 says that

√n(φ(\overline{X²}) − φ(γ_X(0))) = φ'_{γ_X(0)} √n((1/n)∑_{t=1}^{n} X_t² − γ_X(0)) + o_p(1)
   = (θX_n²/(nγ_X(0)²)) √n((1/n)∑_{t=1}^{n} X_t² − γ_X(0)) + o_p(1).
In view of Theorem 2, if T possesses the normal distribution with mean 0 and variance V_{0,0}, then

√n(θ̂ − θ) = √n(φ(\overline{X²}) − φ(γ_X(0))) →_d N(0, (θX_n²/(nγ_X(0)²))² V_{0,0}).

Meanwhile, the bootstrap version of θ̂, denoted by θ̂*, can be obtained as follows [see, e.g., Efron and Tibshirani (1986) and Freedman (1985)]:
1. Define the residuals ε̂_t = X_t − θ̂X_{t−1} for t = 2, 3, …, n.
2. A bootstrap sample X*_1, X*_2, …, X*_n is created by sampling ε*_2, ε*_3, …, ε*_n with replacement from the residuals, letting X*_1 = X_1 serve as an initial bootstrap observation and setting X*_t = θ̂X*_{t−1} + ε*_t for t = 2, 3, …, n.
3. Finally, center the bootstrap time series X*_1, X*_2, …, X*_n, i.e. replace X*_i by X*_i − X̄*, where X̄* = (1/n)∑_{t=1}^{n} X*_t. Using the plug-in principle, we obtain the bootstrap estimator

θ̂* = ρ̂*₁ = ∑_{t=2}^{n} X*_{t−1}X*_t / ∑_{t=1}^{n} (X*_t)²,

computed from the sample X*_1, X*_2, …, X*_n.
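The three steps above can be sketched in plain Python (helper names and parameter values are illustrative): step 1 computes the residuals, step 2 resamples them and rebuilds the series recursively with θ̂, and step 3 centers the series and recomputes the Yule-Walker formula:

```python
import random

def yule_walker_lag1(x):
    """rho-hat_1 as in (5)."""
    num = sum(x[t - 1] * x[t] for t in range(1, len(x)))
    return num / sum(v * v for v in x)

def ar1_residual_bootstrap(x, theta_hat, rng):
    """One bootstrap replicate theta-hat* following steps 1-3."""
    n = len(x)
    # step 1: residuals eps-hat_t = X_t - theta-hat X_{t-1}, t = 2, ..., n
    resid = [x[t] - theta_hat * x[t - 1] for t in range(1, n)]
    # step 2: resample residuals with replacement, rebuild with X*_1 = X_1
    xs = [x[0]]
    for _ in range(n - 1):
        xs.append(theta_hat * xs[-1] + rng.choice(resid))
    # step 3: center the bootstrap series, then re-estimate by (5)
    m = sum(xs) / n
    xs = [v - m for v in xs]
    return yule_walker_lag1(xs)

# illustrative data from a known AR(1) model
random.seed(3)
theta, n = -0.448, 200
x = [0.0]
for _ in range(n - 1):
    x.append(theta * x[-1] + random.gauss(0.0, 1.0))
theta_hat = yule_walker_lag1(x)

reps = [ar1_residual_bootstrap(x, theta_hat, random) for _ in range(500)]
mean_b = sum(reps) / len(reps)
se_boot = (sum((r - mean_b) ** 2 for r in reps) / (len(reps) - 1)) ** 0.5
print(mean_b, se_boot)  # bootstrap mean near theta-hat; se_boot is the bootstrap standard error
```

The standard deviation of the replicates is the bootstrap standard error reported in Section 4.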
Analogously to the previous discussion, we obtain the bootstrap counterpart of ρ̃₁, that is, the measurable map

ρ̃*₁ = (θ̂·\overline{X*²} − (θ̂/n)X*_n²) / \overline{X*²}.

Thus, in view of Theorem 3, we conclude that ρ̃*₁ converges to ρ̃₁ conditionally almost surely. By the Glivenko-Cantelli lemma and applying the plug-in principle, we obtain √n(\overline{X*²} − γ̂_n(0)) →_d N(0, V*_{0,0}) and

√n(θ̂* − θ̂) = √n(φ(\overline{X*²}) − φ(γ̂_n(0))) →_d N(0, (θ̂X*_n²/(nγ̂_n(0)²))² V*_{0,0}).
4. Results of Monte Carlo Simulations
The simulation is conducted using S-Plus, and the sample consists of 50 time series observations of the exchange rate of the US dollar against the Indonesian rupiah. The data are taken from the official website of Bank Indonesia, http://www.bi.go.id, for fifty days of transactions in March and April 2012. Suprihatin et al. (2012) identified that the time series satisfies an AR(1) process, so that the data follow the equation

X_t = θX_{t−1} + ε_t, t = 2, 3, …, 50,

where ε_t ~ WN(0, σ²). The simulation yields the estimate θ̂ = −0.448 with standard error 0.1999. To produce a good approximation, Efron and Tibshirani (1986) and Davison and Hinkley (2006) suggest using at least B = 50 resamples. Bootstrap versions of the standard error using B = 25, 50, 100, 500, 1,000 and 2,000 bootstrap samples are presented in Table 1.
Table 1 Estimates for Standard Errors of θ̂* for Various B

B            25       50       100      500      1,000    2,000
se_F̂(θ̂*)    0.2005   0.1981   0.1997   0.1991   0.1972   0.1964
From Table 1 we can see that the bootstrap standard errors tend to decrease as the size B increases and are close to the value 0.1999 (the actual standard error). These results show that the bootstrap gives a good estimate. Meanwhile, the histogram and density estimate of θ̂* are presented in Figure 1. From Figure 1 we can see that the resulting histogram is close to the normal density. Of course, this result agrees with the results of Freedman (1985) and Bose (1988).

Figure 1 Histogram and Density Estimate of Bootstrap Estimator θ̂*
5. Conclusions
A number of points arise from the study in Sections 2, 3, and 4, among which we state the following.
1. Consider an AR(1) process X_t = θX_{t−1} + ε_t with Yule-Walker estimator θ̂ = ρ̂₁. We have shown that θ̂ is a consistent estimator for the true parameter θ = ρ₁. By using the delta method, ρ̃*₁ is also a consistent estimator for ρ̃₁, and θ̂ →_p ρ̃₁ for n → ∞. Moreover, we obtain the crux result that

√n(φ(\overline{X*²}) − φ(γ̂_n(0))) →_d N(0, (θ̂X*_n²/(nγ̂_n(0)²))² V*_{0,0}).

2. The results of the Monte Carlo simulations show that the bootstrap estimators are good approximations, as represented by their standard errors and density estimate plots.
REFERENCES
[1] BICKEL, P. J. AND FREEDMAN, D. A. Some asymptotic theory for the
bootstrap, Ann. Statist., 9, 1196-1217, 1981.
[2] BOSE, A. Edgeworth correction by bootstrap in autoregressions, Ann. Statist.,
16, 1709-1722, 1988.
[3] BROCKWELL, P. J. AND DAVIS, R. A. Time Series: Theory and Methods,
Springer, New York, 1991.
[4] DAVISON, A. C. AND HINKLEY, D. V. Bootstrap Methods and Their
Application, Cambridge University Press, Cambridge, 2006.
[5] EFRON, B. AND TIBSHIRANI, R. Bootstrap methods for standard errors,
confidence intervals, and other measures of statistical accuracy, Statistical
Science, 1, 54-77, 1986.
[6] FREEDMAN, D. A. On bootstrapping two-stage least-squares estimates in
stationary linear models, Ann. Statist., 12, 827-842, 1985.
[7] SINGH, K. On the asymptotic accuracy of Efron’s bootstrap, Ann. Statist.,
9, 1187-1195, 1981.
[8] SUPRIHATIN, B., GURITNO, S., AND HARYATMI, S. Consistency of the
bootstrap estimator for mean under Kolmogorov metric and its implementation
on delta method, Proc. of The 6th SEAMS-GMU Conference, 2011.
[9] VAN DER VAART, A. W. Asymptotic Statistics, Cambridge University Press,
Cambridge, 2000.
[10] VAN DER VAART, A. W. Lecture Notes on Time Series, Vrije Universiteit,
Amsterdam, 2012.