
$$\tilde{\gamma}_n(h) = \frac{1}{n}\sum_{t=1}^{n-h} X_{t+h} X_t .$$

Both $\hat{\gamma}_n(h)$ and $\tilde{\gamma}_n(h)$ are asymptotically unbiased estimators of $\gamma_X(h) = E\,X_{t+h}X_t$, under the condition that $\mu_X = 0$. Their asymptotic distribution can then be derived by applying a central limit theorem to the averages $\bar{Y}_n$ of the variables $Y_t = X_{t+h}X_t$. The asymptotic variance takes the form $\sum_g \gamma_Y(g)$ and in general depends on fourth-order moments of the type $E\,X_{t+g}X_{t+g+h}X_t X_{t+h}$ as well as on the autocovariance function of the series $X_t$. Van der Vaart [10] showed that this sum of the autocovariances of the series $Y_t = X_{t+h}X_t$ can be written as

$$V_{h,h} = \kappa_4\,\gamma_X(h)^2 + \sum_g \gamma_X(g)^2 + \sum_g \gamma_X(g+h)\,\gamma_X(g-h), \qquad (3)$$

where $\kappa_4 = E\varepsilon_1^4/(E\varepsilon_1^2)^2 - 3$ is the fourth cumulant of $\varepsilon_t$. More generally, [10] gives the asymptotic covariance between the sample autocovariances at lags $g$ and $h$ as $V_{g,h} = \kappa_4\,\gamma_X(g)\,\gamma_X(h) + \sum_t \gamma_X(t)\,\gamma_X(t+g-h) + \sum_t \gamma_X(t+g)\,\gamma_X(t-h)$; the off-diagonal entries of this form are needed for the bivariate limit in Section 3. The following theorem gives the asymptotic distribution of the sequence $\sqrt{n}\,(\hat{\gamma}_n(h) - \gamma_X(h))$.

Theorem 1. If $X_t = \mu + \sum_j \psi_j \varepsilon_{t-j}$ holds for an i.i.d. sequence $\varepsilon_t$ with mean zero and $E\varepsilon_t^4 < \infty$, and numbers $\psi_j$ with $\sum_j |\psi_j| < \infty$, then $\sqrt{n}\,(\hat{\gamma}_n(h) - \gamma_X(h)) \xrightarrow{d} N(0, V_{h,h})$.
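As an illustration that is not part of the original text, the following minimal Python sketch checks Theorem 1 numerically for a Gaussian AR(1) process, for which $\kappa_4 = 0$ and $\gamma_X(h) = \sigma_\varepsilon^2\,\theta^{|h|}/(1-\theta^2)$. It compares $V_{h,h}$ from (3), with the series over $g$ truncated, against the Monte Carlo variance of $\sqrt{n}\,(\tilde{\gamma}_n(h) - \gamma_X(h))$. All parameter values are illustrative assumptions.

```python
import numpy as np

# Illustrative parameters (not from the paper).
theta, sigma2, n, h, reps = 0.5, 1.0, 1000, 1, 1000
rng = np.random.default_rng(0)

def gamma_X(lag):
    # True autocovariance of a stationary AR(1): sigma^2 * theta^|lag| / (1 - theta^2).
    return sigma2 * theta ** abs(lag) / (1.0 - theta ** 2)

# V_{h,h} from (3); kappa_4 = 0 for Gaussian innovations, and the series over g
# is truncated, which is harmless because gamma_X decays geometrically.
V_hh = sum(gamma_X(g) ** 2 + gamma_X(g + h) * gamma_X(g - h)
           for g in range(-100, 101))

draws = np.empty(reps)
for r in range(reps):
    eps = rng.normal(0.0, np.sqrt(sigma2), n + 100)
    x = np.empty(n + 100)
    x[0] = eps[0]
    for t in range(1, n + 100):
        x[t] = theta * x[t - 1] + eps[t]
    x = x[100:]                                # burn-in toward stationarity
    gamma_hat = np.sum(x[h:] * x[:-h]) / n     # uncentred sample autocovariance
    draws[r] = np.sqrt(n) * (gamma_hat - gamma_X(h))

print(V_hh, draws.var())  # the two values should be close for large n
```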

3. Asymptotic Distribution of the Bootstrap Estimate for the Parameter of an AR(1) Process Using the Delta Method

The delta method uses a Taylor expansion to approximate a random vector of the form $\phi(T_n)$ by the polynomial $\phi(\theta) + \phi'_\theta(T_n - \theta) + \cdots$ in $T_n - \theta$. This method is useful for deducing the limit law of $\phi(T_n) - \phi(\theta)$ from that of $T_n - \theta$, as guaranteed by the next theorem.

Theorem 2. Let $\phi : D_\phi \subseteq \mathbb{R}^k \to \mathbb{R}^m$ be a map defined on a subset of $\mathbb{R}^k$ and differentiable at $\theta$. Let $T_n$ be random vectors taking their values in the domain of $\phi$. If $r_n(T_n - \theta) \xrightarrow{d} T$ for numbers $r_n \to \infty$, then $r_n(\phi(T_n) - \phi(\theta)) \xrightarrow{d} \phi'_\theta(T)$. Moreover, the difference between $r_n(\phi(T_n) - \phi(\theta))$ and $\phi'_\theta(r_n(T_n - \theta))$ converges to zero in probability.

Assume that $\hat{\theta}_n$ is a statistic and that $\phi$ is a given differentiable map. The bootstrap estimator for the distribution of $\phi(\hat{\theta}_n) - \phi(\theta)$ is $\phi(\hat{\theta}_n^*) - \phi(\hat{\theta}_n)$. If the bootstrap is consistent for estimating the distribution of $\sqrt{n}\,(\hat{\theta}_n - \theta)$, then it is also consistent for estimating the distribution of $\sqrt{n}\,(\phi(\hat{\theta}_n) - \phi(\theta))$, as stated in the following theorem, whose proof is due to [10].

Theorem 3 (Delta Method for the Bootstrap). Let $\phi : \mathbb{R}^k \to \mathbb{R}^m$ be a measurable map defined and continuously differentiable in a neighborhood of $\theta$. Let $\hat{\theta}_n$ be random vectors taking their values in the domain of $\phi$ that converge almost surely to $\theta$. If $\sqrt{n}\,(\hat{\theta}_n - \theta) \xrightarrow{d} T$ and $\sqrt{n}\,(\hat{\theta}_n^* - \hat{\theta}_n) \xrightarrow{d} T$ conditionally almost surely, then both $\sqrt{n}\,(\phi(\hat{\theta}_n) - \phi(\theta)) \xrightarrow{d} \phi'_\theta(T)$ and $\sqrt{n}\,(\phi(\hat{\theta}_n^*) - \phi(\hat{\theta}_n)) \xrightarrow{d} \phi'_\theta(T)$ conditionally almost surely.

Proof. By the mean value theorem, the difference $\phi(\hat{\theta}_n^*) - \phi(\hat{\theta}_n)$ can be written as $\phi'_{\tilde{\theta}_n}(\hat{\theta}_n^* - \hat{\theta}_n)$ for a point $\tilde{\theta}_n$ between $\hat{\theta}_n^*$ and $\hat{\theta}_n$, provided the latter two points lie in the ball around $\theta$ in which $\phi$ is continuously differentiable. By the continuity of the derivative, for every $\eta > 0$ there exists a constant $\delta > 0$ such that $\|\phi'_{\tilde{\theta}} h - \phi'_\theta h\| \le \eta \|h\|$ for every $h$ and every $\|\tilde{\theta} - \theta\| \le \delta$. If $n$ is sufficiently large, $\delta$ sufficiently small, $\sqrt{n}\,\|\hat{\theta}_n^* - \hat{\theta}_n\| \le M$, and $\|\hat{\theta}_n - \theta\| \le \delta$, then

$$R_n := \big\| \sqrt{n}\,\big(\phi(\hat{\theta}_n^*) - \phi(\hat{\theta}_n)\big) - \phi'_\theta\big(\sqrt{n}\,(\hat{\theta}_n^* - \hat{\theta}_n)\big) \big\| = \big\| \big(\phi'_{\tilde{\theta}_n} - \phi'_\theta\big)\,\sqrt{n}\,(\hat{\theta}_n^* - \hat{\theta}_n) \big\| \le \eta M .$$

Fix a number $\varepsilon > 0$ and a large number $M$. For $\eta$ sufficiently small to ensure that $\eta M < \varepsilon$,

$$P\big( R_n > \varepsilon \mid \hat{P}_n \big) \le P\big( \sqrt{n}\,\|\hat{\theta}_n^* - \hat{\theta}_n\| > M \ \text{or} \ \|\hat{\theta}_n - \theta\| > \delta \mid \hat{P}_n \big). \qquad (4)$$

Since $\hat{\theta}_n \to \theta$ almost surely and $\sqrt{n}\,(\hat{\theta}_n^* - \hat{\theta}_n) \xrightarrow{d} T$ conditionally almost surely, the right side of (4) converges almost surely to $P(\|T\| > M)$ for every continuity point $M$ of the distribution of $\|T\|$. This can be made arbitrarily small by the choice of $M$. Conclude that the left side of (4) converges to zero almost surely, and hence

$$\sqrt{n}\,\big(\phi(\hat{\theta}_n^*) - \phi(\hat{\theta}_n)\big) - \phi'_\theta\big(\sqrt{n}\,(\hat{\theta}_n^* - \hat{\theta}_n)\big) \xrightarrow{a.s.} 0 .$$

By the assumption that $\sqrt{n}\,(\hat{\theta}_n^* - \hat{\theta}_n) \xrightarrow{d} T$ and because the matrix $\phi'_\theta$ is continuous, the continuous-mapping theorem yields $\phi'_\theta(\sqrt{n}\,(\hat{\theta}_n^* - \hat{\theta}_n)) \xrightarrow{d} \phi'_\theta(T)$. Applying Slutsky's lemma, we obtain $\sqrt{n}\,(\phi(\hat{\theta}_n^*) - \phi(\hat{\theta}_n)) = \phi'_\theta(\sqrt{n}\,(\hat{\theta}_n^* - \hat{\theta}_n)) + o_{a.s.}(1)$, and together with the earlier conclusion this gives $\sqrt{n}\,(\phi(\hat{\theta}_n^*) - \phi(\hat{\theta}_n)) \xrightarrow{d} \phi'_\theta(T)$, completing the proof. ■

The moment estimator for the stationary AR(1) process is obtained from the Yule-Walker equation, i.e. $\hat{\theta} = \hat{\rho}_{n,1}$, where $\hat{\rho}_{n,1}$ is the lag-1 sample autocorrelation

$$\hat{\rho}_{n,1} = \frac{\sum_{t=2}^{n} X_t X_{t-1}}{\sum_{t=1}^{n} X_t^2}. \qquad (5)$$

According to Davison and Hinkley [4], the estimated standard error of the parameter $\hat{\theta}$ is $\sqrt{(1-\hat{\theta}^2)/n}$. Meanwhile, the bootstrap version of the standard error was introduced in [5].
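The estimator (5) and the Davison and Hinkley standard error are straightforward to compute. The following sketch assumes a mean-zero series `x`; the function name is ours, not from [4].

```python
import numpy as np

def yule_walker_lag1(x):
    """Moment (Yule-Walker) estimator (5) and the Davison-Hinkley
    standard error sqrt((1 - theta_hat^2) / n).  The name of this
    helper is illustrative, not taken from the references."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    theta_hat = np.sum(x[1:] * x[:-1]) / np.sum(x ** 2)  # eq. (5)
    se = np.sqrt((1.0 - theta_hat ** 2) / n)
    return theta_hat, se
```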
In Section 4 we present the results of Monte Carlo simulations comparing these two types of standard errors and give brief comments. In accordance with Theorem 3, we must construct a measurable function $\phi$ that is defined and continuously differentiable in a neighborhood of $\theta$. Meanwhile, the bootstrap version of $\hat{\theta}$, denoted by $\hat{\theta}^*$, can be obtained as follows (see, e.g., [5, 6]); a Python sketch of this procedure appears after the derivation below.

1. Define the residuals $\hat{\varepsilon}_t = X_t - \hat{\theta} X_{t-1}$ for $t = 2, 3, \ldots, n$.
2. A bootstrap sample $X_1^*, X_2^*, \ldots, X_n^*$ is created by sampling $\varepsilon_2^*, \varepsilon_3^*, \ldots, \varepsilon_n^*$ with replacement from the residuals, letting $X_1^* = X_1$ be the initial bootstrap value and $X_t^* = \hat{\theta} X_{t-1}^* + \varepsilon_t^*$ for $t = 2, 3, \ldots, n$.
3. Finally, center the bootstrap time series $X_1^*, X_2^*, \ldots, X_n^*$, i.e. replace $X_i^*$ by $X_i^* - \bar{X}^*$, where $\bar{X}^* = \frac{1}{n}\sum_{t=1}^{n} X_t^*$.

Using the plug-in principle, we obtain the bootstrap estimator $\hat{\theta}^* = \hat{\rho}_{n,1}^* = \sum_{t=2}^{n} X_t^* X_{t-1}^* \big/ \sum_{t=1}^{n} (X_t^*)^2$, computed from the sample $X_1^*, X_2^*, \ldots, X_n^*$.

The function $\phi$ and the asymptotic behaviour of $\hat{\rho}_{n,1}$ can be described as follows. Since $\hat{\rho}_{n,1} = \sum_{t=2}^{n} X_t X_{t-1} \big/ \sum_{t=1}^{n} X_t^2$, we can write this expression as

$$\hat{\rho}_{n,1} = \phi\Big( \frac{1}{n}\sum_{t=1}^{n} X_t^2,\ \frac{1}{n}\sum_{t=2}^{n} X_t X_{t-1} \Big),$$

for the function $\phi : \mathbb{R}^2 \to \mathbb{R}$ with $\phi(u, v) = v/u$. According to Theorem 1, the multivariate central limit theorem for $\big( \frac{1}{n}\sum_{t=1}^{n} X_t^2,\ \frac{1}{n}\sum_{t=2}^{n} X_t X_{t-1} \big)^T$ is

$$\sqrt{n}\left( \begin{pmatrix} \frac{1}{n}\sum_{t=1}^{n} X_t^2 \\ \frac{1}{n}\sum_{t=2}^{n} X_t X_{t-1} \end{pmatrix} - \begin{pmatrix} \gamma_X(0) \\ \gamma_X(1) \end{pmatrix} \right) \xrightarrow{d} N\left( \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} V_{0,0} & V_{0,1} \\ V_{1,0} & V_{1,1} \end{pmatrix} \right). \qquad (6)$$

The map $\phi$ is differentiable with derivative matrix

$$\phi'_{(u,v)} = \Big( -\frac{v}{u^2},\ \frac{1}{u} \Big),$$

and $\phi(\gamma_X(0), \gamma_X(1)) = \gamma_X(1)/\gamma_X(0) = \rho_X(1)$, which equals $\theta$ for the AR(1) model. By applying Theorem 2,

$$\sqrt{n}\left( \phi\Big( \frac{1}{n}\sum_{t=1}^{n} X_t^2,\ \frac{1}{n}\sum_{t=2}^{n} X_t X_{t-1} \Big) - \phi\big( \gamma_X(0), \gamma_X(1) \big) \right) = \phi'_{(\gamma_X(0),\,\gamma_X(1))}\left( \sqrt{n}\Big( \frac{1}{n}\sum_{t=1}^{n} X_t^2 - \gamma_X(0) \Big),\ \sqrt{n}\Big( \frac{1}{n}\sum_{t=2}^{n} X_t X_{t-1} - \gamma_X(1) \Big) \right)^T + o_P(1)$$

$$= -\frac{\gamma_X(1)}{\gamma_X(0)^2}\, \sqrt{n}\Big( \frac{1}{n}\sum_{t=1}^{n} X_t^2 - \gamma_X(0) \Big) + \frac{1}{\gamma_X(0)}\, \sqrt{n}\Big( \frac{1}{n}\sum_{t=2}^{n} X_t X_{t-1} - \gamma_X(1) \Big) + o_P(1).$$

In view of Theorem 2, if $(Z_1, Z_2)^T$ possesses the normal distribution in (6), then

$$-\frac{\gamma_X(1)}{\gamma_X(0)^2}\, \sqrt{n}\Big( \frac{1}{n}\sum_{t=1}^{n} X_t^2 - \gamma_X(0) \Big) + \frac{1}{\gamma_X(0)}\, \sqrt{n}\Big( \frac{1}{n}\sum_{t=2}^{n} X_t X_{t-1} - \gamma_X(1) \Big) \xrightarrow{d} -\frac{\gamma_X(1)}{\gamma_X(0)^2}\, Z_1 + \frac{1}{\gamma_X(0)}\, Z_2 \sim N(0, \sigma^2),$$

where

$$\sigma^2 = \mathrm{Var}\Big( -\frac{\gamma_X(1)}{\gamma_X(0)^2}\, Z_1 + \frac{1}{\gamma_X(0)}\, Z_2 \Big) = \frac{\gamma_X(1)^2}{\gamma_X(0)^4}\, \mathrm{Var}(Z_1) + \frac{1}{\gamma_X(0)^2}\, \mathrm{Var}(Z_2) - \frac{2\,\gamma_X(1)}{\gamma_X(0)^3}\, \mathrm{Cov}(Z_1, Z_2) = \frac{\gamma_X(1)^2}{\gamma_X(0)^4}\, V_{0,0} + \frac{1}{\gamma_X(0)^2}\, V_{1,1} - \frac{2\,\gamma_X(1)}{\gamma_X(0)^3}\, V_{0,1}.$$

Thus, by Theorem 2 we conclude that

$$\sqrt{n}\left( \phi\Big( \frac{1}{n}\sum_{t=1}^{n} X_t^2,\ \frac{1}{n}\sum_{t=2}^{n} X_t X_{t-1} \Big) - \rho_X(1) \right) \xrightarrow{d} N(0, \sigma^2).$$

Similarly, applying the plug-in principle, Theorem 3 gives the bootstrap version of the above result,

$$\sqrt{n}\left( \begin{pmatrix} \frac{1}{n}\sum_{t=1}^{n} (X_t^*)^2 \\ \frac{1}{n}\sum_{t=2}^{n} X_t^* X_{t-1}^* \end{pmatrix} - \begin{pmatrix} \hat{\gamma}_n(0) \\ \hat{\gamma}_n(1) \end{pmatrix} \right) \xrightarrow{d} N\left( \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} V_{0,0} & V_{0,1} \\ V_{1,0} & V_{1,1} \end{pmatrix} \right)$$

and

$$\sqrt{n}\,\big( \hat{\theta}^* - \hat{\theta} \big) = \sqrt{n}\left( \phi\Big( \frac{1}{n}\sum_{t=1}^{n} (X_t^*)^2,\ \frac{1}{n}\sum_{t=2}^{n} X_t^* X_{t-1}^* \Big) - \phi\Big( \frac{1}{n}\sum_{t=1}^{n} X_t^2,\ \frac{1}{n}\sum_{t=2}^{n} X_t X_{t-1} \Big) \right) \xrightarrow{d} N(0, \sigma^{*2}),$$

where

$$\sigma^{*2} = \frac{\hat{\gamma}_n(1)^2}{\hat{\gamma}_n(0)^4}\, V_{0,0} + \frac{1}{\hat{\gamma}_n(0)^2}\, V_{1,1} - \frac{2\,\hat{\gamma}_n(1)}{\hat{\gamma}_n(0)^3}\, V_{0,1}.$$
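As referenced above, the following is a minimal Python sketch of bootstrap steps 1-3 and the plug-in estimator. The number of replicates `B`, the seed handling, and the function name are our illustrative assumptions rather than details from [5, 6].

```python
import numpy as np

def ar1_residual_bootstrap(x, B=999, seed=None):
    """Residual bootstrap of steps 1-3 above; returns the replicates
    theta*_1, ..., theta*_B and their standard deviation, i.e. the
    bootstrap standard error."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n = len(x)
    theta_hat = np.sum(x[1:] * x[:-1]) / np.sum(x ** 2)   # eq. (5)

    # Step 1: residuals eps_t = X_t - theta_hat * X_{t-1}, t = 2, ..., n.
    resid = x[1:] - theta_hat * x[:-1]

    reps = np.empty(B)
    for b in range(B):
        # Step 2: resample residuals with replacement and rebuild the
        # series recursively, starting from X*_1 = X_1.
        eps_star = rng.choice(resid, size=n - 1, replace=True)
        x_star = np.empty(n)
        x_star[0] = x[0]
        for t in range(1, n):
            x_star[t] = theta_hat * x_star[t - 1] + eps_star[t - 1]
        # Step 3: centre the bootstrap series, then apply the plug-in formula.
        x_star -= x_star.mean()
        reps[b] = np.sum(x_star[1:] * x_star[:-1]) / np.sum(x_star ** 2)

    return reps, reps.std(ddof=1)
```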
The results obtained by these two approaches lead to the same conclusion: both statistics converge in distribution to a normal law, although the variances of the two limiting distributions differ. This difference is a reasonable property, because the two variances are derived by different approaches.
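As a worked check that is not carried out in the text, one can specialize $\sigma^2$ to a stationary AR(1) process with parameter $\theta$: the $\kappa_4$ terms in $V_{0,0}$, $V_{0,1}$, and $V_{1,1}$ cancel in the linear combination, and the remaining geometric series sum to the classical value that underlies the Davison and Hinkley standard error $\sqrt{(1-\hat{\theta}^2)/n}$ quoted above.

```latex
% Stationary AR(1): \gamma_X(h) = \sigma_\varepsilon^2 \theta^{|h|} / (1-\theta^2).
% Substituting into \sigma^2 and summing the geometric series gives
\sigma^{2}
  = \frac{\gamma_X(1)^{2}}{\gamma_X(0)^{4}}\,V_{0,0}
  - \frac{2\,\gamma_X(1)}{\gamma_X(0)^{3}}\,V_{0,1}
  + \frac{1}{\gamma_X(0)^{2}}\,V_{1,1}
  = 1 - \theta^{2}.
```

The delta-method limit thus reproduces the closed-form standard error, while the bootstrap variance $\sigma^{*2}$ replaces $\gamma_X$ by its sample analogue $\hat{\gamma}_n$.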

4. Results of Monte Carlo Simulations