
As special cases, we have that with an MA(1) process:

$$\operatorname{Var}(X_T(r)) = \frac{[Tr]-1}{T} + \frac{1}{T(1+\theta)^2},$$

and with an AR(1) process:

$$\operatorname{Var}(X_T(r)) = \frac{[Tr]}{T} - \frac{2\alpha\,(1-\alpha^{[Tr]})}{T(1-\alpha)} + \frac{\alpha^{2}(1-\alpha^{2[Tr]})}{T(1-\alpha^{2})}.$$

It is easily seen, from each expression, that as $T \to \infty$, $\operatorname{Var}(X_T(r)) \to r = \operatorname{Var}(W(r))$. To assess the 'approximation error' we use the percentage relative deviation in variance:

$$e(T,r,\alpha,\theta) = 100\,\frac{\operatorname{Var}(X_T(r)) - \operatorname{Var}(W(r))}{\operatorname{Var}(W(r))} = 100\,\frac{\operatorname{Var}(X_T(r)) - r}{r}. \qquad (3)$$
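For concreteness, the following minimal Python sketch (not part of the original paper; the function names are introduced here for illustration) evaluates the two closed-form variances above and the percentage deviation of Eq. (3). It assumes |α| < 1 and θ ≠ −1, as in the parameter grids considered below.

```python
import numpy as np

def var_ma1(T, r, theta):
    """Var(X_T(r)) for an MA(1) process u_t = e_t + theta * e_{t-1}."""
    k = int(np.floor(T * r))                      # [Tr]
    return (k - 1) / T + 1.0 / (T * (1 + theta) ** 2)

def var_ar1(T, r, alpha):
    """Var(X_T(r)) for an AR(1) process u_t = alpha * u_{t-1} + e_t, 0 < |alpha| < 1."""
    k = int(np.floor(T * r))                      # [Tr]
    return (k / T
            - 2 * alpha * (1 - alpha ** k) / (T * (1 - alpha))
            + alpha ** 2 * (1 - alpha ** (2 * k)) / (T * (1 - alpha ** 2)))

def error_pct(var_xt, r):
    """Percentage relative deviation of Eq. (3): 100 * (Var(X_T(r)) - r) / r."""
    return 100.0 * (var_xt - r) / r

# Examples: strong negative MA versus strong positive AR serial correlation.
print(error_pct(var_ma1(30, 0.1, -0.95), 0.1))    # large positive error
print(error_pct(var_ar1(30, 0.1, 0.95), 0.1))     # negative error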

3. The case with an i.i.d. process

When the process is i.i.d. N(0,1), we have

$$e(T,r,0,0) = 100\,\frac{[Tr]/T - r}{r} = 100\left(\frac{[Tr]}{Tr} - 1\right). \qquad (4)$$

It is to be noted that the error is non-positive for all T and r, so that the variance of the Wiener process W(r) over-estimates the variance of X_T(r). An interesting feature of the error function (4) is that it is not, in general, a monotonically decreasing function of the sample size. Take, for example, the case r = 0.5: the error is 0 for an even-valued sample size but equals −100/T for any odd-valued sample size. There is, hence, a seesaw pattern in the error which increases in importance as r decreases. The error caused by this feature can be quite large. Take, for example, the case r = 0.1. The errors are as follows: 0 at T = 20, −4.76 at T = 21, up to −31.03 at T = 29, back to 0 at T = 30, −3.23 at T = 31, and up to −23.08 at T = 39. In Fig. 1, we present the percentage error function for values of T ranging from 10 to 100 in steps of 1 and for r between 0.05 and 1.0 in steps of 0.05. The seesaw pattern is seen to be quite important when T and r are small. When r increases, it rapidly becomes negligible. However, when r is small (e.g. 0.1), the non-monotonic pattern remains important even when T is as large as 100. This feature, due to the truncation of [Tr] relative to Tr, holds as well in the general ARMA case. We shall, however, not dwell upon it, and choose values of T and r such that [Tr] = Tr in order to concentrate on the effect of serial correlation.
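As a quick check (our own illustration, not from the paper), the r = 0.1 values quoted above follow directly from Eq. (4):

```python
import math

def error_iid(T, r):
    """Percentage error of Eq. (4) for an i.i.d. process: 100 * ([Tr]/(Tr) - 1)."""
    return 100.0 * (math.floor(T * r) / (T * r) - 1.0)

for T in (20, 21, 29, 30, 31, 39):
    print(T, round(error_iid(T, 0.1), 2))
# Errors: 0.0, -4.76, -31.03, 0.0, -3.23, -23.08 -- the values quoted in the text.
```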

4. Numerical results for ARMA(1,1) processes

We present results of the error function (3) for a wide range of values of α and θ, and values of T selected to be T = 30, 100, 500 and 5000. The values of r are from 0.1 to 0.9 in steps of 0.1. The pairs {r, T} selected avoid the non-monotonic behavior of the error caused by the truncation of [Tr] relative to Tr.

Fig. 1. Relative error in the i.i.d. case (negative of the value).

Fig. 2 presents the error function e(T, r, 0, θ) corresponding to the case of an MA(1) process. The graphs are constructed using values of θ between −0.95 and 0.95 in steps of 0.05. For any value of the MA parameter, the error function is positive and decreasing as r increases. A notable feature is the fact that when the MA component is close to −1, the error is very large, reaching as high as 14 000 for T = 30 when r = 0.1 and θ = −0.95. Even when r = 1, it is of the order of 2000. As expected, the error function decreases as T increases, but very slowly when the MA parameter is large and negative. Even at T = 5000, the error with θ = −0.95 is between 10 and 80 depending on the value of r.

Fig. 2. The error function in the MA case.

Fig. 3 presents a similar graph for the case of an AR(1) process, again for values of α between −0.95 and 0.95 in steps of 0.05. Here the results are quite different. First, the errors are nowhere near as large as in the MA case; the largest error is of the order of 160 when T = 30. An interesting feature is that the error is negative when α is positive and vice versa. The error declines in absolute value as r increases. Also, the errors are larger for negative values of α. The largest errors occur when α is close to either −1 or +1, in which case the rate of decrease as T increases is rather slow. The rate of decrease is slower when α is positive, so that when T is either 500 or 5000, the error is larger in absolute value for positive than for negative values of α.

Fig. 3. The error function in the AR case.

We now consider the behavior of the error as a function of the MA and AR parameters jointly. We consider the case r = 1 for illustration, since the qualitative results are unchanged using other values of r. What transpires from Fig. 4 is the fact that the error function is totally dominated by the region where both the MA and the AR coefficients are large and negative. In the extreme, with α = θ = −0.95, the error is of the order of 50 000 when T = 30 and is still as high as 300 when T = 5000. The error in this region is so large that we fail to see variations in the error function for other parameter configurations. To that effect, we present in Fig. 5 the error function in the region where the AR coefficient α is positive. One can see that there are still important variations when the MA coefficient is negative. For example, with T = 30, α = 0 and θ = −0.95, the error is still around 1400, and it decreases to 0 when α = 0.95 and θ = −0.95, in which case the process is i.i.d. since the roots cancel. The error decreases quite rapidly as T grows, but when T = 5000 it is still around 8 when α = 0 and θ = −0.95. Hence, we see that the error function is highest, by a wide margin, when the MA coefficient is close to −1. The effect induced by the presence of an autoregressive component is to reduce the error function when α is positive and to increase it when α is negative, reaching extremely high values as both the MA and AR coefficients get close to −1.

Fig. 4. The error function in the ARMA(1,1) case.
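The exact error for a general ARMA(1,1) configuration can be reproduced with a short sketch. The following is our reconstruction (not the authors' code): it assumes the process u_t = αu_{t−1} + e_t + θe_{t−1} is started with e_t = 0 for t ≤ 0, a convention consistent with the special-case expressions given above, and computes Var(X_T(r)) from the truncated MA(∞) weights.

```python
import numpy as np

def error_arma11(T, r, alpha, theta):
    """Percentage error (3) for u_t = alpha*u_{t-1} + e_t + theta*e_{t-1}, with e_t = 0 for t <= 0."""
    k = int(np.floor(T * r))                          # [Tr]
    psi = np.empty(k)                                 # MA(infinity) weights of u_t
    psi[0] = 1.0
    if k > 1:
        psi[1:] = (alpha + theta) * alpha ** np.arange(k - 1)
    c = np.cumsum(psi)                                # c[m-1] = psi_0 + ... + psi_{m-1}
    var_sk = np.sum(c ** 2)                           # Var(S_[Tr]) / sigma_e^2
    lrv = (1 + theta) ** 2 / (1 - alpha) ** 2         # long-run variance / sigma_e^2
    return 100.0 * (var_sk / (T * lrv) - r) / r

# Consistency checks against the closed forms above, then the extreme region of Fig. 4.
print(error_arma11(30, 0.1, 0.0, -0.95))              # pure MA(1): matches the closed form
print(error_arma11(30, 0.1, -0.95, 0.0))              # pure AR(1): matches the closed form
print(error_arma11(30, 1.0, -0.95, -0.95))            # both coefficients near -1: very large error
```

With α = 0.95 and θ = −0.95 the weights beyond the first vanish, so the function returns the i.i.d. error, in line with the root-cancellation remark above.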
To better assess the effect on the error function of variations in the AR coefficient, Fig. 6 presents the results for the parameter space where the MA coefficient is constrained to be non-negative. What is first of interest is the fact that variations in the MA coefficient from 0 to 0.95 induce basically no variation in the error function, except somewhat when the AR coefficient takes a high negative value, in which case an increase in the MA coefficient decreases the error somewhat. Hence, the behavior of the error function as α varies is basically the same as that presented in Fig. 3 for r = 1.

In summary, the results show that when the data exhibit little serial correlation, the Wiener process is a good approximation to the finite-sample distribution of the scaled partial sums X_T(r), except for some non-monotonicities caused by truncation which, however, dissipate rather quickly as the sample size increases. However, the presence of a negative moving average component causes serious distortions to the approximation, which become very important when the moving average coefficient approaches the boundary −1. The distortions are mitigated somewhat when a positive autoregressive component is present, but exacerbated to a great extent with a negative autoregressive component. When only an autoregressive component is present, the errors can be large for sample sizes like T = 100 but become small as T increases further. We have performed similar experiments with the demeaned process in relation to its asymptotic approximation given by the Brownian bridge. The qualitative results are very similar when r = 0.5. The difference is that the magnitude of the errors decreases as r approaches either 0 or 1, but the general shape remains the same. The interested reader may consult the full set of results in Mallet (1995).
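No closed forms for the demeaned case are reproduced here; the full results are in Mallet (1995). Purely as a hypothetical illustration of how such a comparison could be set up, the following simulation contrasts the finite-sample variance of a demeaned scaled partial sum (assuming the definition X̃_T(r) = σ⁻¹T^(−1/2) Σ_{t≤[Tr]}(u_t − ū) and the same start-up convention as above) with the Brownian-bridge variance r(1 − r).

```python
import numpy as np

rng = np.random.default_rng(0)

def demeaned_variance(T, r, alpha, theta, n_rep=20000):
    """Monte Carlo variance of the demeaned scaled partial sum (assumed definition, see text)."""
    k = int(np.floor(T * r))
    lrv = (1 + theta) ** 2 / (1 - alpha) ** 2          # long-run variance / sigma_e^2
    vals = np.empty(n_rep)
    for i in range(n_rep):
        e = rng.standard_normal(T)
        u = np.empty(T)
        u[0] = e[0]                                    # start-up u_1 = e_1 (assumption)
        for t in range(1, T):
            u[t] = alpha * u[t - 1] + e[t] + theta * e[t - 1]
        vals[i] = np.sum(u[:k] - u.mean()) / np.sqrt(T * lrv)
    return vals.var()

# Finite-sample variance versus the Brownian-bridge variance r*(1 - r).
T, r = 100, 0.5
print(demeaned_variance(T, r, alpha=0.0, theta=-0.8), r * (1 - r))
```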

5. Conclusion