
A.P. Blake, G. Kapetanios / Economics Letters 69 (2000) 15–23

the use of similar functions was originally to solve the exact interpolation problem. The hidden neural network unit arising out of RBFs is generally of the form $\psi(\|v_t - c\|)$, where $v_t$ is a vector of inputs, $c$ is a vector of constants referred to as centres, $\|\cdot\|$ denotes a norm and $\psi(\cdot)$ is a scalar function. Usually the function used is Gaussian, given by $e^{-z_t}$. Specifically, we use $z_t = \tilde{z}_t' \tilde{z}_t$, where $\tilde{z}_t = R^{-1}(v_t - c)$ and $R$ is a diagonal matrix of radii; $e^{-z_t}$ is then the hidden unit output for a given $c$ and $R$. Our second departure from the ANN test is to use a strategy to construct the test statistic which avoids many of the ad hoc parameter choices associated with the White test. Following Peguin-Feissolle (1999) and Kamstra (1993) we consider both the residuals and the squared residuals of (1) at given lags as possible neural network inputs for the construction of hidden units. The strategy may be described as follows:
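The Gaussian hidden unit defined above can be sketched as follows; this is a minimal illustration assuming NumPy, with the function name and example values chosen for exposition rather than taken from the paper:

```python
import numpy as np

def rbf_hidden_unit(v, c, radii):
    """Gaussian RBF hidden-unit output exp(-z_t), where
    z_t = z_tilde' z_tilde and z_tilde = R^{-1}(v - c),
    with R the diagonal matrix of radii."""
    z_tilde = (v - c) / radii       # R^{-1}(v - c) for diagonal R
    z = z_tilde @ z_tilde           # scalar z_t = z_tilde' z_tilde
    return np.exp(-z)

# illustrative inputs: two lagged-residual inputs, centre at an observation
v = np.array([0.5, -0.2])
c = np.array([0.1, 0.3])
radii = np.array([1.0, 1.0])
out = rbf_hidden_unit(v, c, radii)
```

The output lies in (0, 1] and equals 1 only when the input coincides with the centre, which is what makes each observation a natural candidate centre in the strategy below.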

1. Form T potential hidden units by using each observation of the inputs, $v_t$, $t = 1, \ldots, T$, as a possible centre. Following common practice (Orr, 1995) we use twice the maximum change from period $t$ to period $t+1$, $t = 1, \ldots, T$, of each input as the radius for all potential hidden units.
2. We regress the squared residuals of (1) on a constant and each hidden unit and obtain the sum of squared residuals, $SSR_t$, from each regression.
3. We compare the $SSR_t$ with $\widetilde{SSR} = \sum_{t=1}^{T} \hat{e}_t^2$ and sort the hidden units according to the magnitude of $\widetilde{SSR} - SSR_t$ in descending order.
4. Starting with the hidden unit which provides the maximum reduction in the sum of squared residuals, we regress the residuals, $\hat{e}_t$, on this unit and a constant, and successively add units to the regression until an information criterion (in this case BIC) is minimised.
5. For the chosen regression, we test the significance of the coefficients of the hidden units using the same LM test used for the ANN test. This test will be referred to as the RBF test.
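Steps 1–4 can be sketched as below. This is a hedged sketch, not the authors' code: the function name, the OLS helper, and the particular BIC formula ($T\log(SSR/T) + k\log T$) are illustrative assumptions, and the simulated ARCH-type residuals stand in for residuals from (1):

```python
import numpy as np

def select_rbf_units(e_hat, V, max_units=10):
    """Steps 1-4: build candidate units, rank them by SSR reduction,
    then forward-select until BIC stops improving."""
    T, k = V.shape
    # Step 1: every observation v_t is a candidate centre; the radius of
    # each input is twice its maximum one-period change (Orr, 1995).
    radii = 2.0 * np.abs(np.diff(V, axis=0)).max(axis=0)
    diffs = (V[:, None, :] - V[None, :, :]) / radii
    units = np.exp(-(diffs ** 2).sum(axis=-1))  # units[t, j]: unit j at time t

    def ssr(y, X):
        # sum of squared residuals from an OLS regression of y on [1, X]
        X = np.column_stack([np.ones(T), X])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return resid @ resid

    # Step 2: regress squared residuals on a constant and each unit
    ssrs = np.array([ssr(e_hat ** 2, units[:, j]) for j in range(T)])
    # Step 3: smallest SSR first, i.e. largest reduction first
    order = np.argsort(ssrs)
    # Step 4: add units in that order until BIC stops falling
    chosen, best_bic = [], np.inf
    for j in order[:max_units]:
        trial = chosen + [int(j)]
        s = ssr(e_hat, units[:, trial])
        bic = T * np.log(s / T) + (len(trial) + 1) * np.log(T)
        if bic >= best_bic:
            break
        best_bic, chosen = bic, trial
    return chosen

# illustration on simulated ARCH-type residuals (assumed data, not the paper's)
rng = np.random.default_rng(0)
u = rng.standard_normal(61)
e_hat = u[1:] * np.sqrt(1.0 + 0.5 * u[:-1] ** 2)
V = np.column_stack([u[:-1], u[:-1] ** 2])  # lagged residual and its square
chosen = select_rbf_units(e_hat, V)
```

The LM test of step 5 would then be applied to the regression on the selected units; it is omitted here since it is the same test used for the ANN test.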

3. Bootstrap approximation

Preliminary investigation of the small sample properties of the RBF test indicates that the size properties of the test are not very good. In particular, the test overrejects under the null hypothesis on a number of occasions. Such an outcome indicates that the $\chi^2$ asymptotic approximation is not particularly accurate. A common method used to overcome this problem involves use of the bootstrap. The bootstrap is an improvement on the asymptotic approximation since, under given conditions, the approximation error is smaller than $T^{-1/2}$, which is the error of the asymptotic approximation. In particular, if a statistic is asymptotically pivotal (i.e. does not depend asymptotically on unknown parameters) then the bootstrap estimate of its distribution has statistical error of order $T^{-1}$. Of course, the simulation error arising out of the need to estimate the bootstrap distribution using a finite number of replications has to be controlled. As Brown (1999) points out, $T^2$ bootstrap replications are needed to take advantage of the improved statistical approximation. The above results apply to samples with i.i.d. observations. Results for sequences of dependent data exist for some cases, such as AR or MA models (see Bose, 1988 and Bose, 1990), indicating that the improvement is of order $o_p(1)$ rather than $O_p(T^{-1/2})$.

In our framework the bootstrap can be applied as follows. Once the original test statistic, denoted by $\hat{S}$, has been obtained, we retrieve the set of residuals, $\{\hat{e}_1, \ldots, \hat{e}_T\}$, from (1). We then resample randomly with replacement from the set of residuals to obtain a bootstrap sample of residuals $\hat{e}_1^*, \ldots, \hat{e}_T^*$, where each $\hat{e}_t^*$ has been drawn with replacement from $\{\hat{e}_1, \ldots, \hat{e}_T\}$ and stars denote generic bootstrap quantities. We then carry out the artificial neural network test on the bootstrap sample of residuals, $\hat{e}_1^*, \ldots, \hat{e}_T^*$.
Repeating this process N times, where N is the number of bootstrap replications, we obtain a set of bootstrap test statistics, $S_1^*, \ldots, S_N^*$. We then use these samples to construct the bootstrap distribution of our test statistic. More specifically, the estimated P-value of a given test statistic $S$ is given by

$$\hat{p} = \frac{\sum_{n=1}^{N} 1(S_n^* \geq S)}{N}$$

where $1(\cdot)$ is the indicator function taking the value 1 when its argument is true and zero otherwise. We will refer to the bootstrap test as RBF-B. Under the null hypothesis, the error terms in (1) are i.i.d. and therefore the resampling scheme described above is justified. Under an ARCH alternative, the random resampling should ensure that the dependence between the resampled residuals is asymptotically negligible, thereby providing a consistent testing procedure under certain conditions. Establishing that the error sequence in (1) is either mixing or near epoch dependent (see e.g. Davidson, 1994, pp. 261–277) under a generalised ARCH alternative should be sufficient for the testing procedure to be consistent. Nevertheless, the sufficiency of mixing or near epoch dependence is conjectured and a rigorous proof remains to be provided.
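The resampling scheme and the P-value formula can be sketched as follows. The statistic passed in is a placeholder (an absolute skewness-type moment), not the RBF test statistic, which would require the full unit-selection procedure; the function name and defaults are illustrative assumptions:

```python
import numpy as np

def bootstrap_pvalue(residuals, statistic, N=500, seed=0):
    """Resample residuals with replacement, recompute the statistic N
    times, and return the estimated P-value: the share of bootstrap
    statistics at least as large as the original one."""
    rng = np.random.default_rng(seed)
    T = len(residuals)
    S = statistic(residuals)            # original test statistic
    S_star = np.empty(N)
    for n in range(N):
        # bootstrap sample: T draws with replacement from the residuals
        e_star = rng.choice(residuals, size=T, replace=True)
        S_star[n] = statistic(e_star)
    return np.mean(S_star >= S)         # estimated P-value

# placeholder statistic (NOT the RBF statistic), on simulated i.i.d. residuals
e = np.random.default_rng(1).standard_normal(200)
p_hat = bootstrap_pvalue(e, lambda x: abs(np.mean(x ** 3)))
```

Because the draws are i.i.d. from the empirical distribution of the residuals, the scheme matches the null hypothesis under which the errors in (1) are i.i.d.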

4. Monte Carlo study