Economics Letters 69 (2000) 15–23
www.elsevier.com/locate/econbase

A radial basis function artificial neural network test for ARCH

Andrew P. Blake, George Kapetanios

National Institute of Economic and Social Research, 2 Dean Trench Street, Smith Square, London SW1P 3HE, UK

Received 21 July 1999; accepted 3 March 2000
Abstract
We propose a test for ARCH that uses a radial basis function artificial neural network. It outperforms alternative neural network tests in a variety of Monte Carlo experiments.
© 2000 Elsevier Science S.A. All rights reserved.
Keywords : Artificial neural networks; Conditional heteroscedasticity; Hypothesis testing; Bootstrap
JEL classification : C12; C22; C45
1. Introduction
Following the introduction of autoregressive conditional heteroscedasticity (ARCH) models by Engle (1982), a rapidly increasing amount of empirical and theoretical work has investigated their properties and extensions such as GARCH and M-GARCH. Comprehensive surveys may be found in, for example, Bollerslev et al. (1992), Bera and Higgins (1993) and Bollerslev et al. (1994). Testing procedures for ARCH effects have also received considerable attention, beginning with the LM procedure proposed by Engle (1982). Recently, Peguin-Feissolle (1999) has proposed a neural network test for ARCH based on the neglected nonlinearity testing procedure of Lee et al. (1993).
In this paper we provide a general test for ARCH based on an alternative neural network specification. This is shown to have superior power properties and to be less subject to ad hoc parameter choices. We do find, however, that the test suffers from size distortions. We use the bootstrap to correct these distortions and find that, although the power of the test is reduced, it remains above that of alternative neural network procedures. The paper is organised as follows. Section 2 presents the alternative testing procedures that will be examined. Section 3 provides details of the bootstrap procedure that corrects the size distortions of the original test. Section 4 provides the design of the
Corresponding author. Tel.: +44-20-7654-1924; fax: +44-20-7654-1900. E-mail address: ablake@niesr.ac.uk (A.P. Blake).
Monte Carlo experiments. Section 5 presents the results of the Monte Carlo study. Finally, Section 6 concludes.
2. Testing procedures
We consider the following regression model

$$y_t = x_t' \beta + \varepsilon_t \tag{1}$$

where $\{\varepsilon_t\}$ is a sequence of disturbances and we denote by $\Omega_t$ the set of available information ($\sigma$-field) at time $t$. Under the null hypothesis $\{\varepsilon_t\}$ is an i.i.d. sequence with variance $\sigma^2$. Under the alternative hypothesis

$$E(\varepsilon_t^2 \mid \Omega_{t-1}) = h_t = f(\varepsilon_{t-1}, \varepsilon_{t-2}, \ldots).$$
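To make the null and alternative concrete, the following sketch simulates model (1) with i.i.d. errors and with an ARCH(1) error process, a simple special case of $f(\cdot)$. All parameter values here are illustrative choices, not taken from the paper.

```python
# Illustrative simulation of model (1) under the null (i.i.d. errors)
# and under an ARCH(1) alternative: h_t = a0 + a1 * e_{t-1}^2.
# Parameter values are arbitrary demonstration choices.
import numpy as np

rng = np.random.default_rng(0)
T = 500
x = np.column_stack([np.ones(T), rng.normal(size=T)])  # regressors
beta = np.array([1.0, 0.5])

# Null hypothesis: e_t i.i.d. with constant variance sigma^2 = 1
e_null = rng.normal(scale=1.0, size=T)

# Alternative hypothesis: ARCH(1) conditional variance
a0, a1 = 0.5, 0.5
e_alt = np.zeros(T)
h = np.zeros(T)
h[0] = a0 / (1 - a1)                 # start at the unconditional variance
e_alt[0] = rng.normal() * np.sqrt(h[0])
for t in range(1, T):
    h[t] = a0 + a1 * e_alt[t - 1] ** 2
    e_alt[t] = rng.normal() * np.sqrt(h[t])

y_null = x @ beta + e_null
y_alt = x @ beta + e_alt
```

Under the alternative, large shocks raise next period's variance, producing the volatility clustering the test is designed to detect.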
The ability of neural networks to approximate arbitrary functions has been used in a number of contexts in econometrics, including tests for neglected nonlinearity (Lee et al., 1993) and tests for ARCH (Peguin-Feissolle, 1999). The underlying idea in ARCH testing is that the function $f(\cdot)$ may be approximated by a neural network. A single hidden layer feedforward neural network model is defined as

$$f(\varepsilon_{t-1}, \varepsilon_{t-2}, \ldots) \approx \alpha_0 + \sum_{j=1}^{q} \alpha_j \psi_j(\varepsilon_{t-1}, \ldots, \varepsilon_{t-p}, \gamma_{j0}, \ldots, \gamma_{jp}) \tag{2}$$
There are $p$ inputs used to activate $q$ hidden units regulated by the functions $\psi_j(\cdot)$. Following Lee et al. (1993), Peguin-Feissolle (1999) specified that logistic functions be used for $\psi_j(\cdot)$, giving
$$f(\varepsilon_{t-1}, \varepsilon_{t-2}, \ldots) \approx \alpha_0 + \sum_{j=1}^{q} \frac{\alpha_j}{1 + e^{-(\gamma_{0j} + \gamma_{1j}\varepsilon_{t-1} + \cdots + \gamma_{pj}\varepsilon_{t-p})}} \tag{3}$$
Then the null of no ARCH effects is equivalent to a test of $\alpha_1 = \cdots = \alpha_q = 0$, where $\alpha_j$, $j = 1, \ldots, q$, are the coefficients of the hidden units in a regression of the squared residuals of (1) on a constant and the hidden units.
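As a concrete illustration of this regression-based idea, the sketch below builds logistic hidden units from lagged residuals with weights fixed in advance (here drawn at random, a device discussed next), regresses the demeaned squared residuals on a constant and the units, and forms a $TR^2$-type statistic. The $TR^2$ form, and all parameter choices ($p$, $q$, the weight range), are illustrative assumptions rather than the authors' exact procedure.

```python
# Sketch of an ARCH test in the spirit of equation (3): regress squared
# residuals on a constant plus q logistic hidden units with pre-fixed
# (random) weights, then test the hidden-unit coefficients jointly.
import numpy as np

def logistic_hidden_units(e, p=2, q=5, seed=0):
    """Hidden units psi_j built from p lags of the residuals e."""
    rng = np.random.default_rng(seed)
    T = len(e)
    # column i holds e_{t-1-i} for t = p, ..., T-1
    lags = np.column_stack([e[p - i - 1:T - i - 1] for i in range(p)])
    # random weights on [-2, 2], including an intercept row
    gamma = rng.uniform(-2.0, 2.0, size=(p + 1, q))
    z = gamma[0] + lags @ gamma[1:]
    return 1.0 / (1.0 + np.exp(-z))          # (T - p) rows, q columns

rng = np.random.default_rng(1)
T = 400
e = rng.normal(size=T)          # stand-in for OLS residuals (null is true)
W = logistic_hidden_units(e, p=2, q=5)
e2 = (e ** 2)[2:]               # squared residuals, first p = 2 obs consumed
e2d = e2 - e2.mean()            # demeaned squared residuals

X = np.column_stack([np.ones(len(W)), W])
coef, *_ = np.linalg.lstsq(X, e2d, rcond=None)
fitted = X @ coef
R2 = fitted.var() / e2d.var()   # R^2 of the auxiliary regression
stat = len(e2d) * R2            # TR^2-type statistic, approx chi^2(q) under H0
```

Comparing `stat` with a $\chi^2$ critical value gives the decision; in practice principal components of the hidden units would replace `W`, as described below.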
There are $q(p + 1) + q + 1$ parameters to estimate in (3), a difficult and expensive exercise. Peguin-Feissolle (1999) follows Lee et al. (1993) in avoiding estimation issues by assuming that the weights $\gamma_{ij}$, $i = 0, \ldots, p$; $j = 1, \ldots, q$, are generated randomly from a uniform distribution on $[-2, 2]$; $q$ and $p$ are chosen by the researcher. To avoid multicollinearity between the hidden units, the $\tilde{q}$ largest principal components of the hidden units are used as regressors instead. Again the choice of $\tilde{q}$ rests with the researcher. The actual test amounts to constructing the statistic

$$\frac{1}{\hat{\sigma}^2}\, \tilde{e}' \tilde{W} (\tilde{W}' \tilde{W})^{-1} \tilde{W}' \tilde{e}$$

where $\tilde{e} = (\tilde{e}_1, \ldots, \tilde{e}_T)'$ is the vector of demeaned squared residuals from (1) and $\tilde{W}$ is a matrix containing the observations on the hidden units and a constant. The test asymptotically has a $\chi^2_{\tilde{q}}$ distribution according to Peguin-Feissolle (1999). We will refer to this as the ANN test.

Our test amounts to two important departures from the ANN test. Firstly, we suggest using radial basis functions (RBFs) instead of logistic functions. These are widely used in single hidden layer neural network models; for a nontechnical introduction see Campbell et al. (1993). The inspiration for
the use of similar functions was originally to solve the exact interpolation problem. The hidden neural network unit arising out of RBFs is generally of the form $\psi(\lVert v_t - c \rVert)$, where $v_t$ is a vector of inputs, $c$ is a vector of constants referred to as centres, $\lVert \cdot \rVert$ denotes a norm and $\psi(\cdot)$ is a scalar function. Usually the function used is Gaussian, given by $e^{-z_t}$. Specifically, we use the following:

$$\tilde{z}_t = R^{-1}(v_t - c), \qquad z_t = \tilde{z}_t' \tilde{z}_t.$$

$R$ is a diagonal matrix of radii; $e^{-z_t}$ is then the hidden unit output for a given $c$ and $R$.
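A minimal sketch of this Gaussian RBF hidden unit, computing $e^{-z_t}$ with $z_t$ the squared distance of the inputs from the centre scaled by the radii; the centre, radii and input values are illustrative assumptions.

```python
# Gaussian radial basis function hidden unit: output exp(-z_t) where
# z_t is the squared norm of R^{-1}(v_t - c), R a diagonal matrix of radii.
import numpy as np

def rbf_unit(V, c, r):
    """Gaussian RBF output for each row v_t of V.

    V : (T, p) input observations; c : (p,) centre vector;
    r : (p,) diagonal of R, i.e. the radii.
    """
    Z = (V - c) / r              # rows are R^{-1}(v_t - c)
    z = np.sum(Z ** 2, axis=1)   # z_t = z_t'z_t, squared scaled distance
    return np.exp(-z)            # hidden unit output

# Illustrative inputs: output is 1 at the centre and decays with distance
V = np.array([[0.0, 0.0], [1.0, 1.0], [3.0, -2.0]])
c = np.zeros(2)
r = np.ones(2)
out = rbf_unit(V, c, r)
```

Unlike the logistic unit, the RBF responds locally: observations far from the centre contribute almost nothing, which is what makes the choice of centres (each observation, in the strategy below) natural.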
Our second departure from the ANN test is to use a strategy to construct the test statistic which avoids many of the ad hoc parameter choices associated with the White test. Following Peguin-Feissolle (1999) and Kamstra (1993), we consider both the residuals and the squared residuals of (1) at given lags as possible neural network inputs for the construction of hidden units. The strategy may be described as follows:
1. Form $T$ potential hidden units by using each observation of the inputs, $v_t$, $t = 1, \ldots, T$, as a