
Journal of Econometrics 95 (2000) 57–69

Bayesian analysis of ARMA–GARCH models: A Markov chain sampling approach

Teruo Nakatsuma*
Institute of Economic Research, Hitotsubashi University, Naka 2-1, Kunitachi, Tokyo 186-8603, Japan

Received 1 August 1996; received in revised form 1 February 1999; accepted 1 April 1999

Abstract

We develop a Markov chain Monte Carlo method for a linear regression model with an ARMA(p, q)–GARCH(r, s) error. To generate a Monte Carlo sample from the joint posterior distribution, we employ Markov chain sampling with the Metropolis–Hastings algorithm. As an illustration, we estimate an ARMA–GARCH model on simulated time series data. © 2000 Elsevier Science S.A. All rights reserved.

JEL classification: C11; C22

Keywords: ARMA process; Bayesian inference; GARCH; Markov chain Monte Carlo; Metropolis–Hastings algorithm

1. Introduction
In this paper, we propose a new Markov chain Monte Carlo (MCMC)
method for Bayesian estimation and inference of the ARCH/GARCH model.

Autoregressive conditional heteroskedasticity (ARCH) by Engle (1982) and generalized ARCH (GARCH) by Bollerslev (1986) have been extensively studied and applied in many fields of economics, especially in financial economics. Our

* Tel.: 042-580-8367; fax: 042-580-8333.
E-mail address: [email protected] (T. Nakatsuma)

new MCMC method is applicable to estimating not only a simple ARCH/GARCH model but also a linear regression model with an ARMA–GARCH error, which we shall call the ARMA–GARCH model.

The MCMC method is one of the Monte Carlo integration methods.¹ In this method, we generate samples of the model parameters from their joint posterior distribution by Markov chain sampling and evaluate the complicated multiple integrals required for Bayesian inference by Monte Carlo integration. The Metropolis–Hastings (MH) algorithm is one of the Markov chain sampling schemes widely used among researchers. The MH algorithm, developed by Metropolis et al. (1953) and Hastings (1970), is a technique for generating random numbers from a probability distribution. This algorithm is especially useful when we cannot generate samples of the parameters directly from their joint posterior distribution, which is the case for the ARMA–GARCH model.

To develop a new MCMC method for the ARMA–GARCH model, we combine and modify the Markov chain sampling schemes developed by Chib and Greenberg (1994) and Müller and Pole (1995). Chib and Greenberg (1994) designed a Markov chain sampling scheme for a linear regression model with an ARMA(p, q) error in which the disturbance term of the ARMA process is i.i.d. normal. Müller and Pole (1995) developed an MCMC procedure for a linear regression model with a GARCH error in which the error term of the regression model has no serial correlation. It is therefore natural to develop a new MCMC procedure for the ARMA–GARCH model by combining these two methods.

The organization of this paper is as follows. In Section 2, we explain our new MCMC method for the ARMA–GARCH model. In Section 3, we estimate ARMA–GARCH models on simulated data by our MCMC method as an illustration. Section 4 gives concluding remarks.


2. MCMC method for ARMA–GARCH models

2.1. ARMA–GARCH model and Bayesian inference

We consider the following linear regression model with an ARMA(p, q)–GARCH(r, s) error, or simply an ARMA–GARCH model:

$$y_t = x_t \gamma + u_t, \quad (t = 1, \ldots, n), \tag{1}$$

¹ Importance sampling, another popular Monte Carlo integration method, has also been applied to ARCH and GARCH models by a few researchers (Geweke, 1989a, b; Kleibergen and Van Dijk, 1993, among others).

$$u_t = \sum_{j=1}^{p} \phi_j u_{t-j} + \epsilon_t + \sum_{j=1}^{q} \theta_j \epsilon_{t-j}, \qquad \epsilon_t \mid F_{t-1} \sim N(0, \sigma_t^2), \tag{2}$$

$$\sigma_t^2 = \alpha_0 + \sum_{j=1}^{r} \alpha_j \epsilon_{t-j}^2 + \sum_{j=1}^{s} \beta_j \sigma_{t-j}^2,$$
where $y_t$ is a scalar dependent variable; $x_t$ is a $1 \times k$ vector of independent variables; $\gamma$ is a $k \times 1$ vector of regression coefficients; $F_{t-1}$ is the $\sigma$-field generated by $\{y_{t-1}, y_{t-2}, \ldots\}$; the $\phi_j$'s are the coefficients of the AR(p) process; the $\theta_j$'s are the coefficients of the MA(q) process; and the $\alpha_j$'s and $\beta_j$'s are the coefficients of the GARCH(r, s) process. Let $\gamma = [\gamma_1, \ldots, \gamma_k]'$, $\phi = [\phi_1, \ldots, \phi_p]'$, $\theta = [\theta_1, \ldots, \theta_q]'$, $\alpha = [\alpha_0, \alpha_1, \ldots, \alpha_r]'$, $\beta = [\beta_1, \ldots, \beta_s]'$, $Y = [y_1, \ldots, y_n]'$, $X = [x_1', \ldots, x_n']'$, $\Phi(L) = \sum_{j=1}^{p} \phi_j L^j$, $\Theta(L) = \sum_{j=1}^{q} \theta_j L^j$, $A(L) = \sum_{j=1}^{r} \alpha_j L^j$, and $B(L) = \sum_{j=1}^{s} \beta_j L^j$, where $L$ is the lag operator.
In practice, we often impose several constraints on the parameters of the ARMA–GARCH model:

C1: all roots of $1 - \Phi(L) = 0$ lie outside the unit circle;
C2: all roots of $1 + \Theta(L) = 0$ lie outside the unit circle;
C3: $\alpha_j > 0$ for $j = 0, \ldots, r$;
C4: $\beta_j > 0$ for $j = 1, \ldots, s$.

C1 and C2 are related to the stationarity and invertibility of the ARMA process. C3 and C4 are imposed to guarantee that the conditional variance $\sigma_t^2$ is always positive. Many previous studies of GARCH models impose the further constraint

C5: $A(1) + B(1) < 1$

for the finiteness of the unconditional variance of $\epsilon_t$. Since one of the objectives of a Bayesian analysis of the ARMA–GARCH model is to test whether the constraint C5 is true, we do not impose C5 on the GARCH coefficients.
To perform Bayesian analyses of the ARMA–GARCH model, we construct the posterior density function of the model:

$$p(\delta \mid Y, X) = \frac{l(Y \mid X, \delta)\, p(\delta)}{\int l(Y \mid X, \delta)\, p(\delta)\, d\delta}, \tag{3}$$

where $\delta$ is the set of all parameters of the ARMA–GARCH model, $l(Y \mid X, \delta)$ is the likelihood function, and $p(\delta)$ is the prior. The likelihood function of the ARMA–GARCH model is

$$l(Y \mid X, \delta) = \prod_{t=1}^{n} \frac{1}{\sqrt{2\pi\sigma_t^2}} \exp\left[-\frac{\hat\epsilon_t^2}{2\sigma_t^2}\right], \tag{4}$$

where

$$\hat\epsilon_t = \begin{cases} \epsilon_0, & (t = 0), \\ y_t - x_t\gamma - \sum_{j=1}^{p} \phi_j (y_{t-j} - x_{t-j}\gamma) - \sum_{j=1}^{q} \theta_j \hat\epsilon_{t-j}, & (t = 1, \ldots, n), \end{cases} \tag{5}$$
and we assume $y_0 = \epsilon_0$, $y_t = 0$ for $t < 0$, and $x_t = 0$ for $t \le 0$. We treat the pre-sample error $\epsilon_0$ as a parameter of the ARMA–GARCH model.

As the prior, we use the following proper prior:

$$p(\epsilon_0, \gamma, \phi, \theta, \alpha, \beta) = N(\mu_{\epsilon_0}, \Sigma_{\epsilon_0}) \times N(\mu_\gamma, \Sigma_\gamma) \times N(\mu_\phi, \Sigma_\phi) I_{C1}(\phi) \times N(\mu_\theta, \Sigma_\theta) I_{C2}(\theta) \times N(\mu_\alpha, \Sigma_\alpha) I_{C3}(\alpha) \times N(\mu_\beta, \Sigma_\beta) I_{C4}(\beta), \tag{6}$$

where $I_{Cj}(\cdot)$ $(j = 1, 2, 3, 4)$ is the indicator function that equals one if the corresponding constraint holds and zero otherwise, and $N(\cdot) I_{Cj}(\cdot)$ denotes the normal distribution $N(\cdot)$ truncated at the boundary of the support of the indicator function $I_{Cj}(\cdot)$.
In Bayesian inference, we need to evaluate the expectation of a function of the parameters:

$$E[f(\delta)] = \int f(\delta)\, p(\delta \mid Y, X)\, d\delta. \tag{7}$$

The functional form of $f(\delta)$ depends on the kind of inference we conduct. For instance, to estimate the posterior probability of $A(1) + B(1) < 1$, we take $f(\delta)$ to be the indicator function $I(A(1) + B(1) < 1)$.

For the ARMA–GARCH model (1) and (2), however, it is difficult to evaluate the multiple integral in (7) analytically, so we need to employ a numerical integration method. Monte Carlo integration is one of the most widely used techniques for numerical integration. Let $\{\delta^{(1)}, \ldots, \delta^{(m)}\}$ be samples generated from the posterior distribution $p(\delta \mid Y, X)$. Then (7) is approximated by

$$E[f(\delta)] \approx \frac{1}{m} \sum_{i=1}^{m} f(\delta^{(i)}) \tag{8}$$

for sufficiently large $m$.
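As a concrete illustration of (8), the sketch below estimates the posterior probability of $A(1) + B(1) < 1$; the uniform draws stand in for actual sampler output and are purely hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior draws of the GARCH coefficients; in practice these
# would come from the Markov chain sampler described in Section 2.2.
alpha_draws = rng.uniform(0.0, 0.4, size=(5000, 2))  # alpha_1, alpha_2
beta_draws = rng.uniform(0.0, 0.4, size=(5000, 2))   # beta_1, beta_2

# Monte Carlo estimate (8) with f(delta) = I(A(1) + B(1) < 1): the posterior
# probability that the unconditional variance is finite.
f = (alpha_draws.sum(axis=1) + beta_draws.sum(axis=1)) < 1.0
prob = f.mean()
```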
To apply the Monte Carlo method, we need to generate samples $\{\delta^{(1)}, \ldots, \delta^{(m)}\}$ from the posterior distribution. Since we cannot generate them directly, we use the Metropolis–Hastings (MH) algorithm, which, like the Gibbs sampler, is a Markov chain sampling procedure. In the MH algorithm, we generate a value $\hat\delta$ from the proposal distribution $g(\delta)$ and accept the proposed value with probability

$$\lambda(\delta, \hat\delta) = \min\left\{\frac{p(\hat\delta \mid Y, X)/g(\hat\delta)}{p(\delta \mid Y, X)/g(\delta)},\; 1\right\}. \tag{9}$$

See Tierney (1994) and Chib and Greenberg (1995), among others, for further explanation and discussion of the MH algorithm and MCMC methods.
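A minimal sketch of one MH update of the form (9), with densities on the log scale for numerical stability; the $N(0, 1)$ target and $N(0, 2^2)$ proposal are toy stand-ins for the posterior and the model-based proposals:

```python
import numpy as np

rng = np.random.default_rng(1)

def mh_step(delta, log_post, log_prop_pdf, draw_prop):
    """One independence-chain Metropolis-Hastings update, cf. eq. (9)."""
    cand = draw_prop()
    log_ratio = (log_post(cand) - log_prop_pdf(cand)) \
              - (log_post(delta) - log_prop_pdf(delta))
    if np.log(rng.uniform()) < min(log_ratio, 0.0):
        return cand      # accept the proposal
    return delta         # reject: keep the current value

# Toy illustration: N(0, 1) target sampled through a N(0, 2^2) proposal.
log_post = lambda d: -0.5 * d ** 2
log_prop = lambda d: -0.5 * (d / 2.0) ** 2
chain = [0.0]
for _ in range(2000):
    chain.append(mh_step(chain[-1], log_post, log_prop,
                         lambda: rng.normal(0.0, 2.0)))
```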

2.2. MCMC procedure

We outline our new MCMC procedure for the ARMA–GARCH model here; more details are given in the appendix of this paper.

To construct an MCMC procedure for the ARMA–GARCH model, we divide the parameters into two groups. Let $\delta_1 = (\epsilon_0, \gamma, \phi, \theta)$ be the first group and $\delta_2 = (\alpha, \beta)$ the second. For each group of parameters, we use different proposal distributions.

The proposal distributions for the first group $\delta_1$ are based on the original ARMA–GARCH model:

$$y_t = x_t\gamma + \sum_{j=1}^{p} \phi_j (y_{t-j} - x_{t-j}\gamma) + \epsilon_t + \sum_{j=1}^{q} \theta_j \epsilon_{t-j}, \qquad \epsilon_t \sim N(0, \sigma_t^2), \tag{10}$$

but we assume that the conditional variances $\{\sigma_t^2\}_{t=1}^{n}$ are fixed and known. Using (10), we can generate $\delta_1$ from its proposal distributions by the MCMC procedure of Chib and Greenberg (1994), with some modifications.
The proposal distributions for the second group $\delta_2$ are based on an approximated GARCH model:

$$\epsilon_t^2 = \alpha_0 + \sum_{j=1}^{l} (\alpha_j + \beta_j)\epsilon_{t-j}^2 + w_t - \sum_{j=1}^{s} \beta_j w_{t-j}, \qquad w_t \sim N(0, 2\sigma_t^4), \tag{11}$$

where $l = \max\{r, s\}$, $\alpha_j = 0$ for $j > r$, and $\beta_j = 0$ for $j > s$. Model (11) is derived from a well-known property of the GARCH model: as shown in Bollerslev (1986), the GARCH(r, s) model (2) can be expressed as an ARMA(l, s) process in $\{\epsilon_t^2\}_{t=1}^{n}$:

$$\epsilon_t^2 = \alpha_0 + \sum_{j=1}^{l} (\alpha_j + \beta_j)\epsilon_{t-j}^2 + \tilde w_t - \sum_{j=1}^{s} \beta_j \tilde w_{t-j}, \tag{12}$$

where $\tilde w_t = \epsilon_t^2 - \sigma_t^2$. Since $\tilde w_t = (\epsilon_t^2/\sigma_t^2 - 1)\sigma_t^2 = (\chi^2(1) - 1)\sigma_t^2$, the conditional mean of $\tilde w_t$ is $E(\tilde w_t \mid F_{t-1}) = 0$ and the conditional variance is $\mathrm{Var}(\tilde w_t \mid F_{t-1}) = 2\sigma_t^4$. Replacing $\tilde w_t$ with $w_t \sim N(0, 2\sigma_t^4)$,² we obtain (11). Given $\{\sigma_t^2\}_{t=1}^{n}$ and $\{\epsilon_t^2\}_{t=1}^{n}$, we generate $\delta_2$ from its proposal distributions by an MCMC procedure similar to Chib and Greenberg's method.
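The moments behind (11) can be checked numerically. The sketch below simulates a GARCH(1,1) with arbitrary illustrative parameters and verifies that $\tilde w_t/\sigma_t^2$ has mean roughly 0 and variance roughly 2:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a GARCH(1,1); parameter values are arbitrary illustrations.
a0, a1, b1, n = 0.001, 0.2, 0.3, 50000
sig2 = np.empty(n)
eps = np.empty(n)
sig2[0] = a0 / (1.0 - a1 - b1)        # start at the unconditional variance
eps[0] = np.sqrt(sig2[0]) * rng.normal()
for t in range(1, n):
    sig2[t] = a0 + a1 * eps[t - 1] ** 2 + b1 * sig2[t - 1]
    eps[t] = np.sqrt(sig2[t]) * rng.normal()

w = eps ** 2 - sig2   # martingale difference: E[w_t | F_{t-1}] = 0
z = w / sig2          # distributed as chi^2(1) - 1, so mean 0 and variance 2
```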
The outline of our MCMC procedure is as follows:

(a) generate $\delta_1 = (\epsilon_0, \gamma, \phi, \theta)$ from the proposal distribution based on (10), given $\delta_2$;
(b) generate $\delta_2 = (\alpha, \beta)$ from the proposal distribution based on (11), given $\delta_1$;
(c) apply the MH algorithm after each parameter is generated in (a) or (b);
(d) repeat (a)–(c) until the sequences become stable.

In this procedure, we update $\{\epsilon_t^2\}_{t=1}^{n}$ and $\{\sigma_t^2\}_{t=1}^{n}$ every time the corresponding parameters are updated. A full description of the new procedure is given in the appendix of this paper.

² One might suspect that this normal approximation is quite poor, but in our experience this approach works fine.
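The alternation in steps (a)–(d) can be sketched as a runnable skeleton. The two update functions below are hypothetical stand-ins (simple random perturbations) for the proposal draws from (10) and (11) combined with the MH accept/reject step:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in updates: each would, in the real sampler, build the proposal from
# (10) or (11) and apply the MH accept/reject step (9).
def update_delta1(delta1, delta2):
    return delta1 + 0.1 * rng.normal(size=delta1.shape)

def update_delta2(delta2, delta1):
    return np.abs(delta2 + 0.1 * rng.normal(size=delta2.shape))

delta1 = np.zeros(4)           # (eps0, gamma, phi, theta) stacked, illustrative
delta2 = np.array([0.1, 0.1])  # (alpha, beta), illustrative
draws = []
for it in range(1000):
    delta1 = update_delta1(delta1, delta2)   # step (a) + (c)
    delta2 = update_delta2(delta2, delta1)   # step (b) + (c)
    draws.append(np.concatenate([delta1, delta2]))
draws = np.array(draws)
```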

3. An example of the MCMC estimation

In this section, we demonstrate how our new MCMC method is used in an application. We consider the following regression model:

$$y_t = \gamma_1 + \gamma_2 x_t + u_t, \quad (t = 1, \ldots, n),$$
$$u_t = \phi_1 u_{t-1} + \epsilon_t + \theta_1 \epsilon_{t-1} + \theta_2 \epsilon_{t-2} + \theta_3 \epsilon_{t-3} + \theta_4 \epsilon_{t-4}, \qquad \epsilon_t \mid F_{t-1} \sim N(0, \sigma_t^2),$$
$$\sigma_t^2 = \alpha_0 + \alpha_1 \epsilon_{t-1}^2 + \alpha_2 \epsilon_{t-2}^2 + \alpha_3 \epsilon_{t-3}^2 + \alpha_4 \epsilon_{t-4}^2 + \beta_1 \sigma_{t-1}^2 + \beta_2 \sigma_{t-2}^2, \tag{13}$$

where

$$(\gamma_1, \gamma_2) = (1.0, 1.0), \quad \phi_1 = 0.9,$$
$$(\theta_1, \theta_2, \theta_3, \theta_4) = (-0.48, 0.36, -0.24, 0.12),$$
$$(\alpha_0, \alpha_1, \alpha_2, \alpha_3, \alpha_4) = (0.001, 0.24, 0.18, 0.12, 0.06),$$
$$(\beta_1, \beta_2) = (0.2, 0.1),$$

$x_t$ is drawn from the uniform distribution between $-0.5$ and $0.5$, and we set $n = 1000$.
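A minimal simulation of model (13) with the stated true values might look like the following; setting pre-sample values to zero and initializing $\sigma_t^2$ at $\alpha_0$ are our assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(4)

# True parameter values of model (13).
n = 1000
g1, g2, phi1 = 1.0, 1.0, 0.9
theta = np.array([-0.48, 0.36, -0.24, 0.12])
alpha = np.array([0.001, 0.24, 0.18, 0.12, 0.06])
beta = np.array([0.2, 0.1])

x = rng.uniform(-0.5, 0.5, n)
eps = np.zeros(n)
sig2 = np.full(n, alpha[0])   # assumption: initialize variance at alpha_0
u = np.zeros(n)
y = np.zeros(n)
for t in range(n):
    past_e2 = np.array([eps[t - j] ** 2 if t - j >= 0 else 0.0
                        for j in range(1, 5)])
    past_s2 = np.array([sig2[t - j] if t - j >= 0 else alpha[0]
                        for j in range(1, 3)])
    sig2[t] = alpha[0] + alpha[1:] @ past_e2 + beta @ past_s2
    eps[t] = np.sqrt(sig2[t]) * rng.normal()
    ma = sum(theta[j - 1] * (eps[t - j] if t - j >= 0 else 0.0)
             for j in range(1, 5))
    u[t] = (phi1 * u[t - 1] if t > 0 else 0.0) + eps[t] + ma
    y[t] = g1 + g2 * x[t] + u[t]
```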
We estimate (13) by our new MCMC method and, for comparison, by maximum-likelihood estimation (MLE). The maximum-likelihood estimates of the parameters are computed with the simulated annealing algorithm of Goffe (1996). For this data set, the covariance estimator $n^{-1}A_0^{-1}B_0A_0^{-1}$, where

$$A_0 = -\frac{1}{n}\sum_{t=1}^{n} \frac{\partial^2 \ln l_t(\delta)}{\partial\delta\,\partial\delta'} \quad \text{and} \quad B_0 = \frac{1}{n}\sum_{t=1}^{n} \frac{\partial \ln l_t(\delta)}{\partial\delta}\,\frac{\partial \ln l_t(\delta)}{\partial\delta'},$$

is not positive definite at the maximum-likelihood estimates of $\delta$, because $A_0$ is not. Therefore, we use $n^{-1}B_0^{-1}$ as the covariance estimator.
The MCMC procedure is performed as follows. We generate 60,000 runs of the Markov chain sampling in a single sample path. The number of runs to be discarded as the initial burn-in is determined by the convergence diagnostic (CD) in Geweke (1992). Let $m_0$ denote the number of runs to be discarded and $m$ the number of runs to be retained. Let $\bar\delta_1$ denote the sample mean of $\delta$ for the first $m_1$ runs in the retained sample path of $m$ runs, and $\bar\delta_2$ the sample mean for the last $m_2$ runs. As Geweke (1992) suggested, we set $m_1 = 0.1m$ and $m_2 = 0.5m$. The test statistic for the CD is given by

$$(\bar\delta_2 - \bar\delta_1)\big/\big[\hat S_{\delta_1}(0)/m_1 + \hat S_{\delta_2}(0)/m_2\big]^{1/2}, \tag{14}$$

where $\hat S_{\delta_i}(0)$ is the spectral density estimate at frequency zero for the $m_i$ runs; (14) asymptotically follows the standard normal distribution. By this diagnostic, we choose $m_0 = 6000$ and $m = 54{,}000$.³ Estimation results from the MLE and the MCMC are shown in Table 1.
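A simplified version of the diagnostic (14) can be coded as below; for brevity the spectral density at frequency zero is replaced by the sample variance, which is adequate only for weakly autocorrelated chains:

```python
import numpy as np

rng = np.random.default_rng(5)

def geweke_cd(chain, frac1=0.1, frac2=0.5):
    """Geweke (1992) convergence diagnostic, cf. (14), with the spectral
    density at frequency zero approximated by the sample variance."""
    m = len(chain)
    a = chain[: int(frac1 * m)]       # first m1 = 0.1 m runs
    b = chain[-int(frac2 * m):]       # last m2 = 0.5 m runs
    num = b.mean() - a.mean()
    den = np.sqrt(a.var() / len(a) + b.var() / len(b))
    return num / den

# On an i.i.d. (perfectly converged) chain the statistic is roughly N(0, 1).
cd = geweke_cd(rng.normal(size=54000))
```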
In Table 1, the MLE and the MCMC produce comparable estimates for $\gamma$, $\phi$, and $\theta$. All t-ratios of these parameters in the MLE are greater than 2.0. For $\alpha$ and $\beta$, however, the t-ratios are less than 2.0, except for $\alpha_1$. In the MCMC, the 95% interval does not include zero for $\gamma$, $\phi$, and $\theta$ (this is guaranteed for $\alpha$ and $\beta$ by assumption), and the true value lies within the 95% interval for all parameters.

We also estimate the posterior probability of $A(1) + B(1) < 1$, the condition for a finite unconditional variance in the GARCH(r, s) process (2). This probability is estimated by (8) with $f(\delta) = I(A(1) + B(1) < 1)$. The estimated posterior probability is shown at the bottom of Table 1. The value, 0.917, is reasonable because $A(1) + B(1) = 0.24 + 0.18 + 0.12 + 0.06 + 0.2 + 0.1 = 0.9 < 1$, so the GARCH process in (13) has a finite unconditional variance.

Table 1
Estimation results

| Parameter | True value | MLE^a | Posterior mean^b | s.d. | Median [95% interval]^c | $\hat\rho$^d | CD^e |
|---|---|---|---|---|---|---|---|
| $\gamma_1$ | 1.0 | 0.982 (0.0128) | 0.982 (7.35e-5) | 0.0131 | 0.982 [0.957, 1.008] | 0.143 | -0.551 |
| $\gamma_2$ | 1.0 | 1.007 (0.00550) | 1.007 (3.39e-5) | 0.00564 | 1.007 [0.996, 1.018] | 0.168 | -0.435 |
| $\phi_1$ | 0.9 | 0.879 (0.0210) | 0.882 (0.000366) | 0.0184 | 0.883 [0.844, 0.916] | 0.706 | 0.209 |
| $\theta_1$ | -0.48 | -0.433 (0.0382) | -0.437 (0.000757) | 0.0344 | -0.436 [-0.502, -0.369] | 0.908 | 1.43 |
| $\theta_2$ | 0.36 | 0.306 (0.0362) | 0.298 (0.000774) | 0.0357 | 0.298 [0.230, 0.368] | 0.903 | -0.756 |
| $\theta_3$ | -0.24 | -0.228 (0.0362) | -0.228 (0.000887) | 0.0377 | -0.230 [-0.301, -0.153] | 0.911 | -0.0146 |
| $\theta_4$ | 0.12 | 0.161 (0.0337) | 0.159 (0.000620) | 0.0302 | 0.159 [0.0990, 0.218] | 0.893 | 0.600 |
| $\epsilon_0$ | — | 0.168 (0.168) | 0.0935 (0.00173) | 0.259 | 0.151 [-0.424, 0.554] | 0.373 | 0.0750 |
| $\alpha_0$ | 0.001 | 0.000774 (0.000705) | 0.000829 (5.42e-6) | 0.000200 | 0.000815 [0.000473, 0.00126] | 0.691 | -0.502 |
| $\alpha_1$ | 0.24 | 0.223 (0.0562) | 0.228 (0.000406) | 0.0509 | 0.228 [0.130, 0.329] | 0.406 | 0.507 |
| $\alpha_2$ | 0.18 | 0.114 (0.246) | 0.144 (0.000997) | 0.0586 | 0.144 [0.0295, 0.260] | 0.546 | -0.660 |
| $\alpha_3$ | 0.12 | 0.139 (0.112) | 0.134 (0.00140) | 0.0603 | 0.132 [0.0225, 0.260] | 0.604 | 0.276 |
| $\alpha_4$ | 0.06 | 0.0613 (0.157) | 0.0688 (0.000939) | 0.0446 | 0.0627 [0.00395, 0.168] | 0.566 | -0.746 |
| $\beta_1$ | 0.2 | 0.373 (1.08) | 0.236 (0.00369) | 0.131 | 0.233 [0.0158, 0.491] | 0.805 | -0.100 |
| $\beta_2$ | 0.1 | 0.0238 (0.559) | 0.123 (0.000927) | 0.0855 | 0.108 [0.00558, 0.314] | 0.624 | 1.17 |
| $P\{A(1)+B(1)<1\}$ | 1 | 1.0 | 0.917 (0.00165) | 0.275 | — | 0.249 | 0.700 |

^a Standard error of the MLE in parentheses.
^b Numerical standard error of the posterior mean in parentheses.
^c 95% interval $[Q_{2.5\%}, Q_{97.5\%}]$.
^d First-order autocorrelation of the sample path.
^e Convergence diagnostic test statistic (14).

³ We also tried using every third, fifth, or tenth point of the sample path, instead of all points, to avoid strong serial correlation. The changes in the estimation results, however, were negligible.

4. Concluding remarks

In this paper, we derived a Markov chain Monte Carlo method with the Metropolis–Hastings algorithm for Bayesian inference of a linear regression model with an ARMA–GARCH error, or the ARMA–GARCH model. Our new MCMC method allows the error term not only to be an ARMA(p, q) process but also to have a GARCH variance. This flexibility is one of the major advantages of our new method. Moreover, our method can deal with the pre-sample values of the error term and the conditional variance of the ARMA–GARCH model.

We also applied the new MCMC method to estimate an ARMA–GARCH model of simulated time series data, and demonstrated how to conduct Bayesian inference with samples of the parameters obtained by the Markov chain sampling. We estimated the posterior statistics of the parameters and compared them to the MLE. We also estimated the posterior probability of the finiteness of the unconditional variance in the GARCH(r, s) process, which is difficult to test in the classical approach.

Appendix A

In this appendix, we explain the proposal distributions of the parameters in the ARMA–GARCH model used in the MCMC procedure. The parameters to be generated are (i) the pre-sample error $\epsilon_0$; (ii) the regression coefficients $\gamma$; (iii) the AR coefficients $\phi$; (iv) the MA coefficients $\theta$; (v) the ARCH coefficients $\alpha$; and (vi) the GARCH coefficients $\beta$. We also discuss how to deal with the pre-sample error $w_0$ of the approximated GARCH model (11).
A.1. $\epsilon_0$: Pre-sample error

Given the pre-sample error $\epsilon_0$, the ARMA–GARCH model is rewritten as

$$y_t = x_t\gamma + \sum_{j=1}^{p} \phi_j (y_{t-j} - x_{t-j}\gamma) + \epsilon_t + \sum_{j=1}^{q} \theta_j \epsilon_{t-j} + (\phi_t + \theta_t)\epsilon_0, \tag{A.1}$$

where $y_t = \epsilon_t = 0$ for $t < 0$, $x_t = 0$ for $t < 0$, $\phi_t = 0$ for $t > p$, and $\theta_t = 0$ for $t > q$. In (A.1), $y_t$ does not depend on $\epsilon_0$ for $t > \max\{p, q\}$. Chib and Greenberg (1994) derived a similar expression for a regression model with an ARMA(p, q) error.

The likelihood function of the ARMA–GARCH model is rewritten as

$$f(Y \mid X, \Sigma, \delta_1, \delta_2) = \prod_{t=1}^{n} \frac{1}{\sqrt{2\pi\sigma_t^2}} \exp\left[-\frac{(y_t^{\circ} - x_t^{\circ}\epsilon_0)^2}{2\sigma_t^2}\right], \tag{A.2}$$

where $y_t^{\circ}$ and $x_t^{\circ}$ are computed by

$$y_t^{\circ} = y_t - x_t\gamma - \sum_{j=1}^{p} \phi_j (y_{t-j} - x_{t-j}\gamma) - \sum_{j=1}^{q} \theta_j y_{t-j}^{\circ},$$
$$x_t^{\circ} = (\phi_t + \theta_t) - \sum_{j=1}^{q} \theta_j x_{t-j}^{\circ}, \tag{A.3}$$

with $y_t = y_t^{\circ} = 0$ for $t \le 0$ and $x_t = x_t^{\circ} = 0$ for $t \le 0$. Obviously, $\hat\epsilon_t = y_t^{\circ} - x_t^{\circ}\epsilon_0$. We have the proposal distribution of $\epsilon_0$:

$$\epsilon_0 \mid Y, X, \Sigma \sim N(\hat\mu_{\epsilon_0}, \hat\sigma_{\epsilon_0}^2), \tag{A.4}$$

where $\hat\mu_{\epsilon_0} = \hat\sigma_{\epsilon_0}^2 \left(\sum_{t=1}^{n} x_t^{\circ} y_t^{\circ}/\sigma_t^2 + \mu_{\epsilon_0}/\sigma_{\epsilon_0}^2\right)$ and $\hat\sigma_{\epsilon_0}^2 = \left(\sum_{t=1}^{n} (x_t^{\circ})^2/\sigma_t^2 + \sigma_{\epsilon_0}^{-2}\right)^{-1}$.
e
A.2. $\gamma$: Regression coefficients

The likelihood function of the ARMA–GARCH model is rewritten as

$$f(Y \mid X, \Sigma, \delta_1, \delta_2) = \prod_{t=1}^{n} \frac{1}{\sqrt{2\pi\sigma_t^2}} \exp\left[-\frac{(y_t^* - x_t^*\gamma)^2}{2\sigma_t^2}\right], \tag{A.5}$$

where $y_t^*$ and $x_t^*$ are calculated by the transformation

$$y_t^* = y_t - \sum_{j=1}^{p} \phi_j y_{t-j} - \sum_{j=1}^{q} \theta_j y_{t-j}^*,$$
$$x_t^* = x_t - \sum_{j=1}^{p} \phi_j x_{t-j} - \sum_{j=1}^{q} \theta_j x_{t-j}^*, \tag{A.6}$$

where $y_0^* = \epsilon_0$, $y_t = y_t^* = 0$ for $t < 0$, and $x_t = x_t^* = 0$ for $t \le 0$. This is a modified version of the transformation derived by Chib and Greenberg (1994). It is straightforward to show that $\epsilon_t = y_t^* - x_t^*\gamma$. Let $Y_\gamma = [y_1^*, \ldots, y_n^*]'$ and $X_\gamma = [x_1^{*\prime}, \ldots, x_n^{*\prime}]'$. We have the following proposal distribution of $\gamma$:

$$\gamma \mid Y, X, \Sigma, \delta_{-\gamma} \sim N(\hat\mu_\gamma, \hat\Sigma_\gamma), \tag{A.7}$$

where $\hat\mu_\gamma = \hat\Sigma_\gamma (X_\gamma'\Sigma^{-1}Y_\gamma + \Sigma_\gamma^{-1}\mu_\gamma)$ and $\hat\Sigma_\gamma = (X_\gamma'\Sigma^{-1}X_\gamma + \Sigma_\gamma^{-1})^{-1}$.
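The draw from (A.7) is a weighted-least-squares-style conditional normal. The sketch below illustrates it on synthetic placeholder data; the arrays standing in for $y_t^*$, $x_t^*$, and $\sigma_t^2$ are our own assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic stand-ins for the transformed data (A.6) and known variances.
n, k = 200, 2
Xg = rng.normal(size=(n, k))               # rows play the role of x*_t
true_g = np.array([1.0, -0.5])
sig2 = 0.1 + 0.05 * rng.uniform(size=n)    # sigma_t^2, fixed and known
Yg = Xg @ true_g + np.sqrt(sig2) * rng.normal(size=n)

mu0 = np.zeros(k)                          # prior mean mu_gamma
S0inv = 1e-2 * np.eye(k)                   # prior precision Sigma_gamma^{-1}
W = Xg / sig2[:, None]                     # Sigma^{-1} X_gamma
Shat = np.linalg.inv(Xg.T @ W + S0inv)     # (X' Sigma^{-1} X + S0^{-1})^{-1}
muhat = Shat @ (W.T @ Yg + S0inv @ mu0)    # Shat (X' Sigma^{-1} Y + S0^{-1} mu0)
gamma_draw = rng.multivariate_normal(muhat, Shat)
```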
A.3. $\phi$: AR coefficients

The likelihood function of the ARMA–GARCH model can also be written as

$$f(Y \mid X, \Sigma, \delta_1, \delta_2) = \prod_{t=1}^{n} \frac{1}{\sqrt{2\pi\sigma_t^2}} \exp\left[-\frac{(\tilde y_t - \tilde x_t\phi)^2}{2\sigma_t^2}\right], \tag{A.8}$$

where $\tilde y_t$ and $\tilde x_t$ are calculated by the transformation

$$\tilde y_t = y_t - x_t\gamma - \sum_{j=1}^{q} \theta_j \tilde y_{t-j}, \qquad \tilde x_t = [\tilde y_{t-1}, \ldots, \tilde y_{t-p}], \tag{A.9}$$

where $\tilde y_0 = \epsilon_0$ and $y_t = \tilde y_t = 0$ for $t < 0$. This is also a modified version of the transformation derived by Chib and Greenberg (1994). It is likewise straightforward to show that $\epsilon_t = \tilde y_t - \tilde x_t\phi$. Let $Y_\phi = [\tilde y_1, \ldots, \tilde y_n]'$ and $X_\phi = [\tilde x_1', \ldots, \tilde x_n']'$. We have the proposal distribution of $\phi$:

$$\phi \mid Y, X, \Sigma, \delta_{-\phi} \sim N(\hat\mu_\phi, \hat\Sigma_\phi) I_{C1}(\phi), \tag{A.10}$$

where $\hat\mu_\phi = \hat\Sigma_\phi (X_\phi'\Sigma^{-1}Y_\phi + \Sigma_\phi^{-1}\mu_\phi)$ and $\hat\Sigma_\phi = (X_\phi'\Sigma^{-1}X_\phi + \Sigma_\phi^{-1})^{-1}$.
A.4. $\theta$: MA coefficients

Generation of $\theta$ is a little more complicated, since the error term $u_t$ is a non-linear function of $\theta$. To deal with this complexity, Chib and Greenberg (1994) proposed linearizing $\epsilon_t$ by the first-order Taylor expansion

$$\epsilon_t(\theta) \approx \epsilon_t(\theta^*) + \psi_t(\theta - \theta^*), \tag{A.11}$$

where $\epsilon_t(\theta^*) = y_t^*(\theta^*) - x_t^*(\theta^*)\gamma$ and $\psi_t = [\psi_{1t}, \ldots, \psi_{qt}]$ is the first-order derivative of $\epsilon_t(\theta)$ evaluated at $\theta^*$, given by the recursion

$$\psi_{it} = -\epsilon_{t-i}(\theta^*) - \sum_{j=1}^{q} \theta_j^* \psi_{i,t-j}, \quad (i = 1, \ldots, q), \tag{A.12}$$

where $\psi_{it} = 0$ for $t \le 0$. The choice of $\theta^*$ is crucial for obtaining a suitable approximation. Chib and Greenberg (1994) chose the non-linear least-squares estimate of $\theta$ given the other parameters as $\theta^*$:

$$\theta^* = \mathop{\mathrm{argmin}}_{\theta} \sum_{t=1}^{n} \{\epsilon_t(\theta)\}^2. \tag{A.13}$$

However, the error term of the ARMA–GARCH model is heteroskedastic, whereas Chib and Greenberg applied their approximation to an ARMA model with homoskedastic errors. Thus, instead of (A.13), we use the weighted non-linear least-squares estimate of $\theta$,

$$\theta^* = \mathop{\mathrm{argmin}}_{\theta} \sum_{t=1}^{n} \{\epsilon_t(\theta)\}^2/\sigma_t^2, \tag{A.14}$$

to approximate the likelihood function of model (10). We then have an approximated likelihood function for model (10):

$$f(Y \mid X, \Sigma, \delta_1, \delta_2) = \prod_{t=1}^{n} \frac{1}{\sqrt{2\pi\sigma_t^2}} \exp\left[-\frac{\{\epsilon_t(\theta^*) + \psi_t(\theta - \theta^*)\}^2}{2\sigma_t^2}\right]. \tag{A.15}$$

Let $Y_\theta = [\psi_1\theta^* - \epsilon_1(\theta^*), \ldots, \psi_n\theta^* - \epsilon_n(\theta^*)]'$ and $X_\theta = [\psi_1', \ldots, \psi_n']'$. Using the approximated likelihood function, we have the following proposal distribution of $\theta$:

$$\theta \mid Y, X, \Sigma, \delta_{-\theta} \sim N(\hat\mu_\theta, \hat\Sigma_\theta) I_{C2}(\theta), \tag{A.16}$$

where $\hat\mu_\theta = \hat\Sigma_\theta (X_\theta'\Sigma^{-1}Y_\theta + \Sigma_\theta^{-1}\mu_\theta)$ and $\hat\Sigma_\theta = (X_\theta'\Sigma^{-1}X_\theta + \Sigma_\theta^{-1})^{-1}$.
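The recursion (A.12) is straightforward to implement; here with synthetic residuals standing in for $\epsilon_t(\theta^*)$ and an arbitrary illustrative $\theta^*$:

```python
import numpy as np

def ma_derivatives(eps_star, theta_star):
    """Recursion (A.12): psi[t, i-1] = psi_{it}; pre-sample terms are zero."""
    n, q = len(eps_star), len(theta_star)
    psi = np.zeros((n, q))
    for t in range(n):
        for i in range(1, q + 1):
            acc = -(eps_star[t - i] if t - i >= 0 else 0.0)
            for j in range(1, q + 1):
                if t - j >= 0:
                    acc -= theta_star[j - 1] * psi[t - j, i - 1]
            psi[t, i - 1] = acc
    return psi

rng = np.random.default_rng(7)
eps_star = rng.normal(size=100)                 # placeholder eps_t(theta*)
psi = ma_derivatives(eps_star, np.array([-0.4, 0.2]))
```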

A.5. $w_0$: Pre-sample error of the approximated GARCH model

Before we explain the proposal distributions for $\alpha$ and $\beta$, we discuss how to deal with the pre-sample error $w_0$ of the approximated GARCH model (11). In (11), $\epsilon_t^2$ is given by

$$\epsilon_t^2 = \alpha_0 + \sum_{j=1}^{l} (\alpha_j + \beta_j)\epsilon_{t-j}^2 + w_t - \sum_{j=1}^{s} \beta_j w_{t-j} + \alpha_t w_0, \tag{A.17}$$

where $\epsilon_t^2 = w_t = 0$ for $t < 0$, $\alpha_t = 0$ for $t > r$, and $\beta_t = 0$ for $t > s$. Note that $\epsilon_t^2$ does not depend on $w_0$ for $t > r$.

Unlike $\epsilon_0$, we do not need to generate $w_0$. Suppose that the initial variance $\sigma_0^2$ is equal to $\alpha_0$. Since $w_t = \epsilon_t^2 - \sigma_t^2$, we have

$$w_0 = \epsilon_0^2 - \alpha_0. \tag{A.18}$$

Hence, $w_0$ is automatically determined by (A.18) once $\epsilon_0$ and $\alpha_0$ are obtained.
A.6. $\alpha$: ARCH coefficients

Generation of $\alpha$ is similar to that of $\phi$. The likelihood function of the approximated GARCH model (11) is rewritten as

$$f(\epsilon^2 \mid Y, X, \Sigma, \delta_1, \delta_2) = \prod_{t=1}^{n} \frac{1}{\sqrt{2\pi(2\sigma_t^4)}} \exp\left[-\frac{(\bar\epsilon_t^2 - \zeta_t\alpha)^2}{2(2\sigma_t^4)}\right], \tag{A.19}$$

where $\epsilon^2 = [\epsilon_1^2, \ldots, \epsilon_n^2]'$, $\bar\epsilon_t^2 = \tilde\epsilon_t^2 - \sum_{j=1}^{s} \beta_j \tilde\epsilon_{t-j}^2$, and $\zeta_t = [\tilde\iota_t, \tilde\epsilon_{t-1}^2, \ldots, \tilde\epsilon_{t-r}^2]$. Here $\tilde\epsilon_t^2$ and $\tilde\iota_t$ are obtained by transformation (A.9) with $\theta_j$ replaced by $-\beta_j$ and $y_t - x_t\gamma$ replaced by $\epsilon_t^2$ for $\tilde\epsilon_t^2$, or by 1 for $\tilde\iota_t$. Obviously, $w_t = \bar\epsilon_t^2 - \zeta_t\alpha$. Let $Y_\alpha = [\bar\epsilon_1^2, \ldots, \bar\epsilon_n^2]'$ and $X_\alpha = [\zeta_1', \ldots, \zeta_n']'$. Then we have the following proposal distribution of $\alpha$:

$$\alpha \mid Y, X, \Sigma, \delta_{-\alpha} \sim N(\hat\mu_\alpha, \hat\Sigma_\alpha) I_{C3}(\alpha), \tag{A.20}$$

where $\hat\mu_\alpha = \hat\Sigma_\alpha (X_\alpha'\Lambda^{-1}Y_\alpha + \Sigma_\alpha^{-1}\mu_\alpha)$, $\hat\Sigma_\alpha = (X_\alpha'\Lambda^{-1}X_\alpha + \Sigma_\alpha^{-1})^{-1}$, and $\Lambda = \mathrm{diag}\{2\sigma_1^4, \ldots, 2\sigma_n^4\}$.
A.7. $\beta$: GARCH coefficients

Generation of $\beta$ is similar to that of $\theta$. We approximate the likelihood function of (11) as

$$f(\epsilon^2 \mid Y, X, \Sigma, \delta_1, \delta_2) = \prod_{t=1}^{n} \frac{1}{\sqrt{2\pi(2\sigma_t^4)}} \exp\left[-\frac{\{w_t(\beta^*) - \xi_t(\beta - \beta^*)\}^2}{2(2\sigma_t^4)}\right], \tag{A.21}$$

where $\xi_t = [\xi_{1t}, \ldots, \xi_{st}]$ is the first-order derivative of $\sigma_t^2(\beta)$ evaluated at $\beta^*$, computed by transformation (A.12) with $-\epsilon_{t-i}(\theta^*)$ replaced by $\sigma_{t-i}^2(\beta^*)$ and $\theta_j^*$ by $-\beta_j^*$; $\beta^*$ is the solution of (A.14) with $\theta \to \beta$, $\epsilon_t(\theta) \to w_t(\beta)$, and $\sigma_t^2 \to 2\sigma_t^4$. Let $Y_\beta = [w_1(\beta^*) + \xi_1\beta^*, \ldots, w_n(\beta^*) + \xi_n\beta^*]'$ and $X_\beta = [\xi_1', \ldots, \xi_n']'$. Using the approximated likelihood function (A.21), we have the following proposal distribution of $\beta$:

$$\beta \mid Y, X, \Sigma, \delta_{-\beta} \sim N(\hat\mu_\beta, \hat\Sigma_\beta) I_{C4}(\beta), \tag{A.22}$$

where $\hat\mu_\beta = \hat\Sigma_\beta (X_\beta'\Lambda^{-1}Y_\beta + \Sigma_\beta^{-1}\mu_\beta)$ and $\hat\Sigma_\beta = (X_\beta'\Lambda^{-1}X_\beta + \Sigma_\beta^{-1})^{-1}$.
References

Bollerslev, T., 1986. Generalized autoregressive conditional heteroskedasticity. Journal of Econometrics 31, 307–327.
Chib, S., Greenberg, E., 1994. Bayes inference in regression models with ARMA(p, q) errors. Journal of Econometrics 64, 183–206.
Chib, S., Greenberg, E., 1995. Understanding the Metropolis–Hastings algorithm. American Statistician 49, 327–335.
Engle, R.F., 1982. Autoregressive conditional heteroskedasticity with estimates of the variance of United Kingdom inflation. Econometrica 50, 987–1008.
Geweke, J., 1989a. Bayesian inference in econometric models using Monte Carlo integration. Econometrica 57, 1317–1339.
Geweke, J., 1989b. Exact predictive densities for linear models with ARCH disturbances. Journal of Econometrics 40, 63–86.
Geweke, J., 1992. Evaluating the accuracy of sampling-based approaches to the calculation of posterior moments. In: Bernardo, J.M., Berger, J.O., Dawid, A.P., Smith, A.F.M. (Eds.), Bayesian Statistics 4. Oxford University Press, Oxford, pp. 169–193.
Goffe, W.L., 1996. SIMANN: a global optimization algorithm using simulated annealing. Studies in Nonlinear Dynamics and Econometrics 1, 169–176.
Hastings, W.K., 1970. Monte Carlo sampling methods using Markov chains and their applications. Biometrika 57, 97–109.
Kleibergen, F., Van Dijk, H.K., 1993. Non-stationarity in GARCH models: a Bayesian analysis. Journal of Applied Econometrics 8, S41–S61.
Metropolis, N., Rosenbluth, A.W., Rosenbluth, M.N., Teller, A.H., Teller, E., 1953. Equation of state calculations by fast computing machines. Journal of Chemical Physics 21, 1087–1092.
Müller, P., Pole, A., 1995. Monte Carlo posterior integration in GARCH models. Manuscript, Duke University.
Tierney, L., 1994. Markov chains for exploring posterior distributions. Annals of Statistics 22, 1701–1762.