
Comparison of Griddy Gibbs and Metropolis-Hastings Samplers for
Estimation of the Standard LNSV Model∗
Didit B. Nugroho1) and Takayuki Morimoto2)



1) Department of Mathematics, Satya Wacana Christian University
2) Department of Mathematical Sciences, Kwansei Gakuin University

Abstract
This article compares the performance of two MCMC samplers for estimating the parameters and latent stochastic processes of the standard log-normal stochastic volatility (LNSV) model: the Griddy Gibbs (GG) sampler and the Metropolis-Hastings (MH) sampler. To illustrate the comparison, we apply the model and samplers to daily returns on three stocks of the TOPIX Core 30: Hitachi Ltd., Nissan Motor Co. Ltd. and Panasonic Corp., from 5th January 2004 to 30th December 2008. Based on the standardized innovations, the normality and correlation test statistics indicate that the standard LNSV model estimated by either sampler captures the return dynamics of those stocks successfully, with the MH sampler better able to capture extreme observations in the tails of the distribution. Using six loss functions, with the daily realized volatility as a proxy, the GMLE (Gaussian quasi-maximum likelihood function) criterion indicates that the GG sampler provides the most accurate volatility estimates, while the results for the other functions show no clear pattern. It is also shown that the volatility estimated by the MH sampler is more persistent and less variable than that estimated by the GG sampler. In computational time, the GG sampler is much more costly, although it seems easier to implement.
Keywords: Standard stochastic volatility, Bayesian inference, single-move Markov Chain Monte Carlo, Griddy Gibbs, Metropolis-Hastings, TOPIX Core 30.

1 Introduction

Among models of financial time series, the stochastic volatility (SV) model is recognized as one of the most important classes, as it can capture the commonly observed change over time in the variance of an observed stock index or exchange rate. The most popular and widely used SV model is the log-normal (LN) SV model, first introduced by Taylor (1982). In his discrete-time model, the volatility process is modeled as a first-order autoregression for the log-squared volatility. The LNSV model provides a more realistic and adequate description than the ARCH-type models (see, for example, Ghysels et al. (1996)) and the GARCH-type models (see, for example, Kim et al. (1998), Yu (2002) and Carnero et al. (2001)).
∗ This article was presented at the International Conference on Recent Development in Statistics, Empirical Finance and Econometrics, Kyoto University, Japan, 29th November–1st December 2011. The email addresses of the authors are, respectively, didit.budinugroho@staff.uksw.edu and morimot@kwansei.ac.jp. The first author is a PhD candidate. The first author also wishes to thank Shuichi Nagata for some Matlab codes and helpful discussions.

Unfortunately, it is not possible to obtain an explicit expression for the likelihood function of the unknown parameters in SV models. An approach that has become very attractive is the Bayesian approach, which was first proposed by Shephard (1993) and Jacquier et
al. (1994). Inference in this approach often requires advanced Bayesian computation, and here we focus on Markov Chain Monte Carlo (MCMC) sampling. MCMC makes it possible to obtain the conditional posterior distributions of the parameters by simulation rather than by analytical methods. The updating scheme for the random samples usually involves both standard Gibbs sampling steps and the Metropolis-Hastings (MH) sampler (Metropolis and Ulam (1949), Metropolis et al. (1953) and Hastings (1970)) for sampling the volatility process and the autoregressive coefficient, whose conditional posteriors do not have a standard form.
Ritter and Tanner (1992) describe a procedure to obtain random samples in a Gibbs sampler when the posterior is univariate and hard to derive or to sample from. The method is called the Griddy Gibbs (GG) sampler and is widely applicable. Unlike with the MH sampler, in the GG sampler we do not need to find an efficient proposal distribution. This sampler has been used by Bauwens and Lubrano (1998) to conduct Bayesian inference on GARCH models and also by Tsay (2010) to sample volatility from the LNSV model.
In this article we consider a standard LNSV model and compare the performance of the GG sampler with that of the MH sampler. That is, we independently use the two methods
to sample the log-squared volatility and parameters in the model fitted to daily returns on
three stocks of the TOPIX Core 30, which are Hitachi Ltd., Nissan Motor Co. Ltd. and
Panasonic Corp., from 5th January 2004 to 30th December 2008.
The article is organized as follows. Section 2 discusses the model specification and presents the description of the Bayesian MCMC method, the conditional posteriors and the sampling algorithms. In Section 3, we apply the model and the samplers to the daily returns on three stocks to obtain volatilities and estimates of the model parameters. Finally, Section 4 gives some concluding remarks.

2 MCMC in the LNSV Model

2.1 Standard LNSV Model

The discrete time LNSV model analyzed in this article is the standard one given by

R_t = exp(h_t/2) ε_t,    ε_t ~ iid N(0, 1),
h_{t+1} = α + φ(h_t − α) + τ ν_{t+1},    ν_{t+1} ~ iid N(0, 1),

for t = 1, 2, ..., T, where h_t = ln σ_t² for the unobservable volatility σ_t of the asset on day t, and R_t is the asset return on day t from which the mean and autocorrelations have been removed. We assume {ε_t} and {ν_t} are independent normal white noise processes. The value of φ measures the autocorrelation present in the log-volatility. Thus φ can be interpreted as the persistence in the volatility, the constant scaling factor exp(α/2) as the modal instantaneous volatility, and τ as the volatility of the log-volatility (cf. Kim et al. (1998)). The process h_t is assumed to be stationary. It is common to assume that 0 ≤ φ < 1, because volatility is positively autocorrelated in most financial time series, and

h_1 ~ N(α, τ²/(1 − φ²))    and    h_t | h_{t−1} ~ N(α + φ(h_{t−1} − α), τ²),    for t = 2, ..., T.
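To make the data-generating process concrete, the two equations above can be simulated directly. The following sketch is in Python with NumPy (the authors used MATLAB); the parameter values are hypothetical.

```python
import numpy as np

def simulate_lnsv(T, alpha, phi, tau, seed=0):
    """Simulate returns R_t and log-volatilities h_t from the standard LNSV model."""
    rng = np.random.default_rng(seed)
    h = np.empty(T)
    # h_1 is drawn from the stationary distribution N(alpha, tau^2 / (1 - phi^2)).
    h[0] = rng.normal(alpha, tau / np.sqrt(1.0 - phi**2))
    for t in range(1, T):
        # h_{t+1} = alpha + phi (h_t - alpha) + tau nu_{t+1}
        h[t] = alpha + phi * (h[t - 1] - alpha) + tau * rng.normal()
    # R_t = exp(h_t / 2) eps_t
    R = np.exp(0.5 * h) * rng.normal(size=T)
    return R, h

R, h = simulate_lnsv(T=1000, alpha=0.5, phi=0.97, tau=0.2)
```

With φ close to one, simulated paths display the familiar volatility clustering seen in daily stock returns.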

2.2 MCMC Method

The Bayesian approach begins by completing our model with prior distributions for the unobservable parameters. Following standard practice, assume that

α ~ N(d_α, D_α),    φ ~ Beta(A, B),    τ⁻² ~ Γ(a, b),

where a and b are, respectively, the shape and inverse scale parameters. The priors are then updated into the conditional posteriors. If the conditional posterior of a parameter can be obtained, that parameter can be sampled via sampling methods such as MCMC.
The implementation of MCMC methods involves two steps. In the first step, the methods construct a Markov chain, a sequence of random variables whose distribution converges to the conditional posterior. In the second step, Monte Carlo methods are employed after a sufficiently long burn-in to compute the posterior means of the parameters.
Denote R = (R_1, R_2, ..., R_T), θ = (α, φ, τ) and H = (h_1, h_2, ..., h_T). The general form of the single-move Gibbs sampler for the LNSV model proceeds as follows. Choose arbitrary starting values H^(0), θ^(0), and let i = 0.

1. Sample H^(i+1) from p(H | θ^(i), R).
2. Sample α^(i+1) from p(α | H^(i+1), φ^(i), τ^(i)).
3. Sample φ^(i+1) from p(φ | H^(i+1), α^(i+1), τ^(i)).
4. Sample (τ⁻²)^(i+1) from p(τ⁻² | H^(i+1), α^(i+1), φ^(i+1)).
5. Set i = i + 1 and go to step 1.


2.3 Conditional Posteriors and Sampling Parameters

The conditional posteriors of the parameters are found from the joint posterior of H and θ conditional on the returns R, which is

p(H, θ | R) ∝ ∏_{t=1}^{T} p(R_t | h_t) × p(h_1 | θ) × ∏_{t=2}^{T} p(h_t | h_{t−1}, θ)
    × exp(−(α − d_α)²/(2D_α)) × φ^{A−1} (1 − φ)^{B−1} × (1/τ²)^{a−1} exp(−b/τ²),    (1)

where

p(R_t | h_t) ∝ exp(−h_t/2) exp(−R_t²/(2 exp(h_t))),
p(h_1 | θ) ∝ τ⁻¹ (1 − φ²)^{1/2} exp(−(1 − φ²)(h_1 − α)²/(2τ²)),
p(h_t | h_{t−1}, θ) ∝ τ⁻¹ exp(−[h_t − α − φ(h_{t−1} − α)]²/(2τ²)).

2.3.1 Sampling h_t

The conditional posteriors of h_t are obtained from the relevant part for h_t in the joint posterior (1). Taking logarithms, we have the following expressions:

ln p(h_1 | H_{−1}, θ, R) ∝ −h_1/2 − (R_1²/2) e^{−h_1} − (h_1 − α)²/(2τ²) + (φ/τ²)(h_2 − α)(h_1 − α),

ln p(h_t | H_{−t}, θ, R) ∝ −h_t/2 − (R_t²/2) e^{−h_t} − [h_t − α − φ(h_{t−1} − α)]²/(2τ²)
    − [h_{t+1} − α − φ(h_t − α)]²/(2τ²),    t = 2, ..., T − 1,

ln p(h_T | H_{−T}, θ, R) ∝ −h_T/2 − (R_T²/2) e^{−h_T} − [h_T − α − φ(h_{T−1} − α)]²/(2τ²),


where H_{−t} denotes the vector H with h_t removed. The above posteriors are not of standard form, and thus h_t cannot be sampled directly. There are, however, several ways to sample from these conditional posteriors, such as the Griddy Gibbs and Metropolis-Hastings samplers.
The following algorithm is the Griddy Gibbs sampler procedure described by Rachev et al. (2008) for drawing from w's posterior at the (i + 1)th iteration of the Gibbs sampler:
1. Select an equally spaced grid of values for w, say w_1 ≤ w_2 ≤ · · · ≤ w_m.
2. Compute the value of w's posterior at each of the grid nodes and denote the resultant vector by p(w) = (p(w_1), ..., p(w_m)).
3. Normalize p(w) and denote the resultant vector by p*(w) = (p*(w_1), ..., p*(w_m)).
4. Compute the empirical cumulative distribution function of w, CDF(w).
5. Draw a uniform (0, 1) random variate and denote it by u.
6. Find the element of CDF(w) closest to u without exceeding it.
7. The grid node corresponding to the value of CDF(w) in the previous step is the draw of w from its posterior.
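Steps 1–7 above amount to inverse-CDF sampling on a grid. A minimal generic sketch in Python follows (the standard-normal target is a hypothetical example; in the paper the target is the log-volatility posterior):

```python
import numpy as np

def griddy_gibbs_draw(log_post, grid, rng):
    """Steps 1-7: evaluate the (unnormalized) posterior on a grid, normalize,
    form the empirical CDF, and invert it at a uniform draw u."""
    logp = np.array([log_post(w) for w in grid])
    p = np.exp(logp - logp.max())          # subtract max for numerical stability
    p_star = p / p.sum()                   # step 3: normalized grid probabilities
    cdf = np.cumsum(p_star)                # step 4: empirical CDF
    u = rng.uniform()                      # step 5
    idx = int(np.searchsorted(cdf, u))     # steps 6-7: smallest node with CDF >= u
    return grid[idx]

# Example: draws from a standard normal target on a grid.
rng = np.random.default_rng(1)
grid = np.linspace(-5.0, 5.0, 401)
draws = np.array([griddy_gibbs_draw(lambda w: -0.5 * w**2, grid, rng)
                  for _ in range(5000)])
```

The evaluation of `log_post` at every grid node for every draw is precisely the cost that makes the GG sampler expensive in the timing comparison later.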
Another scheme to sample h_t was given by Kim et al. (1998), who developed a simple acceptance-rejection (AR) MH procedure (see Ripley (1987)). For each t, the AR-MH sampling method at the (i + 1)th iteration of the Gibbs sampler algorithm is as follows:

1. Generate a proposal x_t from a N(m_t, s_t²) distribution, where

   m_t = h_t* + (s_t²/2) [R_t² exp(−h_t*) − 1],

   in which

   h_1* = α + φ(h_2 − α),    s_1² = τ²,
   h_t* = α + φ[(h_{t−1} − α) + (h_{t+1} − α)]/(1 + φ²),    s_t² = τ²/(1 + φ²),    t = 2, ..., T − 1,
   h_T* = α + φ(h_{T−1} − α),    s_T² = τ².
2. Generate u from a uniform (0, 1) distribution.
3. If

   u ≤ p*(R_t, x_t) / g_t*(R_t, x_t, h_t*),

   where

   p*(R_t, x_t) = exp(−x_t/2 − (R_t²/2) exp(−x_t)),
   g_t*(R_t, x_t, h_t*) = exp(−x_t/2 − (R_t²/2) exp(−h_t*)(1 + h_t* − x_t)),

   then set h_t^(i+1) = x_t; otherwise set h_t^(i+1) = h_t^(i).
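A sketch of one AR-MH update in Python follows. The values of h_t*, s_t² and R_t are hypothetical stand-ins; in the full sampler they come from the current α, φ, τ² and the neighbouring h's as in step 1. The acceptance ratio is always at most one because exp(−x) is convex and therefore lies above its tangent line at h_t*.

```python
import numpy as np

def armh_draw_h(h_curr, h_star, s2, R_t, rng):
    """One accept-reject MH update for h_t in the style of Kim et al. (1998)."""
    # Step 1: proposal mean from the linearization of exp(-h) around h_star.
    m_t = h_star + 0.5 * s2 * (R_t**2 * np.exp(-h_star) - 1.0)
    x_t = rng.normal(m_t, np.sqrt(s2))
    # Exact (log) target term and its tangent-line bound at h_star.
    log_p = -0.5 * x_t - 0.5 * R_t**2 * np.exp(-x_t)
    log_g = -0.5 * x_t - 0.5 * R_t**2 * np.exp(-h_star) * (1.0 + h_star - x_t)
    # Steps 2-3: accept x_t with probability p*/g* (<= 1 by convexity).
    if np.log(rng.uniform()) <= log_p - log_g:
        return x_t
    return h_curr

rng = np.random.default_rng(0)
h_new = armh_draw_h(h_curr=0.0, h_star=0.3, s2=0.04, R_t=1.2, rng=rng)
```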

2.3.2 Sampling α

When only α is to be sampled from the joint posterior (1), the conditional posterior for α is the normal distribution with variance and mean defined, respectively, by

s_α² = [ 1/D_α + ((1 − φ²) + (T − 1)(1 − φ)²)/τ² ]⁻¹,

m_α = s_α² [ d_α/D_α + ((1 − φ²) h_1 + (1 − φ) Σ_{t=2}^{T} (h_t − φ h_{t−1}))/τ² ].

Hence α can be sampled directly from N(m_α, s_α²), given H, φ and τ².
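Because this conditional is Gaussian, the draw is direct. The Python sketch below uses hypothetical values for the current state (H, φ, τ²) and also sanity-checks the formulas: m_α should maximize the α-dependent part of the log joint posterior, and the curvature there should equal −1/s_α².

```python
import numpy as np

rng = np.random.default_rng(2)
h = rng.normal(0.5, 1.0, size=200)          # hypothetical current log-volatilities
phi, tau2, d_alpha, D_alpha = 0.95, 0.05, 0.0, 10.0
T = len(h)

# Posterior precision, variance and mean of alpha given H, phi, tau^2.
prec = 1.0 / D_alpha + ((1 - phi**2) + (T - 1) * (1 - phi)**2) / tau2
s2_alpha = 1.0 / prec
m_alpha = s2_alpha * (d_alpha / D_alpha
                      + ((1 - phi**2) * h[0]
                         + (1 - phi) * np.sum(h[1:] - phi * h[:-1])) / tau2)
alpha_draw = rng.normal(m_alpha, np.sqrt(s2_alpha))   # direct Gibbs draw

def log_post_alpha(a):
    """Log of the alpha-dependent part of the joint posterior (1)."""
    lp = -(a - d_alpha)**2 / (2.0 * D_alpha)
    lp += -(1 - phi**2) * (h[0] - a)**2 / (2.0 * tau2)
    resid = h[1:] - a - phi * (h[:-1] - a)
    return lp - np.sum(resid**2) / (2.0 * tau2)
```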

2.3.3 Sampling φ

When only φ is to be sampled from the joint posterior (1), the conditional posterior for φ given H, α and τ² is

p(φ | H, α, τ²) ∝ p(φ) × (1 − φ²)^{1/2} × f_N(φ | μ_φ, s_φ²),

where f_N(φ | μ_φ, s_φ²) denotes the normal density function with mean and variance defined, respectively, by

μ_φ = Σ_{t=2}^{T} (h_t − α)(h_{t−1} − α) / Σ_{t=2}^{T} (h_{t−1} − α)²    and    s_φ² = τ² / Σ_{t=2}^{T} (h_{t−1} − α)².

The above conditional posterior is not of standard form, and hence we draw φ by using the Griddy Gibbs and Metropolis-Hastings samplers. Because the f_N distribution does not depend on the current value of φ, the independence sampler (IS) MH method introduced by Tierney (1994) can be implemented to sample φ from p(φ | H, α, τ²). At the (i + 1)th iteration:

1. Generate a proposal φ* from a N(μ_φ, s_φ²) distribution, subject to 0 ≤ φ* < 1.
2. Generate u from a uniform (0, 1) distribution.
3. If

   u ≤ min{ 1, [p(φ*) (1 − (φ*)²)^{1/2}] / [p(φ^(i)) (1 − (φ^(i))²)^{1/2}] },

   then set φ^(i+1) = φ*; otherwise set φ^(i+1) = φ^(i).
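The IS-MH step for φ can be sketched in Python as follows. The series h and the values of α and τ² are hypothetical stand-ins for the current Gibbs state, and p(φ) is the Beta(A, B) prior kernel, as in the joint posterior (1).

```python
import numpy as np

def ismh_draw_phi(phi_curr, h, alpha, tau2, A, B, rng):
    """Independence-sampler MH update for phi with proposal f_N(mu_phi, s2_phi)."""
    num = np.sum((h[1:] - alpha) * (h[:-1] - alpha))
    den = np.sum((h[:-1] - alpha)**2)
    mu_phi, s2_phi = num / den, tau2 / den
    # Step 1: draw from the normal proposal, truncated to 0 <= phi* < 1.
    while True:
        phi_prop = rng.normal(mu_phi, np.sqrt(s2_phi))
        if 0.0 <= phi_prop < 1.0:
            break
    def log_leftover(p):
        # log of p(phi) * (1 - phi^2)^(1/2): the factors not covered by the proposal.
        return (A - 1) * np.log(p) + (B - 1) * np.log1p(-p) + 0.5 * np.log1p(-p**2)
    # Steps 2-3: accept with probability min(1, ratio).
    if np.log(rng.uniform()) <= log_leftover(phi_prop) - log_leftover(phi_curr):
        return phi_prop
    return phi_curr

# Hypothetical AR(1) log-volatility path as the current state.
rng = np.random.default_rng(4)
h = np.empty(300)
h[0] = 0.5
for t in range(1, 300):
    h[t] = 0.5 + 0.95 * (h[t - 1] - 0.5) + 0.2 * rng.normal()
phi_new = ismh_draw_phi(0.9, h, 0.5, 0.04, A=30.0, B=1.5, rng=rng)
```

Because the proposal ignores the prior and the (1 − φ²)^{1/2} term, only those leftover factors enter the acceptance ratio.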

2.3.4 Sampling τ

Given H, α and φ, τ⁻² can be sampled directly from its conditional posterior, which is the gamma distribution with shape and inverse scale defined, respectively, by

a* = a + T/2    and    b* = b + (1/2)[ (1 − φ²)(h_1 − α)² + Σ_{t=2}^{T} (h_t − α − φ(h_{t−1} − α))² ].
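Because this conditional is conjugate, the draw is a one-liner. The sketch below uses hypothetical state values; note that NumPy's gamma generator is parameterized by shape and scale, so the inverse scale b* enters as scale = 1/b*.

```python
import numpy as np

rng = np.random.default_rng(3)
h = rng.normal(0.5, 0.4, size=500)          # hypothetical current log-volatilities
alpha, phi, a, b = 0.5, 0.95, 5.0, 0.2      # prior: tau^{-2} ~ Gamma(a, b)
T = len(h)

resid = h[1:] - alpha - phi * (h[:-1] - alpha)
a_post = a + 0.5 * T
b_post = b + 0.5 * ((1 - phi**2) * (h[0] - alpha)**2 + np.sum(resid**2))

tau2_inv = rng.gamma(shape=a_post, scale=1.0 / b_post)  # draw tau^{-2}
tau2 = 1.0 / tau2_inv
```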

3 Empirical Illustrations

3.1 Data Description

For illustrative and comparative purposes, we use the series of daily closing prices {S_t} of three stocks of the TOPIX Core 30, namely Hitachi Ltd., Nissan Motor Co. Ltd. and Panasonic Corp., from 5th January 2004 to 30th December 2008, for 1229 observations. The return series are daily percentage mean-corrected returns, R_t, given by the transformation

R_t = 100 × [ ln S_t − ln S_{t−1} − (1/T) Σ_{i=1}^{T} (ln S_i − ln S_{i−1}) ],    for t = 1, ..., T.
Table 1 reports summary statistics of the daily returns for Hitachi Ltd. (HIT), Nissan Motor Co. Ltd. (NIS) and Panasonic Corp. (PAN). As usual, the kurtosis of the returns is significantly above three, indicating leptokurtic return distributions. The Ljung-Box (LB) test statistics indicate that the returns for the HIT stock are serially uncorrelated, while the returns for NIS and PAN are serially correlated at the 5% level. One simple way to adjust these autocorrelated returns is to unsmooth the return series so that the adjusted returns display no serial correlation. This approach can be traced back to Geltner (1991, 1993), and has been applied more recently by Brooks and Kat (2002). The procedure is given as follows:
R*_t = (R_t − ρ_1 R_{t−1}) / (1 − ρ_1),

where ρ_1 is the first-order autocorrelation of the autocorrelated return series R^a, R_t is the return of R^a at time t and R_{t−1} is the one-period lagged return.
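A sketch of the unsmoothing adjustment in Python; here ρ_1 is estimated as the sample first-order autocorrelation (an assumption, since the paper does not state how ρ_1 is computed), and the artificially smoothed series is a hypothetical example.

```python
import numpy as np

def unsmooth(returns):
    """Geltner-type adjustment R*_t = (R_t - rho1 * R_{t-1}) / (1 - rho1)."""
    r = np.asarray(returns, dtype=float)
    rc = r - r.mean()
    rho1 = np.sum(rc[1:] * rc[:-1]) / np.sum(rc**2)   # sample lag-1 autocorrelation
    return (r[1:] - rho1 * r[:-1]) / (1.0 - rho1)

# Example: iid returns smoothed with weight 0.3; the adjustment removes the
# induced serial correlation.
rng = np.random.default_rng(5)
true_r = rng.normal(0.0, 1.0, size=5000)
obs = np.empty_like(true_r)
obs[0] = true_r[0]
for t in range(1, len(true_r)):
    obs[t] = 0.3 * obs[t - 1] + 0.7 * true_r[t]
adj = unsmooth(obs)
```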
Table 1: Descriptive statistics of daily returns for three stocks of the TOPIX Core 30.

Statistics           HIT         NIS                       PAN
                     Raw data    Raw data    Unsmooth      Raw data    Unsmooth
Sample size          1228        1228        1227          1228        1227
Mean                 −1.73e−16   5.86e−16    −8.41e−04     −2.03e−16   −1.82e−04
Standard deviation   1.899       2.217       2.326         2.101       2.001
Kurtosis             7.205       12.2680     12.4919       9.8248      9.8202
LB(8)                8.44        17.04       14.71         15.74       12.32
p-value LB(8)        39.17%      2.97%       6.51%         4.63%       13.76%
Autocorrelation      no          yes         no            yes         no

NOTE: The lag length s = 8 for the LB(s) statistic is selected based on the choice of s ≈ ln(1228) (see Tsay (2010)).

3.2 Empirical Results

The hyperparameters required in the joint posterior are set to d_α = 0, D_α = 10, A = 30, B = 1.5, a = 5 and b = 0.2. The burn-in period of the MCMC simulation consists of 5,000 iterations, and the posterior sample of each parameter consists of N = 10,000 iterations. The MCMC sampler is initialized by setting all the h_t = 1, α = 1, φ = 0.9 and τ² = 0.1. Table 2 summarizes the comparison of the MCMC output, including the posterior mean, the standard deviation (SD) in brackets, the 95% credible interval, the Monte Carlo standard error (MCSE) and the simulation inefficiency factor (SIF). The 95% credible intervals are calculated using the highest posterior density (HPD) method proposed by Chen and Shao (1999). The MCSE is useful for checking the mixing performance of the MCMC simulation and is estimated by σ̂_f/√N (see Roberts (1996)), where σ̂_f² is defined as the variance of the posterior mean computed from correlated draws. Here, the batch size for computing the MCSE is 200 and there are 50 batches. The SIF can be interpreted as the number of successive iterations needed to obtain near-independent samples and is calculated by σ̂_f²/σ̃_f², where σ̃_f is the standard deviation of the posterior mean under the assumption of independent draws. It is useful for checking the efficiency of the algorithm. In addition, we also report the CPU time on a Core2 Duo 2.8GHz with MATLAB version 7.8.0.347 (R2009a).
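The MCSE and SIF computations described above can be sketched with batch means as follows (Python; this is one common construction and may differ in detail from the authors' MATLAB code):

```python
import numpy as np

def mcse_and_sif(draws, n_batches=50):
    """Batch-means MCSE = sigma_hat_f / sqrt(N) and SIF = sigma_hat_f^2 / var(draws)."""
    draws = np.asarray(draws, dtype=float)
    N = len(draws)
    batch = N // n_batches
    means = draws[: batch * n_batches].reshape(n_batches, batch).mean(axis=1)
    sigma2_f = batch * means.var(ddof=1)   # long-run variance of the chain
    mcse = float(np.sqrt(sigma2_f / N))
    sif = float(sigma2_f / draws.var(ddof=1))
    return mcse, sif

rng = np.random.default_rng(6)
mcse_iid, sif_iid = mcse_and_sif(rng.normal(size=10000), n_batches=50)
```

For an iid chain the SIF is close to one; strongly autocorrelated chains, such as those for τ̂² in Table 2, produce much larger values.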
From Table 2 we obtain the following results. Both the MCSE and SIF indicate that the two proposed sampling algorithms mix quite well. The posterior means, standard deviations and 95% intervals for the mode of the stationary distribution of volatility, exp(α̂/2), are all smaller for the GG sampler than for the MH sampler. The posterior means of φ̂ show a higher persistence in volatility for the MH sampler than for the GG sampler. The posterior means, standard deviations and 95% intervals for the conditional variance of volatility, τ̂², are smaller for the MH sampler than for the GG sampler, indicating that the volatility from the MH sampler is less variable than that from the GG sampler. In computational time, we can see that the GG sampler is relatively expensive, because the main cost of the sampler is, of course, the evaluation of the posterior at each of the grid nodes.
Table 2: Posterior samples of daily returns for three stocks of the TOPIX Core 30.

Parameter   Sampler for   Mean (SD)         95% HPD Interval   MCSE      SIF
            h_t and φ
Panel A: HIT stock
exp(α̂/2)   GG            1.4313 (0.1360)   (1.1629, 1.7014)   2.6e−3    3.57
            MH            1.5854 (0.2005)   (1.2390, 1.9728)   3.8e−3    3.67
φ̂          GG            0.9562 (0.0139)   (0.9266, 0.9790)   1.1e−3    59.28
            MH            0.9709 (0.0106)   (0.9492, 0.9893)   0.8e−3    59.40
τ̂²         GG            0.0671 (0.0194)   (0.0351, 0.1068)   2.0e−3    106.30
            MH            0.0430 (0.0126)   (0.0228, 0.0677)   1.4e−3    119.29
CPU time: GG: 54.5255 min. and MH: 4.8504 min.
Panel B: NIS stock
exp(α̂/2)   GG            1.4280 (0.2352)   (1.0225, 1.8849)   3.9e−3    2.68
            MH            1.6457 (0.3917)   (0.9781, 2.3805)   7.3e−3    3.46
φ̂          GG            0.9738 (0.0092)   (0.9541, 0.9890)   0.7e−3    53.84
            MH            0.9848 (0.0066)   (0.9718, 0.9967)   0.5e−3    46.87
τ̂²         GG            0.0501 (0.0121)   (0.0289, 0.0747)   1.1e−3    82.83
            MH            0.0346 (0.0093)   (0.0184, 0.0530)   1.0e−3    108.73
CPU time: GG: 55.0135 min. and MH: 4.8081 min.
Panel C: PAN stock
exp(α̂/2)   GG            1.4038 (0.1704)   (1.0970, 1.7714)   3.6e−3    4.41
            MH            1.5814 (0.2669)   (1.1362, 2.0714)   6.4e−3    5.69
φ̂          GG            0.9701 (0.0100)   (0.9516, 0.9890)   0.6e−3    38.53
            MH            0.9783 (0.0088)   (0.9609, 0.9945)   0.7e−3    68.71
τ̂²         GG            0.0491 (0.0129)   (0.0275, 0.0726)   1.2e−3    85.03
            MH            0.0364 (0.0112)   (0.0183, 0.0591)   1.3e−3    128.18
CPU time: GG: 55.1225 min. and MH: 4.7892 min.

Next, we compare the performance of the volatility estimates. To evaluate the accuracy of the estimated volatility, the realized volatility (RV) is used as a proxy and six loss functions are used (the statistic values are not reported to save space), namely, the root mean-squared error (RMSE) for volatility and variance, the mean absolute error (MAE) for volatility and variance, the logarithmic loss (LL) and the Gaussian quasi-maximum likelihood function (GMLE), as discussed by Bollerslev et al. (1994) and Lopez (2001). The percentage RV is defined by

RV_t = 100 × sqrt( Σ_{k=2}^{N_t} [p(t, k) − p(t, k − 1)]² ),

where N_t is the number of observations on day t and p(t, k) denotes the log-price at the kth observation on day t. Figure 1 plots the corresponding posterior mean of the volatilities, σ̂_t = exp(ĥ_t/2), together with the RV. Those plots confirm that the posterior means of the volatilities from the MH sampler exhibit smoother movements than those from the GG sampler. The GMLE loss function is minimized on all stocks by the GG sampler, while the results for the two RMSEs, the two MAEs and LL indicate no clear pattern.
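Given intraday log-prices, the RV formula above is a one-liner; a sketch in Python with a hypothetical price path:

```python
import numpy as np

def realized_vol(log_prices):
    """Percentage realized volatility for one day:
    RV_t = 100 * sqrt(sum_{k=2}^{N_t} [p(t,k) - p(t,k-1)]^2)."""
    p = np.asarray(log_prices, dtype=float)
    return 100.0 * float(np.sqrt(np.sum(np.diff(p) ** 2)))

rv = realized_vol([0.00, 0.01, 0.00, 0.02])
```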


Figure 1: Time series plots for the posterior mean of volatility σ̂_t = exp(ĥ_t/2) based on the use of the GG and MH samplers for sampling h_t and φ in the standard LNSV model, together with realized volatility.

Table 3: Summary of correlation and normality test statistics of standardized innovations.

Statistics        HIT               NIS               PAN
                  GG       MH       GG       MH       GG       MH
LB(8)             7.45     7.59     8.45     8.03     6.24     6.92
p-value LB(8)     48.86%   47.44%   39.08%   43.09%   62.09%   54.52%
JB                5.61     1.63     5.74     2.24     4.84     2.61
p-value JB        6.05%    44.33%   5.68%    32.58%   8.88%    27.18%

3.3 Model Diagnostic

To examine to what extent the proposed standard LNSV model provides an accurate description of the return dynamics, we consider a measure of goodness of fit based on the standardized innovations ε_t = R_t exp(−h_t/2). These innovations should form a sequence of i.i.d. normal random variables. Table 3 presents a summary of the correlation and Jarque-Bera (JB) normality test statistics of the standardized innovations based on the volatility estimates from the GG and MH samplers. At the 5% significance level, both statistics confirm that the series {ε_t} for all stocks and samplers have no significant serial correlation and follow a normal distribution, indicating quite satisfactory performance in capturing the return dynamics. Figure 2 depicts the QQ plots of the standardized innovations. They show that the standard LNSV model estimated by the MH sampler is better able to capture extreme observations in the tails of the distribution.

Figure 2: QQ plots of standardized innovations based on the GG volatility estimates (top
panel) and the MH volatility estimates (bottom panel).

4 Conclusions

This article compares the performance of the Griddy Gibbs and Metropolis-Hastings samplers for estimating the parameters and latent stochastic processes of the standard log-normal stochastic volatility (LNSV) model. The results, based on daily observations of three stocks of the TOPIX Core 30: Hitachi Ltd., Nissan Motor Co. Ltd. and Panasonic Corp., reveal that the volatility estimated by the MH sampler is more persistent and less variable than that estimated by the GG sampler. From the use of six loss functions, with the daily RV as a proxy, the GMLE criterion indicates that the GG sampler provides the most accurate volatility, while the results for the other functions indicate no clear pattern. In computational time, the GG sampler is much more costly, although it seems easier to implement, because the main cost of this sampler is the evaluation of the posterior at each of the grid nodes. Finally, our empirical study indicates that the MH sampler is better able to capture extreme observations in the tails of the distribution.

References
Bauwens, L., & Lubrano, M. (1998). Bayesian inference on GARCH models using the Gibbs
sampler. The Econometrics Journal , 1 (1), 23–46.
Bollerslev, T., Engle, R. F., & Nelson, D. B. (1994). ARCH Models. In R. Engle &
D. McFadden (Eds.), The Handbook of Econometrics (pp. 2959–3038). Amsterdam:
North-Holland.
Brooks, C., & Kat, H. (2002). The statistical properties of hedge fund index returns and
their implications for investors. The Journal of Alternative Investments, 5 (2), 26–44.

9

Carnero, M. A., Peña, D., & Ruiz, E. (2001). Is stochastic volatility more flexible than
GARCH? Working Paper 01-08, Universidad Carlos III de Madrid. Retrieved from
e-archivo.uc3m.es/bitstream/10016/152/1/w
Geltner, D. (1991). Smoothing in appraisal-based returns. Journal of Real Estate Finance
and Economics, 4 (3), 327–345.
Geltner, D. (1993). Estimating market values from appraised values without assuming an
efficient market. Journal of Real Estate Research, 8 (3), 325–346.
Ghysels, E., Harvey, A. C., & Renault, E. (1996). Stochastic volatility. In G. Maddala &
C. Rao (Eds.), Handbook of Statistics: Statistical Methods in Finance (pp. 119–191).
Amsterdam: Elsevier Science.
Hastings, W. K. (1970). Monte Carlo sampling methods using Markov chains and their
applications. Biometrika, 57 (1), 97–109.
Jacquier, E., Polson, N. G., & Rossi, P. E. (1994). Bayesian analysis of stochastic volatility
models. In N. Shephard (Ed.), Stochastic Volatility: Selected Readings (pp. 247–282).
Oxford University Press, New York.
Kim, S., Shephard, N., & Chib, S. (1998). Stochastic volatility: likelihood inference and
comparison with ARCH models. In N. Shephard (Ed.), Stochastic Volatility: Selected
Readings (pp. 283–322). Oxford University Press, New York.
Lopez, J. A. (2001). Evaluation of predictive accuracy of volatility models. Journal of
Forecasting, 20 (1), 87–109.
Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H., & Teller, E. (1953).
Equation of state calculations by fast computing machines. Journal of Chemical
Physics, 21 (6), 1087–1092.
Metropolis, N., & Ulam, S. (1949). The Monte Carlo method. Journal of the American
Statistical Association, 44 (247), 335–341.
Rachev, S. T., Hsu, J. S. J., Bagasheva, B. S., & Fabozzi, F. J. (2008). Bayesian methods
in finance. John Wiley & Sons.
Ripley, B. D. (1987). Stochastic simulation. John Wiley & Sons.
Ritter, C., & Tanner, M. A. (1992). Facilitating the Gibbs sampler: The Gibbs stopper and
the Griddy-Gibbs sampler. Journal of the American Statistical Association, 87 (419),
861–868.
Roberts, G. O. (1996). Markov chain concepts related to sampling algorithms. In R. S. Gilks
W.R. & D. Spiegelhalter (Eds.), Markov Chain Monte Carlo in Practice (pp. 45–57).
Chapman & Hall, London.
Shephard, N. (1993). Fitting non-linear time series models, with applications to stochastic
variance models. Journal of Applied Econometrics, 8 , 135–152.
Taylor, S. J. (1982). Financial returns modelled by the product of two stochastic processes—
a study of the daily sugar prices 1961–75. In N. Shephard (Ed.), Stochastic Volatility:
Selected Readings (pp. 60–82). Oxford University Press, New York.
Tierney, L. (1994). Markov chains for exploring posterior distributions. Annals of Statistics,
22 (4), 1701–1762.
Tsay, R. S. (2010). Analysis of financial time series. John Wiley & Sons.
Yu, J. (2002). Forecasting volatility in the New Zealand stock market. Applied Financial
Economics, 12 , 193–202.

