Extending the New Keynesian Monetary Model with Data Revision Processes
Jesús Vázquez,1 Ramón María-Dolores2 and Juan M. Londoño1
1 Universidad del País Vasco
2 Universidad de Murcia
Abstract. This paper proposes an extended version of the New Keynesian Monetary (NKM) model that incorporates revision processes for output and inflation data in order to assess the importance of data revisions for the estimated monetary policy rule parameters. The estimation results of the extended NKM model suggest that real-time data are not rational forecasts of revised data. These results are in line with the evidence provided by Aruoba (2008) using a reduced-form econometric approach. Moreover, the different policy parameter estimates obtained depending on whether or not the revision processes are assumed to be well behaved provide evidence that policymakers' decisions could be determined by the availability of data at the time of policy implementation.
Key words: NKM model, monetary policy rule, indirect inference,
real-time data, (non-)rational forecast errors
JEL classification numbers: C32, E30, E52
We thank Mike McAleer and seminar participants at the University of Oxford for comments and discussions. The second author also thanks the Department of Economics at the University of Oxford for its hospitality. Financial support from the Ministerio de Ciencia y Tecnología (Spain) through projects SEJ2007-66592-C03-0102/ECON is acknowledged. Correspondence to: Jesús Vázquez, Departamento de Fundamentos del Análisis Económico II, Universidad del País Vasco, Av. Lehendakari Aguirre 83, 48015, Spain. Phone: +34-94-601-3779, Fax: +34-94-601-7123, e-mail: [email protected]
1
INTRODUCTION
The timing and availability of the data used in the empirical evaluation of policy rules have become a crucial issue. Early studies of real-time data, such as those of Maravall and Pierce (1986), Trivellato and Rettore (1986) and Ghysels, Swanson and Callan (2002), examine revision
process errors. Maravall and Pierce (1986) analyze how preliminary
and incomplete data affect monetary policy and demonstrate that,
even though revisions to measures of money supply are large, monetary policy would not have been much different if more accurate
data had been known. By contrast, Ghysels, Swanson and Callan
(2002) find that a Taylor-type rule would have been significantly improved if policymakers had waited for data to be revised rather than
reacting to newly released data.3
Two of the most well-known studies that compare results based
on real-time data with those obtained with revised data are Diebold
and Rudebusch (1991) and Orphanides (2001). The former shows
that the index of leading indicators does a much worse job in predicting future movements of output in real time than it does after data
are revised. The latter examines parameter as well as model specification uncertainty in the Taylor-rule by using data over a period of
more than 20 years. The paper concludes that the Taylor principle
does not prevail using real-time data. This empirical evidence is in
sharp contrast with that found by Clarida, Galí and Gertler (2000).
With regard to Taylor-rule parameters, Rudebusch (2002) indicates
that data uncertainty potentially plays an important role in reducing the coefficients of the rule that characterize both policy inertia
and shock persistence.4 The main advantage of using real-time data
in estimating policy rules is to reduce the effects of parameter uncertainty in actual policy setting since the researcher can estimate
3 Mankiw, Runkle and Shapiro (1984) is a seminal paper in this branch of the literature. They develop a theoretical framework for analyzing initial announcements of economic data and apply that framework to the money stock.
4 By using reduced-form estimation approaches, some empirical studies, such as English, Nelson and Sack (2003) and Gerlach-Kristen (2004), have shown that both persistent shocks and policy inertia enter the U.S. estimated monetary policy rule. María-Dolores and Vázquez (2006, 2008) obtain similar results for the U.S. and the Eurozone using a structural econometric approach.
policy rules with data which were truly available at any given point
in time. This is particularly important with seasonally adjusted data
as such data are subject to revisions based on two-sided filters.5
As pointed out by Aruoba (2008), should revisions of real-time
data be rational forecast errors, then the arrival of revised data would
not be relevant for policy makers’ decisions and policy rule estimates
would be rather similar using either revised or real-time data. Moreover, Aruoba (2008) documents the empirical properties of revisions
to major macroeconomic variables in the U.S. and points out that
they are not well-behaved. That is, they do not satisfy simple desirable properties such as zero mean, which indicates that the revisions
of initial announcements made by statistical agencies are biased, and
that they might be predictable using the information set available
at the time of the initial announcement.
The literature estimating policy rules with real-time data cited
above uses reduced-form econometric approaches. This paper extends the NKM model to include revision processes of output and
inflation data, and thus to analyze revised and real-time data together by using a structural econometric approach. This extension
allows for (i) a joint estimation procedure of both monetary policy rule and revision process parameters, (ii) an assessment of the interaction between these two sets of parameters and (iii) a test of the null hypothesis establishing that real-time data are a rational forecast of revised data in the context of a dynamic stochastic general equilibrium (DSGE) model.
The use of real-time data in the estimation of a DSGE model
may look tricky because private agents’ (households and firms) decisions determine the true (revised) values of macroeconomic variables,
such as output and inflation, and they are not observable without
error by policymakers in real time. The problem is easily solved by
augmenting the NKM model with the revision processes of output
and inflation. In particular, these revision processes are allowed to
5 Kavajecz and Collins (1995) conclude, using Monte Carlo simulations, that irrationality in seasonally adjusted data arises from the specific seasonal adjustment procedure used by the Federal Reserve.
be determined by the information available at the time the initial
announcements of output and inflation are released.
We follow a classical approach based on the indirect inference
principle suggested by Smith (1993, 2008) to estimate our extended
version of the NKM model. In particular, we follow Smith (1993) by
using an unrestricted VAR as the auxiliary model. More precisely,
the distance function is built upon the coefficients estimated from
a five-variable VAR that considers U.S. quarterly data of revised
output growth, revised inflation, real-time output growth, real-time
inflation and the Fed funds rate.
The estimation results show that most policy rule parameter estimates depend to an important extent on whether or not the revision
processes of output and inflation are assumed to be well-behaved.
In particular, the estimates of the policy inertia and output gap parameters are larger, whereas monetary shock persistence decreases substantially, when allowing for the possibility of non-rational revision
processes. These differences provide empirical evidence that policymakers’ decisions could be determined by the availability of data at
the time of policy implementation. Moreover, the estimates of the
revision process parameters show that the initial announcements of
output and inflation are not rational forecasts. For instance, a 1%
increase in the initial announcement of inflation leads to a downward
revision in output of 2.96%. These estimation results are in line with
the empirical evidence provided by Aruoba (2008) mentioned above,
who finds that data revisions are not well-behaved (i.e. they are not
white noise processes).
The rest of the paper is organized as follows. Section 2 introduces the log-linearized approximation of an augmented version of
the NKM model that includes the revision processes for output and
inflation. Section 3 describes the structural estimation method used
in this paper. Section 4 describes the data and discusses the estimation results. Finally, Section 5 concludes.
2
AN NKM MODEL AUGMENTED WITH
DATA REVISION PROCESSES
The model analyzed in this paper is a standard NKM model augmented with data revision processes. It is given by the following set
of equations:
x_t = E_t x_{t+1} − τ (i_t − E_t π_{t+1}) − φ(1 − ρ_χ) χ_t,   (1)
π_t = β E_t π_{t+1} + κ x_t + z_t,   (2)
i_t = ρ i_{t−1} + (1 − ρ)[ψ_1 π^r_{t−1} + ψ_2 x^r_{t−1}] + v_t,   (3)
x_t ≡ x^r_t + r^x_t,   (4)
π_t ≡ π^r_t + r^π_t,   (5)
r^x_t = b_xx x^r_t + b_xπ π^r_t + ε^r_{xt},   (6)
r^π_t = b_πx x^r_t + b_ππ π^r_t + ε^r_{πt},   (7)
where x denotes the revised output gap (that is, the log-deviation of output with respect to the level of output under flexible prices) and π and i denote the deviations from the steady states of revised inflation and the nominal interest rate, respectively. π^r_t and x^r_t are real-time data for inflation and the output gap, respectively. E_t denotes the conditional expectation based on the agents' information set at time t. χ, z and v denote aggregate productivity, cost-push inflation and monetary policy shocks, respectively. Each of these shocks is further assumed
to follow a first-order autoregressive process:
χ_t = ρ_χ χ_{t−1} + ε_{χt},   (8)
z_t = ρ_z z_{t−1} + ε_{zt},   (9)
v_t = ρ_v v_{t−1} + ε_{vt},   (10)
where ε_{χt}, ε_{zt} and ε_{vt} denote the i.i.d. random innovations associated with these shocks, and their standard deviations are denoted by σ_χ, σ_z and σ_v, respectively.
Equation (1) is the log-linearized consumption first-order condition obtained from the representative agent's optimization problem. The parameter τ > 0 represents the intertemporal elasticity of substitution obtained when assuming a standard constant relative risk aversion utility function.
Equation (2) is the New Phillips curve that is obtained in a sticky
price à la Calvo (1983) model where monopolistically competitive
firms produce (a continuum of) differentiated goods and each firm
faces a downward sloping demand curve for its produced good. The
parameter β ∈ (0, 1) is the agent discount factor, and κ measures
the slope of the New Phillips curve that is related to other structural
parameters as follows
κ = [(1/τ) + η](1 − ω)(1 − ωβ)/ω.
In particular, κ is a decreasing function of Calvo’s probability, ω.
The parameter ω is a measure of the degree of nominal rigidity; a
larger ω implies that fewer firms adjust prices in each period and that
the expected time between price changes is longer.6 At this point, it
is worthwhile emphasizing that the IS and Phillips curve equations
are described in terms of the revised output and inflation data since
they are indeed determined by the optimal choices of private agents
(households and firms).
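Returning to the slope of the New Phillips curve, a quick sense of its magnitude can be obtained from a back-of-the-envelope calculation (our own arithmetic, using the calibration reported later in Section 3: τ = 0.5, η = 3, ω = 0.75 and β = 0.995):

κ = [(1/0.5) + 3](1 − 0.75)(1 − 0.75 × 0.995)/0.75 ≈ 0.42,

so, under this calibration, each percentage point of output gap adds a little under half a percentage point to quarterly inflation, other things equal.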
In contrast to Equations (1)-(2), Equation (3) describes the monetary policy rule based on real-time data of output and inflation truly available at the time of implementing monetary policy. Moreover, the nominal interest rate exhibits smoothing behaviour. As pointed out by Aruoba (2008), the initial announcement of a quarterly (monthly) macroeconomic variable corresponding to a particular quarter (month) appears in the vintage of the next quarter (month), roughly 45 (at least 15) days after the end of the quarter (month). Then, a backward-looking Taylor rule such as (3), which includes lagged values of real-time data on output and inflation, would more accurately approximate the information set available to the Fed at the time of implementing the policy.7
6 See, for instance, Walsh (2003, chapter 5.4) for a detailed analytical derivation of the New Phillips curve.
7 Notice that a backward-looking policy rule does not necessarily exclude the forward-looking Taylor rule setting usually assumed for monetary policy. After all, expectations of output and inflation can be seen as a function of the information set available to the Fed in every period (in this case, real-time output and inflation).
The NKM model is extended to incorporate the revision processes of output and inflation data. Equations (4) and (5) are identities showing how revised data of output and inflation are related to real-time output and inflation, respectively. Then, r^x_t (r^π_t) denotes the revision of output (inflation).8 Equations (6) and (7) describe the revision processes associated with output and inflation, respectively. These processes allow for the existence of non-zero correlations between output and inflation revisions and the initial announcements of these variables.9 ε^r_{xt} and ε^r_{πt} denote i.i.d. random innovations associated with the revision processes, and the corresponding standard deviations are denoted by σ_rx and σ_rπ, respectively.
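To see concretely why non-zero b coefficients make initial announcements non-rational forecasts, the following minimal sketch (our own illustration; all parameter values are arbitrary placeholders, not the estimates reported in Section 4) simulates equations (4)-(7) and checks the correlation between the output revision and the initial inflation announcement:

import numpy as np

rng = np.random.default_rng(0)
T = 200

# placeholder real-time announcements, standing in for x^r_t and pi^r_t
x_rt = rng.normal(0.0, 1.0, T)
pi_rt = rng.normal(0.0, 0.5, T)

# revision processes, equations (6)-(7); illustrative coefficients only
b_xx, b_xpi = 0.2, -0.5
b_pix, b_pipi = 0.0, 0.1
r_x = b_xx * x_rt + b_xpi * pi_rt + rng.normal(0.0, 0.3, T)
r_pi = b_pix * x_rt + b_pipi * pi_rt + rng.normal(0.0, 0.1, T)

# revised data, identities (4)-(5)
x = x_rt + r_x
pi = pi_rt + r_pi

# under rationality the revision would be orthogonal to the information
# available at announcement time; with b_xpi != 0 it clearly is not
print(np.corrcoef(r_x, pi_rt)[0, 1])

A rational-forecast benchmark corresponds to setting all four b coefficients to zero, in which case the printed correlation is close to zero.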
Finally, the model is completed by the following identities:
x_t = E_{t−1} x_t + (x_t − E_{t−1} x_t),
π_t = E_{t−1} π_t + (π_t − E_{t−1} π_t).
The system of equations (1)-(10) (together with the latter two
identities involving forecast errors) can be written in matrix form as
follows:
Γ_0 Y_t = Γ_1 Y_{t−1} + Ψ ε_t + Π η_t,   (11)
where
Y_t = (x_t, π_t, i_t, E_t x_{t+1}, E_t π_{t+1}, χ_t, z_t, v_t, x^r_t, π^r_t, r^x_t, r^π_t)′,
ε_t = (ε_{χt}, ε_{zt}, ε_{vt}, ε^r_{xt}, ε^r_{πt})′,
η_t = (x_t − E_{t−1} x_t, π_t − E_{t−1} π_t)′.
Equation (11) represents a linear rational expectations (LRE)
system. It is well known that LRE systems deliver multiple stable equilibrium solutions for certain parameter values. Lubik and
8 By adding the log of potential output on both sides of (4), we have that r^x_t also denotes the revision of the log of output.
9 The two revision processes assumed here are not intended to provide a structural characterization of the revision processes followed by statistical agencies, but rather a simple framework to assess whether the nature of the revision processes might affect the estimated monetary policy rule.
Schorfheide (2003) characterize the complete set of LRE models
with indeterminacies and provide a numerical method for computing
them. In this paper, we deal only with sets of parameter values that
imply determinacy (uniqueness) of the rational expectations equilibrium. In particular, we impose in the estimation procedure that the
Taylor principle holds (i.e. ψ1 > 1).
The model’s solution yields the output gap, xt . This measure is
not observable. In order to estimate the model by simulation, we have
to transform the output gap into a measure that has an observable
counterpart such as output growth. This is a quite straightforward
exercise since the log-deviation of output from its steady state can be
defined as the output gap plus the (log of the) flexible-price equilibrium level of output, y^f_t, and the latter can be expressed as a linear function of the productivity shock:
y^f_t = φ χ_t.
The log-deviation of output from its steady state is also unobservable. However, the growth rate of output is observable and its model
counterpart is obtained from the first-difference of the log-deviation
of output from its steady state.
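Spelling this mapping out (our own restatement of the step just described), with y_t denoting the log-deviation of output from its steady state,

y_t = x_t + y^f_t = x_t + φ χ_t,   so that   Δy_t = (x_t − x_{t−1}) + φ (χ_t − χ_{t−1}),

which is the model counterpart of observed output growth.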
Similarly, the solution of the model yields the deviations of inflation and the interest rate from their respective steady states. In order
to obtain the levels of inflation and nominal interest rate, we first
calibrate the steady-state value of inflation as the sample mean of
the inflation rate. Second, using the calibrated value of steady-state
inflation and the definition of the steady-state value of real interest
rate, we can easily compute the steady-state value of the nominal
interest rate. Third, the level of the nominal interest rate is obtained
by adding the deviation (from its steady-state value) of the nominal
rate to its steady-state value computed in the previous step. Finally,
since a period is identified with a quarter and the nominal interest rate in the model is expressed at a quarterly rate, the quarterly interest rate is transformed into an annualized value, as in the actual data.
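A schematic version of these three steps (a minimal sketch with made-up numbers, not the paper's data; the steady-state real rate and the simple times-four annualization are our own assumptions about the transformation) reads:

import numpy as np

# deviations from steady state delivered by the model solution (illustrative draws)
pi_dev = np.array([0.10, -0.20, 0.05])   # quarterly inflation, deviations
i_dev = np.array([0.30, 0.10, -0.10])    # quarterly nominal rate, deviations

pi_ss = 0.75          # step 1: steady-state inflation = sample mean of quarterly inflation (assumed value)
r_ss = 0.50           # assumed steady-state quarterly real rate
i_ss = r_ss + pi_ss   # step 2: steady-state nominal rate from the real-rate definition

pi_level = pi_ss + pi_dev      # level of quarterly inflation
i_level_q = i_ss + i_dev       # step 3: level of the quarterly nominal rate
i_annual = 4.0 * i_level_q     # annualized rate, to match the actual data
print(i_annual)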
3
ESTIMATION PROCEDURE
In order to carry out a joint estimation of the NKM model augmented with the revision processes using both revised and real-time
data, we follow a classical approach based on the indirect inference
principle suggested by Smith (1993, 2008). In particular, we follow
Smith (1993) by first using an unrestricted VAR as the auxiliary
model. More precisely, the distance function is built upon the coefficients estimated from a five-variable VAR with four lags that considers U.S. quarterly data of revised output growth, revised inflation,
real-time output growth, real-time inflation and the Fed funds rate.
The lag length considered is fairly reasonable when using quarterly
data. Second, we apply the simulated moments estimator (SME) suggested by Lee and Ingram (1991) and Duffie and Singleton (1993)
to estimate the parameters of the model. In this context, we believe
it is useful to consider an unrestricted VAR (which imposes mild restrictions) as the auxiliary model, letting the data speak more freely
than other estimation approaches such as maximum likelihood.10
This estimation procedure starts by constructing a p × 1 vector with the coefficients of the VAR representation obtained from
actual data, denoted by HT (θ0 ) where p in this application is 120.
We have 105 coefficients from a four-lag, five-variable system and 15
extra coefficients from the non-redundant elements of the variancecovariance matrix of the VAR residuals. T denotes the length of the
time series data, and θ is a k × 1 vector whose components are the
model parameters. The true parameter values are denoted by θ0 .
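To make the construction of H_T(θ_0) concrete, a five-variable VAR(4) with intercepts has 5 × (1 + 5 × 4) = 105 coefficients, and the residual variance-covariance matrix adds 15 non-redundant elements, giving p = 120. The sketch below (our own illustration using statsmodels on placeholder data, not the authors' GAUSS code) assembles such a vector:

import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(1)
data = rng.normal(size=(101, 5))        # placeholder for the five observed series

res = VAR(data).fit(4)                  # unrestricted VAR(4) auxiliary model
coefs = np.asarray(res.params).ravel()  # (1 + 5*4) coefficients per equation x 5 = 105
sigma = np.asarray(res.sigma_u)         # 5 x 5 residual covariance matrix
vech = sigma[np.tril_indices(5)]        # 15 non-redundant covariance elements

H_T = np.concatenate([coefs, vech])     # auxiliary parameter vector, p = 120
print(H_T.shape)                        # (120,)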
Since our main goal is to estimate the policy rule parameters, prior
to estimation we split the model parameters into two groups. The
first group is formed by the pre-assigned structural parameters β,
τ, η and ω. We set β = 0.995, τ = 0.5, η = 3.0 and ω = 0.75,
corresponding to standard values assumed in the relevant literature
for the discount factor, consumption intertemporal elasticity, the
Frisch elasticity and Calvo’s probability, respectively. The second
group, formed by policy and shock parameters, is the one being estimated. In the augmented NKM model, the estimated parameters are
10 For a detailed description of this estimation procedure see María-Dolores and Vázquez (2006, 2008).
θ = (ρ, ψ_1, ψ_2, ρ_χ, ρ_z, ρ_v, b_xx, b_xπ, b_πx, b_ππ, σ_χ, σ_z, σ_v, σ_rx, σ_rπ), and then k = 15.
As pointed out by Lee and Ingram (1991), the randomness in the
estimator is derived from two sources: the randomness in the actual
data and the simulation. The importance of the randomness in the
simulation to the covariance matrix of the estimator is decreased by
simulating the model a large number of times. For each simulation a
p×1 vector of VAR coefficients, denoted by HN,i (θ), is obtained from
the simulated time series of output growth, inflation and the Fed
funds interest rate generated from the NKM model, where N = nT
is the length of the simulated data. By averaging the m realizations of the simulated coefficients, i.e. H_N(θ) = (1/m) Σ_{i=1}^{m} H_{N,i}(θ), we obtain a measure of the expected value of these coefficients, E(H_{N,i}(θ)). The
choice of values for n and m deserves some attention. Gouriéroux,
Renault and Touzi (2000) suggest that it is important for the sample
size of synthetic data to be identical to T (that is, n = 1) to get
an identical size of finite sample bias in estimators of the auxiliary
parameters computed from actual and synthetic data. We set n =
1 and m = 500 in this application. To generate simulated values of
the output growth, inflation and interest rate time series we need the
starting values of these variables. For the SME to be consistent, the
initial values must have been drawn from a stationary distribution.
In practice, to avoid the influence of starting values we generate
a realization from the stochastic processes of the five variables of
length 200 + T , discard the first 200 simulated observations, and use
only the remaining T observations to carry out the estimation. After
two hundred observations have been simulated, the influence of the
initial conditions must have disappeared.
The SME of θ0 is obtained from the minimization of a distance
function of VAR coefficients from actual and synthetic data. Formally,
min_θ J_T = [H_T(θ_0) − H_N(θ)]′ W [H_T(θ_0) − H_N(θ)],

where W^{−1} is the covariance matrix of H_T(θ_0). Denoting the solution of the minimization problem by θ̂, Lee and Ingram (1991) and Duffie and Singleton (1993) prove the following asymptotic results:

√T (θ̂ − θ_0) → N(0, (1 + 1/m)(B′ W B)^{−1}),
(1 + 1/m) T J_T → χ²(p − k),   (12)

where B is a full-rank matrix given by B = E(∂H_{N,i}(θ)/∂θ).
The objective function J_T is minimized using the optimization package OPTMUM programmed in the GAUSS language. We apply the Broyden-Fletcher-Goldfarb-Shanno algorithm. To compute the covariance matrix we need to obtain B. Computation of B requires
two steps: first, obtaining the numerical first derivatives of the coefficients of the VAR representation with respect to the estimates of
the structural parameters θ for each of the m simulations; second,
averaging the m-numerical first derivatives to get B.
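Putting the pieces together, the following sketch (our own pseudo-implementation, not the authors' GAUSS code: solve_and_simulate is a placeholder for the model solution and simulation step, and auxiliary_vector repeats the VAR(4) construction sketched above) shows the shape of the estimation loop:

import numpy as np
from scipy.optimize import minimize
from statsmodels.tsa.api import VAR

def auxiliary_vector(series):
    """VAR(4) coefficients plus the vech of the residual covariance (p = 120)."""
    res = VAR(series).fit(4)
    sigma = np.asarray(res.sigma_u)
    return np.concatenate([np.asarray(res.params).ravel(),
                           sigma[np.tril_indices(series.shape[1])]])

def solve_and_simulate(theta, T, seed):
    """Placeholder for the structural step: a real implementation would solve the
    LRE system (11) at theta, simulate 200 + T periods of the five observables
    (n = 1) and discard the 200 burn-in draws."""
    rng = np.random.default_rng(seed)
    sim = rng.normal(size=(200 + T, 5))   # placeholder draws; theta is ignored here
    return sim[200:]

def J_T(theta, H_actual, W, T, m=500):
    """Indirect-inference distance function to be minimized over theta."""
    H_sim = np.mean([auxiliary_vector(solve_and_simulate(theta, T, seed=i))
                     for i in range(m)], axis=0)
    d = H_actual - H_sim
    return d @ W @ d

# Usage sketch (data, W and theta0 must come from the application):
#   H_actual = auxiliary_vector(actual_data)
#   theta_hat = minimize(J_T, theta0, args=(H_actual, W, len(actual_data)),
#                        method="BFGS").x
# B is then obtained by numerically differentiating the averaged auxiliary
# vector with respect to theta and averaging over the m simulations.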
4
DATA AND ESTIMATION RESULTS
We consider quarterly U.S. data for the growth rate of output, the
inflation rate obtained from the implicit GDP deflator and the Fed
funds rate during the post-Volcker period (1983:1-2008:1). In addition, we consider real-time data on output growth and inflation as reported by the Federal Reserve Bank of Philadelphia.11
Figure 1 shows the five time series considered in the paper.
We focus on the post-Volcker period for two main reasons. First,
the Taylor rule seems to fit better in this period than in the pre-Volcker era. Second, considering the pre-Volcker era opens the door
to many issues studied in the literature, including the presence of
macroeconomic switching regimes and the existence of switches in
monetary policy (see, for instance, Sims and Zha, 2006). These issues
are beyond the scope of this paper.
11 See Croushore and Stark (2001) for the details of the real-time data set.
Fig. 1. U.S. Time Series
Next, we motivate the inclusion of real-time data in the estimated policy rule. As a preliminary step, we analyze whether real-time data are a rational forecast of revised data. Following Aruoba
(2008), Panel A of Table 1 shows a set of summary statistics and
tests that allow us to assess whether revision processes for output
growth and inflation are well-behaved. For both revision processes,
we cannot reject the null hypothesis that the unconditional mean
is null. However, on the one hand, the standard deviation for the
two revision processes is quite large, especially when compared to
revised data standard deviations (i.e. noise/signal parameter). On
the other hand, the revision processes display a significant first-order autocorrelation pattern. The evidence that revisions are not rational forecast errors is further supported by the statistics displayed in Panel B: neither the output growth nor the inflation revision process is orthogonal to the initial announcements, and their conditional means are not null.
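The Panel B regressions can be reproduced with standard tools. The sketch below (our own illustration on synthetic data; the variable names are hypothetical) regresses a revision on a constant and the real-time announcements, computes Newey-West (HAC) standard errors as in the note to Table 1, and runs the joint test that the real-time information is irrelevant:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 100
dy_rt = rng.normal(size=n)                      # real-time output growth announcement
pi_rt = rng.normal(size=n)                      # real-time inflation announcement
rev = 2.0 - 0.7 * dy_rt + rng.normal(size=n)    # synthetic revision series

X = sm.add_constant(np.column_stack([dy_rt, pi_rt]))
ols = sm.OLS(rev, X).fit(cov_type="HAC", cov_kwds={"maxlags": 4})
print(ols.params)                    # coefficient estimates
print(ols.tvalues)                   # Newey-West t-statistics
print(ols.f_test("x1 = 0, x2 = 0"))  # joint orthogonality (rationality) test

The F(3,90) statistic reported in the paper appears to restrict the constant as well; adding "const = 0" to the test string gives that variant.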
These preliminary estimation results are in line with the empirical evidence provided by Aruoba (2008), who finds that data revisions for these variables are not white noise. The non-rational features of the revision processes shown above, together with the important differences in the parameter estimates obtained below when real-time and revised data are jointly used in the structural estimation procedure, provide strong evidence that policymakers' decisions could be determined by the availability of data at the time of policy implementation. Next, we estimate the extended version of the NKM model considering both revised and real-time data.
Table 2 shows the estimation results obtained using both revised and real-time data. The inflation parameter estimate is extremely close to one and the output gap coefficient is large and significant. Moreover, the policy inertia parameter estimate (ρ = 0.97) is larger, whereas the policy shock persistence parameter, though smaller than in the previous studies mentioned above, is still significant (ρ_v = 0.46). The estimates of the remaining shock parameters all display large persistence. The estimation results also show that many revision process parameters are significant, suggesting that real-time data are not rational forecasts, in line with the evidence provided by Aruoba (2008) and that shown in Table 1 (Panel B). In particular, the coefficient of inflation in the output revision equation is large and significant (b_xπ = −2.96). Finally, we also observe that the estimated standard deviation of the innovations associated with the output revision process is much higher than the one associated with the inflation revision process innovations.
Table 1. Revision process analysis. Actual data.

Panel A: Summary Statistics
                       r_t^y        r_t^π
Mean                   0.074       −0.046
Median                −0.176        0.033
Min                   −7.053       −7.273
Max                    6.343        8.940
St. dev                2.968        2.039
Noise/Signal           1.350        2.076
Corr. with initial     0.319        0.238
AC(1)                 −0.229**     −0.316***
E(r_t)=0, t-stat       0.301       −0.302

Panel B: Conditional Mean
                              r_t^y                     r_t^π
                         Coef      t-stat          Coef       t-stat
const                    2.614     5.235***        2.094      9.421***
(y^r_t − y^r_{t−1})×400 −0.757    −7.399***        0.040      0.999
(π^r_t)×400             −0.072    −0.546          −0.879    −14.619***
F_{3,90}                      33.904***                 33.904***

Note: Revisions are calculated over annual GDP growth and inflation, respectively. Since revisions are likely to have a first-order autocorrelation pattern, t-statistics for testing whether unconditional means are null are calculated based on Newey-West corrected standard deviations. Noise/signal is calculated as the standard deviation of the revision over the standard deviation of the revised data. The null hypothesis for computing the F-test in Panel B (conditional mean hypothesis) is that all coefficients associated with real-time information are null.
Table 2. Joint estimation of the NKM model and the revision processes using both revised and real-time data. J_T(θ) = 8.6653.

Policy        Estimate      Shock        Estimate      Revision     Estimate
parameter                   parameter                  parameter
ρ             0.9682        ρ_χ          0.9652        b_xx         0.1733
             (0.0137)                   (0.0235)                   (0.0786)
ψ_1           1.0000        ρ_z          0.9476        b_xπ        −2.9620
             (0.4150)                   (0.0132)                   (0.4214)
ψ_2           0.8430        σ_χ          9.4e−05       b_πx         0.0229
             (0.3878)                   (3.9e−05)                  (0.0094)
ρ_v           0.4639        σ_z          4.7e−04       b_ππ         0.0845
             (0.0403)                   (6.3e−05)                  (0.0583)
                            σ_v          5.3e−05       σ_rπ         1.7e−04
                                        (1.1e−05)                  (4.3e−05)
                                                       σ_rx         0.0014
                                                                   (2.7e−04)
Under the null hypothesis H_0: b_xx = b_xπ = b_πx = b_ππ = 0, r^x_t and r^π_t can be viewed as rational forecast errors. That is, this hypothesis implies that the two revision processes are characterized by two white noise processes, ε^r_{xt} and ε^r_{πt}. Should revisions of real-time data be rational forecast errors, then the arrival of revised data would not be relevant for policymakers' decisions and policy rule estimates would be rather similar using either revised or real-time data. In order to analyze whether the characteristics of the revision processes have an effect on the estimated policy rule parameters, we then estimate the system under the null hypothesis that r^x_t and r^π_t are rational forecast errors.
Table 3 shows the estimation results imposing H_0. It is well known that the null hypothesis H_0 can be tested using the following Wald statistic:

F_1 = (1 + 1/m) T [J_T(θ_0) − J_T(θ)] → χ²(4),

where J_T(θ_0) denotes the value of the distance function under H_0 and J_T(θ) the value at the unrestricted estimates. The F_1 statistic takes the value 432.2. Therefore, we can reject the joint hypothesis that the revision processes of output and inflation are both white noise processes at any standard significance level.
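For reference, the rejection is easy to verify against the χ²(4) distribution (our own check, using the reported value of the statistic):

from scipy.stats import chi2

print(chi2.ppf(0.99, df=4))   # 1% critical value, about 13.28
print(chi2.sf(432.2, df=4))   # p-value of the reported F1 statistic: essentially zero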
Comparing the estimation results of Tables 2 and 3, it is interesting to observe that the policy inertia and output gap coefficients (ρ and ψ_2) decrease when imposing the restriction that the two revision processes are well-behaved, i.e. when H_0 is imposed, although the reduction in the latter is not statistically significant. Moreover, the policy shock persistence parameter, ρ_v, increases when H_0 is imposed.
These results obtained under H0 are in line with the estimates of
these parameters obtained in previous papers that use only revised
data.
Furthermore, it is interesting to note that imposing H0 leads to
some poor estimates of the standard deviations of the two revision
processes (i.e. σ rx and σ rπ ). Thus, the estimate of the standard deviation of inflation revision is fifteen times larger than the one associated
with output revision. These estimation results are in sharp contrast not only with those displayed in Table 2 but also with the actual statistics
reported by Aruoba (2008, Table 1), which show that actual output
revision volatility is twice as large as inflation revision volatility.
Tables 4 and 5 show a set of summary statistics for the simulated revision processes of output and inflation, respectively. The
simulated series are computed using the estimates shown in Table 2.
By comparing the properties of estimated revision processes obtained
from simulated data with those obtained from actual revisions data
shown in Table 1, we can assess the ability of the extended NKM
model to capture the main regularities observed in the actual revision processes of output growth and inflation. For output growth, the
model underestimates the standard deviation of the revision process.
With such a low standard deviation, the hypothesis that the unconditional mean is null fails to be rejected in only 40% of the simulated series. We also find evidence of an autocorrelation pattern,
and the conditional mean is different from zero. Using simulated
data, all real-time variables seem to play a role in explaining the
output growth revision process, which confirms the hypothesis that
this revision process is not a rational forecast error. For inflation, we
again underestimate the standard deviation of the revision process.
The latter result is driven by the low estimate of the standard deviation of the innovation associated with the inflation revision process.
Consistent with actual data, the conditional mean of the inflation
revision process is also different from zero using simulated data.
Table 3. Joint estimation of the NKM model assuming that the revision processes are well-behaved. J_T(θ) = 13.1120.

Policy        Estimate      Shock        Estimate      Revision     Estimate
parameter                   parameter                  parameter
ρ             0.9172        ρ_χ          0.9815        σ_rπ         0.0032
             (0.0100)                   (0.0380)                   (3.8e−04)
ψ_1           1.0002        ρ_z          0.8865        σ_rx         2.1e−04
             (0.1113)                   (0.0126)                   (3.8e−05)
ψ_2           0.6618        σ_χ          1.6e−04
             (0.0973)                   (8.9e−05)
ρ_v           0.8414        σ_z          2.5e−04
             (0.0215)                   (4.0e−05)
                            σ_v          6.0e−05
                                        (1.2e−05)
Finally, Figures 2-4 show the impulse-responses of the endogenous variables of the extended NKM model (11) to a productivity
shock, an inflation shock and a monetary policy shock, respectively,
using the estimates displayed in Table 2. In these figures the solid
line represents the impulse response implied by the NKM model augmented with revision processes, whereas the dashed lines are the corresponding 95% confidence bands. Moreover, the diamond-dashed line represents the impulse response implied by the model under H0 (i.e. using the estimates displayed in Table 3).
Table 4. Output growth revision process analysis. Simulated series.

Panel A: Summary Statistics
                                            percentile
                    Av. Coef      1       5      10      50      90      95      99
Mean                  0.000   −0.061  −0.041  −0.031   0.001   0.032   0.040   0.050
Median               −0.001   −0.245  −0.176  −0.144  −0.004   0.142   0.179   0.227
Min                  −3.515   −5.323  −4.633  −4.325  −3.428  −2.801  −2.696  −2.341
Max                   3.511    2.410   2.674   2.861   3.440   4.279   4.655   5.094
St. dev               1.403    1.156   1.241   1.276   1.400   1.540   1.572   1.666
Noise/Signal          5.583    4.451   4.793   4.929   5.565   6.250   6.495   6.735
Corr. with initial    0.462    0.319   0.362   0.382   0.467   0.531   0.553   0.593
AC(1)                −0.248    0.478   0.956   1.362   2.403   3.222   3.396   3.618***
E(r_t)=0, t-stat               0.003   0.020   0.031   0.190   0.445   0.506   0.641

Panel B: Conditional Mean (percentiles of t-statistics)
                    Av. Coef      1       5      10      50      90      95      99
const                 0.562    3.578   4.296   4.745   6.719   9.524  10.487  12.129***
(y^r_t − y^r_{t−1})×400 −0.895 49.167  56.405  60.062  75.245  93.617 100.849 117.126***
(π^r_t)×400          −0.219    3.379   4.398   4.842   7.035   9.728  10.852  12.726***
F_{3,90}                      1163.8  1355.1  1427.1  1873.7  2449.3  2693.5  2967.4***
Table 5. Inflation revision process analysis. Simulated series.

Panel A: Summary Statistics
                                            percentile
                    Av. Coef      1       5      10      50      90      95      99
Mean                 −0.047   −0.101  −0.080  −0.073  −0.046  −0.023  −0.017   0.001
Median               −0.046   −0.100  −0.081  −0.075  −0.046  −0.020  −0.012   0.004
Min                  −0.305   −0.429  −0.380  −0.364  −0.301  −0.247  −0.235  −0.209
Max                   0.208    0.117   0.135   0.155   0.204   0.271   0.297   0.344
St. dev               0.102    0.084   0.089   0.092   0.102   0.113   0.118   0.123
Noise/Signal          0.151    0.123   0.130   0.135   0.151   0.165   0.169   0.175
Corr. with initial    0.994    0.991   0.992   0.993   0.995   0.996   0.996   0.997
AC(1)                 0.350    0.940   1.719   2.089   3.178   4.012   4.264   4.525***
E(r_t)=0, t-stat               0.361   1.157   1.564   3.286   5.553   6.219   7.383

Panel B: Conditional Mean (percentiles of t-statistics)
                    Av. Coef      1       5      10      50      90      95      99
const                −0.353    6.205   7.766   8.510  11.335  15.948  17.125  19.365***
(y^r_t − y^r_{t−1})×400  0.004  0.034   0.087   0.224   1.043   2.356   2.828   3.682
(π^r_t)×400           0.119    5.485   6.768   7.573  10.071  13.648  14.961  17.867***
F_{3,90}                      16.987  22.059  24.481  37.363  54.693  59.024  68.454***
The size of the shock
in each case is determined by its estimated standard deviation. In
all cases, we observe that the size of the impulse responses is larger
under H0 , and is significantly different (i.e. the impulse response lies
outside the confidence bands) in the cases of the interest rate impulse response to any shock and the impulse-responses of output gap
and inflation to a productivity shock. More importantly, the interest rate response to a restrictive (positive) monetary policy innovation under H0 is quite unrealistic: the response of the interest rate turns negative because the large negative responses of both the output gap and inflation dominate the positive effect induced by the policy innovation.
5
CONCLUSIONS
This paper suggests an augmented version of the New Keynesian Monetary (NKM) model that incorporates revision processes of output and inflation data in order to (i) test whether initial announcements are a rational forecast of revised data in the context of a structural model and (ii) assess the influence of deviations of real-time data from being a rational forecast of revised data on the estimated monetary policy rule parameters.
The estimation results show that the policy inertia parameter in
the policy rule becomes even larger, whereas policy shock persistence
substantially decreases by allowing for the possibility that the initial
announcements are not a rational forecast of the revised data. Moreover, the estimation results indeed show that many revision process
parameters are significant, suggesting that real-time data are not
rational forecasts. In particular, the coefficient of inflation in the
output revision equation is large and significant. Furthermore, the
estimation results show that the standard deviation of the innovations associated with the output revision process is much higher than that of the inflation revision process innovations.
We find that the estimated policy rule parameters based on the structural estimation of the extended NKM model depend on whether the assumption of well-behaved revision processes is imposed or not.
Fig. 2. Impulse response to a productivity shock
Fig. 3. Impulse response to an inflation-push shock
Fig. 4. Impulse response to a monetary policy shock
This result, together with the empirical evidence of non-well-behaved revision processes provided in this paper and by Aruoba (2008), provides strong evidence that policymakers' decisions are most likely determined by the availability of output and inflation data at the time of policy implementation.
References
1. Aruoba, S. Borağan (2008) "Data revisions are not well-behaved," Journal of Money, Credit and Banking 40, 319-340.
2. Clarida, Richard, Jordi Galí, and Mark Gertler (2000) "Monetary policy rules and macroeconomic stability: evidence and some theory," Quarterly Journal of Economics 115, 147-180.
3. Croushore, Dean, and Tom Stark (2001) "A real-time data set for macroeconomists," Journal of Econometrics 105, 111-130.
4. Diebold, Francis X., and Glenn D. Rudebusch (1991) "Forecasting output with the composite leading index: an ex ante analysis," Journal of the American Statistical Association 86, 603-610.
5. Duffie, D., and K. J. Singleton (1993) "Simulated moments estimation of Markov models of asset prices," Econometrica 61, 929-952.
6. English, William B., William R. Nelson, and Brian P. Sack (2003) "Interpreting the significance of the lagged interest rate in estimated monetary policy rules," The B.E. Journal in Macroeconomics, Contributions to Macroeconomics 3(1), Article 5.
7. Gerlach-Kristen, Petra (2004) "Interest rate smoothing: monetary policy or unobserved variables?" The B.E. Journal in Macroeconomics, Contributions to Macroeconomics 4(1), Article 3.
8. Ghysels, Eric, Norman R. Swanson, and Myles Callan (2002) "Monetary policy rules and data uncertainty," Southern Economic Journal 69, 239-265.
9. Gouriéroux, C., E. Renault, and N. Touzi (2000) "Calibration by simulation for small sample bias correction," in Mariano, R., T. Schuermann and M. Weeks (eds.) Simulation-Based Inference in Econometrics: Methods and Applications. Cambridge University Press, Cambridge.
10. Kavajecz, Kenneth A., and Sean S. Collins (1995) "Rationality of preliminary money stock estimates," Review of Economics and Statistics 77, 32-41.
11. Lee, B. S., and B. F. Ingram (1991) "Simulation estimation of time-series models," Journal of Econometrics 47, 197-205.
12. Lubik, Thomas A., and Frank Schorfheide (2003) "Computing sunspot equilibria in linear rational expectations models," Journal of Economic Dynamics and Control 28, 273-285.
13. Mankiw, N. Gregory, David E. Runkle, and Matthew D. Shapiro (1984) "Are preliminary announcements of the money stock rational forecasts?" Journal of Monetary Economics 14, 15-27.
14. Maravall, Agustín, and David A. Pierce (1986) "The transmission of data noise into policy noise in US monetary control," Econometrica 54, 961-979.
15. María-Dolores, Ramón, and Jesús Vázquez (2006) "How does the New Keynesian monetary model fit in the US and the Euro area? An indirect inference approach," The B.E. Journal in Macroeconomics, Topics in Macroeconomics 6(2), Article 9.
16. María-Dolores, Ramón, and Jesús Vázquez (2008) "Term structure and estimated monetary policy rule in the Eurozone," Spanish Economic Review 10, 251-277.
17. Orphanides, Athanasios (2001) "Monetary policy rules based on real-time data," American Economic Review 91, 964-985.
18. Rudebusch, Glenn D. (2002) "Term structure evidence on interest rate smoothing and monetary policy inertia," Journal of Monetary Economics 49, 1161-1187.
19. Sims, Christopher A., and Tao Zha (2006) "Were there regime switches in U.S. monetary policy?" American Economic Review 96, 54-81.
20. Smith, Anthony A. (1993) "Estimating nonlinear time-series models using simulated vector autoregressions," Journal of Applied Econometrics 8, S63-S84.
21. Smith, Anthony A. (2008) "Indirect inference," The New Palgrave Dictionary of Economics, 2nd edition (forthcoming).
22. Trivellato, U., and E. Rettore (1986) "Preliminary data errors and their impact on the forecast error of simultaneous equation models," Journal of Business and Economic Statistics 4, 445-453.
23. Walsh, Carl E. (2003) Monetary Theory and Policy. The MIT Press, Cambridge, Massachusetts.
APPENDIX
The matrices of the LRE system (11) are given below, where the rows correspond, in order, to equations (1)-(7), the shock processes (8)-(10) and the two expectational identities.

Γ_0 =
[  1    0    τ   −1   −τ   φ(1−ρ_χ)   0    0     0      0     0    0 ]
[ −κ    1    0    0   −β       0     −1    0     0      0     0    0 ]
[  0    0    1    0    0       0      0   −1     0      0     0    0 ]
[  1    0    0    0    0       0      0    0    −1      0    −1    0 ]
[  0    1    0    0    0       0      0    0     0     −1     0   −1 ]
[  0    0    0    0    0       0      0    0   −b_xx  −b_xπ   1    0 ]
[  0    0    0    0    0       0      0    0   −b_πx  −b_ππ   0    1 ]
[  0    0    0    0    0       1      0    0     0      0     0    0 ]
[  0    0    0    0    0       0      1    0     0      0     0    0 ]
[  0    0    0    0    0       0      0    1     0      0     0    0 ]
[  1    0    0    0    0       0      0    0     0      0     0    0 ]
[  0    1    0    0    0       0      0    0     0      0     0    0 ]

Γ_1 =
[ 0  0  0  0  0   0    0    0      0          0       0  0 ]
[ 0  0  0  0  0   0    0    0      0          0       0  0 ]
[ 0  0  ρ  0  0   0    0    0  (1−ρ)ψ_2   (1−ρ)ψ_1    0  0 ]
[ 0  0  0  0  0   0    0    0      0          0       0  0 ]
[ 0  0  0  0  0   0    0    0      0          0       0  0 ]
[ 0  0  0  0  0   0    0    0      0          0       0  0 ]
[ 0  0  0  0  0   0    0    0      0          0       0  0 ]
[ 0  0  0  0  0  ρ_χ   0    0      0          0       0  0 ]
[ 0  0  0  0  0   0   ρ_z   0      0          0       0  0 ]
[ 0  0  0  0  0   0    0   ρ_v     0          0       0  0 ]
[ 0  0  0  1  0   0    0    0      0          0       0  0 ]
[ 0  0  0  0  1   0    0    0      0          0       0  0 ]

Ψ =
[ 0 0 0 0 0 ]
[ 0 0 0 0 0 ]
[ 0 0 0 0 0 ]
[ 0 0 0 0 0 ]
[ 0 0 0 0 0 ]
[ 0 0 0 1 0 ]
[ 0 0 0 0 1 ]
[ 1 0 0 0 0 ]
[ 0 1 0 0 0 ]
[ 0 0 1 0 0 ]
[ 0 0 0 0 0 ]
[ 0 0 0 0 0 ]

Π =
[ 0 0 ]
[ 0 0 ]
[ 0 0 ]
[ 0 0 ]
[ 0 0 ]
[ 0 0 ]
[ 0 0 ]
[ 0 0 ]
[ 0 0 ]
[ 0 0 ]
[ 1 0 ]
[ 0 1 ]
Model with Data Revision Processes?
Jesús Vázquez,1 Ramón María-Dolores2 and Juan M. Londoño1
1
Universidad del País Vasco
Universidad de Murcia
2
Abstract. This paper proposes an extended version of the New Keynesian Monetary (NKM) model which contemplates revision processes
of output and inflation data in order to assess the importance of data
revisions on the estimated monetary policy rule parameters. The estimation results of the extended NKM model suggest that real-time data are
not rational forecasts of revised data. These results are in line with the
evidence provided by Aruoba (2008) using a reduced-form econometric
approach. Moreover, the different policy parameter estimates obtained
depending on whether or not the revision processes are assumed to be
well behaved, provide evidence that policymakers decisions could be determined by the availability of data at the time of policy implementation.
Key words: NKM model, monetary policy rule, indirect inference,
real-time data, (non-)rational forecast errors
JEL classification numbers: C32, E30, E52
We thank Mike McAleer and seminar participants at the University of Oxford for
comments and discussions. The second author also thanks Department of Economics
at the University of Oxford for its hospitality. Financial support from the Ministerio de Ciencia y Tecnología (Spain) through projects SEJ2007-66592-C03-0102/ECON is acknowledged. Correspondence to: Jesús Vázquez, Departamento de
Fundamentos del Análisis Económico II, Universidad del País Vasco, Av. Lehendakari Aguirre 83, 48015, Spain. Phone: +34-94-601-3779, Fax: +34-94-601-7123,
e-mail: [email protected]
1
INTRODUCTION
Nowadays, the importance of timing and availability of the data used
in the empirical evaluation of policy rules has converted into something crucial. Some preliminary studies about real-time data such as
those of Maravall and Pierce (1986), Trivellato and Rettore (1986)
and Ghyssels, Swanson and Callan (2002) have examined revision
process errors. Maravall and Pierce (1986) analyze how preliminary
and incomplete data affect monetary policy and demonstrate that,
even though revisions to measures of money supply are large, monetary policy would not have been much different if more accurate
data had been known. By contrast, Ghyssels, Swanson and Callan
(2002) find that a Taylor-type rule would have been significantly improved if policymakers had waited for data to be revised rather than
reacting to newly released data.3
Two of the most well-known studies that compare results based
on real-time data with those obtained with revised data are Diebold
and Rudebusch (1991) and Orphanides (2001). The former shows
that the index of leading indicators does a much worse job in predicting future movements of output in real time than it does after data
are revised. The latter examines parameter as well as model specification uncertainty in the Taylor-rule by using data over a period of
more than 20 years. The paper concludes that the Taylor principle
does not prevail using real-time data. This empirical evidence is in
sharp contrast with that found by Clarida, Galí and Gertler (2000).
With regard to Taylor-rule parameters, Rudebusch (2002) indicates
that data uncertainty potentially plays an important role in reducing the coefficients of the rule that characterize both policy inertia
and shock persistence.4 The main advantage of using real-time data
in estimating policy rules is to reduce the effects of parameter uncertainty in actual policy setting since the researcher can estimate
3
4
Mankiw, Runkle and Shapiro (1984) is a seminal paper in this branch of literature. They develop a theoretical framework for analyzing initial announcements of
economic data and apply that framework to the money stock.
By using reduced-form estimation approaches, some empirical studies, such as English, Nelson and Sack (2003) and Gerlach-Kristen (2004) have shown that both
persistent shocks and policy inertia enter the U.S. estimated monetary policy rule.
María-Dolores and Vázquez (2006, 2008) obtained similar results for the U.S. and
the Eurozone using an econometric structural approach.
policy rules with data which were truly available at any given point
in time. This is particularly important with seasonally adjusted data
as such data are subject to revisions based on two-sided filters.5
As pointed out by Aruoba (2008), should revisions of real-time
data be rational forecast errors, then the arrival of revised data would
not be relevant for policy makers’ decisions and policy rule estimates
would be rather similar using either revised or real-time data. Moreover, Aruoba (2008) documents the empirical properties of revisions
to major macroeconomic variables in the U.S. and points out that
they are not well-behaved. That is, they do not satisfy simple desirable properties such as zero mean, which indicates that the revisions
of initial announcements made by statistical agencies are biased, and
that they might be predictable using the information set available
at the time of the initial announcement.
The literature estimating policy rules with real-time data cited
above uses reduced-form econometric approaches. This paper extends the NKM model to include revision processes of output and
inflation data, and thus to analyze revised and real-time data together by using a structural econometric approach. This extension
allows for (i) a joint estimation procedure of both monetary policy
rule and revision process parameters, (ii) an assessment of the interaction between these two sets of parameters and, (iii) a test of the
null hypothesis establishing that real-time data are a rational forecast of revised data in the context of a dynamic structural general
equilibrium (DSGE) model.
The use of real-time data in the estimation of a DSGE model
may look tricky because private agents’ (households and firms) decisions determine the true (revised) values of macroeconomic variables,
such as output and inflation, and they are not observable without
error by policymakers in real time. The problem is easily solved by
augmenting the NKM model with the revision processes of output
and inflation. In particular, these revision processes are allowed to
5
Kavajecz and Collins (1995) conclude, using Monte Carlo simulations, that irrationality in seasonally adjusted data arises from the specific seasonal adjustment
procedure used by the Federal Reserve.
be determined by the information available at the time the initial
announcements of output and inflation are released.
We follow a classical approach based on the indirect inference
principle suggested by Smith (1993, 2008) to estimate our extended
version of the NKM model. In particular, we follow Smith (1993) by
using an unrestricted VAR as the auxiliary model. More precisely,
the distance function is built upon the coefficients estimated from
a five-variable VAR that considers U.S. quarterly data of revised
output growth, revised inflation, real-time output growth, real-time
inflation and the Fed funds rate.
The estimation results show that most policy rule parameter estimates depend to an important extent on whether or not the revision
processes of output and inflation are assumed to be well-behaved.
In particular, the estimates of policy inertia and output gap parameters are larger whereas monetary shock persistence substantially decreases by allowing for the possibility of non-rational revision
processes. These differences provide empirical evidence that policymakers’ decisions could be determined by the availability of data at
the time of policy implementation. Moreover, the estimates of the
revision process parameters show that the initial announcements of
output and inflation are not rational forecasts. For instance, a 1%
increase in the initial announcement of inflation leads to a downward
revision in output of 2.96%. These estimation results are in line with
the empirical evidence provided by Aruoba (2008) mentioned above,
who finds that data revisions are not well-behaved (i.e. they are not
white noise processes).
The rest of the paper is organized as follows. Section 2 introduces the log-linearized approximation of an augmented version of
the NKM model that includes the revision processes for output and
inflation. Section 3 describes the structural estimation method used
in this paper. Section 4 describes the data and discusses the estimation results. Finally, Section 5 concludes.
2
AN NKM MODEL AUGMENTED WITH
DATA REVISION PROCESSES
The model analyzed in this paper is a standard NKM model augmented with data revision processes. It is given by the following set
of equations:
xt = Et xt+1 − τ (it − Et π t+1 ) − φ(1 − ρχ )χt ,
(1)
π t = βEt π t+1 + κxt + zt ,
(2)
it = ρit−1 + (1 − ρ)[ψ1 π rt−1 + ψ2 xrt−1 ] + vt .
(3)
xt ≡ xrt + rtx ,
(4)
π t ≡ πrt + rtπ ,
(5)
rtx = bxx xrt + bxπ π rt +
r
xt ,
(6)
rtπ = bπx xrt + bππ π rt +
r
πt ,
(7)
where x denotes revised output gap (that is, the log-deviation of
output with respect to the level of output under flexible prices) and π
and i denote the deviations from the steady states of revised inflation
and nominal interest rate, respectively. πrt and xrt are real-time data
for inflation and output gap, respectively. Et denotes the conditional
expectation based on the agents’ information set at time t. χ, z and
v denote aggregate productivity, cost-push inflation and monetary
policy shocks, respectively. Each of these shocks is further assumed
to follow a first-order autoregressive process:
χt = ρχ χt−1 +
χt ,
(8)
zt = ρz zt−1 +
zt ,
(9)
vt = ρv vt−1 +
vt ,
(10)
where χt , zt and vt denote i.i.d. random innovations associated
with these shocks and their standard deviations are denoted by σ χ ,
σ z and σ v , respectively.
Equation (1) is the log-linearized consumption first-order condition obtained from the representative agent optimization plan. The
parameter τ > 0 represents the intertemporal elasticity of substitution obtained when assuming a standard constant relative risk
aversion utility function.
Equation (2) is the New Phillips curve that is obtained in a sticky
price à la Calvo (1983) model where monopolistically competitive
firms produce (a continuum of) differentiated goods and each firm
faces a downward sloping demand curve for its produced good. The
parameter β ∈ (0, 1) is the agent discount factor, and κ measures
the slope of the New Phillips curve that is related to other structural
parameters as follows
κ=
[(1/τ ) + η](1 − ω)(1 − ωβ)
.
ω
In particular, κ is a decreasing function of Calvo’s probability, ω.
The parameter ω is a measure of the degree of nominal rigidity; a
larger ω implies that fewer firms adjust prices in each period and that
the expected time between price changes is longer.6 At this point, it
is worthwhile emphasizing that the IS and Phillips curve equations
are described in terms of the revised output and inflation data since
they are indeed determined by the optimal choices of private agents
(households and firms).
In contrast to Equation (1)-(2), Equation (3) describes the monetary policy rule based on real-time data of output and inflation
truly available at the time of implementing monetary policy. Moreover, the nominal interest rate exhibits smoothing behaviour. As
pointed out by Aruoba (2008), the initial announcement of quarterly (monthly) macroeconomic variables corresponding to a particular quarter (month) appears in the vintage of the next quarter
(month), roughly 45 (at least 15) days after the end of the quarter
(month). Then, a backward-looking Taylor rule as (3), that includes
lagged values of real-time data on output and inflation, would more
accurately approximate the information set available to the Fed at
the time of implementing the policy7 .
6
7
See, for instance, Walsh (2003, chapter 5.4) for a detailed analytical derivation of
the New Phillips curve.
Notice that a backward-looking policy rule does not necessarily exclude the forwardlooking Taylor rule setting usually assumed for the monetary policy. After all, ex-
The NKM model is extended to incorporate the revision processes
of output and inflation data, respectively. Equations (4) and (5) are
identities showing how revised data of output and inflation are related to real-time output and inflation, respectively. Then, rtx (rtπ )
denotes the revision of output (inflation).8 Equations (6) and (7)
describe the revision processes associated with output and inflation,
respectively. These processes allow for the existence of non-zero correlations between output and inflation revisions and the initial announcements of these variables.9 rxt and rπt denote i.i.d. random
innovations associated with the revision processes where the corresponding standard deviations are denoted by σ rx and σ rπ , respectively.
Finally, the model is completed by the following identities:
xt = Et−1 xt + (xt − Et−1 xt ),
πt = Et−1 π t + (π t − Et−1 π t ).
The system of equations (1)-(10) (together with the latter two
identities involving forecast errors) can be written in matrix form as
follows:
Γ0 Yt = Γ1 Yt−1 + Ψ
t
+ Πη t ,
(11)
Yt = (xt , πt , it , Et xt+1 , Et πt+1 , χt , zt , vt , xrt , π rt , rtx , rtπ )0 ,
t
=(
r
r 0
χt , zt , vt , xt , πt ) .
ηt = (xt − Et−1 xt , π t − Et−1 πt )0 .
Equation (11) represents a linear rational expectations (LRE)
system. It is well known that LRE systems deliver multiple stable equilibrium solutions for certain parameter values. Lubik and
8
9
pectations of output and inflation can be seen as a function of the information set
available to the Fed in every period (in this case, real-time output and inflation).
By adding the log of potential output on both sides of (4), we have that rtx also
denotes the revision of the log of output.
The two revision processes assumed do not intend to provide a structural characterization of the revision processes followed by statistical agencies, but to provide a
simple framework to assess whether the nature of the revision processes might affect
the estimated monetary policy rule.
Schorfheide (2003) characterize the complete set of LRE models
with indeterminacies and provide a numerical method for computing
them. In this paper, we deal only with sets of parameter values that
imply determinacy (uniqueness) of the rational expectations equilibrium. In particular, we impose in the estimation procedure that the
Taylor principle holds (i.e. ψ1 > 1).
The model’s solution yields the output gap, xt . This measure is
not observable. In order to estimate the model by simulation, we have
to transform the output gap into a measure that has an observable
counterpart such as output growth. This is a quite straightforward
exercise since the log-deviation of output from its steady state can be
defined as the output gap plus the (log of the) flexible-price equilibrium level of output, ytf , and the latter can be expressed as a linear
function of the productivity shock:
ytf = φχt .
The log-deviation of output from its steady state is also unobservable. However, the growth rate of output is observable and its model
counterpart is obtained from the first-difference of the log-deviation
of output from its steady state.
Similarly, the solution of the model yields the deviations of inflation and the interest rate from their respective steady states. In order
to obtain the levels of inflation and nominal interest rate, we first
calibrate the steady-state value of inflation as the sample mean of
the inflation rate. Second, using the calibrated value of steady-state
inflation and the definition of the steady-state value of real interest
rate, we can easily compute the steady-state value of the nominal
interest rate. Third, the level of the nominal interest rate is obtained
by adding the deviation (from its steady-state value) of the nominal
rate to its steady-state value computed in the previous step. Finally,
since a period is identified with a quarter and the nominal interest
rate is measured in quarterlized values, the quarterlized interest rate
is transformed in an annualized value as in actual data.
3 ESTIMATION PROCEDURE
In order to carry out a joint estimation of the NKM model augmented with the revision processes using both revised and real-time data, we follow a classical approach based on the indirect inference principle suggested by Smith (1993, 2008). In particular, we follow Smith (1993) by first using an unrestricted VAR as the auxiliary model. More precisely, the distance function is built upon the coefficients estimated from a five-variable VAR with four lags that considers U.S. quarterly data on revised output growth, revised inflation, real-time output growth, real-time inflation and the Fed funds rate. The lag length considered is fairly reasonable when using quarterly data. Second, we apply the simulated moments estimator (SME) suggested by Lee and Ingram (1991) and Duffie and Singleton (1993) to estimate the parameters of the model. In this context, we believe it is useful to consider an unrestricted VAR (which imposes mild restrictions) as the auxiliary model, letting the data speak more freely than under alternative estimation approaches such as maximum likelihood.10
This estimation procedure starts by constructing a p × 1 vector with the coefficients of the VAR representation obtained from actual data, denoted by H_T(θ_0), where p in this application is 120: 105 coefficients from a four-lag, five-variable system and 15 extra coefficients from the non-redundant elements of the variance-covariance matrix of the VAR residuals. T denotes the length of the time series data, and θ is a k × 1 vector whose components are the model parameters. The true parameter values are denoted by θ_0.
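As an illustration of how such a vector can be assembled (a sketch using the statsmodels VAR routine; the paper's own computations are carried out in GAUSS):

import numpy as np
from statsmodels.tsa.api import VAR

def auxiliary_coefficients(data, lags=4):
    # data: T x 5 array with columns (revised output growth, revised inflation,
    # real-time output growth, real-time inflation, Fed funds rate)
    res = VAR(data).fit(lags)
    slopes = np.asarray(res.params).ravel()     # (1 + 5*4) x 5 = 105 VAR coefficients
    sigma = np.asarray(res.sigma_u)             # 5 x 5 residual covariance matrix
    vech = sigma[np.tril_indices_from(sigma)]   # 15 non-redundant covariance elements
    return np.concatenate([slopes, vech])       # p = 105 + 15 = 120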
Since our main goal is to estimate the policy rule parameters, prior to estimation we split the model parameters into two groups. The first group is formed by the pre-assigned structural parameters β, τ, γ and ω. We set β = 0.995, τ = 0.5, γ = 3.0 and ω = 0.75, corresponding to standard values assumed in the relevant literature for the discount factor, the intertemporal elasticity of consumption, the Frisch elasticity and Calvo's probability, respectively. The second group, formed by the policy and shock parameters, is the one being estimated. In the augmented NKM model, the estimated parameters are θ = (ρ, ψ_1, ψ_2, ρ_χ, ρ_z, ρ_v, b_xx, b_xπ, b_πx, b_ππ, σ_χ, σ_z, σ_v, σ_rx, σ_rπ), so that k = 15.

10 For a detailed description of this estimation procedure see María-Dolores and Vázquez (2006, 2008).
As pointed out by Lee and Ingram (1991), the randomness in the estimator is derived from two sources: the randomness in the actual data and the simulation. The importance of the randomness in the simulation for the covariance matrix of the estimator is decreased by simulating the model a large number of times. For each simulation a p × 1 vector of VAR coefficients, denoted by H_{N,i}(θ), is obtained from the simulated time series of output growth, inflation and the Fed funds interest rate generated from the NKM model, where N = nT is the length of the simulated data. By averaging the m realizations of the simulated coefficients,

H_N(\theta) = \frac{1}{m}\sum_{i=1}^{m} H_{N,i}(\theta),

we obtain a measure of the expected value of these coefficients, E[H_{N,i}(θ)].
The choice of values for n and m deserves some attention. Gouriéroux, Renault and Touzi (2000) suggest that it is important for the sample size of the synthetic data to be identical to T (that is, n = 1) in order to get an identical size of finite-sample bias in the estimators of the auxiliary parameters computed from actual and synthetic data. We set n = 1 and m = 500 in this application. To generate simulated values of the output growth, inflation and interest rate time series we need starting values for these variables. For the SME to be consistent, the initial values must be drawn from a stationary distribution. In practice, to avoid the influence of starting values we generate a realization from the stochastic processes of the five variables of length 200 + T, discard the first 200 simulated observations, and use only the remaining T observations to carry out the estimation. After two hundred observations have been simulated, the influence of the initial conditions should have disappeared.
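A sketch of this simulation step, reusing auxiliary_coefficients from the sketch above (solve_and_simulate, which is assumed to return a (200 + T) x 5 array of observables from the solved model for a given θ, is a hypothetical helper rather than part of the paper):

import numpy as np

def average_auxiliary_coefficients(theta, T, m=500, burn=200, seed=0):
    # computes H_N(theta) by averaging the auxiliary VAR coefficients over m simulations
    rng = np.random.default_rng(seed)
    draws = []
    for _ in range(m):
        sim = solve_and_simulate(theta, burn + T, rng)   # hypothetical model simulator
        sim = sim[burn:]                                 # discard the first 200 observations
        draws.append(auxiliary_coefficients(sim))        # H_{N,i}(theta)
    return np.mean(draws, axis=0)                        # H_N(theta)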
The SME of θ_0 is obtained from the minimization of a distance function of the VAR coefficients obtained from actual and synthetic data. Formally,

\min_{\theta} J_T = [H_T(\theta_0) - H_N(\theta)]' \, W \, [H_T(\theta_0) - H_N(\theta)],

where W^{-1} is the covariance matrix of H_T(θ_0). Denoting the solution of the minimization problem by \hat{\theta}, Lee and Ingram (1991) and Duffie and Singleton (1993) prove the following asymptotic results:

\sqrt{T}\,(\hat{\theta} - \theta_0) \rightarrow N\left(0, \left(1 + \frac{1}{m}\right)(B'WB)^{-1}\right),

\left(1 + \frac{1}{m}\right) T J_T \rightarrow \chi^2(p - k),        (12)

where B is a full-rank matrix given by B = E\left(\partial H_{N,i}(\theta)/\partial \theta'\right).
The objective function J_T is minimized using the optimization package OPTMUM programmed in the GAUSS language. We apply the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. To compute the covariance matrix we need to obtain B. The computation of B requires two steps: first, obtaining the numerical first derivatives of the coefficients of the VAR representation with respect to the structural parameters θ for each of the m simulations; second, averaging the m numerical first derivatives to get B.
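The following sketch mirrors these steps in Python rather than GAUSS (scipy's BFGS routine stands in for OPTMUM; average_auxiliary_coefficients, auxiliary_coefficients and solve_and_simulate are the hypothetical helpers introduced above):

import numpy as np
from scipy.optimize import minimize

def sme_objective(theta, H_T, W, T, m=500):
    # J_T(theta) = [H_T - H_N(theta)]' W [H_T - H_N(theta)]
    diff = H_T - average_auxiliary_coefficients(theta, T, m=m)
    return diff @ W @ diff

# quasi-Newton minimization of J_T, e.g.:
# theta_hat = minimize(sme_objective, theta_start, args=(H_T, W, T), method="BFGS").x

def numerical_B(theta_hat, T, m=500, step=1e-5):
    # average over the m simulations of the numerical derivative of H_{N,i} w.r.t. theta
    k = len(theta_hat)
    jacobians = []
    for i in range(m):
        rng = np.random.default_rng(i)        # fix the random draws used in simulation i
        h0 = auxiliary_coefficients(solve_and_simulate(theta_hat, 200 + T, rng)[200:])
        cols = []
        for j in range(k):
            theta_p = np.array(theta_hat, dtype=float)
            theta_p[j] += step
            rng = np.random.default_rng(i)    # reuse the same draws for the perturbed parameter
            hp = auxiliary_coefficients(solve_and_simulate(theta_p, 200 + T, rng)[200:])
            cols.append((hp - h0) / step)     # forward difference for parameter j
        jacobians.append(np.column_stack(cols))   # p x k Jacobian for simulation i
    return np.mean(jacobians, axis=0)             # estimate of B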
4 DATA AND ESTIMATION RESULTS
We consider quarterly U.S. data for the growth rate of output, the inflation rate obtained from the implicit GDP deflator, and the Fed funds rate during the post-Volcker period (1983:1-2008:1). In addition, we consider real-time data on output growth and inflation as reported by the Federal Reserve Bank of Philadelphia.11
Figure 1 shows the five time series considered in the paper.
We focus on the post-Volcker period for two main reasons. First,
the Taylor rule seems to fit better in this period than in the pre-Volcker era. Second, considering the pre-Volcker era opens the door
to many issues studied in the literature, including the presence of
macroeconomic switching regimes and the existence of switches in
monetary policy (see, for instance, Sims and Zha, 2006). These issues
are beyond the scope of this paper.
11
See Croushore and Stark (2001) for the details of the real-time data set.
Fig. 1. U.S. Time Series
Next, we motivate the inclusion of real-time data in the estimated policy rule. As a preliminary step, we analyze whether real-time data are a rational forecast of revised data. Following Aruoba (2008), Panel A of Table 1 shows a set of summary statistics and tests that allow us to assess whether the revision processes for output growth and inflation are well-behaved. For both revision processes, we cannot reject the null hypothesis that the unconditional mean is zero. However, on the one hand, the standard deviation of the two revision processes is quite large, especially when compared with the standard deviation of the revised data (i.e., the noise/signal ratio). On the other hand, the revision processes are likely to show a first-order autocorrelation pattern. The evidence that revisions are not rational forecast errors is further supported by the statistics displayed in Panel B: neither the output growth nor the inflation revision process is orthogonal to the initial announcements, and their conditional means are not zero.
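A sketch of these tests, in the spirit of Aruoba (2008), using OLS with Newey-West (HAC) standard errors (variable and function names are ours):

import numpy as np
import statsmodels.api as sm

def revision_rationality_tests(revision, rt_growth, rt_inflation, nw_lags=4):
    # revision, rt_growth and rt_inflation are 1-d arrays of equal length
    # (i) zero-unconditional-mean test with a Newey-West corrected t-statistic
    mean_reg = sm.OLS(revision, np.ones(len(revision))).fit(
        cov_type="HAC", cov_kwds={"maxlags": nw_lags})
    # (ii) conditional-mean regression of the revision on real-time information
    X = sm.add_constant(np.column_stack([rt_growth, rt_inflation]))
    cm_reg = sm.OLS(revision, X).fit(cov_type="HAC", cov_kwds={"maxlags": nw_lags})
    # joint F-test that the constant and the real-time regressors are all zero
    joint_test = cm_reg.f_test(np.eye(X.shape[1]))
    return mean_reg.tvalues[0], cm_reg.params, cm_reg.tvalues, joint_test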
These preliminary estimation results are in line with the empirical evidence provided by Aruoba (2008), who finds that data revisions for these variables are not white noise. The non-rational features of the revision processes shown above, together with the important differences in estimated parameter values shown below when real-time and revised data are jointly used in the structural estimation procedure, suggest strong evidence that policymakers' decisions could be determined by the availability of data at the time of policy implementation. Next, we estimate the extended version of the NKM model by considering both revised and real-time data.

Table 2 shows the estimation results obtained using both revised and real-time data. The inflation parameter estimate is extremely close to one and the output gap coefficient is large and significant. Moreover, the policy inertia parameter estimate (ρ = 0.97) is larger than in the previous studies mentioned above, whereas the policy shock persistence parameter, though smaller, is still significant (ρ_v = 0.46). With respect to the estimates of the remaining shock parameters, they all display large persistence. The estimation results also show that many revision process parameters are significant, suggesting that real-time data are not rational forecasts, in line with the evidence provided by Aruoba (2008) and that shown in Table 1 (Panel B).
Table 1. Revision process analysis. Actual data.

Panel A: Summary Statistics
                          r^y_t        r^π_t
Mean                      0.074       -0.046
Median                   -0.176        0.033
Min                      -7.053       -7.273
Max                       6.343        8.940
St. dev                   2.968        2.039
Noise/Signal              1.350        2.076
Corr. with initial        0.319        0.238
AC(1)                    -0.229**     -0.316***
E(r_t) = 0 t-stat         0.301       -0.302

Panel B: Conditional Mean
                               r^y_t                   r^π_t
                          Coef      t-stat         Coef      t-stat
const                     2.614     5.235***       2.094     9.421***
(y^r_t - y^r_{t-1})*400  -0.757    -7.399***       0.040     0.999
(π^r_t)*400              -0.072    -0.546         -0.879   -14.619***
F_{3,90}                 33.904***                33.904***

Note: Revisions are calculated over annual GDP growth and inflation, respectively. Since revisions are likely to have a first-order autocorrelation pattern, t-statistics for testing whether unconditional means are null are calculated based on Newey-West corrected standard deviations. Noise/signal is calculated as the standard deviation of the revision over the standard deviation of the revised data. The null hypothesis for computing the F-test in Panel B (conditional mean hypothesis) is that all coefficients associated with real-time information are null.
In particular, the coefficient of inflation in the output revision equation is large and significant (b_xπ = -2.96). Finally, we also observe that the estimated standard deviation of the innovations associated with the output revision process is much higher than that associated with the inflation revision process.
Table 2. Joint estimation of the NKM model and the revision processes using both revised and real-time data. J_T(θ) = 8.6653. Standard errors in parentheses.

Policy      Estimate     Shock       Estimate      Revision    Estimate
parameter                parameter                 parameter
ρ           0.9682       ρ_χ         0.9652        b_xx        0.1733
            (0.0137)                 (0.0235)                  (0.0786)
ψ_1         1.0000       ρ_z         0.9476        b_xπ       -2.9620
            (0.4150)                 (0.0132)                  (0.4214)
ψ_2         0.8430       σ_χ         9.4e-05       b_πx        0.0229
            (0.3878)                 (3.9e-05)                 (0.0094)
ρ_v         0.4639       σ_z         4.7e-04       b_ππ        0.0845
            (0.0403)                 (6.3e-05)                 (0.0583)
                         σ_v         5.3e-05       σ_rπ        1.7e-04
                                     (1.1e-05)                 (4.3e-05)
                                                   σ_rx        0.0014
                                                               (2.7e-04)
Under the null hypothesis H_0: b_xx = b_xπ = b_πx = b_ππ = 0, r^x_t and r^π_t can be viewed as rational forecast errors. That is, this hypothesis implies that the two revision processes are characterized by two white noise processes, r_xt and r_πt. Should revisions of real-time data be rational forecast errors, then the arrival of revised data would not be relevant for policymakers' decisions, and policy rule estimates would be rather similar using either revised or real-time data. In order to analyze whether the characteristics of the revision processes have an effect on the estimated policy rule parameters, we then estimate the system under the null hypothesis that r^x_t and r^π_t are rational forecast errors.
Table 3 shows the estimation results imposing H_0. It is well known that the null hypothesis H_0 can be tested using the following Wald statistic:

F_1 = \left(1 + \frac{1}{m}\right) T \, [J_T(\theta_0) - J_T(\theta)] \rightarrow \chi^2(4),

where J_T(θ_0) denotes the value of the distance function under H_0 and J_T(θ) its unrestricted value. The F_1 statistic takes the value 432.2. Therefore, we can reject the joint hypothesis that the revision processes of output and inflation are both white noise processes at any standard significance level.
Comparing the estimation results in Tables 2 and 3, it is interesting to observe that the policy inertia and output gap coefficients (ρ and ψ_2) decrease when the restriction that the two revision processes are well-behaved is imposed (i.e., under H_0), although the reduction in the latter is not statistically significant. Moreover, the policy shock persistence parameter, ρ_v, increases when H_0 is imposed. These results obtained under H_0 are in line with the estimates of these parameters obtained in previous papers that use only revised data.
Furthermore, it is interesting to note that imposing H_0 leads to rather poor estimates of the standard deviations of the two revision processes (i.e., σ_rx and σ_rπ). Thus, the estimated standard deviation of the inflation revision is fifteen times larger than that of the output revision. These estimation results are in sharp contrast not only with those displayed in Table 2, but also with the actual statistics reported by Aruoba (2008, Table 1), which show that actual output revision volatility is twice as large as inflation revision volatility.
Tables 4 and 5 show a set of summary statistics for the simulated revision processes of output and inflation, respectively. The simulated series are computed using the estimates shown in Table 2. By comparing the properties of the revision processes obtained from simulated data with those obtained from the actual revisions shown in Table 1, we can assess the ability of the extended NKM model to capture the main regularities observed in the actual revision processes of output growth and inflation. For output growth, the model underestimates the standard deviation of the revision process. Even with this low standard deviation, for only 40% of the simulated series could we not reject the hypothesis that the unconditional mean is null. We also find evidence of an autocorrelation pattern, and the conditional mean is different from zero. Using simulated data, all real-time variables seem to play a role in explaining the output growth revision process, which confirms that this revision process is not a rational forecast error. For inflation, the model again underestimates the standard deviation of the revision process. This result is driven by the low estimate of the standard deviation of the innovation associated with the inflation revision process. Consistent with actual data, the conditional mean of the inflation revision process is also different from zero when using simulated data.
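A sketch of how such percentile summaries across simulated series can be computed (names are ours; the statistics shown here are a subset of those reported in Tables 4 and 5):

import numpy as np

def summarize_simulated_revisions(sim_revisions, pcts=(1, 5, 10, 50, 90, 95, 99)):
    # sim_revisions: list of 1-d arrays, one simulated revision series per replication
    stats = {
        "mean":   [s.mean() for s in sim_revisions],
        "st.dev": [s.std(ddof=1) for s in sim_revisions],
        "AC(1)":  [np.corrcoef(s[:-1], s[1:])[0, 1] for s in sim_revisions],
    }
    # percentiles of each statistic across the simulated series
    return {name: np.percentile(vals, pcts) for name, vals in stats.items()}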
Table 3. Joint estimation of the NKM model assuming that the revision processes are well-behaved. J_T(θ) = 13.1120. Standard errors in parentheses.

Policy      Estimate     Shock       Estimate      Revision    Estimate
parameter                parameter                 parameter
ρ           0.9172       ρ_χ         0.9815        σ_rπ        0.0032
            (0.0100)                 (0.0380)                  (3.8e-04)
ψ_1         1.0002       ρ_z         0.8865        σ_rx        2.1e-04
            (0.1113)                 (0.0126)                  (3.8e-05)
ψ_2         0.6618       σ_χ         1.6e-04
            (0.0973)                 (8.9e-05)
ρ_v         0.8414       σ_z         2.5e-04
            (0.0215)                 (4.0e-05)
                         σ_v         6.0e-05
                                     (1.2e-05)
Finally, Figures 2-4 show the impulse-responses of the endogenous variables of the extended NKM model (11) to a productivity
shock, an inflation shock and a monetary policy shock, respectively,
using the estimates displayed in Table 2. In these figures the solid
line represents the impulse response implied by the NKM model augmented with revision processes, whereas the dashed lines are the corresponding 95% confidence bands. Moreover, the diamond-dashed
line represents the impulse response implied by the model under H0
Table 4. Output growth revision process analysis. Simulated series.

Panel A: Summary Statistics (percentiles across simulated series)
                    Av.Coef      1        5        10       50       90       95       99
Mean                 0.000    -0.061   -0.041   -0.031    0.001    0.032    0.040    0.050
Median              -0.001    -0.245   -0.176   -0.144   -0.004    0.142    0.179    0.227
Min                 -3.515    -5.323   -4.633   -4.325   -3.428   -2.801   -2.696   -2.341
Max                  3.511     2.410    2.674    2.861    3.440    4.279    4.655    5.094
St. dev              1.403     1.156    1.241    1.276    1.400    1.540    1.572    1.666
Noise/Signal         5.583     4.451    4.793    4.929    5.565    6.250    6.495    6.735
Corr. with initial   0.462     0.319    0.362    0.382    0.467    0.531    0.553    0.593
AC(1)               -0.248     0.478    0.956    1.362    2.403    3.222    3.396    3.618***
E(r_t) = 0 t-stat              0.003    0.020    0.031    0.190    0.445    0.506    0.641

Panel B: Conditional Mean (percentile t-stats)
                         Av.Coef      1        5        10       50       90       95       99
const                     0.562     3.578    4.296    4.745    6.719    9.524   10.487   12.129***
(y^r_t - y^r_{t-1})*400  -0.895    49.167   56.405   60.062   75.245   93.617  100.849  117.126***
(π^r_t)*400              -0.219     3.379    4.398    4.842    7.035    9.728   10.852   12.726***
F_{3,90}                         1163.8   1355.1   1427.1   1873.7   2449.3   2693.5   2967.4***
Table 5. Inflation revision process analysis. Simulated series.

Panel A: Summary Statistics (percentiles across simulated series)
                    Av.Coef      1        5        10       50       90       95       99
Mean                -0.047    -0.101   -0.080   -0.073   -0.046   -0.023   -0.017    0.001
Median              -0.046    -0.100   -0.081   -0.075   -0.046   -0.020   -0.012    0.004
Min                 -0.305    -0.429   -0.380   -0.364   -0.301   -0.247   -0.235   -0.209
Max                  0.208     0.117    0.135    0.155    0.204    0.271    0.297    0.344
St. dev              0.102     0.084    0.089    0.092    0.102    0.113    0.118    0.123
Noise/Signal         0.151     0.123    0.130    0.135    0.151    0.165    0.169    0.175
Corr. with initial   0.994     0.991    0.992    0.993    0.995    0.996    0.996    0.997
AC(1)                0.350     0.940    1.719    2.089    3.178    4.012    4.264    4.525***
E(r_t) = 0 t-stat              0.361    1.157    1.564    3.286    5.553    6.219    7.383

Panel B: Conditional Mean (percentile t-stats)
                         Av.Coef      1        5        10       50       90       95       99
const                    -0.353     6.205    7.766    8.510   11.335   15.948   17.125   19.365***
(y^r_t - y^r_{t-1})*400   0.004     0.034    0.087    0.224    1.043    2.356    2.828    3.682
(π^r_t)*400               0.119     5.485    6.768    7.573   10.071   13.648   14.961   17.867***
F_{3,90}                           16.987   22.059   24.481   37.363   54.693   59.024   68.454***
(i.e., using the estimates displayed in Table 3). The size of the shock in each case is determined by its estimated standard deviation. In all cases, we observe that the size of the impulse responses is larger under H_0, and is significantly different (i.e., the impulse response lies outside the confidence bands) in the cases of the interest rate response to any shock and the responses of the output gap and inflation to a productivity shock. More importantly, the interest rate response to a restrictive (positive) monetary policy innovation under H_0 is quite unrealistic because it leads to a negative response of the interest rate, driven by the large negative responses of both the output gap and inflation, which dominate the positive effect induced by the policy innovation.
5 CONCLUSIONS
This paper suggests an augmented version of the New Keynesian Monetary (NKM) model which contemplates revision processes of output and inflation data in order to (i) test whether initial announcements are a rational forecast of revised data in the context of a structural model, and (ii) assess the influence of deviations of real-time data from being a rational forecast of revised data on the estimated monetary policy rule parameters.
The estimation results show that the policy inertia parameter in the policy rule becomes even larger, whereas policy shock persistence substantially decreases, once we allow for the possibility that the initial announcements are not a rational forecast of the revised data. Moreover, the estimation results indeed show that many revision process parameters are significant, suggesting that real-time data are not rational forecasts. In particular, the coefficient of inflation in the output revision equation is large and significant. Furthermore, the estimated standard deviation of the innovations associated with output revisions is much larger than that of the inflation revision process innovations.
We find that the estimated policy rule parameters based on the structural estimation of the extended NKM model depend on whether or not the assumption of well-behaved revision processes is imposed. This result, together with the empirical evidence of non-well-behaved revision processes provided in this paper and by Aruoba (2008), suggests strong evidence that policymakers' decisions are most likely determined by the availability of output and inflation data at the time of policy implementation.

Fig. 2. Impulse response to a productivity shock
Fig. 3. Impulse response to an inflation-push shock
Fig. 4. Impulse response to a monetary policy shock
References
1. Aruoba, S. Borağan (2008) "Data revisions are not well-behaved," Journal of Money, Credit and Banking 40, 319-340.
2. Clarida, Richard, Jordi Galí, and Mark Gertler (2000) "Monetary policy rules and macroeconomic stability: evidence and some theory," Quarterly Journal of Economics 115, 147-180.
3. Croushore, Dean, and Tom Stark (2001) "A real-time data set for macroeconomists," Journal of Econometrics 105, 111-130.
4. Diebold, Francis X., and Glenn D. Rudebusch (1991) "Forecasting output with the composite leading index: an ex ante analysis," Journal of the American Statistical Association 86, 603-610.
5. Duffie, D., and K. J. Singleton (1993) "Simulated moments estimation of Markov models of asset prices," Econometrica 61, 929-952.
6. English, William B., William R. Nelson, and Brian P. Sack (2003) "Interpreting the significance of the lagged interest rate in estimated monetary policy rules," The B.E. Journal in Macroeconomics, Contributions to Macroeconomics 3(1), Article 5.
7. Gerlach-Kristen, Petra (2003) "Interest rate smoothing: monetary policy or unobserved variables?" The B.E. Journal in Macroeconomics, Contributions to Macroeconomics 4(1), Article 3.
8. Ghyssels, Eric, Norman R. Swanson, and Myles Callan (2002) "Monetary policy rules and data uncertainty," Southern Economic Journal 69, 239-265.
9. Gouriéroux, C., E. Renault, and N. Touzi (2000) "Calibration by simulation for small sample bias correction," in Mariano, R., T. Schuermann and M. Weeks (eds.), Simulation-Based Inference in Econometrics: Methods and Applications. Cambridge University Press, Cambridge.
10. Kavajecz, Kenneth A., and Sean S. Collins (1995) "Rationality of preliminary money stock estimates," Review of Economics and Statistics 77, 32-41.
11. Lee, B. S., and B. F. Ingram (1991) "Simulation estimation of time-series models," Journal of Econometrics 47, 197-205.
12. Lubik, Thomas A., and Frank Schorfheide (2003) "Computing sunspot equilibria in linear rational expectations models," Journal of Economic Dynamics and Control 28, 273-285.
13. Mankiw, N. Gregory, David E. Runkle, and Matthew D. Shapiro (1984) "Are preliminary announcements of the money stock rational forecasts?" Journal of Monetary Economics 14, 15-27.
14. Maravall, Agustín, and David A. Pierce (1986) "The transmission of data noise into policy noise in US monetary control," Econometrica 54, 961-979.
15. María-Dolores, Ramón, and Jesús Vázquez (2006) "How the New Keynesian monetary model fits in the US and the Euro area? An indirect inference approach," The B.E. Journal in Macroeconomics, Topics in Macroeconomics 6(2), Article 9.
16. María-Dolores, Ramón, and Jesús Vázquez (2008) "Term structure and estimated monetary policy rule in the Eurozone," Spanish Economic Review 10, 251-277.
17. Orphanides, Athanasios (2001) "Monetary policy rules based on real-time data," American Economic Review 91, 964-985.
18. Rudebusch, Glenn D. (2002) "Term structure evidence on interest rate smoothing and monetary policy inertia," Journal of Monetary Economics 49, 1161-1187.
19. Sims, Christopher A., and Tao Zha (2006) "Were there regime switches in U.S. monetary policy?" American Economic Review 96, 54-81.
20. Smith, Anthony A. (1993) "Estimating nonlinear time-series models using simulated vector autoregressions," Journal of Applied Econometrics 8, S63-S84.
21. Smith, Anthony A. (2008) "Indirect inference," The New Palgrave Dictionary of Economics, 2nd Edition (forthcoming).
22. Trivellato, U., and E. Rettore (1986) "Preliminary data errors and their impact on the forecast error of simultaneous equation models," Journal of Business and Economic Statistics 4, 445-453.
23. Walsh, Carl E. (2003) Monetary Theory and Policy. The MIT Press, Cambridge, Massachusetts.
APPENDIX

The matrices of the LRE system (11) are

\Gamma_0 =
\begin{pmatrix}
 1 & 0 & \tau & -1 & -\tau & \phi(1-\rho_\chi) & 0 & 0 & 0 & 0 & 0 & 0 \\
 -\kappa & 1 & 0 & 0 & -\beta & 0 & -1 & 0 & 0 & 0 & 0 & 0 \\
 0 & 0 & 1 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 \\
 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & -1 & 0 \\
 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & -1 \\
 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -b_{xx} & -b_{x\pi} & 1 & 0 \\
 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -b_{\pi x} & -b_{\pi\pi} & 0 & 1 \\
 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{pmatrix},

\Gamma_1 =
\begin{pmatrix}
 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
 0 & 0 & \rho & 0 & 0 & 0 & 0 & 0 & \Gamma_1^{3,9} & \Gamma_1^{3,10} & 0 & 0 \\
 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
 0 & 0 & 0 & 0 & 0 & \rho_\chi & 0 & 0 & 0 & 0 & 0 & 0 \\
 0 & 0 & 0 & 0 & 0 & 0 & \rho_z & 0 & 0 & 0 & 0 & 0 \\
 0 & 0 & 0 & 0 & 0 & 0 & 0 & \rho_v & 0 & 0 & 0 & 0 \\
 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{pmatrix},

where \Gamma_1^{3,9} = (1-\rho)\psi_2 and \Gamma_1^{3,10} = (1-\rho)\psi_1, and

\Psi =
\begin{pmatrix}
 0 & 0 & 0 & 0 & 0 \\
 0 & 0 & 0 & 0 & 0 \\
 0 & 0 & 0 & 0 & 0 \\
 0 & 0 & 0 & 0 & 0 \\
 0 & 0 & 0 & 0 & 0 \\
 0 & 0 & 0 & 1 & 0 \\
 0 & 0 & 0 & 0 & 1 \\
 1 & 0 & 0 & 0 & 0 \\
 0 & 1 & 0 & 0 & 0 \\
 0 & 0 & 1 & 0 & 0 \\
 0 & 0 & 0 & 0 & 0 \\
 0 & 0 & 0 & 0 & 0
\end{pmatrix},
\qquad
\Pi =
\begin{pmatrix}
 0 & 0 \\
 0 & 0 \\
 0 & 0 \\
 0 & 0 \\
 0 & 0 \\
 0 & 0 \\
 0 & 0 \\
 0 & 0 \\
 0 & 0 \\
 0 & 0 \\
 1 & 0 \\
 0 & 1
\end{pmatrix}.