
Journal of Economic Dynamics & Control 24 (2000) 1097-1119

A solution method for consumption decisions
in a dynamic stochastic general
equilibrium model
J.A. Sefton*
National Institute of Economic and Social Research, 2, Dean Trench Street, Smith Square,
London SW1P 3HE, UK

Abstract
In this paper we describe a numerical solution of the consumer's life-cycle problem based on value function iteration. The advantage of our approach is that it retains the versatility of the value function iteration approach and achieves a high degree of accuracy without resorting to the very computationally burdensome task of calculating a very fine grid. There are two innovations. The first is not to discretise the state space but effectively to allow the states to take any value on the real line, by using two different third-order interpolation algorithms: bicubic spline for extrapolation and interpolation on the edge of the grid, and the faster cubic convolution interpolation for inside the grid. The second is to compute a pair of nested grids, one coarse and one fine. The fine grid is used to calculate the consumption paths of the majority of individuals, and the coarse grid to catch only the few with very high incomes. We shall discuss our approach in relation to those already in the literature. We shall argue that the value function iteration approach is probably the most flexible and robust way to solve these problems. We shall show that our implementation achieves a high degree of accuracy, using a modified den Haan and Marcet simulation accuracy test, without compromising significantly on speed. © 2000 Elsevier Science B.V. All rights reserved.
JEL classification: C68; D58

* Fax: 0171-222-1435. The author would like to thank Martin Weale, Jayasri Dutta, David Miles and Kjetil Storesletten for their help, as well as all the participants of the CORE symposium on General Equilibrium Models. The research was funded by E.S.R.C. grant No. R000236788.
E-mail address: jsefton@niesr.ac.uk (J.A. Sefton)



Keywords: Computational methods; Value function iteration; Heterogeneous agent models

1. Introduction
In this paper we describe a numerical solution of the consumer's life-cycle problem based on value function iteration. The advantage of our approach is that it retains the versatility of the value function grid approach: it can easily accommodate increasing complexities such as liquidity constraints, margins between interest rates on lending and borrowing, means-tested benefits, uncertain life length, etc. At the same time it achieves a high degree of accuracy without resorting to the computationally burdensome (and therefore time-consuming) task of calculating a very fine grid. The motivation for solving this consumer problem efficiently and accurately is to be able to solve a general equilibrium model with as near as possible a continuum of consumers.
We have made two innovations to the basic value function iteration approach as described in Taylor and Uhlig (1990, p. 2). The first is effectively to allow the states of the problem to lie anywhere on the real line, rather than at a set of discrete grid points, by using two different third-order interpolation algorithms: bicubic spline for extrapolation and interpolation on the edge of the grid, and the faster cubic convolution interpolation for inside the grid. The second is to compute a pair of nested grids, a coarse one and a fine one. The fine grid is used to calculate the consumption paths of the majority of individuals, and the coarse grid to catch the few very wealthy, or productive, or just plain lucky ones.

Any numerical solution algorithm must be assessed according to its accuracy, speed, flexibility and robustness. We discuss these concepts in detail and show how the performance of other algorithms in the literature can be assessed according to these criteria. We argue that our algorithm compromises slightly on speed, but gains in both flexibility and robustness. We assess its accuracy in detail by using the den Haan and Marcet test of simulation accuracy. This is a powerful test which has been used previously by den Haan and Marcet (1994) and Campbell and Koo (1997) to rank solutions to stochastic steady-state problems according to their accuracy. We have adapted this test slightly so that its results now include an absolute measure of accuracy as well as a comparative one. These tests all suggest that our solution algorithm attains a very high degree of accuracy.
The structure of the paper is as follows. Section 2 describes the model we wish to solve. Section 3 discusses our assessment criteria and discusses the other solution algorithms in the literature with reference to these criteria. Section 4 describes our algorithm in detail and discusses its speed, flexibility and robustness, and finally Section 5 tests the accuracy of our algorithm using the den Haan and Marcet (1990) test.




2. The general equilibrium model
The economy consists of n individuals (in the simulation results presented here there are 5000 individuals). Each individual is uncertain about his lifespan. The life of an individual starts at age 20, $\tau = 0$, and will always end at or before age 90, $\tau = 70$. We shall denote the conditional probability of dying at the end of period $\tau$, given that the individual has survived to the beginning of that period, as $p_\tau$, implying of course that $p_{70} = 1$. Therefore the probability that an individual will survive another $i$ years from period $\tau$, $\phi_{\tau,i}$, is simply the cumulative product of the conditional survival probabilities,
$$\phi_{\tau,i} = \prod_{j=0}^{i-1} (1 - p_{\tau+j}).$$
On the supply side, factor prices are set equal to the marginal products of the aggregate Cobb-Douglas technology, with capital share $\alpha$ and depreciation rate $\delta$,
$$r_t = \alpha \left(\frac{K_t}{\bar{h}L_t}\right)^{\alpha-1} - \delta, \qquad s_t = (1-\alpha)\left(\frac{K_t}{\bar{h}L_t}\right)^{\alpha},$$
where $K_t$ is the aggregate capital stock and $\bar{h}L_t$ the effective labour supply, so that the demand and supply sides are in equilibrium. This problem therefore resembles in structure most of the general equilibrium models in the literature that have generalised away from the representative consumer.
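As a concrete illustration of these objects, the sketch below computes the survival probabilities and the two factor prices in Python. The hazard schedule p and all parameter values are hypothetical placeholders; the model pins down only $p_{70} = 1$.

import numpy as np

# Illustrative demographic block: p[tau] is the probability of dying at
# the end of period tau, given survival to its beginning (placeholder values).
T = 71                                   # tau = 0,...,70 (ages 20 to 90)
p = np.linspace(0.005, 1.0, T)
assert p[-1] == 1.0                      # death is certain by tau = 70

def survival_prob(tau, i):
    # phi_{tau,i}: probability of surviving a further i years from period tau.
    return float(np.prod(1.0 - p[tau:tau + i]))

def factor_prices(K, hL, alpha=0.3, delta=0.05):
    # Marginal products of the Cobb-Douglas technology: the interest rate
    # and the wage per efficiency unit of labour.
    r = alpha * (K / hL) ** (alpha - 1.0) - delta
    s = (1.0 - alpha) * (K / hL) ** alpha
    return r, s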
The model of the income dynamics has an important consequence: the value function is not homogeneous in its two arguments, because the expected variance of income is related to its absolute level. Dutta et al. (1997) show that this complexity is necessary if one is to model effectively the distribution of incomes in the United Kingdom. However, it is not difficult to think of other market attributes or imperfections that would imply that consumption depends on the absolute level of income and wealth rather than, more simply, their ratio. Some examples are the consumption floor in Hubbard et al. (1994), the means-tested benefit described in Hubbard et al. (1995), or a required minimum level of assets before access to a particular asset market is possible.1
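To see what is being lost here, recall the standard scaling argument for CRRA preferences; the following restatement is ours, for illustration, and not an equation from the paper. With $u(c) = c^{1-\gamma}/(1-\gamma)$ and a budget set that scales linearly in wealth and income,
$$V_t(\lambda w, \lambda y) = \lambda^{1-\gamma}\, V_t(w, y) \qquad \text{for all } \lambda > 0,$$
since $u(\lambda c) = \lambda^{1-\gamma} u(c)$ and the feasible consumption plans scale with $\lambda$. A consumption floor, a fixed means-test threshold, or income risk tied to the absolute level of income each introduce a constant that does not scale with $\lambda$, so the homogeneity fails and the value function must be computed over both arguments separately.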

3. Numerical solution algorithms
The general equilibrium model described in the previous section is solved by iterating between the supply and demand sides of the economy. First, a set of prices is proposed; given these prices, one solves the demand side of the economy for the implied desired levels of aggregate wealth and consumption. Given these levels for the aggregate variables, one can then solve the supply side for a new set of implied prices. The process is repeated until a fixed point is found. The speed of convergence of the algorithm can be increased dramatically by introducing some damping into the interest rate iterations.
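A minimal sketch of this outer loop, assuming hypothetical demand_side and supply_side routines standing in for the two model blocks just described:

def solve_equilibrium(r0, demand_side, supply_side, damp=0.5, tol=1e-6, max_iter=200):
    r = r0
    for _ in range(max_iter):
        wealth, cons = demand_side(r)        # desired aggregates at these prices
        r_new = supply_side(wealth, cons)    # prices implied by the supply side
        if abs(r_new - r) < tol:
            return r_new
        r = damp * r + (1.0 - damp) * r_new  # damped interest rate update
    raise RuntimeError("no fixed point found within max_iter iterations")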
The time-consuming and difficult part of the algorithm is solving the demand side given a set of prices, and this is the area on which we wish to concentrate for the majority of this section. In Table 1 we attempt a summary taxonomy of solution methods for this problem.

1 Meade (1976, pp. 173-175) suggested that this is a cause of the high level of dispersion in the wealth holdings of households.


Table 1
A taxonomy of solution methods

                             Euler equation                  Value function

Grid methods                 Hubbard et al. (1995);          Christiano (1990)
                             Baxter et al. (1990);
                             Attanasio et al. (1995)
LQ approximation             Christiano (1990)               King et al. (1988)
Parametrized expectations    den Haan and Marcet (1990)

3.1. Desirable properties of any solution algorithm

Any numerical solution algorithm is a compromise between the following considerations:
(1) Accuracy,
(2) Speed,
(3) Flexibility,
(4) Robustness.

The first point needs little elaboration. Some degree of accuracy is obviously a prerequisite for the algorithm to be of any use; however, it is often difficult to ascertain precisely the accuracy of an algorithm. We shall discuss this in detail in Section 5.
Speed is always a consideration in this area, as general equilibrium models are exceptionally computationally burdensome, and it is quite possible for the solution of a general equilibrium model to take some days. This is one of the reasons that some authors have decided to approximate the non-linear optimisation problem by a Linear Quadratic Gaussian one.2 The final two criteria, though slightly more abstract, should also be taken into account in the design phase. A flexible algorithm is one that can solve a wide variety of problems without substantial redesigning. This criterion obviously becomes more important the more experiments the researcher wishes to undertake. Finally, a robust algorithm is one that has two desirable properties: the first is that it converges from a wide variety of initial guesses, and the second is that an error in one time period does not propagate, causing larger errors or even instability later.
We shall now argue, on the basis of these criteria, that the value function grid method is probably the best way to solve the demand side, or consumer choice, problem in general equilibrium models. Any solution procedure must solve either the Bellman equation (4) or the derived first-order conditions, the Euler equations.

2 The other principal reason is analytical simplicity. This motivation cannot be criticised from the computational viewpoint.


The former is a multivariable minimisation problem, whereas the latter involves solving a set of non-linear equations. Press et al. (1993, p. 373) argue that, in terms of their computational burden, the two problems, if well defined, are almost identical. However, on the basis of both the flexibility and robustness criteria, an approach based on solving the Bellman equation must be regarded as superior. To derive the first-order conditions, one has to assume differentiability of the value function with respect to the choice variable on its closed feasible domain. Therefore one has already assumed away an interesting class of problems where the value function may not be smooth, for example where there is means testing up to some maximum limit, or different rates of interest for borrowing and lending, etc.3
The argument concerning robustness is slightly more involved. Solving the consumer's decision problem using the Bellman equation rather than the Euler equation can be shown to be more robust under some very weak conditions. The conditions are the same as those required to guarantee that the value function will tend to some finite limit as one iterates further back in time. The argument is as follows: if these conditions are satisfied, then we know the value function will tend to the same limit as we iterate backward, whatever the initial starting point. Therefore if there is an error in the grid at any point, this error must slowly be attenuated as we iterate back in time. Conversely, with the Euler equation approach, we know that under the same conditions the ratio of consumption in any two periods will tend to a constant. Therefore if there is an error in the calculated level of consumption in any period, this will not be attenuated but propagated backwards, so that the ratio of the consumption levels in any two periods remains roughly constant. The result of the propagation of this error is that when we calculate the consumption path of a consumer by 'running' the consumer through our grids, there will be a consistent error in the growth rate of his or her consumption path and not just in the levels. This is so even though at each point in time the Euler equations are almost satisfied.
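The attenuation claim is the standard contraction property of the Bellman operator; the following compact restatement is ours, not the paper's notation. For any two candidate value functions $V$ and $W$,
$$\|TV - TW\|_\infty \le \beta\, \|V - W\|_\infty, \qquad 0 < \beta < 1,$$
so an error of size $\varepsilon$ introduced into the grid $k$ periods ahead can perturb today's value function by at most $\beta^k \varepsilon$. The Euler equation, by contrast, pins down only the ratio $c_{t+1}/c_t$, so a level error in consumption at one date is inherited one-for-one by every earlier date.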
Therefore, based on the criteria of flexibility and robustness, we are persuaded that the solution method must be based on solving the Bellman equation rather than the Euler equations. A very popular approach has then been to linearise the Bellman equation and solve the corresponding Linear Quadratic model. This approach seems to work for typical real business cycle models or representative agent models, where the actual solution path is usually close to the equilibrium path. However, in a general equilibrium model where the consumption paths of each individual are far more volatile than the aggregate, the linear approximation fares less well. The other problem is that the solution to

3 It would of course be possible to solve the Euler equation over different domains of the choice variable where the value function is smooth and then take the maximum over the different domains, but this leads to unacceptable complexity in all but the simplest problems (Deaton, 1991).


these models exhibits the property of certainty equivalence and so cannot be used to study precautionary savings.4
We have therefore eliminated all but two of the possible solution techniques. As far as we are aware, there has as yet been no work done on parametrising the value function, but there is no reason why this should be any harder than parametrising the expectation of the choice variable as in den Haan and Marcet (1990). Choosing between these two is more difficult. If the value function were not smooth, then the grid method would probably be preferable, because it is clearly difficult to approximate non-smooth functions with polynomials to any degree of accuracy. However, as the number of states increases, the grid method becomes computationally very expensive, because the number of grid points increases with the power of the number of states. An ideal might be a combination of the two, where the grid dimensions correspond to those states with respect to which the value function is not smooth. Then at each of these grid points the value function is parametrised with respect to the remaining states, with respect to which it is smooth.

4. Our solution procedure
We have argued that any approach based on Euler equations is unlikely to be particularly flexible. We now present a grid method approach to solving the value function or Bellman equation. Its principal advantage over a method based on parametrising the value function is that it can cope easily with cases where the value function is not smooth. Its drawback is that as soon as there is more than one state, the computational burden can become insurmountable. The reason for this is that in the literature it has always been recommended that, to achieve any sort of accuracy, the grid must be relatively finely spaced. The number of points at which the grid is calculated goes up with the power of the number of states. Therefore if each of the domains of the two states is discretised to 100 points, one is soon solving 10,000 optimisation problems to calculate a grid for each time period. If each household lives for 100 years, then for a simulation of 100 years (where it would be necessary to calculate a new set of grids for those born in each year), we would be solving $1 \times 10^8$ optimisation problems for each solution of the demand side! If one also takes into account that calculating $E_t V_{t+1}(w_{t+1}, y_{t+1})^{(1-\gamma)}$ for each iteration of the optimisation problem requires some form of numerical integration, then the size of the problem is all too clear. The major contribution of this paper is to suggest an algorithm that makes it possible to calculate an accurate solution using a robust and flexible approach.

4 This drawback, though, can be removed without too much extra complication by using an exponential linear quadratic approximation, as in Whittle (1990).


Fig. 1. The layout of the grids.

It is computationally possible to calculate only a coarse grid in a reasonable period of time. We therefore have to make some use of interpolation. Our choice of the structure of the grid is based on the following two general observations, which are true for most equilibrium models:
1. The majority of individuals (say about 90%) are clustered in a very small area of the state space.
2. The value function is significantly smoother in the regions of the state space where either the income or the wealth of the individual is large, or equivalently where the value function is large.
We therefore calculate two grids for each time period: a coarse grid for the lower values of the value function and a very coarse grid for the higher values. The two grids are nested so as to provide continuous coverage of the state space, see Fig. 1. The grid intervals were chosen so that 90% of individuals would have labour income and wealth holdings falling within the span of the finer grid.

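A minimal sketch of this layout; the spans and grid sizes below are hypothetical placeholders chosen purely for illustration.

import numpy as np

# Fine grid: small span holding roughly 90% of individuals.
w_fine = np.linspace(0.0, 50.0, 40)       # wealth, fine spacing
y_fine = np.linspace(0.0, 5.0, 40)        # labour income, fine spacing
# Coarse grid: much wider span with far fewer nodes.
w_coarse = np.linspace(0.0, 500.0, 15)
y_coarse = np.linspace(0.0, 50.0, 15)

def pick_grid(w, y):
    # Most individuals fall inside the fine grid; the coarse grid catches
    # the very wealthy, the very productive, and the plain lucky.
    if w <= w_fine[-1] and y <= y_fine[-1]:
        return w_fine, y_fine
    return w_coarse, y_coarse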

As both grids were coarse compared to those of other algorithms in the literature, we placed considerable reliance on the interpolation routines.
4.1. Interpolation routines
We used two different interpolation routines within our algorithm: a modification of the bicubic spline interpolation algorithm of Press et al. (1993), and the cubic convolution algorithm of Keys (1981). Both are third-order algorithms (by this we mean that both will interpolate a third-order polynomial on a regularly spaced grid with no error), but they have very different strengths and weaknesses.
4.1.1. Bicubic spline interpolation
This algorithm has two strengths which we found particularly useful: it can be adapted in order to extrapolate outside the grid, and the grid intervals need not be equal. Its disadvantage is that it is very slow in comparison to cubic convolution. We shall now describe briefly how we adapted this algorithm so that it could be used effectively for extrapolation as well as interpolation. The discussion is in one dimension only (to interpolate a function in more than a single dimension simply requires that the algorithm be repeated for each dimension). Let $(x_i, y_i)$, $i = 1, \dots, n$, be the $n$ couplets of nodes and function values to be interpolated. The idea behind the algorithm is to fit $n-1$ cubic splines or polynomials $f_i(x)$ ($f_i(x)$ is the spline on the interval $[x_i, x_{i+1}]$), one to each interval, that satisfy the conditions
1. that the spline takes the actual grid value at the grid nodes, $f_i(x_i) = y_i$ and $f_i(x_{i+1}) = y_{i+1}$;
2. that the first and second differentials of the splines in adjacent intervals agree at the boundary, $f_i'(x_{i+1}) = f_{i+1}'(x_{i+1})$ and $f_i''(x_{i+1}) = f_{i+1}''(x_{i+1})$.
These conditions, in conjunction with a terminal condition for the first differential of the spline on the edge of the grid, are sufficient to describe a unique set of splines. However, it is the terminal conditions that must be chosen with care if one is going to be able to use these splines to extrapolate outside the grid. Our condition, which is sufficient to determine uniqueness of the splines but 'bends' the splines the least, is to require that the third differentials of the splines in the penultimate and ultimate intervals are equal, $f'''_{n-1} = f'''_{n-2}$ and $f'''_2 = f'''_1$.
To illustrate why this terminal condition is the most suitable, assume for the moment that each spline is described by the polynomial $f_i(x) = s_i + t_i x + u_i x^2 + v_i x^3$, and that the underlying function is in fact a third-order polynomial $y = s + tx + ux^2 + vx^3$. The interpolating conditions are sufficient to ensure that the spline $f_{n-2}(x)$ is equal to the underlying polynomial. Therefore, by these same conditions, the coefficients $s$, $t$, $u$ of the interpolating spline $f_{n-1}(x)$ are also correctly estimated. Our terminal condition, if enforced, will imply that $f'''_{n-1} = 6v_{n-1} = f'''_{n-2} = 6v_{n-2} = 6v$, and therefore each spline will still be a perfect estimate of the underlying polynomial, even at the edges of the grid. These splines can therefore be used for extrapolation outside the grid as well as interpolation within the grid.
The polynomial $f_i(x)$ has the following functional form:
$$f_i(x) = a f(x_i) + b f(x_{i+1}) + \big[(a^3 - a) f''(x_i) + (b^3 - b) f''(x_{i+1})\big]\frac{h^2}{6},$$
where $a = (x_{i+1} - x)/(x_{i+1} - x_i)$ is the distance, measured in intervals, of the independent variable from the upper node of the interval, $b = (x - x_i)/(x_{i+1} - x_i)$ is the corresponding distance from the lower node, and $h = x_{i+1} - x_i$ is the interval length.
Clearly $f_i(x_i) = y_i$ and $f_i(x_{i+1}) = y_{i+1}$, as required if the spline is to satisfy the first of the above conditions. To find the values of the second differential of the spline at the nodes such that the second and terminal conditions are satisfied, it is necessary to solve the following set of linear equations:

C

(x !2x #x ) (x !2x #x )
2
1
3
1
2
! 3
6
6
x !x
x !x
2
1
3
1
6
3

x !x
3
2
6

}

}
x !x
n~2
n~1
6

}

C DC D
f A (x )
1 1

y !y
x !x y !y
1
2
1 3
2! 2
x !x
x !x x !x
2
1
3
1 3
2
y !y
y !y
1
3
2! 2
x !x
x !x
2
1
3
2

A

f A (x )
2 2

f A (x )
3 3

]

F

f A (x )
n~2 n~2
f A (x )
n~1 n~1
f A (x )
n~1 n

"

B

F

y !y
y !y
n
n~1 ! n~1
n~2
x !x
x !x
n
n~1
n~1
n~2
y !y
x !x
y !y
n~2
n
n~1 n
n~1 ! n~1
x !x
x !x
x !x
n~1
n~2
n
n~2 n
n~1

A

D

x !x
x !x
n
n~2
n
n~1
3
6
!(x !2x #x ) (x !2x #x )
n~1
n
n~2
n
n~1
n~2
6
6

.

B

The equations have been written in this tridiagonal manner as it enables them to be solved efficiently by back substitution.
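Under our reading of the system above, a minimal Python sketch of the construction is as follows; the function names and the use of scipy.linalg.solve_banded are ours, not the paper's.

import numpy as np
from scipy.linalg import solve_banded

def spline_second_derivs(x, y):
    # Solve the tridiagonal system above for f''(x_1), ..., f''(x_n).
    n = len(x)
    h = np.diff(x)                 # interval widths (need not be equal)
    d = np.diff(y) / h             # first divided differences
    A = np.zeros((3, n))           # banded storage for solve_banded
    rhs = np.zeros(n)
    for j in range(1, n - 1):      # interior smoothness conditions
        A[2, j - 1] = h[j - 1] / 6.0
        A[1, j] = (h[j - 1] + h[j]) / 3.0
        A[0, j + 1] = h[j] / 6.0
        rhs[j] = d[j] - d[j - 1]
    # Terminal condition f'''_1 = f'''_2 at the lower edge, f''(x_3) eliminated.
    A[1, 0] = (h[0] - h[1]) / 6.0
    A[0, 1] = (2.0 * h[0] + h[1]) / 6.0
    rhs[0] = h[0] * (d[1] - d[0]) / (h[0] + h[1])
    # Symmetric terminal condition at the upper edge.
    A[2, n - 2] = (h[-2] + 2.0 * h[-1]) / 6.0
    A[1, n - 1] = (h[-1] - h[-2]) / 6.0
    rhs[-1] = h[-1] * (d[-1] - d[-2]) / (h[-2] + h[-1])
    return solve_banded((1, 1), A, rhs)

def spline_eval(x, y, f2, xq):
    # Evaluate the spline at xq; points beyond the grid reuse the end
    # splines, which is exactly the extrapolation property argued above.
    i = int(np.clip(np.searchsorted(x, xq) - 1, 0, len(x) - 2))
    hx = x[i + 1] - x[i]
    a = (x[i + 1] - xq) / hx
    b = (xq - x[i]) / hx
    return (a * y[i] + b * y[i + 1]
            + ((a**3 - a) * f2[i] + (b**3 - b) * f2[i + 1]) * hx**2 / 6.0)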


As this algorithm involves an $n \times n$ matrix inversion, it will be relatively slow. We therefore used it only when the points to be interpolated were near to or outside the grid, where it was either inadvisable or infeasible to use the cubic convolution algorithm described below.
4.1.2. Cubic convolution interpolation
This is the algorithm developed by Keys (1981) for digital image processing. It works by postulating an interpolating function of the form
$$f(x) = \sum_i c_i\, u\!\left(\frac{x - x_i}{h}\right),$$
where the $u$ are kernel functions. These are defined to be zero in all but a small region around a particular node. By imposing a careful set of conditions on where the functions are zero and on their continuity and smoothness, it is possible to express the coefficients of these kernel functions as a linear function of the $(x_i, y_i)$ couplets. The actual interpolation is therefore done by a simple set of linear operations. It is this which makes it very fast; furthermore, it is almost as accurate as the bicubic spline interpolation away from the edges of the grid (Keys, 1981).
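A minimal one-dimensional sketch of Keys' scheme, using the standard cubic convolution kernel (the a = -1/2 kernel of Keys, 1981); the routine names are ours. On a regularly spaced grid this reproduces any cubic polynomial exactly, which is the third-order property referred to above.

import numpy as np

def keys_kernel(s):
    # Piecewise-cubic kernel: nonzero only within two intervals of a node.
    s = np.abs(s)
    return np.where(s < 1.0, 1.5 * s**3 - 2.5 * s**2 + 1.0,
           np.where(s < 2.0, -0.5 * s**3 + 2.5 * s**2 - 4.0 * s + 2.0, 0.0))

def cubic_convolution(x0, h, y, xq):
    # Left node of the interval containing xq, and fractional position in it.
    j = int((xq - x0) // h)
    s = (xq - (x0 + j * h)) / h
    # Four surrounding node values; valid only where the full stencil fits,
    # i.e. away from the grid edges (see the changeover rule below).
    nodes = y[j - 1:j + 3]
    weights = keys_kernel(s - np.array([-1.0, 0.0, 1.0, 2.0]))
    return float(nodes @ weights)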
We therefore decided to use the cubic convolution method wherever it made sense, simply because it was much faster and nearly as accurate as the bicubic spline method, and to use the bicubic spline algorithm otherwise. The actual changeover point was chosen to be halfway across the intervals on the edge of the grid. This is illustrated in Fig. 2.
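Combining the two routines, a sketch of the changeover rule might read as follows. The switching boundary here is our assumption: it is taken half an interval inside the second and penultimate nodes, a conservative reading of the rule illustrated in Fig. 2, so that the plain four-point convolution stencil above always fits.

def interpolate(x, y, f2, xq):
    # Regular spacing is assumed for the cubic convolution routine.
    h = x[1] - x[0]
    lo, hi = x[1] + 0.5 * h, x[-2] - 0.5 * h
    if lo <= xq <= hi:
        return cubic_convolution(x[0], h, y, xq)
    return spline_eval(x, y, f2, xq)   # near or beyond the grid edges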
4.2. The algorithm
We have now described the components of the algorithm in detail, and in this section we link them together. The algorithm has the following structure:
1. The two grids for the final period, $T$, are constructed analytically. Both the value of the value function,