
Journal of Economic Behavior & Organization, Vol. 44 (2001) 71–83

Can agents learn their way out of chaos?

Martin Schönhofer
Department of Economics, University of Bielefeld, P.O. Box 100 131, D-33501 Bielefeld, Germany

Received 10 June 1998; received in revised form 22 September 1999; accepted 5 October 1999

Abstract

In an OLG-model with adaptive learning, the forecast errors of the agents are analyzed in regions where the resulting dynamical system behaves chaotically. The agents think they are living in a stochastic world. It is shown that they cannot reject the hypothesis that the mean and the autocorrelation coefficients of their forecast errors are zero; thus, they have no incentive to switch to another learning rule. Because of the limited statistical tools the agents possess (bounded rationality), their predictions cannot be unmasked as being wrong. These phenomena occur in the OLG-model even with a Cobb–Douglas utility function. © 2001 Published by Elsevier Science B.V.

JEL classification: D83; D84

Keywords: OLG-model; Forecast errors; Cobb–Douglas utility function

1. Introduction

Long ago, Kirman (1975) produced an example of least squares learning in which adaptive decision-makers converged to an estimate of the environmental feedback which indicated that no further improvements in behavior were possible, when in fact such improvements existed. A similar idea has been explored by Grandmont (1998) and Sorger (1994). On the other hand, Marcet and Sargent (1989) presented an example where least squares learning converged to competitive equilibria. In this paper, I define a general class of adaptive learning models in which least squares learning is a special case and show how it can be formulated as an autonomous dynamical system. I then show that least squares learning is consistent in the sense of Hommes and Sorger (1996), even though the agents face a deterministic environment. The agents mistakenly interpret the data as the result of a stochastic process, even though the data are generated by a completely deterministic process. Thus, agents using least squares learning cannot learn their way out of chaos.

2. Adaptive learning: a general framework

Böhm and Wenzelburger (1997, 1999) developed a framework in which adaptive learning can be formally defined.

2.1. Forecast feedback

The formal framework encompasses economic models with forecast feedback, in which the agents' decisions influence the time series they use for their estimation. Let X ⊂ R^n be the space of the endogenous economic variables x_t ∈ X, and let y_t ∈ Y ⊂ R^q be a vector of variables for which expectations are taken. The economic law is assumed to be given by the continuous map F : X × Y → X,

x_{t+1} = F(x_t, y^e_{t+1}).   (1)

Here y^e_{t+1} ∈ Y is the predicted value for y_{t+1}, generated by a predictor ψ : X → Y,

y^e_{t+1} = ψ(x_t).   (2)

Inserting (2) into the economic law (1), we get a discrete dynamical system on X, given by

x_{t+1} = F_ψ(x_t) := F(x_t, ψ(x_t)),  x_t ∈ X,  t = 0, 1, ...   (3)

Using this formal framework, Böhm and Wenzelburger (1997, 1999) show that perfect predictors need not exist even in simple economic models.

2.2. Forecast feedback with adaptive learning

Now let us relax the assumption of the time-independence of the predictors. Let the space of feasible predictors P be a subspace of the continuously differentiable functions C^1(X, Y). An adaptive learning rule chooses, on the basis of past realizations, a predictor from the functional space P.

Definition 1. An adaptive learning rule LR_t is a map which maps past realizations {x_i}_{i=0}^t into the set of predictors P,

ψ_t = LR_t({x_i}_{i=0}^t).

Adaptive learning, or an adaptive learning process, is a sequence of predictors {ψ_t}_{t=1}^∞ induced by a sequence {LR_t}_{t=1}^∞. This results in a non-autonomous dynamical system on X,

x_{t+1} = F_{ψ_t}(x_t) := F(x_t, ψ_t(x_t)),  x_t ∈ X,  t = 0, 1, ...   (4)

2.3. Least squares learning

Consider the special case of linear predictors P = L(R^n, R) with ψ(x_t) = β^T x_t, where β^T = (β_1, ..., β_n) is the transposed parameter vector β. At each period, β_t is determined with the adaptive learning rule
β_t = argmin_β Σ_{i=1}^t (y_i − β^T x_{i−1})²,   (5)

which yields a sequence of linear predictors {ψ_t} with

ψ_t(x_t) := β_t^T x_t,  x_t ∈ X.   (6)

The following proposition shows that with the adaptive learning rule (5) the dynamical system (4) is autonomous. Transformation of (5) leads to

β_t = (Σ_{k=1}^t x_{k−1} x_{k−1}^T)^{−1} Σ_{k=1}^t x_{k−1} y_k.   (7)

Eq. (7) can be written recursively. Define

R_t := Σ_{k=1}^t x_{k−1} x_{k−1}^T.

Then (7) implies that

Σ_{k=1}^{t−1} x_{k−1} y_k = R_{t−1} β_{t−1}.

Furthermore, R_{t−1} = R_t − x_{t−1} x_{t−1}^T, so that

β_t = R_t^{−1} (Σ_{k=1}^{t−1} x_{k−1} y_k + x_{t−1} y_t)
    = R_t^{−1} [R_{t−1} β_{t−1} + x_{t−1} y_t]
    = R_t^{−1} [(R_t − x_{t−1} x_{t−1}^T) β_{t−1} + x_{t−1} y_t],

and hence

β_t = β_{t−1} + R_t^{−1} x_{t−1} [y_t − x_{t−1}^T β_{t−1}],  R_t = R_{t−1} + x_{t−1} x_{t−1}^T,

that is,

β_t = φ̂_1(x_{t−1}, β_{t−1}, R_t),  R_t = φ̂_2(x_{t−1}, R_{t−1}).

From (4) and (6) it follows that

x_{t+1} = F(x_t, ψ_t(x_t)) = F(x_t, β_t^T x_t) = F(x_t, φ̂_1(x_{t−1}, β_{t−1}, R_t)^T x_t) = F̃(x_t, x_{t−1}, β_{t−1}, R_t).

Rename a_t = x_{t−1} and b_t = β_{t−1}. This yields

x_{t+1} = F̃(x_t, a_t, b_t, R_t),
a_{t+1} = x_t,
b_{t+1} = φ̂_1(a_t, b_t, R_t),
R_{t+1} = φ̂_2(x_t, R_t),   (8)

and thus k_{t+1} = G(k_t) with k_{t+1}^T = (x_{t+1}, a_{t+1}, b_{t+1}, R_{t+1}), where G is defined in (8).

2.4. Consistent adaptive learning

An adaptive learning process should be rejected by the agents in the model if it shows systematic errors. In the following, an adaptive learning process will be called 'consistent' if the mean and the autocorrelation coefficients of the forecast errors are insignificantly different from zero. Forecast errors in a dynamical system with adaptive learning, as defined in (4), are given by

ε_t = f(x_t, ψ_t(x_t)) − ψ_t(x_t).

Agents act as if they were living in a stochastic world; thus, they think {ε_t}_{t=1}^T are realizations of a stochastic process. Let us summarize our assumptions on the relation between what the agents believe and what they observe.

1. Agents believe that y_{t+1} = ψ_t(x_t) + ε_t, where {ε_t}_{t=1}^∞ is a sequence of uncorrelated random variables with mean zero.
2. Based on this belief, the agents make the point estimate y^e_{t+1} = ψ_t(x_t). In other words, they do not maximize expected utility but apply a certainty equivalence principle and replace the unknown realization of y_{t+1} by its expected value given their belief.
3. In period T + 1 they observe their past forecast errors {ε_t}_{t=1}^T and use these observations to test the hypothesis that the forecast errors have mean zero and are uncorrelated.

Definition 2. An adaptive learning process {ψ_t}_{t=1}^∞ is called consistent if

μ = E[ε_t] = 0,
ρ_k = Cov[ε_t, ε_{t+k}] / Var[ε_t] = 0,  k ≥ 1.

Statisticians and econometricians observe in practice only finitely many forecast errors {ε_t}_{t=1}^T. With these finite observations they estimate finitely many (k_max) autocorrelation coefficients. Thus, the notion of consistency can be formulated for finite observations.

Definition 3. Given a time series with T observations, an adaptive learning process is called α-consistent if the null hypothesis

H_0: μ = 0, ρ_k = 0, 1 ≤ k ≤ k_max,

cannot be rejected at the confidence level 1 − α. Here μ and ρ_k are estimated by

μ̂ = (1/T) Σ_{t=1}^T ε_t,  ρ̂_k = γ̂_k / γ̂_0,

with

γ̂_k = (1/T) Σ_{t=1}^{T−k} (ε_t − μ̂)(ε_{t+k} − μ̂),  1 ≤ k ≤ k_max.

An alternative test for zero autocorrelation is the Box–Pierce test.
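The equivalence between the batch rule (5) and its recursive form used in (8) can be checked numerically. The following sketch is only an illustration with synthetic linear data, not the paper's OLG dynamics; the small ridge term initializing R is an added assumption so that R_t is invertible in the first periods. It runs the recursion β_t = β_{t−1} + R_t^{−1} x_{t−1}[y_t − x_{t−1}^T β_{t−1}], R_t = R_{t−1} + x_{t−1} x_{t−1}^T, and compares the result with the batch solution (7):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data (hypothetical, for illustration only): states x_0,...,x_{T-1}
# and targets y_1,...,y_T generated by a fixed linear law y_t = beta^T x_{t-1}.
n, T = 3, 200
X = rng.normal(size=(T, n))           # row t holds x_{t-1}
beta_true = np.array([0.5, -1.0, 2.0])
y = X @ beta_true

# Recursive least squares as in (8):
#   R_t    = R_{t-1} + x_{t-1} x_{t-1}^T
#   beta_t = beta_{t-1} + R_t^{-1} x_{t-1} (y_t - x_{t-1}^T beta_{t-1})
beta = np.zeros(n)
R = 1e-8 * np.eye(n)                  # tiny ridge so early R_t is invertible
for t in range(T):
    x_prev = X[t]
    R += np.outer(x_prev, x_prev)
    beta += np.linalg.solve(R, x_prev * (y[t] - x_prev @ beta))

# Batch solution of (7) for comparison.
beta_batch, *_ = np.linalg.lstsq(X, y, rcond=None)

print(np.allclose(beta, beta_batch, atol=1e-6))   # → True
```

Initializing with R_0 = εI makes the recursion compute the ridge-regularized batch estimator exactly, so for tiny ε the recursive and batch coefficient vectors coincide up to numerical precision.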
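The α-consistency check of Definition 3, together with the Box–Pierce alternative, can be sketched as follows. This is only an illustration of the test mechanics: the forecast errors here are i.i.d. draws standing in for the errors the OLG dynamics would generate, and the critical values (1.96 for the t-statistic, the 0.95 χ² quantile for Q) assume α = 0.05 and k_max = 10:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in forecast errors (hypothetical data for illustration).
T, k_max = 500, 10
eps = rng.normal(size=T)

# Sample mean and autocorrelations, as in Definition 3.
mu_hat = eps.mean()
gamma0 = ((eps - mu_hat) ** 2).sum() / T
rho_hat = np.array([
    ((eps[: T - k] - mu_hat) * (eps[k:] - mu_hat)).sum() / (T * gamma0)
    for k in range(1, k_max + 1)
])

# t-statistic for H_0: mu = 0, and the Box-Pierce statistic
# Q = T * sum_k rho_hat_k^2, asymptotically chi^2(k_max) under H_0.
t_stat = mu_hat / (eps.std(ddof=1) / np.sqrt(T))
Q = T * (rho_hat ** 2).sum()
chi2_95 = 18.307   # 0.95 quantile of chi^2 with 10 degrees of freedom

reject = (abs(t_stat) > 1.96) or (Q > chi2_95)
print(f"t = {t_stat:.3f}, Q = {Q:.2f}, reject H0: {reject}")
```

An agent who computes t and Q and finds both below the critical values has, in the sense of Definition 3, no statistical grounds to abandon the learning rule.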

3. Consistent adaptive learning in the OLG-model