Replicator dynamics with inertia

H. Dawid / Mathematical Social Sciences 37 (1999) 265–280

example of a generalized rock–scissors–paper game in Section 4. In Section 5 we briefly deal with the more general case of monotone imitation dynamics and conclude with some remarks in Section 6.

2. Replicator dynamics with inertia

We consider a situation where a large number of economic agents (we will proceed as if the number of agents is infinite) interact with each other by repeatedly playing an evolutionary game. Every period each agent chooses a pure strategy $i \in I$, where $n = |I|$ is the number of pure strategies, and receives a payoff of

$$P(e_i, s) = e_i' A s,$$

where $A$ is an $n \times n$ payoff matrix and $s \in \Delta^n = \{x \in \mathbb{R}^n \mid x_i \geq 0,\ \sum_{i=1}^n x_i = 1\}$ is the mixed population strategy generated by the choices of all agents. We assume that all payoffs are non-negative. Furthermore, we assume that the agents do not know the exact structure of their payoff function and have to base their choice of strategy on very limited information. Every period the average population payoff is made public and thus is known by all agents. Furthermore, every period each agent meets a randomly chosen other agent and gets to know the strategy this agent used last period as well as the resulting payoff. Let us denote the strategy of the agent he meets by $i \in I$. The agent compares the payoff of $i$ with the average population payoff and adopts strategy $i$ with a probability proportional to the ratio of the payoff of $i$ to the average population payoff. Thus, the probability that an arbitrary agent adopts strategy $i$ in the next period due to a meeting with another agent who used $i$ is given by

$$\chi \frac{e_i' A s}{s' A s}\, s_i,$$

where $\chi \frac{e_i' A s}{s' A s}$ is the probability that he chooses $i$ given that he meets an agent who used $i$, and $s_i$ is the probability of meeting such an agent.$^2$ Note that the parameter $\chi$ has to be sufficiently small to guarantee that $\chi \frac{e_i' A s}{s' A s} \leq 1$ for all $i \in I$ and $s \in \Delta^n$. Furthermore, the probability that an arbitrary agent uses $i$ in the next period because he used it in the last period and did not adopt a new strategy is

$$s_i \left(1 - \chi \sum_{j=1}^n s_j \frac{e_j' A s}{s' A s}\right) = (1 - \chi)\, s_i.$$

This shows that $\alpha := 1 - \chi$ is the probability that an agent sticks to his old strategy. We call this parameter the level of inertia in the population.
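The adoption and sticking probabilities above can be checked numerically. The following sketch (the payoff matrix, population strategy, and $\chi$ are illustrative values, not taken from the paper) verifies that the sticking probability reduces to $(1-\chi)s_i$ and that all probabilities sum to one:

```python
import numpy as np

# Hypothetical non-negative 3x3 payoff matrix and population strategy (illustrative only)
A = np.array([[2.0, 1.0, 3.0],
              [3.0, 2.0, 1.0],
              [1.0, 3.0, 2.0]])
s = np.array([0.5, 0.3, 0.2])   # current population strategy, a point in the simplex
chi = 0.2                        # imitation-rate parameter; alpha = 1 - chi is the inertia

avg_payoff = s @ A @ s           # s'As, the average population payoff
payoffs = A @ s                  # e_i'As, the payoff of each pure strategy i

# Probability of adopting strategy i through a meeting: chi * (e_i'As / s'As) * s_i
adopt = chi * payoffs / avg_payoff * s

# Probability of keeping the old strategy i: since sum_j s_j e_j'As = s'As,
# the bracket collapses and this equals (1 - chi) * s_i
stick = s * (1.0 - chi * np.sum(s * payoffs) / avg_payoff)

print(np.allclose(stick, (1.0 - chi) * s))   # sticking probability is (1-chi)*s_i
print(np.isclose(adopt.sum() + stick.sum(), 1.0))  # probabilities sum to one
```

Note that for this matrix and $\chi = 0.2$ the imitation probabilities $\chi\, e_i'As / (s'As)$ are all below one, as required.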
Since we assume that the number of agents is infinite, the population strategy at time $t+1$ is given by the expected population strategy given $s_t$. Thus, the evolution of the population strategy is given by the following dynamical system:

$$s_{t+1} = \alpha s_t + (1 - \alpha)\, \mathrm{diag}(s_t)\, \frac{A s_t}{s_t' A s_t}. \qquad (1)$$

We call this dynamical system replicator dynamics with inertia, since the case $\alpha = 0$ corresponds to the discrete time replicator dynamics (see e.g. Weibull, 1995). The replicator dynamics has been analyzed in great detail in the game theory literature (see e.g. Hofbauer and Sigmund, 1988; Weissing, 1991; Weibull, 1995); however, it is mainly motivated by biological rather than by economic models. Here we provide an interpretation of the replicator dynamics with inertia as an imitation dynamics in the spirit of word-of-mouth learning (see Ellison and Fudenberg, 1995; Dawid, 1999). Note, however, that a certain level of inertia always has to be present if the dynamics is interpreted according to the model presented above. Cabrales and Sobel (1992) consider the dynamics (1) in a more technical context, without giving a specific economic interpretation of the model and without restricting the range of $\alpha$. Of course, the dynamics can also be motivated by a model where all agents have information about the payoffs of all other agents in the population and with probability $1 - \alpha$ choose some other agent to imitate, where again the probability of being imitated is proportional to past success.

$^2$ Those readers who feel uncomfortable with the fact that the admissible range of $\chi$ depends on the payoff matrix $A$ could assume that the imitation probability is actually given by $\min\left(1, \chi \frac{e_i' A s}{s' A s}\right)$. We always assume that $\chi$ is sufficiently small that, for the payoff matrix considered, all imitation probabilities are less than or equal to 1.
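Iterating Eq. (1) is straightforward. The sketch below (with a hypothetical 2×2 coordination-style payoff matrix of my own choosing) illustrates two basic properties: the map keeps $s_t$ in the simplex, and the inertia level $\alpha$ changes the speed of adjustment but not the limit point:

```python
import numpy as np

def replicator_with_inertia(A, s0, alpha, steps):
    """Iterate Eq. (1): s_{t+1} = alpha*s_t + (1-alpha)*diag(s_t) A s_t / (s_t' A s_t)."""
    s = np.asarray(s0, dtype=float)
    for _ in range(steps):
        s = alpha * s + (1.0 - alpha) * s * (A @ s) / (s @ A @ s)
    return s

# Hypothetical non-negative payoff matrix; strategy 1 earns more at the initial state
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
s0 = np.array([0.6, 0.4])

s_plain   = replicator_with_inertia(A, s0, alpha=0.0, steps=200)  # discrete replicator
s_inertia = replicator_with_inertia(A, s0, alpha=0.5, steps=200)  # with inertia

print(s_plain, s_inertia)  # both approach the pure equilibrium (1, 0)
```

Both trajectories remain in $\Delta^n$ at every step, since $\alpha s_t$ and $(1-\alpha)\,\mathrm{diag}(s_t)As_t/(s_t'As_t)$ sum to $\alpha + (1-\alpha) = 1$ componentwise.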
It is well known that any Nash equilibrium of the symmetric game with payoff matrix $A$ is a fixed point of both the discrete time replicator dynamics and its continuous time counterpart

$$\dot{s} = \mathrm{diag}(s)\,[A s - \mathbf{1}\, s' A s]. \qquad (2)$$

However, there are fixed points which are not equilibria; for example, all vertices of the simplex. As already pointed out in the introduction, every evolutionarily stable strategy is locally asymptotically stable with respect to the continuous time replicator dynamics. The discrete time replicator dynamics, however, may diverge from an equilibrium even if it is evolutionarily stable, because the equilibrium may be 'overshot' by such a wide margin that the trajectory departs more and more from the equilibrium (Weissing, 1991). Since the notion of overshooting is a central point of the present paper, we would like to make precise what we mean by this term. We say that a discrete time dynamic shows overshooting near an equilibrium if the fixed point is unstable with respect to the discrete time dynamics but stable with respect to Eq. (2). Note, however, that even if the discrete time dynamics is stable, the convergence speed varies with changing levels of inertia. In particular, there might be oscillating converging paths where the oscillations could be dampened by increasing the level of inertia. On the other hand, a very high level of inertia causes tiny approach steps and accordingly slow convergence. It is the aim of this paper to derive characterizations of the crucial level of inertia which avoids instability due to overshooting.
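Overshooting can be observed numerically in a generalized rock–scissors–paper game. The payoff values below are my own illustrative choice (not the example from the paper): they satisfy $a + c > 2b$, so the interior equilibrium $(1/3,1/3,1/3)$ is asymptotically stable for the continuous time dynamics (2), yet the discrete replicator dynamics ($\alpha = 0$) spirals away from it, while a sufficiently high inertia level restores convergence:

```python
import numpy as np

# Hypothetical generalized rock-scissors-paper matrix: diagonal b=1, winning payoff
# a=3, losing payoff c=0.1; a + c > 2b, so (1/3,1/3,1/3) is stable under Eq. (2).
A = np.array([[1.0, 3.0, 0.1],
              [0.1, 1.0, 3.0],
              [3.0, 0.1, 1.0]])
eq = np.full(3, 1.0 / 3.0)

def iterate(alpha, s, steps):
    """Iterate Eq. (1) for a given inertia level alpha."""
    for _ in range(steps):
        s = alpha * s + (1.0 - alpha) * s * (A @ s) / (s @ A @ s)
    return s

s0 = np.array([0.30, 0.35, 0.35])          # start close to the interior equilibrium
far  = np.linalg.norm(iterate(0.0, s0, 300) - eq)  # no inertia: overshoots, drifts away
near = np.linalg.norm(iterate(0.6, s0, 300) - eq)  # enough inertia: converges

print(far, near)
```

For these payoffs a linearization of the map (1) at the equilibrium suggests that instability disappears once $\alpha$ exceeds roughly $0.32$; the run with $\alpha = 0.6$ is therefore comfortably inside the stable range, while $\alpha = 0$ is not.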

3. Local stability analysis