

4. An example

Consider the following extension of a circular Rock-Scissors-Paper (RSP) game to a game with four strategies:⁶

\[
A = \begin{pmatrix}
4 & 8 & 1 & 0 \\
1 & 4 & 8 & 0 \\
8 & 1 & 4 & 0 \\
4.2 & 4.2 & 4.2 & 0.2
\end{pmatrix}.
\]

There are three symmetric Nash equilibria of the game: $m^1 = (1/3, 1/3, 1/3, 0)$, $m^2 = (0, 0, 0, 1)$ and $m^3 = (0.2, 0.2, 0.2, 0.4)$. Whereas $m^1$ and $m^2$ are ESS, the equilibrium $m^3$ is not an ESS⁷ and is unstable with respect to the continuous-time replicator dynamics, which in turn implies that it is unstable with respect to Eq. (1) for any level of inertia $a$. As long as there is relatively large and equal weight on the first three strategies, these three strategies have a higher payoff than the fourth one. However, if the frequencies of the first three strategies differ decisively, the payoff of the fourth strategy rises above average and the fourth strategy increases in frequency. This, in turn, decreases the payoff of all strategies, but the effect on the payoff of the first three strategies is larger than the effect on the fourth strategy. Accordingly, instability of the equilibrium $m^1$ leads to the selection of $m^2$ as the long-run outcome of the learning process. It follows from Corollary 2, and also from Weissing's (1991) results on circular RSP games, that $m^1$ is unstable with respect to the discrete-time replicator dynamics. However, since $m^1$ is an ESS, it is stable with respect to the continuous-time replicator dynamics, and therefore there exists a crucial level of inertia (using Proposition 3 we calculate this level as $\bar a = 0.649$) such that $m^1$ is locally asymptotically stable with respect to the dynamics with inertia if $a > \bar a$ and unstable otherwise. In this game the level of inertia determines, for a large set of initial strategies $s_0$, whether the process ends up at $m^1$ or at $m^2$. In Fig. 1 we show the trajectory of the learning process initialized at $s_0 = (0.25, 0.2, 0.2, 0.35)$ for $a = 0.58$. It can be clearly seen that initially the payoff of $e_4$ is below average and the population strategy spirals down towards the subsimplex characterized by $s_4 = 0$. For $a < \bar a$ the equilibrium $m^1$ is locally unstable and $\{s_t\}$ approaches the boundary of the subsimplex, where the payoff of $e_4$ is above average. Hence, $s_4$ increases again and eventually the population strategy is driven towards the pure-strategy equilibrium $m^2$. Things change dramatically if we increase the probability $a$ that an agent sticks to his old strategy in a given period to $a = 0.7$. Now the equilibrium $m^1$ is locally asymptotically stable and a trajectory initialized as above again approaches the subsimplex $s_4 = 0$, but this time spiraling inwards and converging towards $m^1$ (Fig. 2).

This shows that the level of inertia determines the dynamic equilibrium selection of the process for some initial values $s_0$. Note, however, that the process converges towards $m^2$ for all initial strategies $s_0$ with $s_{0,4} > 0.4$, regardless of the value of $a$.

⁶ Dekel and Scotchmer (1992) use a similar game to show that the discrete-time replicator dynamics does not necessarily eliminate, even in the long run, strategies which are dominated by a combination of other strategies in the game.
⁷ The relevant eigenvalues of $M_{m^3}$ are $\{0.08, -0.1 \pm 1.212i\}$.

Fig. 1. The trajectory of the population strategy $\{s_t\}$ for the extended RSP game and a level of inertia $a = 0.58 < \bar a$; $s_0 = (0.25, 0.2, 0.2, 0.35)$.

Fig. 2. The trajectory of the population strategy $\{s_t\}$ for the extended RSP game and a level of inertia $a = 0.7 > \bar a$; $s_0 = (0.25, 0.2, 0.2, 0.35)$.
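The behaviour shown in Figs. 1 and 2 is easy to reproduce numerically. The sketch below assumes that Eq. (1) takes the form $s_{t+1} = a\,s_t + (1-a)\,\hat s_t$, where $\hat s_{t,i} = s_{t,i}(As_t)_i/(s_t \cdot As_t)$ is the discrete-time replicator step and $a$ is the probability of keeping the old strategy; this is a plausible reading of the model, not the paper's exact specification, and the helper names `step` and `simulate` are illustrative.

```python
import numpy as np

# Payoff matrix of the extended RSP game. The fourth column (zeros) is the
# value consistent with m^3 = (0.2, 0.2, 0.2, 0.4) being an equilibrium.
A = np.array([
    [4.0, 8.0, 1.0, 0.0],
    [1.0, 4.0, 8.0, 0.0],
    [8.0, 1.0, 4.0, 0.0],
    [4.2, 4.2, 4.2, 0.2],
])

def step(s, a):
    """One period: with probability a an agent keeps his old strategy,
    otherwise the discrete-time replicator update applies (assumed form)."""
    payoffs = A @ s                           # payoff of each pure strategy vs. s
    replicator = s * payoffs / (s @ payoffs)  # discrete replicator step
    return a * s + (1.0 - a) * replicator

def simulate(a, s0, T=20000):
    s = np.asarray(s0, dtype=float)
    for _ in range(T):
        s = step(s, a)
    return s

s0 = [0.25, 0.2, 0.2, 0.35]
for a in (0.58, 0.70):
    print(f"a = {a}: s_T = {np.round(simulate(a, s0), 3)}")
# Expected under the assumed dynamics:
#   a = 0.58 (< 0.649): trajectory ends near m^2 = (0, 0, 0, 1)
#   a = 0.70 (> 0.649): trajectory ends near m^1 = (1/3, 1/3, 1/3, 0)
```

Under this specification the linearization at $m^1$ has multiplier $1 + (1-a)\lambda/\bar v$, where $\lambda$ is an eigenvalue of the continuous-time replicator Jacobian, which is how a threshold of roughly $\bar a = 0.649$ arises.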
It follows from our reasoning that for $a < \bar a$ the equilibrium $m^2$ is almost a global attractor: only trajectories with either $s_{0,4} = 0$ or $s_0 = m^3$ do not eventually converge towards $m^2$. Since the equilibrium $m^1$, with equilibrium payoff $\bar v = 13/3$ for the agents, Pareto dominates $m^2$ with $\bar v = 0.2$, a higher level of inertia in the population can, for a large set of initial population distributions, significantly increase the long-run payoff of all agents in the population.
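The payoffs behind this Pareto ranking can be checked directly from the matrix $A$ given above, since each agent earns the equilibrium strategy's payoff against itself:

\[
\bar v(m^1) = m^1 \cdot A m^1 = \tfrac{1}{3}(4 + 8 + 1) = \tfrac{13}{3} \approx 4.33,
\qquad
\bar v(m^2) = e_4 \cdot A e_4 = A_{44} = 0.2,
\]

so indeed $\bar v(m^1) > \bar v(m^2)$.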

5. Monotone selection dynamics with inertia