\[
\rho_i = \rho_s p_{si} + \sum_{k \neq s} \rho_k p_{ki} = \sum_{k=1}^{m} \rho_k p_{ki}.
\]

(b) Dividing both sides of the relation established in part (a) by $t_s^*$, we obtain

\[
\pi_i = \sum_{k=1}^{m} \pi_k p_{ki},
\]

where $\pi_i = \rho_i / t_s^*$. Thus, the $\pi_i$ solve the balance equations. Furthermore, the $\pi_i$ are nonnegative, and we clearly have $\sum_{i=1}^{m} \rho_i = t_s^*$, or $\sum_{i=1}^{m} \pi_i = 1$. Hence, $(\pi_1, \ldots, \pi_m)$ is

a probability distribution.

(c) Consider a probability distribution $(\pi_1, \ldots, \pi_m)$ that satisfies the balance equations. Fix a recurrent state $s$, let $t_s^*$ be the mean recurrence time of $s$, and let $t_i$ be the mean first passage time from a state $i \neq s$ to state $s$. We will show that $\pi_s t_s^* = 1$. Indeed, we have

\[
t_s^* = 1 + \sum_{j \neq s} p_{sj} t_j,
\]
\[
t_i = 1 + \sum_{j \neq s} p_{ij} t_j, \qquad \text{for all } i \neq s.
\]

Multiplying these equations by $\pi_s$ and $\pi_i$, respectively, and adding, we obtain

\[
\pi_s t_s^* + \sum_{i \neq s} \pi_i t_i = 1 + \sum_{i=1}^{m} \pi_i \sum_{j \neq s} p_{ij} t_j.
\]

By using the balance equations, the right-hand side is equal to
\[
1 + \sum_{j \neq s} \sum_{i=1}^{m} \pi_i p_{ij} t_j = 1 + \sum_{j \neq s} \pi_j t_j.
\]

By combining the last two equations, we obtain $\pi_s t_s^* = 1$.

Since the probability distribution $(\pi_1, \ldots, \pi_m)$ satisfies the balance equations, if the initial state $X_0$ is chosen according to this distribution, all subsequent states $X_n$ have the same distribution. If we start at a transient state $i$, the probability of being at that state at time $n$ diminishes to 0 as $n \to \infty$. It follows that we must have $\pi_i = 0$.

(d) Part (b) shows that there exists at least one probability distribution that satisfies the balance equations. Part (c) shows that there can be only one such probability distribution, since it pins down every component: $\pi_s = 1/t_s^*$ for each recurrent state $s$, and $\pi_i = 0$ for each transient state $i$.
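The chain of results in parts (a)-(d) can be checked numerically. The sketch below, assuming a hypothetical 3-state transition matrix `P` (any chain whose states form a single aperiodic recurrent class would do), solves the balance equations for the stationary distribution, computes the mean recurrence time $t_s^*$ of state $s$ from the first-passage equations of part (c), and verifies that $\pi_s t_s^* = 1$:

```python
import numpy as np

# Hypothetical 3-state transition matrix (rows sum to 1); any single
# aperiodic recurrent class would do.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
m = P.shape[0]

# Stationary distribution: solve pi P = pi together with sum(pi) = 1.
A = np.vstack([P.T - np.eye(m), np.ones(m)])
b = np.append(np.zeros(m), 1.0)
pi = np.linalg.lstsq(A, b, rcond=None)[0]

# Mean first-passage times t_i to s and mean recurrence time t_s*, from
#   t_i  = 1 + sum_{j != s} p_ij t_j   (for i != s),
#   t_s* = 1 + sum_{j != s} p_sj t_j.
s = 0
others = [i for i in range(m) if i != s]
Q = P[np.ix_(others, others)]        # transitions among the non-s states
t = np.linalg.solve(np.eye(m - 1) - Q, np.ones(m - 1))
t_star = 1.0 + P[s, others] @ t

print(pi[s] * t_star)                # approximately 1.0
```

Swapping in any other transition matrix with a single aperiodic recurrent class leaves the final identity intact.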

Problem 35.* The strong law of large numbers for Markov chains. Consider a finite-state Markov chain in which all states belong to a single recurrent class which is aperiodic. For a fixed state $s$, let $Y_k$ be the time of the $k$th visit to state $s$. Let also $V_n$ be the number of visits to state $s$ during the first $n$ transitions.

(a) Show that $Y_k/k$ converges with probability 1 to the mean recurrence time $t_s^*$ of state $s$.

(b) Show that $V_n/n$ converges with probability 1 to $1/t_s^*$.

(c) Can you relate the limit of $V_n/n$ to the steady-state probability of state $s$?

Solution. (a) Let us fix an initial state $i$, not necessarily the same as $s$. Thus, the

random variables $Y_{k+1} - Y_k$, for $k \geq 1$, correspond to the time between successive visits to state $s$. Because of the Markov property (the past is independent of the future, given the present), the process "starts fresh" at each revisit to state $s$ and, therefore, the random variables $Y_{k+1} - Y_k$ are independent and identically distributed, with mean equal to the mean recurrence time $t_s^*$. Using the strong law of large numbers, we obtain

\[
\lim_{k \to \infty} \frac{Y_k}{k} = t_s^*,
\]
with probability 1.
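A small simulation illustrates this convergence. The sketch assumes a hypothetical two-state chain with $p_{01} = 0.3$ and $p_{10} = 0.4$, for which the steady-state probability of state 0 is $4/7$, so the mean recurrence time of $s = 0$ is $t_s^* = 7/4 = 1.75$:

```python
import random

# Minimal sketch, assuming a hypothetical two-state chain:
# P(0 -> 0) = 0.7, P(1 -> 0) = 0.4, so pi_0 = 4/7 and t_s* = 7/4.
random.seed(0)

def step(x):
    """One transition of the chain."""
    to_zero = 0.7 if x == 0 else 0.4   # probability of moving to state 0
    return 0 if random.random() < to_zero else 1

s, x, visits = 0, 0, []
for n in range(1, 200_001):
    x = step(x)
    if x == s:
        visits.append(n)               # Y_k = time of the k-th visit to s

k = len(visits)
print(visits[-1] / k)                  # Y_k / k, close to t_s* = 1.75
```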

(b) Let us fix an element of the sample space (a trajectory of the Markov chain). Let

$y_k$ and $v_n$ be the values of the random variables $Y_k$ and $V_n$, respectively. Furthermore, let us assume that the sequence $y_k/k$ converges to $t_s^*$; according to the result of part (a), the set of trajectories with this property has probability 1. Let us consider some

$n$ between the time of the $k$th visit to state $s$ and the time just before the next visit to that state:
\[
y_k \leq n < y_{k+1}.
\]

For every $n$ in this range, we have $v_n = k$, and also

\[
\frac{k}{y_{k+1}} < \frac{v_n}{n} \leq \frac{k}{y_k},
\]

from which we obtain
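Since $y_k/k \to t_s^*$, both bounds in the last inequality approach $1/t_s^*$, which suggests the limit $v_n/n \to 1/t_s^*$ claimed in part (b); part (c) then identifies this limit with the steady-state probability $\pi_s = 1/t_s^*$. A minimal simulation sketch, again assuming a hypothetical two-state chain with $\pi_0 = 4/7$, illustrates this:

```python
import random

# Sketch for parts (b)-(c): track V_n, the number of visits to s = 0 in
# n steps, for a hypothetical two-state chain with pi_0 = 4/7, t_s* = 7/4.
random.seed(1)
s, x, V = 0, 0, 0
n_steps = 200_000
for _ in range(n_steps):
    x = 0 if random.random() < (0.7 if x == 0 else 0.4) else 1
    if x == s:
        V += 1                         # one more visit to state s

print(V / n_steps)                     # close to 1/t_s* = 4/7, about 0.571
```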