L 1I"iPij = L qPij .
L 1I"iPij = L qPij .
i= 1
Since 1I"i ::; q for all i, we must have 1I"iP1j = qPij for every i, Thus, 1I"i = q for every state i from which a transition to j is possible. By repeating this argument, we see that 1I"i = q for every state i such that there is a positive probability path from i to j. Since all states are recurrent and belong to the same class, all states i have this property,
392 7 a nd
1Tl
is the same for all i . S in c e the 1Ti add to L we obtain 1Tl = 1 1m for
't .
(Note that this argument does not use aperiodicity: even when $m$ is even and the chain is periodic, $\pi_i = 1/m$ satisfies the balance and normalization equations.)
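For a quick numerical illustration of this conclusion, here is a minimal sketch (Python with numpy; the $3 \times 3$ doubly stochastic matrix is an arbitrarily chosen example) that checks that the rows of $P^n$ converge to the uniform distribution:

```python
import numpy as np

# A small doubly stochastic matrix (rows and columns both sum to 1), chosen
# for illustration; the chain it defines is irreducible and aperiodic.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.3, 0.4, 0.3],
    [0.2, 0.3, 0.5],
])
assert np.allclose(P.sum(axis=0), 1.0) and np.allclose(P.sum(axis=1), 1.0)

# The rows of P^n converge to the steady-state probabilities, which here
# should be uniform: pi_i = 1/m with m = 3.
print(np.linalg.matrix_power(P, 50)[0])   # ~ [0.3333, 0.3333, 0.3333]
```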
Problem 23.* Consider the queueing example, but assume that the probabilities of a packet arrival and a packet transmission depend on the state of the chain. In particular, given that there are $i$ packets in the node, exactly one of the following occurs during each time slot:

(i) one new packet arrives; this happens with probability $b_i > 0$ for $i < m$, and $b_m = 0$;

(ii) one existing packet completes transmission; this happens with probability $d_i > 0$ if $i \ge 1$, and $d_0 = 0$;

(iii) no new packet arrives and no existing packet completes transmission; this happens with probability $1 - b_i - d_i$ if $i \ge 1$, and with probability $1 - b_i$ if $i = 0$.

Calculate the steady-state probabilities.
Solution. We introduce a Markov chain whose states are $0, 1, \ldots, m$, where state $i$ corresponds to $i$ packets currently stored at the node. The transition probability graph is given in the figure below.

[Figure: Transition probability graph for Problem 23.]

The local balance equations take the form
\[
\pi_i b_i = \pi_{i+1} d_{i+1}, \qquad i = 0, 1, \ldots, m-1.
\]
Thus, $\pi_{i+1} = \rho_i \pi_i$, where $\rho_i = b_i / d_{i+1}$, so that $\pi_i = \pi_0 \rho_0 \rho_1 \cdots \rho_{i-1}$ for $i = 1, \ldots, m$. Using the normalization equation $1 = \pi_0 + \pi_1 + \cdots + \pi_m$, we obtain
\[
1 = \pi_0 \bigl( 1 + \rho_0 + \rho_0 \rho_1 + \cdots + \rho_0 \rho_1 \cdots \rho_{m-1} \bigr),
\]
so that
\[
\pi_0 = \frac{1}{1 + \rho_0 + \rho_0 \rho_1 + \cdots + \rho_0 \rho_1 \cdots \rho_{m-1}}.
\]
The remaining steady-state probabilities are
\[
\pi_i = \frac{\rho_0 \rho_1 \cdots \rho_{i-1}}{1 + \rho_0 + \rho_0 \rho_1 + \cdots + \rho_0 \rho_1 \cdots \rho_{m-1}}, \qquad i = 1, \ldots, m.
\]
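As a sanity check on these formulas, the following minimal sketch (Python with numpy; the specific values of the $b_i$ and $d_i$ are made up for illustration) evaluates the $\pi_i$ and verifies the local balance equations:

```python
import numpy as np

m = 4
b = np.array([0.3, 0.25, 0.2, 0.15, 0.0])   # arrival probs: b_i > 0 for i < m, b_m = 0
d = np.array([0.0, 0.2, 0.25, 0.3, 0.35])   # departure probs: d_i > 0 for i >= 1, d_0 = 0

# rho_i = b_i / d_{i+1}, and pi_i = pi_0 * rho_0 * ... * rho_{i-1}.
rho = b[:m] / d[1:]
prods = np.concatenate(([1.0], np.cumprod(rho)))   # [1, rho_0, rho_0*rho_1, ...]
pi = prods / prods.sum()                           # normalize so the pi_i add to 1

# Verify the local balance equations pi_i * b_i = pi_{i+1} * d_{i+1}.
assert np.allclose(pi[:m] * b[:m], pi[1:] * d[1:])
print(pi)
```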
Problem 24.* Dependence of the balance equations. Show that if we add the first $m-1$ balance equations $\pi_j = \sum_{k=1}^{m} \pi_k p_{kj}$, for $j = 1, \ldots, m-1$, we obtain the last equation $\pi_m = \sum_{k=1}^{m} \pi_k p_{km}$.
Solution. By adding the first $m-1$ balance equations, we obtain
\[
\begin{aligned}
\sum_{j=1}^{m-1} \pi_j &= \sum_{j=1}^{m-1} \sum_{k=1}^{m} \pi_k p_{kj} \\
&= \sum_{k=1}^{m} \pi_k \sum_{j=1}^{m-1} p_{kj} \\
&= \sum_{k=1}^{m} \pi_k (1 - p_{km}) \\
&= \pi_m + \sum_{k=1}^{m-1} \pi_k - \sum_{k=1}^{m} \pi_k p_{km}.
\end{aligned}
\]
This equation is equivalent to the last balance equation $\pi_m = \sum_{k=1}^{m} \pi_k p_{km}$.
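This dependence is easy to see numerically: for any probability vector, the residuals of the $m$ balance equations sum to zero, so the last residual is determined by the first $m-1$. A minimal sketch, assuming a randomly generated chain:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 5
P = rng.random((m, m))
P /= P.sum(axis=1, keepdims=True)   # a random m-state transition matrix

p = rng.random(m)
p /= p.sum()                        # any probability vector over the states

# residual_j = p_j - sum_k p_k * P[k, j], the "error" in the j-th balance equation.
residual = p - p @ P

# The residuals sum to zero because the rows of P sum to 1, so adding the
# first m-1 balance equations yields the last one.
print(residual[:-1].sum(), -residual[-1])   # equal up to rounding
```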
Problem 25.* Local balance equations. We are given a Markov chain that has a single recurrent class which is aperiodic. Suppose that we have found a solution $\pi_1, \ldots, \pi_m$ to the following system of local balance and normalization equations:
\[
\pi_i p_{ij} = \pi_j p_{ji}, \qquad i, j = 1, \ldots, m,
\]
\[
\sum_{i=1}^{m} \pi_i = 1,
\]
\[
\pi_i \ge 0, \qquad i = 1, \ldots, m.
\]
(a) Show that the 7T"J are the steady-state probabilities. (b) What is the interpretation of the equations 7T"iPij = 7T"jpji in terms of expected
long-term frequencies of transitions between i and j? (c) Construct an example where the local balance equations are not satisfied by the
steady-state probabilities. Solution. (a) By adding the local balance equations 7T"iPij = 7T"jpji over i, we obtain
L 7T"iPij = L 7T"jpji = 7T"j ,
i=1
i=1
so the 7T"j also satisfy the balance equations. Therefore, they are equal to the steady state probabilities.
(b) We know that $\pi_i p_{ij}$ can be interpreted as the expected long-term frequency of transitions from $i$ to $j$, so the local balance equations imply that the expected long-term frequency of any transition is equal to the expected long-term frequency of the reverse transition. (This property is also known as time reversibility of the chain.)

(c) We need a minimum of three states for such an example. Let the states be 1, 2, 3, and let $p_{12} > 0$, $p_{13} > 0$, $p_{21} > 0$, $p_{32} > 0$, with all other transition probabilities being 0. The chain has a single recurrent aperiodic class. The local balance equations do not hold because the expected frequency of transitions from 1 to 3 is positive, but the expected frequency of reverse transitions is 0.
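The following minimal sketch (Python with numpy; all transition probabilities are made-up values) contrasts a birth-death chain, where local balance holds, with the three-state counterexample of part (c):

```python
import numpy as np

def steady_state(P):
    # Power iteration; valid for a single aperiodic recurrent class.
    return np.linalg.matrix_power(P, 1000)[0]

# A birth-death chain: local balance pi_i * p_ij = pi_j * p_ji holds.
P_bd = np.array([
    [0.7, 0.3, 0.0],
    [0.4, 0.3, 0.3],
    [0.0, 0.5, 0.5],
])
pi = steady_state(P_bd)
print(np.isclose(pi[0] * P_bd[0, 1], pi[1] * P_bd[1, 0]))   # True

# The counterexample of part (c): p12, p13, p21, p32 > 0, all others 0.
P_c = np.array([
    [0.0, 0.5, 0.5],
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
])
pi = steady_state(P_c)
print(np.isclose(pi[0] * P_c[0, 2], pi[2] * P_c[2, 0]))     # False: 1->3 occurs, 3->1 never
```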
Problem 26.* Sampled Markov chains. Consider a Markov chain $X_n$ with transition probabilities $p_{ij}$, and let $r_{ij}(n)$ be the $n$-step transition probabilities.
(a) Show that for all $n \ge 1$ and $l \ge 1$, we have
\[
r_{ij}(n + l) = \sum_{k=1}^{m} r_{ik}(n)\, r_{kj}(l).
\]

(b) Suppose that there is a single recurrent class, which is aperiodic. We sample the Markov chain every $l$ transitions, thus generating a process $Y_n$, where $Y_n = X_{ln}$. Show that the sampled process can be modeled by a Markov chain with a single aperiodic recurrent class and transition probabilities $r_{ij}(l)$.
(c) Show that the Markov chain of part (b) has the same steady-state probabilities as the original process.

Solution. (a) We condition on $X_n$ and use the total probability theorem. We have
\[
\begin{aligned}
r_{ij}(n + l) &= P(X_{n+l} = j \mid X_0 = i) \\
&= \sum_{k=1}^{m} P(X_n = k \mid X_0 = i)\, P(X_{n+l} = j \mid X_n = k, X_0 = i) \\
&= \sum_{k=1}^{m} P(X_n = k \mid X_0 = i)\, P(X_{n+l} = j \mid X_n = k) \\
&= \sum_{k=1}^{m} r_{ik}(n)\, r_{kj}(l),
\end{aligned}
\]
where in the third equality we used the Markov property.

(b) Since $X_n$ is Markov, once we condition on $X_{ln}$, the past of the process (the states $X_k$ for $k < ln$) becomes independent of the future (the states $X_k$ for $k > ln$). This implies that given $Y_n$, the past (the states $Y_k$ for $k < n$) is independent of the future (the states $Y_k$ for $k > n$). Thus, $Y_n$ has the Markov property. Because of our assumptions on $X_n$, there is a time $\bar{n}$ such that
\[
P(X_n = j \mid X_0 = i) > 0,
\]
for every $n \ge \bar{n}$, every state $i$, and every state $j$ in the single recurrent class $R$ of the process $X_n$. This implies that
\[
P(Y_n = j \mid Y_0 = i) > 0,
\]
for every $n \ge \bar{n}$, every $i$, and every $j \in R$. Therefore, the process $Y_n$ has a single recurrent class, which is aperiodic.
(c) The $n$-step transition probabilities $r_{ij}(n)$ of the process $X_n$ converge to the steady-state probabilities $\pi_j$. The $n$-step transition probabilities of the process $Y_n$ are of the form $r_{ij}(ln)$, and therefore also converge to the same limits $\pi_j$. This establishes that the $\pi_j$ are the steady-state probabilities of the process $Y_n$.
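Both part (a) and part (c) are easy to confirm numerically; in the sketch below the chain is a made-up $3$-state example and $l = 3$:

```python
import numpy as np

P = np.array([
    [0.6, 0.4, 0.0],
    [0.2, 0.5, 0.3],
    [0.1, 0.3, 0.6],
])
n, l = 4, 3

# Part (a): r_ij(n + l) = sum_k r_ik(n) r_kj(l) is the matrix identity
# P^(n+l) = P^n P^l.
assert np.allclose(np.linalg.matrix_power(P, n + l),
                   np.linalg.matrix_power(P, n) @ np.linalg.matrix_power(P, l))

# Part (c): the sampled chain Y_n = X_{ln} has transition matrix P^l and
# the same steady-state probabilities as X_n.
pi_X = np.linalg.matrix_power(P, 200)[0]
pi_Y = np.linalg.matrix_power(np.linalg.matrix_power(P, l), 200)[0]
print(np.allclose(pi_X, pi_Y))   # True
```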
Problem 27.* Given a Markov chain $X_n$ with a single recurrent class which is aperiodic, consider the Markov chain whose state at time $n$ is $(X_{n-1}, X_n)$. Thus, the state in the new chain can be associated with the last transition in the original chain.
(a) Show that the steady-state probabilities of the new chain are
\[
\eta_{ij} = \pi_i p_{ij},
\]
where the $\pi_i$ are the steady-state probabilities of the original chain.

(b) Generalize part (a) to the case of the Markov chain $(X_{n-k}, X_{n-k+1}, \ldots, X_n)$, whose state can be associated with the last $k$ transitions of the original chain.

Solution. (a) For every state $(i, j)$ of the new Markov chain, we have
\[
P\bigl((X_{n-1}, X_n) = (i, j)\bigr) = P(X_{n-1} = i)\, P(X_n = j \mid X_{n-1} = i) = P(X_{n-1} = i)\, p_{ij}.
\]
Since the Markov chain $X_n$ has a single recurrent class which is aperiodic, $P(X_{n-1} = i)$ converges to the steady-state probability $\pi_i$, for every $i$. It follows that $P\bigl((X_{n-1}, X_n) = (i, j)\bigr)$ converges to $\pi_i p_{ij}$, which is therefore the steady-state probability of $(i, j)$.
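A numerical check of part (a), with an assumed three-state chain: we build the pair chain on states $(i, j)$ explicitly and compare its long-run distribution with $\pi_i p_{ij}$.

```python
import numpy as np

P = np.array([
    [0.5,  0.5,  0.0 ],
    [0.25, 0.5,  0.25],
    [0.0,  0.5,  0.5 ],
])
m = 3
pi = np.linalg.matrix_power(P, 200)[0]   # steady-state probabilities of X_n

# Pair chain on states (i, j): from (i, j) it moves to (j, k) with prob p_jk.
Q = np.zeros((m * m, m * m))
for i in range(m):
    for j in range(m):
        for k in range(m):
            Q[i * m + j, j * m + k] = P[j, k]

eta = np.linalg.matrix_power(Q, 200)[0].reshape(m, m)
print(np.allclose(eta, pi[:, None] * P))   # True: eta_ij = pi_i * p_ij
```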