7.6 SUMMARY AND DISCUSSION

In this chapter, we have introduced Markov chain models with a finite number of states. In a discrete-time Markov chain, transitions occur at integer times according to given transition probabilities $p_{ij}$. The crucial property that distinguishes Markov chains from general random processes is that the transition probabilities $p_{ij}$ apply each time that the state is equal to i, independently of the previous values of the state. Thus, given the present, the future of the process is independent of the past.
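To make this concrete, here is a minimal Python sketch that simulates such a chain; the three-state transition matrix P below is a made-up illustration, not an example from the text. Note that each step uses only the row of P indexed by the current state, which is exactly the Markov property.

import numpy as np

# Hypothetical transition matrix: entry P[i, j] is the probability p_ij
# of moving from state i to state j; each row sums to 1.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

def simulate_chain(P, start, n_steps, rng):
    """Simulate n_steps transitions of a discrete-time Markov chain."""
    path = [start]
    for _ in range(n_steps):
        # The next state is drawn using only the current state's row,
        # independently of the earlier history of the path.
        path.append(rng.choice(len(P), p=P[path[-1]]))
    return path

print(simulate_chain(P, start=0, n_steps=10, rng=np.random.default_rng(0)))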

Coming up with a suitable Markov chain model of a given physical situation is to some extent an art. In general, we need to introduce a rich enough set of states so that the current state summarizes whatever information from the history of the process is relevant to its future evolution. Subject to this requirement, we usually aim at a model that does not involve more states than necessary.

Given a Markov chain model, there are several questions of interest.

(a) Questions referring to the statistics of the process over a finite time horizon. We have seen that we can calculate the probability that the process follows a particular path by multiplying the transition probabilities along the path. The probability of a more general event can be obtained by adding the probabilities of the various paths that lead to the occurrence of the event. In some cases, we can exploit the Markov property to avoid listing each and every path that corresponds to a particular event. A prominent example is the recursive calculation of the n-step transition probabilities, using the Chapman-Kolmogorov equations (see the first sketch following this list).

(b) Questions referring to the steady-state behavior of the Markov chain. To address such questions, we classified the states of a Markov chain as transient and recurrent. We discussed how the recurrent states can be divided into disjoint recurrent classes, so that each state in a recurrent class is accessible from every other state in the same class. We also distinguished between periodic and aperiodic recurrent classes. The central result of Markov chain theory is that if a chain consists of a single aperiodic recurrent class, plus possibly some transient states, the probability $r_{ij}(n)$ that the state is equal to some j converges, as time goes to infinity, to a steady-state probability $\pi_j$, which does not depend on the initial state i. In other words, the identity of the initial state has no bearing on the statistics of $X_n$ when n is very large. The steady-state probabilities can be found by solving a system of linear equations, consisting of the balance equations and the normalization equation $\sum_j \pi_j = 1$ (see the second sketch following this list).

(c) Questions referring to the transient behavior of a Markov chain. We discussed the absorption probabilities (the probability that the state eventually enters a given recurrent class, given that it starts at a given transient state), and the mean first passage times (the expected time until a particular recurrent state is entered, assuming that the chain has a single recurrent class). In both cases, we showed that the quantities of interest can be found by considering the unique solution to a system of linear equations (see the third sketch following this list).
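First, a sketch of the recursive calculation in (a): the Chapman-Kolmogorov equations $r_{ij}(n) = \sum_k r_{ik}(n-1) p_{kj}$ say that the matrix of n-step transition probabilities is obtained by repeated multiplication by the transition matrix (equivalently, it is the n-th matrix power). The matrix P is the same hypothetical one as in the earlier sketch.

import numpy as np

def n_step_probabilities(P, n):
    """Chapman-Kolmogorov recursion: r(n) = r(n-1) P, with r(0) = I."""
    r = np.eye(len(P))
    for _ in range(n):
        r = r @ P   # r[i, j] now holds the (one more step) probability r_ij
    return r

P = np.array([[0.5, 0.3, 0.2], [0.1, 0.6, 0.3], [0.2, 0.2, 0.6]])
# Entry [i, j] is the probability of being at state j after 4 steps,
# given that the chain starts at state i.
print(n_step_probabilities(P, 4))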
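Next, a sketch for (b): the balance equations $\pi_j = \sum_k \pi_k p_{kj}$, together with the normalization equation, form a linear system with a unique solution when there is a single aperiodic recurrent class. One standard numerical approach (an implementation choice, not a prescription from the text) is to replace one redundant balance equation with the normalization equation.

import numpy as np

def steady_state(P):
    """Solve pi = pi P together with sum_j pi_j = 1."""
    n = len(P)
    A = P.T - np.eye(n)   # balance equations written as (P^T - I) pi = 0
    A[-1, :] = 1.0        # replace one (redundant) equation by normalization
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

P = np.array([[0.5, 0.3, 0.2], [0.1, 0.6, 0.3], [0.2, 0.2, 0.6]])
print(steady_state(P))   # steady-state probabilities pi_0, pi_1, pi_2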
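Finally, a sketch for (c), taking the mean first passage times as the illustration: with target state s, they satisfy $t_s = 0$ and $t_i = 1 + \sum_j p_{ij} t_j$ for $i \neq s$, a linear system with a unique solution; the absorption probabilities satisfy an entirely analogous system. This assumes, as in the text, that the target is reachable from every state.

import numpy as np

def mean_first_passage_times(P, target):
    """Expected number of steps until `target` is first reached.
    Solves t_i = 1 + sum_j p_ij t_j for i != target, with t_target = 0."""
    n = len(P)
    others = [i for i in range(n) if i != target]
    Q = P[np.ix_(others, others)]   # transitions among non-target states
    t = np.zeros(n)
    t[others] = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
    return t

P = np.array([[0.5, 0.3, 0.2], [0.1, 0.6, 0.3], [0.2, 0.2, 0.6]])
print(mean_first_passage_times(P, target=2))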

We finally considered continuous-time Markov chains. In such models, given the current state, the next state is determined by the same mechanism as in discrete-time Markov chains. However, the time until the next transition is an exponentially distributed random variable, whose parameter depends only on the current state. Continuous-time Markov chains are in many ways similar to their discrete-time counterparts. They have the same Markov property (the future is independent of the past, given the present). In fact, we can visualize a continuous-time Markov chain in terms of a related discrete-time Markov chain, obtained by a fine discretization of the time axis. Because of this correspondence, the steady-state behaviors of continuous-time and discrete-time Markov chains are similar: assuming that there is a single recurrent class, the occupancy probability of any particular state converges to a steady-state probability that does not depend on the initial state. These steady-state probabilities can be found by solving a suitable set of balance and normalization equations.
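The transition mechanism just described is easy to simulate directly, as the following sketch with made-up numbers shows: draw an exponential holding time whose rate nu[i] depends only on the current state i, then choose the next state as in the discrete-time case. For simplicity we reuse the hypothetical matrix P as the jump matrix; its self-transitions merely restart the exponential clock in the same state, which is harmless by memorylessness.

import numpy as np

def simulate_ctmc(P, nu, start, t_end, rng):
    """Simulate a continuous-time Markov chain up to time t_end.
    nu[i] is the transition rate out of state i; jumps follow P."""
    t, state = 0.0, start
    history = [(t, state)]
    while True:
        # Holding time in the current state: exponential with parameter
        # nu[state], independent of how the state was reached.
        t += rng.exponential(1.0 / nu[state])
        if t >= t_end:
            return history
        state = rng.choice(len(P), p=P[state])
        history.append((t, state))

P = np.array([[0.5, 0.3, 0.2], [0.1, 0.6, 0.3], [0.2, 0.2, 0.6]])
nu = np.array([1.0, 2.0, 0.5])   # hypothetical transition rates
print(simulate_ctmc(P, nu, start=0, t_end=5.0, rng=np.random.default_rng(0)))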
