Discrete Time Markov Chains
Lecture #5
Anan Phonphoem, Ph.D.
anan@cpe.ku.ac.th
http://www.cpe.ku.ac.th/~anan
Computer Engineering Department
Kasetsart University, Bangkok, Thailand
7/25/2003

Outline
- Markov Processes
- Discrete Time Markov Chain
- Homogeneous, Irreducible, Transient/Recurrent, Periodic/Aperiodic
- Ergodic
- Stationary Probability
- Transient Behavior
- Birth-Death Process
Markov Processes

- X(t) is a Markov Process if it satisfies the Markov (memoryless) property:

  P{X(t_{n+1}) = x_{n+1} | X(t_n) = x_n, X(t_{n-1}) = x_{n-1}, ..., X(t_1) = x_1}
      = P{X(t_{n+1}) = x_{n+1} | X(t_n) = x_n}

  where t_1 < t_2 < ... < t_{n-1} < t_n < t_{n+1}

- Discrete Time Markov Process: state changes occur at integer time points
- Continuous Time Markov Process: state changes occur at arbitrary times
- X(t) depends only upon the current state
- The past history is summarized in the current state

From Markov Processes ... Discrete Time Markov Chains
- Markov Chain: a Markov Process with a discrete state space; the process stays in a discrete state (position) and is permitted to change state at certain times
- Discrete Time Markov Chain: state (discrete state) changes occur at integer time points
- Continuous Time Markov Chain: state (discrete state) changes occur at arbitrary times
Discrete Time Markov Chains

- The Markov property for a Discrete Time Markov Chain:

  P{X_n = j | X_{n-1} = i_{n-1}, X_{n-2} = i_{n-2}, ..., X_1 = i_1}
      = P{X_n = j | X_{n-1} = i_{n-1}},   n = 1, 2, 3, ...

- X_n = j: the system is in state j at time n
- The system can begin in state x with initial probability P[X_0 = x]
- P{X_n = j | X_{n-1} = i} is the one-step transition probability
- From the initial probability and the one-step transition probabilities, we can find the probability of being in the various states at time n
Homogeneous Markov Chain

- If the transition probabilities are independent of n, the chain is called a Homogeneous Markov Chain
- Let p_ij ≡ P[X_n = j | X_{n-1} = i]: the probability that, being in state i, we move to state j in the next step
- The state transition probability depends only on the initial probability and the transition probabilities, regardless of the transition time
- The m-step transition probabilities are

  p_ij^(m) ≡ P[X_{n+m} = j | X_n = i]
           = Σ_k p_ik p_kj^(m-1),   m = 2, 3, ...

  (one step from i to some intermediate state k, then m-1 steps from k to j)
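The m-step recursion above (the Chapman-Kolmogorov equation) amounts to repeated matrix multiplication. A minimal sketch in plain Python; the two-state matrix P here is a hypothetical example, not one from the lecture:

```python
# m-step transition probabilities via the recursion
# p_ij^(m) = sum_k p_ik * p_kj^(m-1)   (Chapman-Kolmogorov)

def step(P, Q):
    """One application of the recursion: the matrix product P * Q."""
    n = len(P)
    return [[sum(P[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def m_step(P, m):
    """p^(m): probability of going from i to j in exactly m steps."""
    Q = P
    for _ in range(m - 1):
        Q = step(P, Q)
    return Q

# Hypothetical 2-state homogeneous chain (each row sums to 1)
P = [[0.9, 0.1],
     [0.5, 0.5]]
P2 = m_step(P, 2)
# p_00^(2) = 0.9*0.9 + 0.1*0.5 = 0.86
```

Since each row of P is a probability distribution, each row of p^(m) is one as well.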
Irreducible Markov Chain

- A Markov Chain is irreducible if every state can be reached from every other state in a finite number of steps:

  p_ij^(m) > 0 for some integer m, for every pair of states i, j
Not Irreducible Markov Chain

- Case 1
  - Let A = the set of all states in a Markov chain, and A_1 ⊂ A
  - If there is no one-step transition from any state in A_1 to A_1^c, then A_1 is called "Closed"
- Case 2
  - If A_1 consists of one or more states E_i such that once the process enters E_i it cannot move to any other state
  - E_i is called an "Absorbing State": p_ii = 1
Transient and Recurrent States

- f_j^(n) = P[the process first returns to state j in exactly n steps after leaving state j]
- f_j = P[the process ever returns to state j after leaving state j]

  f_j = Σ_{n=1}^∞ f_j^(n)

- If f_j < 1: state E_j is called a "Transient State"
- If f_j = 1: state E_j is called a "Recurrent State"
- M_j = mean recurrence time of state j:

  M_j = Σ_{n=1}^∞ n f_j^(n)

- If M_j = ∞: state E_j is called a "Recurrent Null State"
- If M_j < ∞: state E_j is called a "Recurrent Nonnull State"
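The f_j^(n) can be computed numerically from the renewal relation p_jj^(n) = Σ_{k=1}^{n} f_j^(k) p_jj^(n-k), a standard identity not stated on the slides. A sketch in plain Python, using a hypothetical two-state chain for illustration:

```python
# Estimate f_j^(n), f_j, and M_j via the renewal relation
#   p_jj^(n) = sum_{k=1}^{n} f_j^(k) * p_jj^(n-k)   with p_jj^(0) = 1,
# i.e. f_j^(n) = p_jj^(n) - sum_{k=1}^{n-1} f_j^(k) * p_jj^(n-k)

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def first_return(P, j, n_max):
    """Returns [f_j^(1), ..., f_j^(n_max)]."""
    n = len(P)
    pjj = [1.0]  # p_jj^(0), p_jj^(1), ...
    Q = [[1.0 if i == k else 0.0 for k in range(n)] for i in range(n)]
    for _ in range(n_max):
        Q = mat_mul(Q, P)
        pjj.append(Q[j][j])
    f = [0.0]  # placeholder so f[m] is f_j^(m)
    for m in range(1, n_max + 1):
        f.append(pjj[m] - sum(f[k] * pjj[m - k] for k in range(1, m)))
    return f[1:]

# Hypothetical chain: state 0 stays put w.p. 1/2, else visits state 1 and returns
P = [[0.5, 0.5],
     [1.0, 0.0]]
f = first_return(P, 0, 50)
f0 = sum(f)                                        # ~1.0: state 0 is recurrent
M0 = sum((k + 1) * fk for k, fk in enumerate(f))   # mean recurrence time = 1.5
```

For this chain f_0^(1) = f_0^(2) = 1/2, so f_0 = 1 (recurrent) and M_0 = 1.5 < ∞ (recurrent nonnull).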
Periodic or Aperiodic

- Let β = an integer
- Suppose the only possible steps at which the process can return to state E_i are β, 2β, 3β, ...
- If β > 1 and β is the largest such integer
  - State E_i is called "Periodic"
  - The recurrence time for state E_i has period β
- If β = 1
  - State E_i is called "Aperiodic"

Ergodicity

- E_j = Ergodic if
  - E_j is Aperiodic and Recurrent Nonnull
  - i.e., f_j = 1, M_j < ∞, and β = 1
- A Markov Chain is ergodic
  - if all states of the Markov Chain are ergodic
  - if the number of states is finite and the Markov Chain is aperiodic and irreducible
Theorem 1

- The states of an irreducible Markov Chain are either
  - all transient, or
  - all recurrent nonnull, or
  - all recurrent null
- If periodic, then all states have the same period β

Definition

- Let π_j^(n) = P[finding the system in state E_j at the n-th step]

  π_j^(n) = P[X_n = j]

- Let π_j = stationary probability
  = P[being in state j at an arbitrary time]
  = the limiting state probability
- Either, Case (a):
  - All states are transient, or all states are recurrent null
  ⇒ π_j = 0 for all j
  ⇒ no stationary distribution exists
- Or, Case (b):
  - All states are recurrent nonnull
  ⇒ π_j > 0 for all j
  ⇒ a stationary distribution exists, with π_j = 1 / M_j

- In an irreducible, aperiodic, homogeneous Markov Chain, the limiting state probabilities

  π_j = lim_{n→∞} π_j^(n)

  always exist and are independent of the initial state probability distribution [π_j^(0)]
To solve for π_j

- Balance equations: π_j = Σ_i π_i p_ij   (linearly dependent)
- Normalization condition: 1 = Σ_i π_i

Markov Chain Example

- Driving from town to town, with three towns (states 0, 1, 2) and transition probabilities

  p_01 = 3/4, p_02 = 1/4,
  p_10 = 1/4, p_12 = 3/4,
  p_20 = 1/4, p_21 = 1/4, p_22 = 1/2

- Let π = [π_0, π_1, π_2]
- Let P = transition probability matrix = [p_ij]:

        |  0    3/4   1/4 |
  P  =  | 1/4    0    3/4 |
        | 1/4   1/4   1/2 |

- From the balance equations, π = πP:

  π_0 = 1/4 π_1 + 1/4 π_2
  π_1 = 3/4 π_0 + 1/4 π_2
  π_2 = 1/4 π_0 + 3/4 π_1 + 1/2 π_2

  together with the normalization condition 1 = π_0 + π_1 + π_2

- Solution: π_0 = 0.20, π_1 = 0.28, π_2 = 0.52
- Finite number of states and irreducible ⇒ this is an ergodic Markov Chain

Transient Behavior

- We want to know the probability of finding the process in state E_j at time n:

  π^(n) = [π_0^(n), π_1^(n), π_2^(n), ...]

- From the transition probability matrix P we can calculate π^(1), and by recursion:

  π^(n) = π^(n-1) P
  π^(n) = π^(0) P^n

- From the stationary probability, π = lim_{n→∞} π^(n); taking the limit on both sides of π^(n) = π^(n-1) P gives

  π = πP

- This is the stationary (equilibrium) state probability
- Note: the solution π is independent of π^(0)
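The balance equations plus the normalization condition can be solved mechanically. A sketch using exact rational arithmetic via Gauss-Jordan elimination, applied to the town-driving matrix from the example:

```python
from fractions import Fraction as F

# Solve pi_j = sum_i pi_i p_ij together with sum_j pi_j = 1.
# One balance equation is dropped (they are linearly dependent)
# and replaced by the normalization condition.

def stationary(P):
    n = len(P)
    # Row j (j = 0..n-2): sum_i pi_i (p_ij - delta_ij) = 0
    A = [[P[i][j] - (F(1) if i == j else F(0)) for i in range(n)]
         for j in range(n - 1)]
    A.append([F(1)] * n)          # normalization row
    b = [F(0)] * (n - 1) + [F(1)]
    # Gauss-Jordan elimination with exact fractions
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(n):
            if r != col and A[r][col] != 0:
                factor = A[r][col] / A[col][col]
                A[r] = [a - factor * c for a, c in zip(A[r], A[col])]
                b[r] = b[r] - factor * b[col]
    return [b[i] / A[i][i] for i in range(n)]

# The lecture's example chain
P = [[F(0), F(3, 4), F(1, 4)],
     [F(1, 4), F(0), F(3, 4)],
     [F(1, 4), F(1, 4), F(1, 2)]]
pi = stationary(P)  # [1/5, 7/25, 13/25] = [0.20, 0.28, 0.52]
```

Exact fractions make it easy to confirm that the slide's decimal answer 0.20 / 0.28 / 0.52 is exact: 5/25, 7/25, 13/25.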
Transient Behavior (numerical example)

- The example chain is homogeneous, aperiodic, and irreducible, so the limiting probabilities exist
- Starting from π^(0) = [0, 0, 1]:

  n         1      2      3     ...   ∞
  π_0^(n)   0.25   0.187  0.203 ...   0.20
  π_1^(n)   0.25   0.313  0.266 ...   0.28
  π_2^(n)   0.50   0.500  0.531 ...   0.52

- Starting from π^(0) = [1, 0, 0]:

  n         1      2      3     ...   ∞
  π_0^(n)   0      0.25   0.187 ...   0.20
  π_1^(n)   0.75   0.062  0.359 ...   0.28
  π_2^(n)   0.25   0.688  0.454 ...   0.52

- Both runs converge to the same limiting probabilities [0.20, 0.28, 0.52], independent of the initial distribution π^(0)
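The transient behavior for this example can be reproduced by iterating π^(n) = π^(n-1) P directly, a minimal sketch in plain Python:

```python
# Iterate pi^(n) = pi^(n-1) P for the lecture's example chain,
# starting from pi^(0) = [0, 0, 1] (the system starts in state 2).

def next_dist(pi, P):
    """One step of the recursion: the vector-matrix product pi * P."""
    n = len(P)
    return [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

P = [[0.0, 0.75, 0.25],
     [0.25, 0.0, 0.75],
     [0.25, 0.25, 0.50]]

pi = [0.0, 0.0, 1.0]
history = []
for n in range(100):
    pi = next_dist(pi, P)
    history.append(pi)

# history[0] == [0.25, 0.25, 0.5]               pi^(1)
# history[1] == [0.1875, 0.3125, 0.5]           pi^(2): 0.187 / 0.313 / 0.500
# history[2] == [0.203125, 0.265625, 0.53125]   pi^(3): 0.203 / 0.266 / 0.531
# pi converges to [0.20, 0.28, 0.52], independent of pi^(0)
```

Changing the starting vector to [1, 0, 0] reproduces the second table; only the early iterates differ, not the limit.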
Birth-Death Process

- A Markov Process (discrete time or continuous time)
- State = size of the population
  - The system is in state E_k when the population consists of k members
- Changes in population size occur by at most one
  - State changes can only happen between neighbors
  - Size increased by one ⇒ "Birth"
  - Size decreased by one ⇒ "Death"
- Queueing Theory Model
  - Birth = a customer arrival to the system
  - Death = a customer departure from the system
  - Population = customers in the queueing system
- λ_i = birth (the population increases by one); λ_i > 0 (birth is allowed)
- α_i = death (the population decreases by one); α_0 = 0 (no population ⇒ no death)
- Pure Birth = no decrement, only increment
- Pure Death = no increment, only decrement

Discrete Time Birth-Death Process

- Transition probabilities p_ij do not change with time:

  p_ij = λ_i               if j = i + 1
       = α_i               if j = i - 1
       = 1 - λ_i - α_i     if j = i
       = 0                 otherwise

         | 1-λ_0     λ_0                                   |
         |  α_1   1-λ_1-α_1     λ_1                        |
  P  =   |           α_2     1-λ_2-α_2     λ_2             |
         |                       ...                       |
         |                   α_i     1-λ_i-α_i     λ_i     |
         |                               ...          ...  |
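The tridiagonal matrix above can be assembled programmatically. A sketch with hypothetical numeric values for λ_i and α_i (not from the lecture); the state space is truncated at a finite size, so the birth probability of the top state is set to 0:

```python
# Build the one-step transition matrix of a finite discrete-time
# birth-death chain from birth probabilities lam[i] and death
# probabilities alpha[i] (alpha[0] = 0: an empty population has no deaths).

def birth_death_matrix(lam, alpha):
    n = len(lam)
    P = [[0.0] * n for _ in range(n)]
    for i in range(n):
        if i + 1 < n:
            P[i][i + 1] = lam[i]           # birth:  i -> i+1
        if i - 1 >= 0:
            P[i][i - 1] = alpha[i]         # death:  i -> i-1
        P[i][i] = 1.0 - lam[i] - alpha[i]  # stay:   i -> i
    return P

# Hypothetical 4-state example (population 0..3; lam[3] = 0 caps the size)
lam = [0.3, 0.3, 0.3, 0.0]
alpha = [0.0, 0.2, 0.2, 0.2]
P = birth_death_matrix(lam, alpha)
assert all(abs(sum(row) - 1.0) < 1e-12 for row in P)  # rows are stochastic
```

Only the three diagonals are nonzero, matching the rule p_ij = 0 for |i - j| > 1.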