Lecture06-DTMC
Al-Imam Mohammad Ibn Saud University
CS433
Modeling and Simulation
Lecture 06 – Part 03
Discrete Markov Chains
12 Apr 2009
Dr. Anis Koubâa
Classification of States: 1

A path is a sequence of states in which each transition has a positive probability of occurring.

State j is reachable (or accessible) from state i (i → j) if there is a path from i to j; equivalently, Pij(n) > 0 for some n ≥ 0, i.e., the probability of going from i to j in n steps is greater than zero.

States i and j communicate (i ↔ j) if i is reachable from j and j is reachable from i. (Note: a state always communicates with itself.)

A set of states C is a communicating class if every pair of states in C communicates, and no state in C communicates with any state outside C.
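These definitions can be checked mechanically from a transition matrix. A minimal sketch in Python (my own illustration, not from the lecture), using the fact that j is reachable from i in zero or more steps iff the (i, j) entry of (I + A)^(n-1) is nonzero, where A marks the positive transition probabilities:

```python
import numpy as np

def reachability(P):
    # A[i, j] = 1 whenever p_ij > 0; (I + A)^(n-1) then has a nonzero
    # (i, j) entry iff j is reachable from i in 0 or more steps.
    n = len(P)
    A = (np.asarray(P) > 0).astype(int)
    return np.linalg.matrix_power(np.eye(n, dtype=int) + A, n - 1) > 0

def communicating_classes(P):
    R = reachability(P)
    C = R & R.T          # i and j communicate iff each is reachable from the other
    classes = []
    for i in range(len(P)):
        cls = frozenset(np.flatnonzero(C[i]))
        if cls not in classes:
            classes.append(cls)
    return classes
```

For example, P = [[0.5, 0.5, 0], [0.5, 0, 0.5], [0, 0, 1]] has state 2 absorbing, so the communicating classes are {0, 1} and {2}.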
A state i is said to be an absorbing state if pii = 1.

A subset S of the state space X is a closed set if no state outside of S is reachable from any state in S (like an absorbing state, but with multiple states); this means pij = 0 for every i ∈ S and j ∉ S.

A closed set S of states is irreducible if every state j ∈ S is reachable from every state i ∈ S.

A Markov chain is said to be irreducible if its state space X is irreducible.
Example

[Figure: two transition diagrams. Left: an irreducible Markov chain on a few states (transitions p00, p01, p10, p12, p21, p22), in which every state is reachable from every other. Right: a reducible Markov chain on states 0–4: states 0 and 1 are transient, state 4 (reached via p14) is an absorbing state, and states 2 and 3 (transitions p22, p23, p32, p33) form a closed irreducible set.]
Classification of States: 2

State i is a transient state if there exists a state j such that j is reachable from i but i is not reachable from j.

A state that is not transient is recurrent. There are two types of recurrent states:
1. Positive recurrent: the expected time to return to the state is finite.
2. Null recurrent (less common): the expected time to return to the state is infinite (this requires an infinite number of states).

A state i is periodic with period k > 1 if k is the largest number such that every path leading from state i back to state i has a number of transitions that is a multiple of k.

A state is aperiodic if it has period k = 1.

A state is ergodic if it is positive recurrent and aperiodic.
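The period of a state can be computed directly from powers of the transition matrix, since the period is the greatest common divisor of all n ≥ 1 with Pr[Xn = i | X0 = i] > 0. A minimal sketch (my own illustration; for a finite chain, scanning n up to a modest bound suffices):

```python
import numpy as np
from math import gcd

def period(P, i, max_steps=50):
    # gcd of all step counts n at which a return to state i is possible
    P = np.asarray(P, dtype=float)
    d = 0
    Pn = np.eye(len(P))
    for n in range(1, max_steps + 1):
        Pn = Pn @ P
        if Pn[i, i] > 0:
            d = gcd(d, n)
    return d  # 0 means no return was observed within max_steps

# The three-state chain used later in the lecture,
# P = [[0, 1, 0], [0.5, 0, 0.5], [0, 1, 0]], has period 2 at every state.
```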
Example from the book: Introduction to Probability: Lecture Notes, D. Bertsekas and J. Tsitsiklis – Fall 200.
Transient and Recurrent States

We define the hitting time Tij as the random variable that represents the time to go from state i to state j:

    Tij = min{ k > 0 : Xk = j | X0 = i }

Here k is the number of transitions in a path from i to j, so Tij is the minimum number of transitions in a path from i to j.

We define the recurrence time Tii as the first time that the Markov chain returns to state i:

    Tii = min{ k > 0 : Xk = i | X0 = i }

The probability that the first recurrence to state i occurs at the nth step is

    fii(n) = Pr[Tii = n] = Pr[Xn = i, Xn-1 ≠ i, ..., X1 ≠ i | X0 = i] = Pr[Ti = n | X0 = i]

where Ti is the time of the first visit to i given X0 = i.

The probability of recurrence to state i is

    fi = fii = Pr[Tii < ∞] = Σ (n=1 to ∞) fii(n)
The mean recurrence time is

    Mi = E[Tii] = E[Ti | X0 = i] = Σ (n=1 to ∞) n fii(n)

A state is recurrent if fi = 1:

    fi = Pr[Tii < ∞] = Pr[Ti < ∞ | X0 = i] = 1

If Mi < ∞ the state is said to be positive recurrent; if Mi = ∞ it is said to be null recurrent.

A state is transient if fi < 1:

    fi = Pr[Tii < ∞] = Pr[Ti < ∞ | X0 = i] < 1

If fi < 1, then 1 - fi = Pr[Tii = ∞] is the probability of never returning to state i.
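Both fi and Mi can be estimated by simulation. A rough Monte Carlo sketch (my own illustration, not the lecture's code): run many paths from state i and record the first return time Tii on each.

```python
import random

def estimate_recurrence(P, i, trials=20000, horizon=1000, seed=0):
    # Estimate f_i (recurrence probability) and M_i (mean recurrence time)
    # by simulating `trials` paths and recording the first return time T_ii.
    rng = random.Random(seed)
    returns = []
    states = range(len(P))
    for _ in range(trials):
        s = i
        for k in range(1, horizon + 1):
            s = rng.choices(states, weights=P[s])[0]
            if s == i:
                returns.append(k)     # first return after k steps
                break
    f_hat = len(returns) / trials
    M_hat = sum(returns) / len(returns) if returns else float("inf")
    return f_hat, M_hat

# e.g. for P = [[0.5, 0.5], [0.5, 0.5]], state 0 has f_0 = 1 and M_0 = 2.
```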
We define Ni as the number of visits to state i given X0 = i:

    Ni = Σ (n=0 to ∞) I{Xn = i},  where I{Xn = i} = 1 if Xn = i and 0 otherwise.

Theorem: If Ni is the number of visits to state i given X0 = i, then

    E[Ni | X0 = i] = Σ (n=0 to ∞) Pii(n) = 1 / (1 - fi)

where Pii(n) is the transition probability from state i back to state i after n steps.
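The theorem is easy to check numerically. A small illustration (my own example): state 0 stays put with probability 0.5 or jumps to an absorbing state, so f0 = 0.5 and the expected number of visits to 0 should be 1 / (1 - 0.5) = 2.

```python
import numpy as np

# Chain: state 0 stays with prob 0.5 or moves to the absorbing state 1.
P = np.array([[0.5, 0.5],
              [0.0, 1.0]])

# E[N_0 | X_0 = 0] = sum over n of P_00^(n); truncate the sum at n = 200,
# which is ample since P_00^(n) = 0.5^n decays geometrically.
expected_visits = sum(np.linalg.matrix_power(P, n)[0, 0] for n in range(200))
print(expected_visits)   # ≈ 2.0 = 1 / (1 - f_0)
```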
The probability of reaching state j for the first time in n steps, starting from X0 = i, is

    fij(n) = Pr[Tij = n] = Pr[Xn = j, Xn-1 ≠ j, ..., X1 ≠ j | X0 = i]

The probability of ever reaching j starting from state i is

    fij = Pr[Tij < ∞] = Σ (n=1 to ∞) fij(n)
Three Theorems

1. If a Markov chain has a finite state space, then at least one of its states is recurrent.
2. If state i is recurrent and state j is reachable from state i, then state j is also recurrent.
3. If S is a finite closed irreducible set of states, then every state in S is recurrent.
Positive and Null Recurrent States

Let Mi be the mean recurrence time of state i:

    Mi = E[Tii] = Σ (k=1 to ∞) k Pr[Tii = k]

A state is said to be positive recurrent if Mi < ∞. If Mi = ∞ then the state is said to be null recurrent.
Three Theorems

1. If state i is positive recurrent and state j is reachable from state i, then state j is also positive recurrent.
2. If S is a closed irreducible set of states, then either every state in S is positive recurrent, or every state in S is null recurrent, or every state in S is transient.
3. If S is a finite closed irreducible set of states, then every state in S is positive recurrent.
Example

[Figure: the reducible chain from the earlier example (states 0–4, transitions p00, p01, p10, p12, p14, p22, p23, p32, p33). States 0 and 1 are transient states, the absorbing state 4 is a recurrent state, and states 2 and 3 are positive recurrent states.]
Periodic and Aperiodic States

Suppose that the structure of the Markov chain is such that state i is visited only after a number of steps that is an integer multiple of an integer d > 1. Then the state is called periodic with period d. If no such integer exists (i.e., d = 1), the state is called aperiodic.

Example: the chain on states 0, 1, 2 below moves from 0 to 1 with probability 1, from 1 to 0 or 2 with probability 0.5 each, and from 2 to 1 with probability 1. Every state is periodic with period d = 2.

        | 0    1    0   |
    P = | 0.5  0    0.5 |
        | 0    1    0   |
Steady State Analysis

Recall that the state probability, i.e., the probability of finding the MC in state i after the kth step, is given by:

    πi(k) = Pr[Xk = i],  with  π(k) = [π0(k), π1(k), ...]

An interesting question is what happens in the "long run", i.e.,

    πi = lim (k→∞) πi(k)

This is referred to as the steady state, equilibrium, or stationary state probability.

Questions:
- Do these limits exist?
- If they exist, do they converge to a legitimate probability distribution, i.e., Σi πi = 1?
- How do we evaluate πj, for all j?
Recall the recursive probability

    π(k+1) = π(k) P

If a steady state exists, then π(k+1) = π(k), and therefore the steady-state probabilities are given by the solution to the equations

    π = πP  and  Σi πi = 1

Even in an irreducible Markov chain, the presence of periodic states prevents the existence of a steady-state probability.

Example (periodic.m): with

        | 0    1    0   |
    P = | 0.5  0    0.5 |
        | 0    1    0   |

and π(0) = [1 0 0], the state probabilities oscillate between [0 1 0] and [0.5 0 0.5] and never converge.
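When a steady state does exist, the equations π = πP, Σ πi = 1 form a linear system. A minimal sketch (my own illustration): replace one of the balance equations with the normalization constraint and solve.

```python
import numpy as np

def stationary(P):
    # Solve pi P = pi subject to sum(pi) = 1: the balance equations are
    # (P^T - I) pi = 0; overwrite the last row with the normalization row.
    P = np.asarray(P, dtype=float)
    n = len(P)
    A = P.T - np.eye(n)
    A[-1, :] = 1.0           # normalization: sum_i pi_i = 1
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# Aperiodic two-state example: P = [[0.9, 0.1], [0.5, 0.5]]
# gives pi = [5/6, 1/6].
```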
THEOREM: In an irreducible, aperiodic Markov chain consisting of positive recurrent states, a unique stationary state probability vector π exists such that πj > 0 and

    πj = lim (k→∞) πj(k) = 1 / Mj

where Mj is the mean recurrence time of state j. Such a chain is called an ergodic Markov chain.

The steady-state vector π is determined by solving

    π = πP  and  Σi πi = 1
Discrete Birth-Death Example

[Figure: a birth-death chain on states 0, 1, ..., i, ...: from each state i ≥ 1 the chain moves up (i → i+1) with probability 1-p and down (i → i-1) with probability p; state 0 returns to itself with probability p and moves to state 1 with probability 1-p.]

        | p    1-p   0    0   ... |
    P = | p    0    1-p   0   ... |
        | 0    p     0   1-p  ... |
        | ...                     |

Thus, to find the steady-state vector π we need to solve

    π = πP  and  Σi πi = 1
In other words,

    π0 = π0 p + π1 p
    πj = πj-1 (1-p) + πj+1 p,  j = 1, 2, ...

Solving these equations we get

    π1 = ((1-p)/p) π0
    π2 = ((1-p)/p)^2 π0

and, in general,

    πj = ((1-p)/p)^j π0

Summing all terms we get

    Σ (i=0 to ∞) πi = π0 Σ (i=0 to ∞) ((1-p)/p)^i = 1,  so  π0 = 1 / Σ (i=0 to ∞) ((1-p)/p)^i
Therefore, for all states j we get

    πj = ((1-p)/p)^j / Σ (i=0 to ∞) ((1-p)/p)^i

If p < 1/2, then Σ (i=0 to ∞) ((1-p)/p)^i = ∞, so πj = 0 for all j: all states are transient.

If p > 1/2, then Σ (i=0 to ∞) ((1-p)/p)^i = p / (2p - 1), so

    πj = ((2p - 1)/p) ((1-p)/p)^j,  for all j

All states are positive recurrent.
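The closed form for the positive recurrent case is a one-liner to evaluate. A small check (my own illustration; `birth_death_pi` is a hypothetical helper), valid only when p > 1/2:

```python
def birth_death_pi(p, j):
    # pi_j = pi_0 * r^j with r = (1-p)/p and pi_0 = 1 - r = (2p-1)/p;
    # the geometric series converges only when r < 1, i.e. p > 1/2.
    assert p > 0.5, "stationary distribution exists only for p > 1/2"
    r = (1 - p) / p
    return (1 - r) * r ** j

# e.g. p = 0.75 gives r = 1/3, so pi_0 = 2/3, pi_1 = 2/9, pi_2 = 2/27, ...
```

Summing πj over j returns 1, confirming it is a legitimate probability distribution.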
If p = 1/2, then Σ (i=0 to ∞) ((1-p)/p)^i = Σ (i=0 to ∞) 1 = ∞, so πj = 0 for all j: all states are null recurrent.
Reducible Markov Chains

[Figure: a transient set T with arrows leading into two disjoint irreducible sets S1 and S2.]

In steady state, we know that the Markov chain will eventually end up in an irreducible set (or an absorbing state), where the previous analysis still holds. The only question that arises, in case there are two or more irreducible sets, is the probability that the chain ends up in each set.
[Figure: a starting state i, a transient set T containing states r, s1, ..., sn, and an irreducible set S.]

Suppose we start from state i. Then there are two ways to reach S: in one step, or by first moving to some state r ∈ T and then from r to S after further steps. Define

    πi(S) = Pr[Xk ∈ S | X0 = i],  k = 1, 2, ...
First consider the one-step transition:

    Pr[X1 ∈ S | X0 = i] = Σ (j ∈ S) pij

Next consider the general case for k = 2, 3, ...:

    Pr[Xk ∈ S, Xk-1 = rk-1 ∈ T, ..., X1 = r ∈ T | X0 = i]
      = Pr[Xk ∈ S, Xk-1 = rk-1 ∈ T, ... | X1 = r ∈ T, X0 = i] Pr[X1 = r | X0 = i]
      = Pr[Xk ∈ S, Xk-1 = rk-1 ∈ T, ... | X1 = r ∈ T] pir
      = πr(S) pir

Combining both cases and summing over all r ∈ T:

    πi(S) = Σ (j ∈ S) pij + Σ (r ∈ T) πr(S) pir
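These equations form a linear system over the transient states. A minimal sketch (my own example, not from the lecture): writing b[i] = Σ (j ∈ S) pij and restricting P to the transient set T, the equations become (I - P_TT) x = b.

```python
import numpy as np

def absorption_probs(P, T, S):
    # Solve pi_i(S) = sum_{j in S} p_ij + sum_{r in T} p_ir * pi_r(S)
    # for all i in T, as the linear system (I - P_TT) x = b.
    P = np.asarray(P, dtype=float)
    P_TT = P[np.ix_(T, T)]              # transitions within the transient set
    b = P[np.ix_(T, S)].sum(axis=1)     # one-step jumps from T into S
    return np.linalg.solve(np.eye(len(T)) - P_TT, b)

# Transient states T = {0, 1}; absorbing states 2 and 3; target set S = {2}.
P = [[0.0, 0.5, 0.25, 0.25],
     [0.5, 0.0, 0.30, 0.20],
     [0.0, 0.0, 1.00, 0.00],
     [0.0, 0.0, 0.00, 1.00]]
print(absorption_probs(P, T=[0, 1], S=[2]))   # ≈ [0.533, 0.567]
```

Solving by hand confirms it: π0(S) = 0.25 + 0.5 π1(S) and π1(S) = 0.3 + 0.5 π0(S) give π0(S) = 8/15 and π1(S) = 17/30.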