
11. Markov Chains
Courtesy of J. Akinpelu, Anis Koubâa, Y. Wexler, & D. Geiger
Random Processes
A stochastic process is a collection of random variables X(t), t ∈ I.
– The index t is often interpreted as time.
– X(t) is called the state of the process at time t; it may be discrete-valued or continuous-valued.
– The set I is called the index set of the process.
  • If I is countable, the stochastic process is said to be a discrete-time process.
  • If I is an interval of the real line, the stochastic process is said to be a continuous-time process.
– The state space E is the set of all possible values that the random variables X(t) can assume.
Discrete Time Random Process
If I is countable, X(t) is often denoted by X_n, n = 0, 1, 2, 3, …
[Timeline figure: events occur at specific points in time t = 0, 1, 2, 3, 4.]
Discrete Time Random Process
State space = {SUNNY, RAINY}
X(day i) = "S" or "R": a RANDOM VARIABLE that varies with the DAY.
Observed over one week (Day 1 = MON, …, Day 7 = SUN):
X(day 1) = "S", X(day 2) = "S", X(day 3) = "R", X(day 4) = "S",
X(day 5) = "R", X(day 6) = "S", X(day 7) = "S"
X(day i) IS A STOCHASTIC PROCESS: the status of the weather observed each DAY.
Markov Processes
A stochastic process X_n, n = 0, 1, 2, … is called a Markov process if
P{X_{n+1} = j | X_n = i, X_{n-1} = i_{n-1}, …, X_1 = i_1, X_0 = i_0} = P{X_{n+1} = j | X_n = i}
for all states i_0, i_1, …, i_{n-1}, i, j and all n ≥ 0.
If the X_n are integer-valued, X_n is called a Markov chain.
What is the "Markov Property"?
Pr{X(DAY 6) = "S" | X(DAY 5) = "R", X(DAY 4) = "S", …, X(DAY 1) = "S"}
= Pr{X(DAY 6) = "S" | X(DAY 5) = "R"}
PAST EVENTS: X(day 1) = "S", X(day 2) = "S", X(day 3) = "R", X(day 4) = "S"
NOW: X(day 5) = "R"
FUTURE EVENTS: X(day 6) = ? (the probability of "S" or "R" on DAY 6, given all previous states)
(Day 1 = MON, …, Day 7 = SUN)
Markov Property: the probability that it will be SUNNY on DAY 6 (FUTURE), given that it is RAINY on DAY 5 (NOW), is independent of PAST EVENTS.
Markov Chains
We restrict ourselves to Markov chains such that the conditional probabilities
p_ij = P{X_{n+1} = j | X_n = i}
are independent of n, and for which i, j ∈ E, where E = {0, 1, 2, …}
(which is equivalent to saying that the state space E is finite or countable).
Such a Markov chain is called homogeneous.
Markov Chains
Since
– probabilities are non-negative, and
– the process must make a transition into some state at each time in I,
then
p_ij ≥ 0 for i, j ∈ E;
Σ_j p_ij = 1 for i ∈ E.
We can arrange the probabilities p_ij into a square matrix {p_ij} called the transition matrix.
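The two conditions above (non-negative entries, rows summing to 1) can be checked mechanically. A minimal sketch in Python; the function name `is_stochastic` is our own choice, not from the slides:

```python
def is_stochastic(P, tol=1e-12):
    """Return True if P is a valid (row-)stochastic transition matrix."""
    for row in P:
        if any(p < 0 for p in row):      # p_ij >= 0 for all i, j
            return False
        if abs(sum(row) - 1.0) > tol:    # sum_j p_ij = 1 for each i
            return False
    return True

# The rain / no-rain matrix used later in these slides:
P = [[0.4, 0.6],
     [0.2, 0.8]]
print(is_stochastic(P))              # True
print(is_stochastic([[0.5, 0.4],
                     [0.2, 0.8]]))   # False: first row sums to 0.9
```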
Markov Chain: A Simple Example
Weather:
• raining today → 40% rain tomorrow, 60% no rain tomorrow
• not raining today → 20% rain tomorrow, 80% no rain tomorrow
State transition diagram:
[Diagram: "rain" loops to itself with probability 0.4 and goes to "no rain" with 0.6; "no rain" loops to itself with 0.8 and goes to "rain" with 0.2.]
Rain (state 0), No Rain (state 1)
Weather:
• raining today → 40% rain tomorrow, 60% no rain tomorrow
• not raining today → 20% rain tomorrow, 80% no rain tomorrow
The transition (prob.) matrix P:

P = [ 0.4  0.6 ]
    [ 0.2  0.8 ]

• For a given current state:
– transitions into any state are possible
– each row sums up to 1
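This chain is easy to simulate: by the Markov property, each day's weather is sampled using only the current state's row of P. A small sketch (state 0 = rain, state 1 = no rain; the helper names `step` and `simulate` are ours, not from the slides):

```python
import random

P = [[0.4, 0.6],   # from rain:    40% rain, 60% no rain
     [0.2, 0.8]]   # from no rain: 20% rain, 80% no rain

def step(state, rng):
    """Sample the next state given only the current one (Markov property)."""
    return 0 if rng.random() < P[state][0] else 1

def simulate(n_days, start=0, seed=0):
    """Simulate n_days transitions starting from `start`."""
    rng = random.Random(seed)
    states = [start]
    for _ in range(n_days):
        states.append(step(states[-1], rng))
    return states

path = simulate(10)
print(path)   # the start state plus 10 simulated days
```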
Examples in textbook
Example 11.6
Example 11.7
Figures 11.2 and 11.3
Transition Probability
Note that each entry of P is a one-step transition probability: the probability of moving from the current state to the next state.
Then what about multiple steps? Let's start with 2 steps first.
2-Step Transition Prob. of a 2-State System: states 0 and 1
Let p_ij(2) be the probability of going from i to j in 2 steps.
Suppose i = 0, j = 0; then
P(X_2 = 0 | X_0 = 0)
= P(X_1 = 1 | X_0 = 0) · P(X_2 = 0 | X_1 = 1) + P(X_1 = 0 | X_0 = 0) · P(X_2 = 0 | X_1 = 0)
p_00(2) = p_01 p_10 + p_00 p_00
Similarly,
p_01(2) = p_01 p_11 + p_00 p_01
p_10(2) = p_10 p_00 + p_11 p_10
p_11(2) = p_10 p_01 + p_11 p_11
In matrix form, P(2) = P(1) P(1) = P².
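For the weather matrix, the entry-by-entry formula and the matrix product give the same numbers, which we can check directly. A sketch (the `matmul` helper is ours):

```python
def matmul(A, B):
    """Plain n x n matrix product."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[0.4, 0.6],
     [0.2, 0.8]]

P2 = matmul(P, P)                              # P^2

# Entry-by-entry, as derived on the slide:
p00_2 = P[0][1] * P[1][0] + P[0][0] * P[0][0]  # p01*p10 + p00*p00
print(P2[0][0], p00_2)                         # both equal 0.28 (up to float rounding)
```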
In general, the 2-step transition probability is expressed as follows. Note that
P{X_{n+2} = j | X_n = i}
= Σ_k P{X_{n+2} = j | X_{n+1} = k, X_n = i} P{X_{n+1} = k | X_n = i}
= Σ_k P{X_{n+2} = j | X_{n+1} = k} P{X_{n+1} = k | X_n = i}
= Σ_k p_ik p_kj
= (P²)_ij
Two-Step Transition Prob.
State space E = {0, 1, 2}

P = [ 1/2  1/4  1/4 ]
    [ 1/2   0   1/2 ]
    [ 1/4  1/4  1/2 ]

Hence,

P² = P · P = [ 7/16  3/16  3/8  ]
             [ 3/8   1/4   3/8  ]
             [ 3/8   3/16  7/16 ]

P{X_{n+2} = 1 | X_n = 2} = (P²)_{2,1} = 3/16
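The three-state example can be reproduced exactly with rational arithmetic, confirming the entry (P²)_{2,1}. A sketch using the standard-library `Fraction` type (the `matmul` helper is ours):

```python
from fractions import Fraction as F

P = [[F(1, 2), F(1, 4), F(1, 4)],
     [F(1, 2), F(0),    F(1, 2)],
     [F(1, 4), F(1, 4), F(1, 2)]]

def matmul(A, B):
    """Plain n x n matrix product (exact, since entries are Fractions)."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P2 = matmul(P, P)
print(P2[2][1])   # 3/16 = P{X_{n+2} = 1 | X_n = 2}
print(P2[0][0])   # 7/16
```

Working in `Fraction`s avoids floating-point rounding, so each entry of P² can be compared to the slide's values exactly.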
Chapman-Kolmogorov Equations
In general, for all n, m ≥ 0 and i, j ∈ E,
P{X_{n+m} = j | X_n = i} = p_ij(m) = (P^m)_ij.
This leads to the Chapman-Kolmogorov equations:
p_ij(m+n) = Σ_{k∈E} p_ik(m) p_kj(n)
for all n, m ≥ 0 and all i, j ∈ E.
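In matrix form the Chapman-Kolmogorov equations say P^(m+n) = P^m P^n, which can be checked numerically on the weather chain. A sketch (the `matmul` and `matpow` helpers are ours):

```python
def matmul(A, B):
    """Plain n x n matrix product."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matpow(P, m):
    """m-step transition matrix P^m (m >= 0)."""
    n = len(P)
    R = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    for _ in range(m):
        R = matmul(R, P)
    return R

P = [[0.4, 0.6],
     [0.2, 0.8]]

m, n_steps = 2, 3
lhs = matpow(P, m + n_steps)                    # p_ij(m+n)
rhs = matmul(matpow(P, m), matpow(P, n_steps))  # sum_k p_ik(m) p_kj(n)
ok = all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
         for i in range(2) for j in range(2))
print(ok)  # True
```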