Transcript Lesson 21
Random Processes / Markov Processes
Pool example: The home I moved into came with an above-ground pool that was
green. I spent big $$ and got the pool clear again. Then the pump started
leaking; I turned off the pump and eventually the pool turned green again.
After fixing the pump, I finally got the pool to turn blue again. I have made
the following observations: if I observe the pool each morning, it basically
has three states: blue, blue/green, and green.
If the pool is blue, the probability of it staying blue is about 80%;
otherwise it turns blue/green.
If the pool is blue/green, there are equal probabilities of remaining
blue/green, turning blue, or turning green.
If the pool is green, there is a 60% probability of remaining green;
otherwise the pool turns blue/green.
Random Processes / Markov Processes
[State diagram for the pool: nodes B, B/G, and G, with arcs for the
transition probabilities listed above.]
Random Processes / Markov Processes
Probability Transition Matrix (P) – probability of
transitioning from some current state to some next state in
one step.
P =
state    G     B/G    B
  G     .60   .40    0.0
  B/G   .33   .33    .33
  B     0.0   .20    .80
P is referred to as the probability transition matrix.
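As an illustration (a minimal sketch, not from the lesson; numpy assumed),
the matrix can be stored as a 2-D array and a day-0 state distribution pushed
forward one morning at a time:

    import numpy as np

    # Rows and columns ordered G, B/G, B, matching the matrix above.
    # The B/G row is written as exact thirds so that every row sums to 1.
    P = np.array([
        [0.60, 0.40, 0.00],   # green stays green 60%, else turns blue/green
        [1/3,  1/3,  1/3],    # blue/green: equal chances of G, B/G, B
        [0.00, 0.20, 0.80],   # blue stays blue 80%, else turns blue/green
    ])

    dist = np.array([1.0, 0.0, 0.0])   # day 0: the pool is certainly green

    for day in range(1, 6):
        dist = dist @ P                # one-step update: row vector times P
        print(f"day {day}: G={dist[0]:.3f}  B/G={dist[1]:.3f}  B={dist[2]:.3f}")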
Random Processes / Markov Processes
What is a Markov Process?
A stochastic (probabilistic) process which contains the
Markovian property.
A process has the Markovian property if:
P{X_{t+1} = j | X_0 = k_0, X_1 = k_1, …, X_{t-1} = k_{t-1}, X_t = i} = P{X_{t+1} = j | X_t = i},
for t = 0, 1, … and every sequence i, j, k_0, k_1, …, k_{t-1}.
In other words, any future state depends only on the current state, not on
how the process arrived there.
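To make the property concrete, here is a small sketch using the pool chain
(the helper name next_state is mine, not the lesson's; the B/G row is written
as exact thirds so each row sums to 1). The sampler is handed only the
current state, so nothing earlier in the history can influence the next state:

    import numpy as np

    STATES = ["G", "B/G", "B"]
    P = np.array([[0.60, 0.40, 0.00],
                  [1/3,  1/3,  1/3],
                  [0.00, 0.20, 0.80]])
    rng = np.random.default_rng(0)

    def next_state(i):
        # The distribution of X_{t+1} is read off row i alone; nothing
        # observed before time t is consulted -- the Markovian property.
        return rng.choice(len(STATES), p=P[i])

    state = 0                          # start green
    path = [STATES[state]]
    for _ in range(7):
        state = next_state(state)
        path.append(STATES[state])
    print(path)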
Markov Processes cont.
This conditional probability,
P{X_{t+1} = j | X_t = i},
is called the one-step transition probability.
And if
P{X_{t+1} = j | X_t = i} = P{X_1 = j | X_0 = i}
for all t = 1, 2, …,
then the one-step transition probability is said to be stationary and is
therefore referred to as the stationary transition probability.
Markov Processes cont.
Let pij = P{X_{t+1} = j | X_t = i}. Then

P =
state    0      1      2      3
  0     p00    p01    p02    p03
  1     p10    p11    p12    p13
  2     p20    p21    p22    p23
  3     p30    p31    p32    p33

P is referred to as the probability transition matrix.
Markov Processes cont.
Suppose the probability you win is based on whether you won the last time you
played some game. Say, if you won last time, then there is a 70% chance of
winning the next time. However, if you lost last time, there is a 60% chance
you lose the next time.
Can the process of winning and losing be modeled as a Markov process? Yes:
the next outcome depends only on the most recent outcome, which is exactly
the Markovian property.
Let state 0 be you win, and state 1 be you lose, then:
P =
state    0     1
  0     .70   .30
  1     .40   .60
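As a quick check of what the model implies (a sketch, assuming numpy),
squaring P gives two-games-ahead probabilities; e.g., the chance of winning
two games from now, given a win now, is .7(.7) + .3(.4) = 0.61:

    import numpy as np

    P = np.array([[0.70, 0.30],    # row 0: you won the last game
                  [0.40, 0.60]])   # row 1: you lost the last game

    P2 = P @ P                     # two-step transition probabilities
    print(P2)                      # row 0 is [0.61, 0.39]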
Markov Processes cont.
See handout on n-step transition matrix.
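The handout is not reproduced in this transcript; the standard fact it deals
with (via the Chapman-Kolmogorov equations) is that the n-step transition
matrix is the n-th matrix power of P. A minimal sketch, assuming numpy:

    import numpy as np

    P = np.array([[0.70, 0.30],
                  [0.40, 0.60]])

    # n-step transition matrix: P(n) = P^n (Chapman-Kolmogorov).
    for n in (2, 4, 8):
        print(n, np.linalg.matrix_power(P, n).round(4))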
Markov Processes cont.
Let

P^n =

state    0     1     2    …    N
  0     p0    p1    p2   …    pN
  1     p0    p1    p2   …    pN
  2     p0    p1    p2   …    pN
  3     p0    p1    p2   …    pN

for large n (every row converges to the same vector). Then
[p0, p1, p2, …, pN] are the steady state probabilities.
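This convergence is easy to see numerically (a sketch, assuming numpy; the
win/lose matrix from earlier is reused): as n grows, the rows of P^n become
identical:

    import numpy as np

    P = np.array([[0.70, 0.30],
                  [0.40, 0.60]])

    # Each row of P^n approaches the same vector [p0, p1] as n grows.
    for n in (1, 2, 5, 10, 20):
        print(n, np.linalg.matrix_power(P, n).round(5))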
Markov Processes cont.
Observing that P^(n) = P^(n-1) P,
as n → ∞, P^∞ = P^∞ P. That is,

[p0, p1, p2, …, pN] = [p0, p1, p2, …, pN] ×

   p00  p01  p02  …  p0N
   p10  p11  p12  …  p1N
   p20  p21  p22  …  p2N
   …
   pN0  pN1  pN2  …  pNN

The inner products of this matrix equation give N+1 equations in N+1
unknowns, but the rank of the P matrix is only N. However, note that
p0 + p1 + p2 + … + pN = 1; with this normalization we again have N+1
independent equations in N+1 unknowns.
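These balance equations can be solved mechanically. Below is a minimal sketch
(the helper name steady_state is mine, not from the lesson; numpy assumed):
one redundant balance equation is dropped and replaced with the normalization
row.

    import numpy as np

    def steady_state(P):
        """Solve p = pP together with p0 + p1 + ... + pN = 1."""
        n = P.shape[0]
        # Balance equations: (P.T - I) p = 0. Drop one row (the rank is
        # only N) and append the normalization equation sum(p) = 1.
        A = np.vstack([(P.T - np.eye(n))[:-1], np.ones(n)])
        b = np.zeros(n)
        b[-1] = 1.0
        return np.linalg.solve(A, b)

    P = np.array([[0.70, 0.30],
                  [0.40, 0.60]])
    print(steady_state(P))    # [0.5714  0.4286], i.e. [4/7, 3/7]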
Markov Processes cont.
Show an example of obtaining P∞ = P∞P from the transition matrix:

P =
state    0     1
  0     .70   .30
  1     .40   .60
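Carrying the example through (the worked arithmetic is not in the transcript,
but follows directly from the matrix above): writing [p0, p1] = [p0, p1]P
componentwise gives

p0 = .70 p0 + .40 p1
p1 = .30 p0 + .60 p1

together with p0 + p1 = 1. The first equation gives .30 p0 = .40 p1, so
p0 = (4/3) p1; substituting into p0 + p1 = 1 yields p1 = 3/7 ≈ .43 and
p0 = 4/7 ≈ .57. In the long run you win about 57% of your games, regardless
of how the first game went.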
Markov Processes cont.
Break for Exercise
Markov Processes cont.
State diagrams:
P =
state    0     1
  0     .70   .30
  1     .40   .60

[State diagram: nodes 0 and 1, with self-loops 0→0 (.70) and 1→1 (.60) and
arcs 0→1 (.30) and 1→0 (.40).]
Markov Processes cont.
State diagrams:
P =
state    0     1     2     3
  0     .5    .5    0     0
  1     .5    .5    0     0
  2     .25   .25   .25   .25
  3     0     0     0     1

[State diagram: nodes 0, 1, 2, and 3, with an arc for each nonzero entry
of P above.]
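Unlike the 2-state example, this chain has no single limiting row (a sketch,
assuming numpy): state 3 is absorbing, states 0 and 1 form a closed pair, and
state 2 is transient, so the rows of P^n converge to different vectors
depending on the starting state:

    import numpy as np

    P = np.array([[0.50, 0.50, 0.00, 0.00],
                  [0.50, 0.50, 0.00, 0.00],
                  [0.25, 0.25, 0.25, 0.25],
                  [0.00, 0.00, 0.00, 1.00]])

    # From states 0 or 1 the chain stays in {0, 1} forever; from state 3 it
    # never leaves; from state 2 it is eventually captured by one or the other.
    print(np.linalg.matrix_power(P, 50).round(4))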