Markov Processes


Markov Processes
MBAP 6100 & EMEN 5600
Survey of Operations Research
Professor Stephen Lawrence
Leeds School of Business
University of Colorado
Boulder, CO 80309-0419
OR Course Outline
• Intro to OR
• Linear Programming
• Solving LPs
• LP Sensitivity/Duality
• Transport Problems
• Network Analysis
• Integer Programming
• Nonlinear Programming
• Dynamic Programming
• Game Theory
• Queueing Theory
• Markov Processes
• Decision Analysis
• Simulation
Whirlwind Tour of OR
Markov Analysis
Andrey A. Markov (born 1856) did early work in
probability theory and proved a version of the
central limit theorem.
Agenda for This Week
• Markov Processes
  – Stochastic processes
  – Markov chains
  – Future probabilities
  – Steady-state probabilities
  – Markov chain concepts
• Markov Applications
  – More Markov examples
  – Markov decision processes
Stochastic Processes
• Series of random variables {Xt}
• Series indexed over a time interval T
• Examples: X1, X2, …, Xt, …, XT represent
  – monthly inventory levels
  – daily closing price for a stock or index
  – availability of a new technology
  – market demand for a product
Markov Chains
• The next state depends only on the present state Xt
  – previous states or events have no additional
    influence on future states (the Markov property)
• The process moves between states with known
  transition probabilities
• Transition probabilities are stationary
  – probabilities do not change over time
• There is a finite number of possible states
An Example of a Markov Chain
A small community has two service stations: Petroco and
Gasco. The marketing department of Petroco has found that
customers switch between stations according to the following
transition matrix:
                Next Month
This Month    Petroco   Gasco
Petroco         0.60     0.40    = 1.0
Gasco           0.20     0.80    = 1.0

Note: Rows sum to 1.0!
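
A quick way to sanity-check such a matrix is to enter it in code. A minimal sketch in Python with NumPy (the variable name P is ours; the values are from the slide):

    import numpy as np

    # Rows: this month (Petroco, Gasco); columns: next month.
    P = np.array([[0.60, 0.40],
                  [0.20, 0.80]])

    # Every row of a transition matrix must sum to 1.0
    assert np.allclose(P.sum(axis=1), 1.0)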
Future State Probabilities
Probability that a customer buying from Petroco this month
will buy from Petroco next month:
    p11(1) = 0.6

In two months:

    p11(2) = (0.6 × 0.6) + (0.4 × 0.2) = 0.44

From Gasco in two months:

    p12(2) = (0.4 × 0.8) + (0.6 × 0.4) = 0.56
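
These two-step probabilities are exactly the entries of P·P. A small check in Python (assuming NumPy; matrix values from the slide):

    import numpy as np

    P = np.array([[0.6, 0.4], [0.2, 0.8]])
    P2 = P @ P       # two-step transition matrix
    print(P2[0, 0])  # 0.44: Petroco again in two months
    print(P2[0, 1])  # 0.56: Gasco in two months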
Graphical Interpretation
Starting at Petroco this month:

First Period                Second Period
Petroco (0.6)  ->  Petroco (0.6): 0.36
                   Gasco   (0.4): 0.24
Gasco   (0.4)  ->  Petroco (0.2): 0.08
                   Gasco   (0.8): 0.32
                           Total: 1.00

P(Petroco in two months) = 0.36 + 0.08 = 0.44, matching
the calculation above.
Chapman-Kolmogorov Equations
Let P be the transition matrix for a Markov process. Then the
n-step transition probability matrices can be found from:
P(2) = P·P
P(3) = P·P·P
In general, P(n) = P·P(n-1) = P^n
CK Equations for Example

P(1) = | 0.60  0.40 |
       | 0.20  0.80 |

P(2) = | 0.60  0.40 || 0.60  0.40 |   | 0.44  0.56 |
       | 0.20  0.80 || 0.20  0.80 | = | 0.28  0.72 |


 

Starting States
In the current month, 70% of customers shop at Petroco
and 30% at Gasco. What will the mix be in 2 months?

P = | 0.60  0.40 |
    | 0.20  0.80 |

s0 = [0.70  0.30]

sn = s0 P(n)

s2 = [0.7  0.3] | 0.60  0.40 |^2
                | 0.20  0.80 |

   = [0.7  0.3] | 0.44  0.56 |
                | 0.28  0.72 |

   = [0.39  0.61]
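
The same calculation in Python (s0 and s2 are our names for the starting and two-month state vectors):

    import numpy as np

    P = np.array([[0.6, 0.4], [0.2, 0.8]])
    s0 = np.array([0.7, 0.3])               # 70% Petroco, 30% Gasco today
    s2 = s0 @ np.linalg.matrix_power(P, 2)  # mix after two months
    print(s2)                               # approximately [0.39 0.61]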
CK Equations in Steady State

P(1) = | 0.60  0.40 |
       | 0.20  0.80 |

P(2) = | 0.60  0.40 || 0.60  0.40 |   | 0.44  0.56 |
       | 0.20  0.80 || 0.20  0.80 | = | 0.28  0.72 |

P(9) = | 0.60  0.40 |^9    | 0.33  0.67 |
       | 0.20  0.80 |   =  | 0.33  0.67 |
Convergence to Steady-State
If a customer buys at Petroco this
month, what is the long-run probability
that the customer will buy at Petroco
during any month in the future?
[Figure: probability of buying at Petroco starts at 1.0 and
converges to 0.33 within about 10 periods]
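
One way to see this convergence is to print successive powers of P; a sketch using the example matrix:

    import numpy as np

    P = np.array([[0.6, 0.4], [0.2, 0.8]])
    for n in (1, 2, 5, 10):
        print(n, np.linalg.matrix_power(P, n)[0, 0])
    # 0.60, 0.44, 0.34, 0.333...: approaching the steady-state value 1/3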
Calculation of Steady State
• Want the state probabilities leaving a transition
  to equal the probabilities entering it
• Let s = [s1, s2, …, sn] be the vector of
  steady-state probabilities
• Then we want
    s = sP
• That is, the state probabilities do not change
  from transition to transition (i.e., steady state!)
Steady-State for Example
s = sP

P = | 0.60  0.40 |
    | 0.20  0.80 |

s = [p  g]

[p  g] = [p  g] | 0.60  0.40 |
                | 0.20  0.80 |

p = 0.6p + 0.2g
g = 0.4p + 0.8g
p + g = 1

Solving: p = 0.333, g = 0.667
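
The same system can be solved numerically by stacking the equations from s(P − I) = 0 with the normalization p + g = 1. A sketch in Python:

    import numpy as np

    P = np.array([[0.6, 0.4], [0.2, 0.8]])
    # s = sP  =>  rows of (P - I) transposed = 0, plus sum(s) = 1
    A = np.vstack([(P - np.eye(2)).T, np.ones(2)])
    b = np.array([0.0, 0.0, 1.0])
    s, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(s)  # approximately [0.333 0.667]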
Markov Chain Concepts
• Steady-State Probabilities
– long-run probability that a process starting in
state i will be found in state j
• First-Passage Time
– length of time (steps) in going from state i to j
• Recurrence Time
– length of time (steps) to return to state i when
starting in state i
Markov Chain Concepts (cont.)
• Accessible States
– State j can be reached from state i (pij(n) > 0 for some n)
• Communicating States
– States i and j are accessible from one another
• Irreducible Markov chains
– All states communicate with one another
Markov Chain Concepts (cont.)
• Recurrent State
  – a state the process is certain to return to (fii = 1)
• Transient State
  – a state the process might never return to (fii < 1)
• Absorbing State
  – a state that never moves to another state (pii = 1)
  – a “black hole”
Markov Examples
Markov Decision Processes
Matrix Multiplication
a b  m n 
 c d   p q



am  bp an  bq


cm  dp cn  dq 
Matrix multiplication in Excel…
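
In Excel this is the MMULT array function; the same check in Python (the example matrices are ours):

    import numpy as np

    M1 = np.array([[1, 2], [3, 4]])
    M2 = np.array([[5, 6], [7, 8]])
    print(M1 @ M2)  # [[19 22] [43 50]], i.e. [[1*5+2*7, 1*6+2*8], ...]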
Machine Breakdown Example
A critical machine in a manufacturing operation breaks
down with some frequency. The hourly up-down transition
matrix for the machine is shown below. What percentage
of the time is the machine operating (up)?
        Up    Down
Up      0.9   0.1
Down    0.7   0.3
Credit History Example
The Rifle, CO Mercantile Department Store wants to
analyze the payment behavior of customers who have
outstanding accounts. The store’s credit department has
determined the following bill payment pattern from
historical records:
         Pay   No Pay
Pay      0.9    0.1
No Pay   0.8    0.2


Credit History Continued
Further analysis reveals the following credit transition matrix
at the Rifle Mercantile:
        0     1     2    Pay   Bad
0       0    0.2    0    0.8    0
1       0     0    0.2   0.8    0
2       0     0     0    0.6   0.4
Pay     0     0     0     1     0
Bad     0     0     0     0     1

(Pay and Bad are absorbing states; states 0, 1, and 2
track how long an account has gone unpaid.)
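
Since Pay and Bad are absorbing, a natural question is what fraction of new accounts is eventually paid versus written off. This follows from the standard fundamental-matrix calculation B = (I - Q)^-1 R, where Q holds transitions among the transient states 0, 1, 2 and R the transitions into Pay and Bad. A sketch:

    import numpy as np

    # Blocks taken from the slide's matrix.
    Q = np.array([[0.0, 0.2, 0.0],   # transient states 0, 1, 2
                  [0.0, 0.0, 0.2],
                  [0.0, 0.0, 0.0]])
    R = np.array([[0.8, 0.0],        # columns: Pay, Bad
                  [0.8, 0.0],
                  [0.6, 0.4]])
    N = np.linalg.inv(np.eye(3) - Q)  # fundamental matrix
    B = N @ R                         # absorption probabilities
    print(B[0])  # from state 0: about [0.984 0.016]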
University Graduation Example
Fort Lewis College in Durango has determined that students
progress through the college according to the following
transition matrix:
        F     So     J     Sr     D     G
F      0.1   0.7     0     0    0.2    0
So      0    0.1    0.8    0    0.1    0
J       0     0    0.15  0.75   0.1    0
Sr      0     0     0    0.15  0.05   0.8
D       0     0     0     0     1      0
G       0     0     0     0     0      1

(F = Freshman, So = Sophomore, J = Junior, Sr = Senior,
D = Dropout, G = Graduate; D and G are absorbing.)
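
D and G are absorbing here, so the probability that an entering freshman eventually graduates comes from the same fundamental-matrix calculation; a sketch:

    import numpy as np

    # Transient block (F, So, J, Sr) and absorbing block (D, G) from the slide.
    Q = np.array([[0.10, 0.70, 0.00, 0.00],
                  [0.00, 0.10, 0.80, 0.00],
                  [0.00, 0.00, 0.15, 0.75],
                  [0.00, 0.00, 0.00, 0.15]])
    R = np.array([[0.20, 0.00],
                  [0.10, 0.00],
                  [0.10, 0.00],
                  [0.05, 0.80]])
    N = np.linalg.inv(np.eye(4) - Q)  # fundamental matrix
    B = N @ R                         # absorption probabilities
    print(B[0])  # from Freshman: about [0.43 0.57] for [Dropout, Graduate]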