Markov Processes
and
Birth-Death Processes
J. M. Akinpelu
Exponential Distribution
Definition. A continuous random variable X has an
exponential distribution with parameter λ > 0 if its
probability density function is given by

$$f(x) = \begin{cases} \lambda e^{-\lambda x}, & x \ge 0 \\ 0, & x < 0. \end{cases}$$

Its distribution function is given by

$$F(x) = \begin{cases} 1 - e^{-\lambda x}, & x \ge 0 \\ 0, & x < 0. \end{cases}$$
Exponential Distribution
Theorem 1. A continuous R.V. X is exponentially
distributed if and only if for s, t ≥ 0,

$$P\{X > s + t \mid X > t\} = P\{X > s\} \qquad (1)$$

or equivalently,

$$P\{X > s + t\} = P\{X > s\}\,P\{X > t\}.$$

A random variable with this property is said to be
memoryless.
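The memoryless property (1) can also be checked numerically; a minimal simulation sketch, assuming NumPy is available (the rate and time points are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
lam, s, t = 1.5, 0.4, 0.7                  # arbitrary rate and time points
x = rng.exponential(scale=1.0 / lam, size=1_000_000)

# Empirical P{X > s + t | X > t} versus empirical P{X > s}
lhs = np.mean(x[x > t] > s + t)            # conditional tail probability
rhs = np.mean(x > s)                       # unconditional tail probability
# Both estimates should be close to exp(-lam * s) = exp(-0.6) ≈ 0.549
```

With a million samples both estimates agree to within sampling error, illustrating that the time already survived (t) does not change the distribution of the remaining time.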
Exponential Distribution
Proof: If X is exponentially distributed, (1) follows readily.
Now assume (1). Define F(x) = P{X ≤ x}, f(x) = F′(x),
and G(x) = P{X > x}. It follows that G′(x) = −f(x). Now
fix x. For h ≥ 0,

$$G(x + h) = G(x)\,G(h).$$

Taking the derivative with respect to x,

$$\frac{dG(x+h)}{dx} = G'(x)\,G(h) = -f(x)\,G(h),$$

so that

$$\frac{dG(x+h)}{G(h)} = -f(x)\,dx.$$
Exponential Distribution
Letting x = 0 and integrating both sides from 0 to t
gives

$$\int_0^t \frac{dG(h)}{G(h)} = -\int_0^t f(0)\,dx$$

$$\log(G(h))\Big|_0^t = -f(0)\,x\Big|_0^t$$

$$\log(G(t)) = -f(0)\,t \qquad \text{(using } G(0) = 1\text{)}$$

$$G(t) = e^{-f(0)t},$$

which is the exponential tail distribution with rate λ = f(0).
Exponential Distribution
Theorem 2. A R.V. X is exponentially distributed
with parameter λ if and only if for h ≥ 0,

$$P\{X \le h\} = \lambda h + o(h)$$
$$P\{X > h\} = 1 - \lambda h + o(h).$$
Exponential Distribution
Proof: Let X be exponentially distributed; then
for h ≥ 0,

$$P\{X \le h\} = 1 - e^{-\lambda h}
= 1 - \left(1 + \sum_{n=1}^{\infty} \frac{(-\lambda h)^n}{n!}\right)
= \lambda h - \sum_{n=2}^{\infty} \frac{(-\lambda h)^n}{n!}
= \lambda h + o(h).$$

The converse is left as an exercise.
Exponential Distribution
[Figure: plot of the exponential distribution function F(x) = 1 − e^{−λx} for 0 ≤ x ≤ 6; the slope (rate) near the origin is ≈ λ.]
Markov Process
A continuous-time stochastic process {X_t, t ≥ 0}
with state space E is called a Markov process provided
that

$$P\{X_{s+t} = j \mid X_s = i,\; X_u = x_u,\; 0 \le u < s\}
= P\{X_{s+t} = j \mid X_s = i\}$$

for all states i, j ∈ E and all s, t ≥ 0.

[Timeline: the history on [0, s] is known; the future at s + t depends only on the state at s.]
Markov Process
We restrict ourselves to Markov processes for which the
state space is E = {0, 1, 2, …}, and such that the
conditional probabilities

$$P_{ij}(t) = P\{X_{s+t} = j \mid X_s = i\}$$

are independent of s. Such a Markov process is called
time-homogeneous.

P_{ij}(t) is called the transition function of the Markov
process X.
Markov Process - Example
Let X be a Markov process with

$$P(t) = \begin{pmatrix}
r_0(t) & r_1(t) & r_2(t) & \cdots \\
0 & r_0(t) & r_1(t) & \cdots \\
0 & 0 & r_0(t) & \cdots \\
\vdots & & & \ddots
\end{pmatrix}$$

where

$$P_{ij}(t) = r_{j-i}(t) = \frac{e^{-\lambda t}(\lambda t)^{j-i}}{(j-i)!}, \qquad 0 \le i \le j,$$

for some λ > 0. X is a Poisson process.
Chapman-Kolmogorov Equations
Theorem 3. For i, j ∈ E and t, s ≥ 0,

$$P_{ij}(s+t) = \sum_{k \in E} P_{ik}(s)\,P_{kj}(t).$$
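For the Poisson-process transition function of the previous example, only finitely many terms of the Chapman-Kolmogorov sum are non-zero (those with i ≤ k ≤ j), so Theorem 3 can be verified exactly; a small sketch with an arbitrary rate and arbitrary states:

```python
from math import exp, factorial

lam = 2.0                                   # arbitrary rate

def P(i, j, t):
    """Poisson transition function P_ij(t) = e^{-lam*t} (lam*t)^(j-i) / (j-i)!."""
    if j < i:
        return 0.0
    n = j - i
    return exp(-lam * t) * (lam * t) ** n / factorial(n)

i, j, s, t = 2, 7, 0.3, 0.9
lhs = P(i, j, s + t)                        # direct transition in time s + t
rhs = sum(P(i, k, s) * P(k, j, t) for k in range(i, j + 1))  # sum over paths
assert abs(lhs - rhs) < 1e-12               # Chapman-Kolmogorov holds
```

The identity here is just the binomial theorem applied to the Poisson probabilities, which is why it holds to machine precision.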
Realization of a Markov Process
[Figure: a sample path X_t(ω), a right-continuous step function: the process holds state S_0 on [T_0, T_1), S_1 on [T_1, T_2), …, S_5 from T_5 on, where T_0, T_1, …, T_5 are the successive transition times.]
Time Spent in a State
Theorem 4. Let t ≥ 0, let n satisfy T_n ≤ t < T_{n+1}, and let W_t =
T_{n+1} − t. Let i ∈ E, u ≥ 0, and define

$$G(u) = P\{W_t > u \mid X_t = i\}.$$

Then

$$G(u + v) = G(u)\,G(v).$$

Note: This implies that the distribution of the time remaining in a
state is exponential, regardless of the time
already spent in that state.

[Diagram: W_t is the interval from t to T_{n+1}, with T_n ≤ t < t + u < T_{n+1}.]
Time Spent in a State
Proof: We first note that, due to the time homogeneity of X, G(u)
is independent of t. If we fix i, then we have

$$\begin{aligned}
G(u+v) &= P\{W_t > u + v \mid X_t = i\} \\
&= P\{W_t > u,\; W_t > u + v \mid X_t = i\} \\
&= P\{W_t > u \mid X_t = i\}\,P\{W_t > u + v \mid W_t > u,\; X_t = i\} \\
&= P\{W_t > u \mid X_t = i\}\,P\{W_{t+u} > v \mid X_{t+u} = i\} \\
&= G(u)\,G(v).
\end{aligned}$$
An Alternative Characterization of a
Markov Process
Theorem 5. Let X = {X_t, t ≥ 0} be a Markov process. Let T_0, T_1,
…, be the successive state transition times and let S_0, S_1, …, be
the successive states visited by X. There exist numbers λ_i
such that for any non-negative integer n, any j ∈ E, and t > 0,

$$P\{S_{n+1} = j,\; T_{n+1} - T_n > t \mid S_0, \ldots, S_{n-1}, S_n = i;\; T_0, \ldots, T_n\}
= Q(i, j)\,e^{-\lambda_i t}$$

where

$$Q_{ij} \ge 0, \qquad Q_{ii} = 0, \qquad \sum_{j \in E} Q_{ij} = 1.$$
An Alternative Characterization of a
Markov Process
This implies that the successive states visited by
a Markov process form a Markov chain with
transition matrix Q.
A Markov process is irreducible recurrent if its
underlying Markov chain is irreducible recurrent.
Kolmogorov Equations
Theorem 6.

$$P'_{ij}(t) = \lambda_i \sum_{k \ne i} Q_{ik}\,P_{kj}(t) - \lambda_i P_{ij}(t)$$

and, under suitable regularity conditions,

$$P'_{ij}(t) = \sum_{k \ne j} \lambda_k Q_{kj}\,P_{ik}(t) - \lambda_j P_{ij}(t).$$
These are Kolmogorov’s Backward and Forward
Equations.
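For the Poisson process (λ_k = λ for every k, with Q_{k,k+1} = 1), the forward equation reduces to P′_ij(t) = λP_{i,j−1}(t) − λP_ij(t). This can be checked with a finite difference; a sketch with arbitrary parameters:

```python
from math import exp, factorial

lam = 1.3                                   # arbitrary rate

def P(i, j, t):
    """Poisson transition function P_ij(t)."""
    if j < i:
        return 0.0
    n = j - i
    return exp(-lam * t) * (lam * t) ** n / factorial(n)

i, j, t, h = 0, 3, 1.7, 1e-6
deriv = (P(i, j, t + h) - P(i, j, t - h)) / (2 * h)    # central difference
rhs = lam * P(i, j - 1, t) - lam * P(i, j, t)          # forward-equation RHS
assert abs(deriv - rhs) < 1e-6
```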
Kolmogorov Equations
Proof (Forward Equation): For t, h ≥ 0,

$$P_{ij}(t+h) = \sum_{k \ne j} P_{ik}(t)\,\lambda_k h\,Q_{kj} + P_{ij}(t)\,(1 - \lambda_j h) + o(h).$$

Hence

$$\frac{P_{ij}(t+h) - P_{ij}(t)}{h} = \sum_{k \ne j} \lambda_k Q_{kj}\,P_{ik}(t) - \lambda_j P_{ij}(t) + \frac{o(h)}{h}.$$

Taking the limit as h → 0, we get our result.
Limiting Probabilities
Theorem 7. If a Markov process is irreducible recurrent, then the
limiting probabilities

$$P_j = \lim_{t \to \infty} P_{ij}(t)$$

exist, are independent of i, and satisfy

$$\lambda_j P_j = \sum_{k \ne j} \lambda_k Q_{kj}\,P_k$$

for all j. These are referred to as "balance equations". Together
with the condition

$$\sum_j P_j = 1,$$

they uniquely determine the limiting distribution.
Birth-Death Processes
Definition. A birth-death process {X(t), t ≥ 0} is a Markov
process such that, if the process is in state j, then the only
transitions allowed are to state j + 1 or to state j − 1 (if j > 0).
It follows that there exist non-negative values λ_j and μ_j,
j = 0, 1, 2, …, (called the birth rates and death rates) such that

$$P\{X_{t+h} = j \mid X_t = j - 1\} = \lambda_{j-1} h + o(h)$$
$$P\{X_{t+h} = j \mid X_t = j + 1\} = \mu_{j+1} h + o(h)$$
$$P\{X_{t+h} = j \mid X_t = j\} = 1 - (\lambda_j + \mu_j) h + o(h)$$
$$P\{X_{t+h} = j \mid X_t = i\} = o(h) \quad \text{if } |j - i| > 1.$$
Birth and Death Rates
[Diagram: states j − 1, j, j + 1, with birth rates λ_{j−1}, λ_j on the rightward arrows and death rates μ_j, μ_{j+1} on the leftward arrows.]

Note:
1. The expected time in state j before entering state j + 1 is 1/λ_j;
the expected time in state j before entering state j − 1 is 1/μ_j.
2. The rate corresponding to state j is ν_j = λ_j + μ_j.
Differential-Difference Equations for
a Birth-Death Process
It follows that, if P_j(t) = P{X(t) = j}, then

$$\frac{d}{dt} P_j(t) = \lambda_{j-1} P_{j-1}(t) + \mu_{j+1} P_{j+1}(t) - (\lambda_j + \mu_j) P_j(t), \qquad j \ge 1$$

$$\frac{d}{dt} P_0(t) = \mu_1 P_1(t) - \lambda_0 P_0(t).$$

Together with the state distribution at time 0, this
completely describes the behavior of the birth-death process.
Birth-Death Processes - Example
Pure birth process with constant birth rate:
λ_j = λ > 0 and μ_j = 0 for all j. Assume that

$$P_j(0) = \begin{cases} 1 & \text{if } j = 0 \\ 0 & \text{if } j \ne 0. \end{cases}$$

Then solving the difference-differential equations for this
process gives

$$P_j(t) = \frac{(\lambda t)^j e^{-\lambda t}}{j!}.$$
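This closed form can be checked against a direct forward-Euler integration of the differential-difference equations; a minimal sketch with a truncated state space (the rate, horizon, step size, and truncation level are arbitrary choices):

```python
from math import exp, factorial

lam, T, dt, N = 2.0, 1.0, 1e-4, 40       # birth rate, horizon, step, truncation
P = [1.0] + [0.0] * N                    # P_0(0) = 1, all other states empty

for _ in range(int(T / dt)):
    new = P[:]
    new[0] = P[0] - dt * lam * P[0]                         # dP_0/dt = -lam P_0
    for j in range(1, N + 1):
        new[j] = P[j] + dt * (lam * P[j - 1] - lam * P[j])  # dP_j/dt
    P = new

exact = [exp(-lam * T) * (lam * T) ** j / factorial(j) for j in range(5)]
max_err = max(abs(P[j] - exact[j]) for j in range(5))   # small Euler error
```

The integrated probabilities match the Poisson probabilities up to the Euler discretization error, which shrinks with the step size.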
Birth-Death Processes - Example
Pure death process with proportional death rate:
λ_j = 0 for all j, μ_j = jμ with μ > 0 for 1 ≤ j ≤ N, μ_j = 0 otherwise,
and

$$P_j(0) = \begin{cases} 1 & \text{if } j = N \\ 0 & \text{otherwise.} \end{cases}$$

Then solving the difference-differential equations for this
process gives

$$P_j(t) = \binom{N}{j} \left(e^{-\mu t}\right)^j \left(1 - e^{-\mu t}\right)^{N-j}, \qquad 0 \le j \le N.$$
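The binomial form agrees with the direct interpretation of this process: N individuals with independent Exp(μ) lifetimes, each still alive at time t with probability e^{−μt}. A simulation sketch, assuming NumPy is available (the parameters are arbitrary choices):

```python
import numpy as np
from math import comb, exp

rng = np.random.default_rng(1)
mu, N, t, trials = 1.0, 10, 0.5, 200_000   # death rate, population, time

# Each individual has an Exp(mu) lifetime; count survivors at time t.
lifetimes = rng.exponential(scale=1.0 / mu, size=(trials, N))
survivors = (lifetimes > t).sum(axis=1)

j = 6
empirical = np.mean(survivors == j)
p = exp(-mu * t)                            # single-individual survival prob
exact = comb(N, j) * p ** j * (1 - p) ** (N - j)
# empirical and exact agree to within sampling error
```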
Limiting Probabilities
Now assume that limiting probabilities P_j exist.
They must satisfy:

$$0 = \lambda_{j-1} P_{j-1} + \mu_{j+1} P_{j+1} - (\lambda_j + \mu_j) P_j, \qquad j \ge 1$$
$$0 = \mu_1 P_1 - \lambda_0 P_0$$

or

$$(\lambda_j + \mu_j) P_j = \lambda_{j-1} P_{j-1} + \mu_{j+1} P_{j+1}, \qquad j \ge 1 \qquad (*)$$
$$\lambda_0 P_0 = \mu_1 P_1.$$
Limiting Probabilities
These are the balance equations for a birth-death
process. Together with the condition
$$\sum_{j \ge 0} P_j = 1,$$
they uniquely define the limiting probabilities.
Limiting Probabilities
From (*), one can prove by induction that

$$P_j = P_0 \prod_{i=0}^{j-1} \frac{\lambda_i}{\mu_{i+1}}, \qquad j = 0, 1, 2, \ldots.$$
When Do Limiting Probabilities
Exist?
Define

$$S = 1 + \sum_{j=1}^{\infty} \prod_{i=0}^{j-1} \frac{\lambda_i}{\mu_{i+1}}.$$

It is easy to show that

$$P_0 = S^{-1}$$

if S < ∞. (This is equivalent to the condition P_0 > 0.)
Furthermore, all of the states are positive recurrent, i.e.,
ergodic. If S = ∞, then either all of the states are
null recurrent or all of the states are transient, and
limiting probabilities do not exist.
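As a concrete check, take constant rates λ_j = λ and μ_j = μ with ρ = λ/μ < 1 (the single-server queue case). Then each product equals ρ^j, S = 1/(1 − ρ) < ∞, and the limiting distribution is geometric: P_j = (1 − ρ)ρ^j. A sketch with arbitrary rates:

```python
lam, mu = 2.0, 5.0
rho = lam / mu                       # ratio < 1, so S < infinity

# Product formula: prod_{i=0}^{j-1} lam_i / mu_{i+1} = rho^j
J = 200                              # truncation; the geometric tail is tiny
weights = [rho ** j for j in range(J)]
P0 = 1.0 / sum(weights)              # ≈ 1 / S = 1 - rho
P = [P0 * w for w in weights]

assert abs(P0 - (1 - rho)) < 1e-12
assert abs(P[3] - (1 - rho) * rho ** 3) < 1e-12
```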
Flow Balance Method
Draw a closed boundary around state j:

[Diagram: states j − 1, j, j + 1; the flows λ_{j−1} and μ_j cross the boundary on the left of state j, and λ_j and μ_{j+1} on the right.]

Global balance equation:

$$\lambda_{j-1} P_{j-1} + \mu_{j+1} P_{j+1} = (\lambda_j + \mu_j) P_j$$

"flow in = flow out"
Flow Balance Method
Draw a closed boundary between state j and state j − 1:

[Diagram: states j − 1, j, j + 1; only the flows λ_{j−1} (rightward) and μ_j (leftward) cross the boundary between states j − 1 and j.]

Detailed balance equation:

$$\lambda_{j-1} P_{j-1} = \mu_j P_j$$
Example
Machine repair problem. Suppose there are m machines
serviced by one repairman. Each machine runs without
failure, independently of all others, for an exponential time
with mean 1/λ. When it fails, it waits until the
repairman can come to repair it, and the repair itself
takes an exponentially distributed amount of time with
mean 1/μ. Once repaired, the machine is as good as
new.

What is the probability that j machines are failed?
Example
Let P_j be the steady-state probability of j failed
machines.

[Diagram: birth-death chain on the number of failed machines, with birth rates λ_{j−1} = (m − j + 1)λ, λ_j = (m − j)λ and death rates μ_j = μ, μ_{j+1} = μ.]

$$(m - j + 1)\lambda\, P_{j-1} + \mu P_{j+1} = \left[(m - j)\lambda + \mu\right] P_j, \qquad 1 \le j \le m - 1$$
$$m\lambda\, P_0 = \mu P_1$$
$$\lambda\, P_{m-1} = \mu P_m$$
Example
$$P_j = P_0 \prod_{i=0}^{j-1} \frac{\lambda_i}{\mu_{i+1}}
= P_0 \prod_{i=0}^{j-1} \frac{(m - i)\lambda}{\mu}
= P_0\, m(m-1)\cdots(m-j+1)\,(\lambda/\mu)^j$$

$$P_0 = \left[1 + \sum_{j=1}^{m} m(m-1)\cdots(m-j+1)\,(\lambda/\mu)^j\right]^{-1}$$
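The machine-repair distribution can be computed directly from the product formula; a sketch with arbitrary m, λ, μ that also re-checks one of the balance equations:

```python
lam, mu, m = 0.2, 1.0, 5             # failure rate, repair rate, # machines

# Birth-death rates: lam_j = (m - j) * lam, mu_j = mu
weights = [1.0]                      # weights[j] = prod_{i<j} lam_i / mu_{i+1}
for j in range(1, m + 1):
    weights.append(weights[-1] * (m - j + 1) * lam / mu)
P0 = 1.0 / sum(weights)
P = [P0 * w for w in weights]        # P_j = P_0 m(m-1)...(m-j+1) (lam/mu)^j

# Global balance at an interior state j:
# (m-j+1) lam P_{j-1} + mu P_{j+1} = [(m-j) lam + mu] P_j
j = 2
lhs = (m - j + 1) * lam * P[j - 1] + mu * P[j + 1]
rhs = ((m - j) * lam + mu) * P[j]
assert abs(lhs - rhs) < 1e-12
```

Because P is built from the detailed-balance ratios λ_{j−1}/μ_j, the global balance equations hold automatically, which is what the final assertion confirms.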
Example
How would this example change if there were m
(or more) repairmen?
Homework
No homework this week due to test next week.
References
1. Erhan Cinlar, Introduction to Stochastic Processes,
Prentice-Hall, Inc., 1975.
2. Leonard Kleinrock, Queueing Systems, Volume I:
Theory, John Wiley & Sons, 1975.
3. Sheldon M. Ross, Introduction to Probability
Models, Ninth Edition, Elsevier Inc., 2007.