Much More About Markov Chains
Calculating $P^n$ – how do we raise a matrix to the nth power?
Ergodicity in Markov Chains – when does a chain have equilibrium probabilities?
Balance Equations – calculating equilibrium probabilities without the fuss.
The leaky bucket queue – finally, an example which is to do with networks.
For more information:
Norris: Markov Chains (Chapter 1)
Bertsekas: Appendix A and Section 6.3
How to calculate $P^n$
If $P$ is diagonalisable (3x3) then we can find some invertible matrix $U$ such that:
\[
P = U \begin{pmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{pmatrix} U^{-1}
\qquad \text{where the } \lambda_i \text{ are the eigenvalues}
\]
\[
P^n = U \begin{pmatrix} \lambda_1^n & 0 & 0 \\ 0 & \lambda_2^n & 0 \\ 0 & 0 & \lambda_3^n \end{pmatrix} U^{-1}
\]
Therefore $p_{ij}^{(n)} = A\lambda_1^n + B\lambda_2^n + C\lambda_3^n$, assuming the eigenvalues are distinct.
General Procedure
1. For an M-state chain, compute the eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_M$.
2. If the eigenvalues are distinct then $p_{ij}^{(n)}$ has the general form: $p_{ij}^{(n)} = a_1\lambda_1^n + \cdots + a_M\lambda_M^n$.
3. If an eigenvalue $\lambda$ is repeated once then the general form includes a term $(an + b)\lambda^n$.
4. As roots of a polynomial with real coefficients, complex eigenvalues come in conjugate pairs and can be written as sine and cosine pairs.
5. The coefficients of the general form can be found by calculating $p_{ij}^{(n)}$ by hand for $n = 0, \ldots, M-1$ and solving (a numerical sketch of the diagonalisation approach follows below).
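As a complement to the procedure above, here is a minimal numerical sketch of raising a matrix to the nth power by diagonalisation. Python with numpy is assumed (it is not part of the original notes), and the example matrix is illustrative:

```python
# Sketch: P^n via diagonalisation, P = U diag(lambda_i) U^{-1},
# so P^n = U diag(lambda_i^n) U^{-1}.
import numpy as np

def matrix_power_by_eig(P, n):
    """P^n for a diagonalisable matrix P (eigenvalues may be complex)."""
    lam, U = np.linalg.eig(P)                 # columns of U are eigenvectors
    Dn = np.diag(lam ** n)                    # diag(lambda_1^n, ..., lambda_M^n)
    return (U @ Dn @ np.linalg.inv(U)).real  # imaginary parts cancel for real P

# An illustrative two-state chain:
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
print(matrix_power_by_eig(P, 10))
print(np.linalg.matrix_power(P, 10))  # cross-check: should agree
```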
Example of $P^n$
\[
P = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 1/2 & 1/2 \\ 1/2 & 0 & 1/2 \end{pmatrix}
\qquad \text{(where states are numbers 1, 2 and 3)}
\]
Find $p_{11}^{(n)}$.
\[
\det(xI - P) = x\left(x - \tfrac{1}{2}\right)^2 - \tfrac{1}{4} = \tfrac{1}{4}(x-1)(4x^2+1)
\]
Eigenvalues are $1$, $i/2$, $-i/2$. Therefore $p_{11}^{(n)}$ has the form:
\[
p_{11}^{(n)} = a + b\left(\tfrac{i}{2}\right)^n + c\left(-\tfrac{i}{2}\right)^n
= \alpha + \left(\tfrac{1}{2}\right)^n \left(\beta\cos\tfrac{n\pi}{2} + \gamma\sin\tfrac{n\pi}{2}\right)
\]
where the substitution can be made since $p_{11}^{(n)}$ must be real.
We can calculate that $p_{11}^{(0)} = 1$, $p_{11}^{(1)} = 0$ and $p_{11}^{(2)} = 0$.
Example of $P^n$ (2)
We now have three simultaneous equations in $\alpha$, $\beta$ and $\gamma$:
\[
1 = \alpha + \beta \qquad
0 = \alpha + \tfrac{1}{2}\gamma \qquad
0 = \alpha - \tfrac{1}{4}\beta
\]
Solving we get $\alpha = 1/5$, $\beta = 4/5$ and $\gamma = -2/5$:
\[
p_{11}^{(n)} = \frac{1}{5} + \left(\frac{1}{2}\right)^n \left( \frac{4}{5}\cos\frac{n\pi}{2} - \frac{2}{5}\sin\frac{n\pi}{2} \right)
\]
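This closed form is easy to cross-check numerically against direct matrix powers. A quick sketch, assuming numpy:

```python
# Compare the closed form for p11(n) with direct matrix powers.
import numpy as np

P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5]])

def p11(n):
    # p11(n) = 1/5 + (1/2)^n * ((4/5) cos(n pi/2) - (2/5) sin(n pi/2))
    return 0.2 + 0.5**n * (0.8 * np.cos(n * np.pi / 2)
                           - 0.4 * np.sin(n * np.pi / 2))

for n in range(8):
    direct = np.linalg.matrix_power(P, n)[0, 0]   # state 1 is index 0
    print(n, round(direct, 6), round(p11(n), 6))  # columns should match
```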
Equilibrium Probabilities
Recall the distribution vector of equilibrium probabilities. If $\pi_n$ is the distribution vector after $n$ steps, the equilibrium distribution is given by:
\[
\pi = \lim_{n\to\infty} \pi_n
\]
This is also the distribution which solves $\pi = \pi P$.
When does this limit exist? When is there a unique solution to the equation?
This is when the chain is ergodic:
Irreducible
Recurrent non-null (also called positive recurrent)
Aperiodic
Irreducible
A chain is irreducible if any state can be reached from any other.
More formally, for all $i$ and $j$:
\[
\exists n : p_{ij}^{(n)} > 0
\]
[Diagram: a two-state chain; state 1 moves to state 2 with probability $\alpha$ and loops back to itself with probability $1-\alpha$; state 2 moves to state 1 with probability $\beta$ and loops back with probability $1-\beta$.]
For what values of $\alpha$ and $\beta$ is this chain irreducible?
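Irreducibility can also be checked mechanically: state $j$ is reachable from state $i$ if and only if the $(i,j)$ entry of $(I+P)^{M-1}$ is positive, so the chain is irreducible exactly when every entry is positive. A sketch, assuming numpy:

```python
# Irreducibility test via reachability: the chain is irreducible iff
# every entry of (I + P)^(M-1) is positive.
import numpy as np

def is_irreducible(P):
    M = P.shape[0]
    R = np.linalg.matrix_power(np.eye(M) + P, M - 1)
    return bool((R > 0).all())

alpha, beta = 0.0, 0.0   # both self-loops certain: states never communicate
P = np.array([[1 - alpha, alpha],
              [beta, 1 - beta]])
print(is_irreducible(P))                                    # False
print(is_irreducible(np.array([[0.5, 0.5], [0.5, 0.5]])))   # True
```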
Aperiodic chains
A state $i$ is periodic if it is returned to after a time period $> 1$.
Formally, it is periodic if there exists an integer $k > 1$ such that, for every $n$ with $p_{ii}^{(n)} > 0$, we can find an integer $j$ with $n = kj$:
\[
p_{ii}^{(n)} = 0 \quad \text{unless } n = kj \text{ for some integer } j
\]
Equivalently, a state is aperiodic if there is a sufficiently large $n$ such that for all $m > n$:
\[
p_{ii}^{(m)} > 0
\]
A useful aperiodicity lemma
If $P$ is irreducible and has one aperiodic state $i$ then all states are aperiodic. Proof:
By irreducibility, for any states $j$ and $k$ there exist $r, s \ge 0$ with $p_{ji}^{(r)}, p_{ik}^{(s)} > 0$.
Therefore there is an $n$ such that for all $m > n$:
\[
p_{jk}^{(r+m+s)} \ge p_{ji}^{(r)} \, p_{ii}^{(m)} \, p_{ik}^{(s)} > 0
\]
And therefore all the states are aperiodic (consider $j = k$ in the above equation).
Return (Recurrence) Time
If a chain is in state $i$, when will it next return to state $i$? This is known as the "return time".
First we must define the probability that the first return to state $i$ is after $n$ steps: $f_i^{(n)}$.
The probability that we ever return is:
\[
f_i = \sum_{n \ge 1} f_i^{(n)}
\]
A state where $f_i = 1$ is recurrent; a state where $f_i < 1$ is called transient.
The expectation of the first return time is the "mean recurrence time" or "mean return time":
\[
M_i = \sum_{n \ge 1} n f_i^{(n)}
\]
$M_i = \infty$: recurrent null. $M_i < \infty$: recurrent non-null.
Return (Recurrence) Time (2)
A finite irreducible chain is always recurrent non-null.
In an irreducible aperiodic Markov Chain the limiting probabilities
\[
\pi_j = \lim_{n\to\infty} p_{ij}^{(n)}
\]
always exist and are independent of the starting distribution. Either:
1. All states are transient or recurrent null, in which case $\pi_j = 0$ for all states and no stationary distribution exists.
2. All states are recurrent non-null and a unique stationary distribution exists with:
\[
\pi_j = \frac{1}{M_j}
\]
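The relation $\pi_j = 1/M_j$ can be checked by simulation. Below is a small sketch (assuming Python with numpy, not part of the original notes) using the three-state chain analysed earlier, where $p_{11}^{(n)} \to 1/5$, so the mean return time to state 1 should come out near 5:

```python
# Estimate the mean return time to state 1 by simulation.
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5]])

def mean_return_time(P, state, trials=20_000):
    total = 0
    for _ in range(trials):
        s, steps = state, 0
        while True:
            s = rng.choice(len(P), p=P[s])  # take one step of the chain
            steps += 1
            if s == state:
                break
        total += steps
    return total / trials

print(mean_return_time(P, 0))   # roughly 5 = 1 / pi_1
```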
Ergodicity (summary)
A chain which is irreducible, aperiodic and recurrent non-null is ergodic.
If a chain is ergodic, then there is a unique invariant distribution which is equivalent to the limit:
\[
\pi = \lim_{n\to\infty} \pi_n
\]
In Markov Chain theory, the phrases invariant, equilibrium and stationary are often used interchangeably.
Invariant Density in Periodic Chains
It is worth noting that an irreducible, recurrent non-null chain which is periodic has a solution to the invariant density equation, but the limit distribution does not exist. Consider:
\[
P = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}
\]
$\pi = (\tfrac{1}{2}, \tfrac{1}{2})$ solves $\pi = \pi P$.
However, it should be clear that $\lim_{n\to\infty} \pi_n$ does not exist in general, though it may for specific starting distributions.
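This can be seen numerically in a few lines (a sketch, assuming numpy):

```python
# The invariant equation holds, but pi_n oscillates and has no limit.
import numpy as np

P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
print(np.array([0.5, 0.5]) @ P)   # (1/2, 1/2): invariant under P

pi_n = np.array([1.0, 0.0])       # start in state 1
for n in range(5):
    print(n, pi_n)                # alternates (1,0), (0,1), ...: no limit
    pi_n = pi_n @ P
```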
Balance Equations
Sometimes it is not practical to calculate the equilibrium probabilities using the limit.
If a distribution is invariant then at every iteration, the inputs to a state must add up to its starting probability.
The inputs to a state $i$ are the probabilities $\pi_j$ of each state $j$ which leads into it, multiplied by the transition probability $p_{ji}$.
Balance Equations (2)
More formally, if $\pi_i$ is the probability of state $i$:
\[
\pi_i = \sum_j p_{ji} \pi_j
\]
And to ensure it is a distribution:
\[
\sum_i \pi_i = 1
\]
Which, for an $n$-state chain, gives us $n+1$ equations for $n$ unknowns.
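For a finite chain these equations form a linear system, so they can be solved directly. A minimal sketch (assuming numpy), replacing one redundant balance equation with the normalisation condition:

```python
# Solve pi = pi P together with sum_i pi_i = 1 as a linear system.
import numpy as np

def equilibrium(P):
    M = P.shape[0]
    A = P.T - np.eye(M)   # row i encodes sum_j p_ji pi_j - pi_i = 0
    A[-1, :] = 1.0        # replace one equation with sum_i pi_i = 1
    b = np.zeros(M)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5]])
print(equilibrium(P))   # pi_1 = 1/5, matching the limit of p11(n)
```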
Queuing Analysis of the Leaky Bucket
A "leaky bucket" is a mechanism for managing buffers to smooth the downstream flow.
What is described here is what is sometimes called a "token bucket".
A queue holds a stock of "permits" which arrive at a rate $r$ (one every $1/r$ seconds); up to $W$ permits may be held.
A packet cannot leave the queue if there is no permit stored.
The idea is that the scheme limits downstream flow but can deal with bursts of traffic.
Modelling the Leaky Bucket
Let us assume that the arrival process is a Poisson process with a rate $\lambda$.
Consider how many packets arrive in $1/r$ seconds. The probability $a_k$ that $k$ packets arrive is:
\[
a_k = \frac{e^{-\lambda/r} (\lambda/r)^k}{k!}
\]
[Diagram: a queue of permits (arriving every $1/r$ seconds) and a queue of packets (Poisson arrivals) feed an exit queue for packets with permits, which leads to the exit of the buffer.]
A Markov Model
Model this as a Markov Chain which changes state every $1/r$ seconds.
States $0 \le i \le W$ represent no packets waiting and $W - i$ permits available. States $W + i$ (where $i \ge 1$) represent $0$ permits and $i$ packets waiting.
Transition probabilities:
\[
p_{ij} = \begin{cases}
a_0 + a_1 & i = j = 0 \\
a_{j-i+1} & j \ge i - 1 \text{ (excluding } i = j = 0\text{)} \\
0 & \text{otherwise}
\end{cases}
\]
[Diagram: a chain on states $0, 1, 2, \ldots, W, W+1, \ldots$; state 0 loops back to itself with probability $a_0 + a_1$, every other state $i$ loops with probability $a_1$, moves left to $i-1$ with probability $a_0$, and moves right to $i+1$ with probability $a_2$ (and further right with $a_3, a_4, \ldots$).]
Solving the Markov Model
By solving the balance equations we get:
\[
\pi_0 = a_0 \pi_1 + (a_0 + a_1)\pi_0
\]
\[
\pi_i = \sum_{j=0}^{i+1} a_{i-j+1}\pi_j \qquad i \ge 1
\]
From the first equation:
\[
\pi_1 = \frac{(1 - a_0 - a_1)\pi_0}{a_0}
\]
From $\pi_1 = a_2\pi_0 + a_1\pi_1 + a_0\pi_2$:
\[
\pi_2 = \frac{(1 - a_0 - a_1)(1 - a_1) - a_0 a_2}{a_0^2}\,\pi_0
\]
Similarly, we can get expressions for $\pi_3$ in terms of $\pi_2$, $\pi_1$ and $\pi_0$. And so on...
Solving the Markov Model (2)
Normally we would solve this using the remaining balance equation:
\[
\sum_i \pi_i = 1
\]
This is difficult analytically in this case.
Instead we note that permits are generated at every step except when we are in state 0 and no packets arrive ($W$ permits, none used).
This means permits are generated at a rate $(1 - \pi_0 a_0) r$.
This must be equal to $\lambda$, since each packet gets a permit (assume none are dropped while waiting). Hence:
\[
\pi_0 = \frac{r - \lambda}{r a_0}
\]
And Finally
The average delay for a packet to get a permit is given by:
\[
T = \frac{1}{r} \sum_{j=W+1}^{\infty} (j - W)\,\pi_j
\]
Here $1/r$ is the time taken for each iteration of the chain, $(j - W)$ is the number of iterations taken to get out of the queue from state $j$ (for those states with a queue of packets), and $\pi_j$ is the amount of time spent in a given state.
Of course this is not a closed-form expression.
To complete this analysis, look at Bertsekas P515.
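To make the analysis concrete, here is a numerical sketch in plain Python. The values of $\lambda$, $r$, $W$ and the truncation point are illustrative choices, not from the notes: it computes $\pi_0$ from the permit-rate argument, unrolls the balance-equation recursion, and evaluates the delay sum with the infinite state space truncated.

```python
# Numerical leaky bucket analysis: pi_0 from the rate argument, then the
# balance-equation recursion, then the (truncated) average delay sum.
from math import exp, factorial

lam, r, W = 0.5, 1.0, 3   # illustrative arrival rate, permit rate, bucket size
N = 30                    # truncation of the infinite state space

def a(k):
    """Probability of k Poisson arrivals in one permit interval 1/r."""
    return exp(-lam / r) * (lam / r) ** k / factorial(k)

pi = [0.0] * N
pi[0] = (r - lam) / (r * a(0))              # pi_0 = (r - lambda) / (r a_0)
pi[1] = (1 - a(0) - a(1)) * pi[0] / a(0)    # from the first balance equation

# pi_i = sum_{j=0}^{i+1} a_{i-j+1} pi_j  (i >= 1), solved for pi_{i+1}:
for i in range(1, N - 1):
    acc = sum(a(i - j + 1) * pi[j] for j in range(i + 1))
    pi[i + 1] = (pi[i] - acc) / a(0)

print(sum(pi))            # should be close to 1 if N is large enough

# Average delay for a packet to get a permit:
T = sum((j - W) * pi[j] for j in range(W + 1, N)) / r
print(T)
```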