Communication Systems and Networks
Markov Chains

Summary
- Discrete-Time Markov Chains
  - Homogeneous and non-homogeneous Markov chains
  - Transient and steady-state Markov chains
- Continuous-Time Markov Chains
  - Homogeneous and non-homogeneous Markov chains
  - Transient and steady-state Markov chains
Markov Processes
Recall the definition of a Markov Process
The future of a process does not depend on its past, only on its
present:
\Pr\{X(t_{k+1}) = x_{k+1} \mid X(t_k) = x_k, \ldots, X(t_0) = x_0\} = \Pr\{X(t_{k+1}) = x_{k+1} \mid X(t_k) = x_k\}
Since we are dealing with "chains", X(t) can take discrete
values from a finite or a countably infinite set.
For a discrete-time Markov chain, the notation is also
simplified to
\Pr\{X_{k+1} = x_{k+1} \mid X_k = x_k, \ldots, X_0 = x_0\} = \Pr\{X_{k+1} = x_{k+1} \mid X_k = x_k\}
where X_k is the value of the state at the kth step.
Chapman-Kolmogorov Equations
Define the one-step transition probabilities
p_{ij}(k) = \Pr\{X_{k+1} = j \mid X_k = i\}
Clearly, for all i, k, and all feasible transitions from state i,
\sum_{j} p_{ij}(k) = 1
Define the n-step transition probabilities
p_{ij}(k, k+n) = \Pr\{X_{k+n} = j \mid X_k = i\}
[Figure: trellis of states x_1, ..., x_i, ..., x_j, ..., x_R at times k, u, and k+n, showing the paths from state x_i at time k to state x_j at time k+n.]
Chapman-Kolmogorov Equations
[Figure: the same trellis, with an intermediate time u between k and k+n.]
Using total probability,
p_{ij}(k, k+n) = \sum_{r=1}^{R} \Pr\{X_{k+n} = j \mid X_u = r, X_k = i\} \Pr\{X_u = r \mid X_k = i\}
Using the memoryless property of Markov chains,
\Pr\{X_{k+n} = j \mid X_u = r, X_k = i\} = \Pr\{X_{k+n} = j \mid X_u = r\}
Therefore, we obtain the Chapman-Kolmogorov equation
p_{ij}(k, k+n) = \sum_{r=1}^{R} p_{ir}(k, u) \, p_{rj}(u, k+n), \quad k \le u \le k+n
Matrix Form
Define the matrix
H(k, k+n) = [p_{ij}(k, k+n)]
We can re-write the Chapman-Kolmogorov equation as
H(k, k+n) = H(k, u) H(u, k+n)
Choose u = k+n-1; then
H(k, k+n) = H(k, k+n-1) H(k+n-1, k+n) = H(k, k+n-1) P(k+n-1)
This is the forward Chapman-Kolmogorov equation, where P(k+n-1) is the one-step transition probability matrix.
Matrix Form
Choose u = k+1; then
H(k, k+n) = H(k, k+1) H(k+1, k+n) = P(k) H(k+1, k+n)
This is the backward Chapman-Kolmogorov equation, where P(k) is the one-step transition probability matrix.
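As a quick illustration (a sketch, not part of the original slides), the forward recursion can be coded in a few lines of Python, assuming the one-step matrices P(k), ..., P(k+n-1) are given as a list; the three-state matrix used here appears later in these notes:

```python
import numpy as np

def n_step_matrix(P_list):
    """Forward Chapman-Kolmogorov: H(k, k+n) = P(k) P(k+1) ... P(k+n-1)."""
    H = np.eye(P_list[0].shape[0])    # H(k, k) = I
    for P in P_list:
        H = H @ P                     # H(k, u+1) = H(k, u) P(u)
    return H

# Homogeneous special case: H(k, k+n) = P^n
P = np.array([[0.5,   0.5,   0.0 ],
              [0.35,  0.5,   0.15],
              [0.245, 0.455, 0.3 ]])
print(n_step_matrix([P] * 3))         # three-step transition probabilities
```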
Homogeneous Markov Chains
The one-step transition probabilities are independent of
time k:
P(k) = P
or
p_{ij} = \Pr\{X_{k+1} = j \mid X_k = i\}
Even though the one-step transition probability is independent of k,
this does not mean that the joint probability of X_{k+1} and X_k
is also independent of k. Note that
\Pr\{X_{k+1} = j, X_k = i\} = \Pr\{X_{k+1} = j \mid X_k = i\} \Pr\{X_k = i\} = p_{ij} \Pr\{X_k = i\}
Example
Consider a two-transmitter (Tx) communication system in
which time is divided into time slots and that operates
as follows:
At most one packet can arrive during any time slot and this can
happen with probability α.
Packets are transmitted by whichever transmitter is available,
and if both are available then the packet is given to Tx 1.
If both transmitters are busy, then the packet is lost.
When a Tx is busy, it can complete the transmission with
probability β during any one time slot.
If a packet is submitted during a slot when both transmitters are
busy but at least one Tx completes a packet transmission, then
the packet is accepted (departures occur before arrivals).
Describe the Markov chain that describes this model.
Example: Markov Chain
For the state transition diagram of the Markov chain, each
transition is simply marked with its transition probability.
[Figure: three-state transition diagram (states 0, 1, 2) with transition probabilities p_00, p_01, p_10, p_11, p_12, p_20, p_21, p_22.]
The transition probabilities are
p_{00} = 1 - \alpha, \quad p_{01} = \alpha, \quad p_{02} = 0
p_{10} = (1-\alpha)\beta, \quad p_{11} = (1-\alpha)(1-\beta) + \alpha\beta, \quad p_{12} = \alpha(1-\beta)
p_{20} = (1-\alpha)\beta^2, \quad p_{21} = 2\beta(1-\beta)(1-\alpha) + \alpha\beta^2, \quad p_{22} = (1-\beta)^2 + 2\alpha\beta(1-\beta)
Example: Markov Chain
[Figure: the same three-state transition diagram.]
Suppose that α = 0.5 and β = 0.7; then
P = [p_{ij}] = [ 0.5    0.5    0
                 0.35   0.5    0.15
                 0.245  0.455  0.3 ]
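As a sanity check (a sketch, not from the original slides), the symbolic entries above can be coded and evaluated at α = 0.5, β = 0.7:

```python
import numpy as np

def two_tx_P(a, b):
    """Transition matrix of the two-transmitter example;
    a = arrival probability alpha, b = completion probability beta."""
    return np.array([
        [1 - a,          a,                             0.0],
        [b*(1 - a),      a*b + (1 - a)*(1 - b),         a*(1 - b)],
        [b**2*(1 - a),   2*b*(1 - b)*(1 - a) + a*b**2,  (1 - b)**2 + 2*a*b*(1 - b)],
    ])

P = two_tx_P(0.5, 0.7)
print(P)               # matches the numeric matrix above
print(P.sum(axis=1))   # each row sums to 1
```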
State Holding Times
Suppose that at step k the Markov chain has
transitioned into state X_k = i. An interesting question is
how long it will stay in state i.
Let V(i) be the random variable representing the
number of time slots the chain spends in state i.
We are interested in the quantity \Pr\{V(i) = n\}:
\Pr\{V(i) = n\} = \Pr\{X_{k+n} \ne i, X_{k+n-1} = i, \ldots, X_{k+1} = i \mid X_k = i\}
= \Pr\{X_{k+n} \ne i \mid X_{k+n-1} = i, \ldots, X_k = i\} \Pr\{X_{k+n-1} = i, \ldots, X_{k+1} = i \mid X_k = i\}
= \Pr\{X_{k+n} \ne i \mid X_{k+n-1} = i\} \Pr\{X_{k+n-1} = i \mid X_{k+n-2} = i, \ldots, X_k = i\} \Pr\{X_{k+n-2} = i, \ldots, X_{k+1} = i \mid X_k = i\}
State Holding Times
\Pr\{V(i) = n\} = \Pr\{X_{k+n} \ne i \mid X_{k+n-1} = i\} \Pr\{X_{k+n-1} = i \mid X_{k+n-2} = i, \ldots, X_k = i\} \Pr\{X_{k+n-2} = i, \ldots, X_{k+1} = i \mid X_k = i\}
= (1 - p_{ii}) \, p_{ii} \, \Pr\{X_{k+n-2} = i \mid X_{k+n-3} = i, \ldots, X_k = i\} \Pr\{X_{k+n-3} = i, \ldots, X_{k+1} = i \mid X_k = i\}
Repeating the same step n - 1 times, we obtain
\Pr\{V(i) = n\} = (1 - p_{ii}) \, p_{ii}^{n-1}
This is the geometric distribution with parameter p_{ii}.
V(i) has the memoryless property.
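As an illustrative check (not part of the slides), a short NumPy simulation estimates the holding time of state 1 in the example chain; with p_11 = 0.5 the mean should be 1/(1 - p_11) = 2:

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.5,   0.5,   0.0 ],
              [0.35,  0.5,   0.15],
              [0.245, 0.455, 0.3 ]])

# Simulate the chain and collect sojourn (holding) times in state 1
state, run, runs = 1, 0, []
for _ in range(200_000):
    nxt = rng.choice(3, p=P[state])
    if state == 1:
        run += 1            # one more slot spent in state 1
        if nxt != 1:
            runs.append(run)
            run = 0
    state = nxt

print(np.mean(runs))        # close to 1 / (1 - p_11) = 2
```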
State Probabilities
A quantity we are usually interested in is the
probability of finding the chain at various states, i.e., we
define
\pi_i(k) = \Pr\{X_k = i\}
For all possible states, we define the vector
\pi(k) = [\pi_0(k), \pi_1(k), \ldots]
Using total probability we can write
\pi_i(k) = \sum_j \Pr\{X_k = i \mid X_{k-1} = j\} \Pr\{X_{k-1} = j\} = \sum_j p_{ji}(k-1) \, \pi_j(k-1)
In vector form, one can write
\pi(k) = \pi(k-1) P(k-1)
or, if the Markov chain is homogeneous,
\pi(k) = \pi(k-1) P
State Probabilities Example
Suppose that
P = [ 0.5    0.5    0
      0.35   0.5    0.15
      0.245  0.455  0.3 ]
with
\pi(0) = [1, 0, 0]
Find \pi(k) for k = 1, 2, \ldots
\pi(1) = \pi(0) P = [1, 0, 0] P = [0.5, 0.5, 0]
Transient behavior of the system: MCTransient.m
In general, the transient behavior is obtained by solving
the difference equation
\pi(k) = \pi(k-1) P
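The slides reference the MATLAB script MCTransient.m, which is not included in this transcript; a minimal Python equivalent of the iteration might look like this (a sketch under that assumption):

```python
import numpy as np

P = np.array([[0.5,   0.5,   0.0 ],
              [0.35,  0.5,   0.15],
              [0.245, 0.455, 0.3 ]])
pi = np.array([1.0, 0.0, 0.0])       # pi(0)

# Iterate pi(k) = pi(k-1) P and watch the transient settle down
for k in range(1, 11):
    pi = pi @ P
    print(k, np.round(pi, 4))
```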
Classification of States
Definitions
State j is reachable from state i if the probability of going
from i to j in n > 0 steps is greater than zero (state j is
reachable from state i if there is a path from i to j in the
state transition diagram).
A subset S of the state space X is closed if p_{ij} = 0 for
every i ∈ S and j ∉ S.
A state i is said to be absorbing if it is a single-element
closed set.
A closed set S of states is irreducible if any state j∈S
is reachable from every state i∈S.
A Markov chain is said to be irreducible if the state
space X is irreducible.
Example
Irreducible Markov Chain
[Figure: state transition diagram of an irreducible Markov chain; every state is reachable from every other state.]
Reducible Markov Chain
[Figure: state transition diagram of a reducible Markov chain with states 0-4; state 4 is an absorbing state, and states 2 and 3 form a closed irreducible set.]
Transient and Recurrent States
Hitting time:
T_{ij} = \min\{k > 0 : X_0 = i, X_k = j\}
The recurrence time T_{ii} is the first time that the MC returns to
state i.
Let \rho_i be the probability that the state will return back to i
given that it starts from i. Then,
\rho_i = \sum_{k=1}^{\infty} \Pr\{T_{ii} = k\}
The event that the MC will return to state i given that it started
from i is equivalent to T_{ii} < \infty; therefore we can write
\rho_i = \sum_{k=1}^{\infty} \Pr\{T_{ii} = k\} = \Pr\{T_{ii} < \infty\}
A state is recurrent if \rho_i = 1 and transient if \rho_i < 1.
Theorems
If a Markov Chain has finite state space, then at least one
of the states is recurrent.
If state i is recurrent and state j is reachable from state i,
then state j is also recurrent.
If S is a finite closed irreducible set of states, then every
state in S is recurrent.
Positive and Null Recurrent States
Let M_i be the mean recurrence time of state i:
M_i = E[T_{ii}] = \sum_{k=1}^{\infty} k \Pr\{T_{ii} = k\}
A state is said to be positive recurrent if M_i < \infty. If M_i = \infty,
then the state is said to be null recurrent.
Theorems
If state i is positive recurrent and state j is reachable
from state i, then state j is also positive recurrent.
If S is a closed irreducible set of states, then either every
state in S is positive recurrent, or every state in S is null
recurrent, or every state in S is transient.
If S is a finite closed irreducible set of states, then
every state in S is positive recurrent.
Example
[Figure: five-state transition diagram; states 0 and 1 are transient states, state 4 is a recurrent state, and states 2 and 3 are positive recurrent states.]
Periodic and Aperiodic States
Suppose that the structure of the Markov chain is such
that state i is visited after a number of steps that is an
integer multiple of an integer d > 1. Then the state is
called periodic with period d.
If no such integer exists (i.e., d = 1), then the state is called
aperiodic.
Example
[Figure: three-state chain; transitions 0→1 and 2→1 have probability 1, and transitions 1→0 and 1→2 have probability 0.5 each.]
Periodic states, d = 2
P = [ 0    1    0
      0.5  0    0.5
      0    1    0 ]
Steady State Analysis
Recall that the probability of finding the MC at state i after
the kth step is given by
\pi_i(k) = \Pr\{X_k = i\}, \qquad \pi(k) = [\pi_0(k), \pi_1(k), \ldots]
An interesting question is what happens in the "long run",
i.e.,
\pi_i = \lim_{k \to \infty} \pi_i(k)
This is referred to as the steady state, equilibrium, or
stationary state probability.
Questions:
Do these limits exist?
If they exist, do they converge to a legitimate
probability distribution, i.e., \sum_i \pi_i = 1?
How do we evaluate \pi_j for all j?
Steady State Analysis
Recall the recursive probability
\pi(k+1) = \pi(k) P
If a steady state exists, then \pi(k+1) = \pi(k), and therefore
the steady state probabilities are given by the solution to
the equations
\pi = \pi P
and
\sum_i \pi_i = 1
For irreducible Markov chains, the presence of periodic
states prevents the existence of a steady state probability.
Example: periodic.m
P = [ 0    1    0
      0.5  0    0.5
      0    1    0 ]
\pi(0) = [1, 0, 0]
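periodic.m is likewise not included in the transcript; a small Python sketch of what it presumably demonstrates, namely that π(k) oscillates and never converges:

```python
import numpy as np

P = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])
pi = np.array([1.0, 0.0, 0.0])       # pi(0)

# pi(k) alternates between [0, 1, 0] and [0.5, 0, 0.5]: no limit exists
for k in range(1, 7):
    pi = pi @ P
    print(k, pi)
```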
Steady State Analysis
THEOREM: If an irreducible aperiodic Markov chain
consists of positive recurrent states, a unique stationary
state probability vector \pi exists such that \pi_j > 0 and
\pi_j = \lim_{k \to \infty} \pi_j(k) = \frac{1}{M_j}
where M_j is the mean recurrence time of state j.
The steady state vector \pi is determined by solving
\pi = \pi P
and
\sum_i \pi_i = 1
Such a chain is called an ergodic Markov chain.
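One standard way to solve π = πP together with Σ_i π_i = 1 numerically (a sketch, not from the slides) is to replace one balance equation with the normalization condition:

```python
import numpy as np

def stationary(P):
    """Solve pi = pi P with sum(pi) = 1."""
    n = P.shape[0]
    A = np.vstack([(P.T - np.eye(n))[:-1], np.ones(n)])
    b = np.zeros(n); b[-1] = 1.0
    return np.linalg.solve(A, b)

P = np.array([[0.5,   0.5,   0.0 ],
              [0.35,  0.5,   0.15],
              [0.245, 0.455, 0.3 ]])
print(stationary(P))   # the limit of the transient iteration above
```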
Birth-Death Example
[Figure: discrete-time birth-death chain on states 0, 1, ..., i, ...; from each state the chain moves right with probability 1-p and left with probability p (state 0 self-loops with probability p).]
P = [ p    1-p   0    0   ...
      p    0    1-p   0   ...
      0    p    0    1-p  ...
      ...                    ]
Thus, to find the steady state vector \pi we need to solve
\pi = \pi P
and
\sum_i \pi_i = 1
Birth-Death Example
In other words,
\pi_0 = \pi_0 p + \pi_1 p
\pi_j = \pi_{j-1} (1-p) + \pi_{j+1} p, \quad j = 1, 2, \ldots
Solving these equations we get
\pi_1 = \frac{1-p}{p} \pi_0, \qquad \pi_2 = \left(\frac{1-p}{p}\right)^2 \pi_0
In general,
\pi_j = \left(\frac{1-p}{p}\right)^j \pi_0
Summing all terms we get
\pi_0 \sum_{i=0}^{\infty} \left(\frac{1-p}{p}\right)^i = 1
Birth-Death Example
Therefore, for all states j we get
\pi_j = \left(\frac{1-p}{p}\right)^j \Big/ \sum_{i=0}^{\infty} \left(\frac{1-p}{p}\right)^i
If p < 1/2, then
\sum_{i=0}^{\infty} \left(\frac{1-p}{p}\right)^i = \infty \quad \Rightarrow \quad \pi_j = 0 \text{ for all } j
All states are transient.
If p > 1/2, then
\sum_{i=0}^{\infty} \left(\frac{1-p}{p}\right)^i = \frac{p}{2p-1} \quad \Rightarrow \quad \pi_j = \frac{2p-1}{p} \left(\frac{1-p}{p}\right)^j \text{ for all } j
All states are positive recurrent.
Birth-Death Example
If p = 1/2, then
\sum_{i=0}^{\infty} \left(\frac{1-p}{p}\right)^i = \sum_{i=0}^{\infty} 1 = \infty \quad \Rightarrow \quad \pi_j = 0 \text{ for all } j
All states are null recurrent.
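As a quick numeric check (illustrative, not from the slides), the p > 1/2 closed form can be evaluated over a truncated state space:

```python
import numpy as np

# Evaluate pi_j = ((2p-1)/p) * ((1-p)/p)**j for p > 1/2
p = 0.7
j = np.arange(50)                        # truncate the infinite state space
pi = ((2*p - 1) / p) * ((1 - p) / p)**j
print(pi[:4])                            # pi_0 = (2p-1)/p ~ 0.5714
print(pi.sum())                          # approaches 1 as the truncation grows
```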
Continuous-Time Markov Chains
In this case, transitions can occur at any time.
Recall the Markov (memoryless) property
\Pr\{X(t_{k+1}) = x_{k+1} \mid X(t_k) = x_k, \ldots, X(t_0) = x_0\} = \Pr\{X(t_{k+1}) = x_{k+1} \mid X(t_k) = x_k\}
where t_0 < t_1 < \cdots < t_k < t_{k+1}.
Recall that the Markov property implies that
X(t_{k+1}) depends only on X(t_k) (state memory), and that
it does not matter how long the state has been at X(t_k) (age
memory).
The transition probabilities now need to be defined for
every time instant as pij(t), i.e., the probability that the MC
transitions from state i to j at time t.
Transition Function
Define the transition function
p_{ij}(s, t) = \Pr\{X(t) = j \mid X(s) = i\}, \quad s \le t
The continuous-time analogue of the Chapman-Kolmogorov equation is obtained
using total probability:
p_{ij}(s, t) = \sum_r \Pr\{X(t) = j \mid X(u) = r, X(s) = i\} \Pr\{X(u) = r \mid X(s) = i\}
Using the memoryless property,
p_{ij}(s, t) = \sum_r \Pr\{X(t) = j \mid X(u) = r\} \Pr\{X(u) = r \mid X(s) = i\}
Define H(s, t) = [p_{ij}(s, t)], i, j = 1, 2, \ldots; then
H(s, t) = H(s, u) H(u, t), \quad s \le u \le t
Note that H(s, s) = I.
Transition Rate Matrix
Consider the Chapman-Kolmogorov equation for s \le t \le t + \Delta t:
H(s, t+\Delta t) = H(s, t) H(t, t+\Delta t)
Subtracting H(s, t) from both sides and dividing by \Delta t,
\frac{H(s, t+\Delta t) - H(s, t)}{\Delta t} = H(s, t) \frac{H(t, t+\Delta t) - I}{\Delta t}
Taking the limit as \Delta t \to 0,
\frac{\partial H(s, t)}{\partial t} = H(s, t) Q(t)
where the transition rate matrix Q(t) is given by
Q(t) = \lim_{\Delta t \to 0} \frac{H(t, t+\Delta t) - I}{\Delta t}
Homogeneous Case
In the homogeneous case, the transition functions do not
depend on s and t separately, but only on the difference t - s; thus
p_{ij}(s, t) = p_{ij}(t - s)
It follows that
H(s, t) = H(t - s) \equiv P(t - s)
and the transition rate matrix is
Q(t) = \lim_{\Delta t \to 0} \frac{H(t, t+\Delta t) - I}{\Delta t} = \lim_{\Delta t \to 0} \frac{H(\Delta t) - I}{\Delta t} = Q, \text{ a constant}
Thus
\frac{\partial P(t)}{\partial t} = P(t) Q \quad \text{with} \quad p_{ij}(0) = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \ne j \end{cases}
P(t) = e^{Qt}
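A small sketch (with a made-up two-state rate matrix) computing P(t) = e^{Qt} using SciPy's matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative two-state rate matrix (rows sum to 0)
Q = np.array([[-2.0,  2.0],
              [ 1.0, -1.0]])

for t in (0.1, 1.0, 10.0):
    print(t, expm(Q * t))   # P(t) = e^{Qt}; each row sums to 1
```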
State Holding Time
The time the MC will spend at each state i is a random
variable with distribution
G_i(t) = 1 - e^{-\Lambda(i) t}
where
\Lambda(i) = \sum_{j \ne i} \lambda_{ij}
Explain why…
Transition Rate Matrix Q.
Recall that
\frac{\partial P(t)}{\partial t} = P(t) Q
First consider q_{ij}, i \ne j; thus, the above equation can be
written elementwise as
\frac{\partial p_{ij}(t)}{\partial t} = \sum_r p_{ir}(t) q_{rj} = p_{ii}(t) q_{ij} + \sum_{r \ne i} p_{ir}(t) q_{rj}
Evaluating this at t = 0 (where p_{ii}(0) = 1 and p_{ij}(0) = 0 for all i \ne j), we get
q_{ij} = \left. \frac{\partial p_{ij}(t)}{\partial t} \right|_{t=0}
The event that will take the state from i to j has an exponential
residual lifetime with rate \lambda_{ij}; therefore, given that in the
interval (t, t+\tau) one event has occurred, the probability that
this transition will occur is given by G_{ij}(\tau) = 1 - e^{-\lambda_{ij}\tau}.
Transition Rate Matrix Q.
Since G_{ij}(\tau) = 1 - e^{-\lambda_{ij}\tau},
q_{ij} = \left. \frac{\partial p_{ij}(t)}{\partial t} \right|_{t=0} = \lambda_{ij} e^{-\lambda_{ij} \cdot 0} = \lambda_{ij}
In other words, q_{ij} is the rate of the Poisson process that
activates the event that makes the transition from i to j.
Next, consider the diagonal entries q_{jj}; thus
\frac{\partial p_{jj}(t)}{\partial t} = p_{jj}(t) q_{jj} + \sum_{r \ne j} p_{jr}(t) q_{rj}
Evaluating this at t = 0, we get
q_{jj} = \left. \frac{\partial p_{jj}(t)}{\partial t} \right|_{t=0} = -\lim_{\Delta t \to 0} \frac{1 - p_{jj}(\Delta t)}{\Delta t}
where 1 - p_{jj}(\Delta t) is the probability that the chain
leaves state j during \Delta t.
Transition Rate Matrix Q.
The event that the MC will transition out of state j has an
exponential residual lifetime with rate \Lambda(j); therefore, the
probability that an event will occur in the interval (t, t+\tau) is
given by G_j(\tau) = 1 - e^{-\Lambda(j)\tau}. Thus
q_{jj} = \left. \frac{\partial p_{jj}(t)}{\partial t} \right|_{t=0} = -\Lambda(j) e^{-\Lambda(j) \cdot 0} = -\Lambda(j)
Note that for each row i, the sum
\sum_j q_{ij} = 0
Transition Probabilities P.
Suppose that state transitions occur at random points in
time T_1 < T_2 < \cdots < T_k < \cdots
Let X_k be the state after the transition at T_k.
Define
P_{ij} = \Pr\{X_{k+1} = j \mid X_k = i\}
Recall that in the case of the superposition of two or more
Poisson processes, the probability that the next event is
from process j is given by \lambda_j / \Lambda.
In this case, we have
P_{ij} = \frac{q_{ij}}{-q_{ii}}, \quad i \ne j, \qquad \text{and} \qquad P_{ii} = 0
Example
Assume a transmitter where packets arrive according to a
Poisson process with rate λ.
Each packet is processed using a First In First Out (FIFO)
policy.
The transmission time of each packet is exponential with
rate μ.
The transmitter has a buffer that can store up to two packets
waiting to be transmitted.
Packets that find the buffer full are lost.
Draw the state transition diagram.
Find the Rate Transition Matrix Q.
Find the State Transition Matrix P
Example
[Figure: four-state birth-death diagram on states 0-3; arrivals (rate λ) move the chain one state up, departures (rate μ) move it one state down.]
The rate transition matrix is given by
Q = [ -λ       λ         0         0
       μ    -(λ+μ)       λ         0
       0        μ     -(λ+μ)       λ
       0        0         μ       -μ ]
The state transition matrix is given by
P = [ 0         1         0         0
      μ/(λ+μ)   0         λ/(λ+μ)   0
      0         μ/(λ+μ)   0         λ/(λ+μ)
      0         0         1         0 ]
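A small Python sketch (with illustrative λ and μ values, not from the slides) that builds this Q and recovers the jump-chain matrix P via P_ij = q_ij / (-q_ii):

```python
import numpy as np

lam, mu = 1.0, 2.0   # illustrative rates
Q = np.array([[-lam,       lam,        0.0,  0.0],
              [  mu, -(lam+mu),        lam,  0.0],
              [ 0.0,        mu, -(lam+mu),   lam],
              [ 0.0,       0.0,         mu, -mu]])

# Jump-chain probabilities: P_ij = q_ij / (-q_ii) for i != j, P_ii = 0
P = Q / (-np.diag(Q))[:, None]
np.fill_diagonal(P, 0.0)
print(P)    # row 1 gives [mu/(lam+mu), 0, lam/(lam+mu), 0]
```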
State Probabilities and Transient
Analysis
Similar to the discrete-time case, we define
\pi_j(t) = \Pr\{X(t) = j\}
In vector form,
\pi(t) = [\pi_1(t), \pi_2(t), \ldots]
with initial probabilities
\pi(0) = [\pi_1(0), \pi_2(0), \ldots]
Using our previous notation (for a homogeneous MC),
\pi(t) = \pi(0) P(t) = \pi(0) e^{Qt}
Obtaining a general closed-form solution is not easy!
Differentiating with respect to t gives us more "insight":
\frac{d\pi(t)}{dt} = \pi(t) Q, \qquad \frac{d\pi_j(t)}{dt} = q_{jj} \pi_j(t) + \sum_{i \ne j} q_{ij} \pi_i(t)
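A minimal Python sketch (illustrative, reusing the example Q from above) that integrates the forward equations dπ/dt = πQ numerically:

```python
import numpy as np
from scipy.integrate import solve_ivp

lam, mu = 1.0, 2.0   # illustrative rates
Q = np.array([[-lam,       lam,        0.0,  0.0],
              [  mu, -(lam+mu),        lam,  0.0],
              [ 0.0,        mu, -(lam+mu),   lam],
              [ 0.0,       0.0,         mu, -mu]])
pi0 = [1.0, 0.0, 0.0, 0.0]           # start in state 0

# Forward equations: d(pi)/dt = pi Q
sol = solve_ivp(lambda t, y: y @ Q, (0.0, 10.0), pi0)
print(sol.y[:, -1])                  # approaches the stationary distribution
```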
“Probability Fluid” view
We view \pi_j(t) as the level of a "probability fluid" that is
stored at each node j (0 = empty, 1 = full).
\frac{d\pi_j(t)}{dt} = \underbrace{\sum_{i \ne j} q_{ij} \pi_i(t)}_{\text{inflow}} - \underbrace{(-q_{jj}) \pi_j(t)}_{\text{outflow}}
The change in the probability fluid equals inflow minus outflow.
[Figure: node j with inflow from nodes i at rates q_{ij} and outflow to nodes r at rates q_{jr}, where q_{jj} = -\sum_{r \ne j} q_{jr}.]
Steady State Analysis
Often we are interested in the "long-run" probabilistic
behavior of the Markov chain, i.e.,
\pi_j = \lim_{t \to \infty} \pi_j(t)
These are referred to as steady state probabilities,
equilibrium state probabilities, or stationary state probabilities.
As with the discrete-time case, we need to address the
following questions
Under what conditions do the limits exist?
If they exist, do they form legitimate probabilities?
How can we evaluate these limits?
Steady State Analysis
Theorem: In an irreducible continuous-time Markov chain
consisting of positive recurrent states, a unique stationary
state probability vector \pi exists with
\pi_j = \lim_{t \to \infty} \pi_j(t)
These probabilities are independent of the initial state
probability and can be obtained by solving
\pi Q = 0 \qquad \text{and} \qquad \sum_j \pi_j = 1
Using the "probability fluid" view, in steady state the change
in fluid at each node j is zero (inflow equals outflow):
0 = q_{jj} \pi_j + \sum_{i \ne j} q_{ij} \pi_i
[Figure: node j with inflow at rates q_{ij} balanced by outflow at rates q_{jr}.]
Example
[Figure: the four-state birth-death diagram with arrival rate λ and departure rate μ.]
For the previous example, with the above transition rate
matrix, what are the steady state probabilities?
Solve
\pi Q = 0 \qquad \text{and} \qquad \pi_0 + \pi_1 + \pi_2 + \pi_3 = 1
with
Q = [ -λ       λ         0         0
       μ    -(λ+μ)       λ         0
       0        μ     -(λ+μ)       λ
       0        0         μ       -μ ]
Example
The solution is obtained from
-\lambda \pi_0 + \mu \pi_1 = 0
\lambda \pi_0 - (\lambda + \mu) \pi_1 + \mu \pi_2 = 0
\lambda \pi_1 - (\lambda + \mu) \pi_2 + \mu \pi_3 = 0
\pi_0 + \pi_1 + \pi_2 + \pi_3 = 1
which gives, with \rho = \lambda / \mu,
\pi_1 = \rho \pi_0, \qquad \pi_2 = \rho^2 \pi_0, \qquad \pi_3 = \rho^3 \pi_0
\pi_0 = \frac{1}{1 + \rho + \rho^2 + \rho^3}
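A small Python sketch (illustrative rates, one standard numerical approach) that solves πQ = 0 with the normalization condition and checks the closed form:

```python
import numpy as np

def ctmc_stationary(Q):
    """Solve pi Q = 0 with sum(pi) = 1 by replacing one balance
    equation with the normalization condition."""
    n = Q.shape[0]
    A = np.vstack([Q.T[:-1], np.ones(n)])
    b = np.zeros(n); b[-1] = 1.0
    return np.linalg.solve(A, b)

lam, mu = 1.0, 2.0   # illustrative rates
Q = np.array([[-lam,       lam,        0.0,  0.0],
              [  mu, -(lam+mu),        lam,  0.0],
              [ 0.0,        mu, -(lam+mu),   lam],
              [ 0.0,       0.0,         mu, -mu]])

rho = lam / mu
print(ctmc_stationary(Q))
print(np.array([1, rho, rho**2, rho**3]) / (1 + rho + rho**2 + rho**3))
```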
Birth-Death Chain
[Figure: birth-death chain with birth rates λ_0, λ_1, ..., λ_{i-1}, λ_i, ... and death rates μ_1, μ_2, ..., μ_i, μ_{i+1}, ...]
Find the steady state probabilities.
Similarly to the previous example,
Q = [ -λ_0      λ_0          0         ...
       μ_1   -(λ_1+μ_1)     λ_1        ...
       0        μ_2      -(λ_2+μ_2)    ...
       ...                              ]
and we solve
\pi Q = 0 \qquad \text{and} \qquad \sum_i \pi_i = 1
Example
The solution is obtained from
-\lambda_0 \pi_0 + \mu_1 \pi_1 = 0 \quad \Rightarrow \quad \pi_1 = \frac{\lambda_0}{\mu_1} \pi_0
\lambda_0 \pi_0 - (\lambda_1 + \mu_1) \pi_1 + \mu_2 \pi_2 = 0 \quad \Rightarrow \quad \pi_2 = \frac{\lambda_0 \lambda_1}{\mu_1 \mu_2} \pi_0
In general,
\lambda_{j-1} \pi_{j-1} - (\lambda_j + \mu_j) \pi_j + \mu_{j+1} \pi_{j+1} = 0 \quad \Rightarrow \quad \pi_j = \frac{\lambda_0 \lambda_1 \cdots \lambda_{j-1}}{\mu_1 \mu_2 \cdots \mu_j} \pi_0
Making the sum equal to 1,
\pi_0 \left( 1 + \sum_{j=1}^{\infty} \frac{\lambda_0 \cdots \lambda_{j-1}}{\mu_1 \cdots \mu_j} \right) = 1 \quad \Rightarrow \quad \pi_0 = \frac{1}{1 + S}
A solution exists if
S = \sum_{j=1}^{\infty} \frac{\lambda_0 \cdots \lambda_{j-1}}{\mu_1 \cdots \mu_j} < \infty
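A small Python sketch (illustrative, truncating the infinite sum once terms are negligible) that evaluates the product formula numerically:

```python
import numpy as np

def bd_stationary(lam, mu, tol=1e-12, max_states=10_000):
    """Steady state of a birth-death chain via
    pi_j = pi_0 * (lam_0 ... lam_{j-1}) / (mu_1 ... mu_j);
    lam(j) and mu(j) are rate functions of the state index."""
    terms = [1.0]
    for j in range(1, max_states):
        terms.append(terms[-1] * lam(j - 1) / mu(j))
        if terms[-1] < tol:          # truncate once terms are negligible
            break
    pi = np.array(terms)
    return pi / pi.sum()             # normalization: pi_0 = 1 / (1 + S)

# Constant rates lambda_j = 1, mu_j = 2  =>  pi_j = (1/2)^j * (1/2)
print(bd_stationary(lambda j: 1.0, lambda j: 2.0)[:5])
```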
Uniformization of Markov Chains
In general, discrete-time models are easier to work with,
and computers (which are needed to solve such models)
operate in discrete time.
Thus, we need a way to turn continuous-time Markov chains
into discrete-time Markov chains.
Uniformization Procedure
Recall that the total rate out of state i is -q_{ii} = \Lambda(i). Pick
a uniform rate \gamma such that \gamma \ge \Lambda(i) for all states i.
The difference \gamma - \Lambda(i) implies a "fictitious" event that
returns the MC back to state i (a self-loop).
Uniformization of Markov Chains
Uniformization Procedure
Let P^U_{ij} be the transition probability from state i to state j for the
discrete-time uniformized Markov chain; then
P^U_{ij} = \begin{cases} 1 - \sum_{k \ne i} \dfrac{q_{ik}}{\gamma} & \text{if } i = j \\[4pt] \dfrac{q_{ij}}{\gamma} & \text{if } i \ne j \end{cases}
[Figure: uniformization replaces each continuous-time transition rate q_{ij} out of state i by a discrete-time transition probability q_{ij}/\gamma, with a self-loop at state i absorbing the remaining probability.]
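A minimal Python sketch (illustrative) of the uniformization step, P^U = I + Q/γ:

```python
import numpy as np

def uniformize(Q, gamma=None):
    """Uniformized DTMC: P_U = I + Q/gamma with gamma >= max_i Lambda(i),
    where Lambda(i) = -q_ii. The slack gamma - Lambda(i) becomes a self-loop."""
    rates = -np.diag(Q)              # Lambda(i) for each state
    if gamma is None:
        gamma = rates.max()
    return np.eye(Q.shape[0]) + Q / gamma

lam, mu = 1.0, 2.0                   # illustrative rates
Q = np.array([[-lam,       lam,        0.0,  0.0],
              [  mu, -(lam+mu),        lam,  0.0],
              [ 0.0,        mu, -(lam+mu),   lam],
              [ 0.0,       0.0,         mu, -mu]])
PU = uniformize(Q)
print(PU)                            # rows sum to 1; diagonal = 1 - Lambda(i)/gamma
```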