Markov Chains (2)


Discrete-Time Markov Chains
Introduction
Markov modeling is an extremely important tool in the modeling and analysis of telecommunication networks.

Example: Markov models are applicable to the following networking problems:

- Connection Admission Control (CAC)
- Bandwidth Allocation
- Congestion Control
- Routing
- Queuing and Scheduling

© Tallal Elshabrawy
Markov Property
[Timeline: past {X(u)}, u < t | present X(t) = X at time t | future {X(u)}, u > t]

- X(t) depicts a stochastic process
- The Markov Property: conditional on X(t) = X, the future {X(u)}, u > t, and the past {X(u)}, u < t, are statistically independent
Markov Process = Random State Process
- A Markov process is characterized by a state variable
- The state depicts a random process with the condition that knowledge of the value of the state at any given time t removes the need for any knowledge of the system's past
- Examples:
  - The state could be the number of packets inside a queue with exponential inter-arrival and exponential service times
Types of Markov Process
A Markov process may be discrete or continuous in each of two respects:

- Time: discrete or continuous
- Amplitude: discrete or continuous

- Markov processes with discrete amplitude are termed Markov Chains
- The amplitude in Markov chains is discrete in the sense that values are drawn from a state space S, where S = {s0, s1, s2, …}
Discrete-Time Markov Chains
Consider a Markov chain {X(t)} in discrete time t = 0, 1, 2, …

[Timeline: X(0), X(1), X(2), X(3), X(4), …, X(n) at t = 0, 1, 2, 3, 4, …, n]

Pr[X(0) = ε0, X(1) = ε1, …, X(n) = εn],  ε0, ε1, …, εn ∈ S
= Pr[X(0) = ε0] · Pr[X(1) = ε1, …, X(n) = εn | X(0) = ε0]
= Pr[X(0) = ε0] · Pr[X(1) = ε1 | X(0) = ε0] · Pr[X(2) = ε2, …, X(n) = εn | X(0) = ε0, X(1) = ε1]

In the last factor, X(1) = ε1 is the present and X(0) = ε0 the past with respect to t = 1, while X(2), …, X(n) are the future. From the Markov property, the conditioning on X(0) = ε0 is irrelevant in determining the future with respect to t = 1.
Discrete-Time Markov Chains
Consider a Markov chain {X(t)} in discrete time t = 0, 1, 2, …

[Timeline: X(0), X(1), X(2), X(3), X(4), …, X(n) at t = 0, 1, 2, 3, 4, …, n]

Pr[X(0) = ε0, X(1) = ε1, …, X(n) = εn],  ε0, ε1, …, εn ∈ S
= Pr[X(0) = ε0] · Pr[X(1) = ε1 | X(0) = ε0] · Pr[X(2) = ε2, …, X(n) = εn | X(1) = ε1]

Continuing in the same way:

Pr[X(0) = ε0, X(1) = ε1, …, X(n) = εn]
= Pr[X(0) = ε0] · Π_{k=1}^{n} Pr[X(k) = εk | X(k-1) = εk-1]

where Pr[X(0) = ε0] is the initial condition and the factors Pr[X(k) = εk | X(k-1) = εk-1] are the transition probabilities.
Theorem
- The statistics of a Markov chain X(·) are completely specified by:
  - The initial distribution (the distribution of X(0))
  - The transition probabilities (the distribution of X(k) given X(k-1)) for all k
- Goal: to reduce the complexity of describing random processes
Further Reduction: Time Invariance
- A Markov process X(·) is said to be time-homogeneous if and only if the distribution of X(t) given X(s), for s < t, depends only on the difference t - s, i.e., time homogeneity means

Pr[X(t) = ε1 | X(s) = ε0] = Pr[X(t-s) = ε1 | X(0) = ε0]
Modeling & Notations
- Assume the Markov chain may assume one of M states, i.e., S = {S0, S1, …, SM-1}
- The chain can be fully characterized by:
  - An M×1 vector Q(0) = {Pr[X(0) = Si]}, 0 ≤ i ≤ M-1
  - An M×M matrix P(t) = {Pij(t)}, 0 ≤ i ≤ M-1, 0 ≤ j ≤ M-1

[State diagram: states S0, S1, S2, …, Si, …, Sj, …, SM-1, with transition probability Pij(t) on the arc from Si to Sj and Pji(t) on the arc from Sj to Si]

Pij(t) = Pr[X(s+t) = Sj | X(s) = Si]
Pji(t) = Pr[X(s+t) = Si | X(s) = Sj]
Example
 0.4 
Q  0    0.1  P  t  
 0.5 
Q t  ?
0.2 0.4 0.4 
 0.1 0.1 0.8 


 0.7 0.2 0.1 
S0
S1
S2
Pr  X  t   S 0   0.4  0.2  0.1  0.1  0.5  0.7  0.44
Pr  X  t   S1   0.4  0.4  0.1  0.1  0.5  0.2  0.27
Pr  X  t   S2   0.4  0.4  0.1  0.8  0.5  0.1  0.29
 0.44 


Q  t    0.27 
 0.29 
© Tallal Elshabrawy
QT  t   QT  0   P  t 
Matrix
Multiplication
11
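The same computation as a short Python sketch (plain lists, no external libraries), reproducing the numbers above:

```python
# Q^T(t) = Q^T(0) P(t): the distribution at time t from the initial distribution.
Q0 = [0.4, 0.1, 0.5]
P = [[0.2, 0.4, 0.4],
     [0.1, 0.1, 0.8],
     [0.7, 0.2, 0.1]]

def propagate(q, P):
    """Row-vector / matrix product: q'_j = sum_i q_i * P[i][j]."""
    M = len(P)
    return [sum(q[i] * P[i][j] for i in range(M)) for j in range(M)]

Qt = propagate(Q0, P)
print([round(x, 2) for x in Qt])  # [0.44, 0.27, 0.29], matching the slide
```

Note that the result is again a probability vector: its entries sum to 1.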
Stationary Markov Chains
- If a chain is stationary: Q(t) = {Pr[X(t) = Si]}, 0 ≤ i ≤ M-1, does not depend on t
- Stationary means: the distribution of the position is insensitive to time
Chapman-Kolmogorov
[Timeline: X(s) = Si at time s, X(t) = Sj at time t, X(u) = Sk at time u, with s < t < u]

Pr[X(u) = Sk | X(s) = Si] = Σ_j Pr[X(u) = Sk, X(t) = Sj | X(s) = Si]
= Σ_j Pr[X(u) = Sk | X(t) = Sj, X(s) = Si] · Pr[X(t) = Sj | X(s) = Si]

With respect to time t, X(u) = Sk is the future, X(t) = Sj the present, and X(s) = Si the past; by the Markov property the conditioning on X(s) = Si drops out of the first factor. This gives the Chapman-Kolmogorov equations:

Pr[X(u) = Sk | X(s) = Si] = Σ_j Pr[X(u) = Sk | X(t) = Sj] · Pr[X(t) = Sj | X(s) = Si]

True for all Si, Sk and for all time points s < t < u.
Example
P(t) = | 0.2  0.4  0.4 |
       | 0.1  0.1  0.8 |
       | 0.7  0.2  0.1 |

P(2t) = ?  (states S0, S1, S2; s = 0, intermediate time t, u = 2t)

To move from state S0 to S1 within time 2t, we have the following three possibilities:
1. A transition from S0 to S0 between 0 and t, followed by a transition from S0 to S1 between t and 2t
2. A transition from S0 to S1 between 0 and t, followed by a transition from S1 to S1 between t and 2t
3. A transition from S0 to S2 between 0 and t, followed by a transition from S2 to S1 between t and 2t

P01(2t) = Σ_{j=0}^{2} P0j(t) Pj1(t) = 0.2×0.4 + 0.4×0.1 + 0.4×0.2 = 0.2
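The same sum, and the full matrix P(2t) = P(t)P(t), can be computed with a small Python sketch reusing the matrix above:

```python
# Chapman-Kolmogorov in matrix form: P(2t) = P(t) P(t).
P = [[0.2, 0.4, 0.4],
     [0.1, 0.1, 0.8],
     [0.7, 0.2, 0.1]]

def matmul(A, B):
    """Product of two square matrices: C[i][k] = sum_j A[i][j] * B[j][k]."""
    M = len(A)
    return [[sum(A[i][j] * B[j][k] for j in range(M)) for k in range(M)]
            for i in range(M)]

P2t = matmul(P, P)
print(round(P2t[0][1], 2))  # P01(2t) = 0.2, the value derived on the slide
```

Each entry of P2t sums the three path possibilities enumerated above: every intermediate state j contributes P0j(t)·Pj1(t).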
Chapman-Kolmogorov: Matrix Notation
[Timeline: X(s) = Si at time s, X(t) = Sj at time t, X(u) = Sk at time u]

Chapman-Kolmogorov Equations:

Pr[X(u) = Sk | X(s) = Si] = Σ_j Pr[X(u) = Sk | X(t) = Sj] · Pr[X(t) = Sj | X(s) = Si]

In matrix notation (for a time-homogeneous chain):

P(s + t) = P(s) P(t)
One Step Transition Probabilities
[Timeline: Si at time t, Sj at time t+1, Sk at time t+2]

P(t + 1) = P(t) P(1), where P(1) is the one-step transition probability matrix
P(t + 2) = P(t + 1) P(1) = P(t) P^2(1)
…
P(t + n) = P(t) P^n(1)

In the discrete-time case: P(t) = P^t(1)
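The relation P(t) = P^t(1) can be sketched by repeated multiplication (plain Python, reusing the slides' one-step matrix):

```python
# Discrete-time case: the t-step matrix is the t-th power of the one-step matrix.
P1 = [[0.2, 0.4, 0.4],
      [0.1, 0.1, 0.8],
      [0.7, 0.2, 0.1]]

def matmul(A, B):
    """Product of two square matrices: C[i][k] = sum_j A[i][j] * B[j][k]."""
    M = len(A)
    return [[sum(A[i][j] * B[j][k] for j in range(M)) for k in range(M)]
            for i in range(M)]

def t_step_matrix(P, t):
    """P(t) = P^t(1), built by t-1 successive multiplications by P(1)."""
    result = P
    for _ in range(t - 1):
        result = matmul(result, P)
    return result

P3 = t_step_matrix(P1, 3)
# Each row of P(3) is still a probability distribution over the next state.
print(all(abs(sum(row) - 1.0) < 1e-9 for row in P3))  # True
```

For large t, a library routine such as NumPy's matrix power would be the practical choice; the loop here just mirrors the derivation step by step.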
Summary
- Discrete-Time Homogeneous Markov Chain
- A complete statistical characterization consists of:
  - Q(0): the initial probability distribution
  - P(1): the one-step transition probability matrix
- Q^T(t) = Q^T(0) P(t)
  - (i.e., Pr[X(t) = Sj] = Σ_i Pr[X(0) = Si] Pr[X(t) = Sj | X(0) = Si])
- Q(t) is a probability vector
  - (i.e., Σ_i Pr[X(t) = Si] = 1)
- The matrix P(t) is stochastic
  - All elements ≥ 0
  - The sum of each row equals 1 (Σ_j Pij(t) = 1)
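The two stochastic-matrix conditions in the summary can be verified with a small helper (a Python sketch; the function name is purely illustrative):

```python
# Check that a matrix is (row-)stochastic: nonnegative entries, rows summing to 1.
def is_stochastic(P, tol=1e-9):
    return all(
        all(p >= 0 for p in row) and abs(sum(row) - 1.0) < tol
        for row in P
    )

P = [[0.2, 0.4, 0.4],
     [0.1, 0.1, 0.8],
     [0.7, 0.2, 0.1]]

print(is_stochastic(P))                     # True
print(is_stochastic([[0.5, 0.6], [1, 0]]))  # False: first row sums to 1.1
```

Any candidate one-step matrix P(1) should pass this check before being used to propagate Q(t), since products and powers of stochastic matrices remain stochastic.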