
A brief review
The exponential distribution
A continuous random variable X is said to have an exponential
distribution with parameter λ, λ > 0, if its probability density
function is given by

$$f(x) = \begin{cases} \lambda e^{-\lambda x}, & x \ge 0 \\ 0, & x < 0 \end{cases}$$

Its cumulative distribution function is

$$F(x) = \int_{-\infty}^{x} f(y)\,dy = \begin{cases} 1 - e^{-\lambda x}, & x \ge 0 \\ 0, & x < 0 \end{cases}$$

and its mean is

$$E(X) = \int_{-\infty}^{\infty} x f(x)\,dx = \int_{0}^{\infty} x\,\lambda e^{-\lambda x}\,dx = \frac{1}{\lambda}.$$
The memoryless property
A random variable X is said to be memoryless if

$$P(X > s + t \mid X > t) = P(X > s) \quad \text{for all } s, t \ge 0.$$

Since

$$P(X > s + t \mid X > t) = \frac{P(X > s + t,\; X > t)}{P(X > t)} = \frac{P(X > s + t)}{P(X > t)},$$

the memoryless property is equivalent to

$$P(X > s + t) = P(X > t)\,P(X > s).$$

If X has the exponential distribution, then

$$P(X > s + t) = e^{-\lambda (s + t)} = e^{-\lambda s} e^{-\lambda t} = P(X > t)\,P(X > s).$$

⇒ Exponentially distributed random variables are memoryless.
The exponential distribution is the only distribution that has the memoryless property.
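A minimal simulation of the memoryless property (the rate λ = 2.0 and the values s = 0.4, t = 0.7 below are illustrative assumptions, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(1)
lam, s, t = 2.0, 0.4, 0.7
x = rng.exponential(1.0 / lam, size=1_000_000)

lhs = (x > s + t).sum() / (x > t).sum()  # P(X > s+t | X > t)
rhs = (x > s).mean()                     # P(X > s)
print(lhs, rhs)                          # both should be close to e^(-lam*s)
```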
The minimum of n exponentially distributed
random variables
Suppose that X1, X2, ..., Xn are independent exponential
random variables, with Xi having rate λi, i = 1, ..., n.
What is P(min(X1, X2, ..., Xn) > x)?

$$\begin{aligned}
P(\min(X_1, X_2, \ldots, X_n) > x) &= P\{(X_1 > x), (X_2 > x), \ldots, (X_n > x)\} \\
&= P\{X_1 > x\}\,P\{X_2 > x\} \cdots P\{X_n > x\} \\
&= e^{-\lambda_1 x}\, e^{-\lambda_2 x} \cdots e^{-\lambda_n x} \\
&= e^{-(\lambda_1 + \lambda_2 + \cdots + \lambda_n) x}
\end{aligned}$$

⇒ The random variable min(X1, X2, ..., Xn) is exponentially distributed with mean 1/(λ1 + λ2 + ... + λn).
Comparing two exponentially distributed
random variables
Suppose that X1 and X2 are independent exponentially
distributed random variables with rates λ1 and λ2.
What is P(X1 < X2)?

$$\begin{aligned}
P(X_1 < X_2) &= \int_0^{\infty} P(X_1 < X_2 \mid X_1 = x)\, f_{X_1}(x)\,dx \\
&= \int_0^{\infty} P(X_1 < X_2 \mid X_1 = x)\, \lambda_1 e^{-\lambda_1 x}\,dx
 = \int_0^{\infty} P(x < X_2)\, \lambda_1 e^{-\lambda_1 x}\,dx \\
&= \int_0^{\infty} e^{-\lambda_2 x}\, \lambda_1 e^{-\lambda_1 x}\,dx
 = \int_0^{\infty} \lambda_1 e^{-(\lambda_1 + \lambda_2) x}\,dx \\
&= \frac{\lambda_1}{\lambda_1 + \lambda_2}.
\end{aligned}$$
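A sketch checking P(X1 < X2) = λ1/(λ1 + λ2) by simulation (λ1 = 1.0 and λ2 = 3.0 are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
lam1, lam2 = 1.0, 3.0
x1 = rng.exponential(1.0 / lam1, size=1_000_000)
x2 = rng.exponential(1.0 / lam2, size=1_000_000)

print((x1 < x2).mean(), lam1 / (lam1 + lam2))  # empirical vs. exact probability
```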
The Poisson process
The counting process {N(t), t ≥ 0} is said to be a Poisson process
having rate λ, λ > 0, if
(i) N(0) = 0.
(ii) The process has independent increments.
(iii) The number of events in any interval of length t is Poisson
distributed with mean λt. That is, for all s, t ≥ 0,

$$P\{N(t + s) - N(s) = n\} = e^{-\lambda t}\,\frac{(\lambda t)^n}{n!}, \qquad n = 0, 1, \ldots$$
The distribution of interarrival times
for a Poisson process
Let Tn denote the inter-arrival time between the (n−1)th event
and the nth event of a Poisson process. Then the Tn (n = 1, 2, ...)
are independent, identically distributed exponential random
variables having mean 1/λ.
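A sketch that builds a Poisson process from exponential inter-arrival times and checks that the count N(t) has mean λt (λ = 4.0, t = 2.5, and the cap of 50 events per run are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
lam, t, reps = 4.0, 2.5, 20_000

counts = np.empty(reps, dtype=int)
for r in range(reps):
    # arrival epochs = cumulative sums of exponential(mean 1/lam) inter-arrival times
    arrivals = np.cumsum(rng.exponential(1.0 / lam, size=50))  # 50 >> lam*t expected events
    counts[r] = np.searchsorted(arrivals, t)                   # N(t) = number of arrivals in [0, t]

print(counts.mean(), lam * t)  # sample mean of N(t) vs. the Poisson mean lam*t
```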
Continuous Time Markov Chains
(CTMC)
• A CTMC is a continuous time analog to a discrete time
Markov chain
• A CTMC is defined with respect to a continuous time
stochastic process {X(t): t ≥0}
• If X(t) = i the process is said to be in state i at time t
• {i: i=0, 1, 2, ...} is the state space
A stochastic process {X(t): t ≥ 0} is a continuous time Markov
chain if for all s, t ≥ 0 and all 0 ≤ u < s,
P{X(t+s)=j | X(s)=i, X(u)=x(u), 0 ≤ u < s} = P{X(t+s)=j | X(s)=i}
A CTMC is said to have stationary transition probabilities if
P{X(t+s)=j|X(s)=i}
is independent of s (and depends only on t).
⇒ Pij(t) = P{X(t+s)=j|X(s)=i}
Note: We shall always assume that the stationary property holds
Sojourn times
• Ti: time the process spends in state i once it enters state i (the
length of a visit to state i, a random variable).
Example 1: X(0)=2, the first three transitions occur at t1 =3, t2 =
4.2 and t3 = 6.5 and X(3)=4, X(4.2) = 2 and X(6.5) = 1.
⇒ The first sojourn time in state 4 = 4.2 − 3 = 1.2
The second sojourn time in state 2 = 6.5 − 4.2 = 2.3
Example 2: Suppose the process has been in state 3 for 10
minutes, what is the probability that it will not leave state 3 in
the next 5 minutes?
P(T3 > 15 | T3 > 10) = P(T3 > 5)
More generally,
P(Ti > s+t | Ti > s) = P(Ti > t)
⇒ Ti is memoryless and therefore has the exponential
distribution.
An alternative definition of a CTMC
A CTMC is a stochastic process having the properties that each
time it enters a state i
(i) the amount of time it spends in that state before making a
transition into a different state is exponentially distributed with
some mean 1/vi (or transition rate vi)
(ii) when the process leaves state i, it next enters state j with
some probability Pij (Pii = 0 and Σj Pij = 1, for all i)
A CTMC is a stochastic process that moves from state to state
according to a probability transition matrix (similar to a discrete
time Markov chain), but the amount of time it spends in each
state is exponentially distributed.
To define a CTMC, we need to define a state space, a probability
transition matrix, and a set of transition rates.
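A minimal sketch of simulating a CTMC directly from this definition: draw an exponential sojourn time with rate vi, then jump according to the embedded matrix Pij. The 3-state chain below is an invented illustration, not one of the examples that follow:

```python
import numpy as np

rng = np.random.default_rng(5)
v = np.array([1.0, 2.0, 3.0])             # sojourn rates v_i
P = np.array([[0.0, 0.5, 0.5],            # embedded transition matrix, P_ii = 0
              [0.7, 0.0, 0.3],
              [0.4, 0.6, 0.0]])

def simulate_ctmc(state, horizon):
    """Return the (time, state) pairs visited up to the time horizon."""
    t, path = 0.0, [(0.0, state)]
    while t < horizon:
        t += rng.exponential(1.0 / v[state])    # exponential sojourn in the current state
        state = rng.choice(len(v), p=P[state])  # next state drawn from row P[state]
        path.append((t, state))
    return path

print(simulate_ctmc(0, 10.0)[:5])
```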
Example 1: Customers arrive to a store according to a Poisson
process with rate λ. Let N(t) be the total number of customers
that have arrived by time t.
⇒ State space is {0, 1, 2, ...};
Ti is exponentially distributed with mean 1/λ;
Pij = 1 if j = i+1 and Pij = 0 otherwise
Example 2: Customers arrive to an airline check-in counter
according to a Poisson process with rate λ. The time it takes the
single agent at the counter to check in a customer is
exponentially distributed with mean 1/μ.
⇒ State space is {0, 1, 2, ...}
P0,1 = 1; Pi,i+1 = λ/(λ+μ) and Pi,i−1 = μ/(λ+μ) for i ≥ 1
T0 has mean 1/λ ⇒ v0 = λ; Ti has mean 1/(λ+μ) ⇒ vi = λ+μ for i = 1, 2, ...
The above is an example of an M/M/1 queue.
The M/M/1 queue is an example of a birth and death process.
Birth and death process
Example 3: Customers arrive to a service center according to a
Poisson process with rate λn when there are n customers in the
system. Customers take an amount of time Tn, exponentially
distributed with mean 1/μn, to be served when there are n
customers in the system.
⇒ State space is {0, 1, 2, ...}
P0,1 = 1;
Pn,n+1 = λn/(λn+μn) and Pn,n−1 = μn/(λn+μn) for n ≥ 1
T0 has mean 1/λ0 ⇒ v0 = λ0; Tn has mean 1/(λn+μn) ⇒ vn = λn+μn for n = 1, 2, ...
State transition diagrams for
a B&D process

[State transition diagram: states 0, 1, 2, 3, ... with birth rates λ0, λ1, λ2, ... on the arcs to the right and death rates μ1, μ2, μ3, ... on the arcs to the left.]
• The Poisson process is a birth and death process with rates
λn = λ and μn = 0.
• The M/M/1 queue is described by a birth and death process
with rates λn = λ and μn = μ.
Example 4: Customers arrive to an airline check-in counter
according to a Poisson process with rate λ. The time it takes one
of the m agents at the counter to check in a customer is
exponentially distributed with mean 1/μ.
⇒ The system can be modeled as a birth and death process with
transition rates
λn = λ;
μn = nμ if 1 ≤ n < m
μn = mμ if n ≥ m
The above is an example of an M/M/m queue.
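As a small sketch, the birth and death rates of this M/M/m example can be written as functions of n (λ = 3.0, μ = 2.0, and m = 4 agents are illustrative values, not from the slides):

```python
lam, mu, m = 3.0, 2.0, 4

def birth_rate(n: int) -> float:
    return lam                    # arrivals do not depend on the queue length

def death_rate(n: int) -> float:
    if n == 0:
        return 0.0                # nobody to serve
    return min(n, m) * mu         # n*mu while some agents are idle, m*mu once all m are busy

print([(n, birth_rate(n), death_rate(n)) for n in range(6)])
```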
Transition rates
vi: rate at which the process leaves state i (once it enters state i)
qij: rate at which the process goes from state i to state j (once it enters state i)
⇒ qij = Pij vi
(qij is also called the instantaneous transition rate from state i to j)
⇒ vi = Σj vi Pij = Σj qij
Pij = qij/vi = qij/Σj qij
Specifying the instantaneous transition rates determines the
parameters of the CTMC
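A short sketch of these relations, reusing the invented 3-state (vi, Pij) pair from the earlier simulation example:

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])
P = np.array([[0.0, 0.5, 0.5],
              [0.7, 0.0, 0.3],
              [0.4, 0.6, 0.0]])

q = P * v[:, None]                       # q_ij = P_ij * v_i
print(q.sum(axis=1))                     # row sums recover v_i
print(q / q.sum(axis=1, keepdims=True))  # q_ij / sum_j q_ij recovers P_ij
```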
State transition diagrams

[State transition diagram: three states 0, 1, 2 with arcs labeled by the instantaneous transition rates q0,1, q0,2, q1,2, q2,0, q2,1.]
Properties
It can also be shown that
$$\lim_{h \to 0} \frac{1 - P_{ii}(h)}{h} = v_i
\qquad\text{and}\qquad
\lim_{h \to 0} \frac{P_{ij}(h)}{h} = q_{ij} \quad \text{for } i \ne j.$$
The Chapman-Kolmogorov equations
For all s, t ≥ 0,

$$P_{ij}(t + s) = \sum_{k=0}^{\infty} P_{ik}(t)\, P_{kj}(s).$$

Proof:

$$\begin{aligned}
P_{ij}(t + s) &= P(X(t + s) = j \mid X(0) = i) \\
&= \sum_{k=0}^{\infty} P(X(t + s) = j,\; X(t) = k \mid X(0) = i) \\
&= \sum_{k=0}^{\infty} P(X(t + s) = j \mid X(t) = k,\; X(0) = i)\, P(X(t) = k \mid X(0) = i) \\
&= \sum_{k=0}^{\infty} P(X(t + s) = j \mid X(t) = k)\, P(X(t) = k \mid X(0) = i) \\
&= \sum_{k=0}^{\infty} P_{kj}(s)\, P_{ik}(t).
\end{aligned}$$
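A numerical check of the Chapman-Kolmogorov equations for an invented 3-state chain, using the standard fact (not stated on these slides) that for a finite-state CTMC the matrix P(t) = [Pij(t)] is the matrix exponential of t times the generator Q, where Q has qij off the diagonal and −vi on it:

```python
import numpy as np
from scipy.linalg import expm

Q = np.array([[-1.0,  0.5,  0.5],   # generator: Q[i, j] = q_ij for i != j, Q[i, i] = -v_i
              [ 1.4, -2.0,  0.6],
              [ 1.2,  1.8, -3.0]])

t, s = 0.7, 1.3
lhs = expm(Q * (t + s))           # P(t + s)
rhs = expm(Q * t) @ expm(Q * s)   # sum_k P_ik(t) P_kj(s), as a matrix product
print(np.allclose(lhs, rhs))      # True
```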
Kolmogorov's backward equations
For all states i, j and times t ≥ 0,

$$P_{ij}'(t) = \sum_{k \ne i} q_{ik} P_{kj}(t) - v_i P_{ij}(t).$$

Proof:

$$P_{ij}(h + t) - P_{ij}(t) = \sum_{k=0}^{\infty} P_{ik}(h) P_{kj}(t) - P_{ij}(t)
= \sum_{k \ne i} P_{ik}(h) P_{kj}(t) - [1 - P_{ii}(h)]\, P_{ij}(t)$$

$$\Rightarrow \lim_{h \to 0} \frac{P_{ij}(h + t) - P_{ij}(t)}{h}
= \lim_{h \to 0} \left\{ \sum_{k \ne i} \frac{P_{ik}(h)}{h}\, P_{kj}(t) - \left[ \frac{1 - P_{ii}(h)}{h} \right] P_{ij}(t) \right\}$$

$$\Rightarrow P_{ij}'(t) = \sum_{k \ne i} q_{ik} P_{kj}(t) - v_i P_{ij}(t).$$
Kolmogorov's forward equations
Under suitable regularity conditions,

$$P_{ij}'(t) = \sum_{k \ne j} q_{kj} P_{ik}(t) - v_j P_{ij}(t)$$

for all i, j and times t ≥ 0.
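A sketch verifying both sets of equations with a finite difference, again writing P(t) as the matrix exponential of the generator (an assumption that holds for finite-state chains): in matrix form the backward equations read P'(t) = Q P(t) and the forward equations read P'(t) = P(t) Q:

```python
import numpy as np
from scipy.linalg import expm

Q = np.array([[-1.0,  0.5,  0.5],
              [ 1.4, -2.0,  0.6],
              [ 1.2,  1.8, -3.0]])

t, h = 0.9, 1e-6
Pt = expm(Q * t)
dP = (expm(Q * (t + h)) - Pt) / h          # numerical derivative P'(t)
print(np.allclose(dP, Q @ Pt, atol=1e-4))  # backward equations
print(np.allclose(dP, Pt @ Q, atol=1e-4))  # forward equations
```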
Limiting probabilities

$$P_j = \lim_{t \to \infty} P_{ij}(t)$$

Letting t → ∞ in the forward equations,

$$\lim_{t \to \infty} P_{ij}'(t) = \lim_{t \to \infty} \Big[ \sum_{k \ne j} q_{kj} P_{ik}(t) - v_j P_{ij}(t) \Big] = \sum_{k \ne j} q_{kj} P_k - v_j P_j.$$

Since Pij(t) converges, Pij'(t) → 0, so

$$\sum_{k \ne j} q_{kj} P_k - v_j P_j = 0 \;\Rightarrow\; v_j P_j = \sum_{k \ne j} q_{kj} P_k.$$

The limiting probabilities can be obtained by solving the following
system of equations:

$$\sum_{k \ne j} q_{kj} P_k - v_j P_j = 0 \ \text{ for all } j, \qquad \sum_j P_j = 1.$$

vj Pj = rate at which the process leaves state j
Σk≠j qkj Pk = rate at which the process enters state j

$$v_j P_j = \sum_{k \ne j} q_{kj} P_k \;\Rightarrow\; \text{rate out of state } j = \text{rate into state } j$$
When do the limiting probabilities exist?
The limiting probabilities Pj exist if
(a) all states of the Markov chain communicate (i.e., starting in
state i, there is a positive probability of ever being in state j, for
all i, j), and
(b) the Markov chain is positive recurrent (i.e., starting in any state, the
mean time to return to that state is finite).
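Before specializing to the M/M/1 queue, here is a sketch of solving the balance equations numerically for the invented 3-state chain used above (equivalently, solving πQ = 0 with π summing to 1; the generator values are illustrative):

```python
import numpy as np

Q = np.array([[-1.0,  0.5,  0.5],
              [ 1.4, -2.0,  0.6],
              [ 1.2,  1.8, -3.0]])

n = Q.shape[0]
A = np.vstack([Q.T, np.ones(n)])          # balance equations plus the normalization row
b = np.concatenate([np.zeros(n), [1.0]])
P_limit, *_ = np.linalg.lstsq(A, b, rcond=None)

print(P_limit)        # limiting probabilities P_j
print(P_limit @ Q)    # ≈ 0: rate out of each state equals rate into it
```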
The M/M/1 queue

[State transition diagram: states 0, 1, 2, 3, ... with arrival rate λ on each arc to the right and service rate μ on each arc to the left.]

rate out of state j = rate into state j:

State 0:      λP0 = μP1
State 1:      (λ+μ)P1 = μP2 + λP0
State 2:      (λ+μ)P2 = μP3 + λP1
State n ≥ 1:  (λ+μ)Pn = μPn+1 + λPn−1
The M/M/1 queue
By adding to each equation the equation preceding it, we obtain

State 0:  λP0 = μP1
State 1:  λP1 = μP2
State 2:  λP2 = μP3
State n:  λPn = μPn+1
The M/M/1 queue
Solving in terms of P0 yields

P1 = (λ/μ) P0
P2 = (λ/μ)^2 P0
P3 = (λ/μ)^3 P0
...
Pn = (λ/μ)^n P0
The M/M/1 queue
Using the fact that Σ_{n=0}^∞ Pn = 1, we obtain

P0 + P0 Σ_{n≥1} (λ/μ)^n = 1

⇒ P0 = 1 / (1 + Σ_{n≥1} (λ/μ)^n) = 1 − λ/μ

⇒ Pn = (λ/μ)^n (1 − λ/μ)

Note that we must have λ/μ < 1.
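A sketch comparing this closed form with a numerical solution of the balance equations on a truncated state space (λ = 2.0, μ = 3.0, and the truncation at 200 states are illustrative choices; the truncation error is negligible because (λ/μ)^200 is tiny):

```python
import numpy as np

lam, mu, N = 2.0, 3.0, 200
rho = lam / mu

# Generator of the M/M/1 chain truncated to states 0..N-1.
Q = np.zeros((N, N))
for n in range(N - 1):
    Q[n, n + 1] = lam     # birth (arrival) rate
    Q[n + 1, n] = mu      # death (service) rate
Q -= np.diag(Q.sum(axis=1))

A = np.vstack([Q.T, np.ones(N)])
b = np.concatenate([np.zeros(N), [1.0]])
P, *_ = np.linalg.lstsq(A, b, rcond=None)

print(P[:4])                                        # numerical P_0..P_3
print([(rho ** n) * (1 - rho) for n in range(4)])   # closed form, valid when rho < 1
```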