
Assignment 3
• Chapter 3: Problems 7, 11, 14
• Chapter 4: Problems 5, 6, 14
• Due date: Monday, March 15, 2004
Example
Inventory System: Inventory at a store is reviewed daily. If
inventory drops below 3 units, an order is placed with the
supplier, which is delivered the next day. The order size should
bring the inventory position up to 6 units. Daily demand D is i.i.d.
with distribution
P(D = 0) = 1/3
P(D = 1) = 1/3
P(D = 2) = 1/3.
Let X_n denote the inventory level on the nth day. Is the process
{X_n} a Markov chain? Assume we start with 6 units.
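The Markov structure can be seen by simulation: tomorrow's inventory depends only on today's level (and fresh demand), not on the earlier history. A minimal sketch, assuming that an order placed when X_n < 3 arrives before the next day's demand is filled (one reading of the timing in the problem; the function names are ours):

```python
import random

# Assumed dynamics (one reading of the problem statement):
#   X_{n+1} = 6 - D_{n+1}    if X_n < 3   (order brings position to 6)
#   X_{n+1} = X_n - D_{n+1}  if X_n >= 3  (no order placed)
# Daily demand D is i.i.d. uniform on {0, 1, 2}.

def step(x, d):
    """Next day's inventory level, given today's level x and demand d."""
    return 6 - d if x < 3 else x - d

def simulate(n_days, seed=0, x0=6):
    """Simulate the inventory level path starting from x0 = 6 units."""
    rng = random.Random(seed)
    x = x0
    path = [x]
    for _ in range(n_days):
        x = step(x, rng.randrange(3))  # demand uniform on {0, 1, 2}
        path.append(x)
    return path

# Since the next state is a function of the current state and fresh demand
# only, {X_n} is a Markov chain; under these dynamics it lives on {1, ..., 6}.
path = simulate(10)
```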
Markov Chains
• {X_n: n = 0, 1, 2, ...} is a discrete-time stochastic process
• If X_n = i, the process is said to be in state i at time n
• {i: i = 0, 1, 2, ...} is the state space
• If P(X_{n+1} = j | X_n = i, X_{n-1} = i_{n-1}, ..., X_0 = i_0) = P(X_{n+1} = j | X_n = i) = P_ij,
the process is said to be a Discrete Time Markov Chain (DTMC).
• P_ij is the transition probability from state i to state j
P_ij ≥ 0 for all i, j ≥ 0;   Σ_{j=0}^∞ P_ij = 1 for i = 0, 1, ...

P = [ P_00  P_01  P_02  ... ]
    [ P_10  P_11  P_12  ... ]
    [  :     :     :        ]
    [ P_i0  P_i1  P_i2  ... ]
    [  :     :     :        ]

P: the transition matrix
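The two defining properties of a transition matrix (nonnegative entries, rows summing to 1) are easy to verify programmatically. A small sketch (the helper name is ours):

```python
def is_stochastic(P, tol=1e-9):
    """Check the two defining properties of a transition matrix:
    every entry is nonnegative and every row sums to 1."""
    return all(
        all(p >= 0 for p in row) and abs(sum(row) - 1.0) < tol
        for row in P
    )

# The rain chain from Example 1 with a = 0.7, b = 0.4:
P = [[0.7, 0.3],
     [0.4, 0.6]]
ok = is_stochastic(P)
```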
• Example 1: Probability it will rain tomorrow depends only
on whether it rains today or not:
P(rain tomorrow | rain today) = a
P(rain tomorrow | no rain today) = b
State 0 = rain
State 1 = no rain

P = [ a  1-a ]
    [ b  1-b ]
• Example 4: A gambler wins $1 with probability p, loses $1
with probability 1-p. She starts with $N and quits if she reaches
either $M or $0. X_n is the amount of money the gambler has after
playing n rounds.
• P(X_n = i+1 | X_{n-1} = i, X_{n-2} = i_{n-2}, ..., X_0 = N) = P(X_n = i+1 | X_{n-1} = i) = p for i ≠ 0, M
• P(X_n = i-1 | X_{n-1} = i, X_{n-2} = i_{n-2}, ..., X_0 = N) = P(X_n = i-1 | X_{n-1} = i) = 1 - p for i ≠ 0, M
Hence P_{i,i+1} = P(X_n = i+1 | X_{n-1} = i) and P_{i,i-1} = P(X_n = i-1 | X_{n-1} = i):
• P_{i,i+1} = p; P_{i,i-1} = 1 - p for i ≠ 0, M
• P_{0,0} = 1; P_{M,M} = 1 (0 and M are called absorbing states)
• P_{i,j} = 0, otherwise
Random walk: A Markov chain whose state space is the integers
0, ±1, ±2, ..., and for which P_{i,i+1} = p = 1 - P_{i,i-1} for all i,
where 0 < p < 1, is said to be a random walk.
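The gambler's-ruin chain above is simple to simulate directly; a sketch (function name ours), useful for sanity-checking that the chain is absorbed at 0 or M:

```python
import random

def gamble(N, M, p, seed=0):
    """Play the gambler's-ruin chain starting from fortune N until
    absorption at 0 or M; return the final fortune."""
    rng = random.Random(seed)
    x = N
    while x not in (0, M):
        # Win $1 with probability p, lose $1 with probability 1 - p.
        x += 1 if rng.random() < p else -1
    return x

# A gambler starting with $3, quitting at $5 or $0, in a fair game:
final = gamble(3, 5, 0.5)
```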
Chapman-Kolmogorov Equations
P_ij^n = P(X_{n+m} = j | X_m = i), n ≥ 0, i, j ≥ 0
P_ij^1 = P_ij
P_ij^{n+m} = Σ_{k=0}^∞ P_ik^n P_kj^m for all n, m ≥ 0 and i, j ≥ 0
(Chapman-Kolmogorov equations)
P_ij^{n+m} = P(X_{n+m} = j | X_0 = i)
= Σ_{k=0}^∞ P(X_{n+m} = j, X_n = k | X_0 = i)
= Σ_{k=0}^∞ P(X_{n+m} = j | X_n = k, X_0 = i) P(X_n = k | X_0 = i)
= Σ_{k=0}^∞ P(X_{n+m} = j | X_n = k) P(X_n = k | X_0 = i)    (Markov property)
= Σ_{k=0}^∞ P_kj^m P_ik^n = Σ_{k=0}^∞ P_ik^n P_kj^m
P^(n): the matrix of n-step transition probabilities P_ij^n

P^(n+m) = P^(n) × P^(m)

(Note: if A = [a_ij] and B = [b_ij], then A × B = [Σ_{k=1}^M a_ik b_kj])
• Example 1: Probability it will rain tomorrow depends only
on whether it rains today or not:
P(rain tomorrow | rain today) = a
P(rain tomorrow | no rain today) = b
What is the probability that it will rain four days from today,
given that it is raining today? Let a = 0.7 and b = 0.4.
State 0 = rain
State 1 = no rain
What is P_00^(4)?
P = [ 0.7  0.3 ]
    [ 0.4  0.6 ]

P^(2) = P × P = [ 0.7  0.3 ] × [ 0.7  0.3 ] = [ 0.61  0.39 ]
                [ 0.4  0.6 ]   [ 0.4  0.6 ]   [ 0.52  0.48 ]

P^(4) = P^(2) × P^(2) = [ 0.61  0.39 ] × [ 0.61  0.39 ] = [ 0.5749  0.4251 ]
                        [ 0.52  0.48 ]   [ 0.52  0.48 ]   [ 0.5668  0.4332 ]

⇒ P_00^(4) = 0.5749
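The same computation can be checked numerically; a plain-Python sketch (no library assumed, helper name ours) that squares P twice, matching the hand calculation:

```python
def matmul(A, B):
    """Multiply two square matrices represented as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[0.7, 0.3],
     [0.4, 0.6]]

P2 = matmul(P, P)    # two-step transition probabilities
P4 = matmul(P2, P2)  # four-step: P^(4) = P^(2) x P^(2)
# P4[0][0] is the probability of rain four days from today given rain today.
```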
Unconditional probabilities
How do we calculate P(X_n = j)?
Let α_i = P(X_0 = i). Then
P(X_n = j) = Σ_i P(X_n = j | X_0 = i) P(X_0 = i)
           = Σ_i α_i P_ij^n,
summing over all states i.
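The formula P(X_n = j) = Σ_i α_i P_ij^n amounts to multiplying the initial distribution, as a row vector, by P a total of n times; a sketch (names ours) using the rain chain with an assumed initial distribution α = (0.5, 0.5):

```python
def matvec_row(alpha, P):
    """One step of the distribution: (alpha P)_j = sum_i alpha_i P_ij."""
    n = len(P)
    return [sum(alpha[i] * P[i][j] for i in range(n)) for j in range(n)]

def distribution_at(alpha, P, n):
    """P(X_n = j) = sum_i alpha_i P_ij^n, via repeated row-vector products."""
    for _ in range(n):
        alpha = matvec_row(alpha, P)
    return alpha

P = [[0.7, 0.3],
     [0.4, 0.6]]
alpha = [0.5, 0.5]   # assumption: rain and no rain equally likely on day 0
dist = distribution_at(alpha, P, 4)  # unconditional distribution on day 4
```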
Classification of States
• State j is accessible from state i if P_ij^n > 0 for some n ≥ 0.
• Two states that are accessible to each other are said to communicate (i ↔ j).
• Any state communicates with itself, since P_ii^0 = 1.
Communicating states
• If state i communicates with state j, then state j communicates
with state i.
• If state i communicates with state j, and state j communicates
with state k, then state i communicates with state k.
Proof
If i communicates with j and j communicates with k,
then there exist some m and n for which P_ij^n > 0 and P_jk^m > 0. Hence
P_ik^{n+m} = Σ_{r=0}^∞ P_ir^n P_rk^m ≥ P_ij^n P_jk^m > 0.
Classification of States (continued)
Two states that communicate are said to belong to the same class.
• Two classes are either identical or disjoint (they share no states).
• A Markov chain is said to be irreducible if it has only one class
(all states communicate with each other).
1/ 2 1/ 2 0 
P  1/ 2 1/ 2 1/ 4 
 0 1/ 3 2 / 3
1/ 2 1/ 2 0 
P  1/ 2 1/ 2 1/ 4 
 0 1/ 3 2 / 3
The Markov chain with transition probability matrix P is
irreducible.
P = [ 1/2  1/2  0    0   ]
    [ 1/2  1/2  0    0   ]
    [ 1/4  1/4  1/4  1/4 ]
    [ 0    0    0    1   ]

The classes of this Markov chain are {0, 1}, {2}, and {3}.
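Communicating classes can be computed from the transition matrix by taking the transitive closure of the one-step accessibility relation; a sketch (names ours) that recovers the classes {0, 1}, {2}, {3}:

```python
from itertools import product

def communicating_classes(P):
    """Group states into communicating classes: i and j share a class
    iff each is accessible from the other (P_ij^n > 0 for some n >= 0)."""
    n = len(P)
    # reach[i][j]: is j accessible from i in zero or more steps?
    reach = [[P[i][j] > 0 or i == j for j in range(n)] for i in range(n)]
    for k, i, j in product(range(n), repeat=3):  # Floyd-Warshall closure
        if reach[i][k] and reach[k][j]:
            reach[i][j] = True
    classes, seen = [], set()
    for i in range(n):
        if i not in seen:
            cls = {j for j in range(n) if reach[i][j] and reach[j][i]}
            classes.append(sorted(cls))
            seen |= cls
    return classes

P = [[1/2, 1/2, 0,   0],
     [1/2, 1/2, 0,   0],
     [1/4, 1/4, 1/4, 1/4],
     [0,   0,   0,   1]]
classes = communicating_classes(P)
```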
Recurrent and transient states
• f_i: probability that, starting in state i, the process will
eventually re-enter state i.
• State i is recurrent if f_i = 1.
• State i is transient if f_i < 1.
• Starting in state i, the probability that the process will be in
state i for exactly n periods is f_i^{n-1}(1 - f_i), n ≥ 1.
State i is recurrent if Σ_{n=1}^∞ P_ii^n = ∞, and transient if Σ_{n=1}^∞ P_ii^n < ∞.
Proof
Let I_n = 1 if X_n = i, and I_n = 0 if X_n ≠ i.
Then Σ_{n=0}^∞ I_n, given X_0 = i, is the number of periods the
process is in state i, given that it starts in i.
E[Σ_{n=0}^∞ I_n | X_0 = i] = Σ_{n=0}^∞ E[I_n | X_0 = i]
= Σ_{n=0}^∞ P(X_n = i | X_0 = i)
= Σ_{n=0}^∞ P_ii^n
• Not all states can be transient.
• If state i is recurrent, and state i communicates with state j,
then state j is recurrent.
Proof
Since i ↔ j, there exist k and m for which P_ij^k > 0 and P_ji^m > 0. For any n,
P_jj^{m+n+k} ≥ P_ji^m P_ii^n P_ij^k.
Therefore
Σ_{n=1}^∞ P_jj^{m+n+k} ≥ Σ_{n=1}^∞ P_ji^m P_ii^n P_ij^k = P_ji^m P_ij^k Σ_{n=1}^∞ P_ii^n = ∞.
• Not all states can be transient.
• If state i is recurrent, and state i communicates with state j,
then state j is recurrent ⇒ recurrence is a class property.
• If state i is transient, and state i communicates with state j,
then state j is transient ⇒ transience is also a class property.
• All states in an irreducible Markov chain are recurrent.
0
1
P
0

0
0 1/ 2 1/ 2 
0 0
0 
1 0
0 

1 0
0 
0
1
P
0

0
0 1/ 2 1/ 2 
0 0
0 
1 0
0 

1 0
0 
All states communicate. Therefore all states are recurrent.
P = [ 1/2  1/2  0    0    0   ]
    [ 1/2  1/2  0    0    0   ]
    [ 0    0    1/2  1/2  0   ]
    [ 0    0    1/2  1/2  0   ]
    [ 1/4  1/4  0    0    1/2 ]

There are three classes: {0, 1}, {2, 3}, and {4}. The first two are
recurrent and the third is transient.
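For a finite chain such as this one, a communicating class is recurrent exactly when it is closed (the chain cannot leave it) — a standard fact for finite Markov chains, not proved in these slides. A sketch (names ours) that classifies the three classes, with the classes supplied by hand:

```python
def recurrent_and_transient(P, classes):
    """In a finite Markov chain, a communicating class is recurrent iff it
    is closed (no transition leads out of it); otherwise it is transient."""
    n = len(P)
    recurrent, transient = [], []
    for cls in classes:
        leaks = any(P[i][j] > 0 for i in cls for j in range(n) if j not in cls)
        (transient if leaks else recurrent).append(cls)
    return recurrent, transient

P = [[1/2, 1/2, 0,   0,   0],
     [1/2, 1/2, 0,   0,   0],
     [0,   0,   1/2, 1/2, 0],
     [0,   0,   1/2, 1/2, 0],
     [1/4, 1/4, 0,   0,   1/2]]
classes = [[0, 1], [2, 3], [4]]   # the communicating classes found above
rec, tra = recurrent_and_transient(P, classes)
# {0,1} and {2,3} are closed, hence recurrent; {4} leaks out, hence transient.
```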