Transcript Lecture 11

11. Conditional Density Functions and
Conditional Expected Values
As we have seen in Section 4, conditional probability density
functions are useful for updating the information about an
event based on knowledge about some other related
event (refer to Example 4.7). In this section, we shall
analyze the situation where the conditioning event is defined
in terms of another random variable that is dependent on the one of interest.
From (4-11), recall that the distribution function of X given
an event B is
P( X ( )  x )  B 
FX ( x | B)  P X ( )  x | B  
.
P( B )
(11-1)
Suppose we let
$$
B = \{y_1 \le Y(\xi) \le y_2\}. \tag{11-2}
$$
Substituting (11-2) into (11-1), we get
$$
F_X(x \mid y_1 \le Y \le y_2) = \frac{P\{X(\xi) \le x,\; y_1 \le Y(\xi) \le y_2\}}{P(y_1 \le Y(\xi) \le y_2)}
= \frac{F_{XY}(x, y_2) - F_{XY}(x, y_1)}{F_Y(y_2) - F_Y(y_1)}, \tag{11-3}
$$
where we have made use of (7-4). But using (3-28) and (7-7)
we can rewrite (11-3) as
$$
F_X(x \mid y_1 \le Y \le y_2) = \frac{\int_{-\infty}^{x}\int_{y_1}^{y_2} f_{XY}(u,v)\,du\,dv}{\int_{y_1}^{y_2} f_Y(v)\,dv}. \tag{11-4}
$$
To determine the limiting case $F_X(x \mid Y = y)$, we can let $y_1 = y$
and $y_2 = y + \Delta y$ in (11-4).
This gives
$$
F_X(x \mid y < Y \le y + \Delta y) = \frac{\int_{-\infty}^{x}\int_{y}^{y+\Delta y} f_{XY}(u,v)\,du\,dv}{\int_{y}^{y+\Delta y} f_Y(v)\,dv} \tag{11-5}
$$
and hence in the limit
$$
F_X(x \mid Y = y) = \lim_{\Delta y \to 0} F_X(x \mid y < Y \le y + \Delta y)
= \frac{\int_{-\infty}^{x} f_{XY}(u,y)\,du\;\Delta y}{f_Y(y)\,\Delta y}
= \frac{\int_{-\infty}^{x} f_{XY}(u,y)\,du}{f_Y(y)}. \tag{11-6}
$$
(To remind us of the conditional nature of the left-hand
side, we shall use the subscript $X \mid Y$ (instead of $X$) there.)
Thus
$$
F_{X|Y}(x \mid Y = y) = \frac{\int_{-\infty}^{x} f_{XY}(u,y)\,du}{f_Y(y)}. \tag{11-7}
$$
Differentiating (11-7) with respect to x using (8-7), we get
$$
f_{X|Y}(x \mid Y = y) = \frac{f_{XY}(x,y)}{f_Y(y)}. \tag{11-8}
$$
It is easy to see that the left side of (11-8) represents a valid
probability density function. In fact
$$
f_{X|Y}(x \mid Y = y) = \frac{f_{XY}(x,y)}{f_Y(y)} \ge 0 \tag{11-9}
$$
and
$$
\int_{-\infty}^{\infty} f_{X|Y}(x \mid Y = y)\,dx = \frac{\int_{-\infty}^{\infty} f_{XY}(x,y)\,dx}{f_Y(y)} = \frac{f_Y(y)}{f_Y(y)} = 1, \tag{11-10}
$$
where we have made use of (7-14). From (11-9) - (11-10),
(11-8) indeed represents a valid p.d.f, and we shall refer to it
as the conditional p.d.f of the r.v X given Y = y. We may also
write
$$
f_{X|Y}(x \mid Y = y) = f_{X|Y}(x \mid y). \tag{11-11}
$$
From (11-8) and (11-11), we have
$$
f_{X|Y}(x \mid y) = \frac{f_{XY}(x,y)}{f_Y(y)}, \tag{11-12}
$$
and similarly
$$
f_{Y|X}(y \mid x) = \frac{f_{XY}(x,y)}{f_X(x)}. \tag{11-13}
$$
If the r.vs $X$ and $Y$ are independent, then $f_{XY}(x,y) = f_X(x)\,f_Y(y)$
and (11-12) - (11-13) reduce to
$$
f_{X|Y}(x \mid y) = f_X(x), \qquad f_{Y|X}(y \mid x) = f_Y(y), \tag{11-14}
$$
implying that the conditional p.d.fs coincide with their
unconditional p.d.fs. This makes sense, since if X and Y are
independent r.vs, information about Y shouldn’t be of any
help in updating our knowledge about X.
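As a quick numerical sanity check of (11-8) and (11-10) (not part of the original lecture), the sketch below assumes a standard bivariate normal joint density with correlation $\rho = 0.6$ and verifies that the resulting conditional density integrates to one and matches the known Gaussian conditional.

```python
# Numerical check of (11-8) and (11-10); the bivariate normal joint is an assumed example.
import numpy as np
from scipy.stats import multivariate_normal, norm

rho = 0.6
joint = multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]])

y = 0.8                                   # condition on Y = y
x = np.linspace(-6, 6, 2001)
dx = x[1] - x[0]

f_xy = joint.pdf(np.column_stack([x, np.full_like(x, y)]))   # f_XY(x, y) along the slice Y = y
f_y = norm.pdf(y)                                            # marginal f_Y(y)
f_x_given_y = f_xy / f_y                                      # conditional p.d.f, as in (11-8)

print((f_x_given_y * dx).sum())                               # ~1, as required by (11-10)
# for this joint density, X given Y = y is N(rho*y, 1 - rho^2):
print(np.allclose(f_x_given_y, norm.pdf(x, rho * y, np.sqrt(1 - rho ** 2))))
```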
In the case of discrete-type r.vs, (11-12) reduces to
P X  xi | Y  y j  
P( X  xi , Y  y j )
P(Y  y j )
.
(11-15)
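In the discrete case, (11-15) amounts to normalizing one column (or row) of the joint probability table. A minimal sketch (the joint table below is a made-up example, not from the lecture):

```python
# Conditional pmf from a joint pmf table, following (11-15).
# The joint table is a hypothetical example for illustration.
import numpy as np

p_xy = np.array([[0.10, 0.20, 0.10],     # rows: values of X
                 [0.05, 0.30, 0.25]])    # columns: values of Y

p_y = p_xy.sum(axis=0)                   # marginal P(Y = y_j)
p_x_given_y = p_xy / p_y                 # each column normalized: P(X = x_i | Y = y_j)

print(p_x_given_y)
print(p_x_given_y.sum(axis=0))           # every column sums to 1
```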
Next we shall illustrate the method of obtaining conditional
p.d.fs through an example.

Example 11.1: Given
$$
f_{XY}(x,y) = \begin{cases} k, & 0 < x < y < 1, \\ 0, & \text{otherwise}, \end{cases} \tag{11-16}
$$
determine $f_{X|Y}(x \mid y)$ and $f_{Y|X}(y \mid x)$.

[Fig. 11.1: the triangular region $0 < x < y < 1$ in the $x$-$y$ plane where $f_{XY}$ is nonzero.]
Solution: The joint p.d.f is given to be a constant in the
shaded region. This gives
$$
\int\!\!\int f_{XY}(x,y)\,dx\,dy = \int_{0}^{1}\!\int_{0}^{y} k\,dx\,dy = \int_{0}^{1} k\,y\,dy = \frac{k}{2} = 1 \;\Rightarrow\; k = 2.
$$
Similarly
$$
f_X(x) = \int f_{XY}(x,y)\,dy = \int_{x}^{1} k\,dy = k(1-x), \qquad 0 < x < 1, \tag{11-17}
$$
and
$$
f_Y(y) = \int f_{XY}(x,y)\,dx = \int_{0}^{y} k\,dx = k\,y, \qquad 0 < y < 1. \tag{11-18}
$$
From (11-16) - (11-18), we get
$$
f_{X|Y}(x \mid y) = \frac{f_{XY}(x,y)}{f_Y(y)} = \frac{1}{y}, \qquad 0 < x < y < 1, \tag{11-19}
$$
and
$$
f_{Y|X}(y \mid x) = \frac{f_{XY}(x,y)}{f_X(x)} = \frac{1}{1-x}, \qquad 0 < x < y < 1. \tag{11-20}
$$
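According to (11-19), given $Y = y$ the r.v $X$ is uniform on $(0, y)$. A short Monte Carlo sketch (not part of the lecture) that samples the joint density (11-16) by rejection and checks this:

```python
# Monte Carlo check of (11-19): given Y near y0, X should be uniform on (0, y0).
# Rejection sampling of the joint density in (11-16) with k = 2.
import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(size=(2_000_000, 2))
x, y = u[:, 0], u[:, 1]
keep = x < y                               # region 0 < x < y < 1
x, y = x[keep], y[keep]

y0, dy = 0.5, 0.01                         # condition on Y near y0
sel = np.abs(y - y0) < dy
print(np.histogram(x[sel], bins=5, range=(0, y0), density=True)[0])
# all bins should be close to 1/y0 = 2, i.e. uniform on (0, y0)
```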
We can use (11-12) - (11-13) to derive an important result.
From there, we also have
f XY ( x, y )  f X |Y ( x | y ) fY ( y )  fY | X ( y | x ) f X ( x )
(11-21)
or
$$
f_{Y|X}(y \mid x) = \frac{f_{X|Y}(x \mid y)\, f_Y(y)}{f_X(x)}. \tag{11-22}
$$
But
$$
f_X(x) = \int_{-\infty}^{\infty} f_{XY}(x,y)\,dy = \int_{-\infty}^{\infty} f_{X|Y}(x \mid y)\, f_Y(y)\,dy \tag{11-23}
$$
and using (11-23) in (11-22), we get
$$
f_{Y|X}(y \mid x) = \frac{f_{X|Y}(x \mid y)\, f_Y(y)}{\int_{-\infty}^{\infty} f_{X|Y}(x \mid y)\, f_Y(y)\,dy}. \tag{11-24}
$$
Equation (11-24) represents the p.d.f version of Bayes’
theorem. To appreciate the full significance of (11-24), one
needs to look at communication problems where
observations can be used to update our knowledge about
unknown parameters. We shall illustrate this using a simple
example.
Example 11.2: An unknown random phase $\theta$ is uniformly
distributed in the interval $(0, 2\pi)$, and $r = \theta + n$, where
$n \sim N(0, \sigma^2)$. Determine $f(\theta \mid r)$.
Solution: Initially almost nothing about the r.v $\theta$ is known,
so that we assume its a-priori p.d.f to be uniform in the
interval $(0, 2\pi)$.
In the equation $r = \theta + n$, we can think of $n$ as the noise
contribution and $r$ as the observation. It is reasonable to
assume that $\theta$ and $n$ are independent. In that case
$$
f(r \mid \theta = \varphi) \sim N(\varphi, \sigma^2) \tag{11-25}
$$
since, given that $\theta = \varphi$ is a constant, $r = \varphi + n$ behaves
like $n$. Using (11-24), this gives the a-posteriori p.d.f of $\theta$
given $r$ to be (see Fig. 11.2(b))
f ( | r ) 

2
0
f ( r |  ) f ( )
f ( r |  ) f ( )d
  ( r )e
 (  r ) 2 / 2 2
where
 (r) 

,
e  ( r  )
1
2

0
e
( r  ) 2 / 2 2
0
e
/ 2 2
 ( r  ) 2 / 2 2
0    2 ,
2
2

2
2
d
d
(11-26)
.
Notice that the knowledge about the observation $r$ is
reflected in the a-posteriori p.d.f of $\theta$ in Fig. 11.2(b). It is
no longer flat like the a-priori p.d.f in Fig. 11.2(a), and it
shows higher probabilities in the neighborhood of $\theta = r$.
f |r ( | r )
f ( )
1
2
2

 r

(b) a-posteriori p.d.f of 
(a) a-priori p.d.f of 
Fig. 11.2
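A small numerical sketch of (11-26) (not from the lecture; the values of $r$ and $\sigma$ below are arbitrary choices) that evaluates the a-posteriori density on a grid and confirms it integrates to one and peaks at $\theta = r$:

```python
# Grid evaluation of the a-posteriori density (11-26); r and sigma are assumed values.
import numpy as np

r, sigma = 2.0, 0.5
theta = np.linspace(0, 2 * np.pi, 4001)
dtheta = theta[1] - theta[0]

prior = np.full_like(theta, 1 / (2 * np.pi))               # uniform a-priori p.d.f
likelihood = np.exp(-(r - theta) ** 2 / (2 * sigma ** 2))  # f(r | theta), up to a constant
posterior = likelihood * prior
posterior /= posterior.sum() * dtheta                       # normalize as in (11-24)

print(posterior.sum() * dtheta)            # ~1
print(theta[np.argmax(posterior)])         # ~r = 2.0
```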
Conditional Mean:
We can use the conditional p.d.fs to define the conditional
mean. More generally, applying (6-13) to conditional p.d.fs
we get
E  g( X ) | B   


g ( x) f X ( x | B)dx.
(11-27)
and using a limiting argument as in (11-2) - (11-8), we get
$$
\mu_{X|Y} = E\{X \mid Y = y\} = \int_{-\infty}^{\infty} x\, f_{X|Y}(x \mid y)\,dx \tag{11-28}
$$
to be the conditional mean of $X$ given $Y = y$. Notice
that $E(X \mid Y = y)$ will be a function of $y$. Also
$$
\mu_{Y|X} = E\{Y \mid X = x\} = \int_{-\infty}^{\infty} y\, f_{Y|X}(y \mid x)\,dy. \tag{11-29}
$$
In a similar manner, the conditional variance of $X$ given $Y = y$ is given by
$$
\operatorname{Var}(X \mid Y) = \sigma_{X|Y}^2 = E\{X^2 \mid Y = y\} - \left[E(X \mid Y = y)\right]^2
= E\{(X - \mu_{X|Y})^2 \mid Y = y\}. \tag{11-30}
$$
We shall illustrate these calculations through an example.
Example 11.3: Let
$$
f_{XY}(x,y) = \begin{cases} 1, & 0 < |y| < x < 1, \\ 0, & \text{otherwise}. \end{cases} \tag{11-31}
$$
Determine $E(X \mid Y)$ and $E(Y \mid X)$.

Solution: As Fig. 11.3 shows, $f_{XY}(x,y) = 1$ in the shaded area, and zero elsewhere.

[Fig. 11.3: the triangular region $0 < |y| < x < 1$ in the $x$-$y$ plane where $f_{XY} = 1$.]

From there
$$
f_X(x) = \int_{-x}^{x} f_{XY}(x,y)\,dy = 2x, \qquad 0 < x < 1,
$$
and
$$
f_Y(y) = \int_{|y|}^{1} 1\,dx = 1 - |y|, \qquad |y| < 1.
$$
This gives
$$
f_{X|Y}(x \mid y) = \frac{f_{XY}(x,y)}{f_Y(y)} = \frac{1}{1 - |y|}, \qquad 0 < |y| < x < 1, \tag{11-32}
$$
and
$$
f_{Y|X}(y \mid x) = \frac{f_{XY}(x,y)}{f_X(x)} = \frac{1}{2x}, \qquad 0 < |y| < x < 1. \tag{11-33}
$$
Hence
$$
E(X \mid Y) = \int x\, f_{X|Y}(x \mid y)\,dx = \int_{|y|}^{1} \frac{x}{1-|y|}\,dx
= \frac{1}{1-|y|}\left.\frac{x^2}{2}\right|_{|y|}^{1}
= \frac{1-|y|^2}{2(1-|y|)} = \frac{1+|y|}{2}, \qquad |y| < 1, \tag{11-34}
$$
and
$$
E(Y \mid X) = \int y\, f_{Y|X}(y \mid x)\,dy = \int_{-x}^{x} \frac{y}{2x}\,dy
= \frac{1}{2x}\left.\frac{y^2}{2}\right|_{-x}^{x} = 0, \qquad 0 < x < 1. \tag{11-35}
$$
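A quick Monte Carlo sketch (not part of the lecture) that samples the density (11-31) by rejection and checks the conditional means (11-34) and (11-35):

```python
# Monte Carlo check of (11-34)-(11-35) for the density in (11-31).
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, size=4_000_000)
y = rng.uniform(-1, 1, size=4_000_000)
keep = np.abs(y) < x                       # region 0 < |y| < x < 1, where f_XY = 1
x, y = x[keep], y[keep]

y0, dy = 0.4, 0.01                         # condition on Y near y0
sel = np.abs(y - y0) < dy
print(x[sel].mean(), (1 + abs(y0)) / 2)    # E(X | Y = y0) ~ (1 + |y0|)/2 = 0.7

x0, dx = 0.6, 0.01                         # condition on X near x0
sel = np.abs(x - x0) < dx
print(y[sel].mean())                       # E(Y | X = x0) ~ 0
```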
It is possible to obtain an interesting generalization of the
conditional mean formulas in (11-28) - (11-29). More
generally, (11-28) gives
$$
E\{g(X) \mid Y = y\} = \int_{-\infty}^{\infty} g(x)\, f_{X|Y}(x \mid y)\,dx. \tag{11-36}
$$
But
E  g( X ) 




g ( x ) f X ( x )dx 




 



g ( x)
g ( x ) f XY ( x, y )dxdy 


f XY ( x, y )dydx




 
g ( x ) f X |Y ( x | y )dx fY ( y )dy
E  g ( X )|Y  y 




E  g ( X ) | Y  y  fY ( y )dy  E E  g ( X ) | Y  y  .
13
(11-37) PILLAI
Obviously, in the right side of (11-37), the inner
expectation is with respect to X and the outer expectation is
with respect to Y. Letting g( X ) = X in (11-37) we get the
interesting identity
E ( X )  E E ( X | Y  y ),
(11-38)
where the inner expectation on the right side is with respect
to X and the outer one is with respect to Y. Similarly, we
have
E(Y )  E E(Y | X  x).
(11-39)
Using (11-37) and (11-30), we also obtain the total-variance identity
$$
\operatorname{Var}(X) = E\{\operatorname{Var}(X \mid Y)\} + \operatorname{Var}\{E(X \mid Y)\}. \tag{11-40}
$$
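A short simulation sketch illustrating (11-38) and (11-40); the joint model used here ($Y$ uniform, $X = Y$ plus Gaussian noise) is an assumed example, not from the lecture:

```python
# Monte Carlo illustration of E(X) = E[E(X|Y)] and the total-variance identity (11-40).
# The joint model (Y uniform, X = Y + Gaussian noise) is an assumed example.
import numpy as np

rng = np.random.default_rng(2)
y = rng.uniform(0, 1, size=1_000_000)
x = y + rng.normal(0, 0.3, size=y.size)

# group samples by small bins of Y to approximate conditioning on Y = y
bins = np.digitize(y, np.linspace(0, 1, 101))
cond_mean = np.array([x[bins == b].mean() for b in range(1, 101)])
cond_var = np.array([x[bins == b].var() for b in range(1, 101)])

print(x.mean(), cond_mean.mean())                    # E(X) vs E[E(X|Y)]
print(x.var(), cond_var.mean() + cond_mean.var())    # Var(X) vs E[Var(X|Y)] + Var[E(X|Y)]
```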
The conditional mean turns out to be an important concept in
estimation and prediction theory. For example, given an
observation of a r.v X, what can we say about a related
r.v Y? In other words, what is the best predicted value of Y
given that X = x? It turns out that if “best” is meant in the
sense of minimizing the mean square error between Y and
its estimate Yˆ , then the conditional mean of Y given X = x,
i.e., E (Y | X  x ) is the best estimate for Y (see Lecture 16
for more on Mean Square Estimation).
We conclude this lecture with yet another application
of the conditional density formulation.
Example 11.4: Poisson sum of Bernoulli random variables.
Let $X_i$, $i = 1, 2, 3, \ldots$, represent independent, identically
distributed Bernoulli random variables with
$$
P(X_i = 1) = p, \qquad P(X_i = 0) = 1 - p = q,
$$
and $N$ a Poisson random variable with parameter $\lambda$ that is
independent of all $X_i$. Consider the random variables
$$
Y = \sum_{i=1}^{N} X_i, \qquad Z = N - Y. \tag{11-41}
$$
Show that Y and Z are independent Poisson random variables.
Solution : To determine the joint probability mass function
of Y and Z, consider
P(Y  m, Z  n)  P(Y  m, N  Y  n)  P(Y  m, N  m  n)
 P(Y  m N  m  n) P( N  m  n)
N
 P (  X i  m N  m  n) P ( N  m  n)
i 1
m n
 P (  X i  m) P ( N  m  n )
i 1
(11-42)
m n
( Note that
 X i ~ B(m  n,
i 1
p) and X i s are independen t of N )
(m  n )! m n      m  n 


p q  e
(m  n )! 
 m!n !
 
  p ( p ) m   q (q ) n 
 e
 e

m!  
n! 

 P(Y  m) P( Z  n).
(11-43)
Thus
$$
Y \sim P(\lambda p) \quad\text{and}\quad Z \sim P(\lambda q) \tag{11-44}
$$
and Y and Z are independent random variables.
Thus if a bird lays eggs that follow a Poisson random
variable with parameter $\lambda$, and if each egg survives
with probability $p$, then the number of chicks that survive
also forms a Poisson random variable with parameter $p\lambda$.
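A small simulation sketch confirming Example 11.4 numerically; the values of $\lambda$ and $p$ below are arbitrary choices:

```python
# Simulation check of Example 11.4: Poisson sum of Bernoulli r.vs.
# lam and p are assumed values for illustration.
import numpy as np

rng = np.random.default_rng(3)
lam, p, trials = 6.0, 0.3, 1_000_000

n = rng.poisson(lam, size=trials)            # N ~ Poisson(lam)
y = rng.binomial(n, p)                       # Y = sum of N Bernoulli(p) trials
z = n - y                                    # Z = N - Y

print(y.mean(), y.var())                     # both ~ lam * p = 1.8  (Poisson behavior)
print(z.mean(), z.var())                     # both ~ lam * q = 4.2
print(np.corrcoef(y, z)[0, 1])               # ~ 0, consistent with independence
```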