3.6 Conditional Distribution, Conditional Expectation and Regression


Probability Theory
Department of Mathematics, Central South University
Probability and Statistics Course Group

§3.6 Conditional Distribution, Conditional Expectation and Regression
1. The conditional distribution of continuous random variables
2. The conditional expectation of continuous random variables
3. Regression
1. The conditional distribution of continuous random variables
In the previous chapter, for discrete random variables, we studied the conditional distribution of ξ under the condition η = y_j, written P(ξ = x_i | η = y_j). Continuous random variables also have conditional distributions.

For a two-dimensional continuous random variable (ξ, η), however, P{η = y} = 0 for every y, so the expression

$$P\{\xi\le x\mid \eta=y\}=\frac{P\{\xi\le x,\ \eta=y\}}{P\{\eta=y\}}$$

is meaningless as it stands. We therefore introduce the conditional distribution function by a limit.
Definition. Given y, suppose that for every sufficiently small ε > 0 the probability P{y − ε < η ≤ y + ε} > 0. If for every real number x the limit

$$\lim_{\varepsilon\to 0^+}P\{\xi\le x\mid y-\varepsilon<\eta\le y+\varepsilon\}
=\lim_{\varepsilon\to 0^+}\frac{P\{\xi\le x,\ y-\varepsilon<\eta\le y+\varepsilon\}}{P\{y-\varepsilon<\eta\le y+\varepsilon\}}$$

exists, the limit is called the conditional distribution function of ξ under the condition η = y, written P{ξ ≤ x | η = y} or $F_{\xi|\eta}(x\mid y)$.
In fact,

$$\frac{P\{\xi\le x,\ y-\varepsilon<\eta\le y+\varepsilon\}}{P\{y-\varepsilon<\eta\le y+\varepsilon\}}
=\frac{F(x,\,y+\varepsilon)-F(x,\,y-\varepsilon)}{F_\eta(y+\varepsilon)-F_\eta(y-\varepsilon)},$$

and dividing numerator and denominator by $2\varepsilon$ gives

$$\lim_{\varepsilon\to 0^+}\frac{[F(x,\,y+\varepsilon)-F(x,\,y-\varepsilon)]/2\varepsilon}{[F_\eta(y+\varepsilon)-F_\eta(y-\varepsilon)]/2\varepsilon}
=\frac{\partial F(x,y)/\partial y}{\mathrm dF_\eta(y)/\mathrm dy}
=\frac{\int_{-\infty}^{x}f(u,y)\,\mathrm du}{f_\eta(y)}.$$

Therefore

$$F_{\xi|\eta}(x\mid y)=\int_{-\infty}^{x}\frac{f(u,y)}{f_\eta(y)}\,\mathrm du$$

is called the conditional distribution function of ξ under the condition η = y, and

$$f_{\xi|\eta}(x\mid y)=\frac{f(x,y)}{f_\eta(y)}$$

is called the conditional density function of ξ under the condition η = y.

Similarly,

$$f_{\eta|\xi}(y\mid x)=\frac{f(x,y)}{f_\xi(x)}$$

is called the conditional density function of η under the condition ξ = x.
Properties of the conditional probability density function

Property 1. For every x, $f_{\xi|\eta}(x\mid y)\ge 0$.

Property 2. $\displaystyle\int_{-\infty}^{\infty}f_{\xi|\eta}(x\mid y)\,\mathrm dx=1$.

The conditional density $f_{\eta|\xi}(y\mid x)$ satisfies the same properties.
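These two properties can be checked numerically; below is a minimal sketch in Python using a hypothetical joint density f(x, y) = x + y on the unit square (an assumption chosen for illustration, not a density from the lecture):

```python
# Numerical check of Properties 1 and 2 for a conditional density.
# Hypothetical joint density f(x, y) = x + y on the unit square.

def f(x, y):
    return x + y if 0 <= x <= 1 and 0 <= y <= 1 else 0.0

def f_eta(y):
    # marginal of eta: integral of (x + y) over x in [0, 1] = 1/2 + y
    return 0.5 + y

def f_cond(x, y):
    # conditional density of xi given eta = y
    return f(x, y) / f_eta(y)

n = 100_000
y = 0.3
dx = 1.0 / n
# Property 2: the conditional density integrates to 1 over x (midpoint rule)
total = sum(f_cond((i + 0.5) * dx, y) * dx for i in range(n))
print(round(total, 6))  # approximately 1.0
```

The normalization holds for any y with f_η(y) > 0, since dividing f(x, y) by its integral over x is exactly what makes the conditional density integrate to 1.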
Example 24
Suppose the probability density function of the random variable (ξ, η) is

$$f(x,y)=\begin{cases}1, & |y|<x,\ 0<x<1,\\ 0, & \text{others}.\end{cases}$$

Find:
(1) $f_\xi(x)$, $f_\eta(y)$;
(2) $f_{\xi|\eta}(x\mid y)$, $f_{\eta|\xi}(y\mid x)$;
(3) $P\{\xi\ge \tfrac12\mid \eta\le 0\}$.

Solution. (1)

$$f_\xi(x)=\int_{-\infty}^{\infty}f(x,y)\,\mathrm dy
=\begin{cases}\int_{-x}^{x}\mathrm dy=2x, & 0<x<1,\\ 0, & \text{others}.\end{cases}$$

(The density equals 1 on the triangle bounded by the lines y = x, y = −x and x = 1.)

$$f_\eta(y)=\int_{-\infty}^{\infty}f(x,y)\,\mathrm dx
=\begin{cases}\int_{y}^{1}\mathrm dx=1-y, & 0\le y<1,\\[2pt]
\int_{-y}^{1}\mathrm dx=1+y, & -1<y<0,\\[2pt]
0, & \text{others},\end{cases}$$

that is, $f_\eta(y)=1-|y|$ for $|y|<1$ and 0 otherwise.

(2)
When |y| < 1,

$$f_{\xi|\eta}(x\mid y)=\frac{f(x,y)}{f_\eta(y)}
=\begin{cases}\dfrac{1}{1-|y|}, & |y|<x<1,\\[4pt] 0, & \text{others}.\end{cases}$$

When 0 < x < 1, using $f_\xi(x)=2x$,

$$f_{\eta|\xi}(y\mid x)=\frac{f(x,y)}{f_\xi(x)}
=\begin{cases}\dfrac{1}{2x}, & -x<y<x,\\[4pt] 0, & \text{others}.\end{cases}$$

(3)

$$P\left\{\xi\ge\tfrac12 \,\Big|\, \eta\le 0\right\}
=\frac{P\{\xi\ge\frac12,\ \eta\le 0\}}{P\{\eta\le 0\}}
=\frac{\frac12\left(\frac12+1\right)\cdot\frac12}{\frac12}
=\frac{3/8}{1/2}=\frac34.$$

The numerator is the area of the trapezoid $\{(x,y):\tfrac12\le x<1,\ -x<y\le 0\}$, on which the density equals 1; the denominator is $P\{\eta\le 0\}=\tfrac12$ by the symmetry of the density in y.
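The answer to part (3) can be checked by simulation; the sketch below (assuming nothing beyond the density given in Example 24) samples the triangle by rejection and estimates the conditional probability:

```python
import random

random.seed(0)

# Monte Carlo check of Example 24(3): P{xi >= 1/2 | eta <= 0} = 3/4.
# The density is 1 on the triangle |y| < x < 1, so (xi, eta) is uniform
# on that triangle; we sample it by rejection from the enclosing rectangle.

def sample():
    while True:
        x = random.uniform(0.0, 1.0)
        y = random.uniform(-1.0, 1.0)
        if abs(y) < x:
            return x, y

n = 200_000
hits_cond = 0   # samples with eta <= 0
hits_both = 0   # samples with xi >= 1/2 and eta <= 0
for _ in range(n):
    x, y = sample()
    if y <= 0:
        hits_cond += 1
        if x >= 0.5:
            hits_both += 1

p = hits_both / hits_cond
print(round(p, 2))  # close to 0.75
```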
Example 25
Suppose the two-dimensional random variable (ξ, η) follows a normal distribution, that is,

(ξ, η) ~ N(μ₁, μ₂, σ₁², σ₂², r).

Then the joint probability density function of (ξ, η) is

$$f(x,y)=\frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-r^2}}
\exp\left\{-\frac{1}{2(1-r^2)}\left[\frac{(x-\mu_1)^2}{\sigma_1^2}
-\frac{2r(x-\mu_1)(y-\mu_2)}{\sigma_1\sigma_2}
+\frac{(y-\mu_2)^2}{\sigma_2^2}\right]\right\}.$$
The marginal density function of the random variable η is

$$f_\eta(y)=\frac{1}{\sqrt{2\pi}\,\sigma_2}\,
e^{-\frac{(y-\mu_2)^2}{2\sigma_2^2}},\qquad -\infty<y<\infty.$$
Then, for any given y, $f_\eta(y)>0$, and

$$f_{\xi|\eta}(x\mid y)=\frac{f(x,y)}{f_\eta(y)}
=\frac{1}{\sqrt{2\pi}\,\sigma_1\sqrt{1-r^2}}
\exp\left\{-\frac{1}{2\sigma_1^2(1-r^2)}
\left[x-\mu_1-r\frac{\sigma_1}{\sigma_2}(y-\mu_2)\right]^2\right\},
\qquad -\infty<x<\infty.$$
Conclusion:
Each conditional distribution of a two-dimensional normal random variable (ξ, η) is a one-dimensional normal distribution:

$$(\xi\mid\eta=y)\sim N\!\left(\mu_1+r\frac{\sigma_1}{\sigma_2}(y-\mu_2),\ \sigma_1^2(1-r^2)\right).$$
2. The conditional expectation of continuous random variables

Definition. If the conditional probability density function of the random variable ξ under the condition η = y is

$$f_{\xi|\eta}(x\mid y)=\frac{f(x,y)}{f_\eta(y)},$$

and

$$\int_{-\infty}^{\infty}|x|\,f_{\xi|\eta}(x\mid y)\,\mathrm dx<+\infty,$$

then $\int_{-\infty}^{\infty}x\,f_{\xi|\eta}(x\mid y)\,\mathrm dx$ is called the conditional expectation of the random variable ξ under the condition η = y, denoted by

$$E\{\xi\mid\eta=y\}=\int_{-\infty}^{\infty}x\,f_{\xi|\eta}(x\mid y)\,\mathrm dx.$$
Properties of conditional expectation

Property 1. If a ≤ ξ ≤ b, then $E\{\xi\mid\eta=y\}$ exists and satisfies

$$a\le E\{\xi\mid\eta=y\}\le b.$$

In particular, for a constant c, $E\{c\mid\eta=y\}=c$.
Property 2. If, for real numbers k₁, k₂, the conditional expectations $E\{\xi_1\mid\eta=y\}$ and $E\{\xi_2\mid\eta=y\}$ exist, then

$$E\{k_1\xi_1+k_2\xi_2\mid\eta=y\}=k_1E\{\xi_1\mid\eta=y\}+k_2E\{\xi_2\mid\eta=y\}.$$

Property 2 extends to any finite sum:

$$E\left\{\sum_{i=1}^{n}a_i\xi_i\ \Big|\ \eta=y\right\}=\sum_{i=1}^{n}a_iE\{\xi_i\mid\eta=y\}.$$
Proof of Property 2. Let $f_{\xi_1\xi_2|\eta}(x_1,x_2\mid y)$ denote the joint conditional density of (ξ₁, ξ₂) given η = y. Then

$$E\{k_1\xi_1+k_2\xi_2\mid\eta=y\}
=\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty}(k_1x_1+k_2x_2)\,f_{\xi_1\xi_2|\eta}(x_1,x_2\mid y)\,\mathrm dx_1\mathrm dx_2$$
$$=k_1\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty}x_1\,f_{\xi_1\xi_2|\eta}(x_1,x_2\mid y)\,\mathrm dx_1\mathrm dx_2
+k_2\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty}x_2\,f_{\xi_1\xi_2|\eta}(x_1,x_2\mid y)\,\mathrm dx_1\mathrm dx_2$$
$$=k_1\int_{-\infty}^{\infty}x_1\,f_{\xi_1|\eta}(x_1\mid y)\,\mathrm dx_1
+k_2\int_{-\infty}^{\infty}x_2\,f_{\xi_2|\eta}(x_2\mid y)\,\mathrm dx_2$$
$$=k_1E\{\xi_1\mid\eta=y\}+k_2E\{\xi_2\mid\eta=y\},$$

where integrating out $x_2$ (respectively $x_1$) yields the one-dimensional conditional densities.
Property 3. $E\{E\{\xi\mid\eta\}\}=E\xi$.

Indeed,

$$E\{E\{\xi\mid\eta\}\}=\int_{-\infty}^{\infty}E\{\xi\mid\eta=y\}\,f_\eta(y)\,\mathrm dy
=\int_{-\infty}^{\infty}\left(\int_{-\infty}^{\infty}x\,\frac{f(x,y)}{f_\eta(y)}\,\mathrm dx\right)f_\eta(y)\,\mathrm dy$$
$$=\int_{-\infty}^{\infty}x\left(\int_{-\infty}^{\infty}f(x,y)\,\mathrm dy\right)\mathrm dx
=\int_{-\infty}^{\infty}x\,f_\xi(x)\,\mathrm dx=E\xi.$$
3. Regression

It is generally believed that a person's footprint ξ and height η can be modeled as a two-dimensional normal random variable (ξ, η). We then obtain the following estimate of the footprint from the height:

$$E(\xi\mid\eta=y)=a_1+r\frac{\sigma_1}{\sigma_2}(y-a_2).$$

If the points

$$\left(a_1+r\frac{\sigma_1}{\sigma_2}(y-a_2),\ y\right)$$

are plotted in a plane Cartesian coordinate system, they form a straight line. This shows that the footprint ξ depends on the height η to some extent; the line is called the regression line.
In general, when the points $(E(\xi\mid\eta=y),\,y)$ or $(x,\,E(\eta\mid\xi=x))$ are plotted in the coordinate system, we get two curves, known as the regression curves, or simply the regression.
We have already pointed out that $E(\xi\mid\eta=y)$ is known as the estimated value, or forecast value, of ξ under the condition η = y. This is intuitively reasonable, but is it really? And what good properties does this estimate, or prediction, have? These are the questions we study next.
We already know that $E(\xi\mid\eta)$ is a function of η. Now suppose some other function g(η) of η serves as the estimate or prediction; we would like the error |ξ − g(η)| to be as small as possible. But the error is itself a random variable, so it is natural to minimize its mean:

$$E[\,|\xi-g(\eta)|\,]=\min.$$

The absolute value is inconvenient to work with, however, so the error is measured by the mean squared error instead:

$$E[\xi-g(\eta)]^2=\min.$$
If the joint density function of (ξ, η) is f(x, y), then

$$E[\xi-g(\eta)]^2=\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty}[x-g(y)]^2 f(x,y)\,\mathrm dx\,\mathrm dy
=\int_{-\infty}^{\infty}f_\eta(y)\left(\int_{-\infty}^{\infty}[x-g(y)]^2 f_{\xi|\eta}(x\mid y)\,\mathrm dx\right)\mathrm dy.$$

By the property of variance ($E(\xi-c)^2$ is smallest when $c=E\xi$), for each y the inner integral is minimized by $g(y)=E(\xi\mid\eta=y)$, where it equals the conditional variance of ξ.

To summarize: $E(\xi\mid\eta=y)$ can be used as the estimate or prediction of ξ under the condition η = y, and among all functions g it gives the smallest mean squared error $E[\xi-g(\eta)]^2$.
The second type of regression

Now we know that $E(\xi\mid\eta)$, as an estimate or prediction of ξ, has the great merit of the smallest mean squared error. However, in some situations it is difficult to obtain the density function f(x, y), or $E(\xi\mid\eta)$ is too complex to compute, so other methods have to be developed to estimate or predict ξ. Here we introduce the second type of regression.

Suppose g(η) = aη + b is the best estimate or prediction in the mean-square sense; we must identify the parameters a and b that minimize

$$\Delta(a,b)=E[\xi-(a\eta+b)]^2.$$
Compute the partial derivatives of Δ(a, b) with respect to a and b and set them to zero:

$$\frac{\partial\Delta(a,b)}{\partial a}=-2E[(\xi-(a\eta+b))\eta]=0,$$
$$\frac{\partial\Delta(a,b)}{\partial b}=-2E[\xi-(a\eta+b)]=0.$$
The equations reduce to

$$aE\eta^2+bE\eta=E\xi\eta,$$
$$aE\eta+b=E\xi.$$
Solving the equations gives

$$a=\frac{\operatorname{Cov}(\xi,\eta)}{\sigma_2^2}=r\frac{\sigma_1}{\sigma_2},\qquad
b=E\xi-aE\eta=E\xi-r\frac{\sigma_1}{\sigma_2}E\eta.$$

This method is usually called linear regression, or the second type of regression.
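The formulas $a=\operatorname{Cov}(\xi,\eta)/\sigma_2^2$ and $b=E\xi-aE\eta$ can be checked on simulated data; a minimal sketch with an arbitrarily chosen true relation ξ = 3η + 1 + noise (the parameters are assumptions for illustration):

```python
import random

random.seed(3)

# Check of the second-type (linear) regression formulas
# a = Cov(xi, eta) / Var(eta),  b = E xi - a * E eta,
# by estimating the moments from simulated correlated data.
n = 100_000
data = []
for _ in range(n):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    eta = 2 * z1 + 5            # E eta = 5, Var eta = 4
    xi = 3 * eta + 1 + z2       # true line: xi = 3*eta + 1 + noise
    data.append((xi, eta))

e_xi = sum(x for x, y in data) / n
e_eta = sum(y for x, y in data) / n
cov = sum((x - e_xi) * (y - e_eta) for x, y in data) / n
var_eta = sum((y - e_eta) ** 2 for x, y in data) / n

a = cov / var_eta               # slope of the regression line
b = e_xi - a * e_eta            # intercept of the regression line
print(round(a, 1), round(b, 1))  # close to 3.0 and 1.0
```

The same a and b would come out of any least-squares routine; computing them directly from the moments mirrors the derivation above.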
Take a short break, then continue.