Some univariate distributions


Some standard univariate probability distributions
• Characteristic function, moment generating function, cumulant generating functions
• Discrete distributions
• Continuous distributions
• Some distributions associated with the normal
• References
Characteristic function, moment generating function, cumulant generating functions
The characteristic function is defined as the expectation of e^{itX}:

C(t) = E(e^{itX}) = \int e^{itx} f(x) \, dx

The moment generating function is defined as the expectation of e^{tX}:

M(t) = E(e^{tX}) = \int e^{tx} f(x) \, dx

Moments can be calculated in the following way: obtain the n-th derivative of M(t) and take its value at t = 0:

E(X^n) = \left. \frac{d^n M(t)}{dt^n} \right|_{t=0}
The cumulant generating function is defined as the logarithm of the characteristic function:

c.g.f. = \log(C(t))
Discrete distributions: Binomial
Let us assume that we carry out an experiment whose result can be "success" or "failure". The probability of "success" in one experiment is p; then the probability of failure is q = 1 - p. We carry out the experiment n times. The distribution of the number of successes k is binomial:
p(k) = P(X = k) = \binom{n}{k} p^k (1-p)^{n-k} = \frac{n!}{k!(n-k)!} p^k (1-p)^{n-k}
Characteristic function:

C(t) = (p e^{it} + 1 - p)^n

Moment generating function:

M(t) = (p e^{t} + 1 - p)^n
Example and mean values
As the number of trials increases, the distribution becomes more symmetric and dense.
Calculate the probability of 2 or 3 successes if the probability of success is p = 0.2 and the number of trials is n = 3. Compare it with the case when p = 0.5 and n = 3.
The mean value is np and the variance is npq = np(1-p). If the number of trials is 10 and p = 0.2 then the average number of successes is 2.
[Plots: binomial distributions for p = 0.2 and p = 0.5 with n = 10 and n = 100.]
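A quick numerical check of the exercise above (a sketch assuming scipy is available; binom.pmf(k, n, p) returns P(X = k)):

```python
# Probability of 2 or 3 successes in n = 3 trials, for p = 0.2 and p = 0.5.
from scipy.stats import binom

for p in (0.2, 0.5):
    prob = binom.pmf(2, 3, p) + binom.pmf(3, 3, p)
    print(f"p = {p}: P(X = 2 or 3) = {prob:.4f}")  # 0.1040 and 0.5000

# Mean np and variance np(1-p) for n = 10, p = 0.2
print(binom.mean(10, 0.2), binom.var(10, 0.2))     # 2.0 1.6
```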
Discrete distributions: Poisson
When the number of trials n is large and the probability of success p is small, with np finite and tending to λ as n goes to infinity, the binomial distribution converges to the Poisson distribution:
p(k) = e^{-\lambda} \frac{\lambda^k}{k!}, \quad k = 0, 1, 2, \ldots, \quad \lambda > 0
The Poisson distribution is used to describe events that occur rarely (rare events) in a short time period. It is used in counting statistics to describe the number of registered photons.
The characteristic function is:

C(t) = e^{\lambda(e^{it} - 1)}
What is the moment generating function?
Example
λ = 1, λ = 5 and λ = 10: as λ increases the distribution becomes more and more symmetric. The expected value is λ and the variance is λ; the variance and the mean are equal to each other.
Exercise: Assume that the distribution of the number of accidents is Poisson. If the average number of accidents in one day is 3, what is the probability of three accidents happening in one day? What is the probability of at least three accidents in one day?
[Plots: Poisson distributions for λ = 1, λ = 5 and λ = 10.]
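The accident exercise can be checked directly (a sketch assuming scipy is available):

```python
# Accidents per day are Poisson with mean lambda = 3.
from scipy.stats import poisson

lam = 3
p_exactly_three = poisson.pmf(3, lam)       # e^-3 * 3^3 / 3! ~ 0.2240
p_at_least_three = 1 - poisson.cdf(2, lam)  # 1 - P(X <= 2)   ~ 0.5768
print(p_exactly_three, p_at_least_three)
```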
Discrete distributions: Negative Binomial
Consider an experiment where the probability of "success" is p and the probability of failure is q = 1 - p. We carry out the experiment until the k-th success and want to find the probability of j failures before the k-th success. (This is called sequential sampling: sampling is carried out until a stopping rule, here k successes, is satisfied.) If we have j failures then the number of trials is k + j, and the last trial was a success. The probability of j failures is then:
p(j) = P(X = j) = \binom{k+j-1}{j} p^{k-1} q^j \cdot p = \binom{k+j-1}{j} p^k q^j, \quad j = 0, 1, 2, \ldots
It is called negative binomial because the coefficients have the same form as those of the terms of the negative binomial series: p^{-k} = (1-q)^{-k}.
The characteristic function is:

C(t) = p^k (1 - q e^{it})^{-k}

What is the moment generating function?
Example, mean and variance
As the number of required successes increases, the distribution becomes more and more symmetric. The mean value is kq/p and the variance is kq/p².
Let us say we have an unfair coin with probability 0.2 of throwing a head. We throw the coin until we have 2 heads. What is the probability that we will achieve this in exactly 4 throws? What is the average number of trials before we reach 2 heads?
[Plots: negative binomial distributions for k = 10 and k = 50 with p = 0.2 and p = 0.5; for k = 50, p = 0.2 the x axis runs from 0 to 500.]
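The coin exercise maps onto scipy's parametrisation, which also counts failures before the k-th success (a hedged sketch, assuming scipy is available):

```python
# Unfair coin: P(head) = 0.2; throw until k = 2 heads.
from scipy.stats import nbinom

k, p = 2, 0.2
# Reaching 2 heads in exactly 4 throws means exactly j = 2 failures
# before the 2nd success: C(3, 2) * p^2 * q^2.
print(nbinom.pmf(2, k, p))   # 0.0768
# Mean number of failures is kq/p = 8, i.e. 10 throws on average.
print(nbinom.mean(k, p))     # 8.0
```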
Continuous distributions: uniform
The simplest continuous distribution is the uniform, with density:

f(x) = \begin{cases} \frac{1}{b-a} & \text{if } a \le x \le b \\ 0 & \text{otherwise} \end{cases}
The cumulative distribution function is:

F(x) = \begin{cases} 0 & x < a \\ \frac{x-a}{b-a} & a \le x < b \\ 1 & x \ge b \end{cases}
Moments and other properties are calculated easily.
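For example, the mean and variance follow directly from the definition:

E(X) = \int_a^b \frac{x}{b-a} \, dx = \frac{a+b}{2}, \qquad \mathrm{Var}(X) = \int_a^b \frac{x^2}{b-a} \, dx - \left(\frac{a+b}{2}\right)^2 = \frac{(b-a)^2}{12}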
Continuous distributions: exponential
The density of a random variable with an exponential distribution has the form:

f(t) = \lambda e^{-\lambda t}, \quad 0 \le t < \infty
One of the origins of this distribution is the Poisson-type random process. If the probability distribution of the number of events j(t) occurring during the time interval [0, t) is Poisson with mean value λt, then the time elapsing until the first event occurs has the exponential distribution. Let T_r denote the time elapsed until the r-th event; then

P(j(t) < r) = P(T_r > t)

Putting r = 1 we get P(T_1 > t) = e^{-\lambda t}. Taking into account that P(T_1 > t) = 1 - F_1(t) and taking the derivative with respect to t, we arrive at the exponential density.
The characteristic function is:

c(u) = \left(1 - \frac{iu}{\lambda}\right)^{-1}
Example, mean and variance
As λ becomes larger, the fall-off of the density becomes sharper. The mean value is 1/λ and the variance is 1/λ².
If the average waiting time is 1 minute, then the probability that the first event will happen within 1 minute is:

P(t \le 1) = F(1) = \int_0^1 e^{-x} \, dx = 1 - e^{-1} \approx 0.63
Small exercise: What is the probability that the first event will happen after 2
minutes?
[Plot: exponential density for λ = 1.]
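The small exercise can be verified numerically (a sketch assuming scipy is available; with λ = 1 the answer is e^{-2}):

```python
# With rate lambda = 1 (scipy's default scale = 1/lambda = 1):
from scipy.stats import expon

print(expon.sf(2))   # P(T > 2) = e^-2 ~ 0.1353
print(expon.cdf(1))  # F(1) = 1 - e^-1 ~ 0.6321, as in the worked example
```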
Continuous distributions: Gamma
The Gamma distribution can be considered as a generalisation of the exponential distribution. It has the form:

f_r(t) = \frac{\lambda^r t^{r-1} e^{-\lambda t}}{(r-1)!}, \quad 0 \le t < \infty
It is the distribution of the time t elapsing before exactly r events happen.
The characteristic function of this distribution is:

c(u) = \left(1 - \frac{iu}{\lambda}\right)^{-r}
If there are r independent and identically exponentially distributed random variables, then the distribution of their sum is Gamma.
Sometimes the Gamma distribution is written with 1/λ instead of λ; the implementation in R uses this form. r is called the shape and 1/λ the scale parameter.
Gamma distribution
As the shape parameter increases, the centre of the distribution shifts to the right and it becomes more symmetric. The mean value is r/λ and the variance is r/λ².
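The sum-of-exponentials origin can be checked by simulation (a minimal sketch assuming numpy and scipy, with hypothetical parameters r = 5, λ = 2):

```python
# Sum of r i.i.d. exponential(lambda) variables vs Gamma(shape=r, scale=1/lambda).
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(0)
r, lam = 5, 2.0
sums = rng.exponential(scale=1/lam, size=(100_000, r)).sum(axis=1)

print(sums.mean(), r / lam)          # both ~ 2.5   (mean r/lambda)
print(sums.var(), r / lam**2)        # both ~ 1.25  (variance r/lambda^2)
print(gamma.mean(a=r, scale=1/lam))  # 2.5, scipy's shape/scale form
```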
Continuous distributions: Normal
Perhaps the most popular and widely used continuous distribution is the normal distribution. The main reason for this is that an observed random variable is usually the sum of many random variables. According to the central limit theorem, under some conditions (for example: the random variables are independent and their first, second and third moments exist and are finite), the distribution of the sum of these random variables converges to the normal distribution.
The density of the normal distribution has the form:

f(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \, e^{-\frac{(x-\mu)^2}{2\sigma^2}}

There are many tables for the normal distribution. Its characteristic function is:

c(t) = e^{it\mu - \frac{\sigma^2 t^2}{2}}
Central limit theorem
Let us assume that we have n independent random variables {X_i}, i = 1, ..., n. If the first, second and third moments (this condition can be relaxed) are finite, then the sum of these random variables for sufficiently large n will be approximately normally distributed.
Because of this theorem, in many cases the assumption that observations or errors are normally distributed is sufficiently good, and tests based on this assumption give satisfactory results.
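The theorem is easy to see in a simulation (a sketch assuming numpy; sums of n uniform(0, 1) variables are standardized using their mean n/2 and variance n/12):

```python
# Standardized sums of i.i.d. uniform(0, 1) variables approach N(0, 1).
import numpy as np

rng = np.random.default_rng(1)
n = 30
sums = rng.uniform(size=(100_000, n)).sum(axis=1)
z = (sums - n * 0.5) / np.sqrt(n / 12.0)

print(z.mean(), z.std())          # ~ 0 and ~ 1
print(np.mean(np.abs(z) < 1.96))  # ~ 0.95, matching the standard normal
```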
Exponential family
The exponential family of distributions has the form:

f(x) = e^{(A(\theta) B(x) + C(x))/G(\phi) + D(\theta, \phi)}
Many distributions are special cases of this family. The natural exponential family of distributions is the subclass of this family with B(x) = x:

f(x) = e^{(A(\theta) x + C(x))/G(\phi) + D(\theta, \phi)}

where A(θ) is the natural parameter.
If weuse the fact that distribution should be normalised then characteristic
function of the natural exponential family with natural parameter A() = 
can be derived to be:
1
C ( t )  e(D ( ,  )  D ( A ( A ( )  itG ( )),  ))
Try to derive it. Hint: use the normalisation factor. Find D and then use
expression of characteristic function and D.
This distribution is used for fitting generlised linear models.
Exponential family: Examples
Many well-known distributions belong to this family (all distributions mentioned in this lecture are from the exponential family).
Binomial:

A(p) = \ln\left(\frac{p}{1-p}\right), \quad C(x) = \ln\left(\frac{n!}{x!(n-x)!}\right), \quad D(p, \phi) = n \ln(1-p), \quad G(\phi) = 1

Poisson:

A(\mu) = \ln \mu, \quad C(x) = -\ln(x!), \quad D(\mu, \phi) = -\mu, \quad G(\phi) = 1

Gamma:

A(\lambda) = -\lambda, \quad C(t) = (r-1) \ln t, \quad D(\lambda, \phi) = r \ln \lambda - \ln((r-1)!), \quad G(\phi) = 1

Normal:

A(\mu) = \mu, \quad C(x) = -\frac{x^2}{2}, \quad D(\mu, \phi) = -\frac{\mu^2}{2\sigma^2} - \ln(\sqrt{2\pi}\,\sigma), \quad G(\phi) = \sigma^2
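As a check of the characteristic function formula given earlier, substituting the Poisson entries (A(μ) = ln μ, D(μ, φ) = -μ, G(φ) = 1) should recover the Poisson characteristic function (a sketch assuming sympy is available):

```python
# C(t) = e^{D(theta) - D(A^{-1}(A(theta) + i t G))} for the Poisson case.
import sympy as sp

t = sp.symbols('t', real=True)
mu = sp.symbols('mu', positive=True)

A = sp.log(mu)
A_inv = lambda y: sp.exp(y)  # inverse of A(mu) = ln(mu)
D = lambda m: -m             # D(mu, phi) = -mu, with G(phi) = 1

cf = sp.exp(D(mu) - D(A_inv(A + sp.I * t)))
print(sp.simplify(cf))       # equals exp(mu*(exp(I*t) - 1)), the Poisson CF
```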
Continuous distributions: χ²
Random variables with a normal distribution are called standardized if their mean is 0 and variance is 1. The sum of squares of n standardized, independent normal random variables has a χ² distribution with n degrees of freedom.
The density function is:

f(x) = \frac{1}{2^{n/2} \Gamma(n/2)} \, e^{-x/2} x^{n/2 - 1}, \quad 0 \le x < \infty
If there are p linear restraints on the random variables then the degrees of freedom become n - p.
The characteristic function for this distribution is:

C(t) = (1 - 2it)^{-n/2}
χ² is used widely in statistics for such tests as the goodness of fit of a model to experiment.
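The defining property can be checked by simulation (a sketch assuming numpy and scipy are available):

```python
# Sums of squares of n standard normal variables vs chi-squared(n).
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)
n = 5
samples = (rng.standard_normal((100_000, n)) ** 2).sum(axis=1)

print(samples.mean(), chi2.mean(n))  # both ~ n = 5
print(samples.var(), chi2.var(n))    # both ~ 2n = 10
```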
Continuous distributions: t and F-distributions
Two more distributions are closely related to the normal distribution. We will present them when we discuss samples and sampling distributions. One of them is Student's t-distribution. It is used to test if the mean value of a sample is significantly different from a given value. Another, similar application is testing the difference of the means of two different samples.
Fisher's F-distribution is the distribution of the ratio of the variances of two different samples. It is used to test if their variances are different. One of its important applications is in ANOVA.
References
Johnson, N.L. & Kotz, S. (1969, 1970, 1972) Distributions in Statistics, I: Discrete Distributions; II, III: Continuous Univariate Distributions; IV: Continuous Multivariate Distributions. Houghton Mifflin, New York.
Mardia, K.V. & Jupp, P.E. (2000) Directional Statistics. John Wiley & Sons.
Jaynes, E.T. (2003) Probability Theory: The Logic of Science. Cambridge University Press.