Transcript Document
Section 10.6
Recall from calculus:
$$\lim_{x\to\infty}\left(1+\frac{1}{x}\right)^{x} = e$$

$$\lim_{x\to\infty}\left(1+\frac{1}{x}\right)^{kx} = e^{k}$$

$$\lim_{y\to\infty}\left(1+\frac{k}{y}\right)^{y} = e^{k} \qquad \text{(Let } y = kx \text{ in the previous limit.)}$$
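As a quick numerical sanity check (a sketch only; the value k = 3 is an arbitrary illustration), the following Python evaluates (1 + k/y)^y for increasing y and compares it with e^k:

    import math

    k = 3.0  # illustrative constant
    for y in (10, 1_000, 100_000, 10_000_000):
        print(f"y = {y:>10,}: (1 + k/y)^y = {(1 + k / y) ** y:.6f}")
    print(f"e^k          = {math.exp(k):.6f}")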
If derivatives of f(x) up to order k are all continuous on an interval
about 0 (zero), then for all x on this interval, we have
$$f(x) = f(0) + (x-0)\,f^{[1]}(0) + \frac{(x-0)^{2} f^{[2]}(0)}{2!} + \frac{(x-0)^{3} f^{[3]}(0)}{3!} + \cdots + \frac{(x-0)^{k} f^{[k]}(h)}{k!}$$

for $0 < h < x$.
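For a concrete instance, take $f(x) = e^{x}$, so that every derivative is $e^{x}$ and $f^{[j]}(0) = 1$ for each $j$; the expansion above then reads

$$e^{x} = 1 + x + \frac{x^{2}}{2!} + \frac{x^{3}}{3!} + \cdots + \frac{x^{k} e^{h}}{k!}, \qquad 0 < h < x,$$

which recovers the familiar exponential series term by term, with the last term playing the role of the remainder.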
1. Let X1 , X2 , … , Xn be a random sample from a Bernoulli distribution
with success probability p. The following random variables are
defined:
$$V = \sum_{i=1}^{n} X_i, \qquad \bar{X} = \frac{\sum_{i=1}^{n} X_i}{n}, \qquad W = \frac{\sum_{i=1}^{n} X_i - np}{\sqrt{np(1-p)}}$$
(a) Find the m.g.f. for each of V and $\bar{X}$.
From Corollary 5.4-1, we have that

(1) the m.g.f. of the random variable $V = \sum_{i=1}^{n} X_i$ is

$$M_V(t) = \prod_{i=1}^{n}\left(1 - p + pe^{t}\right) = \left(1 - p + pe^{t}\right)^{n}.$$
(We recognize that V has a b(n,p) distribution.)
(2) the m.g.f. of the random variable $\bar{X} = V/n$ is

$$M_{\bar{X}}(t) = M_V\!\left(\frac{t}{n}\right) = \left(1 - p + pe^{t/n}\right)^{n}.$$
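As a sketch (not part of the text's argument), the following Python computes $E(e^{tV})$ directly from the b(n, p) p.m.f. and compares it with the closed form above; n = 10, p = 0.3, t = 0.5 are arbitrary illustrative values:

    import math

    n, p, t = 10, 0.3, 0.5  # illustrative values, not from the text
    # Direct computation of E(e^{tV}) from the b(n, p) p.m.f.
    direct = sum(math.exp(t * v) * math.comb(n, v) * p**v * (1 - p)**(n - v)
                 for v in range(n + 1))
    closed = (1 - p + p * math.exp(t)) ** n  # the closed form above
    print(f"direct = {direct:.8f}, closed form = {closed:.8f}")

The two printed values agree, as the algebra guarantees.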
(b) Find the limiting distribution for V with np equal to a given
constant λ as n tends to infinity, forcing p to go to 0 (zero).
Since np = λ is fixed, we have

$$\lim_{n\to\infty} M_V(t) = \lim_{n\to\infty}\left(1 - p + pe^{t}\right)^{n} = \lim_{n\to\infty}\left(1 - \frac{np}{n} + \frac{np\,e^{t}}{n}\right)^{n} = \lim_{n\to\infty}\left(1 + \frac{\lambda\left(e^{t} - 1\right)}{n}\right)^{n} = e^{\lambda\left(e^{t}-1\right)}.$$

The limiting distribution of V is a Poisson(λ) distribution.
Consequently, for small (or large!) values of p, a binomial
distribution can be approximated by a Poisson distribution with
mean λ = np. This should not be surprising, since the Poisson
distribution was derived as the limit of a sequence of binomial
distributions where p tended to zero.
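To see the quality of the approximation concretely, this sketch compares the b(n, p) p.m.f. with the Poisson(λ = np) p.m.f.; the values n = 1000, p = 0.002 are illustrative choices, not from the text:

    import math

    n, p = 1000, 0.002   # illustrative: large n, small p
    lam = n * p          # lambda = np = 2
    for y in range(6):
        binom = math.comb(n, y) * p**y * (1 - p)**(n - y)
        poisson = math.exp(-lam) * lam**y / math.factorial(y)
        print(f"y = {y}: binomial = {binom:.6f}, Poisson = {poisson:.6f}")

The two columns match to about three decimal places, consistent with the limit derived above.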
1.-continued
(c) Find the limiting distribution for V as n tends to infinity, with p a
fixed constant.
$$\lim_{n\to\infty} M_V(t) = \lim_{n\to\infty}\left(1 - p + pe^{t}\right)^{n} = \infty \quad \text{for } t > 0,$$

since $1 - p + pe^{t} > 1$ when $t > 0$ and p is a fixed constant. We cannot find a limiting distribution for V.
(d) Find the limiting distribution for X as n tends to infinity, with p a
fixed constant.
$$\lim_{n\to\infty} M_{\bar{X}}(t) = \lim_{n\to\infty}\left(1 - p + pe^{t/n}\right)^{n} = \lim_{n\to\infty}\left(1 - p + p\left[1 + \frac{t}{n} + \frac{(t/n)^{2}}{2!} + \frac{(t/n)^{3}}{3!} + \cdots\right]\right)^{n}$$

$$= \lim_{n\to\infty}\left(1 + p\left[\frac{t}{n} + \frac{(t/n)^{2}}{2!} + \frac{(t/n)^{3}}{3!} + \cdots\right]\right)^{n} = \lim_{n\to\infty}\left(1 + \frac{pt + pt^{2}/(2!\,n) + pt^{3}/(3!\,n^{2}) + \cdots}{n}\right)^{n} = \lim_{n\to\infty}\left(1 + \frac{pt}{n}\right)^{n} = e^{pt}.$$

(It is intuitively obvious that all terms in the numerator except the first go to 0 as $n \to \infty$, and (from advanced calculus) the terms going to 0 can be ignored.)
This is the moment generating function corresponding to
a distribution where the value p has probability 1 (one).
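A quick numerical check of this limit (a sketch; p = 0.3 and t = 1 are illustrative values):

    import math

    p, t = 0.3, 1.0  # illustrative values
    for n in (10, 100, 10_000, 1_000_000):
        mgf = (1 - p + p * math.exp(t / n)) ** n
        print(f"n = {n:>9,}: (1 - p + p e^(t/n))^n = {mgf:.6f}")
    print(f"e^(pt)                   = {math.exp(p * t):.6f}")

The computed m.g.f. values approach e^(pt) as n grows, as the derivation predicts.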
Suppose $X_1, X_2, \ldots, X_n$ is a random sample from any distribution with finite mean $\mu$ and finite variance $\sigma^{2}$. Let M(t) be the common moment generating function of the $X_i$; that is, for each $i = 1, 2, \ldots, n$, we have

$$M(t) = E\left(e^{tX_i}\right).$$
From Corollary 5.4-1(b), we have that the moment generating function of the random variable $\bar{X} = \dfrac{\sum_{i=1}^{n} X_i}{n}$ is

$$M_{\bar{X}}(t) = \prod_{i=1}^{n} M\!\left(\frac{t}{n}\right) = \left[M\!\left(\frac{t}{n}\right)\right]^{n}.$$
With M(t) and M′(t) both continuous on an interval about 0 (zero), we have that for all t on this interval,

$$M(t) = M(0) + t\,M'(h) = 1 + t\,M'(h) \qquad \text{for } 0 < h < t.$$
Consequently, since $M'(0) = \mu$, we have that for all t on this interval,

$$M_{\bar{X}}(t) = \left[M\!\left(\frac{t}{n}\right)\right]^{n} = \left[1 + \frac{t}{n}\,M'(h)\right]^{n} = \left[1 + \frac{\mu t}{n} + \frac{t}{n}\left(M'(h) - M'(0)\right)\right]^{n} \qquad \text{for } 0 < h < \frac{t}{n}.$$
To investigate the limiting distribution of $\bar{X}$ as $n \to \infty$, we consider

$$\lim_{n\to\infty} M_{\bar{X}}(t) = \lim_{n\to\infty}\left[1 + \frac{\mu t + t\left(M'(h) - M'(0)\right)}{n}\right]^{n} = \lim_{n\to\infty}\left(1 + \frac{\mu t}{n}\right)^{n} = e^{\mu t}.$$

(It is intuitively obvious that the second term in the numerator goes to 0 as $n \to \infty$, and (from advanced calculus) this term can be ignored.)
This is the moment generating function corresponding to a distribution where the value $\mu$ has probability 1 (one).
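This degenerate limit is the (weak) law of large numbers stated in m.g.f. form. A small simulation sketch, using an exponential distribution with μ = 1 purely as an illustration (the distribution choice is an assumption, not from the text):

    import random

    random.seed(1)
    mu = 1.0  # mean of the illustrative exponential(1) distribution
    for n in (10, 1_000, 100_000):
        xbar = sum(random.expovariate(1.0) for _ in range(n)) / n
        print(f"n = {n:>7,}: sample mean = {xbar:.4f} (mu = {mu})")

The sample mean visibly concentrates at μ as n grows.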
For $i = 1, 2, \ldots, n$, suppose $Y_i = \dfrac{X_i - \mu}{\sigma}$, and let

$$W = \frac{1}{\sqrt{n}}\sum_{i=1}^{n} Y_i = \frac{\sum_{i=1}^{n} X_i - n\mu}{\sigma\sqrt{n}} = \frac{\bar{X} - \mu}{\sigma/\sqrt{n}}.$$
Let m(t) be the common m.g.f. for each $Y_i$. Then for each $i = 1, 2, \ldots, n$, we have $E(Y_i) = m'(0) = 0$, and $\mathrm{Var}(Y_i) = E(Y_i^{2}) = m''(0) = 1$.
From Theorem 5.4-1, we have that the moment generating function of
the random variable W is
$$M_W(t) = \prod_{i=1}^{n} m\!\left(\frac{t}{\sqrt{n}}\right) = \left[m\!\left(\frac{t}{\sqrt{n}}\right)\right]^{n}.$$
With m(t), m′(t), and m″(t) all continuous on an interval about 0 (zero), we have that for all t on this interval,

$$m(t) = m(0) + t\,m'(0) + \frac{t^{2}}{2}\,m''(h) = 1 + \frac{t^{2}}{2}\,m''(h) \qquad \text{for } 0 < h < t,$$

since $m(0) = 1$ and $m'(0) = 0$.
Consequently, we have that for all t on this interval,
$$M_W(t) = \left[m\!\left(\frac{t}{\sqrt{n}}\right)\right]^{n} = \left[1 + \frac{t^{2}}{2n}\,m''(h)\right]^{n} = \left[1 + \frac{t^{2}}{2n}(1) + \frac{t^{2}}{2n}\left(m''(h) - m''(0)\right)\right]^{n} \qquad \text{for } 0 < h < \frac{t}{\sqrt{n}}.$$
To investigate the limiting distribution of W as $n \to \infty$, we consider

$$\lim_{n\to\infty} M_W(t) = \lim_{n\to\infty}\left[1 + \frac{t^{2}/2 + (t^{2}/2)\left(m''(h) - m''(0)\right)}{n}\right]^{n} = \lim_{n\to\infty}\left(1 + \frac{t^{2}/2}{n}\right)^{n} = e^{t^{2}/2}.$$

(It is intuitively obvious that the second term in the numerator goes to 0 as $n \to \infty$, and (from advanced calculus) this term can be ignored.)

This is the moment generating function corresponding to a standard normal (N(0,1)) distribution.
This proves the following important theorem in the text:

Central Limit Theorem (Theorem 5.6-1). If $\bar{X}$ is the mean of a random sample $X_1, X_2, \ldots, X_n$ from a distribution with finite mean $\mu$ and finite positive variance $\sigma^{2}$, then the distribution of $W = \dfrac{\bar{X} - \mu}{\sigma/\sqrt{n}}$ approaches the N(0,1) distribution as $n \to \infty$.
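A simulation sketch of the theorem just proved, using uniform(0,1) observations (so μ = 1/2 and σ² = 1/12; this distribution choice is an illustrative assumption): for large n, about 95% of the simulated values of W should fall in (−1.96, 1.96), as for a standard normal.

    import math
    import random

    random.seed(1)
    n, reps = 100, 20_000
    mu, sigma = 0.5, math.sqrt(1 / 12)  # uniform(0,1) mean and standard deviation
    inside = 0
    for _ in range(reps):
        xbar = sum(random.random() for _ in range(n)) / n
        w = (xbar - mu) / (sigma / math.sqrt(n))
        if -1.96 < w < 1.96:
            inside += 1
    print(f"P(-1.96 < W < 1.96) ~ {inside / reps:.3f}   (N(0,1) gives 0.950)")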
1.-continued
(e) Find the limiting distribution for W as n tends to infinity, with p a
fixed constant.
For each $i$, $\mu = E(X_i) = p$, and $\sigma^{2} = \mathrm{Var}(X_i) = p(1 - p)$.
From the Central Limit Theorem, we have that the limiting distribution for

$$W = \frac{\sum_{i=1}^{n} X_i - n\mu}{\sigma\sqrt{n}} = \frac{\sum_{i=1}^{n} X_i - np}{\sqrt{np(1-p)}} = \frac{\bar{X} - p}{\sqrt{p(1-p)/n}}$$

is a N(0,1) (standard normal) distribution.
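A sketch of part (e) in action (n = 500, p = 0.4 are illustrative values, not from the exercise): simulated values of W should match standard normal probabilities.

    import math
    import random

    random.seed(2)
    n, p, reps = 500, 0.4, 10_000  # illustrative values
    count = 0
    for _ in range(reps):
        v = sum(1 for _ in range(n) if random.random() < p)  # one b(n, p) draw
        w = (v - n * p) / math.sqrt(n * p * (1 - p))
        if w <= 1.0:
            count += 1
    print(f"P(W <= 1) ~ {count / reps:.4f}   (Phi(1) = 0.8413 for N(0,1))")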
2. Suppose Y has a b(400, p) distribution, and we want to approximate
P(Y ≥ 3).
(a) If p = 0.001, explain why a Poisson distribution can be expected
to give a good approximation of P(Y ≥ 3), and use the Poisson
approximation to find this probability.
In Class Exercise #1(b), we found that the limiting distribution of
a sequence of b(n, p) distributions as n tends to infinity is Poisson
when np remains fixed, which forces p to tend to 0 (zero). This
suggests that the Poisson approximation to a binomial
distribution is better when p is close to zero (or one).
λ = np = (400)(0.001) = 0.4. Then P(Y ≥ 3) = 1 − P(Y ≤ 2) = 1 − 0.992 = 0.008.
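A sketch checking this computation directly (n = 400 and p = 0.001 come from the problem; the code itself is only illustrative):

    import math

    n, p = 400, 0.001
    lam = n * p  # lambda = 0.4
    # Exact: P(Y >= 3) = 1 - P(Y <= 2) under b(400, 0.001)
    exact = 1 - sum(math.comb(n, y) * p**y * (1 - p)**(n - y) for y in range(3))
    # Poisson approximation with lambda = 0.4
    approx = 1 - sum(math.exp(-lam) * lam**y / math.factorial(y) for y in range(3))
    print(f"exact binomial = {exact:.6f}, Poisson approximation = {approx:.6f}")

Both values round to 0.008, confirming the table-based answer above.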
(b) What other distribution may potentially be used to approximate a
binomial probability when p is not sufficiently close to zero (or
one)?
The Central Limit Theorem tells us that with a sufficiently
large sample size n, the normal distribution can be used.
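For instance (a hypothetical illustration, not from the text), with Y having a b(400, 0.5) distribution one could approximate P(Y ≤ 210) by treating Y as approximately N(np, np(1 − p)) = N(200, 100), with a continuity correction:

    import math

    n, p = 400, 0.5                                  # hypothetical values
    mu, sigma = n * p, math.sqrt(n * p * (1 - p))    # 200 and 10
    z = (210 + 0.5 - mu) / sigma                     # continuity correction
    approx = 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal c.d.f. at z
    exact = sum(math.comb(n, y) * p**y * (1 - p)**(n - y) for y in range(211))
    print(f"normal approximation = {approx:.4f}, exact binomial = {exact:.4f}")

Here the normal approximation agrees with the exact binomial probability to about two decimal places.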