Transcript Chapter 7
Applied Statistics and Probability for Engineers, Sixth Edition
Douglas C. Montgomery and George C. Runger
Chapter 7: Point Estimation of Parameters and Sampling Distributions
Copyright © 2014 John Wiley & Sons, Inc. All rights reserved.
CHAPTER OUTLINE
7-1 Point Estimation
7-2 Sampling Distributions and the Central Limit Theorem
7-3 General Concepts of Point Estimation
  7-3.1 Unbiased Estimators
  7-3.2 Variance of a Point Estimator
  7-3.3 Standard Error: Reporting a Point Estimate
  7-3.4 Mean Squared Error of an Estimator
7-4 Methods of Point Estimation
  7-4.1 Method of Moments
  7-4.2 Method of Maximum Likelihood
  7-4.3 Bayesian Estimation of Parameters
Learning Objectives for Chapter 7
After careful study of this chapter, you should be able to do the following:
1. Explain the general concepts of estimating the parameters of a population or a probability distribution.
2. Explain the important role of the normal distribution as a sampling distribution.
3. Understand the central limit theorem.
4. Explain important properties of point estimators, including bias, variance, and mean squared error.
5. Construct point estimators using the method of moments and the method of maximum likelihood.
6. Compute and explain the precision with which a parameter is estimated.
7. Construct a point estimator using the Bayesian approach.
Point Estimation
• A point estimate is a reasonable value of a population parameter.
• X1, X2, …, Xn are random variables.
• Functions of these random variables, such as the sample mean X̄ and the sample variance S², are themselves random variables called statistics.
• Each statistic has its own probability distribution, called its sampling distribution.
Point Estimator
As an example, suppose the random variable X is normally distributed with an unknown mean μ. The sample mean is a point estimator of the unknown population mean μ; that is, μ̂ = X̄. After the sample has been selected, the numerical value x̄ is the point estimate of μ.
Thus if x1 = 25, x2 = 30, x3 = 29, and x4 = 31, the point estimate of μ is
x̄ = (25 + 30 + 29 + 31) / 4 = 28.75
Some Parameters & Their Statistics

Parameter | Measure                                       | Statistic
μ         | Mean of a single population                   | x̄
σ²        | Variance of a single population               | s²
σ         | Standard deviation of a single population     | s
p         | Proportion of a single population             | p̂
μ1 − μ2   | Difference in means of two populations        | x̄1 − x̄2
p1 − p2   | Difference in proportions of two populations  | p̂1 − p̂2

• There may be several choices for the point estimator of a parameter.
• To estimate the mean of a population, we could choose the:
  – Sample mean.
  – Sample median.
  – Average of the largest & smallest observations in the sample (the midrange); a sketch comparing all three follows.
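To make the comparison concrete, the sketch referenced above (ours, not from the text) computes all three candidate estimates from the four observations used earlier:

import statistics

sample = [25, 30, 29, 31]

mean_est = statistics.mean(sample)               # sample mean: 28.75
median_est = statistics.median(sample)           # sample median: 29.5
midrange_est = (max(sample) + min(sample)) / 2   # midrange: 28.0
print(mean_est, median_est, midrange_est)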
Some Definitions
• The random variables X1, X2,…,Xn are a
random sample of size n if:
a) The Xi's are independent random variables.
b) Every Xi has the same probability
distribution.
• A statistic is any function of the
observations in a random sample.
• The probability distribution of a statistic is
called a sampling distribution.
Central Limit Theorem
If X1, X2, …, Xn is a random sample of size n taken from a population (either finite or infinite) with mean μ and finite variance σ², and if X̄ is the sample mean, then the limiting form of the distribution of
Z = (X̄ − μ) / (σ/√n)
as n → ∞ is the standard normal distribution.
Example 7-2: Central Limit Theorem
Suppose that a random variable X has a continuous uniform distribution:
f(x) = 1/2 for 4 ≤ x ≤ 6, and f(x) = 0 otherwise.
Find the distribution of the sample mean of a random sample of size n = 40.
By the CLT, the distribution of X̄ is approximately normal. Its mean and variance are:
μ = (b + a)/2 = (6 + 4)/2 = 5
σ² = (b − a)²/12 = (6 − 4)²/12 = 1/3
σ²_X̄ = σ²/n = (1/3)/40 = 1/120
Figure 7-5 shows the distributions of X and X̄ for Example 7-2.
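A quick simulation sketch (ours, not from the text; numpy assumed) checks these values empirically:

import numpy as np

# Draw many samples of size 40 from Uniform(4, 6) and inspect the sample means.
rng = np.random.default_rng(7)
means = rng.uniform(4, 6, size=(100_000, 40)).mean(axis=1)

print(means.mean())  # close to 5
print(means.var())   # close to 1/120 ≈ 0.00833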
Sampling Distribution of a Difference in Sample Means
• If we have two independent populations with means μ1 and μ2 and variances σ1² and σ2², and
• If X̄1 and X̄2 are the sample means of two independent random samples of sizes n1 and n2 from these populations,
• Then the sampling distribution of
Z = [X̄1 − X̄2 − (μ1 − μ2)] / √(σ1²/n1 + σ2²/n2)
is approximately standard normal, if the conditions of the central limit theorem apply.
• If the two populations are normal, then the sampling distribution of Z is exactly standard normal.
Example 7-3: Aircraft Engine Life
The effective life of a component used in a jet-turbine aircraft engine is a random variable with mean 5,000 hours and standard deviation 40 hours, and its distribution is close to normal. The engine manufacturer introduces an improvement into the manufacturing process for this component that changes the parameters to 5,050 hours and 30 hours. Random samples of size 16 (old process) and 25 (new process) are selected.
What is the probability that the difference in the two sample means is at least 25 hours?

Process  | Old (1) | New (2) | Diff (2−1)
x̄       | 5,000   | 5,050   | 50
σ        | 40      | 30      |
n        | 16      | 25      |
σ/√n     | 10      | 6       |

Standard deviation of X̄2 − X̄1: √(10² + 6²) = √136 ≈ 11.7
z = (25 − 50)/11.7 = −2.14
P(X̄2 − X̄1 ≥ 25) = P(Z > −2.14) = 0.9840 (in Excel, 1 − NORMSDIST(z))
Figure 7-6 shows the sampling distribution of X̄2 − X̄1 for this example.
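The same probability can be reproduced with a short sketch (ours; NormalDist is in the Python 3.8+ standard library):

from math import sqrt
from statistics import NormalDist

# Standard deviation of X̄2 − X̄1: sqrt(σ1²/n1 + σ2²/n2)
sd_diff = sqrt(40**2 / 16 + 30**2 / 25)   # ≈ 11.66
z = (25 - 50) / sd_diff                   # ≈ −2.14
prob = 1 - NormalDist().cdf(z)            # P(Z > z)
print(round(prob, 4))                     # ≈ 0.9840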
Unbiased Estimators Defined
The point estimator Θ̂ is an unbiased estimator for the parameter θ if
E(Θ̂) = θ
If the estimator is not unbiased, then the difference E(Θ̂) − θ is called the bias of the estimator Θ̂.
Example 7-4: Sample Mean & Variance Are Unbiased-1
• X is a random variable with mean μ and variance σ². Let X1, X2, …, Xn be a random sample of size n.
• Show that the sample mean X̄ is an unbiased estimator of μ:
E(X̄) = E[(X1 + X2 + … + Xn)/n] = (1/n)[E(X1) + E(X2) + … + E(Xn)] = (1/n)(μ + μ + … + μ) = nμ/n = μ
Example 7-4: Sample Mean & Variance Are Unbiased-2
Show that the sample variance S² is an unbiased estimator of σ². Using E(Xi²) = μ² + σ² and E(X̄²) = μ² + σ²/n:
E(S²) = E[Σ(Xi − X̄)²/(n − 1)] = (1/(n − 1)) E[ΣXi² − nX̄²]
      = (1/(n − 1)) [n(μ² + σ²) − n(μ² + σ²/n)] = (1/(n − 1))(nσ² − σ²) = σ²
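A simulation sketch (ours; numpy assumed) illustrates why the divisor n − 1 matters: dividing by n instead underestimates σ² on average.

import numpy as np

rng = np.random.default_rng(0)
sigma2 = 4.0
samples = rng.normal(0.0, np.sqrt(sigma2), size=(200_000, 5))  # n = 5 per sample

s2_unbiased = samples.var(axis=1, ddof=1)  # divisor n − 1 (the sample variance S²)
s2_biased = samples.var(axis=1, ddof=0)    # divisor n

print(s2_unbiased.mean())  # ≈ 4.0 = σ²
print(s2_biased.mean())    # ≈ 3.2 = (n−1)/n · σ²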
Minimum Variance Unbiased Estimators
• If we consider all unbiased estimators of θ, the one with the smallest variance is called the minimum variance unbiased estimator (MVUE).
• If X1, X2, …, Xn is a random sample of size n from a normal distribution with mean μ and variance σ², then the sample mean X̄ is the MVUE for μ.
Standard Error of an Estimator
The standard error of an estimator Θ̂ is its standard deviation, σ_Θ̂ = √V(Θ̂). If the standard error involves unknown parameters that can be estimated, substituting those estimates into σ_Θ̂ produces the estimated standard error of Θ̂. For the sample mean of a sample of size n from a population with standard deviation σ, the standard error is σ/√n, estimated by s/√n.
Example 7-5: Thermal Conductivity
• These observations are 10 measurements of thermal conductivity of Armco iron:
41.60, 41.48, 42.34, 41.95, 41.86, 42.18, 41.72, 42.26, 41.81, 42.04
• Sample mean: x̄ = 41.924. Sample standard deviation: s = 0.284. Since σ is not known, we use s to calculate the standard error: s/√n = 0.284/√10 = 0.0898.
• Since the standard error is about 0.2% of the mean, the mean estimate is fairly precise. We can be very confident that the true population mean lies within x̄ ± 2(standard error), that is, 41.924 ± 2(0.0898), or between 41.744 and 42.104.
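A sketch (ours, standard library only) reproducing these summary statistics:

import statistics
from math import sqrt

x = [41.60, 41.48, 42.34, 41.95, 41.86, 42.18, 41.72, 42.26, 41.81, 42.04]

mean = statistics.mean(x)    # 41.924
s = statistics.stdev(x)      # ≈ 0.284 (sample SD, divisor n − 1)
se = s / sqrt(len(x))        # ≈ 0.0898
print(mean, round(s, 3), round(se, 4))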
Mean Squared Error
The mean squared error of an estimator Θ̂ of the parameter θ is defined as
MSE(Θ̂) = E(Θ̂ − θ)² = V(Θ̂) + [E(Θ̂) − θ]²
Conclusion: The mean squared error (MSE) of the estimator is equal to the variance of the estimator plus the bias squared.
Relative Efficiency
• The MSE is an important criterion for comparing two estimators. The relative efficiency of Θ̂1 with respect to Θ̂2 is
MSE(Θ̂1) / MSE(Θ̂2)
• If the relative efficiency is less than 1, we conclude that the 1st estimator is superior to the 2nd estimator.
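As an illustration (ours, not from the text; numpy assumed), the sketch below compares the sample mean and the sample median as estimators of a normal mean by simulated MSE; a ratio below 1 shows the mean is the better of the two here.

import numpy as np

rng = np.random.default_rng(1)
mu, n = 5.0, 15
samples = rng.normal(mu, 1.0, size=(100_000, n))

mse_mean = np.mean((samples.mean(axis=1) - mu) ** 2)
mse_median = np.mean((np.median(samples, axis=1) - mu) ** 2)

print(mse_mean / mse_median)  # relative efficiency < 1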
Optimal Estimator
• A biased estimator can be preferred to an unbiased estimator if it has a smaller MSE.
• Biased estimators are occasionally used in linear regression.
• An estimator whose MSE is smaller than that of any other estimator is called an optimal estimator.
Figure 7-8 shows a biased estimator that has a smaller variance than the unbiased estimator.
Moments Defined
• Let X1, X2, …, Xn be a random sample from the probability distribution f(x), where f(x) can be either a:
  – Discrete probability mass function, or
  – Continuous probability density function
• The kth population moment (or distribution moment) is E(Xᵏ), k = 1, 2, ….
• The kth sample moment is (1/n) Σᵢ₌₁ⁿ Xᵢᵏ, k = 1, 2, ….
• If k = 1 (the first moment), then:
  – Population moment is μ.
  – Sample moment is x̄.
• The sample mean is the moment estimator of the population mean.
Moment Estimators
Let X1, X2, …, Xn be a random sample from either a probability mass function or a probability density function with m unknown parameters θ1, θ2, …, θm. The moment estimators Θ̂1, Θ̂2, …, Θ̂m are found by equating the first m population moments to the first m sample moments and solving the resulting equations for the unknown parameters.
Example 7-8: Normal Distribution Moment Estimators
Suppose that X1, X2, …, Xn is a random sample from a normal distribution with parameters μ and σ², where E(X) = μ and E(X²) = μ² + σ².
Equating population and sample moments:
μ̂ = X̄ = (1/n) Σᵢ₌₁ⁿ Xᵢ
μ̂² + σ̂² = (1/n) Σᵢ₌₁ⁿ Xᵢ²
Solving for σ̂²:
σ̂² = (1/n) Σᵢ₌₁ⁿ Xᵢ² − X̄² = [Σᵢ₌₁ⁿ Xᵢ² − n X̄²]/n = Σᵢ₌₁ⁿ (Xᵢ − X̄)²/n   (biased)
Example 7-9: Gamma Distribution Moment Estimators-1
Suppose that X1, X2, …, Xn is a random sample from a gamma distribution with parameters r and λ, where E(X) = r/λ and E(X²) = r(r + 1)/λ².
E(X) = r/λ is the mean, and E(X²) − [E(X)]² = r/λ² is the variance, so E(X²) = r(r + 1)/λ².
Equating these two population moments to the sample moments X̄ and (1/n) Σᵢ₌₁ⁿ Xᵢ², and solving for r and λ:
r̂ = X̄² / [(1/n) Σᵢ₌₁ⁿ Xᵢ² − X̄²]
λ̂ = X̄ / [(1/n) Σᵢ₌₁ⁿ Xᵢ² − X̄²]
Example 7-9: Gamma Distribution Moment Estimators-2
Using the time-to-failure data in the table, we can estimate the parameters of the gamma distribution.

xi    | xi²
11.96 | 143.0416
5.03  | 25.3009
67.40 | 4542.7600
16.07 | 258.2449
31.50 | 992.2500
7.73  | 59.7529
11.10 | 123.2100
22.38 | 500.8644

x̄ = 21.646,  Σ xi² = 6645.4247

r̂ = X̄² / [(1/n) Σ Xᵢ² − X̄²] = (21.646)² / [(1/8)(6645.4247) − (21.646)²] = 1.29
λ̂ = X̄ / [(1/n) Σ Xᵢ² − X̄²] = 21.646 / [(1/8)(6645.4247) − (21.646)²] = 0.0598
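A sketch (ours, pure Python) reproducing the moment estimates from this data:

x = [11.96, 5.03, 67.40, 16.07, 31.50, 7.73, 11.10, 22.38]

n = len(x)
xbar = sum(x) / n                      # 21.646
m2 = sum(v * v for v in x) / n         # second sample moment
denom = m2 - xbar**2

r_hat = xbar**2 / denom                # ≈ 1.29
lam_hat = xbar / denom                 # ≈ 0.0598
print(round(r_hat, 2), round(lam_hat, 4))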
Maximum Likelihood Estimators
• Suppose that X is a random variable with
probability distribution f(x;θ), where θ is a single
unknown parameter. Let x1, x2, …, xn be the
observed values in a random sample of size n.
Then the likelihood function of the sample is:
L(θ) = f(x1;θ) ∙ f(x2; θ) ∙…∙ f(xn; θ)
• Note that the likelihood function is now a
function of only the unknown parameter θ. The
maximum likelihood estimator (MLE) of θ is the
value of θ that maximizes the likelihood function
L(θ).
Example 7-10: Bernoulli Distribution MLE
Let X be a Bernoulli random variable. The probability mass function is f(x; p) = pˣ(1 − p)¹⁻ˣ, x = 0, 1, where p is the parameter to be estimated.
The likelihood function of a random sample of size n is:
L(p) = p^x1 (1 − p)^(1−x1) ∙ p^x2 (1 − p)^(1−x2) ∙ … ∙ p^xn (1 − p)^(1−xn)
     = ∏ᵢ₌₁ⁿ p^xi (1 − p)^(1−xi) = p^(Σ xi) (1 − p)^(n − Σ xi)
ln L(p) = (Σᵢ₌₁ⁿ xi) ln p + (n − Σᵢ₌₁ⁿ xi) ln(1 − p)
d ln L(p)/dp = (Σᵢ₌₁ⁿ xi)/p − (n − Σᵢ₌₁ⁿ xi)/(1 − p)
Equating this derivative to zero and solving gives p̂ = (1/n) Σᵢ₌₁ⁿ xi, the sample proportion.
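A quick numeric check (ours; the 0/1 data below are hypothetical, not from the text) that the Bernoulli log likelihood peaks at the sample proportion:

import math

data = [1, 0, 1, 1, 0, 1, 1, 1]  # hypothetical observations

def log_lik(p):
    s = sum(data)
    return s * math.log(p) + (len(data) - s) * math.log(1 - p)

# Grid search over (0, 1); the maximizer matches the closed-form MLE.
grid = [i / 1000 for i in range(1, 1000)]
p_best = max(grid, key=log_lik)
print(p_best, sum(data) / len(data))  # both 0.75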
Example 7-11: Normal Distribution MLE for μ
Let X be a normal random variable with unknown mean μ and known variance σ². The likelihood function of a random sample of size n is:
L(μ) = ∏ᵢ₌₁ⁿ (1/(σ√(2π))) e^(−(xi − μ)²/(2σ²)) = (2πσ²)^(−n/2) e^(−(1/(2σ²)) Σᵢ₌₁ⁿ (xi − μ)²)
ln L(μ) = −(n/2) ln(2πσ²) − (1/(2σ²)) Σᵢ₌₁ⁿ (xi − μ)²
d ln L(μ)/dμ = (1/σ²) Σᵢ₌₁ⁿ (xi − μ)
Equating this derivative to zero and solving gives μ̂ = X̄.
Example 7-12: Exponential Distribution MLE
Let X be an exponential random variable with parameter λ. The likelihood function of a random sample of size n is:
L(λ) = ∏ᵢ₌₁ⁿ λ e^(−λxi) = λⁿ e^(−λ Σᵢ₌₁ⁿ xi)
ln L(λ) = n ln λ − λ Σᵢ₌₁ⁿ xi
d ln L(λ)/dλ = n/λ − Σᵢ₌₁ⁿ xi
Equating the above to zero we get
λ̂ = n / Σᵢ₌₁ⁿ xi = 1/X̄   (same as the moment estimator)
Example 7-13: Normal Distribution MLEs for μ & σ²
Let X be a normal random variable with both mean μ and variance σ² unknown. The likelihood function of a random sample of size n is:
L(μ, σ²) = ∏ᵢ₌₁ⁿ (1/(σ√(2π))) e^(−(xi − μ)²/(2σ²)) = (2πσ²)^(−n/2) e^(−(1/(2σ²)) Σᵢ₌₁ⁿ (xi − μ)²)
ln L(μ, σ²) = −(n/2) ln(2πσ²) − (1/(2σ²)) Σᵢ₌₁ⁿ (xi − μ)²
∂ ln L(μ, σ²)/∂μ = (1/σ²) Σᵢ₌₁ⁿ (xi − μ) = 0
∂ ln L(μ, σ²)/∂σ² = −n/(2σ²) + (1/(2σ⁴)) Σᵢ₌₁ⁿ (xi − μ)² = 0
Solving these equations gives the maximum likelihood estimators:
μ̂ = X̄   and   σ̂² = (1/n) Σᵢ₌₁ⁿ (Xᵢ − X̄)²
Properties of an MLE
Under very general and non-restrictive conditions, when the sample size n is large and if Θ̂ is the MLE of the parameter θ, then:
(1) Θ̂ is an approximately unbiased estimator for θ,
(2) the variance of Θ̂ is nearly as small as the variance that could be obtained with any other estimator, and
(3) Θ̂ has an approximate normal distribution.
Notes:
• Mathematical statisticians will often prefer MLEs because of these properties. Properties (1) and (2) essentially state that MLEs are approximately MVUEs.
• To use MLEs, the distribution of the population must be known or assumed.
Invariance Property
Let Θ̂1, Θ̂2, …, Θ̂k be the maximum likelihood estimators of the parameters θ1, θ2, …, θk. Then the maximum likelihood estimator of any function h(θ1, θ2, …, θk) of these parameters is the same function h(Θ̂1, Θ̂2, …, Θ̂k) of the estimators.
This property is illustrated in Example 7-14, using the MLEs found in Example 7-13.
Example 7-14: Invariance
For the normal distribution, the MLEs were μ̂ = X̄ and σ̂² = (1/n) Σᵢ₌₁ⁿ (Xᵢ − X̄)². By the invariance property, the MLE of the standard deviation σ is the square root of the MLE of σ²:
σ̂ = [(1/n) Σᵢ₌₁ⁿ (Xᵢ − X̄)²]^(1/2)
Complications of the MLE Method
The method of maximum likelihood is an excellent technique; however, there are two complications:
1. It may not be easy to maximize the likelihood function, because the derivative set to zero may be difficult to solve algebraically.
2. It may not always be possible to use calculus methods directly to determine the maximum of L(θ).
The following example illustrates this.
Example 7-16: Gamma Distribution MLE-1
Let X1, X2, …, Xn be a random sample from a gamma distribution. The log of the likelihood function is:
ln L(r, λ) = ln ∏ᵢ₌₁ⁿ [λʳ xᵢ^(r−1) e^(−λxi) / Γ(r)]
           = nr ln λ + (r − 1) Σᵢ₌₁ⁿ ln xi − n ln Γ(r) − λ Σᵢ₌₁ⁿ xi
∂ ln L(r, λ)/∂r = n ln λ + Σᵢ₌₁ⁿ ln xi − n Γ′(r)/Γ(r)
∂ ln L(r, λ)/∂λ = nr/λ − Σᵢ₌₁ⁿ xi
Equating these derivatives to zero gives λ̂ = r̂/x̄, but there is no closed-form solution for r̂; the equations must be solved numerically.
Example 7-16: Gamma Distribution MLE-2
Figure 7-11 shows the log likelihood for the gamma distribution using the failure-time data: (a) log likelihood surface, (b) contour plot. The maximum is found numerically rather than algebraically.
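Since the likelihood equations have no closed-form solution, a numerical sketch (ours; scipy assumed) minimizes the negative log likelihood over (r, λ), seeded with the moment estimates from Example 7-9:

import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

x = np.array([11.96, 5.03, 67.40, 16.07, 31.50, 7.73, 11.10, 22.38])
n = len(x)

def neg_log_lik(params):
    r, lam = params
    if r <= 0 or lam <= 0:
        return np.inf  # keep the search inside the valid parameter region
    return -(n * r * np.log(lam) + (r - 1) * np.log(x).sum()
             - n * gammaln(r) - lam * x.sum())

res = minimize(neg_log_lik, x0=[1.29, 0.0598], method="Nelder-Mead")
print(res.x)  # numerical MLEs of (r, λ)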
Bayesian Estimation of Parameters-1
• The moment and likelihood methods interpret probabilities as relative frequencies; they are based on an objective interpretation of probability.
• The random variable X has a probability distribution f(x|θ) that depends on the parameter θ.
• Prior knowledge about θ can be summarized as f(θ), the prior distribution, with mean μ0 and variance σ0². Probabilities associated with f(θ) are subjective probabilities.
• The joint distribution of the sample, given θ, is f(x1, x2, …, xn|θ).
• The posterior distribution f(θ|x1, x2, …, xn) expresses our degree of belief regarding θ after observing the sample data.
Bayesian Estimation of Parameters-2
• Now the joint probability distribution of the sample and θ is
f(x1, x2, …, xn, θ) = f(x1, x2, …, xn|θ) ∙ f(θ)
• The marginal distribution of the sample is:
f(x1, x2, …, xn) = Σ_θ f(x1, x2, …, xn, θ), for θ discrete
f(x1, x2, …, xn) = ∫ f(x1, x2, …, xn, θ) dθ, for θ continuous
• The desired posterior distribution is:
f(θ|x1, x2, …, xn) = f(x1, x2, …, xn, θ) / f(x1, x2, …, xn)
• The Bayesian estimator of θ is θ̃, the mean of the posterior distribution.
Example 7-17: Bayes Estimator for the Mean of a Normal Distribution-1
Let X1, X2, …, Xn be a random sample from a normal distribution with unknown mean μ and known variance σ². Assume that the prior distribution for μ is normal with mean μ0 and variance σ0²:
f(μ) = (1/(√(2π) σ0)) e^(−(μ − μ0)²/(2σ0²))
The joint probability distribution of the sample, given μ, is:
f(x1, x2, …, xn|μ) = (2πσ²)^(−n/2) e^(−(1/(2σ²)) Σᵢ₌₁ⁿ (xi − μ)²)
Example 7-17: Bayes Estimator for the Mean of a Normal Distribution-2
Now the joint probability distribution of the sample and μ is:
f(x1, x2, …, xn, μ) = f(x1, x2, …, xn|μ) ∙ f(μ)
  = e^(−(1/2)[(1/σ0² + n/σ²)μ² − 2(μ0/σ0² + Σ xi/σ²)μ]) ∙ h1(x1, x2, …, xn, σ², μ0, σ0²)
Upon completing the square in the exponent,
f(x1, x2, …, xn, μ) = e^(−(1/2)(1/σ0² + n/σ²)[μ − ((σ²/n)μ0 + σ0² x̄)/(σ0² + σ²/n)]²) ∙ h2(x1, x2, …, xn, σ², μ0, σ0²)
where hi(x1, x2, …, xn, σ², μ0, σ0²) is a function of the observed values and the parameters σ², μ0, and σ0².
Since f(x1, x2, …, xn) does not depend on μ,
f(μ|x1, x2, …, xn) = e^(−(1/2)(1/σ0² + n/σ²)[μ − ((σ²/n)μ0 + σ0² x̄)/(σ0² + σ²/n)]²) ∙ h3(x1, x2, …, xn, σ², μ0, σ0²)
Example 7-17: Bayes Estimator for the Mean of a Normal Distribution-3
This is recognized as a normal probability density function with posterior mean
μ̃ = [(σ²/n)μ0 + σ0² x̄] / (σ0² + σ²/n)
and posterior variance
V(μ) = [1/σ0² + n/σ²]⁻¹ = σ0²(σ²/n) / (σ0² + σ²/n)
Example 7-17: Bayes Estimator for the Mean of a Normal Distribution-4
To illustrate:
– The prior parameters are: μ0 = 0, σ0² = 1
– Sample: n = 10, x̄ = 0.75, σ² = 4
μ̃ = [(σ²/n)μ0 + σ0² x̄] / (σ0² + σ²/n) = [(4/10)(0) + 1(0.75)] / (1 + 4/10) = 0.536
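The illustration can be reproduced with a short sketch (ours, pure Python) of the posterior formulas above:

def normal_posterior(mu0, var0, xbar, var, n):
    # Posterior (mean, variance) for a normal mean with a normal prior.
    post_mean = ((var / n) * mu0 + var0 * xbar) / (var0 + var / n)
    post_var = (var0 * var / n) / (var0 + var / n)
    return post_mean, post_var

print(normal_posterior(mu0=0.0, var0=1.0, xbar=0.75, var=4.0, n=10))
# ≈ (0.536, 0.286)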
Important Terms & Concepts of Chapter 7
Bayes estimator
Bias in parameter estimation
Central limit theorem
Estimator vs. estimate
Likelihood function
Maximum likelihood estimator
Mean squared error of an estimator
Minimum variance unbiased estimator
Moment estimator
Normal distribution as the sampling distribution of the:
– sample mean
– difference in two sample means
Parameter estimation
Point estimator
Population or distribution moments
Posterior distribution
Prior distribution
Sample moments
Sampling distribution
An estimator has a:
– Standard error
– Estimated standard error
Statistic
Statistical inference
Unbiased estimator