Transcript Chapter 11
Sampling distributions
BPS chapter 11
© 2006 W. H. Freeman and Company
Objectives (BPS chapter 11)
Sampling distributions
Parameter versus statistic
The law of large numbers
What is a sampling distribution?
The sampling distribution of xBar.
The central limit theorem
Reminder:
Parameter versus statistic
Population: the entire group of
individuals in which we are
interested but can’t usually
assess directly.
A parameter is a number
describing a characteristic of
the population. Parameters
are usually unknown.
Sample: the part of the population
we actually examine and for which
we do have data.
A statistic is a number describing
a characteristic of a sample. We
often use a statistic to estimate
an unknown population
parameter.
Statistics are Random Variables
Recall: A random variable is a variable whose value is a
numerical outcome of a random phenomenon.
Therefore when we compute a statistic such as xBar or s from a
random sample, the numerical result is a random variable.
Statistics are random, but knowable
It's random because it depends on the sample, which is random.
If we took a new random sample, xBar and s would be different.
Parameters are fixed, but unknown
Even though statistics are random, at least we can know what they
are: just take a sample, and compute!
Parameters are not random – they are fixed numbers.
But parameters are generally unknown in practical problems. Why?
Because in order to find the value of a parameter, you need to know the
whole population. This is usually not feasible.
We'd like to use the statistics to estimate the unknown parameters.
Example
Suppose we want to know the true percentage of adult Americans
who support a national system of health insurance.
We can’t survey all adult Americans. So we take an SRS of (say)
1000 adult Americans, and ask these 1000 whether they support a
national system of health insurance.
What is the parameter? What is the statistic?
Population: all adult Americans
Sample: the n = 1000 adult Americans surveyed (assuming that all respond)
Parameter: p = true percentage of all adult Americans who support national
health insurance.
Statistic: pHat = percentage of the 1000 people sampled who support national
health insurance.
We expect the statistic (pHat in this case) to be a reasonable estimate of
the parameter (p, the true percentage), although probably not exactly equal
to it.
How good is the estimate? If we wanted a better estimate, what could we do?
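The survey can be simulated to make the parameter/statistic distinction concrete. This is a minimal sketch assuming a hypothetical true support rate p = 0.55 and a population of 10,000 adults; both values are invented for the demo, not from the text.

```python
import random

random.seed(1)

# Hypothetical setup: true support rate p = 0.55 (an assumed value);
# population of 10,000 adults, where 1 = supports, 0 = does not.
p = 0.55
population = [1] * 5500 + [0] * 4500

# One SRS of n = 1000, as in the example: pHat is a statistic.
p_hat = sum(random.sample(population, 1000)) / 1000
print(p_hat)  # close to p = 0.55, but almost never exactly equal

# A new random sample gives a different pHat: the statistic is random.
p_hat2 = sum(random.sample(population, 1000)) / 1000
print(p_hat2)
```

The parameter p never changes between runs; only the statistic pHat does.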
Sampling distribution of xBar (the sample mean)
We take many random samples of a given fixed size n from a
population with mean m and standard deviation s.
Some sample means will be above the population mean m and some
will be below, making up the sampling distribution.
Sampling distribution of "x bar": (histogram of some sample averages)
What is a sampling distribution? (page 276)
The sampling distribution of a statistic is the distribution of all possible
values taken by the statistic when all possible samples of a fixed size n
are taken from the population.
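To make the definition concrete, here is a minimal sketch that enumerates every possible sample of size n = 2 from a tiny population of four values (the data are invented for illustration):

```python
from itertools import combinations
from statistics import mean

# Tiny illustrative population (invented for this demo).
population = [1, 3, 5, 7]

# All possible samples of fixed size n = 2, and the statistic xBar for each.
sample_means = [mean(s) for s in combinations(population, 2)]
print(sorted(sample_means))  # [2, 3, 4, 4, 5, 6]

# The sampling distribution of xBar is the distribution of these values;
# its mean equals the population mean.
print(mean(sample_means), mean(population))
```

With real populations we cannot list every sample, which is why the later slides rely on simulation and on theory (the law of large numbers and the central limit theorem).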
The Big Ideas:
Averages (xBar) are less variable than individual observations.
The Law of Large Numbers says that as the sample size n gets larger and
larger, it becomes highly likely that xBar is close to the population mean m.
Averages are more normal than individual observations.
The Central Limit Theorem says that as the sample size n gets larger and
larger, the distribution of xBar becomes more and more normal.
Note: When sampling randomly from a given population:
The sampling distribution describes what happens when we take all possible
random samples of a fixed size n.
The Law of Large Numbers and the Central Limit Theorem describe what
happens when the sample size n is gradually increased.
Example: 11.6 page 277
Population:

student:  0   1   2   3   4   5   6   7   8   9
score:   82  62  80  58  72  73  65  66  74  62
Distribution: (histogram of the 10 individual scores, spanning roughly 60 to 80)
Mean = 71.4, Std. Dev = 8.4748, Median = 72.5, IQR = 15
Are these parameters or statistics?
Example: 11.6 page 277
Choose 10 samples of size n = 4, and calculate xBar for each:
Sample number | SRS (n = 4) | Sample mean
      1       | 1, 4, 5, 9  |   67.25
      2       | 2, 6, 0, 5  |   75
      3       | 6, 3, 1, 4  |   64.25
      4       | 2, 4, 8, 0  |   77
      5       | 3, 7, 1, 6  |   62.75
      6       | 5, 1, 0, 3  |   68.75
      7       | 6, 2, 5, 3  |   69
      8       | 5, 0, 4, 9  |   72.25
      9       | 0, 6, 1, 8  |   70.75
     10       | 1, 3, 8, 6  |   64.75
Look at the Sampling Distribution
Frequency Table (x = sample mean)

Range        | Count
60 ≤ x < 65  |   3
65 ≤ x < 70  |   3
70 ≤ x < 75  |   2
75 ≤ x < 80  |   2
Note: (histogram of the 10 sample means, spanning roughly 60 to 80)
Mean = 69.175, Std. Dev = 4.6682, Median = 68.875, IQR = 8.5
Population mean: m = 71.4. Average of the sample means: 69.175.
On average, the sample means are close to the true mean.
Note: the distribution of the sample means has a smaller spread than the
population.
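The arithmetic of Example 11.6 can be checked directly. This sketch recomputes the ten sample means from the score table, using each student label as an index into the score list:

```python
from statistics import mean

# Scores of the 10 students (labels 0 through 9) from Example 11.6.
score = [82, 62, 80, 58, 72, 73, 65, 66, 74, 62]

# The ten SRSs of size n = 4, given as student labels.
srs = [(1, 4, 5, 9), (2, 6, 0, 5), (6, 3, 1, 4), (2, 4, 8, 0), (3, 7, 1, 6),
       (5, 1, 0, 3), (6, 2, 5, 3), (5, 0, 4, 9), (0, 6, 1, 8), (1, 3, 8, 6)]

xbars = [mean(score[i] for i in s) for s in srs]
print(xbars[0])     # 67.25, matching the first row of the table
print(mean(xbars))  # 69.175, the average of the ten sample means
```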
Mean and Std. Dev. of xBar
Sampling distribution of xBar:
1. mean(xBar) = m
2. sd(xBar) = s/√n
In English:
1. The mean of the sample means is the population mean.
2. The standard deviation of the sample means is the population
standard deviation divided by the square root of the sample size.
What do 1. and 2. say about the sampling distribution of xBar?
More Discussion:
Mean of the sampling distribution of xBar:
The equation mean(xBar) = m says that the sampling distribution of xBar is
centered on the population mean m. Thus, on average, we expect xBar to be
equal to the population mean m. Not that we expect xBar to equal m in
individual instances; sometimes it will be larger, sometimes smaller. But
since the average value of xBar is m, we say that xBar is an unbiased
estimate of the population mean m: it will be "correct on average" in many
samples.
Standard deviation of the sampling distribution of xBar:
The standard deviation of the sampling distribution measures how much the
sample statistic xBar varies from sample to sample. It is smaller than the
standard deviation of the population by a factor of √n. This means that
averages are less variable than individual observations.
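Both facts are easy to check by simulation. This is a minimal sketch assuming a hypothetical normal population with m = 100 and s = 15 (values invented for the demo):

```python
import random
from statistics import mean, stdev

random.seed(2)

# Assumed population: normal with m = 100, s = 15; samples of size n = 25.
m, s, n = 100, 15, 25

# Draw many samples of size n and record xBar for each.
xbars = [mean(random.gauss(m, s) for _ in range(n)) for _ in range(2000)]

print(mean(xbars))   # close to m = 100: xBar is unbiased
print(stdev(xbars))  # close to s / sqrt(n) = 15 / 5 = 3
```

The sample means vary, but only a fifth as much as individual observations do, exactly as the s/√n formula predicts for n = 25.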
The law of large numbers (page 273)
Law of large numbers: As the number of randomly-drawn observations
(n) in a sample increases:
for quantitative data: the mean of the sample (xBar) gets closer and closer
to the population mean m
for categorical data: the sample proportion (pHat) gets closer and closer
to the population proportion p
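A minimal sketch of the law of large numbers, using rolls of a fair die (population mean m = 3.5) as an invented example:

```python
import random
from statistics import mean

random.seed(3)

# Population: rolls of a fair die, with population mean m = 3.5.
rolls = [random.randint(1, 6) for _ in range(100_000)]

# The sample mean xBar drifts toward m as the sample size n grows.
for n in (10, 100, 1_000, 100_000):
    print(n, mean(rolls[:n]))
```

Each printed mean is typically closer to 3.5 than the one before it; with n = 100,000, xBar is usually within a few hundredths of m.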
For normally distributed populations:
When a variable in a population is normally distributed, then the
sampling distribution of xBar for all possible samples of size n is
also normally distributed.
If the population is N(m, s), then the sample means distribution is
N(m, s/√n).
(Figure: population distribution and sampling distribution of xBar for
X = "odor threshold", normally distributed in some population.)
Amazingly, the sampling distribution of xBar is approximately normal
regardless of whether the population is normal or not. This remarkable
fact is known as the Central Limit Theorem:
The central limit theorem (page 281)
Central Limit Theorem: When randomly sampling from any population
with mean m and standard deviation s, when n is large enough, the
sampling distribution of xBar is approximately normal: N(m, s/√n).
(Figure 11.5, page 283: a population with a strongly skewed distribution,
and the sampling distribution of xBar for n = 2, n = 10, and n = 25
observations.)
Averages are more normal than individual measurements!
How large is “large enough” for the CLT?
It depends on the population distribution. More observations are
required if the population distribution is far from normal.
A sample size of 25 is generally enough to obtain a normal sampling
distribution despite strong skewness or even mild outliers.
A sample size of 40 will typically be good enough to overcome extreme
skewness and outliers.
In many cases, n = 25 isn’t a huge sample. Thus,
even for strange population distributions we can
assume a normal sampling distribution of the
mean, and work with it to solve problems.
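A minimal simulation of the CLT at work, with an exponential population (strongly right-skewed, mean 1 and standard deviation 1) standing in as an invented example:

```python
import random
from statistics import mean, pstdev

random.seed(4)

# Strongly right-skewed population: exponential with mean 1 and sd 1.
def xbar(n):
    return mean(random.expovariate(1.0) for _ in range(n))

# Sampling distribution of xBar for n = 25: roughly N(1, 1/sqrt(25)) = N(1, 0.2).
xbars = [xbar(25) for _ in range(5000)]
print(mean(xbars))    # close to 1
print(pstdev(xbars))  # close to 0.2

# Rough normality check: about 95% of the xBar values land within
# m plus or minus 2 * s/sqrt(n), i.e. between 0.6 and 1.4.
inside = sum(1 for x in xbars if 0.6 < x < 1.4) / len(xbars)
print(inside)
```

Even though no individual observation comes from a normal distribution, the averages of 25 observations behave very nearly like a N(1, 0.2) variable.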
IQ scores: population vs. sample
In a large population of adults, IQ scores have mean 112 with standard
deviation 20. Suppose 200 adults are randomly selected for a market research
campaign.
The distribution of the sample mean IQ is
A) exactly normal, mean 112, standard deviation 20.
B) approximately normal, mean 112, standard deviation 20.
C) approximately normal, mean 112, standard deviation 1.414.
D) approximately normal, mean 112, standard deviation 0.1.
C) approximately normal, mean 112, standard deviation 1.414.
Population distribution: N (m = 112; s = 20)
Sampling distribution for n = 200 is N (m = 112; s /√n = 1.414)
What if IQ scores are normally distributed for the population?
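The standard deviation in answer C comes straight from the s/√n formula:

```python
from math import sqrt

# IQ example: population mean m = 112, s = 20, sample size n = 200.
m, s, n = 112, 20, 200
print(s / sqrt(n))  # about 1.414, the standard deviation of xBar
```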
Application
Hypokalemia is diagnosed when blood potassium levels are low, below
3.5mEq/dl. Let’s assume that we know a patient whose measured
potassium levels vary daily according to a normal distribution N(m = 3.8,
s = 0.2).
If only one measurement is made, what's the probability that this patient
will be misdiagnosed hypokalemic?
If instead measurements are taken on four separate days and then
averaged, what is the probability of such a misdiagnosis?
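Both probabilities follow from normal calculations with the given m = 3.8 and s = 0.2; here is a sketch using the standard library's NormalDist:

```python
from math import sqrt
from statistics import NormalDist

m, s, threshold = 3.8, 0.2, 3.5

# One measurement: P(X < 3.5) for X ~ N(3.8, 0.2), i.e. z = -1.5.
p1 = NormalDist(m, s).cdf(threshold)
print(round(p1, 4))   # about 0.0668

# Average of four measurements: xBar ~ N(3.8, 0.2/sqrt(4)) = N(3.8, 0.1),
# so the threshold now sits at z = -3.
p4 = NormalDist(m, s / sqrt(4)).cdf(threshold)
print(round(p4, 5))   # about 0.00135
```

Averaging four measurements shrinks the misdiagnosis probability from about 6.7% to about 0.1%: averages are less variable than individual observations.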
Let’s Work Some Problems!
Problem 11.9, page 280
Problem 11.11, page 285
Problem 11.13, page 286
Practical note
Large samples are not always attainable.
Sometimes the cost, difficulty, or preciousness of what is studied limits
drastically any possible sample size.
Blood samples/biopsies: no more than a handful of repetitions
acceptable. Often we even make do with just one.
Opinion polls have a limited sample size due to time and cost of
operation. During election times, though, sample sizes are increased
for better accuracy.
Not all variables are normally distributed.
Income, for example, is typically strongly skewed.
Is xBar still a good estimator of m then?
Income distribution
Let’s consider the very large database of individual incomes from the Bureau of
Labor Statistics as our population. It is strongly right-skewed.
We take 1000 SRSs of 100 incomes, calculate the sample mean for
each, and make a histogram of these 1000 means.
We also take 1000 SRSs of 25 incomes, calculate the sample mean for
each, and make a histogram of these 1000 means.
Which histogram corresponds to the samples of size 100? To the samples of
size 25?
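A sketch of the comparison, with a lognormal distribution standing in for the skewed income population (the distribution and its parameters are assumptions for the demo, not BLS data):

```python
import random
from statistics import mean, pstdev

random.seed(5)

# Assumed stand-in for the right-skewed income population.
def income():
    return random.lognormvariate(10, 1)

# 1000 sample means for n = 100 and for n = 25.
means_100 = [mean(income() for _ in range(100)) for _ in range(1000)]
means_25 = [mean(income() for _ in range(25)) for _ in range(1000)]

# The n = 100 means are less spread out, by about sqrt(100/25) = 2,
# and their histogram looks more normal than that of the n = 25 means.
print(pstdev(means_25) / pstdev(means_100))  # roughly 2
```

The narrower, more symmetric histogram is the one for samples of size 100; the wider, still somewhat right-skewed one comes from samples of size 25.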
Further properties
The Central Limit Theorem is valid as long as we are sampling many
small random events, even if the events have different distributions (as
long as no one random event has an overwhelming influence).
Why is this important?
It explains why so many variables are normally distributed.
Example: Height seems to be determined by a large number of
genetic and environmental factors, like nutrition.
So height is very much like our sample mean xBar. The "individuals" are
genes and environmental factors. Your height is a mean.
Now we have a better idea of why the density curve for height has this
shape.
Statistical process control
Industrial processes tend to have normally distributed variability, in part
as a consequence of the central limit theorem applying to the sum of
many small influential factors. Random samples taken over time can
thus be used to easily verify that a given process is not getting out of
“control.”
What is statistical control?
A variable that continues to be described by the same distribution when
observed over time is said to be in statistical control, or simply in
control.
Process-monitoring
What are the required conditions?
We measure a quantitative variable x that has a normal distribution.
The process has been operating in control for a long period, so that we
know the process mean µ and the process standard deviation σ that
describe the distribution of x as long as the process remains in control.
An xBar control chart displays the average of samples of size n taken at
regular intervals from such a process. It is a way to monitor the process
and alert us when it has been disturbed so that it is now out of control.
This is a signal to find and correct the cause of the disturbance.
xBar control charts
For a process with known mean µ and standard deviation σ, we calculate the
mean xBar of samples of constant size n taken at regular intervals.
Plot xBar (vertical axis) against time (horizontal axis).
Draw a horizontal center line at µ.
Draw two horizontal control limits at µ ± 3σ/√n (UCL and LCL).
An xBar value that does not fall within the two control limits is evidence
that the process is out of control.
A machine tool cuts circular pieces. A sample of four pieces is
taken hourly, giving these average measurements (in 0.0001
inches from the specified diameter).
Because measurements are made from the specified diameter,
we have a given target µ = 0 for the process mean. The process
standard deviation σ = 0.31. What is going on?
For the xBar chart, the center line is 0 and the control limits are
±3σ/√4 = ±0.465.
Sample | xBar
   1   | −0.14
   2   |  0.09
   3   |  0.17
   4   |  0.08
   5   | −0.17
   6   |  0.36
   7   |  0.30
   8   |  0.19
   9   |  0.48
  10   |  0.29
  11   |  0.48
  12   |  0.55
  13   |  0.50
  14   |  0.37
  15   |  0.69
  16   |  0.47
  17   |  0.56
  18   |  0.78
  19   |  0.75
  20   |  0.49
  21   |  0.79
The process mean has drifted. Maybe the cutting blade is getting dull, or a
screw got a bit loose.
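The out-of-control signals can be read off mechanically; this sketch recomputes the control limits and flags every sample mean that falls outside them:

```python
from math import sqrt

# Target mean, known process sd, and sample size from the example.
mu, sigma, n = 0.0, 0.31, 4
ucl = mu + 3 * sigma / sqrt(n)   # upper control limit, 0.465
lcl = mu - 3 * sigma / sqrt(n)   # lower control limit, -0.465

xbar = [-0.14, 0.09, 0.17, 0.08, -0.17, 0.36, 0.30, 0.19, 0.48, 0.29, 0.48,
        0.55, 0.50, 0.37, 0.69, 0.47, 0.56, 0.78, 0.75, 0.49, 0.79]

# Flag samples (numbered from 1) whose mean lies outside the control limits.
out = [i + 1 for i, x in enumerate(xbar) if not (lcl <= x <= ucl)]
print(out)  # the later samples exceed the UCL: the mean has drifted upward
```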
Summary: the Big Ideas from Ch. 11
(Diagram: random sampling takes us from the Population, described
numerically by parameters m and s, to a Sample, described numerically by
statistics xBar and s.)
Parameters are fixed, but unknown (usually).
Statistics are random, but known.
We want to know the parameters.
We use statistics to estimate the parameters.
Summary: the Big Ideas from Ch. 11
Chapter 11 focuses on the following problem:
How can we use xBar to estimate m?
Three Big Ideas give the answer:
Law of Large Numbers : as the sample size n gets larger and larger,
the sample mean gets closer and closer to the population mean.
2-number summary for the sample mean: Let X be the basic measurement, with
a given (population) mean m(X) and standard deviation s(X). Then the
2-number summary for the sample mean (with sample size n) is
mean(xBar) = m(X),   sd(xBar) = s(X)/√n
Central Limit Theorem: as the sample size n gets larger and larger, the
distribution of the sample mean becomes normal:
xBar ~ N(m(X), s(X)/√n)