Transcript: Chapter 8, part B
Chapter 8, continued...
III. Interpretation of Confidence Intervals
Remember, we don’t know the population mean. We
take a sample to estimate µ, then construct a
confidence interval (CI) to provide some measure
of accuracy for that estimate.
An accurate interpretation for a 95% CI:
“Before sampling, there is a 95% chance that the interval

    x̄ ± 1.96 · σ/√n

will include µ.”
More interpretation.
In other words, if 100 samples are taken, each of size
n, on average 95 of these intervals will contain µ.
Important: this statement can only be made before
we sample, while x-bar is still an undetermined
random variable. After we sample, x-bar is a fixed
number, no longer a random variable, so no probability
statement applies to the one interval we compute.
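To make the repeated-sampling idea concrete, here is a minimal simulation sketch. The population mean, standard deviation, and sample size are assumed values chosen only for illustration: the code draws many samples, builds the interval x̄ ± 1.96·σ/√n for each, and counts how often µ is captured.

```python
# Minimal sketch: repeated sampling from a population with known mu and sigma.
# Roughly 95% of the intervals x-bar +/- 1.96*sigma/sqrt(n) should contain mu.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, reps = 50.0, 10.0, 100, 10_000   # assumed illustration values

covered = 0
for _ in range(reps):
    sample = rng.normal(mu, sigma, size=n)
    xbar = sample.mean()
    half_width = 1.96 * sigma / np.sqrt(n)
    if xbar - half_width <= mu <= xbar + half_width:
        covered += 1

print(f"coverage over {reps} samples: {covered / reps:.3f}")   # close to 0.95
```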
An example of interpretation.
Suppose that the CJW company samples 100
customers and finds this month’s customer service
mean is 82, with a population standard deviation
of 20. We wish to construct a 95% confidence
interval. Thus, α = .05 and z.025 = 1.96.
Before vs. After sampling
• Before we sample, there is a 95% chance that µ
will be in the interval x̄ ± 1.96 · σ/√n.
• After sampling we create an interval:
82 ± 3.92, or (78.08 to 85.92).
We can only say that under repeated sampling, 95%
of similarly constructed intervals would contain
the true µ. This one particular interval may or
may not contain µ.
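As a quick check of the arithmetic in the CJW example, a few lines of Python reproduce the 82 ± 3.92 interval; this is just the formula above evaluated at the numbers given in the example.

```python
# Sketch of the CJW calculation: n = 100, x-bar = 82, sigma = 20, z_.025 = 1.96.
import math

xbar, sigma, n, z = 82.0, 20.0, 100, 1.96
margin = z * sigma / math.sqrt(n)      # 1.96 * 20 / 10 = 3.92
lower, upper = xbar - margin, xbar + margin
print(f"{xbar} +/- {margin:.2f}  ->  ({lower:.2f}, {upper:.2f})")   # (78.08, 85.92)
```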
IV. Interval Estimate of µ: Small Sample
A small sample is one in which n<30. If the
population has a normal probability distribution,
we can use the following methods. However, if
you can’t assume a normal population, you must
increase n to at least 30 so the Central Limit
Theorem can be invoked.
A. The t-distribution
William Sealy Gosset (who published under the pseudonym
“Student”) developed the t-distribution. An Oxford graduate in math and
chemistry, he worked for Guinness Brewing in
Dublin and developed a new small-sample theory
of statistics while working on small-scale
materials and temperature experiments. “The
probable error of a mean” was published in 1908,
but it wasn’t until 1925 that Sir Ronald A. Fisher
called attention to it and its many applications.
The idea behind the t.
Each t-distribution is associated with a specific
number of degrees of freedom.
Degrees of freedom: the number of observations allowed
to vary in calculating a statistic; here, n − 1.
As the degrees of freedom increase (i.e., as n grows),
the t-distribution gets closer to the standard normal
distribution.
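This convergence can be seen numerically. The sketch below (assuming scipy is available) prints the upper-.025 critical value of the t-distribution for increasing degrees of freedom and compares it with z.025 = 1.96.

```python
# Sketch: t critical values approach the normal critical value as d.f. grow.
from scipy import stats

for df in (5, 10, 30, 100, 1000):
    print(f"t_.025 with {df:>4} d.f.: {stats.t.ppf(0.975, df):.3f}")
print(f"z_.025 (standard normal): {stats.norm.ppf(0.975):.3f}")   # 1.960
```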
B. An Example.
Suppose n = 20 and you are constructing a 99%
(α = .01) confidence interval:

    x̄ ± t(α/2) · s/√n
First we need to be able to read a t-table to find t.005.
See Table 8.3 in the text.
The t-table.
[Figure: t-distribution curve centered at 0, with the upper-tail area α/2 shaded to the right of t(α/2).]
We need to find t.005 with 19 degrees of freedom in a
t-table like Table 8.3.
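As a cross-check on the table lookup, software gives the same critical value; a brief sketch using scipy.stats:

```python
# Sketch: t_.005 with 19 degrees of freedom (upper-tail area .005 -> quantile .995).
from scipy import stats

print(round(stats.t.ppf(0.995, 19), 3))   # 2.861, matching Table 8.3
```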
Our Example
         Upper-tail area
D.F.     .10      .05      .025     .01      .005
 ...
 18      ...      ...      ...      ...      ...
 19      1.328    1.729    2.093    2.539    2.861
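Putting the example together (n = 20, 99% confidence, t.005 with 19 d.f. = 2.861 from the table), the sketch below computes the interval. The sample mean and sample standard deviation are assumed values for illustration only, since the example does not give them.

```python
# Sketch of the 99% t-interval x-bar +/- t_.005 * s / sqrt(n) for n = 20.
import math

n, t_005 = 20, 2.861                    # 19 d.f., from the t-table
xbar, s = 75.0, 8.0                     # hypothetical sample statistics
margin = t_005 * s / math.sqrt(n)
print(f"{xbar} +/- {margin:.2f}  ->  ({xbar - margin:.2f}, {xbar + margin:.2f})")
```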