Estimation / Confidence Intervals for the Population Mean


One Sample Means Test: What if σ is unknown? (sampling from a normal population)

We know that:

$\dfrac{\bar{y} - \mu_0}{\sigma/\sqrt{n}} \sim N(0,1)$

What is the distribution of:

$t = \dfrac{\bar{y} - \mu_0}{s/\sqrt{n}}$ ?

If the sample came from a normal distribution, t has a t-distribution with n - 1 degrees of freedom. Properties of the t-distribution:
1) Symmetric about 0.
2) Looks like a standard normal density, only more spread out.
3) The spread of the distribution is indexed to a parameter called the
degrees of freedom (df).
4) As the degrees of freedom increase, the t-distribution gets closer to the
standard normal distribution. (Safe to use z instead of t when n>30.)
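As a quick illustration of properties 2 through 4, the sketch below (plain base R, plotting range chosen arbitrarily) overlays two t densities on the standard normal:

# Overlay t densities on the standard normal: heavier tails for small df,
# nearly indistinguishable from N(0,1) once df is around 30.
x <- seq(-4, 4, length.out = 200)
plot(x, dnorm(x), type = "l", ylab = "density")   # standard normal
lines(x, dt(x, df = 2), lty = 2)                  # t with 2 df
lines(x, dt(x, df = 30), lty = 3)                 # t with 30 df
legend("topright", legend = c("N(0,1)", "t(2)", "t(30)"), lty = 1:3)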
Tail probabilities of the t-distribution (see Table 3 of Ott & Longnecker).

[Figure: densities of N(0,1), t(5), and t(2), with their 95th percentiles marked.]

df        .1       .05      .025     .01      .005     .001
1         3.078    6.314    12.706   31.821   63.657   318.309
5         1.476    2.015    2.571    3.365    4.032    5.893
10        1.372    1.812    2.228    2.764    3.169    4.144
15        1.341    1.753    2.131    2.602    2.947    3.733
20        1.325    1.725    2.086    2.528    2.845    3.552
25        1.316    1.708    2.060    2.485    2.787    3.450
30        1.310    1.697    2.042    2.457    2.750    3.385
40        1.303    1.684    2.021    2.423    2.704    3.307
N(0,1)    1.282    1.645    1.960    2.326    2.576    3.090
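These critical values can also be regenerated in R with qt() and qnorm() (a small sketch, not part of the original table):

alpha <- c(.10, .05, .025, .01, .005, .001)
df    <- c(1, 5, 10, 15, 20, 25, 30, 40)
round(t(sapply(df, function(d) qt(1 - alpha, d))), 3)  # one row per df
round(qnorm(1 - alpha), 3)                             # the N(0,1) row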
Rejection Regions for hypothesis tests using
t-distribution critical values
H0: μ = μ0.  For Pr(Type I error) = α and df = n - 1, the test statistic is

$t = \dfrac{\bar{y} - \mu_0}{s/\sqrt{n}}$

1. HA: μ > μ0:  Reject H0 if $t > t_{\alpha,\,n-1}$
2. HA: μ < μ0:  Reject H0 if $t < -t_{\alpha,\,n-1}$
3. HA: μ ≠ μ0:  Reject H0 if $|t| > t_{\alpha/2,\,n-1}$
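A minimal worked check of case 1 above in R, using made-up summary numbers (ybar, s, n, mu0 are hypothetical, not from the slides):

ybar <- 53.1; s <- 6.2; n <- 16; mu0 <- 50; alpha <- 0.05
t_stat <- (ybar - mu0) / (s / sqrt(n))   # test statistic
t_crit <- qt(1 - alpha, df = n - 1)      # t_{alpha, n-1}
t_stat > t_crit                          # TRUE means reject H0 in favor of mu > mu0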
Degrees of Freedom
Why are the degrees of freedom only n - 1 and not n?
We start with n independent pieces of information with which we estimate the sample mean:

$\bar{y} = \dfrac{1}{n}\sum_{i=1}^{n} y_i$

Now consider the sample variance:

$s^2 = \dfrac{1}{n-1}\sum_{i=1}^{n} (y_i - \bar{y})^2$

Because the deviations $y_i - \bar{y}$ sum to zero, if we know n - 1 of these deviations we can figure out the nth deviation. Hence only n - 1 independent deviations are available to estimate the variance (and standard deviation). That is, there are only n - 1 pieces of information available to estimate the standard deviation after we "spend" one to estimate the sample mean. The t-distribution is a normal distribution adjusted for an unknown standard deviation, so it is logical that it accommodates the fact that only n - 1 pieces of information are available.
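A small R illustration of this point, using a made-up sample:

y   <- c(38.2, 41.5, 44.0, 36.9, 42.3)   # hypothetical data
dev <- y - mean(y)
sum(dev)                                  # deviations always sum to zero
-sum(dev[-length(dev)])                   # so the last deviation is determined
dev[length(dev)]                          #   by the first n-1 of them
var(y)                                    # R's var() divides by n-1
sum(dev^2) / (length(y) - 1)              # same value computed directly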
Confidence Interval for μ when σ is unknown (samples are assumed to come from a normal population)

$\bar{y} \pm t_{\alpha/2,\,n-1}\,\dfrac{s}{\sqrt{n}}$

with df = n - 1 and confidence coefficient (1 - α). (Can use $z_{\alpha/2}$ if n > 30.)

Example: Compute a 95% CI for μ given $\bar{y} = 40.1$, $s = 5.6$, $n = 9$.

Here α = .05, α/2 = .025, df = n - 1 = 8, and $t_{.025,8} = 2.306$ (compare $z_{.025} = 1.960$, which would give 40.1 ± 3.659).

$\bar{y} \pm t_{.025,8}\,\dfrac{s}{\sqrt{n}} = 40.1 \pm 2.306\,\dfrac{5.6}{\sqrt{9}} = 40.1 \pm 4.304$
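The same interval can be computed in R from the summary statistics (a sketch using only the numbers above):

ybar <- 40.1; s <- 5.6; n <- 9; alpha <- 0.05
tcrit <- qt(1 - alpha/2, df = n - 1)     # 2.306
ybar + c(-1, 1) * tcrit * s / sqrt(n)    # 40.1 +/- 4.30, i.e. about (35.8, 44.4)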
One Sample Median Confidence Interval
If the data are not normal (perhaps skewed) and we have a small sample, then a nonparametric method can be used to make inferences about the median. First order the data from smallest to largest:

$y_1, \ldots, y_n \;\xrightarrow{\text{sort}}\; y_{(1)} \le \cdots \le y_{(n)}$

Then a 100(1 - α)% CI for the population median is $(y_{(L)}, y_{(U)})$, where $L = C_{\alpha(2),n} + 1$, $U = n - C_{\alpha(2),n}$, and $C_{\alpha(2),n}$ is obtained from Table 4.
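When Table 4 is not at hand, $C_{\alpha(2),n}$ can be obtained from the Binomial(n, 1/2) distribution. The sketch below follows that route (the data and the function name median_ci are made up for illustration):

median_ci <- function(y, conf = 0.95) {
  y <- sort(y); n <- length(y); alpha <- 1 - conf
  # largest C with P(B <= C) <= alpha/2 for B ~ Binomial(n, 1/2);
  # this plays the role of the tabled C_alpha(2),n (possibly conservative)
  C <- qbinom(alpha / 2, n, 0.5) - 1
  c(lower = y[C + 1], upper = y[n - C])
}
y <- c(3.1, 4.7, 2.2, 5.9, 3.8, 4.1, 2.9, 6.3, 3.5)   # made-up sample
median_ci(y)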
The Sign Test: One Sample Median Test
A corresponding nonparametric test for the population median (M) can be developed along similar lines. To test:

1. H0: M ≤ M0 vs. HA: M > M0
2. H0: M ≥ M0 vs. HA: M < M0
3. H0: M = M0 vs. HA: M ≠ M0

The test statistic is B, the number of data points greater than M0. (If the null is true, then B should be approximately n/2.)

With critical values obtained from Table 4, reject the null hypothesis if:

1. $B \ge n - C_{\alpha(1),n}$
2. $B \le C_{\alpha(1),n}$
3. $B \le C_{\alpha(2),n}$ or $B \ge n - C_{\alpha(2),n}$
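Because B has a Binomial(n, 1/2) distribution under the null, the sign test can be carried out in R with binom.test(). A sketch with made-up data, testing case 1 with M0 = 0:

y  <- c(-1.2, 0.8, 2.5, 3.1, -0.4, 1.9, 0.7, 2.2)   # hypothetical data
M0 <- 0
y  <- y[y != M0]                  # observations tied with M0 are dropped
B  <- sum(y > M0)                 # test statistic
binom.test(B, length(y), p = 0.5, alternative = "greater")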
The Level of Significance of a Statistical Test (p-value)
• Suppose the result of a statistical test you carry out is to reject the Null.
• Someone reading your conclusions might ask: “How close were you to not
rejecting?”
• Solution: Report a value that summarizes the weight of evidence in favor of Ho,
on a scale of 0 to 1. This is the p-value. The larger the p-value, the more
evidence in favor of Ho.
Formal Definition: The p-value of a test is the probability of observing a value of the test statistic that is as extreme as, or more extreme (toward Ha) than, the value actually observed, under the assumption that Ho is true. (This is just the probability of a Type I error for the observed test statistic.)
Rejection Rule: Having decided upon a Type I error probability α, reject Ho if p-value < α.
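For the one-sample t test this probability comes straight from the t distribution. A minimal sketch with made-up numbers, testing Ha: μ < μ0:

ybar <- 38.5; s <- 5.6; n <- 9; mu0 <- 40     # hypothetical summary values
t_obs <- (ybar - mu0) / (s / sqrt(n))          # observed test statistic
pt(t_obs, df = n - 1)                          # p-value = P(T <= t_obs) under Ho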
Equivalence between confidence intervals and
hypothesis tests
Rejecting the two-sided null Ho: μ = μ0 at level α
is equivalent to
μ0 falling outside a (1 - α)100% C.I. for μ.

Rejecting the one-sided null Ho: μ ≥ μ0 (in favor of Ha: μ < μ0)
is equivalent to
μ0 being greater than the upper endpoint of a (1 - 2α)100% C.I. for μ,
or μ0 falling outside a one-sided (1 - α)100% C.I. for μ with -infinity as lower bound.

Rejecting the one-sided null Ho: μ ≤ μ0 (in favor of Ha: μ > μ0)
is equivalent to
μ0 being smaller than the lower endpoint of a (1 - 2α)100% C.I. for μ,
or μ0 falling outside a one-sided (1 - α)100% C.I. for μ with +infinity as upper bound.
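A quick numerical check of the two-sided equivalence in R (simulated data; the particular numbers are arbitrary):

set.seed(1)
y   <- rnorm(25, mean = 1, sd = 2)            # simulated sample
fit <- t.test(y, mu = 0, conf.level = 0.95)
fit$p.value < 0.05                            # reject Ho: mu = 0 at alpha = .05?
0 < fit$conf.int[1] | 0 > fit$conf.int[2]     # 0 outside the 95% CI? (same answer)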
Example: Practical Significance vs. Statistical Significance
Dr. Quick and Dr. Quack are both in the business of selling diets, and they
have claims that appear contradictory. Dr. Quack studied 500 dieters and
claims,
A statistical analysis of my dieters shows a significant weight loss for my
Quack diet.
The Quick diet, by contrast, shows no significant weight loss by its dieters. Dr.
Quick followed the progress of 20 dieters and claims,
A study shows that on average my dieters lose 3 times as much weight
on the Quick diet as on the Quack diet.
So which claim is right? To decide which diets achieve a significant weight loss
we should test:
Ho: μ ≥ 0  vs.  Ha: μ < 0

where μ is the mean weight change (after minus before) achieved by dieters on each of the two diets. (Note: since we don't know σ, we should do a t-test.)
MTB output for Quick diet analysis (Stat → Basic Stats → 1-Sample t)

One-Sample T: Quick
Test of mu = 0 vs < 0

                                               95% Upper
Variable   N      Mean     StDev   SE Mean         Bound      T      P
Quick     20  -3.02119  34.16614   7.63978      10.18901  -0.40  0.348

Calculating power for mean = null + difference
Alpha = 0.05  Assumed standard deviation = 35

             Sample
Difference     Size      Power
         3       20  0.0219603

Stat → Nonparametrics → 1-Sample Sign

Sign Test for Median: Quick
Sign test of median = 0.00000 versus < 0.00000

        N  Below  Equal  Above       P  Median
Quick  20     11      0      9  0.4119  -5.036

Sign confidence interval for median

                      Achieved    Confidence Interval
        N   Median  Confidence     Lower      Upper   Position
Quick  20   -5.036      0.8847   -13.129      4.038          7
                        0.9500   -24.126      4.219        NLI
                        0.9586   -27.509      4.274          6
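For comparison, the same analysis could be run in R, assuming the 20 Quick weight changes were read into a vector called quick (the raw data are not reproduced here):

t.test(quick, alternative = "less", mu = 0, conf.level = 0.95)   # one-sample t test
B <- sum(quick > 0)                                              # sign test by hand
binom.test(B, sum(quick != 0), p = 0.5, alternative = "less")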
[Figure: Boxplot of Quick, with Ho and the 95% t-confidence interval for the mean marked.]

[Figure: Histogram of Quick, with Ho and the 95% t-confidence interval for the mean marked.]
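Rough R versions of these two plots, again assuming the data sit in a vector quick:

ci <- t.test(quick, mu = 0)$conf.int
boxplot(quick, horizontal = TRUE, main = "Boxplot of Quick")
abline(v = c(0, ci), lty = c(2, 1, 1))            # Ho value and 95% CI limits
hist(quick, main = "Histogram of Quick", xlab = "Quick")
abline(v = c(0, ci), lty = c(2, 1, 1))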
R output for Quack diet analysis (Read 500 values into vector “quack”)
> t.test(quack,alternative=c("less"),mu=0,conf.level=0.95)
One Sample t-test
data: quack
t = -1.7806, df = 499, p-value = 0.03779
alternative hypothesis: true mean is less than 0
95 percent confidence interval:
-Inf -0.09036075
sample estimates:
mean of x
-1.212730
> power.t.test(n=500,delta=1,sd=15,type="one.sample",
alternative="one.sided")
n = 500, delta = 1, power = 0.438
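The one-sided upper bound printed above can be recovered from the reported mean and t statistic (a sketch using only the printed values, so the rounding differs slightly):

xbar <- -1.212730; t_obs <- -1.7806; n <- 500
se <- xbar / t_obs                        # standard error of the mean
xbar + qt(0.95, df = n - 1) * se          # roughly -0.09, matching the output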
Summary

1. Quick's average weight loss of 3.02 is almost 3 times as much as the 1.21 weight loss reported by Quack.

2. However, Quack's small weight loss was significant, whereas Quick's larger weight loss was not! So Quack might not have a better diet, but he has more evidence: 500 cases compared to 20.

Remarks

1. Significance is about evidence, and having a large sample size can make up for having a small effect.

2. If you have a large enough sample size, even a small difference can be significant. If your sample size is small, even a large difference may not be significant.

3. Quick needs to collect more cases, and then he can easily dominate the Quack diet (though it seems like even a 3-pound loss may not be enough of a practical difference to a dieter).

4. Both the Quick & Quack statements are somewhat empty. It's not enough to report an estimate without a measure of its variability. It's not enough to report significance without an estimate of the difference. A confidence interval solves these problems.
A confidence interval shows both statistical and practical significance.
Quack two- and one-sided 95% CIs:

$\bar{y} \pm z_{.025}\,\dfrac{s}{\sqrt{n}} = -1.21 \pm 1.96\,\dfrac{15.2}{\sqrt{500}} = (-2.54,\ 0.12)$

$\left(-\infty,\ \bar{y} + z_{.05}\,\dfrac{s}{\sqrt{n}}\right) = (-\infty,\ -0.09)$

The one-sided CI says the mean is significantly less than zero.

Quick two- and one-sided 95% CIs:

$\bar{y} \pm t_{.025,19}\,\dfrac{s}{\sqrt{n}} = -3.02 \pm 2.093\,\dfrac{34.17}{\sqrt{20}} = (-19.01,\ 12.97)$

$\left(-\infty,\ \bar{y} + t_{.05,19}\,\dfrac{s}{\sqrt{n}}\right) = (-\infty,\ 10.19)$

The one-sided CI says the mean is NOT significantly less than zero.
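All four intervals can be reproduced in R from the summary statistics (a sketch using only the numbers shown above):

ci <- function(ybar, s, n, crit) ybar + c(-1, 1) * crit * s / sqrt(n)
ci(-1.21, 15.2, 500, qnorm(0.975))            # Quack two-sided: (-2.54, 0.12)
-1.21 + qnorm(0.95) * 15.2 / sqrt(500)        # Quack one-sided bound: -0.09
ci(-3.02, 34.17, 20, qt(0.975, df = 19))      # Quick two-sided: (-19.01, 12.97)
-3.02 + qt(0.95, df = 19) * 34.17 / sqrt(20)  # Quick one-sided bound: 10.19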