Chapter 24 Comparing Means

One sample T Interval
Example: speeding
• 90% confidence interval
• n = 23
• Check conditions
• Model: t_{n-1}
• Confidence interval: 31.0 ± 1.52 = (29.48, 32.52)
• STAT TESTS 8 TInterval
Input: Data or Stats
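As a software cross-check of the calculator steps, here is a minimal Python sketch of a one-sample t-interval from summary statistics. The mean (31.0) and n = 23 come from the slide; the sample standard deviation is not given there, so the value below is an assumed placeholder chosen so that the interval roughly matches the slide's (29.48, 32.52). scipy and numpy are assumed available.

```python
from scipy import stats
import numpy as np

xbar, n = 31.0, 23    # sample mean and sample size from the slide
s = 4.25              # sample SD: NOT given on the slide, assumed for illustration
conf = 0.90

se = s / np.sqrt(n)                            # standard error of the mean
tstar = stats.t.ppf((1 + conf) / 2, df=n - 1)  # critical value, df = n - 1
me = tstar * se                                # margin of error
print(xbar - me, xbar + me)                    # 90% CI for the mean speed
```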
One-sample t-test
• Null hypothesis
• Test statistic
• Model: t_{n-1}
• P-value depends on the alternative hypothesis
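In symbols, the test statistic and model listed here take the standard one-sample form:

t = \frac{\bar{y} - \mu_0}{s/\sqrt{n}}, \qquad df = n - 1

where \mu_0 is the mean specified by the null hypothesis.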
One sample T Test
Example: speeding
• t-statistic = 1.13
• P-value = 0.136 (one-sided)
• STAT TESTS 2: T-TEST (for one sample)
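A quick Python sketch of the P-value calculation for this slide, using only the t-statistic and df = n − 1 = 22 given above (scipy assumed available):

```python
from scipy import stats

t_stat, df = 1.13, 22                 # t-statistic and degrees of freedom from the slide
p_one_sided = stats.t.sf(t_stat, df)  # upper-tail probability P(T > 1.13)
print(round(p_one_sided, 3))          # about 0.135; the slide's 0.136 presumably used an unrounded t
```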
Chapter 24 Comparing Means
Math2200
Example: AA battery
• Brand name vs. generic batteries
• The same CD player, the same CD, volume at 5
• 6 pairs of AA alkaline batteries, randomized run
order
Brand name:  194.0  205.5  199.2  172.4  184.0  169.5
Generic:     190.7  203.5  203.5  206.5  222.5  209.4
Plot the data
• Boxplot
– Generic batteries lasted longer and were more consistent
– Two outliers?
– Is this difference really large enough?
Comparing two means
• Parameter of interest
• Standard error
Comparing Two Means (cont.)
• Because we are working with means and
estimating the standard error of their
difference using the data, we shouldn’t be
surprised that the sampling model is a
Student’s t.
– The confidence interval we build is called a
two-sample t-interval (for the difference in
means).
– The corresponding hypothesis test is called a
two-sample t-test.
Sampling Distribution for the
Difference Between Two Means
• When the conditions are met, the standardized
sample difference between the means of two
independent groups
t = \frac{(\bar{y}_1 - \bar{y}_2) - (\mu_1 - \mu_2)}{SE(\bar{y}_1 - \bar{y}_2)}
can be modeled by a Student’s t-model with a
number of degrees of freedom found with a special
formula.
• We estimate the standard error with
SE(\bar{y}_1 - \bar{y}_2) = \sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}
A two-sample t-interval
• Margin of error
– ME = t*_{df} × SE(ȳ1 − ȳ2)
– What degrees of freedom?
• Confidence interval
What is df?
Between min(n1 − 1, n2 − 1) and n1 + n2 − 2
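The "special formula" for df mentioned above is commonly written as the Welch–Satterthwaite approximation, and the interval itself takes the usual form:

(\bar{y}_1 - \bar{y}_2) \pm t^*_{df}\, SE(\bar{y}_1 - \bar{y}_2),
\qquad
df = \frac{\left(\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}\right)^2}{\frac{1}{n_1 - 1}\left(\frac{s_1^2}{n_1}\right)^2 + \frac{1}{n_2 - 1}\left(\frac{s_2^2}{n_2}\right)^2}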
Assumptions and Conditions
• Independence
– Randomization
– 10% condition
• Normal population assumption
– Nearly normal condition
– n<15, do not use these methods if seeing severe
skewness
– n<40, mildly skewness is OK. But should remark
outliers
– n>40, the CLT works well. The skewness does not
matter much.
• Independent groups assumption
– Think about how the data were collected
Example: AA battery
• Parameter of interest
• Check conditions
[Figure: "Histogram of x" (brand name, values from about 160 to 210) and "Histogram of y" (generic, values from about 190 to 230), with frequency on the vertical axis]
Example: AA battery
• The sampling distribution is t with df=8.98
– Critical value for 95% CI: t* ≈ 2.26
– 95% CI: (206.0 − 187.4) ± 16.5 = 18.6 ± 16.5 = (2.1, 35.1)
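The numbers on this slide can be reproduced with a short Python sketch of the Welch (unpooled) two-sample interval, using the battery data from the table earlier in the chapter (scipy/numpy assumed available):

```python
import numpy as np
from scipy import stats

brand   = np.array([194.0, 205.5, 199.2, 172.4, 184.0, 169.5])  # brand-name batteries
generic = np.array([190.7, 203.5, 203.5, 206.5, 222.5, 209.4])  # generic batteries

diff = generic.mean() - brand.mean()          # about 206.0 - 187.4 = 18.6
v1 = brand.var(ddof=1) / len(brand)           # s1^2 / n1
v2 = generic.var(ddof=1) / len(generic)       # s2^2 / n2
se = np.sqrt(v1 + v2)                         # standard error of the difference

# Welch-Satterthwaite degrees of freedom (about 8.98)
df = (v1 + v2)**2 / (v1**2 / (len(brand) - 1) + v2**2 / (len(generic) - 1))

tstar = stats.t.ppf(0.975, df)                # critical value for a 95% CI
print(diff - tstar * se, diff + tstar * se)   # roughly (2.1, 35.1)
```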
Testing the difference between two means
• Price offered for a used camera when buying from a friend vs. buying from a stranger. Does friendship have a measurable effect on pricing?

Buying from a friend:    275  300  260  300  255  275  290  300
Buying from a stranger:  260  250  175  130  200  225  240

[Figure: plots of the offered prices for the two groups, price axis from about 150 to 300]
Two-sample t-test
• Null hypothesis
• T-test statistic
• Standard error
• df (by the complicated formula)
• P-value (one-sided or two-sided)
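In symbols, with the usual null hypothesis of no difference between the group means:

H_0: \mu_1 - \mu_2 = 0,
\qquad
t = \frac{(\bar{y}_1 - \bar{y}_2) - 0}{SE(\bar{y}_1 - \bar{y}_2)},
\qquad
SE(\bar{y}_1 - \bar{y}_2) = \sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}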
Example: friend vs. stranger
• Specify hypotheses
• Check conditions (boxplots)
• When conditions are satisfied, do a two-sample t-test
– Observed difference: 281.88 − 211.43 = 70.45
– se = 18.70
– df = 7.62
– P-value = 0.006 (two-sided)
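A Python sketch of this calculation, using the friend and stranger prices from the table above (scipy/numpy assumed available); the Welch/unpooled version corresponds to the df shown on the slide:

```python
import numpy as np
from scipy import stats

friend   = np.array([275, 300, 260, 300, 255, 275, 290, 300], dtype=float)
stranger = np.array([260, 250, 175, 130, 200, 225, 240], dtype=float)

print(friend.mean() - stranger.mean())     # observed difference, about 70.45

# Welch (unpooled) two-sample t-test; p-value is two-sided by default
t_stat, p_two_sided = stats.ttest_ind(friend, stranger, equal_var=False)
print(t_stat, p_two_sided)                 # t is about 3.77, p is about 0.006
```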
Back Into the Pool
• Remember that when we know a
proportion, we know its standard deviation.
– Thus, when testing the null hypothesis that
two proportions were equal, we could assume
their variances were equal as well.
– This led us to pool our data for the hypothesis
test.
Back Into the Pool (cont.)
• For means, there is also a pooled t-test.
– Like the two-proportions z-test, this test
assumes that the variances in the two groups
are equal.
– But, be careful, there is no link between a
mean and its standard deviation…
Back Into the Pool (cont.)
• If we are willing to assume or we are told that
the variances of two means are equal, we can
pool the data from two groups to estimate the
common variance and make the degrees of
freedom formula much simpler.
• We are still estimating the pooled standard deviation from the data, so we use Student's t-model, and the test is called a pooled t-test.
The Pooled t-Test
• Estimate of the common variance
• se of the sample mean difference
• t-statistic
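In symbols, the standard forms of these three quantities, consistent with the earlier notation, are:

s_{pooled}^2 = \frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2},
\qquad
SE_{pooled}(\bar{y}_1 - \bar{y}_2) = s_{pooled}\sqrt{\frac{1}{n_1} + \frac{1}{n_2}},
\qquad
t = \frac{(\bar{y}_1 - \bar{y}_2) - (\mu_1 - \mu_2)}{SE_{pooled}(\bar{y}_1 - \bar{y}_2)}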
The Pooled t-Test
• Df = n1 + n2 – 2
• Confidence interval

(\bar{y}_1 - \bar{y}_2) \pm t^*_{df}\, SE_{pooled}(\bar{y}_1 - \bar{y}_2)
When should we pool?
• Most of the time, the difference is slight
• There is a test that can test this condition, but it
is very sensitive to failure of assumptions and
does not work well for small samples.
• In a comparative randomized experiment, experimental units are usually drawn from the same population. If the treatment is thought to change only the mean and not the variance, assuming equal variances is reasonable.
TI-83 Plus
– STAT TESTS + 0: 2-SampTInt
• Data: 2 Lists or STATS: Mean, sd, size of each
sample
• Whether to pool the variance
– STAT TESTS + 4: 2-SampTTest
• One-sided or two-sided
• Two-tail, lower-tail, upper-tail
• Whether to pool the variance
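For comparison with the calculator menus, scipy exposes the same pooled-vs-unpooled choice through the equal_var argument. This is a sketch; the battery data from earlier are reused here only as placeholder input.

```python
from scipy import stats

group1 = [194.0, 205.5, 199.2, 172.4, 184.0, 169.5]   # brand-name batteries (placeholder data)
group2 = [190.7, 203.5, 203.5, 206.5, 222.5, 209.4]   # generic batteries (placeholder data)

# Pooled t-test: assumes equal variances, df = n1 + n2 - 2
print(stats.ttest_ind(group1, group2, equal_var=True))

# Unpooled (Welch) t-test: df from the Welch-Satterthwaite formula
print(stats.ttest_ind(group1, group2, equal_var=False))
```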
What Can Go Wrong?
• Watch out for paired data.
– The Independent Groups Assumption
deserves special attention.
– If the samples are not independent, you can’t
use two-sample methods.
• Look at the plots.
– Check for outliers and non-normal
distributions by making and examining
boxplots.
What have we learned?
• We’ve learned to use statistical inference
to compare the means of two independent
groups.
– We use t-models for the methods in this chapter.
– It is still important to check conditions to see if our
assumptions are reasonable.
– The standard error for the difference in sample means
depends on believing that our data come from
independent groups, but pooling is not the best
choice here.
• The reasoning of statistical inference
remains the same; only the mechanics
change.