
Chapter 13
Testing
Hypotheses
about
Means
Copyright ©2011 Brooks/Cole, Cengage Learning
Hypothesis testing about:
• a population mean
• the difference between means of two populations
Three Cautions:
1. Inference is only valid if the sample is representative
of the population for the question of interest.
2. Hypotheses and conclusions apply to the larger
population(s) represented by the sample(s).
3. If the distribution of a quantitative variable is highly
skewed, consider analyzing the median rather than the
mean – called nonparametric methods (Topic 2 on CD).
13.1 Introduction to
Hypothesis Tests for Means
Steps in Any Hypothesis Test
1. Determine the null and alternative hypotheses.
2. Verify necessary data conditions, and if met,
summarize the data into an appropriate test statistic.
3. Assuming the null hypothesis is true,
find the p-value.
4. Decide whether or not the result is statistically
significant based on the p-value.
5. Report the conclusion in the context of the situation.
13.2: Testing Hypotheses about
One Mean
Step 1: Determine null and alternative hypotheses
1. H0: μ = μ0 versus Ha: μ ≠ μ0 (two-sided)
2. H0: μ = μ0 versus Ha: μ < μ0 (one-sided)
3. H0: μ = μ0 versus Ha: μ > μ0 (one-sided)
Remember a p-value is computed assuming H0 is true,
and μ0 is the value used for that computation.
Step 2: Verify Necessary Data Conditions …
Situation 1: The population of measurements of interest
is approximately normal, and a random sample of
any size is measured. In practice, use this method if the
shape is not notably skewed and there are no extreme outliers.
Situation 2: The population of measurements of interest
is not approximately normal, but a large random
sample (n ≥ 30) is measured. If there are extreme outliers
or extreme skewness, it is better to have a larger sample.
Continuing Step 2: The Test Statistic
The t-statistic is a standardized score for measuring
the difference between the sample mean and the null
hypothesis value of the population mean:
t = (sample mean - null value) / standard error = (x̄ - μ0) / (s/√n)
This t-statistic has (approx) a t-distribution with df = n - 1.
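As a concrete illustration, here is a minimal Python sketch of this calculation; the function name is illustrative, and the numbers plugged in are the summary statistics from Example 13.1 later in this chapter:

from math import sqrt

def one_sample_t(xbar, s, n, mu0):
    """Standardized score comparing the sample mean to the null value mu0."""
    se = s / sqrt(n)          # standard error of the sample mean
    return (xbar - mu0) / se  # t-statistic with df = n - 1

# Summary statistics from Example 13.1 (body temperature):
print(round(one_sample_t(xbar=98.2, s=0.497, n=16, mu0=98.6), 2))  # -3.22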
Step 3: Assuming H0 true, Find the p-value
• For Ha less than, the p-value is the area below t,
even if t is positive.
• For Ha greater than, the p-value is the area above t,
even if t is negative.
• For Ha two-sided, the p-value is 2 × the area above |t| (all three cases are sketched below).
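A sketch of these three p-value calculations using scipy.stats; the t value and df below are the ones from Example 13.1, used purely for illustration:

from scipy import stats

t, df = -3.22, 15                        # illustrative values
p_less    = stats.t.cdf(t, df)           # Ha: mu < mu0 -> area below t
p_greater = stats.t.sf(t, df)            # Ha: mu > mu0 -> area above t
p_two     = 2 * stats.t.sf(abs(t), df)   # Ha: mu != mu0 -> 2 x area above |t|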
Steps 4 and 5: Decide Whether or Not the
Result is Statistically Significant based on
the p-value and Report the Conclusion in
the Context of the Situation
These two steps remain the same for all of the
hypothesis tests considered in this discussion.
Choose a level of significance α, and reject H0
if the p-value is less than (or equal to) α.
Otherwise, conclude that there is not enough
evidence to support the alternative hypothesis.
Example 13.1 Normal Body Temperature
What is normal body temperature? Is it actually
less than 98.6 degrees Fahrenheit (on average)?
Step 1: State the null and alternative hypotheses
H0: μ = 98.6
Ha: μ < 98.6
where μ = mean body temperature in the human population.
Example 13.1 Normal Body Temp (cont)
Data: random sample of n = 16 normal body temps
98.4, 98.6, 98.8, 98.8, 98.0, 97.9, 98.5, 97.6,
98.4, 98.3, 98.9, 98.1, 97.3, 97.8, 98.4, 97.4
Step 2: Verify data conditions …
A boxplot shows no outliers and no strong skewness.
The sample mean of 98.2 is close to the sample median
of 98.35.
Example 13.1 Normal Body Temp (cont)
Step 2: … Summarizing data with a test statistic
Key elements:
Sample statistic: x̄ = 98.200 (under “Mean”)
Standard error: s.e.(x̄) = s/√n = 0.497/√16 = 0.124 (under “SE Mean”)
t = (x̄ - μ0) / s.e.(x̄) = (98.2 - 98.6) / 0.124 = -3.22 (under “T”)
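The same test can be reproduced directly from the raw data listed earlier; a minimal sketch using scipy.stats.ttest_1samp (the alternative= keyword assumes SciPy 1.6 or later):

import numpy as np
from scipy import stats

temps = np.array([98.4, 98.6, 98.8, 98.8, 98.0, 97.9, 98.5, 97.6,
                  98.4, 98.3, 98.9, 98.1, 97.3, 97.8, 98.4, 97.4])

# One-sample t-test of H0: mu = 98.6 versus Ha: mu < 98.6
result = stats.ttest_1samp(temps, popmean=98.6, alternative='less')
print(result.statistic, result.pvalue)   # about t = -3.22, p = 0.003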
Example 13.1 Normal Body Temp (cont)
Step 3: Find the p-value
From output: p-value = 0.003
From Table A.3: p-value is less than 0.004.
Example 13.1 Normal Body Temp (cont)
Step 4: Decide whether or not the result is
statistically significant based on the p-value
Using α = 0.05 as the level of significance criterion,
the results are statistically significant because 0.003,
the p-value of the test, is less than 0.05. In other
words, we can reject the null hypothesis.
Step 5: Report the Conclusion
We can conclude, based on these data, that the mean
temperature in the human population is actually less
than 98.6 degrees.
Rejection Region Approach (Optional)
Replaces Steps 3 and 4 with:
Substitute Step 3: Find the critical value and rejection
region for the test.
Substitute Step 4: If the test statistic is in the rejection
region, conclude that the result is statistically
significant and reject the null hypothesis. Otherwise,
do not reject the null hypothesis.
Note: Rejection region method and p-value method will always
arrive at the same conclusion about statistical significance.
Rejection Region Approach
Summary (use row of Table A.2 corresponding to df)
For Example 13.1, Normal Body Temperature:
The alternative was one-sided to the left, df = 15, and α = 0.05.
The critical value from Table A.2 is -1.75.
The rejection region is t ≤ -1.75. The test statistic was -3.22, so
the null hypothesis is rejected. The same conclusion is reached.
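The same critical value can be looked up in software instead of Table A.2; a small sketch with scipy.stats:

from scipy import stats

alpha, df = 0.05, 15
t_crit = stats.t.ppf(alpha, df)   # left-tail critical value, about -1.75
t_stat = -3.22
print(t_stat <= t_crit)           # True -> t is in the rejection region, reject H0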
13.4: Testing Hypotheses about
Difference between Two Means
Lesson 1: the General (Unpooled) Case
Step 1: Determine null and alternative hypotheses
H0: μ1 - μ2 = 0 versus
Ha: μ1 - μ2 ≠ 0
or Ha: μ1 - μ2 < 0
or Ha: μ1 - μ2 > 0
Watch how Population 1 and 2 are defined.
Step 2: Verify data conditions
and compute the test statistic.
Both n’s are large or no extreme outliers
or skewness in either sample.
Samples are independent.
The t-test statistic is:
t = (sample statistic - null value) / standard error = ((x̄1 - x̄2) - 0) / √(s1²/n1 + s2²/n2)
Steps 3, 4 and 5: Similar to t-test for one mean.
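A minimal sketch of the unpooled (Welch) statistic computed from summary statistics; the function names are illustrative, and the df formula is the Welch-Satterthwaite approximation referred to in the example that follows:

from math import sqrt

def welch_t(xbar1, s1, n1, xbar2, s2, n2):
    """Unpooled two-sample t-statistic for H0: mu1 - mu2 = 0."""
    se = sqrt(s1**2 / n1 + s2**2 / n2)   # unpooled standard error
    return (xbar1 - xbar2) / se

def welch_df(s1, n1, s2, n2):
    """Welch-Satterthwaite approximation to the degrees of freedom."""
    v1, v2 = s1**2 / n1, s2**2 / n2
    return (v1 + v2)**2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))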
Example 13.3 Effect of Stare on Driving
Randomized experiment: Researchers either stared
or did not stare at drivers stopped at a campus stop
sign; Timed how long (sec) it took driver to proceed
from sign to a mark on other side of the intersection.
Question: Does staring speed up crossing times?
Step 1: State the null and alternative hypotheses
H0: μ1 - μ2 = 0 versus Ha: μ1 - μ2 > 0
where 1 = no-stare population and 2 = stare population.
Example 13.3 Effect of Stare (cont)
Data: n1 = 14 no stare and n2 = 13 stare responses
Step 2: Verify data conditions …
No outliers or extreme skewness in either group.
Example 13.3 Effect of Stare (cont)
Step 2: … Summarizing data with a test statistic
Sample statistic: x̄1 - x̄2 = 6.63 - 5.59 = 1.04 seconds
Standard error: s.e.(x̄1 - x̄2) = √(s1²/n1 + s2²/n2) = √(1.36²/14 + 0.822²/13) = 0.43
t = ((x̄1 - x̄2) - 0) / s.e.(x̄1 - x̄2) = (1.04 - 0) / 0.43 = 2.41
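A sketch reproducing this calculation from the summary statistics alone (df = 21 is taken from the Welch approximation reported on the next slide):

from math import sqrt
from scipy import stats

xbar1, s1, n1 = 6.63, 1.36, 14     # no-stare group
xbar2, s2, n2 = 5.59, 0.822, 13    # stare group

se = sqrt(s1**2 / n1 + s2**2 / n2)   # about 0.43
t  = (xbar1 - xbar2) / se            # about 2.41
p  = stats.t.sf(t, df=21)            # one-sided p-value, about 0.013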
Example 13.3 Effect of Stare (cont)
Steps 3, 4 and 5: Determine the p-value and make
a conclusion in context.
The p-value = 0.013, so we reject the null hypothesis;
the results are “statistically significant”.
The p-value is determined using a t-distribution with
df = 21 (df from the Welch approximation formula) and
finding the area to the right of t = 2.41.
From Table A.3: the p-value is between 0.009 and 0.015.
We can conclude that if all drivers were stared at,
the mean crossing times at an intersection would
be faster than under normal conditions.
Lesson 2: Pooled Two-Sample t-Test
Based on the assumption that the two populations have
equal population standard deviations: σ1 = σ2 = σ
Pooled standard deviation: sp = √[((n1 - 1)s1² + (n2 - 1)s2²) / (n1 + n2 - 2)]
Pooled s.e.(x̄1 - x̄2) = sp √(1/n1 + 1/n2)
t = (sample statistic - null value) / pooled standard error = ((x̄1 - x̄2) - 0) / (sp √(1/n1 + 1/n2))
Note: Pooled df = (n1 – 1) + (n2 – 1) = (n1 + n2 – 2).
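A minimal sketch of the pooled version, again from summary statistics (the function name is illustrative):

from math import sqrt

def pooled_t(xbar1, s1, n1, xbar2, s2, n2):
    """Pooled two-sample t-statistic; assumes sigma1 = sigma2."""
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)  # pooled variance
    se = sqrt(sp2) * sqrt(1 / n1 + 1 / n2)                       # pooled standard error
    df = n1 + n2 - 2
    return (xbar1 - xbar2) / se, df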
Guidelines for Using Pooled t-Test
• If sample sizes are equal, pooled and unpooled standard
errors are equal and so t-statistic is same. If sample standard
deviations are similar, assumption of common population
variance is reasonable and pooled procedure can be used.
• If sample sizes are very different, the pooled test can be
quite misleading unless the sample standard deviations are similar.
If the sample sizes are very different and the smaller standard
deviation accompanies the larger sample size, the pooled
procedure is not recommended.
• If sample sizes are very different, standard deviations are
similar, and larger sample size produced the larger standard
deviation, pooled t-test is acceptable and will be conservative.
Example 13.7 Male and Female Sleep Times
Q: Is there a difference between how long female
and male students slept the previous night?
Data: The 83 female and 65 male responses from
students in an intro stat class.
The null and alternative hypotheses are:
H0: μ1 - μ2 = 0 versus Ha: μ1 - μ2 ≠ 0
where 1 = female population and 2 = male population.
Note: Sample sizes similar, sample standard deviations
similar. Use of pooled procedure is warranted.
Example 13.7 Male and Female Sleep Times (cont)
Two-sample T for sleep [without “Assume Equal Variance” option]
Sex      N   Mean  StDev  SE Mean
Female  83   7.02   1.75     0.19
Male    65   6.55   1.68     0.21
95% CI for mu(f) – mu(m): (-0.10, 1.02)
T-Test mu(f) = mu(m) (vs not =): T-Value = 1.62  P = 0.11  DF = 140

Two-sample T for sleep [with “Assume Equal Variance” option]
Sex      N   Mean  StDev  SE Mean
Female  83   7.02   1.75     0.19
Male    65   6.55   1.68     0.21
95% CI for mu(f) – mu(m): (-0.10, 1.03)
T-Test mu(f) = mu(m) (vs not =): T-Value = 1.62  P = 0.11  DF = 146
Both use Pooled StDev = 1.72
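Both versions of this output can be approximated from the summary statistics with scipy.stats.ttest_ind_from_stats; a sketch (small differences from the Minitab values above come from the rounded means and standard deviations):

from scipy import stats

# Female: n = 83, mean = 7.02, sd = 1.75; Male: n = 65, mean = 6.55, sd = 1.68
unpooled = stats.ttest_ind_from_stats(7.02, 1.75, 83, 6.55, 1.68, 65, equal_var=False)
pooled   = stats.ttest_ind_from_stats(7.02, 1.75, 83, 6.55, 1.68, 65, equal_var=True)
print(unpooled)   # t near 1.6, two-sided p near 0.1 (unpooled/Welch version)
print(pooled)     # t near 1.6, two-sided p near 0.1 (pooled version)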
13.5 Relationship Between Tests
and Confidence Intervals
For two-sided tests (for one or two means):
H0: parameter = null value and Ha: parameter ≠ null value
• If the null value is covered by a (1 - α)100%
confidence interval, the null hypothesis is not rejected
and the test is not statistically significant at level α.
• If the null value is not covered by a (1 - α)100%
confidence interval, the null hypothesis is rejected and
the test is statistically significant at level α.
Note: 95% confidence interval ↔ 5% significance level
99% confidence interval ↔ 1% significance level
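A tiny sketch of this check; the confidence interval used here is the one reported for the sleep example above:

def two_sided_test_from_ci(ci_low, ci_high, null_value):
    """Reject H0 at level alpha exactly when the (1 - alpha)100% CI excludes the null value."""
    return not (ci_low <= null_value <= ci_high)

# 95% CI for mu(f) - mu(m) was (-0.10, 1.02); it covers 0,
# so the two-sided test is not significant at the 0.05 level.
print(two_sided_test_from_ci(-0.10, 1.02, 0))   # False -> do not reject H0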
Confidence Intervals and One-Sided Tests
When testing the hypotheses:
H0: parameter = null value versus a one-sided alternative,
compare the null value to a (1 - 2α)100% confidence interval:
• If the null value is covered by the interval, the test is
not statistically significant at level α.
• For the alternative Ha: parameter > null value, the test is
statistically significant at level α if the entire interval
falls above the null value.
• For the alternative Ha: parameter < null value, the test is
statistically significant at level α if the entire interval
falls below the null value.
Example 13.9 Ear Infections and Xylitol
95% CI for p1 – p2 is 0.020 to 0.226
Reject H0: p1 – p2 = 0 and accept Ha: p1 – p2 > 0
with α = 0.025, because the entire confidence
interval falls above the null value of 0.
Note that the p-value for the test was 0.01,
which is less than 0.025.
13.6 Choosing an Appropriate
Inference Procedure
• Confidence Interval or Hypothesis Test?
Is main purpose to estimate the numerical value
of a parameter? …
or to make a “maybe not/maybe yes” conclusion about
a specific hypothesized value for a parameter?
13.6 Choosing an Appropriate
Inference Procedure
• Determining the Appropriate Parameter
Is response variable categorical or quantitative?
Is there one sample or two?
If two, independent or paired?
13.7 Effect Size
Effect size is a measure of how much the truth
differs from chance or from a control condition.
Effect size for a single mean: d = (μ1 - μ0) / σ
Effect size for comparing two means: d = (μ1 - μ2) / σ
Estimating Effect Size
Estimated effect size for a single mean: d̂ = (x̄ - μ0) / s
Estimated effect size for comparing two means: d̂ = (x̄1 - x̄2) / s
Relationship:
Test statistic = Size of effect × Size of study
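A minimal sketch of these estimates; for two samples, s is taken here to be the pooled standard deviation, which is a common (but not the only) choice:

from math import sqrt
import numpy as np

def effect_size_one_mean(x, mu0):
    """Estimated effect size: (xbar - mu0) / s."""
    x = np.asarray(x, dtype=float)
    return (x.mean() - mu0) / x.std(ddof=1)

def effect_size_two_means(x1, x2):
    """Estimated effect size: (xbar1 - xbar2) / s, with s the pooled SD."""
    x1, x2 = np.asarray(x1, dtype=float), np.asarray(x2, dtype=float)
    n1, n2 = len(x1), len(x2)
    sp = sqrt(((n1 - 1) * x1.var(ddof=1) + (n2 - 1) * x2.var(ddof=1)) / (n1 + n2 - 2))
    return (x1.mean() - x2.mean()) / sp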
13.8 Evaluating Significance
in Research Reports
1. Is the p-value reported? If the p-value is known, you can make your
own decision, based on the severity of a Type 1 error and the p-value.
2. If word significant is used, determine whether used in
everyday sense or in statistical sense only. Statistically
significant just means that a null hypothesis has been
rejected, no guarantee the result has real-world importance.
3. If you read “no difference” or “no relationship” has been
found, determine whether sample size was small. Test may
have had very low power because not enough data were
collected to be able to make a firm conclusion.
13.8 Evaluating Significance
in Research Reports
4. Think carefully about conclusions based on extremely large
samples. If very large sample size, even weak relationship or
small difference can be statistically significant.
5. If possible, determine what confidence interval should
accompany a hypothesis test. Intervals provide information
about magnitude of effect as well as information about
margin of error in sample estimate.
6. Determine how many hypothesis tests were conducted in the
study. Sometimes researchers perform a multitude of tests, but
only a few achieve statistical significance. If all null
hypotheses are true, then about 1 in 20 tests will achieve statistical
significance just by chance at the 0.05 level of significance.