Transcript April 25

BHS 204-01
Methods in Behavioral Sciences I
April 25, 2003
Chapter 6 (Ray)
The Logic of Hypothesis Testing
Degrees of Freedom


Degrees of freedom (df) – how many numbers
are free to vary and still produce the observed result.
Statistics that estimate population parameters
are reported with their degrees of freedom.


Calculated differently depending upon the
experimental design – based on the number of
groups.
T-test: df = (N_group1 - 1) + (N_group2 - 1)
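A minimal worked sketch of this formula, assuming a hypothetical study with seven participants per group (these sizes are not from the lecture):

```python
# Hypothetical group sizes, chosen only for illustration.
n_group1 = 7
n_group2 = 7

# df = (N_group1 - 1) + (N_group2 - 1)
df = (n_group1 - 1) + (n_group2 - 1)
print(df)  # 12 -- the same df that appears as t(12) in the reporting example below
```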
Reporting T-Test Results


Include a sentence that gives the direction of
the result, the means, and the t-test results.
Example:



The experimental group showed significantly
greater weight gain (M = 55) compared to the
control group (M = 21), t(12) = 3.97, p = .0019,
two-tailed.
Give the exact probability of the t value.
Underline all statistics.
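A minimal scipy sketch of how such a sentence can be assembled. The scores below are invented so that the group means come out to 55 and 21 as in the example, but the computed t and p will differ from t(12) = 3.97.

```python
# Sketch: computing and reporting an independent-groups t-test
# on invented data (not the textbook's cereal-experiment scores).
from scipy import stats

experimental = [48, 52, 55, 57, 58, 60, 55]   # hypothetical weight gains, M = 55
control      = [18, 20, 21, 22, 23, 19, 24]   # hypothetical weight gains, M = 21

result = stats.ttest_ind(experimental, control)   # two-tailed by default
df = len(experimental) + len(control) - 2         # (N1 - 1) + (N2 - 1)

# Report the direction of the result, the means, and the exact probability of t.
print(f"t({df}) = {result.statistic:.2f}, p = {result.pvalue:.4f}, two-tailed")
```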
When to Use a T-Test

When two independent groups are compared.
When sample sizes are small (N < 30).
When the actual population distribution is
unknown (not known to be normal).
When the variances within the two groups are
unequal.
When sample sizes are unequal.
Using Error Bars in Graphs


Error bars show the standard error of the
mean for the observed results.
To visually assess statistical significance, see
whether:


The mean (center point of error bar) for one
group falls outside the error bars for the other
group.
Also compare how large the error bars are for the
two groups.
Figure 5.8 (p. 124): Graphic illustration of the cereal experiment.
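A minimal matplotlib sketch of this kind of plot, using the same invented scores as above rather than the actual cereal-experiment data:

```python
# Sketch: plotting two group means with standard-error bars (invented data).
import numpy as np
import matplotlib.pyplot as plt

experimental = np.array([48, 52, 55, 57, 58, 60, 55])
control      = np.array([18, 20, 21, 22, 23, 19, 24])

means = [experimental.mean(), control.mean()]
# Standard error of the mean = sample SD / sqrt(N).
sems = [experimental.std(ddof=1) / np.sqrt(len(experimental)),
        control.std(ddof=1) / np.sqrt(len(control))]

plt.errorbar([0, 1], means, yerr=sems, fmt="o", capsize=5)
plt.xticks([0, 1], ["Experimental", "Control"])
plt.ylabel("Weight gain")
plt.show()
```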
Sources of Variance

Systematic variation – differences related to
the experimental manipulation.


Can also arise from uncontrolled variables
(confounds) or from systematic bias (e.g.,
faulty equipment or procedures).
Chance variation – nonsystematic differences.


Cannot be attributed to any factor.
Also called “error”.
F-Ratio

A comparison of the differences between groups
with the differences within groups:
F = between-group variance / within-group variance.
Between-group variance = treatment effect +
chance variance.
Within-group variance = chance variance.
If there is a treatment effect, then the
between-group variance should be greater
than the within-group variance.
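A sketch of this decomposition for two groups, using the invented scores from earlier; the sums of squares and their df are the standard one-way ANOVA quantities:

```python
# Sketch: splitting the total variation into between-group and
# within-group parts and forming the F-ratio (invented scores).
import numpy as np

groups = [np.array([48, 52, 55, 57, 58, 60, 55]),   # experimental (hypothetical)
          np.array([18, 20, 21, 22, 23, 19, 24])]   # control (hypothetical)

grand_mean = np.concatenate(groups).mean()

# Between groups: group size times squared deviation of each group mean
# from the grand mean; df = number of groups - 1.
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
df_between = len(groups) - 1

# Within groups: squared deviations of scores from their own group mean;
# df = total N - number of groups.
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
df_within = sum(len(g) for g in groups) - len(groups)

# F = between-group variance (mean square) / within-group variance.
F = (ss_between / df_between) / (ss_within / df_within)
print(F)
```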
Testing the Null Hypothesis


Between-group variance (treatment effect)
must be greater than within-group variance
(chance variation), F > 1.0.
How much greater?


The normal curve suggests that a difference of
about 2 SD (p < .05) is likely to be a meaningful difference.
The p value is a compromise between the
likelihood of accepting a false finding and the
likelihood of not accepting a true hypothesis.
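A small sketch of where these conventional cutoffs come from, using scipy's distribution functions; the F degrees of freedom (1 and 12) are borrowed from the reporting example later in these notes:

```python
# Sketch: the cutoffs behind "about 2 SD" and "p < .05".
from scipy import stats

# Two-tailed cutoff on the normal curve for alpha = .05: roughly 1.96 SD.
z_crit = stats.norm.ppf(1 - 0.05 / 2)
print(z_crit)   # ~1.96

# Critical F for alpha = .05 with 1 and 12 degrees of freedom.
f_crit = stats.f.ppf(1 - 0.05, 1, 12)   # dfn = 1, dfd = 12
print(f_crit)   # ~4.75 -- an observed F this large or larger is significant at .05
```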
Box 6.1 (p. 135): Type I and Type II Errors.

Type I error – likelihood of rejecting the null
when it is true and accepting the alternative
when it is false (making a false claim).


This is the alpha level (the criterion p value):
with α = .05, the probability of making a Type I error is .05.
Type II error – likelihood of accepting null
when it is false and rejecting the alternative
when it is true.

Its probability is β (beta); the power of a statistical test is 1 - β.
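A sketch that estimates both error rates by simulation; the population means, SD, sample size, and number of runs are arbitrary choices for illustration:

```python
# Sketch: estimating Type I and Type II error rates by repeated sampling.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n, runs = 0.05, 7, 5000

# Type I error: both groups come from the SAME population (null is true);
# count how often the t-test still comes out "significant".
false_alarms = sum(
    stats.ttest_ind(rng.normal(50, 10, n), rng.normal(50, 10, n)).pvalue < alpha
    for _ in range(runs))
print(false_alarms / runs)   # close to alpha = .05

# Type II error: the populations really differ (null is false);
# beta = proportion of "nonsignificant" results, power = 1 - beta.
misses = sum(
    stats.ttest_ind(rng.normal(50, 10, n), rng.normal(35, 10, n)).pvalue >= alpha
    for _ in range(runs))
beta = misses / runs
print(beta, 1 - beta)   # estimated beta and power
```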
Reporting the F-Ratio


ANOVA is used to calculate the F-Ratio.
Example:


The experimental group showed significantly
greater weight gain (M = 55) compared to the
control group (M = 21), F(1, 12) = 4.75, p = .05.
Give the degrees of freedom for the numerator
and denominator.
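A sketch of obtaining and reporting the F-ratio with scipy's one-way ANOVA; the scores are the invented ones used earlier (means 55 and 21), so the printed F and p will not reproduce F(1, 12) = 4.75:

```python
# Sketch: one-way ANOVA on two invented groups, reported with the
# numerator and denominator degrees of freedom.
from scipy import stats

experimental = [48, 52, 55, 57, 58, 60, 55]
control      = [18, 20, 21, 22, 23, 19, 24]

result = stats.f_oneway(experimental, control)

df_between = 2 - 1                                  # numerator: groups - 1
df_within  = len(experimental) + len(control) - 2   # denominator: total N - groups

print(f"F({df_between}, {df_within}) = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```

(With only two groups, the F from a one-way ANOVA equals the square of the t from the independent-groups t-test.)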
When to Use ANOVA

When there are two or more independent
groups.
When the population is likely to be normally
distributed.
When variance is similar within the groups
compared.
When group sizes (N’s) are close to equal.
Threats to Internal Validity



It is the experimenter’s job to eliminate as
many threats to internal validity as possible.
Such threats constitute sources of systematic
variance that can be confused with an effect,
resulting in a Type I error.
Potential threats to validity must be evaluated
in the Discussion section of the research
report.