Tests for Significance
The Student’s t-test and other tests for significance
• The t-test assesses whether the means of two groups are statistically different from each other. This analysis is appropriate whenever you want to compare the means of two groups.
• What does it mean to say that the averages for two groups are statistically different?
• Notice that in all three situations the difference between the means is the same.
• But, the three situations don't look
the same -- they tell very different
stories.
• The top example shows a case with
moderate variability of scores within
each group.
• The second situation shows the high
variability case.
• The third shows the case with low variability.
• The two groups appear most
different or distinct in the bottom or
low-variability case.
• Why? Because there is relatively little overlap between the two bell-shaped curves.
• The formula for the t-test is a ratio. The top part of the ratio is just the difference between the two means or averages. The bottom part is a measure of the variability or dispersion of the scores.
• The top part of the formula is easy to compute -- just find the difference between the means. The bottom part is called the standard error of the difference.
– To compute it, we take the variance for each group and divide it by the number of samples in that group. We add these two values and then take their square root.
• The formula for the standard error of the difference between the means is:

  SE = sqrt( var1/n1 + var2/n2 )
• Remember, the variance is simply the square
of the standard deviation.
• The final formula for the t-test is:

  t = (mean1 - mean2) / sqrt( var1/n1 + var2/n2 )
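As a sketch of this formula (the function and variable names are my own; only Python's standard library is used):

```python
import math
import statistics

def t_statistic(group1, group2):
    """t = (mean1 - mean2) / sqrt(var1/n1 + var2/n2) -- the ratio described above."""
    mean1, mean2 = statistics.mean(group1), statistics.mean(group2)
    # statistics.variance is the sample variance (n - 1 denominator)
    var1, var2 = statistics.variance(group1), statistics.variance(group2)
    standard_error = math.sqrt(var1 / len(group1) + var2 / len(group2))
    return (mean1 - mean2) / standard_error
```

A larger |t| means the difference between the means is large relative to the variability of the scores, which is exactly what the three situations pictured earlier illustrate.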
• Cedar-apple rust is a (non-fatal) disease that affects
apple trees. Its symptom is rust-colored spots on
apple leaves. Red cedar trees are the immediate
source of the fungus that infects the apple trees. If you
could remove all red cedar trees within a few miles of
the orchard, you should eliminate the problem. In the
first year of this experiment the number of affected
leaves on 8 trees was counted; the following winter all
red cedar trees within 100 yards of the orchard were
removed and the following year the same trees were
examined for affected leaves. The results are
recorded on the next panel:
Data

  Tree       rusted leaves:  rusted leaves:  difference:
             year 1          year 2          1 - 2
  1          38              32                6
  2          10              16               -6
  3          84              57               27
  4          36              28                8
  5          50              55               -5
  6          35              12               23
  7          73              61               12
  8          48              29               19
  average    46.8            36.2             10.5
  std dev    23              19               12
Determine whether there was a significant change in the number of rusted
leaves between years 1 and 2. Did the treatment cure the problem?
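One way to work this out (a sketch, standard library only): since the same 8 trees were measured in both years, the natural choice is a paired t-test on the year-to-year differences.

```python
import math
import statistics

year1 = [38, 10, 84, 36, 50, 35, 73, 48]
year2 = [32, 16, 57, 28, 55, 12, 61, 29]
diffs = [a - b for a, b in zip(year1, year2)]  # the "difference: 1 - 2" column

# Paired t: mean difference divided by its standard error, df = n - 1
n = len(diffs)
t = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
print(round(t, 2))  # 2.43
```

With 7 degrees of freedom the two-tailed 5% critical value is about 2.365, so t ≈ 2.43 indicates a significant drop in rusted leaves. Whether the treatment *cured* the problem is another matter: the year-2 counts are still well above zero.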
How are these two experiments different even though the groups have the same mean?

Experiment I
  Grp A    Grp B
  38       32
  10       16
  84       57
  36       28
  50       55
  35       12
  73       61
  48       29
  avg      46.75    36.25
  stdev    23.21    19.02

Experiment II
  Grp A    Grp B
  46.75    36.25
  46.25    36.25
  46.75    36
  47.25    36.25
  46.75    36.5
  46.25    36.25
  46.75    36
  47.25    36.5
  avg      46.75    36.25
  stdev     0.378    0.189

Significant?????
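Running the t formula on both experiments (a sketch; the data are transcribed from the tables above, treating the groups as independent) makes the contrast concrete: the means are identical, but the low-variability experiment yields an enormous t.

```python
import math
import statistics

def t_statistic(a, b):
    """Two-sample t: difference of means over the standard error of the difference."""
    se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    return (statistics.mean(a) - statistics.mean(b)) / se

exp1_a = [38, 10, 84, 36, 50, 35, 73, 48]
exp1_b = [32, 16, 57, 28, 55, 12, 61, 29]
exp2_a = [46.75, 46.25, 46.75, 47.25, 46.75, 46.25, 46.75, 47.25]
exp2_b = [36.25, 36.25, 36.0, 36.25, 36.5, 36.25, 36.0, 36.5]

print(round(t_statistic(exp1_a, exp1_b), 2))  # about 0.99  -- not significant
print(round(t_statistic(exp2_a, exp2_b), 2))  # about 70    -- overwhelmingly significant
```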
• In a scientific study, a theory is proposed, then data are collected and analyzed. The statistical analysis of the data will produce a result that is statistically significant if its probability of occurring by chance falls below 5%, which is called the significance level. In other words, if the likelihood of an event is statistically significant, the researcher can be 95% confident that the result did not happen by chance.
• When the significance of an experiment is especially important, such as for the safety of a drug meant for humans, the required significance level may be set below 3%. In this case, a researcher could be 97% sure that a particular drug is safe for human use. This threshold can be lowered or raised to accommodate the importance and desired certainty of the result being correct.
• Statistical significance is used to reject or accept what is called the
null hypothesis. A hypothesis is a statement of the theory that a
researcher is trying to prove. The null hypothesis holds that the
factors a researcher is looking at have no effect on differences in the
data. Statistical significance is usually written, for example, t=.02,
p<.05. Here, "t" stands for the statistic test score and "p<.05" means
that the probability of an event occurring by chance is less than 5%.
These numbers would cause the null hypothesis to be rejected, supporting the particular theory.
Other Statistical Tests
www.wikipedia.org
Standard Deviation
• The standard deviation of a probability distribution,
random variable, or population or multiset of values is a
measure of the spread of its values (wiki).
• The standard deviation is the most common measure of
statistical dispersion, measuring how widely spread the
values in a data set are. If the data points are close to
the mean, then the standard deviation is small. Likewise, if
many data points are far from the mean, then the
standard deviation is large. If all the data values are
equal, then the standard deviation is zero.
• www.wikipedia.org
68-95-99.7 rule
For the normal distribution, values less than one standard
deviation from the mean account for 68.27% of the set; values
within two standard deviations of the mean account for 95.45%;
and values within three standard deviations account for 99.73%.
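These percentages come straight from the normal distribution's cumulative probabilities; a short sketch using Python's standard `math.erf` reproduces them:

```python
import math

# For a normal distribution, P(|X - mean| < k * sd) = erf(k / sqrt(2))
for k in (1, 2, 3):
    pct = math.erf(k / math.sqrt(2)) * 100
    print(f"within {k} sd: {pct:.2f}%")  # 68.27%, 95.45%, 99.73%
```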
Why???
• The standard deviation can also help you
evaluate the worth of all those so-called
"studies" that seem to be released to the
press every day. A large standard deviation
(greater than 10% of the mean) in a study
that claims to show a relationship between
eating Twinkies and better SAT scores, for
example, might tip you off that the study's
claims aren't all that trustworthy.
Coefficient of Variation
• The coefficient of variation (CV) measures the precision of the person(s) during a set of individual tests (replicates) performed for one specific water quality parameter. As an example, if a team has just finished collecting data on 5 replicates of dissolved oxygen data, the team can use the coefficient of variation formula to determine how precisely they performed the tests. The higher the precision (the lower the %), the higher the likelihood that there was no difference in the way each replicate/individual test was performed. In other words, data within a data set which is collected consistently should hypothetically have a coefficient of variation equal to zero percent!
The Formula
• Calculating the coefficient of variation:

  CV% = (s / X) × 100

• s = standard deviation
• X = average
• You can calculate the CV for the 3-5 replicates for a single sampling.
• Distributions with CV < 1 are considered low-variance (that’s good), while those with CV > 1 are considered high-variance (that’s bad).
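As a sketch of the formula above (the dissolved-oxygen numbers are invented for illustration):

```python
import statistics

def coefficient_of_variation(values):
    """CV% = (s / average) * 100, with s the sample standard deviation."""
    return statistics.stdev(values) / statistics.mean(values) * 100

# Hypothetical dissolved-oxygen replicates in mg/L (illustrative numbers only)
do_replicates = [8.2, 8.3, 8.1, 8.25, 8.15]
print(round(coefficient_of_variation(do_replicates), 2))  # 0.96 -- under 1%, precise work
```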
Percent Error
• The percent error can be determined when the true value is compared to the observed value according to the equation below:

  % error = ( |your result - accepted value| / accepted value ) × 100%

• Less than 5% error is acceptable
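A minimal sketch of this equation (the measured and accepted values are hypothetical, chosen for illustration):

```python
def percent_error(result, accepted):
    """% error = |your result - accepted value| / accepted value * 100"""
    return abs(result - accepted) / accepted * 100

# Hypothetical: measuring g against an accepted value of 9.81 m/s^2
print(round(percent_error(9.95, 9.81), 2))  # 1.43 -- under the 5% threshold
```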
• Additional Slides
A Tale of Two Tails
• Directional hypotheses are called one-tailed
– We are only interested in deviations at one tail of the distribution
• Non-directional hypotheses are called two-tailed
– We are usually interested in any significant deviations from the null hypothesis
How do you decide to use a one- or two-tailed approach?
One Tail or Two? The moderate
approach:
• If there’s a strong, prior, theoretical expectation
that the effect will be in a particular direction
(A>B), then you may use a one-tailed approach.
Otherwise, use a two-tailed test.
• Because only an A>B result is interesting,
concentrate your attention on whether there is
evidence for a difference in that direction.
– E.g., does this new educational reform improve students’ test scores?
– Does this drug reduce depression?
One tail or two? The more
conservative approach:
• The problem with the moderate approach
is that you probably would actually find it
interesting if the result went the other way,
in many cases.
– If the new educational reform leads to
worse test scores, we’d want to know!
– If the new drug actually
increases symptoms of depression,
we’d want to know!
One tail or two? The more
conservative approach:
• Only use a one-tailed test if you have a
strong hypothesis about the directionality
of the results (A>B) AND it could also be
argued that a result in the “wrong tail”
(A<B) is meaningless, and might as well
be due to chance.
One tail or two: The most
conservative approach
• Always use two-tailed tests!
• Correcting for one- vs. two-tailed tests
– If you think a researcher has run the wrong kind
of test, it’s easy to recalculate the p-value
yourself:
– P (one-tailed) = ½ P (two-tailed)
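As a one-line check of that correction (the p-value here is hypothetical, for illustration):

```python
def one_tailed_p(p_two_tailed):
    # For a result in the predicted direction: p(one-tailed) = p(two-tailed) / 2
    return p_two_tailed / 2

print(one_tailed_p(0.08))  # 0.04 -- significant one-tailed, but not two-tailed
```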
Was the result significant?
• There is no true sharp dividing line between
probable and improbable results.
• There’s little difference between p=0.051 and
p=0.049, except that some journals will not
publish results at p=0.051, and some readers
will accept results at p=0.049 but not at
p=0.051.
• In any case it does not tell us if the result is
IMPORTANT!
Decision theory and tradeoffs
between types of errors
• Think of a household smoke detector.
• Sometimes it goes off and there’s no fire (you
burn some toast, or take a shower).
–A false alarm.
–A Type I error.
• Easy to avoid this type of error: take out the
batteries!
• However, this increases the chance of a Type
II error: there’s a fire, but no alarm.
Decision theory and tradeoffs
between types of errors
• Similarly, one could reduce the chances of a
Type II error by making the alarm
hypersensitive to smoke.
• Then the alarm will very likely go off in a fire.
• But you’ll increase your chances of a false
alarm = Type I error. (The alarm is more likely
to go off because someone sneezed.)
• There is typically a tradeoff of this sort between
Type I and Type II errors.