Power And Sample Size
Statistical Power And
Sample Size Calculations
Minitab calculations
Manual calculations
1
Monday, 13 April 2015
6:45 AM
When Do You Need Statistical Power
Calculations, And Why?
A prospective power analysis is used
before collecting data, to consider
design sensitivity.
2
When Do You Need Statistical Power
Calculations, And Why?
A retrospective power analysis is
used to judge whether the studies you
are interpreting were well enough
designed.
3
When Do You Need Statistical Power
Calculations, And Why?
In Cohen’s (1962) seminal power analysis of the
Journal of Abnormal and Social Psychology, he
concluded that over half of the published
studies were insufficiently powered to achieve
statistical significance for the main hypothesis.
Cohen, J. (1962). “The statistical power of
abnormal-social psychological research: A
review”. Journal of Abnormal and Social
Psychology, 65, 145-153.
4
What Is Statistical Power?
Essential concepts
• the null hypothesis Ho
• significance level, α
• Type I error
• Type II error
5
What Is Statistical Power?
Essential concepts
Recall that a null hypothesis (Ho) states that
the findings of the experiment are no
different to those that would have been
expected to occur by chance. Statistical
hypothesis testing involves calculating the
probability of achieving the observed results if
the null hypothesis were true. If this
probability is low (conventionally p < 0.05), the
null hypothesis is rejected and the findings are
said to be “statistically significant” (unlikely)
at that accepted level.
6
Statistical Hypothesis Testing
When you perform a statistical
hypothesis test, there are four
possible outcomes
7
Statistical Hypothesis Testing
• whether the null hypothesis (Ho) is true
or false
• whether you decide either to reject, or
else to retain, provisional belief in Ho
8
Statistical Hypothesis Testing
Decision        Ho is really true          Ho is really false
                (there is really no        (there really is an
                effect to find)            effect to be found)

Retain Ho       correct decision:          Type II error:
                prob = 1 - α               prob = β

Reject Ho       Type I error:              correct decision:
                prob = α                   prob = 1 - β
9
When Ho Is True And You Reject It,
You Make A Type I Error
• When there really is no effect, but the
statistical test comes out significant by
chance, you make a Type I error.
• When Ho is true, the probability of making
a Type I error is called alpha (α). This
probability is the significance level
associated with your statistical test.
10
When Ho Is False And You Fail To
Reject It, You Make A Type II Error
• When, in the population, there really is an
effect, but your statistical test comes out
non-significant, due to inadequate power
and/or bad luck with sampling error, you
make a Type II error.
• When Ho is false, (so that there really is an
effect there waiting to be found) the
probability of making a Type II error is
called beta (β).
11
The Definition Of
Statistical Power
• Statistical power is the probability of
not missing an effect, due to sampling
error, when there really is an effect
there to be found.
• Power is the probability (prob = 1 - β)
of correctly rejecting Ho when it
really is false.
12
Calculating Statistical Power
Depends On
1. the sample size
2. the level of statistical significance
required
3. the minimum size of effect that it is
reasonable to expect.
13
How Do We Measure Effect Size?
• Cohen's d
• Defined as the difference between
the means for the two groups, divided
by an estimate of the standard
deviation in the population.
• Often we use the average of the
standard deviations of the samples as
a rough guide for the latter.
14
Cohen's Rules Of Thumb For Effect Size
Effect size        Correlation        Difference
                   coefficient        between means

“Small effect”     r = 0.1            d = 0.2 standard deviations
“Medium effect”    r = 0.3            d = 0.5 standard deviations
“Large effect”     r = 0.5            d = 0.8 standard deviations
15
Calculating Cohen’s d
Notation

    d = (x̄1 − x̄2) / s_pooled

d           Cohen’s d effect size
x̄           Mean
s           Standard deviation
Subscripts refer to the two conditions being compared
Cohen, J. (1977). Statistical power analysis for the behavioural
sciences. San Diego, CA: Academic Press.
Cohen, J. (1992). A power primer. Psychological Bulletin, 112,
155-159.
16
Calculating Cohen’s d
17
Calculating Cohen’s d from a t test
Interpreting Cohen's d effect size: an interactive visualization
18
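The calculation of Cohen's d described above can be sketched in Python. This is a minimal sketch using the degrees-of-freedom-weighted pooled standard deviation (the slides note that the plain average of the two sample standard deviations is sometimes used as a rough guide instead); the sample data are invented for illustration:

```python
import math

def cohens_d(sample1, sample2):
    """Cohen's d for two independent samples, using the pooled SD."""
    n1, n2 = len(sample1), len(sample2)
    m1 = sum(sample1) / n1
    m2 = sum(sample2) / n2
    # Sample variances (n - 1 denominator)
    v1 = sum((x - m1) ** 2 for x in sample1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in sample2) / (n2 - 1)
    # Pooled standard deviation, weighted by degrees of freedom
    s_pooled = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / s_pooled

# Hypothetical example data
group_a = [5.1, 6.2, 5.8, 6.0, 5.5]
group_b = [4.2, 4.8, 5.0, 4.5, 4.9]
print(round(cohens_d(group_a, group_b), 2))  # → 2.71, a large effect
```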
Conventions And Decisions About
Statistical Power
• Acceptable risk of a Type II error is often
set at 1 in 5, i.e., a probability of 0.2 (β).
• The conventionally uncontroversial value for
“adequate” statistical power is therefore set
at 1 - 0.2 = 0.8.
• People often regard the minimum acceptable
statistical power for a proposed study as
being an 80% chance of an effect that really
exists showing up as a significant finding.
Understanding Statistical Power and Significance Testing: an Interactive Visualization
19
6 Steps To Determine An Appropriate
Sample Size For Your Study
1. Formulate the study. Here you detail your
study design, choose the outcome summary,
and you specify the analysis method.
2. Specify analysis parameters. The analysis
parameters include the test significance
level, whether the test is one- or two-sided,
and exactly what it is you are looking for
from your analysis.
20
6 Steps To Determine An Appropriate
Sample Size For Your Study
3. Specify effect size for test. This could be
the expected effect size (often a best
estimate), or one could use the effect size
that is deemed to be clinically meaningful.
4. Compute sample size or power. Once you
have completed steps one through three
you are now in a position to compute the
sample size or the power for your study.
21
6 Steps To Determine An Appropriate
Sample Size For Your Study
5. Sensitivity analysis. Here you compute your sample
size or power under multiple scenarios to examine
how the study parameters affect the power or the
sample size. Essentially this is a what-if analysis
to assess how sensitive the power or required
sample size is to the other factors.
22
6 Steps To Determine An Appropriate
Sample Size For Your Study
6. Choose an appropriate power or sample size, and
document this in your study design protocol.
However, other authors suggest 5 steps (a, b, c or d)!
Other options are also available!
23
A Couple Of Useful Links
For an article casting doubt on scientific precision and
power, see The Economist, 19 Oct 2013: “I see a train
wreck looming,” warned Daniel Kahneman. The same issue’s
piece on the reviewing process is also an interesting
read.
A collection of online power calculator web pages for
specific kinds of tests.
Java applets for power and sample size, select the
analysis.
24
Next Week
Statistical Power Analysis In Minitab
25
Statistical Power Analysis In Minitab
Minitab is available via RAS
Stat > Power and Sample Size >
26
Statistical Power Analysis In Minitab
Note that you might
find web tools for
other models.
The alternative
normally involves
solving some very
complex equations.
Recall that a comparison of two proportions
equates to analysing a 2×2 contingency table.
27
Statistical Power Analysis In Minitab
Simple statistical
correlation analysis online
See Test 28 in the Handbook of
Parametric and Nonparametric
Statistical Procedures, Third
Edition by David J Sheskin
28
Factors That Influence Power
• sample size
• alpha
• the standard deviation
29
Using Minitab To Calculate Power
And Minimum Sample Size
• Suppose we have two samples, each
with n = 13, and we propose to use the
0.05 significance level
• Difference between means is 0.8
standard deviations (i.e., Cohen's
d = 0.8), so a t test
• All key strokes in printed notes
30
Using Minitab To Calculate Power
And Minimum Sample Size
Note that all
parameters bar
one are required.
Leave one field
blank; the blank
parameter is the
one that will be
estimated.
31
Using Minitab To Calculate Power
And Minimum Sample Size
Power and Sample Size

2-Sample t Test

Testing mean 1 = mean 2 (versus not =)
Calculating power for mean 1 = mean 2 + difference
Alpha = 0.05  Assumed standard deviation = 1

            Sample
Difference    Size     Power
       0.8      13  0.499157

The sample size is for each group.

Power will be 0.4992
32
Using Minitab To Calculate Power
And Minimum Sample Size
If, in the population, there really is a
difference of 0.8 standard deviations
between the two groups being sampled,
then using sample sizes of 13 each
gives a 49.92% chance of getting a
result that will be significant at the
0.05 level.
33
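The power figure above can be reproduced outside Minitab via the noncentral t distribution. A minimal sketch, assuming scipy is available; the formula for the noncentrality parameter is the standard one for a two-sample t test with equal group sizes:

```python
import math
from scipy.stats import t, nct

def two_sample_t_power(d, n, alpha=0.05):
    """Power of a two-sided, two-sample t test with n per group,
    via the noncentral t distribution."""
    df = 2 * n - 2
    ncp = d * math.sqrt(n / 2)          # noncentrality parameter
    t_crit = t.ppf(1 - alpha / 2, df)   # two-sided critical value
    # Probability the test statistic lands in the rejection region
    return (1 - nct.cdf(t_crit, df, ncp)) + nct.cdf(-t_crit, df, ncp)

print(round(two_sample_t_power(0.8, 13), 4))  # ≈ 0.4992, matching Minitab
```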
Using Minitab To Calculate Power
And Minimum Sample Size
• Suppose the difference between the
means is 0.8 standard deviations (i.e.,
Cohen's d = 0.8)
• Suppose that we require a power of
0.8 (the conventional value)
• Suppose we intend doing a one-tailed
t test, with significance level 0.05.
• All key strokes in printed notes
34
Using Minitab To Calculate Power
And Minimum Sample Size
Select “Options” to set a
one-tailed test
35
Using Minitab To Calculate Power
And Minimum Sample Size
36
Using Minitab To Calculate Power
And Minimum Sample Size
Power and Sample Size

2-Sample t Test

Testing mean 1 = mean 2 (versus >)
Calculating power for mean 1 = mean 2 + difference
Alpha = 0.05  Assumed standard deviation = 1

            Sample  Target
Difference    Size   Power  Actual Power
       0.8      21     0.8      0.816788

The sample size is for each group.

Note the target power of at least 0.8.
37
Using Minitab To Calculate Power
And Minimum Sample Size
The same Minitab output, highlighting the sample size:
at least 21 cases are needed in each group.
38
Using Minitab To Calculate Power
And Minimum Sample Size
The same Minitab output, highlighting the result:
the actual power achieved is 0.8168.
39
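The reverse calculation, finding the smallest sample size that reaches the target power, can be sketched by stepping n upwards until the power crosses the target. Again a sketch assuming scipy, for the one-sided test described above:

```python
import math
from scipy.stats import t, nct

def one_sided_power(d, n, alpha=0.05):
    """Power of a one-sided, two-sample t test with n per group."""
    df = 2 * n - 2
    ncp = d * math.sqrt(n / 2)        # noncentrality parameter
    t_crit = t.ppf(1 - alpha, df)     # one-sided critical value
    return 1 - nct.cdf(t_crit, df, ncp)

def min_sample_size(d, target_power=0.8, alpha=0.05):
    """Smallest n per group whose power reaches the target."""
    n = 2
    while one_sided_power(d, n, alpha) < target_power:
        n += 1
    return n

n = min_sample_size(0.8)
print(n, round(one_sided_power(0.8, n), 4))
# Minitab reports n = 21 with actual power 0.816788
```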
Using Minitab To Calculate Power
And Minimum Sample Size
Suppose you are about to undertake an
investigation to determine whether or not 4
treatments affect the yield of a product using 5
observations per treatment. You know that the
mean of the control group should be around 8,
and you would like to find significant differences
of +4. Thus, the maximum difference you are
considering is 4 units. Previous research suggests
the population σ is 1.64, so a one-way ANOVA is
appropriate.
40
Using Minitab To Calculate Power
And Minimum Sample Size
41
Using Minitab To Calculate Power
And Minimum Sample Size
Power

Power and Sample Size

One-way ANOVA

Alpha = 0.05  Assumed standard deviation = 1.64
Number of Levels = 4

   SS  Sample            Maximum
Means    Size     Power  Difference
    8       5  0.826860           4

The sample size is for each level.

Power will be 0.83
42
Using Minitab To Calculate Power
And Minimum Sample Size
To interpret the results, if you assign five
observations to each treatment level, you have
a power of 0.83 to detect a difference of 4
units or more between the treatment means.
Minitab can also display the power curve, relating the
maximum difference in means detected to the corresponding
power values, for a one-way ANOVA with 5 samples per
treatment.
43
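The ANOVA power above can be reproduced via the noncentral F distribution. A sketch assuming scipy; the SS(means) value of 8 follows from assuming the two extreme means sit 4 units apart with the other two at the grand mean, which matches the Minitab output:

```python
from scipy.stats import f, ncf

alpha = 0.05
k = 4          # number of treatment levels
n = 5          # observations per level
sigma = 1.64   # assumed standard deviation

# Two extreme means 4 units apart, the rest at the grand mean:
# SS(means) = 2**2 + 0 + 0 + 2**2 = 8, as in the Minitab output
ss_means = 8

df1, df2 = k - 1, k * (n - 1)
ncp = n * ss_means / sigma**2         # noncentrality parameter
f_crit = f.ppf(1 - alpha, df1, df2)   # critical F value
power = 1 - ncf.cdf(f_crit, df1, df2, ncp)
print(round(power, 4))  # ≈ 0.8269, matching Minitab's 0.826860
```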
Next Week
Manual Calculations of Power
44
Sample Size Equations
Five different sample size equations
are presented in the printed notes.
For obvious reasons, only one is
explored in detail here.
45
Determining The Necessary Sample Size For Estimating A
Single Population Mean Or A Single Population Total With A
Specified Level Of Precision.
Calculate an initial sample size using the following equation:

    n = Zα² s² / B²

(Recall the confidence interval for a mean, x̄ ± Zα s/√n: setting its
half-width B = Zα s/√n and solving for n gives the equation above.)

n     The uncorrected sample size estimate.
Zα    The standard normal coefficient from the table on a later slide.
s     The standard deviation.
46
Determining The Necessary Sample Size For Estimating A
Single Population Mean Or A Single Population Total With A
Specified Level Of Precision.
Calculate an initial sample size using the following equation:

    n = Zα² s² / B²

B     The desired precision level, expressed as half of the maximum
      acceptable confidence interval width. This needs to be specified
      in absolute terms rather than as a percentage.
47
Determining The Necessary Sample Size For Estimating A
Single Population Mean Or A Single Population Total With A
Specified Level Of Precision.
Confidence level    Alpha (α) level    Zα
80%                 0.20               1.28
90%                 0.10               1.64
95%                 0.05               1.96
99%                 0.01               2.58
48
Determining The Necessary Sample Size For Estimating A
Single Population Mean Or A Single Population Total With A
Specified Level Of Precision.
To obtain the adjusted sample size estimate,
consult the correction table in the printed notes.
n is the uncorrected sample size value from the
sample size equation. n* is the corrected sample
size value.
See the example below.
49
Determining The Necessary Sample Size For Estimating A
Single Population Mean Or A Single Population Total With A
Specified Level Of Precision.
Additional correction for sampling finite
populations.
The above formula assumes that the population is
very large compared to the proportion of the
population that is sampled. If you are sampling
more than 5% of the whole population then you
should apply a correction to the sample size
estimate that incorporates the finite population
correction factor (FPC). This will reduce the
sample size.
50
Determining The Necessary Sample Size For Estimating A
Single Population Mean Or A Single Population Total With A
Specified Level Of Precision.
    n' = n* / (1 + n*/N)

n'    The new FPC-corrected sample size.
n*    The corrected sample size from the sample size correction table.
N     The total size of the population.
51
Example
• Objective: Restore the population of species Y in
population Z to a density of at least 30
• Sampling objective: Obtain estimates of the mean
density and population size, with 95% confidence
intervals within 20% (±) of the estimated true value.
• Results of pilot sampling:
Mean (x) = 25
Standard deviation (s) = 7
52
Example
Given: The desired confidence level is
95%, so the appropriate Zα from the
table above is 1.96. The desired
confidence interval width is 20%
(±20%) of the estimated true value.
Since the estimated true value is 25,
the desired half-width (B) is
25 × 0.20 = 5.
53
Example
Calculate an unadjusted estimate of
the sample size needed by using the
sample size formula:
    n = Zα² s² / B² = (1.96² × 7²) / 5² = 7.53
Round 7.53 up to 8 for the unadjusted
sample size.
54
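The unadjusted sample size calculation above is simple enough to check directly in Python, using the values from the pilot sampling:

```python
import math

z_alpha = 1.96   # 95% confidence level, from the Zα table
s = 7.0          # standard deviation from the pilot sampling
B = 25 * 0.20    # half-width: 20% of the estimated true mean of 25

n0 = (z_alpha**2 * s**2) / B**2
print(round(n0, 2), math.ceil(n0))  # → 7.53 8
```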
Example
To adjust this preliminary estimate,
go to the sample size correction table
and find n = 8 and the corresponding
n* value in the 95% confidence level
portion of the table. For n = 8, the
corresponding value is n* = 15.
55
Example
                     Confidence Level
   80%          90%          95%          99%
 n    n*      n    n*      n    n*      n    n*
 8    14      8    15      8    15      8    16
56
Example
The corrected estimated sample size
needed to be 95% confident that the
estimate of the population mean is
within 20% (±5) of the true mean is
15.
57
Example
Additional correction for sampling finite
populations.
If the pilot data described above were
gathered using a 1 m × 10 m (10 m²)
quadrat, and the total population being
sampled was located within a 20 m × 50 m
macroplot (1000 m²), then
N = 1000 m² / 10 m² = 100.
58
Example
The corrected sample size would then be:

    n' = n* / (1 + n*/N) = 15 / (1 + 15/100) = 13.04
The new, FPC-corrected, estimated sample size to
be 95% confident that the estimate of the
population mean is within 20% (±5) of the true mean
is 13.
59
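The finite population correction applied above is a one-line calculation, using the table-corrected sample size and the population size from the example:

```python
n_star = 15   # corrected sample size from the correction table
N = 100       # total population size (1000 m² / 10 m² quadrats)

n_fpc = n_star / (1 + n_star / N)   # finite population correction
print(round(n_fpc, 2))  # → 13.04, rounded down to 13
```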
Text
Sample size calculations in clinical research
edited by Shein-Chung Chow, Jun Shao,
Hansheng Wang
New York : Marcel Dekker, 2003
Long loan Robinson Books Level 4
610.72 SAM
see also the conventional notes
60
Bibliography 1 of 4
Also see Tutorial in Quantitative Methods for
Psychology Volume 3, no 2 (2007): Special issue on
statistical power.
Editor’s note: The Uncorrupted Statistical Power
Statistical Power: An Historical Introduction
A Short Tutorial of GPower
61
Bibliography 2 of 4
Understanding Power and Rules of Thumb for Determining
Sample Sizes
Carmen R. Wilson VanVoorhis and Betsy L. Morgan
Tutorials in Quantitative Methods for Psychology 2007 3(2)
43-50.
This article addresses the definition of power and its
relationship to Type I and Type II errors. We discuss the
relationship of sample size and power. Finally, we offer
statistical rules of thumb guiding the selection of sample sizes
large enough for sufficient power to detect differences,
associations, chi-square, and factor analyses.
62
Bibliography 3 of 4
Computing the Power of a t Test
Also see
Non-central t Distribution and the Power of the t
Test: A Rejoinder
Understanding Statistical Power Using Non-central
Probability Distributions: Chi-squared, G-squared,
and ANOVA
Power Estimation in Multivariate Analysis of Variance
A Power Primer
63
Bibliography 4 of 4
A Power Primer Cohen J. Tutorials in Quantitative Methods for
Psychology 2007 3(2) 79.
One possible reason for the continued neglect of statistical power
analysis in research in the behavioral sciences is the inaccessibility of
or difficulty with the standard material. A convenient, although not
comprehensive, presentation of required sample sizes is provided here.
Effect-size indexes and conventional values for these are given for
operationally defined small, medium, and large effects. The sample
sizes necessary for .80 power to detect effects at these levels are
tabled for eight standard statistical tests: (a) the difference between
independent means, (b) the significance of a product–moment
correlation, (c) the difference between independent rs, (d) the sign
test, (e) the difference between independent proportions, (f) chi-square tests for goodness of fit and contingency tables, (g) one-way
analysis of variance, and (h) the significance of a multiple or multiple
partial correlation.
A Power Primer Cohen J. Psychological Bulletin 1992 112(1) 155-159
DOI: 10.1037/0033-2909.112.1.155
64
Caveat
It is well known that statistical power
calculations can be valuable in
planning an experiment. There is also
a large literature advocating that
power calculations be made whenever
one performs a statistical test of a
hypothesis and one obtains a
statistically non-significant result.
65
Caveat
Advocates of such post-experiment
power calculations claim the
calculations should be used to aid in
the interpretation of the
experimental results. This approach,
which appears in various forms, is
fundamentally flawed.
66
Caveat
The paper documents that the problem
is extensive and presents arguments to
demonstrate the flaw in the logic.
The abuse of power: The pervasive
fallacy of power calculations for data
analysis, Hoenig J.M. and Heisey D.M.
American Statistician, 55(1), 19-24,
2001.
67
Have You Done Enough?
“This technical note provides guidance on
how to critique the statistical analysis of
… studies to maximise the chance that
the paper will be declined.”
Ten ironic rules for non-statistical
reviewers
Karl Friston
NeuroImage 2012 61 1300–1310.
68