Hands-on session: Cox proportional hazard analysis
Advanced Statistics for Interventional Cardiologists
What you will learn
• Introduction
• Basics of multivariable statistical modeling
• Advanced linear regression methods
• Hands-on session: linear regression
• Bayesian methods
• Logistic regression and generalized linear model
• Resampling methods
• Meta-analysis
• Hands-on session: logistic regression and meta-analysis
• Multifactor analysis of variance
• Cox proportional hazards analysis
• Hands-on session: Cox proportional hazard analysis
• Propensity analysis
• Most popular statistical packages
• Conclusions and take home messages
What you will learn
• Multifactor Analysis of Variance
  – ANOVA Basics
    • Regression versus ANOVA
    • Model assumptions
    • Test principle – F-test
    • GLM approach
    • Contrasts
    • Multiple comparisons
    • Power and sample size
    • Diagnostics
    • Non-parametric alternative
  – Two-Factor ANOVA
    • Interaction effect
  – Analysis of Covariance
  – Repeated Measures
  – MANOVA
Use of Analysis of Variance
• ANOVA models are used to analyze the effect of qualitative explanatory variables (independent variables, or factors) on a quantitative response variable (the dependent variable).
• In multifactor studies, ANOVA models are employed to determine the key factors and whether the different factors interact.
Regression versus Analysis of Variance
Simple regression model: fitting a mean that changes as a function of a quantitative variable. Regression allows predictions (extrapolations) of the response variable.

Simple ANOVA model: comparing means of groups made by a qualitative variable. There is no specification of the nature of the statistical relationship with the response variable.
Source: Statistics in Practice, Moore and McCabe, 2006
Single-Factor ANOVA
Example
The graph below contains the results of a study that measured the
response of 30 subjects to treatments and placebo.
Let’s evaluate if there are significant differences in mean response.
Single-Factor ANOVA
Basic Ideas and Assumptions
• Used to simultaneously compare two or more group
means based on independent samples from each
group.
• We assume that the samples are from normally
distributed populations, all with the same variance.
• The larger the variation among sample group means
relative to the variation of individual measurements
within the groups, the greater the evidence that the
hypothesis of equal group means is untrue.
Single-Factor ANOVA
Normality check
Kolmogorov-Smirnov or Shapiro-Wilk test
Graphically, with a Q-Q plot (together with kurtosis and skewness)
[Figure: histograms (with mean, standard deviation and N) and normal Q-Q plots of IPGLOB for two samples, annotated with skewness and kurtosis values]
Single-Factor ANOVA
Assumptions
Observations must be i.i.d. (independent and identically distributed):
– Identically distributed: if the samples are not identically distributed, comparing is not possible.
– Independent: we cannot predict an observation from another observation.
[Figure: scatter plots contrasting samples that can and cannot be compared]
And just as we have tests for normality (Kolmogorov-Smirnov, Shapiro-Wilk), there exist tests for equal variances (e.g. Levene), to be executed before we start with the ANOVA tests.
Single-Factor ANOVA
Test Principle
ANOVA model:

H0: μ1 = μ2 = … = μk
Ha: not all μi are equal

Data layout: groups Gr1 … Grk with observations Xij (observation i in group j), group means X̄j, and overall mean X̄.

The total sum of squares splits into a between-groups and a within-groups part:

SSTotal = SSGroups + SSError

Σj Σi (Xij − X̄)² = Σj nj (X̄j − X̄)² + Σj Σi (Xij − X̄j)²

with sums over the groups j = 1 … k and the observations i = 1 … nj within each group.
Principle of the global test:
If the variability between the groups is significantly greater
than the variability within the groups: reject H0
Single-Factor ANOVA
F-Test
• Null hypothesis: μ1 = μ2 = … μk
• Alternative hypothesis: not all μi are equal
• Test statistic: F = MSG/MSE
– MSG: estimate for the variability among groups (per df)
– MSE: estimate for the variability within groups (per df)
• Decision rule: reject H0 if F > F(α; k−1, N−k), the upper-α critical value of the F distribution with k−1 and N−k degrees of freedom
• Demonstration: http://bcs.whfreeman.com/ips5e/
Single-Factor ANOVA
Anova table (Mean Square = Sum of Squares / df, i.e. MS = SS / df; Fobs = MSTreat / MSErr):

Source | Sum of Squares | df  | Mean Square | F             | Signif.
Groups | SSTreat        | k−1 | MSTreat     | MSTreat/MSErr | p-value
Error  | SSErr          | N−k | MSErr       |               |
Total  | SSTot          | N−1 |             |               |

The variability between groups reflects the effect of the independent variable; the variability within groups reflects the effect of unknown independent variables or measurement errors (the residual variance).

Reject H0 when Fobs > Fcrit, i.e. when p < .05.
Interpretation of rejection of H0 :
At least one of the group means is different from another group mean.
Single-Factor ANOVA
Example
So what’s the verdict for the drug effect ?

Oneway Anova – Summary of Fit:

RSquare                     0.227826
RSquare Adj                 0.170628
Root Mean Square Error      6.070878
Mean of Response            7.9
Observations (or Sum Wgts)  30

Analysis of Variance:

Source  | DF | Sum of Squares | Mean Square | F Ratio | Prob>F
Model   |  2 |       293.6000 |     146.800 |  3.9831 | 0.0305
Error   | 27 |       995.1000 |      36.856 |         |
C Total | 29 |      1288.7000 |      44.438 |         |

Means for Oneway Anova:

Level   | Number | Mean    | Std Error
a       | 10     |  5.3000 | 1.9198
d       | 10     |  6.1000 | 1.9198
placebo | 10     | 12.3000 | 1.9198

Std Error uses a pooled estimate of error variance.

The F value of 3.98 is significant with a p-value of 0.03, which confirms that there is a significant difference in the means. The F test does not give any specifics about which means are different, only that there is at least one pair of means that is statistically different. The R-square is the proportion of variation explained by the model.
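The arithmetic of this Anova table can be checked in a few lines of Python (a minimal sketch using the sums of squares reported above; the function name is ours):

```python
def oneway_f(ss_groups, ss_error, k, n_total):
    """F ratio of a one-way ANOVA from its sums of squares."""
    ms_groups = ss_groups / (k - 1)      # MSG, df = k - 1
    ms_error = ss_error / (n_total - k)  # MSE, df = N - k
    return ms_groups / ms_error

# Drug example: SS(Model) = 293.6, SS(Error) = 995.1, 3 groups of 10 subjects
f_ratio = oneway_f(293.6, 995.1, k=3, n_total=30)
print(round(f_ratio, 4))  # 3.9831, matching the table
```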
Regression Approach (GLM)
Example
Response: LBS

Summary of Fit:

RSquare                     0.227826
RSquare Adj                 0.170628
Root Mean Square Error      6.070878
Mean of Response            7.9
Observations (or Sum Wgts)  30

Parameter Estimates:

Term            | Estimate | Std Error | t Ratio | Prob>|t|
Intercept       |      7.9 |  1.108386 |    7.13 | <.0001
Drug[a-placebo] |     -2.6 |  1.567494 |   -1.66 | 0.1088
Drug[d-placebo] |     -1.8 |  1.567494 |   -1.15 | 0.2609

Effect Test:

Source | Nparm | DF | Sum of Squares | F Ratio | Prob>F
Drug   |     2 |  2 |      293.60000 |  3.9831 | 0.0305

From linear regression to the general linear model: the coding scheme for the categorical variable defines the interpretation of the parameter estimates.
Regression Approach (GLM)
Example - Regressor construction
• Terms are named according to how the regressor variables were
constructed.
• Drug[a-placebo] means that the regressor variable is coded as 1
when the level is “a”, - 1 when the level is “placebo”, and 0
otherwise.
• Drug[d-placebo] means that the regressor variable is coded as 1
when the level is “d”, - 1 when the level is “placebo”, and 0
otherwise.
• You can write the notation for Drug[a-placebo] as ([Drug=a] − [Drug=placebo]), where [Drug=a] is a one-or-zero indicator of whether the drug is “a” or not.
• The regression equation then looks like:
Y = b0 + b1*([Drug=a] − [Drug=placebo]) + b2*([Drug=d] − [Drug=placebo]) + error
Regression Approach (GLM)
Example – Parameters and Means
• With this regression equation, the predicted values for the levels
“a”, “d” and “placebo” are the means for these groups.
• For the “a” level:
Pred y = 7.9 + (−2.6)*(1−0) + (−1.8)*(0−0) = 5.3
For the “d” level:
Pred y = 7.9 + (−2.6)*(0−0) + (−1.8)*(1−0) = 6.1
For the “placebo” level:
Pred y = 7.9 + (−2.6)*(0−1) + (−1.8)*(0−1) = 12.3
• The advantage of this coding system is that each regression parameter tells you how different that group’s mean is from the mean of the means for all levels (the average response across all levels).
• Other coding schemes result in different interpretations of the
parameters.
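As a sketch of how this effect coding reproduces the group means (coefficients taken from the parameter estimates above; the helper function is ours):

```python
# Effect ("sum-to-zero") coding for the three drug levels
b0, b1, b2 = 7.9, -2.6, -1.8  # Intercept, Drug[a-placebo], Drug[d-placebo]

def predicted_mean(level):
    x1 = {"a": 1, "placebo": -1}.get(level, 0)  # Drug[a-placebo] regressor
    x2 = {"d": 1, "placebo": -1}.get(level, 0)  # Drug[d-placebo] regressor
    return b0 + b1 * x1 + b2 * x2

for level in ("a", "d", "placebo"):
    print(level, round(predicted_mean(level), 1))  # 5.3, 6.1, 12.3 - the group means
```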
Example
CAMELOT Study, JAMA 2004
Single-Factor ANOVA
Contrasts
• Contrasts are often used to analyze (a priori or post-hoc)
which group (or factor level) means are different.
• A contrast L is a comparison involving two or more factor
level means and is defined as a linear combination of the
factor level means µi where the coefficients ci sum to
zero.
L = c1µ1+c2µ2+…+ckµk with c1 + c2 + …+ ck = 0
• Examples
L = µ1 − µ2  or  L = µ1 − (1/3)µ2 − (1/3)µ3 − (1/3)µ4
Single-Factor ANOVA
Contrasts
t-test for a linear contrast

Hypothesis: H0: L = c1µ1 + c2µ2 + … + ckµk = 0 versus H1: L ≠ 0

Estimate of the contrast: Lobs = c1·X̄1 + c2·X̄2 + … + ck·X̄k

Test statistic: tobs = Lobs / SE(L), with SE(L) = √( MSerr · Σj cj²/nj )

We decide to reject H0 when |tobs| > t(N−k, 1−α/2), and accept that L is not equal to zero.
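As an illustration, the contrast comparing placebo with the average of the two active drugs can be tested from the earlier example's summary statistics (the choice of coefficients is ours; the means and MSerr come from the one-way ANOVA output):

```python
from math import sqrt

means = {"a": 5.3, "d": 6.1, "placebo": 12.3}  # group means from the example
n = {"a": 10, "d": 10, "placebo": 10}
ms_err = 36.856                                 # MSE from the ANOVA table

# Contrast: placebo versus the average of the two active drugs
c = {"a": -0.5, "d": -0.5, "placebo": 1.0}      # coefficients sum to zero
l_obs = sum(c[g] * means[g] for g in means)
se_l = sqrt(ms_err * sum(c[g] ** 2 / n[g] for g in means))
t_obs = l_obs / se_l
print(round(l_obs, 2), round(t_obs, 2))  # L = 6.6, t ~ 2.81, above t(27, 0.975) ~ 2.05
```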
Single-Factor ANOVA
Multiple comparisons
• In a study we often want to make several comparisons, such as comparing many pairs of means.
• Making multiple comparisons increases the possibility of committing a Type 1 error (declaring something significant that is not in fact significant).
• The more tests you do, the more likely you are to find a significant difference that occurred by chance alone.
• If you are comparing all possible pairs of means in a large ANOVA lay-out, there are many possible tests, and a Type 1 error becomes very likely.
Single-Factor ANOVA
Adjusting for Multiple comparisons
• There are many methods that modify tests to control an overall error rate when doing simultaneous comparisons.
• With the method of Bonferroni the overall error rate is divided by the total number of comparisons you want to make, so we test differences between means at a significance level α* = α / c.
• Other multiple comparison methods such as Tukey-Kramer, Sidak or Gabriel are less conservative than Bonferroni. This means that they are more powerful and able to detect smaller differences.
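A minimal sketch of the Bonferroni adjustment for all pairwise comparisons (the helper name is ours):

```python
from math import comb

def bonferroni_level(alpha, k_groups):
    """Per-comparison significance level alpha* = alpha / c for all pairs."""
    n_comparisons = comb(k_groups, 2)  # c = k(k-1)/2 pairwise tests
    return alpha / n_comparisons

# With 3 groups there are 3 pairwise tests, so alpha* = 0.05/3
print(round(bonferroni_level(0.05, 3), 4))
```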
Single-Factor ANOVA
Adjusting for Multiple comparisons
What can we conclude about the differences between the groups
using the comparison circles and the tables on the next slide ?
Single-Factor ANOVA
Adjusting for Multiple comparisons
Means Comparisons (Dif = Mean[i] − Mean[j]), Alpha = 0.05:

        |  placebo |        d |        a
placebo |  0.00000 |  6.20000 |  7.00000
d       | -6.20000 |  0.00000 |  0.80000
a       | -7.00000 | -0.80000 |  0.00000

Comparisons for each pair using Student's t (t = 2.05181), Abs(Dif) − LSD:

        |  placebo |        d |        a
placebo | -5.57063 |  0.62937 |  1.42937
d       |  0.62937 | -5.57063 | -4.77063
a       |  1.42937 | -4.77063 | -5.57063

Positive values show pairs of means that are significantly different. Both “a” and “d” appear significantly different from “placebo” with unadjusted tests.

Comparisons for all pairs using Tukey-Kramer HSD (q* = 2.47942), Abs(Dif) − LSD:

        |  placebo |        d |        a
placebo | -6.73157 | -0.53157 |  0.26843
d       | -0.53157 | -6.73157 | -5.93157
a       |  0.26843 | -5.93157 | -6.73157

Positive values show pairs of means that are significantly different. Only drug “a” is significantly different from “placebo” with the Tukey-Kramer adjusted t-tests. The difference in significance occurs because the quantile that is multiplied with the SE to create a Least Significant Difference has grown from 2.05 to 2.48 between Student’s test and the Tukey-Kramer test.
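The two least significant differences behind these tables can be reproduced from MSE and the two quantiles (a quick check; values taken from the output above):

```python
from math import sqrt

mse, n = 36.856, 10                    # from the one-way ANOVA example
se_diff = sqrt(mse * (1 / n + 1 / n))  # SE of a difference of two group means

lsd_student = 2.05181 * se_diff        # Student's t quantile, t(27, 0.975)
lsd_tukey = 2.47942 * se_diff          # Tukey-Kramer quantile q*

print(round(lsd_student, 2), round(lsd_tukey, 2))  # 5.57 and 6.73
# a vs placebo: |dif| = 7.0 exceeds both thresholds
# d vs placebo: |dif| = 6.2 exceeds only the unadjusted LSD of 5.57
```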
SPSS: ANOVA
t-tests with a significance threshold that is no longer 0.05 but 0.05 divided by the number of tests performed.
Single-Factor ANOVA
Power and Sample Size
• Power is the probability of achieving a certain
significance when the true means and variances are
specified.
• You can use the power concept to help choose a sample
size that is likely to give significance for certain effect
sizes and variances.
• Power has the following ingredients:
– The effect size, that is, the separation of the means
– The standard deviation of the error, or the variance
– Alpha, the significance level
– The number of observations, the sample size
Single-Factor ANOVA
Power and Sample size
• Increase the effect size. Larger differences are
easier to detect. For example, when designing
an experiment to test a drug, administer as large
a difference in doses as possible. Also, use
balanced designs.
• Decrease residual variance. If you have less noise it is easier to find differences. Sometimes this can be done by blocking, by testing within subjects, or by selecting a more homogeneous sample.
Single-Factor ANOVA
Power and Sample size
• Increase the sample size. With larger samples
the standard error of the estimate of effect size
is smaller. The effect is estimated with more
precision. Roughly, the precision increases in
proportion to the square root of the sample size.
• Accept less protection. Increase alpha. There is
nothing magic about alpha=0.05. A larger alpha
lowers the cut-off value. A statistical test with
alpha=0.20 declares significant differences more
often (and also leads to false conclusions more
often).
Single-Factor ANOVA
Power and Sample size
Power Details dialog (1-way Anova): Sigma = 6.070878, Delta = 3.128365. You can enter one, two, or a sequence of values for Alpha, Sigma, Delta and Number, and solve for power, the least significant number, or the least significant value; calculations are done on all combinations.

Alpha  | Number | Power
0.0100 |     30 | 0.3961
0.0100 |     40 | 0.5761
0.0100 |     50 | 0.7220
0.0100 |     60 | 0.8276
0.0100 |     70 | 0.8981
0.0100 |     80 | 0.9421
0.0100 |     90 | 0.9683
0.0100 |    100 | 0.9831
0.0500 |     30 | 0.6633
0.0500 |     40 | 0.8064
0.0500 |     50 | 0.8949
0.0500 |     60 | 0.9455
0.0500 |     70 | 0.9728
0.0500 |     80 | 0.9869
0.0500 |     90 | 0.9938
0.0500 |    100 | 0.9972

If you want 90% probability (power) of achieving a significance of 0.01, then the sample size needs to be slightly above 70. For the same power at 0.05 significance, the sample size only needs to be 50.
ANOVA Diagnostics
Residuals
• As in regression, residuals, studentized residuals and
studentized deleted residuals are used for diagnosing
ANOVA model departures.
• Plots of residuals against fitted values, residual dot plots and normal probability plots are helpful in diagnosing the following departures from the ANOVA model:
– Nonnormality of error terms
– Nonconstancy of error variance
– Outliers and influential observations
– Nonindependence of error terms
ANOVA Diagnostics
Unequal Variances
• ANOVA assumes the variance is the same for all groups. Various F-based methods test for equality of the variances.
Tests that the Variances are Equal:

Level   | Count | Std Dev  | MeanAbsDif to Mean | MeanAbsDif to Median
a       |    10 | 4.643993 |           3.900000 |             3.900000
d       |    10 | 6.154492 |           5.120000 |             4.700000
placebo |    10 | 7.149981 |           5.700000 |             5.700000

Test           | F Ratio | DF Num | DF Den | Prob>F
O'Brien[.5]    |  1.1395 |      2 |     27 | 0.3349
Brown-Forsythe |  0.5998 |      2 |     27 | 0.5561
Levene         |  0.8904 |      2 |     27 | 0.4222
Bartlett       |  0.7774 |      2 |      ? | 0.4596

Welch Anova testing Means Equal, allowing Std Devs Not Equal:

F Ratio | DF Num | DF Den | Prob>F
3.3942  |      2 | 17.406 | 0.0569
• If unequal variances are of concern, you can consider Welch Anova (a test in which the observations are weighted by the reciprocals of the estimated variances), a nonparametric approach, or a transformation of the response variable such as the square root or the log.
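Welch's statistic in the output above can be reproduced from the group summaries alone, using the standard Welch ANOVA formulas with weights n/s² (a sketch; the function is ours):

```python
def welch_anova(means, sds, ns):
    """Welch's F statistic and denominator df from group summaries."""
    k = len(means)
    w = [n / s ** 2 for s, n in zip(sds, ns)]       # weights n / s^2
    grand = sum(wi * m for wi, m in zip(w, means)) / sum(w)
    f_num = sum(wi * (m - grand) ** 2 for wi, m in zip(w, means)) / (k - 1)
    h = sum((1 - wi / sum(w)) ** 2 / (n - 1) for wi, n in zip(w, ns))
    f = f_num / (1 + 2 * (k - 2) / (k ** 2 - 1) * h)
    df2 = (k ** 2 - 1) / (3 * h)
    return f, df2

# Summaries from the drug example
f, df2 = welch_anova([5.3, 6.1, 12.3],
                     [4.643993, 6.154492, 7.149981],
                     [10, 10, 10])
print(round(f, 4), round(df2, 2))  # ~3.3942 with ~17.41 denominator df, as on the slide
```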
Single-Factor ANOVA
Nonparametric Alternative
• Nonparametric procedures do not depend on the
distribution of the error term, often the only
requirement is that the distribution is continuous.
• They are based on the ranks of the data, thus
ignoring the spacing information between the data.
• Kruskal-Wallis test statistic (h) has an approximate
chi-square distribution with k-1 degrees of freedom.
• Decision rule: reject H0 if h > χ²(α; k−1), the upper-α critical value of the chi-square distribution with k−1 degrees of freedom
Kruskal-Wallis test
Example
Wilcoxon / Kruskal-Wallis Tests (Rank Sums):

Level   | Count | Score Sum | Score Mean | (Mean−Mean0)/Std0
a       |    10 |       122 |    12.2000 |            -1.433
d       |    10 |     132.5 |    13.2500 |            -0.970
placebo |    10 |     210.5 |    21.0500 |             2.425

1-way Test, Chi-Square Approximation:

ChiSquare | DF | Prob>ChiSq
6.0612    |  2 | 0.0483
What is your conclusion from the Kruskal-Wallis test? Compare with the ANOVA results.
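The test statistic can be approximated from the rank sums in the table (a sketch using the standard Kruskal-Wallis formula without tie correction, which is why it lands slightly below the reported 6.0612):

```python
def kruskal_wallis_h(rank_sums, ns):
    """Kruskal-Wallis statistic (no tie correction) from group rank sums."""
    n_total = sum(ns)
    h = 12 / (n_total * (n_total + 1)) * sum(
        r ** 2 / n for r, n in zip(rank_sums, ns)
    ) - 3 * (n_total + 1)
    return h

# Rank sums from the drug example: 122, 132.5, 210.5 for a, d, placebo
h = kruskal_wallis_h([122, 132.5, 210.5], [10, 10, 10])
print(round(h, 3))  # 6.033
```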
Analysis of Variance
Demonstration
How to do an analysis of variance with the
EXCEL data analysis option ?
Two-Factor ANOVA
Introduction
• A method for simultaneously analyzing two
factors affecting a response.
– Group effect: treatment group or dose level
– Blocking factor whose variation can be separated
from the error variation to give more precise group
comparisons: study center, gender, disease
severity, diagnostic group, …
• One of the most common ANOVA methods
used in clinical trial analysis.
• Similar assumptions as for single-factor ANOVA.
• Non-parametric alternative : Friedman test
Two-Factor ANOVA
Example
Treatment by blocking factor (four observations per cell):

Treatment | Female (j=1)   | Male (j=2)
A (i=1)   | 44, 56, 38, 46 | 52, 60, 44, 58
B (i=2)   | 62, 54, 70, 48 | 54, 56, 66, 62
C (i=3)   | 46, 52, 58, 50 | 74, 86, 76, 82
D (i=4)   | 58, 48, 62, 40 | 64, 56, 68, 50
Do different treatments cause differences in mean response ?
Is there a difference in mean response for males and females ?
Is there an interaction between treatment and gender ?
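The two-way sums of squares for these data can be computed directly (a stdlib-only sketch; the cell layout follows the table above):

```python
data = {  # (treatment, gender) -> four scores per cell
    ("A", "F"): [44, 56, 38, 46], ("A", "M"): [52, 60, 44, 58],
    ("B", "F"): [62, 54, 70, 48], ("B", "M"): [54, 56, 66, 62],
    ("C", "F"): [46, 52, 58, 50], ("C", "M"): [74, 86, 76, 82],
    ("D", "F"): [58, 48, 62, 40], ("D", "M"): [64, 56, 68, 50],
}

def mean(xs):
    return sum(xs) / len(xs)

grand = mean([x for cell in data.values() for x in cell])

def level_scores(factor, level):
    """All scores at one level of factor 0 (treatment) or 1 (gender)."""
    return [x for key, cell in data.items() if key[factor] == level for x in cell]

ss_a = sum(len(level_scores(0, t)) * (mean(level_scores(0, t)) - grand) ** 2
           for t in "ABCD")                       # treatment main effect
ss_b = sum(len(level_scores(1, g)) * (mean(level_scores(1, g)) - grand) ** 2
           for g in "FM")                         # blocking (gender) main effect
ss_cells = sum(len(c) * (mean(c) - grand) ** 2 for c in data.values())
ss_ab = ss_cells - ss_a - ss_b                    # interaction
ss_err = sum((x - mean(c)) ** 2 for c in data.values() for x in c)

print(ss_a, ss_b, ss_ab, ss_err)  # 1035.0 968.0 827.0 1346.0
```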
Two-Factor ANOVA
Interaction Effect
Two-way ANOVA allows us to evaluate the effect of the individual factors on the response (main effects) and to evaluate interaction effects. Interaction: the treatment affects the response differently depending on the level of the other factor (block).
Source:
Common Statistical Methods for Clinical
Research, 1997, Glenn A. Walker
Two-Factor ANOVA
The Model
X_ijk = μ + α_i + β_j + (αβ)_ij + ε_ijk

– X_ijk: response score of subject k in column i and row j
– μ: overall mean
– α_i: effect of the treatment factor (a levels, or i columns)
– β_j: effect of the blocking factor (b levels, or j rows)
– (αβ)_ij: interaction effect
– ε_ijk: error, or effect of unmeasured variables
Two-Factor ANOVA
Anova Table
Variation Source | Sum of Squares | df         | Mean Square | F          | Signif.
Treatment        | SSA            | a−1        | MSA         | MSA/MSErr  | p-value
Blocking         | SSB            | b−1        | MSB         | MSB/MSErr  | p-value
Interaction      | SSAB           | (a−1)(b−1) | MSAB        | MSAB/MSErr | p-value
Error            | SSErr          | N−ab       | MSErr       |            |
Total            | SSTot          | N−1        | MSTot       |            |

Treatment, Blocking and Interaction capture the between-groups variation (effects of the treatment factor, the blocking factor and their interaction); Error captures the within-groups variation (residual variance).
Two-Factor ANOVA
Example (GLM approach)
Response: Score

Summary of Fit:

RSquare                     0.479646
RSquare Adj                 0.402556
Root Mean Square Error      8.971147
Mean of Response            57.5
Observations (or Sum Wgts)  32

Lack of Fit:

Source      | DF | Sum of Squares | Mean Square | F Ratio | Prob>F
Lack of Fit |  3 |       827.0000 |     275.667 |  4.9153 | 0.0084
Pure Error  | 24 |      1346.0000 |      56.083 |         |
Total Error | 27 |      2173.0000 |             |         |

Max RSq 0.6777

Parameter Estimates:

Term        | Estimate | Std Error | t Ratio | Prob>|t|
Intercept   |     57.5 |  1.58589  |   36.26 | <.0001
Gender[F-M] |     -5.5 |  1.58589  |   -3.47 | 0.0018
Treatm[A-D] |    -7.75 |  2.746842 |   -2.82 | 0.0089
Treatm[B-D] |      1.5 |  2.746842 |    0.55 | 0.5895
Treatm[C-D] |        8 |  2.746842 |    2.91 | 0.0071

Effect Test:

Source | Nparm | DF | Sum of Squares | F Ratio | Prob>F
Gender |     1 |  1 |       968.0000 | 12.0276 | 0.0018
Treatm |     3 |  3 |      1035.0000 |  4.2867 | 0.0134

Questions:
– How much of the variation of the response is explained by the model?
– What do you conclude from the Lack of Fit test?
– Which of the factors have a significant effect on the response?
– What is the mean response for the males?
– What is the mean response for subjects treated with D?
– What can you do to improve the fit?
Two-Factor ANOVA
Example: Leverage Plots
Least Squares Means:

Gender:
Level | Least Sq Mean | Std Error   | Mean
F     | 52.00000000   | 2.242786792 | 52.0000
M     | 63.00000000   | 2.242786792 | 63.0000

Treatment:
Level | Least Sq Mean | Std Error   | Mean
A     | 49.75000000   | 3.171779498 | 49.7500
B     | 59.00000000   | 3.171779498 | 59.0000
C     | 65.50000000   | 3.171779498 | 65.5000
D     | 55.75000000   | 3.171779498 | 55.7500
Two-Factor ANOVA
Example with Interaction
Response: Score

Summary of Fit:

RSquare                     0.677682
RSquare Adj                 0.583673
Root Mean Square Error      7.488881
Mean of Response            57.5
Observations (or Sum Wgts)  32

Parameter Estimates:

Term                    | Estimate | Std Error | t Ratio | Prob>|t|
Intercept               |     57.5 |  1.32386  |   43.43 | <.0001
Gender[F-M]             |     -5.5 |  1.32386  |   -4.15 | 0.0004
Treatm[A-D]             |    -7.75 |  2.292992 |   -3.38 | 0.0025
Treatm[B-D]             |      1.5 |  2.292992 |    0.65 | 0.5192
Treatm[C-D]             |        8 |  2.292992 |    3.49 | 0.0019
Gender[F-M]*Treatm[A-D] |     1.75 |  2.292992 |    0.76 | 0.4528
Gender[F-M]*Treatm[B-D] |        5 |  2.292992 |    2.18 | 0.0393
Gender[F-M]*Treatm[C-D] |     -8.5 |  2.292992 |   -3.71 | 0.0011

Effect Test:

Source        | Nparm | DF | Sum of Squares | F Ratio | Prob>F
Gender        |     1 |  1 |       968.0000 | 17.2600 | 0.0004
Treatm        |     3 |  3 |      1035.0000 |  6.1516 | 0.0030
Gender*Treatm |     3 |  3 |       827.0000 |  4.9153 | 0.0084

Questions:
– How much of the variation of the response is explained by the model?
– What can you conclude from the effect test table?
– What is the mean response for males treated with A?

An interesting phenomenon, which is true only for balanced designs, is that the estimates and SS for the main effects are the same as in the fit without interaction. The F tests are different. Why? The interaction effect test is identical to the lack-of-fit test in the previous model.
Two-Factor ANOVA
Example with Interaction
Two-Factor ANOVA
Example Profile plot
Least Squares Means:

Gender:
Level | Least Sq Mean | Std Error   | Mean
F     | 52.00000000   | 1.872220162 | 52.0000
M     | 63.00000000   | 1.872220162 | 63.0000

Treatment:
Level | Least Sq Mean | Std Error   | Mean
A     | 49.75000000   | 2.647719144 | 49.7500
B     | 59.00000000   | 2.647719144 | 59.0000
C     | 65.50000000   | 2.647719144 | 65.5000
D     | 55.75000000   | 2.647719144 | 55.7500
Two-Factor ANOVA
Example Interaction Plot
Least Squares Means:

Level | Least Sq Mean | Std Error
F,A   | 46.00000000   | 3.744440323
F,B   | 58.50000000   | 3.744440323
F,C   | 51.50000000   | 3.744440323
F,D   | 52.00000000   | 3.744440323
M,A   | 53.50000000   | 3.744440323
M,B   | 59.50000000   | 3.744440323
M,C   | 79.50000000   | 3.744440323
M,D   | 59.50000000   | 3.744440323
The plot visualizes that treatment C has a markedly different effect on the mean response of males compared to females.
Two-Factor ANOVA
Example with Excel
ANOVA can easily be done with the data
analysis module from Excel
ANOVA table from Excel:

Source of Variation | Sum of Squares | DF | Mean Squares | F     | P-value | F critical
Treatment           |           1035 |  3 |          345 |  6.15 | 0.003   | 3.009
Blocking            |            968 |  1 |          968 | 17.26 | 0.0003  | 4.259
Interaction         |            827 |  3 |          276 |  4.92 | 0.008   | 3.008
Error               |           1346 | 24 |           56 |       |         |
Total               |           4176 | 31 |              |       |         |
What can you conclude from this Anova table ?
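The F column follows directly from the SS and DF columns (a quick check; MSErr = 1346/24):

```python
ms_error = 1346 / 24  # MSErr from the Excel ANOVA table, ~56.08

f_ratios = {}
for source, (ss, df) in {"Treatment": (1035, 3),
                         "Blocking": (968, 1),
                         "Interaction": (827, 3)}.items():
    f_ratios[source] = (ss / df) / ms_error  # MS(effect) / MSErr

print({s: round(f, 2) for s, f in f_ratios.items()})
# {'Treatment': 6.15, 'Blocking': 17.26, 'Interaction': 4.92}
```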
Analysis of Covariance
ANCOVA
• Method for comparing response means among two or
more groups adjusted for a quantitative concomitant
variable, or “covariate”, thought to influence the
response.
• The response variable is explained by independent
quantitative variable(s) and qualitative variable(s).
• Combination of ANOVA and regression.
• Increases the precision of comparison of the group
means by decreasing the error variance.
• Widely used in clinical trials
Analysis of Covariance
The model
• The covariance model for a single-factor with
fixed levels adds another term to the ANOVA
model, reflecting the relationship between the
response variable and the concomitant variable.
Y_ij = μ + τ_i + γ (X_ij − X̄) + ε_ij

• The concomitant variable is centered around its mean so that the constant μ represents the overall mean in the model.
Analysis of Covariance
Model assumptions
• The single factor Ancova model on the previous
slide assumes :
– Normality of error terms
– Equality of error variances for different treatments
– Equality of slopes of the different treatment regression
lines
– Linearity of regression relation with concomitant
variable
– Uncorrelatedness of error terms
Analysis of Covariance
Example
Let’s look again at the response (LBS=bacteria count) of 30 subjects to
one of three treatments by adding the continuous effect
(LBI=bacteria count at baseline) to the model.
Analysis of Covariance
Example
Response: LBS

Adding the covariate LBI to the model raises the RSquare from 22.78% to 67.62%.

Summary of Fit:

RSquare                     0.676261
RSquare Adj                 0.638906
Root Mean Square Error      4.005778
Mean of Response            7.9
Observations (or Sum Wgts)  30

Lack of Fit (tests whether anything you have left out of the model is significant):

Source      | DF | Sum of Squares | Mean Square | F Ratio | Prob>F
Lack of Fit | 18 |      254.86926 |     14.1594 |  0.6978 | 0.7507
Pure Error  |  8 |      162.33333 |     20.2917 |         |
Total Error | 26 |      417.20260 |             |         |

Max RSq 0.8740

Parameter Estimates:

Term            | Estimate  | Std Error | t Ratio | Prob>|t|
Intercept       | -2.695773 | 1.911085  |   -1.41 | 0.1702
Drug[a-placebo] | -1.185037 | 1.060822  |   -1.12 | 0.2742
Drug[d-placebo] | -1.076065 | 1.041298  |   -1.03 | 0.3109
LBI             | 0.9871838 | 0.164498  |    6.00 | <.0001

Effect Test:

Source | Nparm | DF | Sum of Squares | F Ratio | Prob>F
Drug   |     2 |  2 |       68.55371 |  2.1361 | 0.1384
LBI    |     1 |  1 |      577.89740 | 36.0145 | <.0001

The parameter estimate for LBI is 0.987, which is not unexpected because the response is the bacteria count, and LBI is the baseline count before treatment. With a coefficient of nearly 1 for LBI, the model is really fitting the difference in bacteria counts.

Drug is no longer significant in this model. How could this be? The error in the model has been reduced, so it should be easier for differences to be detected. Or could there be a relationship between LBI and Drug?
Analysis of Covariance
Example
Aha !
The drugs have not been randomly
assigned. The toughest cases with
the most bacteria tended to be given
the “placebo”. The drugs “a” and “d”
were given a head start at reducing
the bacteria count until LBI was
brought into the model.
So it is important to control for all the
factors as the significance of one
depends on what else is in the model.
Analysis of Covariance
Prediction Equation
We can calculate the prediction equation from the parameter estimates:

Predicted LBS = −2.695 + 0.987 · LBI + { −1.185 when “a”; −1.076 when “d”; +2.261 when “placebo” }
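A sketch of this prediction equation in code (the baseline value 10.0 below is an arbitrary illustration):

```python
def predicted_lbs(lbi, drug):
    """Prediction equation from the Ancova parameter estimates."""
    drug_effect = {"a": -1.185, "d": -1.076, "placebo": 2.261}
    return -2.695 + 0.987 * lbi + drug_effect[drug]

# At any common baseline count, the adjusted placebo-vs-a gap is the same:
gap = predicted_lbs(10.0, "placebo") - predicted_lbs(10.0, "a")
print(round(gap, 3))  # 3.446, far smaller than the raw-means gap of 7.0
```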
Analysis of Covariance
Leverage Plots
Interpretation and conclusions ?
Analysis of Covariance
Least Squares Means
It is not correct to compare raw cell means in the ANCOVA case
as raw cell means do not compensate for different covariate
values in the model.
Instead we construct predicted values (least squares means,
adjusted means), which are the expected value of the
observation from some level of the categorical factor when all the
other factors (covariates) are set to neutral values.
Least-squares means allow comparisons of factor levels while the other factors in the model are held fixed.
We use the prediction equation to calculate these adjusted
means.
Analysis of Covariance
Least Squares Means
Least Squares Means for the Drug example:

Level   | Least Sq Mean | Std Error   | Mean
a       | 6.71496346    | 1.288494280 | 5.3000
d       | 6.82393479    | 1.272468995 | 6.1000
placebo | 10.16110174   | 1.315923424 | 12.3000
Analysis of Covariance
Interactions
• When an Ancova model includes a main effect
and a covariate regressor, the analysis uses a
separate intercept for the covariate regressor for
each level of the main effect.
• If the intercepts are different, might not the
slopes of the lines also be different ? To find out
we need a way to capture the interaction of the
regression slope with the main effect. This is
done by introducing a crossed term, the
interaction of Drug and LBI into the model.
Analysis of Covariance
Interactions
Response: LBS

Summary of Fit:

RSquare                     0.691505
RSquare Adj                 0.627235
Root Mean Square Error      4.070002
Mean of Response            7.9
Observations (or Sum Wgts)  30

Lack of Fit:

Source      | DF | Sum of Squares | Mean Square | F Ratio | Prob>F
Lack of Fit | 16 |      235.22462 |     14.7015 |  0.7245 | 0.7231
Pure Error  |  8 |      162.33333 |     20.2917 |         |
Total Error | 24 |      397.55795 |             |         |

Max RSq 0.8740

Parameter Estimates:

Term                | Estimate  | Std Error | t Ratio | Prob>|t|
Intercept           | -3.108215 | 2.06108   |   -1.51 | 0.1446
Drug[a-placebo]     | 1.4776417 | 2.672093  |    0.55 | 0.5854
Drug[d-placebo]     | -1.477269 | 2.650789  |   -0.56 | 0.5825
LBI                 | 1.0027452 | 0.171762  |    5.84 | <.0001
Drug[a-placebo]*LBI | -0.257522 | 0.237815  |   -1.08 | 0.2896
Drug[d-placebo]*LBI | 0.0658032 | 0.227523  |    0.29 | 0.7749

Effect Test:

Source   | Nparm | DF | Sum of Squares | F Ratio | Prob>F
Drug     |     2 |  2 |        8.50258 |  0.2566 | 0.7757
LBI      |     1 |  1 |      564.56753 | 34.0821 | <.0001
Drug*LBI |     2 |  2 |       19.64465 |  0.5930 | 0.5606

What is your conclusion?
What is your conclusion ?
Analysis of Covariance
Interactions
Illustration of Covariance
with Separate Slopes.
Example
Powell et al, Circ 2008
Repeated-Measures
Basic concepts
• ‘Repeated-measures’ are measurements taken from the same
subject (patient) at repeated time intervals.
• Many clinical studies require:
– multiple visits during the trial
– response measurements made at each visit
• A repeated measures study may involve several treatments or
only a single treatment.
• ‘Repeated-measures’ are used to characterize a response
profile over time.
• Main research question:
– Is the mean response profile for one treatment group the same as
for another treatment group or a placebo group ?
• Comparison of response profiles can be tested with a single F-test.
Repeated-Measures
Comparing profiles
Source:
Common Statistical Methods
for Clinical Research, 1997,
Glenn A. Walker
Repeated-Measures Designs
• Advantages
– Provide good precision for comparing treatments since
between subjects variability is excluded from the
residual error.
– Allow the number of subjects (patients) needed to be lowered.
• Disadvantages (if several treatments per subject)
– The order of the treatments might have an effect on the
response : order effect
– The preceding treatment(s) might influence the
response : carry-over effect
Repeated Measures ANOVA
Single-Factor Model
• Response may vary
– among treatment groups
– among patients within groups
– among the different measurement times
• Therefore we include in the model:
– GROUP (between subject) fixed effect
– SUBJECT (within group) random effect
– TIME (within subject) effect
– GROUP-by-TIME interaction
Repeated-Measures ANOVA
Single-Factor summary table
Source          | df         | SS      | MS     | F
Group           | g−1        | SSG     | MSG    | FG = MSG/MSP(G)
Subject (group) | N−g        | SSP(G)  | MSP(G) | --
Time            | t−1        | SST     | MST    | FT = MST/MSE
Group*time      | (g−1)(t−1) | SSGT    | MSGT   | FGT = MSGT/MSE
Error           | (N−g)(t−1) | SSE     | MSE    | --
Total           | N·t − 1    | TOT(SS) |        |
Repeated Measures ANOVA
Approaches
• You can analyse repeated measures data with a
‘univariate’ approach using GLM. In addition to normality
and variance homogeneity, this approach requires the
assumption of ‘compound symmetry’, which means that
correlations between each pair of observations are the
same. In most repeated measures data, this assumption
is not valid.
• A ‘multivariate’ approach can be used to circumvent this
problem: repeated measurements become multivariate
response vectors (MANOVA).
Repeated Measures
Simple Example
• Six animals from two species were
tracked, and the diameter of the area that
each animal wandered was recorded.
Each animal was measured four times,
once per season.
• Is there a significant difference in mean
wandering area between the two species ?
Repeated Measures ANOVA
Random Effects – Mixed Model
Response: miles

Summary of Fit:

RSquare                     0.838417
RSquare Adj                 0.75224
Root Mean Square Error      1.219062
Mean of Response            4.458333
Observations (or Sum Wgts)  24

Tests wrt Random Effects:

Source           | SS      | MS Num  | DF Num | F Ratio | Prob>F
subject[species] | 17.1667 | 4.29167 |      4 |  2.8879 | 0.0588
season           | 47.4583 | 15.8194 |      3 | 10.6449 | 0.0005
species          | 51.0417 | 51.0417 |      1 | 11.8932 | 0.0261

Parameter Estimates:

Term                         | Estimate  | Std Error | t Ratio | Prob>|t|
Intercept                    | 4.4583333 | 0.24884   |   17.92 | <.0001
species[COYOTE]:subject[1-3] | -0.666667 | 0.49768   |   -1.34 | 0.2003
species[COYOTE]:subject[2-3] | -0.666667 | 0.49768   |   -1.34 | 0.2003
species[FOX]:subject[1-3]    | -1        | 0.49768   |   -2.01 | 0.0628
species[FOX]:subject[2-3]    | 0.25      | 0.49768   |    0.50 | 0.6227
season[fall-winter]          | -0.625    | 0.431003  |   -1.45 | 0.1676
season[spring-winter]        | 1.7083333 | 0.431003  |    3.96 | 0.0012
season[summer-winter]        | 0.875     | 0.431003  |    2.03 | 0.0605
species[COYOTE-FOX]          | 1.4583333 | 0.24884   |    5.86 | <.0001

What are your conclusions about the between-subjects species effect and the within-subjects season effect?
Repeated Measures ANOVA
Correlated Measurements – Multivariate Model
Response profiles are compared with multivariate F-tests (All Between):

Test             | Value     | Exact F | DF Num | DF Den | Prob>F
Wilks' Lambda    | 0.2516799 | 11.8932 |      1 |      4 | 0.0261
Pillai's Trace   | 0.7483201 | 11.8932 |      1 |      4 | 0.0261
Hotelling-Lawley | 2.973301  | 11.8932 |      1 |      4 | 0.0261
Roy's Max Root   | 2.973301  | 11.8932 |      1 |      4 | 0.0261
SPSS: Repeated measure GLM
Questions?
Multi-factor ANOVA
Take-home messages
• Use ANOVA to compare group means; to analyze the
effect of one or more qualitative variables on a continuous
response variable.
• Use ANCOVA to analyze concomitantly the effect of a
quantitative independent variable (covariate).
• Significance of differences in means is tested with the F-statistic, comparing between-group variation with within-group variation.
• Always use graphics to look at the data and to investigate
the model assumptions.
• Carefully analyze the interaction effects.
• Analyse repeated measures by comparing profile plots
using the GLM or the multivariate MANOVA approach.
And now a brief break…
For further slides on these topics
please feel free to visit the
metcardio.org website:
http://www.metcardio.org/slides.html