
Systematic Review Module 5:
When to Select Observational Studies as
Evidence for Comparative Effectiveness
Reviews
Dan Jonas, MD, MPH
Meera Viswanathan, PhD
Karen Crotty, PhD, MPH
RTI-UNC Evidence-based Practice Center
Sources
• AHRQ Methods Guide, Chapters 4 and 8, http://www.effectivehealthcare.ahrq.gov/repFiles/2007_10DraftMethodsGuide.pdf
• Norris et al. Observational studies in systematic reviews of comparative effectiveness. Draft manuscript.
• Chou R, Aronson N, Atkins D, et al. Assessing harms when comparing medical interventions: AHRQ and the Effective Health Care Program. J Clin Epidemiol 2008 Sep 25.
Learning Objectives
• Why should reviewers consider including observational studies (OSs) in comparative effectiveness reviews (CERs)?
• When should OSs be included in CERs?
• What are the differences in considering inclusion of OSs for benefits as opposed to OSs of harms?
Current Perspective
• CERs should consider including observational studies
– This should be the default strategy
• Reviewers should explicitly state the rationale for including or excluding OSs

Comparative Effectiveness Reviews (CERs)
• Systematic reviews that compare the relative benefits and harms among a range of available treatments or interventions for a given condition
CER Process Overview
[Process diagram]
• Prepare topic: refine key questions; develop analytic frameworks
• Search for and select studies: identify eligibility criteria; search for relevant studies; select evidence for inclusion
• Abstract data: extract evidence from studies; construct evidence tables
• Analyze and synthesize data: assess quality of studies; assess applicability of studies; apply qualitative methods; apply quantitative methods (meta-analyses); rate the strength of a body of evidence
• Present findings
Hierarchy of Evidence
[Evidence pyramid, from lowest risk of bias at the top; annotated with the question of applicability]
• Systematic Reviews
• RCTs
• Controlled Clinical Trials and Observational Studies
• Uncontrolled Observational Studies
• Case reports and case series
• Expert Opinions
Danger of Overreliance on RCTs
• May be unnecessary, inappropriate, inadequate, or impractical
• May be too short in duration
• May report intermediate outcomes rather than main health outcomes of interest
• Often not available for vulnerable populations
• Generally report efficacy rather than effectiveness
• AHRQ Evidence-based Practice Centers (EPCs) include a variety of study designs (not only RCTs)
Observational Studies (OSs)
• Definition: studies in which the investigators did not assign the exposure/intervention
– That is, nonexperimental studies
– Cohort, case-control, cross-sectional
• We present considerations for including OSs to assess benefits and harms separately
OSs to Assess Benefits (I)
• Often insufficient evidence from trials to answer all key questions in CERs (think PICOTS)
– Population: may not be available for subpopulations and vulnerable populations
– Interventions: may not be able to assign high-risk interventions randomly
– Outcomes: may report intermediate outcomes rather than main health outcomes of interest
– Timing: may be too short in duration
– Setting: may not represent typical practice
OSs to Assess Benefits (II)
• Reviewers should consider two questions:
1. Are there gaps in trial evidence for the review questions under consideration?
2. Will observational studies provide valid and useful information to address key questions?
[Decision flow for including OSs]
Start with the systematic review question; always consider controlled trials (including PICOTS). Then ask: are there gaps in trial evidence?
• No: confine the review to controlled trials
• Yes: consider OSs, and ask whether OSs will provide valid and useful information:
– Refocus the review question on the gaps
– Assess whether OSs address the review question
– Assess the suitability of OSs: natural history of the disease or exposure; potential biases
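The decision flow above can be sketched as a small function. This is an illustrative sketch only: the `Review` record, its field names, and the returned phrases are hypothetical, not part of the AHRQ guidance.

```python
from dataclasses import dataclass

# Hypothetical record of the reviewer's two judgments from the flowchart;
# the field names are illustrative, not from the AHRQ Methods Guide.
@dataclass
class Review:
    has_trial_gaps: bool       # Are there gaps in trial evidence (per PICOTS)?
    os_valid_and_useful: bool  # Will OSs provide valid and useful information?

def os_inclusion_decision(review: Review) -> str:
    """Mirror the flowchart: consider controlled trials first,
    then turn to OSs only where trial evidence has gaps."""
    if not review.has_trial_gaps:
        return "confine review to controlled trials"
    # Gaps exist: refocus the review question on the gaps, assess whether
    # available OSs address it, and assess their suitability (natural
    # history of the condition, potential biases) before including them.
    if review.os_valid_and_useful:
        return "include OSs to address the gaps"
    return "report the gap in evidence without OSs"
```

The point of the sketch is that OS inclusion is conditional twice over: first on a demonstrated gap in trial evidence, then on the OSs actually being valid and useful for that gap.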
Gaps in Trial Evidence: PICOTS
• Trial data may be insufficient for a number of reasons (PICOTS):
– Populations included (missing certain groups)
– Interventions included
– Outcomes reported (only intermediate)
– Duration
– All trials may be efficacy studies
Are Trial Data Sufficient? PICOTS and Beyond (I)
• Risk of bias (internal validity)
– Degree to which the findings may be attributed to factors other than the intervention under review
• Consistency
– Extent to which effect size and direction vary within and across studies
– Inconsistency may be due to heterogeneity across PICOTS
• Directness
– Degree to which outcomes that are important to users of the CER (patients, clinicians, or policymakers) are encompassed by trial data
– Health outcomes are generally most important
Are Trial Data Sufficient? PICOTS and Beyond (II)
• Precision
– Includes sample size, number of studies, and heterogeneity within or across studies
• Reporting bias
– Extent to which trial authors appear to have reported all outcomes examined
• Applicability
– Extent to which the trial data are likely to be applicable to populations, interventions, and settings of interest to the user
– The review questions should reflect the PICOTS characteristics of interest
When to Identify Gaps in Trial Evidence
• Identification of gaps in the trial evidence available to answer review questions can occur at a number of points in the review:
– When first scoping the review
– During consultation with the Technical Expert Panel
– During initial review of titles and abstracts
– After detailed review of trial data
Iterative Process of Identifying Gaps in Evidence
[Repeats the CER process diagram: prepare topic → search for and select studies → abstract data → analyze and synthesize data → present findings]
Gaps in Trial Evidence
• Operationally, reviewers may perform initial searches broadly to identify both OSs and trials, or may search sequentially, looking for OSs after reviewing trials in detail to identify gaps in the evidence
Will OSs Provide Valid and Useful Information?
Reviewers should:
• Refocus the study question on gaps in trial evidence
– Specify the PICOTS characteristics for gaps in trial evidence
• Assess whether available OSs may address the review questions (are they applicable to the PICOTS?)
• Assess the suitability of OSs to answer the review questions
Assess Suitability of OSs to Answer the Review Questions
• After gaps have been identified in the trial literature and OSs appear to potentially fill those gaps:
– Consider the clinical context and natural history of the condition under study
– Assess how potential biases may influence the results of OSs
Clinical Context
• Fluctuating or intermittent conditions are more difficult to assess with OSs
– Especially if there is no comparison group
• OSs may be more useful for conditions with steady progression or decline
Potential Biases
• Selection bias (and confounding by indication)
• Performance bias
• Detection bias
• Attrition bias
Confounding by Indication
• Confounding by indication
– A type of selection bias
– Occurs when different diagnoses, severity of illness, or comorbid conditions are important reasons for physicians to assign different treatments
– A common problem in pharmacoepidemiology studies comparing beneficial effects of interventions
• Reviewers generally would not include such studies because of a high risk of bias (poor internal validity), unless the studies had a good way to adjust for severity of disease
Harms
• Assessing harms can be difficult
– Trials often focus on benefits, with little effort to balance assessment of benefits and harms
– OSs are almost always necessary to assess harms adequately
• There are trade-offs between increasing comprehensiveness by reviewing all possible harms data and decreasing quality (increasing risk of bias) of the harms data
Trials to Assess Harms (I)
• RCTs are the gold standard for evaluating efficacy
• But relying solely on RCTs to evaluate harms in CERs is problematic
– Most lack prespecified hypotheses for harms because they are designed to evaluate benefits
– Assessment of harms is often a secondary consideration
– Quality and quantity of reporting of harms are frequently inadequate
– Few have sufficient sample sizes or duration to adequately assess uncommon or long-term harms
Trials to Assess Harms (II)
• Most RCTs are “efficacy” trials
– They assess benefits and harms in ideal, homogeneous populations and settings
– Patients who are more susceptible to harms are often underrepresented
• Few RCTs directly compare alternative treatment strategies
• Publication bias and selective outcome reporting bias are concerns
• RCTs may not be available
Trials to Assess Harms (III)
• Nevertheless, head-to-head RCTs provide the most direct evidence on comparative harms
• In addition, placebo-controlled RCTs can provide important information
• In general, CERs should routinely include both head-to-head and placebo-controlled trials for assessment of harms
– In lieu of placebo-controlled RCTs, CERs may incorporate findings of well-conducted systematic reviews if they evaluated the specific harms of interest
Unpublished Supplemental Trials Data
• Consider including results of completed or terminated unpublished RCTs and unpublished results from published trials
– FDA website, www.ClinicalTrials.gov, etc.
– Must consider the ability to fully assess risk of bias
• When a significant number of published trials fails to report an important adverse event, CER authors should report this gap in the evidence and consider efforts to obtain unpublished data
OSs to Assess Harms
• OSs are almost always necessary to assess harms adequately
– The exception is when data from RCTs are sufficient to estimate harms reliably
• May provide the best or only data for assessing harms in minority or vulnerable populations who are underrepresented in trials
• The types of OSs included in a CER will vary; different types of OSs might be included or rendered irrelevant by the availability of data from stronger study types
Hypothesis Testing vs. Hypothesis Generating
• An important consideration in determining which OSs to include
– Case reports are hypothesis generating
– Cohort and case-control studies are well suited for testing hypotheses of whether one intervention is associated with a greater risk for an adverse event than another and for quantifying that risk*

*Chou et al., JCE 2008
Hierarchy of Evidence
[Evidence pyramid, lowest risk of bias at the top; the upper tiers are labeled hypothesis testing and the lower tiers hypothesis generating, with applicability again noted]
• Systematic Reviews
• RCTs
• Controlled Clinical Trials and Observational Studies
• Uncontrolled Observational Studies
• Case reports and case series
• Expert Opinions
OSs to Assess Harms (I)
• Cohort and case-control studies
– CERs should routinely search for and include these, except when RCT data are sufficient and valid
• OSs based on patient registries
• OSs based on analyses of large databases
• Case reports and postmarketing surveillance
– Particularly for new medications
• Other OSs
OSs to Assess Harms (II)
• Criteria to select OSs for inclusion
– There are often many more OSs than trials; evaluating a large number of OSs can be impractical when conducting a CER
– Several criteria are commonly used in CERs to screen OSs for inclusion (empirical data are lacking):
· Minimum duration of followup
· Minimum sample size
· Defined threshold for risk of bias
· Study design (cohort and case-control)
· Specific population of interest
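The screening criteria above can be expressed as a simple filter. This is an illustrative sketch only: the thresholds (52 weeks of followup, n = 100) and the dictionary keys are hypothetical choices, and, as the slide notes, empirical data supporting any particular cutoff are lacking.

```python
# Illustrative sketch of screening candidate OSs against the common
# criteria listed above. Thresholds and field names are hypothetical.
def screen_os(studies, min_followup_weeks=52, min_n=100,
              designs=("cohort", "case-control")):
    """Return the candidate OSs meeting minimum followup, minimum
    sample size, and eligible study-design criteria."""
    return [s for s in studies
            if s["followup_weeks"] >= min_followup_weeks
            and s["n"] >= min_n
            and s["design"] in designs]

candidates = [
    {"id": "A", "design": "cohort", "n": 450, "followup_weeks": 104},
    {"id": "B", "design": "case series", "n": 30, "followup_weeks": 12},
    {"id": "C", "design": "case-control", "n": 220, "followup_weeks": 52},
]
included = screen_os(candidates)  # studies A and C meet all criteria
```

In practice such cutoffs are stated prospectively in the CER protocol; the sketch simply makes explicit that each criterion independently excludes studies.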
Key Take-home Points
• There is often insufficient evidence from trials to answer all key questions in CERs
• CERs should consider including OSs as the default strategy
• Reviewers should explicitly state the rationale for including or excluding OSs
• For OSs to assess benefits, reviewers should consider two questions:
1. Are there gaps in trial evidence for the review questions under consideration?
2. Will observational studies provide valid and useful information to address key questions?
• For harms, reviewers should routinely search for and include cohort and case-control studies