Centerphase - Mayo Clinic

Area 4 SHARP Face-to-Face Conference
Phenotyping Team – Centerphase Project
Assessing the Value of
Phenotyping Algorithms
June 30, 2011
Topics
• Centerphase Background
• Project Overview
• Hypothesis
• Research Design
• Results to Date
• Next Steps
Centerphase: Background
• CENTERPHASE SOLUTIONS, INC. is a technology-driven services company formed through a collaboration with Mayo Clinic in 2010
• The goal is to leverage electronic medical records (EMRs) and clinical expertise from academic medical centers and other research sites to address a broad array of healthcare opportunities
• The initial focus is to support enhanced design, planning and execution of clinical trials
• Future areas include comparative effectiveness, pharmacoeconomics, compliance and epidemiological studies
• Centerphase’s role on the Phenotyping Team is to evaluate the effectiveness (cost and time) of using phenotyping algorithms to identify patient cohorts
It’s About Speed AND Accuracy…
Hypothesis
The development of phenotyping algorithms and tools can reduce the time and cost associated with identifying patient cohorts for multiple secondary uses, including clinical trials and care management, while maintaining or enhancing quality.
Approach
1. Choose a use case that can provide valuable insights into a real-world application
2. Develop a phenotyping methodology (“flowchart”) to identify the patient cohort
3. Generate a random sample of patients from the Mayo EMR system based on ICD-9 codes
4. Conduct the algorithm-driven and manual processes in parallel on the sample
5. Compare the time, cost and accuracy of the results from the algorithm-driven process against the manual process
Initial Use Cases
Diabetes is a growing epidemic in this country: 25.8 million people (8.3% of the population) have diabetes, and 1.9 million new cases were diagnosed last year among adults aged 20 years or older (CDC).

[Figure: Diagnosed and Undiagnosed Diabetes. Source: 2005–2008 National Health and Nutrition Examination Survey]

Type II Diabetes Mellitus (T2DM):
• 90–95% of all adult cases of diabetes
• Multi-stage phenotype representing a combined adaptation of:
  – The eMERGE Northwestern T2DM algorithm for clinical trial selection, and
  – The group practice reporting options (GPRO) as defined under NCQA for population management under the Southeast Minnesota Beacon project
Use Cases
Case 1: Care Management
Identify all high-risk patients in a pool of 500 cases

Case 2: Clinical Trial
Identify patients that are good candidates for a study
Phenotype Methodology
Starting population: patients with a T2DM ICD-9 code

eMERGE algorithm for T2DM (identifies T2DM patients):
• Screen 1: Age
• Screen 2: Medications
• Screen 3: Labs & Vitals

Beacon criteria for categorizing patient risk (identifies high-risk, or “RED”, patients):
• Blood glucose: HbA1c ≥ 9
• Cholesterol: LDL > 130
• Blood pressure: Systolic > 160 & Diastolic > 100
If ANY of the most recent values exceeds the allowable level, OR ANY of these elements has not been captured in the measurement period, the patient is classified as high risk (“RED”). The “RED” patients form the resulting patient cohort.

Note: All screens are based on the two-year measurement period 1/1/09 – 12/31/10.
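To make the classification rule concrete, here is a minimal Python sketch of the Beacon “RED” check described above. The patient data structure and field names are assumptions for illustration only; the thresholds and the missing-data rule are the ones listed on this slide.

REQUIRED = ("hba1c", "ldl", "systolic", "diastolic")

def is_red(labs):
    """Classify a confirmed T2DM patient as high risk ('RED') using the most
    recent values from the 1/1/09 - 12/31/10 measurement period."""
    # Any required element not captured in the measurement period => RED
    if any(labs.get(k) is None for k in REQUIRED):
        return True
    # Any most recent value outside the allowable level => RED
    if labs["hba1c"] >= 9:                                   # blood glucose
        return True
    if labs["ldl"] > 130:                                    # cholesterol
        return True
    if labs["systolic"] > 160 and labs["diastolic"] > 100:   # blood pressure
        return True
    return False

# Example: an HbA1c of 9.4 alone is enough to flag the patient as "RED".
print(is_red({"hba1c": 9.4, "ldl": 110, "systolic": 128, "diastolic": 82}))  # True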
Research Design
Randomly generate ONE sample set of patient records from the database, based on T2DM ICD-9 codes recorded at two or more visits during the measurement period. The same sample of patient records is then run through two parallel processes, each applying Screens 1–3:

• Manual process: a study coordinator (SC) conducts a manual review of the patient charts to produce a patient result set, and monitors activity time.
• Algorithm-driven process: a programmer develops and runs an algorithm to query the records and produce a patient result set, and monitors development and run time.

The time, cost and accuracy of the results from the two processes are then compared.
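As an illustration of the sampling step, the sketch below draws a random set of patients with a T2DM ICD-9 code recorded at two or more visits during the measurement period. It assumes visit-level data in a pandas DataFrame with hypothetical columns patient_id, visit_date and icd9_code; the project’s actual T2DM ICD-9 code list and Mayo EMR query are not specified here.

import pandas as pd

# Placeholder code set; the study's actual T2DM ICD-9 code list is not given on the slide.
T2DM_ICD9_CODES = {"250.00", "250.02"}

def draw_sample(visits: pd.DataFrame, n: int, seed: int = 0) -> pd.Series:
    """Randomly sample n patient IDs with a T2DM ICD-9 code at >= 2 visits
    during the 1/1/2009 - 12/31/2010 measurement period."""
    # visit_date is assumed to be datetime64 (or ISO-formatted date strings)
    in_period = visits["visit_date"].between("2009-01-01", "2010-12-31")
    t2dm = visits[in_period & visits["icd9_code"].isin(T2DM_ICD9_CODES)]
    visits_per_patient = t2dm.groupby("patient_id")["visit_date"].nunique()
    eligible = visits_per_patient[visits_per_patient >= 2].index.to_series()
    return eligible.sample(n=n, random_state=seed).reset_index(drop=True)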
Validation and Evaluation Process
Step 1: “Dry Run” (20 charts) – completed
• Review each chart from the manual and algorithm processes to identify any screening errors
• Confirm the approaches are consistent
• Refine procedures as appropriate

Step 2: 500 charts – underway
• Start with 50 charts; review results and adjust if necessary
• Complete the manual reviews
• Collect time, cost and patient result sets
• Conduct the data queries
• Analyze results and evaluate/compare the performance of the two methods
And How Did We Do…
Initial Results
• 50 charts reviewed
• Manual process*
  • Identified 10 “Red” (high-risk) patients
  • Required 11.5 total hours**
• Algorithm-driven process*
  • Identified 8 “Red” patients
    - All 8 were also identified by the manual process
    - Missed 2 patients (false negatives)
  • Required 7.4 hours**
• For the purposes of this presentation, the following analysis extrapolates these results to evaluate the impact on 500 patients. Actual findings will be reported upon completion of the 500 charts.

* Currently evaluating the accuracy of both the manual and algorithm-driven processes
** Includes time for manual validation of all “Red” charts
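For reference, the per-patient arithmetic behind these 50-chart numbers can be reproduced in a few lines of Python. This is illustrative only; the “90% faster” and “80% less costly” figures on the following slides come from the extrapolation to 500 charts, not from this calculation.

# Reported 50-chart results: 10 "Red" in 11.5 hours (manual) vs. 8 "Red" in 7.4 hours (algorithm).
results = {
    "manual":    {"red_found": 10, "hours": 11.5},
    "algorithm": {"red_found": 8,  "hours": 7.4},
}

for method, r in results.items():
    hours_per_red = r["hours"] / r["red_found"]
    print(f"{method}: {hours_per_red:.2f} hours per 'Red' patient found")

# Sensitivity of the algorithm relative to the manual review on this sample:
sensitivity = results["algorithm"]["red_found"] / results["manual"]["red_found"]
print(f"algorithm found {sensitivity:.0%} of the manually identified 'Red' patients")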
Preliminary Analysis: Case 1 - Care Management
Extrapolated to 500 Charts based upon Initial 50 Charts
Note: Costs and hours reflect time for secondary manual validation of all Red charts identified through both processes
[Charts: “Average Time to Find 1 ‘Red’ Patient” and “Average Cost to Find 1 ‘Red’ Patient”, by method. The algorithm-driven process is roughly 90% faster and 80% less costly per “Red” patient than the manual process.]
Preliminary Analysis: Case 2 – Clinical Trials
Extrapolated to 500 Charts based upon Initial 50 Charts
Note: Costs and hours reflect time for secondary manual validation of all Red charts identified through both processes
Preliminary Comparison: Algorithm-Driven vs. Manual Process
• 80% fewer charts to review
• Over 30 hours saved
• Almost 50% cost savings

[Charts: “Charts Reviewed”, “Time to Review (minutes)” and “Cost to Review” for the Manual Method vs. the Algorithm Method across the 500-chart extrapolation.]
Preliminary Conclusions and Next Steps
Initial takeaways:
If the extrapolated results are validated:
• Applying algorithms to identify subsets of patients can save time and be cost effective
• Algorithms can be most effective when searching for larger numbers of patients
• More work is needed to evaluate relative accuracy

Next steps:
• Complete the review of 500 charts
• Document the results in a white paper or manuscript