
Report Cards, P4P, EMRs, and
Disease Management
An Analysis of Managed Care 2.0
The debate about quality has been
corrupted in two ways
• Quality problems have been exaggerated;
this is usually accomplished by confusing
inferior quality with access barriers.
• Discussion of QI has been limited to those
activities which plans can conduct (e.g.,
financial incentives, report cards). QI which
leaves out plans (e.g., public health, ending
the nurse shortage) gets less attention.
Example of exaggeration of the
quality problem
“Extensive research has documented that all
three forms of clinical quality problems –
underuse, overuse, and misuse – are ubiquitous in
American medicine….” (p. 166).
Elise C. Becher and Mark R. Chassin,
“Improving the quality of health care: Who will
lead?” Health Affairs 2001;20(5):164-179, 166.
Becher and Chassin offered this
proof of “ubiquitous” inferior quality
• A 1998 Rand literature review finding 30-40% underuse and 20-30% overuse, and malpractice studies finding 1% misuse.
• But a far more extensive Rand study (2003)
found 46% underuse and 11% overuse.
• Overuse and misuse obviously involve
provider error. But underuse may not.
Rand reported 46% underuse and
11% overuse
But Rand made no attempt to determine what
caused underuse and overuse.
Examples of Rand findings for diabetics:
* 24% had A1c measured every six months;
* 14% had annual eye exam;
* 23% had urine protein checked annually;
* 56% received dietary and exercise
counseling;
* 45% had follow-up visit every six months.
Researchers ignored underuse
until late 1990s
“Most health services research to date has
been directed at identifying and reducing
excessive utilization. Little attention has
been given to underuse of care.”
Two scholars at the RAND Corporation
(R. L. Kravitz and M. Laouri, “Measuring and averting
underuse of necessary cardiac procedures: A summary of
results and future directions,” Joint Commission Journal
on Quality Improvement 1997;23:268-76).
Example of misuse of the 2003 Rand
study (conflating quality and access)
“[D]espite the extensive investment in
developing clinical guidelines, most
clinicians do not routinely integrate them
into their practices. In a recent study of US
adults, Elizabeth McGlynn and colleagues
found that more than half did not receive
the recommended … care….”
Dan Mendelson and Tanisha V. Carino, “Evidence-based medicine in the
United States: De rigueur or dream deferred?” Health Affairs 2005;24:133-136, 134.
Another example of the misuse of
the Rand study
“Research has shown that physicians
incorporate the latest medical evidence into
their treatment decisions 50 percent of the
time (McGlynn et al, 2003).”
US Department of Health and Human
Services, Office of National Coordinator for
Health Information Technology, The Decade of
Health Information Technology: Delivering Consumer-Centric and
Information-Rich Health Care, July 21, 2004, 3.
Another example of misuse of the
Rand study
“Physicians deliver recommended care only about
half of the time….” (citing McGlynn et al.)
Richard Hillestad et al., “Can electronic
medical record systems transform health care?
Potential health benefits, savings, and costs,”
Health Affairs 2005;24:1103-1117, 1110.
This article, also by Rand scholars, was
funded by the computer industry hailing the
benefits of EMRs.
Rand facilitated misunderstanding:
“our results need no risk adjustment”
“We primarily chose measures of processes
as indicators, because they represent the
activities that clinicians control most
directly, [and] because they do not generally
require risk adjustment….”
Elizabeth McGlynn et al., “The quality
of health care delivered to adults in the
United States,” New England Journal of
Medicine 2003;348:2635-2645, 2637.
Outcome and process measures
• Outcome measures reflect changes in
patient health. Examples: mortality rates
after surgery, cholesterol level, and ability
to carry out activities of daily living.
• Process measures reflect how well providers
comply with standards of care. Examples:
percent of children vaccinated, and percent
of diabetics given eye exams.
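To make the distinction concrete, here is a minimal sketch in Python using an invented panel of diabetic patients (the records and thresholds are hypothetical, not drawn from any study cited in these slides): the process measure asks whether the recommended activity happened; the outcome measure asks how the patient's health turned out.

```python
# Hypothetical diabetic panel: did each patient get an eye exam (a process),
# and what was each patient's HbA1c at year end (an outcome)?
patients = [
    {"eye_exam": True,  "hba1c": 6.8},
    {"eye_exam": True,  "hba1c": 9.1},
    {"eye_exam": False, "hba1c": 7.2},
    {"eye_exam": True,  "hba1c": 8.4},
]

# Process measure: percent of diabetics given an eye exam.
process_score = 100 * sum(p["eye_exam"] for p in patients) / len(patients)

# Outcome measure: percent of diabetics with HbA1c at or below 8.0.
outcome_score = 100 * sum(p["hba1c"] <= 8.0 for p in patients) / len(patients)

print(f"Eye exam rate (process measure): {process_score:.0f}%")   # 75%
print(f"HbA1c <= 8.0 (outcome measure):  {outcome_score:.0f}%")   # 50%
```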
Underuse is affected by factors
outside physician control
• No health insurance, or insurance with pre-existing condition exclusions or out-of-pocket payments;
• Other barriers (patient values, low income,
illiteracy, immobility, transportation,
daycare, change in residence or insurance).
Evidence that health insurance
affects underuse by diabetics
“[A]though an estimated 35 percent of those with
health coverage had received a blood glucose test,
a cholesterol test, eye exam, foot exam, and
influenza vaccination, just 14 percent of those
without health coverage received the same set of
services.”
US GAO, Managing Diabetes: Health Plan
Coverage of Services and Supplies, February
2005, 19.
Evidence that patient behavior
affects process measures
• Three-fifths of elderly Medicare beneficiaries who receive
an appropriate recommendation for cholecystectomy fail to
have it done;
• half of insured patients who should, according to a stress
test, have an angiogram do not get it; and
• a fourth of insured patients who, according to their
angiogram, should have angioplasty or bypass surgery
receive neither.
Sources: SM Asch et al., “Measuring underuse of necessary care among
elderly Medicare beneficiaries using inpatient and outpatient claims,” JAMA
2000;284:2325-2333 (cholecystectomy bullet); PP Garg et al., “Understanding
individual and small area variation in the underuse of coronary angiography
following acute myocardial infarction,” Med Care 2002;40:614-626, and M
Laouri et al., “Underuse of coronary angiography: Application of a clinical
method,” Int J Qual Health Care 1997;9:15-22 (angiogram bullet); LL Leape
et al., “Underuse of cardiac procedures: Do women, ethnic minorities, and the
uninsured fail to receive needed revascularization?” Ann Intern Med 1999;130:183-192 (angioplasty/bypass bullet).
Patient refusal has been documented
in studies of …
• warfarin for atrial fibrillation,
• aspirin for heart attack,
• hypertension medication,
• vaccines for influenza and pneumonia,
• blood glucose tests,
• colorectal cancer screens, and
• radiation therapy for cancer.
Sources: SD Weisbord et al., “Is warfarin really underused in patients with
atrial fibrillation?” J Gen Intern Med 2001;16:743-749; J O’Neil, “A small
step for women’s hearts,” New York Times, February 22, 2005, D6; BS
Bloom, “Continuation of initial antihypertensive medication after one year
of therapy,” Clin Ther 1998;20:671-681; PR Dexter et al., “Inpatient
computer-based standing orders vs physician reminders to increase
influenza and pneumococcal vaccination rates: A randomized trial,” JAMA
(Patient refusal cont.)
• Patient refusal accounted for 59 percent of the
underuse of colorectal cancer screens among
Veterans Affairs patients.
• At a 2005 meeting of the American Heart
Association, investigators reported on a study
which found that doctors recommended aspirin on
a daily basis to about 95 percent of women who
had suffered heart attacks and stroke, but that only
54 percent of the heart-attack patients and 43
percent of the stroke patients complied with the
recommendation.
Sources: Walter et al., op cit. (colorectal bullet); O’Neil, op cit. (aspirin
bullet)
Thus, current research permits us to
say…
• Overuse occurs 11% of the time and
• Misuse (malpractice) occurs <1% of the
time.
• Underuse due to provider failure occurs
some unknown percent of the time.
• These figures reveal serious problems, but
they do not add up to “ubiquitous.”
Exaggerating the problem of inferior
providers serves insurance industry
• Insurance industry has used the picture of
inept providers to promote managed care.
• QI that does not assume inept providers
and/or which insurance companies cannot
do – that is, which does not fall under the
rubric of “managed care” – gets much less
attention.
Managed care is not the only way to
improve quality
Other methods with more substantial evidence to support them include:
• Ending the nurse shortage;
• ending waiting times for emergency services;
• insuring the uninsured and under-insured;
• conducting public education campaigns re appropriate medical care and the effects of unhealthy behavior;
• rolling back the excesses of managed care;
• measuring and sharing performance results privately with providers;
• conducting controlled trials and other forms of traditional research to find new treatments and to evaluate the efficacy of existing treatments.
Managed care has gone through two
stages
Managed Care 1.0 relied on
* financial incentives (capitation and
bonuses), and
* utilization review and drug formularies.
Managed Care 2.0 relies on
* report cards, which facilitate P4P, and
* disease management.
Definition of terms
• Report cards: Any document purporting to
measure the quality of care given by
particular providers which is used to reward
or punish providers.
• Pay for performance: Any method of paying
providers based on grades on report cards.
(Definitions cont.)
Report card advocates propose that
providers be rewarded and punished by
* market forces (plans, employers, and
patients avoid low-scoring providers and
patronize high-scoring providers), and/or
* “pay for performance” (insurers pay
low scorers less, high scorers more).
DMAA’s definition of DM
Activities conducted by third parties that:
* Identify people with certain diseases by
examining their medical records or claims;
* Rely on evidence-based practice guidelines;
* Educate patients (may include surveillance);
* Measure processes and outcomes and report the
results to patients and providers.
Source: Disease Management Association of America
http://www.dmaa.org/definition.html, accessed February 9, 2006.
Another definition of DM
“‘Disease management’ is the latest
catchphrase in the ever-evolving American
health care spectacle. … [D]isease
management is ‘a systematic, population-based approach to identify persons at risk,
intervene with specific programs of care,
and measure clinical and other outcomes.’”
Thomas Bodenheimer, “Disease management – Promises and pitfalls,”
New Eng J Med 1999;340:1202-1205, 1202.
Report cards are now advocated
simultaneously with …
• “Interoperable electronic medical records”
(EMRs) (aka, regional and national health
information networks) and
• “Pay-for-performance” methods of
reimbursement in order to reward high
scorers and punish low scorers.
Interoperable EMRs are advocated
in order …
• To facilitate collection of medical records
on all Minnesotans/Americans all the time,
and
• To “risk adjust” scores on report cards.
“Risk adjustment” refers to the process
of adjusting scores on report cards to reflect
differences in patient health and other
factors outside of provider control.
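A minimal sketch of the general idea, with made-up numbers (this illustrates an observed-to-expected comparison only; it is not the method actually used by HCFA, New York, or any report card discussed in these slides):

```python
# Each provider's observed deaths are compared with the deaths "expected"
# for its own patient mix, where each patient's expected risk comes from a
# model of factors outside provider control (age, comorbidity, severity, etc.).
def observed_to_expected(observed_deaths, patient_risks):
    """Observed/expected ratio; 1.0 means results match what this
    provider's patient mix would predict."""
    expected_deaths = sum(patient_risks)
    return observed_deaths / expected_deaths

# Hospital A treats 40 sicker patients; Hospital B treats 40 healthier ones.
hospital_a = observed_to_expected(observed_deaths=6, patient_risks=[0.10, 0.20, 0.15, 0.25] * 10)
hospital_b = observed_to_expected(observed_deaths=4, patient_risks=[0.02, 0.03, 0.04, 0.05] * 10)

# Unadjusted, A looks worse (15% vs 10% mortality); adjusted, A looks better.
print(f"Hospital A observed/expected: {hospital_a:.2f}")  # about 0.86
print(f"Hospital B observed/expected: {hospital_b:.2f}")  # about 2.86
```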
In sum, Managed Care 2.0 means …
(1) Report cards, which require
• interoperable EMRs and
• pay-for-performance methods of
reimbursement; and
(2) Disease management.
Managed Care 2.0 appeared in the
wake of the failure of MC 1.0
“Events of the past year demonstrate beyond a
doubt that managed care has failed – and failed
dismally. The greatest single ethical crisis facing
American health care as we move into the new year is
what to do about it.”
Art Caplan, director of the Center for
Bioethics at the University of Pennsylvania ("In 2001,
managed care our No. 1 health crisis," MSNBC, December 21, 2001
http://www.msnbc.com/news/671464.asp, accessed December 23, 2001).
(Failure of MC 1.0 cont.)
“Managed care is basically over. People hate it,
and it's no longer controlling costs. Health-care
inflation is now back in the double digits. So if it's
not saving money, then why should we have it?
But like an unembalmed corpse decomposing,
dismantling managed care is going to be very
messy and very smelly, and take awhile.”
George Lundberg, former editor of JAMA who as recently as 1996 had co-
authored an article defending managed care (Linda Marsa, “Former JAMA editor laments the state of
medical care,” Los Angeles Times, March 26, 2001, http://www.latimes.com/print/health/20010326/t000026016.html, accessed March 28, 2001).
MUHCC’s position on report cards
and pay-for-performance
• Quality: Report cards and P4P have not been
shown to improve quality, and some research
indicates they harm patients.
• Cost: Report cards and P4P have not been shown
to save money, and may raise costs.
• Small-scale report card and P4P experiments should be conducted; report cards and P4P should not be implemented on a wide scale.
MUHCC’s position on EMRs
• Quality: EMRs may enhance quality in some
clinics and hospitals. Evidence does not support
the claim that making EMRs interoperable will
improve quality.
• Cost: Evidence does not support the claim that
EMRs, with or without interoperability, will
reduce cost.
• Providers should not be required by government,
or given financial incentives financed by taxes, to
buy EMR hardware and software.
MUHCC’s position on disease
management
• Quality: DM has been shown to improve quality.
• Cost: The evidence does not warrant the claim that
DM will save money.
• Because DM can improve quality, research on
effective means of DM should continue, and
effective DM programs should be covered by
insurance or delivered through public health
agencies.
Report cards
The following slides examine the claims
made for report cards, pay-for-performance,
and electronic medical records.
Governor claims report cards will
improve quality, reduce costs
“[R]ewarding providers for improved
health outcomes and encouraging
patients to use the best providers will
not only help contain costs, it will
improve the quality of care,’ Pawlenty
said.” (“Governor Pawlenty unveils ‘Smart Buy’ Alliance to slow health
care costs and improve quality,” press release, November 29, 2004,
http://www.governor.state.mn.us, accessed November 30, 2004).
The Legislature claims report
cards improve quality, cut costs
Minnesota Statutes Sec. 62J.43, signed by
Governor Pawlenty on May 29, 2004, says:
“To improve quality and reduce health care costs,
state agencies shall encourage the adoption of best
practice guidelines…. The commissioner of health
shall facilitate access to … quality of care
measurement information to providers, purchasers,
and consumers by … disseminating information
… on adherence to best practices care by
physicians and other health care providers….”
Governor-Legislature claims rely
on three assumptions
(1) Report cards improve quality more
often than they damage quality;
(2) Quality improvements inevitably lead
to cost reductions;
(3) The cost reductions achieved by
report cards will outweigh the cost of
producing report cards.
There is little evidence that report
cards improve quality
“Despite … extensive adoption of quality
measurement and reporting, little research
examines the effect of public reporting on the
delivery of health care, and even less examines
how report cards may improve care. …[T]he
potential … negative consequences of public
reporting are largely unexplored.”
Rachel M. Werner and David A. Asch, “The unintended consequences of
publicly reporting quality information,” JAMA 2005;293:1239-44, 39.
Report cards could damage
quality three ways
(1) By being inaccurate (steering patients to
inferior doctors);
(2) By inducing doctors to reject sicker
patients;
(3) By inducing doctors to shift resources
from unmeasured to measured patients.
Report cards can be accurate for
some things, e.g., vacuum cleaners
Consumer Reports’ report card on vacuum
cleaners:
* Offers grades on 38 vacuum cleaners on a five-point scale (from excellent to poor).
* 3 quality measures:
- cleaning (carpet, bare floors, w/ tools)
- other results (ease of use, noise, emissions)
- features (bag, brush, manual pile adj, weight)
* Kenmore (Sears) got 79 points, Sanyo
Performax and Panasonic Fold N’Go got 53
But patients are not floors, and
doctors are not vacuum cleaners
• Comparisons of quality are not useful if the
playing field is not level, that is, if the
conditions under which quality is measured
are not the same.
• Keeping the playing field level is much
easier to do while measuring the quality of
vacuum cleaners than it is while measuring
doctors and hospitals.
Many factors outside provider
control influence health outcomes
Factors that influence health outcomes
that are outside of provider control include:
* Patient health status prior to treatment;
* Patient insurance status (presence of
deductibles and co-pays; no coverage for
service being measured; no coverage at all)
* Patient income, education and values.
Failure to measure health status
affects scores
The next slide illustrates how scores on
hospitals can be distorted when differences
in patient health are measured only crudely.
It shows that when “stage of illness” at
admission was ignored, 18 of 65 hospital
units scored above or below average, but
when it was factored in, only 6 scored
above or below average.
Hospital mortality rates vary
depending on “stage of illness”
Hospital mortality rates for 13 hospitals and five
conditions* under HCFA and Green-Wintfeld Models**
Actual mortality rate        HCFA Model    Green-Wintfeld Model
Above expected range              8                  2
Within expected range            47                 59
Below expected range             10                  4
Total                            65                 65
*Low-risk heart disease, severe acute heart disease, cancer, stroke, and
pulmonary disease
** HCFA adjusted mortality rates for only a few of the factors that could have
affected patient mortality that were outside hospital control (risk adjustment
included age, sex, diagnoses other than the principal diagnosis, number of ...).
Income affects preventive
services for insured patients*
“[L]ower SES [socioeconomic status] patients had
lower compliance with Pap smears,
mammograms, and diabetic eye exams, and were
less likely to have a referral or make any office
visit…. These income effects are not confined to
the poorest patients but span the entire
socioeconomic spectrum.”
Peter Franks et al., “Effects of patients and physician practice
socioeconomic status on the health care of privately insured managed care
patients,” Medical Care 2003;41:842-852, 842
* Patients were all insured by the same plan, described as “the largest local
managed care organization” in the ten-county area surrounding Rochester, New York.
“Quality-of-care” scores for diabetics
vary depending on measure of quality
(1) LDL cholesterol under 130: 73%
(2) Measure (1) + doctor has responded to high reading + patient has contraindications to statins: 87%
(3) Measures (1) + (2) + other factors*: 90%
* “Other factors” included: patient refuses to take lipid-lowering
medications; lipid management low priority or difficult to address; no
primary care visit after high reading; has active care elsewhere; other
interventions tried within six months of high reading (diet, exercise, or
other lipid-lowering drug).
Source: Eve Kerr et al., “Building a better quality measure: Are some patients
with ‘poor quality’ actually getting good care?” Medical Care 2003;41:1173-1182.
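A minimal sketch of the general mechanism behind the three scores above, using invented patient records (the tiers and numbers are illustrative only, not Kerr et al.'s data or their exact measure definitions): the apparent quality score rises as the measure is refined to credit appropriate physician responses and to drop patients with legitimate reasons from the denominator.

```python
# Hypothetical diabetic patients. ldl: last LDL value; responded: doctor acted
# on a high reading; excluded: a legitimate reason (e.g., contraindication)
# that the guideline target should not apply.
patients = [
    {"ldl": 110, "responded": False, "excluded": False},
    {"ldl": 150, "responded": True,  "excluded": False},  # therapy was intensified
    {"ldl": 160, "responded": False, "excluded": True},   # statin contraindicated
    {"ldl": 120, "responded": False, "excluded": False},
    {"ldl": 145, "responded": False, "excluded": False},  # genuine lapse in care
]

def pct(numerator, denominator):
    return 100 * numerator / denominator

# Measure 1: percent with LDL under 130 (raw threshold only).
m1 = pct(sum(p["ldl"] < 130 for p in patients), len(patients))

# Measure 2: also credit cases where the doctor responded to a high reading.
m2 = pct(sum(p["ldl"] < 130 or p["responded"] for p in patients), len(patients))

# Measure 3: additionally remove legitimately excluded patients from the denominator.
eligible = [p for p in patients if not p["excluded"]]
m3 = pct(sum(p["ldl"] < 130 or p["responded"] for p in eligible), len(eligible))

print(f"Measure 1: {m1:.0f}%  Measure 2: {m2:.0f}%  Measure 3: {m3:.0f}%")
# 40%, 60%, 75% with these invented records (Kerr et al. reported 73%, 87%, 90%).
```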
Experts say risk adjustment of
report card grades is essential
“The interpretation of [medical] outcomes is further
complicated by the need to make adjustments for
comorbidity and the intensity and state of the patient’s
illness – a far from trivial undertaking.” Paul Ellwood
(“Outcomes management: A technology of patient experience,” New England
Journal of Medicine 1988;318:1549-1556).
“[T]he importance of co-morbidity must be stressed.... If
co-morbidity is not considered, there will always be the
potential for individual providers … to be unjustly accused
of poor quality because of patient selection….” Richard W.
Asinger, MD (“Constructive use of clinical databases,” The Medical
Journal of Allina, 1996(1):31-34, 32).
(Experts say risk adjustment is
essential, cont.)
“Case-mix adjustments are made in almost all profile
analyses to account for the differences in provider
performance attributable solely to differences in the
populations served” (p. 764). “Risk adjustments contribute
vitally to reducing unfair profile evaluations” (p. 765).
Cindy L. Christiansen and Carl N. Morris, “Improving the statistical approach
to health care provider profiling,” Ann Intern Med 1997;127:764-768.
“Accurate risk adjustment is necessary for observational
and health services research, including comparison of
outcomes of different treatments and quality assessment.”
Jay F. Piccirillo et al., “Prognostic importance of comorbidity in a hospital-based cancer registry,” JAMA 2004;291:2441-2447.
(Experts say risk adjustment is
essential, cont.)
“We found that patient characteristics were 3-15
times more important than hospital
characteristics in predicting mortality after
simple surgery, so small errors in risk
adjustment may loom large compared to
hospital differences.”
Jeffrey H. Silber and Paul R. Rosenbaum, “A spurious correlation
between hospital mortality and complication rates: The importance of
severity adjustment,” Medical Care 1997;35:OS77-OS92, Supplement,
OS87.
Unadjusted report cards damage
access for sicker patients
“Performance-based contracting gave providers of
substance abuse treatment financial incentives to
treat less severe OSA [Office of Substance Abuse]
clients in order to improve their performance
outcomes. Fewer OSA clients with the greatest
severity were treated in outpatient programs with
the implementation of PBC [performance-based
contracting].”
Yujing Shen, “Selection incentives in a performance-based contracting
system,” Health Services Research 2003;38:535-552, 535.
Even risk-adjusted report cards
can damage access for sicker
diabetics
“[We found that] if those physicians with the worst profiles . . . for
1991 managed to discourage the patients with the top 5% of HbA1c
levels (representing only 1-3 patients per physician) from returning to
their panel, they would in most cases achieve a panel HbA1c profile in
1992 that would be substantially better than average. . . . Thus,
the patient’s HbA1c levels from the previous year proved a far better
predictor of what a patient’s HbA1c level would be in the current year,
better than . . . our case-mix adjusters. Manipulating their patient pool,
based on a patient’s prior year HbA1c level, is the easiest way for
physicians to have a substantial improvement in their profile”
Timothy P. Hofer et al., “The unreliability of individual physician
‘report cards’ for assessing the costs and quality of chronic disease,” JAMA,
1999;281:2098-2105, 2103; emphasis added.
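A minimal sketch of the gaming mechanism Hofer et al. describe, with invented HbA1c values (not their data): shedding just a physician's two highest-HbA1c patients moves the panel profile more than most genuine quality improvement could.

```python
# Hypothetical panel of year-end HbA1c values for one physician's diabetic patients.
panel = [6.9, 7.2, 7.4, 7.5, 7.8, 8.0, 8.1, 8.3, 8.6, 8.8,
         9.0, 9.2, 9.5, 9.8, 10.4, 11.2, 12.1, 13.0, 13.5, 14.2]

def mean(values):
    return sum(values) / len(values)

before = mean(panel)                 # profile with the full panel
trimmed = sorted(panel)[:-2]         # "discourage" the 2 highest-HbA1c patients (2 of 20)
after = mean(trimmed)

print(f"Panel mean HbA1c before: {before:.2f}")  # roughly 9.5
print(f"Panel mean HbA1c after:  {after:.2f}")   # roughly 9.0
# The profile "improves" even though no patient's care got any better.
```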
New York’s heart surgery report
card
• First physician-specific report card
• Grades performance of hospitals and
surgeons on heart surgery using 30-day
mortality as quality measure
• Considered most accurate report card in
America
• Has been more carefully examined than any
other report card
New York heart surgery report
card is the gold standard
“New York State’s measurement and publication
of coronary artery bypass graft (CABG) surgery
mortality rates has emerged as a model in the
campaign for useful performance data…. The
reality is that these measures of performance are
… the best available, and that substantial
improvements are not likely for some years.”
Stephen F. Jencks, “Clinical performance measurement -- a hard sell,”
JAMA 2000;283:2015-2016, 2015, 2016.
NY heart surgery report card is
rigorously adjusted
• 72 risk factors are adjusted for
• They include:
• number of coronary arteries occluded and degree of
occlusion
• previous heart attack
• hemodynamic state just prior to surgery (ability to
maintain blood pressure)
• chronic obstructive pulmonary disease
• kidney failure
• smoking history (last two weeks, last year)
NY report card is expensive
The New York Department of Health pays for:
– “five full-time equivalent staff maintaining the
database...” and
– “a utilization review agent … to audit a sample of 50
cases from half the hospitals each year.”
The three dozen heart surgery hospitals in NY pay for:
“data coordinators to collect and maintain their databases;
most hospitals have a full-time coordinator dedicated to
this task.”
Source: Edward L. Hannan et al., “Public release of cardiac surgery outcomes data
in New York: What do New York state cardiologists think of it?” Am Heart J
1997;134:55-61, 62)
Results of 1998-2000 NY report
card on 34 CABG hospitals
• Statewide 30-day mortality average: 2.32%
• Three hospitals had higher-than-expected
rates
• Two hospitals had lower-than-expected
rates
• 29 hospitals had expected rates
Source: New York Department of Health, Adult Cardiac Surgery in New York
State, 1998-2000, http://www.health.state.ny.us/nysdoh/heart/pdf/1998_2000_cabg.pdf, accessed January 16, 2005.
Results of 2000-2002 NY report
card on 36 CABG hospitals
• Statewide 30-day mortality average: 2.27%
• Three hospitals had higher-than-expected
rates
• Three hospitals had lower-than-expected
rates
• 30 hospitals had expected rates
Source: New York Department of Health, Adult Cardiac Surgery in New York
State, 2000-2002.
Outliers on 1998-2000 and 2000-2002 NY hospital CABG reports

High mortality rates
1998-2000: Albany Med Ctr (4.08%), Ellis Hosp (6.13%), Mount Sinai (6.01%)
2000-2002: Buffalo General (4.67%), Mount Sinai (4.86%), NY Hospitals Ctr (4.31%)

Low mortality rates
1998-2000: Lenox Hill (1.15%), Winthrop U Hosp (1.10%)
2000-2002: St. Josephs (0.90%), Staten Island (0.82%), Vassar Brothers (0.00%)
Rates for 1998-2000 NY hospital
outliers two years later
                  1998-2000    2000-2002
Albany Med Ctr      4.08%        2.83%
Ellis Hosp          6.13%        3.29%
Mount Sinai         6.01%        4.86%
Lenox Hill          1.15%        2.02%
Winthrop U Hosp     1.10%        2.78%
Change in outlier status among 156
surgeons, 1998-2000 to 2000-2002 report
• 156 surgeons met criteria for grading in
1998-2000 report*; 21 (13%) were outliers
• 14 had higher-than-expected mortality rates
• 7 had lower-than-expected mortality rates
• All 21 outliers were graded in 2000-2002
report, but in that period only 6 of these 21
were outliers
* Criteria were either 200 operations during this period, or at least one
operation in each of 1998, 1999, and 2000.
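The churn in outlier status shown above is roughly what one would expect when outlier labels are driven heavily by chance. Below is a minimal simulation sketch under purely illustrative assumptions (150 surgeons with identical true mortality of 2.3%, about 300 cases each, and a crude two-standard-error outlier rule; this is not the state's actual statistical method):

```python
import random

random.seed(0)

TRUE_RATE = 0.023        # assume every surgeon has the same true mortality
N_SURGEONS = 150
CASES_PER_PERIOD = 300

def observed_rates():
    """Simulated observed mortality rates for one reporting period."""
    return [sum(random.random() < TRUE_RATE for _ in range(CASES_PER_PERIOD)) / CASES_PER_PERIOD
            for _ in range(N_SURGEONS)]

def outliers(rates):
    """Crude rule: flag surgeons whose observed rate is more than ~2 SEs from the true rate."""
    se = (TRUE_RATE * (1 - TRUE_RATE) / CASES_PER_PERIOD) ** 0.5
    return {i for i, r in enumerate(rates) if abs(r - TRUE_RATE) > 2 * se}

period1 = outliers(observed_rates())
period2 = outliers(observed_rates())

print(f"'Outliers' in period 1: {len(period1)}")
print(f"Still 'outliers' in period 2: {len(period1 & period2)}")
# Even with identical true quality, a handful of surgeons flag each period,
# and few or none of them flag again in the next period.
```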
Study suggested New York report
card improves quality
Odds of death from CABG surgery in
NY relative to rest of US, 1994-1999: 0.67
Source: Edward L. Hannan et al., “Provider profiling and quality improvement
efforts in coronary artery bypass graft surgery,” Medical Care 2003;41:1164-1172, Table 4, 1170 (subjects were Medicare beneficiaries; risk adjustment
was done with 12 adjusters from administrative data)
But the study in the preceding slide
was poorly done
The study in the preceding slide is not
credible because it examined mortality rates
only among New Yorkers who underwent
CABG surgery. The study did not attempt to
determine if NY surgeons were refusing to
perform surgery on sicker heart patients.
The next several slides indicate that this is what
happened.
Recent studies find NY report
card damages health overall
“[O]ur results show that report cards [on heart
surgeons] led to increased expenditures for both
healthy and sick patients, marginal health benefits
for healthy patients, and major adverse health
consequences for sicker patients. Thus, we
conclude that report cards reduced our measure of
welfare over the time period of our study” (p.
577). “[M]ore severely ill … patients experienced
dramatically worsened health outcomes” (p. 583).
David Dranove et al., “Is more information better? The effects of
‘report cards’ on health care providers,” Journal of Political Economy
2003;111:555-588.
Reason: NY report card induces
surgeons to reject sicker patients
“[M]andatory reporting mechanisms inevitably
give providers the incentive to decline to treat
more difficult and complicated patients” (p. 581).
“Report cards led to a decline in the illness
severity of patients receiving CABG in New York
… relative to patients in states without report
cards” (p. 583).
David Dranove et al., “Is more information better? The effects of
‘report cards’ on health care providers,” Journal of Political Economy
2003;111:555-588.
(NY report card induces surgeons
to reject sicker patients, cont.)
“The [December 19, 1991] Newsday article stated that
several [NY] surgeons warned that some surgeons were
turning down difficult cases to protect their statistics” (p.
410). “[A]n article appeared in the New York Times entitled
‘Faint hearts.’ As fate would have it, a woman was turned
down for surgery because she had a fresh, large myocardial
infarction. Her daughter was a reporter for the New York
Times. After great difficulty, the daughter eventually found
a surgeon who would operate on her mother” (p. 411).
Bradley J. Harlan, “Statewide reporting of coronary artery surgery
results: A view from California,” J Thorac Cardiovasc Surg 2001;121(3):409-417.
(NY report card induces surgeons
to reject sicker patients, cont.)
“The incentive to refuse treatment for high-risk patients has created a kind of spiritual
crisis in the field of cardiac surgery. Heart
surgeons … are shrinking from taking on
the toughest cases because of statistics.”
Sandeep Jauhar (“When doctors slam
the door: Under the current system, a doctor’s
reputation may depend on his or her willingness to
turn away a dying man,” New York Times Magazine, March 16,
2003, 30, 34).
Even the best surgeons don’t trust
the NY report card
“’[T]here is nothing that separates me from the
rest of the people on the list,’ Dr. [Jeffrey] Gold
said…. And even though Dr. Gold is ranked at the
top of the [1994] report, he has qualms about it.
‘I’m concerned about the predictability of it,’ he
said. ‘I certainly would not use it as the sole way
of selecting an institution or a surgeon.’”
Elisabeth Bumiller (“Death rankings shake
New York cardiac surgeons,” New York Times,
September 6, 1995, A1, B11)
New York’s angioplasty report
card is having a similar effect
“An overwhelming majority of cardiologists
[79%] in New York say that, in certain instances,
they do not operate on patients who might benefit
from heart surgery, because they are worried about
hurting their rankings on physician scorecards
issued by the state, according to a survey released
today.”
Marc Santora, “Cardiologists say rankings sway choices on surgery,”
New York Times, January 11, 2005, A18.
Report cards cause resource
shifts to services being graded
“Although paying for high quality is an innovation with
obvious potential benefits, it may also lead to the
misallocation of … resources…. The medical director at
one of California’s largest managed-care organizations
described the problem succinctly: 'Everybody's doing what
they are required to do in responding to the quality
measurements that are being used. Every ounce of energy
is being diverted to responding to these; not one ounce of
energy is going to any other aspect of quality.”
Lawrence Casalino, “The unintended consequences of measuring
quality on the quality of medical care," New England Journal of Medicine
1999;341:1147-1150, 1147.
(NY’s angioplasty report card, cont.)
“[T]he patient population in the Michigan
[angioplasty] registry had a significantly higher
frequency of comorbidities…. [A] case selection
bias driven by the fear of public reporting of
higher mortality rates in New York was one
possible explanation ….”
Mauro Moscucci et al., “Public reporting and case selection for
percutaneous coronary interventions,” J Am Coll Cardiology 2005;45:1759-65.
(Report cards cause resource
shift, cont.)
“[I]f providers face a number of tasks and
resources are limited, then effort will be allocated
toward those tasks that are explicitly rewarded,
taking resources away from other activities.
Inevitably, ... the dimensions of care that will
receive the most attention will be those that are
most easily measured and not necessarily those
that are most valued.”
Meredith B. Rosenthal et al., “Paying for quality: Providers’ incentives
for quality improvement,” Health Affairs 2004;23(2):127-141, 139.
(Report cards cause resource shift,
cont.)
“From the present study [which found HMOs were less
likely to detect colorectal cancer early] and the earlier
breast cancer study … [which found HMOs were more
likely to detect breast cancer early] one can infer that the
incentives of health plans are to allocate resources to those
activities upon which they are measured…. This suggests
that preventive screening for conditions such as colorectal
cancer that are not required to be in a report card (such as
HEDIS) are more likely to be neglected.”
Anna Lee-Feldstein et al., “Health care factors related to stage at diagnosis and survival
among Medicare patients with colorectal cancer,” Med Care 2002;40:362-374, 374.
Example of a shift in resources
triggered by report cards
“… [I]t may seem that an optimal performance
standard would be to maximize the percentage of
patients who have an HbA1c <7.0%. Such a
standard may divert a … health system’s attention
from treating poorly controlled patients to
disproportionately focusing on the larger numbers
of patients who are slightly above this cutoff.”
Rodney A. Hayward et al., “Quality improvement initiatives,” Diabetes
Care 2004;27(Suppl. 2):B54-B60, B56.
Reports on number of procedures
do not pose the risks that report cards do
For a few procedures, evidence exists that
quality is higher at hospitals that do high volumes
of those procedures. Reports on the number of
procedures do not create the three report card
risks:
(1) Inaccuracy
(2) Doctors avoiding sicker patients
(3) Doctors shifting resources from unmeasured to measured services
“Practice makes perfect” rule has
been found for …
* Treatment for AIDS (strong correlation)
* Pancreatic cancer surgery (strong)
* Esophageal cancer surgery (strong)
* Abdominal aortic aneurysm surgery (strong)
* Congenital heart disease surgery (strong)
* Coronary-artery bypass surgery (weak)
* Coronary angioplasty (weak correlation)
* Carotid endarterectomy (weak)
* Other types of surgery for cancer (weak)
* Some orthopedic procedures (weak)
* Treatment of low-birth-weight and premature babies (weak)
Source: Kenneth W. Kizer, “The volume-outcome conundrum,” New England
Journal of Medicine 2003;349:2159-2161.
Review
We have reviewed the first of three
assumptions that have to be true in order for report
cards to work: that report cards improve quality of
care. Report cards can damage quality three ways:
(1) By being inaccurate;
(2) by inducing providers to refuse to treat sicker
patients (regardless of how accurate the report
card is); and
(3) by inducing providers and plans to shift
resources away from unmeasured services.
We turn now to the last two
assumptions about report cards:
(2) Quality improvements inevitably lead
to cost reductions;
(3) The cost reductions achieved by
report cards outweigh the cost of
producing report cards.
Quality improvement does not
inevitably lead to lower costs
“[A]lthough it's a widely held belief that
quality health care leads to lower costs,
insurers have no data that directly
measures return on investment of their
P4P [pay-for-performance] programs.”
Healthleaders (Paula DeWitt, “The new incentive plan,”
March 2004, http://www.healthleaders.com/magazine/cover.php?
contentid=53006, accessed April 10, 2004)
(Quality improvement does not lead
inevitably to lower costs, cont.)
“Results of this study show that it is possible to
increase SFDs [symptom free days] in children
[with asthma]…. However, the improvements
were realized with an increase in the costs
associated with asthma care.”
Archives of Pediatrics and Adolescent
Medicine (S.D. Sullivan et al., “A multisite randomized trial of the
effects of physician education and organizational change in chronic
asthma care: Cost-effectiveness analysis of the Pediatric Asthma Care
Patient Outcomes Research Team II (PAC-PORT II),” 2005;159:428-434, 428).
(Quality improvement and costs,
cont.)
“Right from the start, it has been one of the
great illusions … that quality and cost go in
opposite directions. There remains very
little evidence of that.”
Donald Berwick, President and CEO,
Institute for Healthcare Improvement (“’A
deficiency of will and ambition’: A conversation with Donald Berwick,”
Health Affairs, Web Exclusive, January-June 2005, W5-1-W5-9, 7)
Report card infrastructure will be
expensive
“To achieve an NHIN (National Health
Information Network) would cost $156 billion in
capital investment over 5 years and $48 billion
annual operating costs [or a total of about $400
billion over 5 years, or 2% of total spending].”
Note: This is infrastructure only. The cost of
grading thousands of services provided by
hundreds of thousands of providers is extra.
Rainu Kaushal et al., “The costs of a National Health
Information Network,” Ann Int Med 2005;143:165-73, 165
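The bracketed total in the quote can be checked directly from the two figures Kaushal et al. report; the share-of-spending comparison is not recomputed here because it depends on which national spending baseline is assumed.

```python
# Figures quoted above from Kaushal et al. (billions of dollars).
capital_over_5_years = 156
annual_operating = 48
years = 5

total_5_year_cost = capital_over_5_years + annual_operating * years
print(f"Estimated 5-year NHIN cost: ${total_5_year_cost} billion")  # 396, i.e. roughly $400 billion
```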
Report cards on providers suffer
defects similar to those on schools
No Child Left Behind report cards on
schools have been criticized for the same
reasons provider report cards have:
* They don’t adjust for factors outside
school control and are therefore inaccurate;
* they shift resources away from
unmeasured services; and
* they are costly.
Bipartisan group concluded NCLB
impedes quality improvement
“The underlying problem is that all schools
… are measured equally, regardless of
differences in socioeconomic factors … or
unique challenges the … schools face” (p. 15).
“[S]chools are reluctant to accept
transfers because they fear it would increase
their chance of [failing]” (p. 22).
National Conference of State Legislatures, Task Force on NCLB,
Final Report, February 2005.
Governor assumes Alliance can
measure quality accurately
“The Smart Buy Alliance will adopt uniform methods of
measuring quality of care … and will purchase health care
based upon those measurements…. Consumers and
purchasers cannot make good … decisions in the
marketplace without access to … easy-to-understand
information about health care ... quality. The … Alliance
will require health plans and providers to participate in
efforts to make such information available. The
Community Measurement Project … [is] an example of the
type of information to be made available.” (“Governor Pawlenty
unveils ‘Smart Buy’ Alliance to slow health care costs and improve quality,”
press release, November 29, 2004, http://www.governor.state.mn.us, accessed
November 30, 2004).
Diabetes quality measures,
Community Measurement Project
None of these measures is risk-adjusted
(1) % patients with HbA1c less than or equal to 8.0 (and 7.0):
OUTCOME*
(2) % patients with LDL-cholesterol less than 130 (and 100): OUTCOME
(3) % patients with blood pressure less than 130/85 (and 130/80):
OUTCOME
(4) % patients over age 40 taking aspirin: PROCESS*
(5) % patients known to be nonsmokers: OUTCOME
(6) % patients with annual screening for kidney and eye complications:
PROCESS
(7) A composite of the first five measures
* An “outcome” measure is one that measures the effect of treatment on
patient health. Survival after surgery is an example of an outcome measure. So
HMO advocates have called for
report cards for 35 years
“A performance reporting system of proven
reliability would be developed and installed to
provide both individual consumers and quantity
buyers (e.g., HEW) with accurate information on
the comparative performance of alternative
sources of health care. (HMOs would be required
to make such information available.)”
Paul M. Ellwood et al. (“Health maintenance strategy,”
Medical Care 1971;9:291-298, 297).
(HMO advocates have called for
report cards, cont.)
“The development of an effective system of collecting and
disseminating data on quality and outcomes is an essential
component of a health care reform strategy. Such a strategy
will allow the monitoring of the impact of cost
containment initiatives on health care quality. . . . The
Commission and the Commissioner of Health will work
collaboratively to collect and disseminate comparative data
on the quality of services provided by providers, health
plans, and ISNs in order to facilitate competition and
continuously improve systemwide health care quality.”
Minnesota Health Care Commission (Containing Costs in
Minnesota’s Health Care System: A Report to Governor Arne H. Carlson and
the Minnesota Legislature, January 25, 1993, 28).
High-deductible advocates also call
for report cards
“Consumer-directed health care supposes a new
formulation – one driven by consumers with cash-in-hand, demanding to know for themselves who
is the best urologist in town, … how do I get the
most value for the money I’m spending?
Information systems to support this movement
will grow exponentially. But the information ... is
not an end to itself. The real revolution will come
when health-care consumers use that information
to reward higher quality and punish the
mediocre….”
Greg Scandlen, Galen Institute (“How consumer-driven health care evolves in a dynamic market,” Health Services Research 2004;39:1113-1118, 1117)
But accurate report cards are
almost nonexistent
"[W]e have no assurances that the competition of
[health] plans . . . will reward those who deliver
higher quality care. . . . [P]urchasers and
consumers have not, so far, rewarded or punished
plans based on quality. . . . If purchasers and
consumers had tools that allowed them to buy on
quality, ... the thinking that lay behind the original
HMO movement may still play out"
Paul M. Ellwood, Jr. and George D. Lundberg,
("Managed Care: A Work in Progress," Journal of the American Medical
Association 1996;276:1083-1086, 1085).
(Accurate report cards are almost
nonexistent, cont.)
“[P]hysician profiles are not and may
never be ready for public
consumption.”
Andrew Bindman, “Can physician profiles be trusted?” JAMA 1999;281:2142-2143, 2143.
(Accurate report cards are almost
nonexistent, cont.)
“Hospital profiling remains an
unproven strategy for improving
outcomes of care.”
David W. Baker et al., “Mortality trends during a program that publicly
reported hospital performance,” Medical Care 2002;40:879-90, 879.
Quality can be improved without
report cards
• The Cooperative Cardiovascular Project induced
large improvements in quality of care of heart
attack patients in four pilot states by giving
doctors feedback (at the hospital, in seminars, by
phone, and by mail).
• Improvements included increased use of aspirin
(84% to 90%) and beta blockers (47% to 68%),
and reduced one-year mortality (32.3% to 29.6%).
Source: Thomas A. Marciniak et al., “Improving the quality of care for
Medicare patients with acute myocardial infarction: Results from the
Cooperative Cardiovascular Project,” JAMA 1998;279:1351-1357.
(Quality improvement without report
cards, cont.)
Other methods of improving quality
without report cards include:
(1) Traditional research;
(2) Establishing universal health insurance;
(3) Reducing drug prices;
(4) Ending the nurse shortage;
(5) Public health programs.
Electronic medical records (EMRs)
The following slides demonstrate that
the evidence does not support the claim that
interoperable EMRs will improve quality or
reduce costs.
Advocates claim EMRs can do it all
“[B]y computerizing health records, we can
avoid dangerous medical mistakes, reduce
costs, and improve care.”
George W. Bush, State of the Union
Address, January 20, 2004 (quoted in Rainu Kaushal et
al., “The costs of a National Health Information Network,” Ann Int Med
2005;143:165-173, 165).
(Advocates claims re EMRs cont.)
“It is widely believed that broad adoption of
electronic medical records (EMR) systems will
lead to major health care savings, reduce medical
errors, and improve health.”
Richard Hillestad et al., “Can electronic
medical record systems transform health care?
Potential health benefits, savings, and costs,”
Health Affairs 2005;24:1103-1117, 1103.
Proponents make three claims
(1) EMRs save time;
(2) EMRs improve doctors’ decisions;
(3) EMRs facilitate the production of
report cards which in turn improve quality.
None of these claims have been proven.
EMRs have not been shown to save
time for providers
“With the exception of pharmacy settings,
there is little consistent evidence that IT
[information technology] systems save time
for providers. In some instances, the
literature suggests the reverse.”
Medicare Payment Advisory
Commission (Report to Congress: New Approaches in Medicare,
June 2004, 163)
(EMRs don’t save time, cont.)
“Only 13% of [100] trials evaluated the
impact of the CDSS [clinical decision
support systems] on clinician workflow,
with more than half of these CDSSs
requiring more time and effort from the user
compared with paper-based methods.”
Amit X. Garg et al., “Effects of computerized clinical decision
support systems on practitioner performance and patient outcomes: A
systematic review,” JAMA 2005;293:1223-1238, 1226.
EMRs have not been shown to
improve health
“Fifty-two trials [of clinical decision
support systems] assessed patient outcomes
…. Only 7 trials reported improved patient
outcomes….”
Amit X. Garg et al., “Effects of computerized clinical decision
support systems on practitioner performance and patient outcomes: A
systematic review,” JAMA 2005;293:1223-1238, 1231.
(EMRs and health, cont.)
“In 2001, the Agency for Healthcare Research and
Quality … determined that 14 safety practices had
greater strength of evidence regarding their impact
and effectiveness than any practice which relied
on IT. These include such low-cost items as
appropriate provision of nutrition … and use of
maximum sterile barriers while placing central
intravenous catheters to prevent infections.”
Medpac (Report to Congress: New Approaches in Medicare, June 2004,
162)
Some studies report harm done by
computers
“We found that a widely used CPOE
[computerized physician order entry] system
facilitated 22 types of medication error risks.
Examples include fragmented CPOE displays that
prevent a coherent view of patients’ medications,
pharmacy inventory displays mistaken for dosage
guidelines, … and inflexible ordering formats
generating wrong orders.”
Ross Koppel et al., “Role of computerized
physician order entry systems in facilitating
medication errors,” JAMA 2005;293:1197-1203.
NHIN advocates’ favorite “studies”
are opinions, not evidence
Two papers cited frequently by EMR
advocates:
* Richard Hillestad et al., “Can electronic
medical record systems transform health care?
Potential health benefits, savings, and costs,”
Health Affairs 2005;24:1103-1117.
* Jan Walker et al., “The value of health care
information exchange and interoperability,”
Health Affairs Web Exclusives, January-June 2005;24(Suppl. 1):W5-10-W5-18.
Hillestad et al.
Conclusion: “Fully standardized HIEI
[health care information exchange and
interoperability] could yield a net value of
$77.8 billion per year….”
According to an accompanying paper,
savings would amount to 1.6 percent of
health spending in 2019 (Clifford Goodman,
“Do it for the quality,” 1125)
(Hillestad et al. cont.)
• Authors are part of the Rand HIT Project.
• Funded by Cerner, GE, Hewlett-Packard, Johnson
and Johnson, and Xerox.
• Their methods were extraordinarily biased:
– “[T]he currently useful evidence is not robust enough to
make strong predictions, and we describe our results
only as ‘potential.’”
– “We chose to interpret reported evidence of negative or
no effect of HIT as likely being attributable to
ineffective or not-yet effective implementation.”
Walker et al.
Conclusion: “[N]et savings from
national implementation of fully
standardized interoperability between
providers and five other types of
organizations could yield $77.8 billion
annually, or approximately 5 percent of the
projected $1.661 trillion spent on US health
care in 2003” (W5-10)
(Walker et al. cont.)
• Funded by the Foundation for the eHealth
Initiative, which is funded by the computer
and insurance industries among others.
• “We convened a panel of nationally known
experts…. With relatively little research and
literature on the value of HIEI [health care
information exchange and interoperability],
the panelists played an important role….”
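As a back-of-the-envelope check, the two headline percentages quoted in the last few slides can be reproduced from the dollar figures in the papers themselves; the 2019 baseline below is simply the spending level implied by Goodman's 1.6 percent figure, not an independent estimate.

```python
projected_savings = 77.8      # $ billions per year, the figure both papers report
spending_2003 = 1661.0        # $ billions, Walker et al.'s 2003 spending baseline

share_2003 = 100 * projected_savings / spending_2003
print(f"Savings as a share of 2003 spending: {share_2003:.1f}%")  # about 4.7%, i.e. "approximately 5 percent"

# Goodman's 1.6% of projected 2019 spending implies a baseline of roughly:
implied_2019_spending = projected_savings / 0.016
print(f"Implied 2019 spending baseline: ${implied_2019_spending:.0f} billion")  # about $4.9 trillion
```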
Disease management
The following slides demonstrate that
disease management (DM) is promoted by
insurance companies and DM vendors, and
that the evidence does not support the claim
that disease management will reduce health
care costs.
Disease Management Association of
America’s board, 2006
Lifemasters
Wellpoint
Geisinger Health Plan
McKesson Health Solutions
Matria Healthcare
Caremark Rx
Fibrogen
Pitney-Bowes
Astra-Zeneca Pharmaceuticals
Jefferson Medical College
Dept of Mental Health, TN
American Healthways
Air Logix
Magellan Health Services
Sanofi-Aventis
Kaiser Permanente
American College of Cardiology
DM was begun by the drug industry
“The boom in [DM] was initiated by the
pharmaceutical industry. By 1995, most
pharmaceutical manufacturers had unveiled a
variety of [DM] programs. … Merck-Medco
Managed Care sells its diabetes [DM] program to
… employers and [plans] …, identifying patients
with diabetes through its 51-million-person
pharmacy data base.”
Thomas Bodenheimer, “Disease management – Promises and pitfalls,” New Eng
J Med 1999;340:1202-1205, 1202.
No evidence that disease
management saves money
“On the basis of its examination of peer-reviewed
studies of disease management programs…, CBO
finds that to date there is insufficient evidence to
conclude that disease management programs can
generally reduce the overall cost of health care
services.”
Congressional Budget Office (An Analysis of the Literature
on Disease Management Programs, October 13, 2004,
http://www.cbo.gov/showdoc.cfm?index=5909&sequence=0, accessed September 25,
2005)
(DM doesn’t cut costs, cont.)
“Although interest in … disease
management programs is growing, evidence
of their clinical and cost effectiveness
remains limited. … Without many attractive
alternative mechanisms to control costs,
many employers are adopting disease
management despite the lack of evidence.”
Center for Studying Health System
Change (Ashley Short et al., “Disease management: A leap of faith to
lower-cost, higher-quality health care,” October 2003, Issue Brief No. 69, 3)
(DM doesn’t cut costs, cont.)
“Despite high expectations, evidence of
both disease management and case
management programs’ success in
controlling costs and improving quality
remains limited.”
Center for Studying Health System
Change (Ashley Short et al., “Disease management: A leap of faith to
lower-cost, higher-quality health care,” Issue Brief No. 69, October 2003).
(DM doesn’t cut costs, cont.)
“A growing number of [DM] programs offer
to monitor patients with chronic conditions
and help avoid dangerous complications….
But the long-term cost effectiveness of such
programs has been hard to measure. ...
There is a chance [DM] programs could
actually raise costs….”
Wall Street Journal (Laura Landro, “Does disease
management pay off,” October 20, 2004, D4).
(DM doesn’t cut costs, cont.)
“‘We’ve made real progress in keeping people
healthier who have chronic illnesses,’ says Edward
Wagner [with] Group Health Cooperative’s Center
for Health Studies in Seattle. ‘But we still don’t
know definitively what the economic impacts of
disease management are.’ … Dr. Wagner expresses
skepticism about outsourced disease-management
programs….”
Wall Street Journal (Laura Landro, “Does disease management pay
off,” October 20, 2004, D4).
(DM doesn’t cut costs, cont.)
DM vendors claim DM cuts costs, but they
either offer no empirical evidence or they offer
evidence that fails to take into account the cost of
DM itself. See for example:
RJ Rubin et al., “Clinical and economic impact of
implementing a comprehensive diabetes
management program in managed care,” J Clin
Endocrinol Metab 1998;83:2635-42.
DM can improve quality but at a
cost: Example from the research
The paper in the next slide is among the
best on the costs and savings associated
with disease management of diabetes. The
paper was funded by Kaiser Permanente,
the American Diabetes Association, and
Bristol-Myers Squibb.
(DM can improve quality but at a
cost, cont.)
“Even for the most optimistic picture – a
30-year horizon and assuming no turnover
[patients stay with the same plan for 30
years] – the net effect on diabetes-related
costs would be an increase of about 25%”
(p. 261). “The program used in [this] study
may be too expensive for health plans or a
national program to implement” (p. 251).
David M. Eddy et al., “Clinical outcomes and cost-effectiveness of
strategies for managing people at high risk for diabetes,” Ann Intern Med
2005;143:251-64
Example of how the myth that DM
cuts costs is nourished
“A transformation in diabetes care … has its
foundation in comprehensive health management
for individuals. This …. can be provided by …
endocrinologists, diabetes educators, pharmacists,
dietitians and social workers. Yes, it costs, but
study after study shows it saves money.”
Newt Gingrich, Saving Lives and Saving
Money, Alexis de Tocqueville Institution,
Washington, DC, 148.
But Gingrich cited no studies
showing diabetes DM saves money
• Gingrich cited five studies, but none demonstrated
that the savings from improved health offset the
costs of DM.
• For example, Gingrich offered this quote:
“Improving glycemic control in people with
diabetes is clearly cost-effective.”
• But the study defined “cost effective” to mean the
intervention achieved health benefits (QALYs) at
about the same cost as other accepted treatments.
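The distinction in the last bullet is worth making concrete: "cost-effective" means health gains are bought at an acceptable price per QALY, not that money is saved. A minimal sketch with invented numbers for a hypothetical disease-management program:

```python
# Illustrative numbers only -- not from Gingrich's sources or any study cited here.
program_cost = 2_000_000      # cost of running the DM program
medical_savings = 1_200_000   # downstream medical costs avoided
qalys_gained = 40             # quality-adjusted life-years gained

net_cost = program_cost - medical_savings
cost_per_qaly = net_cost / qalys_gained

print(f"Net cost: ${net_cost:,}")               # $800,000 -> the program does not save money
print(f"Cost per QALY: ${cost_per_qaly:,.0f}")  # $20,000 per QALY -> still "cost-effective"
                                                # by commonly used benchmarks
```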
Conclusions
Managed Care 2.0 has been oversold just as
Managed Care 1.0 was, and will fail to meet
expectations as Managed Care 1.0 did.
Governor and Legislature support
evidence-based medicine …
“DHS will improve the value of Minnesota’s
public health care programs as measured by cost,
quality and access.... [W]e are undertaking several
key efforts. The first is implementing evidence-based decision-making for benefit design and
coverage.”
Source: E-mail message from DHS Commissioner Kevin Goodno to DHS
employees, January 31, 2006
They should support evidence-based
health policy as well
The evidence does not support
* the claim that report cards (or pay-for-performance or “value purchasing”) and
EMRs will improve quality and reduce
costs; nor
* the claim that disease management will
reduce costs.