Don’t let them pull the wool over your eyes
or
Why independent critical evaluation of the evidence is essential (and not all about stats)

Jonathan Hall
Critical Evaluation Pharmacist, Southampton General Hospital
Wessex Drug and Medicines Information Centre, Pharmacy Department, Southampton General Hospital
When the subject of critical evaluation is raised (as one often does), two reactions spring to mind. The first is “Why bother, they’re the experts?”, and the second
is “Arghh, that’s all stats, isn’t it?”. My response to the first is usually to get on a soapbox and preach, but for the sake of space on this poster we’ll leave it as
“because we need to”. My reply to the second is “no, it isn’t”.
The following is a light-hearted depiction of ways information can be generated and reported to present a drug in a more favourable light, and whilst it may be
considered slightly extreme, these are all ploys that I have seen. Unfortunately space precludes the provision of specific examples, but trust me, they do exist.
ORIGINAL’ISH ARTICLE
A Randomised, Controlled Study to Prove What We Want It To: The ALYSM* Study
B.K. Hander, MD
S.L. Mysoul, MD
M.Y. Researchnursedidalltheworkbutigetmynameonthepaper, MD, PHD
*AS LONG AS YOU SPONSOR ME
ABSTRACT
As this is the only part most people read, only state percentages and P-values for positive
outcomes, even if the primary endpoint has been missed. Don’t mention any potentially
detrimental safety data or significant patient withdrawals due to intolerable side-effects.
Finally, end with the standard line that this [INSERT DRUG NAME HERE] is efficacious and safe and
should be the new standard of care in [INSERT NAME OF CHRONIC CONDITION], even though this
was only a 6-week study and 25% of patients withdrew due to side-effects.
INTRODUCTION
Fill this section with very long names and complicated abbreviations in the hope that it will deter
people from reading on in any kind of depth, and just go back to the abstract.
RESEARCH DESIGN AND METHODS
Only compare your drug with placebo and blame regulatory constraints for the lack of comparative
data. If you are brave enough to use an active comparator, ensure it is inappropriate, inadequately
dosed, or both. Alternatively, use the active drug as an ‘internal validator’ in a placebo-controlled
trial. This leaves you free to conduct between-drug analyses only for selected ‘positive’ endpoints,
even though the study is not adequately powered to do so. Try to put your drug in a new delivery
device so that more than one intervention is studied. Use inclusion criteria that will not be used in
practice, making it impossible to know how the drug will perform in ‘real life’. Employ a run-in
period with your study drug, and only allow responders with no adverse effects to proceed into the
study. Alternatively, use a placebo run-in period and disqualify placebo responders from entering
the placebo-controlled study. Either way you’ll bias the results in favour of your drug (see the
sketch after this section). When choosing
your endpoints, ensure that their timing favours your drug. It is also useful to drop any endpoints
from earlier studies that weren’t positive, irrespective of how clinically important they were. When
disclosing your endpoints, only list those where the outcome was found to be favourable. Endpoints
can be changed from those listed in the original protocol (allegedly), or just not reported in the
paper (who will know?). Where possible, use short-term surrogate endpoints. Speculate about how these
will lead to positive changes in more important endpoints, but never conduct the studies.
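
As an aside, the run-in ploy is easy to demonstrate. Below is a minimal sketch (Python; the response distribution and all numbers are my own invented assumptions, not from any real trial): selecting run-in responders enriches the randomised cohort with patients who were always more likely to respond, so the trial overstates ‘real life’ performance.

    import random

    random.seed(1)

    N = 100_000
    # Assumption: each patient has a latent chance of responding in any given
    # treatment period, drawn from a skewed distribution (mean roughly 29%).
    latent = [random.betavariate(2, 5) for _ in range(N)]

    def responds(p):
        return random.random() < p

    # Active-drug run-in: only patients who respond during the run-in proceed
    # into the randomised study (adverse effects ignored for simplicity).
    enrolled = [p for p in latent if responds(p)]

    pop_rate = sum(responds(p) for p in latent) / N
    trial_rate = sum(responds(p) for p in enrolled) / len(enrolled)

    print(f"Response rate, unselected patients: {pop_rate:.1%}")   # about 29%
    print(f"Response rate, post-run-in cohort:  {trial_rate:.1%}") # about 37%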
RESULTS
If a less favourable endpoint must be presented (e.g. the primary endpoint), gloss over it and put
undue emphasis on a secondary endpoint of your choice. If this proves difficult, subgroup analysis will
invariably find some subset of patients where your drug will show superiority. Don’t present details
of measurement scales used, making it more difficult to assess the clinical significance of any
changes. Interpret the clinical significance of any changes accordingly. Where an ITT analysis is
unfavourable, conduct a per-protocol analysis and argue its merits. Present all data as relative risk
reductions (RRR) rather than absolute risk reductions (ARR) to overemphasise any differences (a 50%
RRR looks good on the face of it, but keep quiet that a 50% improvement of sweet FA is still sweet
FA; see the worked example after this section). Where statistical significance is shown, just call these
differences ‘significant’, and don’t present
confidence intervals if they are unfavourable. Present the results in such a way as to make it
impossible to know when the drug should be stopped in practice. Finally, if none of these methods
show your drug to be favourable, bury the study.
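
Two of these ploys are easy to make concrete. The sketch below (Python; all event rates, sample sizes and subgroup counts are invented for illustration) first shows how a headline 50% RRR can hide a 1% ARR, then simulates subgroup fishing on a drug with no effect whatsoever: test twenty arbitrary subgroups per study and, more often than not, at least one comes out ‘significant’ at p < 0.05 by chance alone.

    import math
    import random

    # Part 1: relative vs absolute risk reduction (rates invented for illustration).
    control_rate = 0.02              # 2% of placebo patients have the event
    drug_rate = 0.01                 # 1% on the drug
    arr = control_rate - drug_rate   # absolute risk reduction
    rrr = arr / control_rate         # relative risk reduction
    print(f"RRR = {rrr:.0%}")        # 50%: sounds impressive
    print(f"ARR = {arr:.1%}")        # 1.0%: rather less so
    print(f"NNT = {1 / arr:.0f}")    # treat 100 patients to prevent one event

    # Part 2: subgroup fishing under the null (the drug genuinely does nothing).
    def two_prop_p(x1, n1, x2, n2):
        """Two-sided z-test for a difference in proportions (normal approximation)."""
        p = (x1 + x2) / (n1 + n2)
        se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
        if se == 0:
            return 1.0
        z = abs(x1 / n1 - x2 / n2) / se
        return math.erfc(z / math.sqrt(2))

    random.seed(7)
    RATE, N = 0.30, 100              # identical event rate in both arms
    studies_with_a_hit = 0
    for _ in range(1000):            # 1,000 fishing expeditions...
        for subgroup in range(20):   # ...each testing 20 arbitrary subgroups
            drug = sum(random.random() < RATE for _ in range(N))
            plac = sum(random.random() < RATE for _ in range(N))
            if two_prop_p(drug, N, plac, N) < 0.05:
                studies_with_a_hit += 1
                break
    # Roughly 1 - 0.95**20, i.e. around 60%, of expeditions land a 'significant' subgroup.
    print(f"Expeditions finding a 'significant' subgroup: {studies_with_a_hit / 1000:.0%}")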
DISCUSSION
When discussing the effects of the drug, don’t discuss any effect seen in the placebo arm. If there
are any undesirable outcomes in the results section use this section as a way to hypothesise why
these have occurred. Remember, don’t hypothesise about whether the same deficiencies could explain
why the positive outcomes occurred. Call the study ‘pivotal’.
COST-EFFECTIVENESS (CURRENTLY OPTIONAL)
Generate utilities in-house so the desired results can always be achieved. To prove cost-effectiveness,
a Markov model is useful, as such models are difficult to understand and infinitely flexible, with many
places where assumptions can be inserted. Generate many models, each with different assumptions, and
only use the most favourable one (see the sketch after this section). If cost-effectiveness can’t
possibly be shown, don’t measure it.
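
To illustrate the flexibility, here is a deliberately tiny two-state Markov cohort model (Python; every number is an invented assumption, and a real model would add discounting and more health states). Nudging just the assumed mortality benefit and utility moves the incremental cost per QALY by an order of magnitude.

    # A tiny two-state Markov cohort model (states: 'alive', 'dead'); all
    # numbers below are invented. The point: small tweaks to the assumed
    # mortality benefit and utility swing the ICER enormously.

    def icer(p_die_drug, p_die_soc, u_drug, u_soc, cost_drug, cost_soc, cycles=20):
        """Incremental cost per QALY of the new drug vs standard of care (SoC)."""
        def run(p_die, utility, annual_cost):
            alive, qalys, cost = 1.0, 0.0, 0.0
            for _ in range(cycles):            # one cycle = one year
                qalys += alive * utility       # QALYs accrued by survivors
                cost += alive * annual_cost    # costs accrued by survivors
                alive *= 1 - p_die             # transition to the 'dead' state
            return qalys, cost

        q_new, c_new = run(p_die_drug, u_drug, cost_drug)
        q_soc, c_soc = run(p_die_soc, u_soc, cost_soc)
        return (c_new - c_soc) / (q_new - q_soc)

    # 'Conservative' assumptions: a marginal mortality benefit, no utility gain.
    print(f"£{icer(0.048, 0.05, 0.75, 0.75, 8_000, 2_000):,.0f} per QALY")  # roughly £490,000

    # Nudge two assumptions (mortality 0.048 -> 0.040, utility 0.75 -> 0.90)
    # and the very same model now slips under a £30,000-per-QALY threshold.
    print(f"£{icer(0.040, 0.05, 0.90, 0.75, 8_000, 2_000):,.0f} per QALY")  # roughly £29,000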
Some Top Tips for Critical Evaluation (from my experience)
Define the question – e.g. what is it wanted for locally? This sounds obvious, but it can prevent a lot of unnecessary work. Engage with the clinical experts.
Literature search – Be careful of existing drug reviews; they don’t always critically appraise the quality of the evidence. Look for letters from experts in the field
commenting on published papers: these offer an expert clinical insight which we do not always possess.
Reviewing the information – Read with an open mind. If you don’t understand the stats, don’t get bogged down; it is more important to determine clinical
significance. Make your review a ‘real world’ translation of the trial results, i.e. what it will mean both to the patient and to the local healthcare economy. Remember,
when you have evaluated the data, you are the expert on the evidence base. Don’t assume that the same is true for the clinical experts.