Diagnostic tests
Subodh S Gupta
MGIMS, Sewagram
Standard 2 x 2 Table
(For Diagnostic Tests)

                              Gold Standard (Disease Status)
                              Present (D+)   Absent (D-)   Total
Diagnostic   Positive (T+)         a              b         a+b
Test         Negative (T-)         c              d         c+d
             Total                a+c            b+d         N
Standard 2 x 2 Table
(For Diagnostic Tests)

                              Gold Standard (Disease Status)
                              Present (D+)   Absent (D-)
Diagnostic   Positive (T+)         TP             FP
Test         Negative (T-)         FN             TN
Gold standard

In any study of diagnosis, the method being evaluated has to be compared to something.
The best available test that is used as the comparison is called the GOLD STANDARD.
Remember that gold standards are not always golden; the new test may even be better than the gold standard.
Test parameters

                              Gold Standard (Disease Status)
                              Present (D+)   Absent (D-)   Total
Diagnostic   Positive (T+)         a              b         a+b
Test         Negative (T-)         c              d         c+d
             Total                a+c            b+d         N

Sensitivity = Pr(T+|D+) = a/(a+c)
  -- Sensitivity is PID (Positive In Disease)
Specificity = Pr(T-|D-) = d/(b+d)
  -- Specificity is NIH (Negative In Health)
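As an illustration (not part of the original slides), here is a minimal Python sketch of these two definitions; the function names and counts are assumptions, chosen to match the worked example that appears later.

```python
def sensitivity(a: int, c: int) -> float:
    """Pr(T+|D+): true positives (a) over all with the disease (a + c)."""
    return a / (a + c)


def specificity(b: int, d: int) -> float:
    """Pr(T-|D-): true negatives (d) over all without the disease (b + d)."""
    return d / (b + d)


# Hypothetical counts matching the worked example further on: a=90, b=5, c=10, d=95
print(sensitivity(90, 10))  # 0.9  -> PID: Positive In Disease
print(specificity(5, 95))   # 0.95 -> NIH: Negative In Health
```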
Test parameters

                              Gold Standard (Disease Status)
                              Present (D+)   Absent (D-)   Total
Diagnostic   Positive (T+)         a              b         a+b
Test         Negative (T-)         c              d         c+d
             Total                a+c            b+d         N

False Positive Rate (FP rate) = Pr(T+|D-) = b/(b+d)
False Negative Rate (FN rate) = Pr(T-|D+) = c/(a+c)
Diagnostic Accuracy = (a+d)/N
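A similarly hedged sketch of the error rates and accuracy, again with assumed function names:

```python
def false_positive_rate(b: int, d: int) -> float:
    """Pr(T+|D-) = b/(b+d); equals 1 - specificity."""
    return b / (b + d)


def false_negative_rate(a: int, c: int) -> float:
    """Pr(T-|D+) = c/(a+c); equals 1 - sensitivity."""
    return c / (a + c)


def diagnostic_accuracy(a: int, b: int, c: int, d: int) -> float:
    """(a + d) / N: proportion of all test results that are correct."""
    return (a + d) / (a + b + c + d)


# Hypothetical counts (a=90, b=5, c=10, d=95), as in the worked example below
print(false_positive_rate(5, 95))          # 0.05
print(false_negative_rate(90, 10))         # 0.1
print(diagnostic_accuracy(90, 5, 10, 95))  # 0.925
```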
Test parameters

                              Gold Standard (Disease Status)
                              Present (D+)   Absent (D-)   Total
Diagnostic   Positive (T+)         a              b         a+b
Test         Negative (T-)         c              d         c+d
             Total                a+c            b+d         N

Positive Predictive Value (PPV) = Pr(D+|T+) = a/(a+b)
Negative Predictive Value (NPV) = Pr(D-|T-) = d/(c+d)
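And a sketch of the predictive values in the same notation (assumed names; the counts anticipate the worked example on the next slide):

```python
def ppv(a: int, b: int) -> float:
    """Pr(D+|T+) = a/(a+b): probability of disease given a positive test."""
    return a / (a + b)


def npv(c: int, d: int) -> float:
    """Pr(D-|T-) = d/(c+d): probability of no disease given a negative test."""
    return d / (c + d)


# Hypothetical counts (a=90, b=5, c=10, d=95), as in the worked example below
print(round(ppv(90, 5), 3))   # 0.947
print(round(npv(10, 95), 3))  # 0.905
```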
Test parameters: Example

                              Gold Standard (Disease Status)
                              Present (D+)   Absent (D-)   Total
Diagnostic   Positive (T+)        90              5          95
Test         Negative (T-)        10             95         105
             Total               100            100         200

Sensitivity = 90/(90+10); Specificity = 95/(95+5)
FP rate = 5/(95+5); FN rate = 10/(90+10)
Diagnostic Accuracy = (90+95)/(90+10+5+95)
PPV = 90/(90+5); NPV = 95/(95+10)
PPV & NPV with Prevalence

Sensitivity              90%
Specificity              95%
False Negative Rate      10%
False Positive Rate       5%
PPV                    94.7%
NPV                    90.5%
Diagnostic Accuracy    92.5%
[Figure: Healthy population vs sick population -- predictive values in hospital-based data vs predictive values in population-based data]
Test Parameters: Example

                              Gold Standard (Disease Status)
                              Present (D+)   Absent (D-)   Total
Diagnostic   Positive (T+)        90              5          95
Test         Negative (T-)        10             95         105
             Total               100            100         200

Prevalence = 50%
PPV = 94.7%
NPV = 90.5%
Diagnostic Accuracy = 92.5%
Test Parameters: Example

                              Gold Standard (Disease Status)
                              Present (D+)   Absent (D-)   Total
Diagnostic   Positive (T+)        90             95          185
Test         Negative (T-)        10           1805         1815
             Total               100           1900         2000

Prevalence = 5%
PPV = 48.6%
NPV = 99.4%
Diagnostic Accuracy = 94.8%
Test Parameters: Example

                              Gold Standard (Disease Status)
                              Present (D+)   Absent (D-)   Total
Diagnostic   Positive (T+)        90            995          1085
Test         Negative (T-)        10          18905         18915
             Total               100          19900         20000

Prevalence = 0.5%
PPV = 8.3%
NPV = 99.9%
Diagnostic Accuracy = 95%
Test Parameters: Example

                              Gold Standard (Disease Status)
                              Present (D+)   Absent (D-)   Total
Diagnostic   Positive (T+)        90           9995          10085
Test         Negative (T-)        10         189905         189915
             Total               100         199900         200000

Prevalence = 0.05%
PPV = 0.9%
NPV = 100%
Diagnostic Accuracy = 95%
PPV & NPV with Prevalence

Prevalence              50%      5%     0.5%   0.05%
Sensitivity             90%     90%      90%     90%
Specificity             95%     95%      95%     95%
PPV                   94.7%   48.6%     8.3%    0.9%
NPV                   90.5%   99.4%    99.9%    100%
Diagnostic Accuracy   92.5%   94.8%      95%     95%
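The PPV and NPV columns of this table can be reproduced from sensitivity, specificity and prevalence alone via Bayes' theorem. The sketch below is illustrative only (the function name is an assumption):

```python
def predictive_values(sens: float, spec: float, prev: float) -> tuple:
    """PPV and NPV from sensitivity, specificity and prevalence (Bayes' theorem)."""
    ppv = (sens * prev) / (sens * prev + (1 - spec) * (1 - prev))
    npv = (spec * (1 - prev)) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv


for prev in (0.50, 0.05, 0.005, 0.0005):
    ppv, npv = predictive_values(0.90, 0.95, prev)
    print(f"prevalence {prev:.2%}: PPV {ppv:.1%}, NPV {npv:.1%}")
# prevalence 50.00%: PPV 94.7%, NPV 90.5%
# prevalence 5.00%: PPV 48.6%, NPV 99.4%
# prevalence 0.50%: PPV 8.3%, NPV 99.9%
# prevalence 0.05%: PPV 0.9%, NPV 100.0%
```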
Trade-offs between
Sensitivity and Specificity
Sensitivity and Specificity solve the wrong problem!!!

When we use a diagnostic test clinically, we do not know who actually has and who does not have the target disorder; if we did, we would not need the diagnostic test.
Our clinical concern is not the vertical one of sensitivity and specificity, but the horizontal one of the meaning of positive and negative test results.
When a clinician uses a test, which question is important?

If I obtain a positive test result, what is the probability that this person actually has the disease?
If I obtain a negative test result, what is the probability that the person does not have the disease?
Test parameters

                              Gold Standard (Disease Status)
                              Present (D+)   Absent (D-)   Total
Diagnostic   Positive (T+)         a              b         a+b
Test         Negative (T-)         c              d         c+d
             Total                a+c            b+d         N

Sensitivity = Pr(T+|D+) = a/(a+c)
Specificity = Pr(T-|D-) = d/(b+d)
PPV = Pr(D+|T+) = a/(a+b)
NPV = Pr(D-|T-) = d/(c+d)
Likelihood Ratios

A likelihood ratio is a ratio of two probabilities.
Likelihood ratios state how many times more (or less) likely a particular test result is observed in patients with the disease than in those without the disease.
LR+ tells how much the odds of the disease increase when a test is positive.
LR- tells how much the odds of the disease decrease when a test is negative.
Likelihood Ratios

The LR for a positive test is defined as:
LR(+) = Prob(T+|D) / Prob(T+|ND)
LR(+) = [TP/(TP+FN)] / [FP/(FP+TN)]
LR(+) = Sensitivity / (1 - Specificity)
Likelihood Ratios

The LR for a negative test is defined as:
LR(-) = Prob(T-|D) / Prob(T-|ND)
LR(-) = [FN/(TP+FN)] / [TN/(FP+TN)]
LR(-) = (1 - Sensitivity) / Specificity
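A minimal sketch of both likelihood ratios in terms of sensitivity and specificity (assumed function names; the values match the running example):

```python
def lr_positive(sens: float, spec: float) -> float:
    """LR(+) = Sensitivity / (1 - Specificity)."""
    return sens / (1 - spec)


def lr_negative(sens: float, spec: float) -> float:
    """LR(-) = (1 - Sensitivity) / Specificity."""
    return (1 - sens) / spec


# Running example: sensitivity 90%, specificity 95%
print(lr_positive(0.90, 0.95))  # ~18
print(lr_negative(0.90, 0.95))  # ~0.105
```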
What is a good Likelihood Ratio?

An LR(+) greater than 10 or an LR(-) less than 0.1 provides convincing diagnostic evidence.
An LR(+) greater than 5 or an LR(-) less than 0.2 is considered to provide strong diagnostic evidence.
Likelihood Ratio: Example

                              Gold Standard (Disease Status)
                              Present (D+)   Absent (D-)   Total
Diagnostic   Positive (T+)        90              5          95
Test         Negative (T-)        10             95         105
             Total               100            100         200

Likelihood ratio for a positive test = (90/100) / (5/100) = 90/5 = 18
Likelihood ratio for a negative test = (10/100) / (95/100) = 10/95 = 0.11
Exercise

In a hypothetical example of a diagnostic test, serum levels of a biochemical marker of a particular disease were compared with the known diagnosis of the disease. A level of 100 international units of the marker or greater was taken as an arbitrary positive test result:
Example

                         Disease Status
                         Present   Absent   Total
Marker   >= 100             431       30      461
         < 100               29      116      145
         Total              460      146      606
Exercise

Initial creatine phosphokinase (CK) levels were related to the subsequent diagnosis of acute myocardial infarction (MI) in a group of patients with suspected MI. Four ranges of CK results were chosen for the study:
Exercise

                         Disease Status
                         Present   Absent   Total
CPK      >= 280              97        1       98
         80-279             118       15      133
         40-79               13       26       39
         1-39                 2       88       90
         Total              230      130      360
Odds and Probability

              Disease Status
              Present   Absent   Total
                 a         b      a+b

Probability of disease = (# with disease) / (# with & # without disease) = a/(a+b)
Odds of a disease = (# with disease) / (# without disease) = a/b
Probability = Odds / (Odds + 1); Odds = Probability / (1 - Probability)
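A small sketch of the two conversions (assumed function names):

```python
def prob_to_odds(p: float) -> float:
    """Odds = Probability / (1 - Probability)."""
    return p / (1 - p)


def odds_to_prob(odds: float) -> float:
    """Probability = Odds / (Odds + 1)."""
    return odds / (odds + 1)


print(prob_to_odds(0.2))   # 0.25 (a 20% probability is odds of 1 to 4)
print(odds_to_prob(0.25))  # 0.2
```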
Use of Likelihood Ratio

Employ the following three-step procedure:
1. Identify the pre-test probability and convert it to pre-test odds.
2. Determine the post-test odds using the formula:
   Post-test odds = Pre-test odds x Likelihood ratio
3. Convert the post-test odds into the post-test probability.
Likelihood Ratio: Example

A 52-year-old woman presents after detecting a 1.5 cm breast lump on self-examination. On clinical examination, the lump is not freely movable. If the pre-test probability is 20% and the LR for a non-movable breast lump is 4, calculate the probability that this woman has breast cancer.
Likelihood Ratio: Solution
First step
Pre-test probability = 0.2
Pre-test odds = Pre-test prob / (1-pre-test prob)
Pre-test odds = 0.2/(1-0.2) = 0.2/0.8 = 0.25
Second step
Post-test odds = Pre-test odds x LR
Post-test odds = 0.25*4 = 1
Third step
Post-test probability = Post-test odds / (1 + Post-test odds)
Post-test probability = 1/(1+1) = ½ = 0.5
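A sketch of the three-step procedure as a single function (assumed name), checked against this worked example:

```python
def post_test_probability(pre_test_prob: float, lr: float) -> float:
    """Three-step procedure: probability -> odds, multiply by LR, back to probability."""
    pre_test_odds = pre_test_prob / (1 - pre_test_prob)  # step 1
    post_test_odds = pre_test_odds * lr                  # step 2
    return post_test_odds / (1 + post_test_odds)         # step 3


# Breast-lump example: pre-test probability 20%, LR for a non-movable lump = 4
print(post_test_probability(0.20, 4))  # 0.5
```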
Receiver Operating Characteristic (ROC)

ROC analysis is used for:
Finding the best test
Finding the best cut-off
Finding the best combination

Test results may be graded on an ordinal scale: definitive positive, probably positive, equivocal, probably negative.
[Figure: ROC curve constructed from multiple test thresholds -- distributions of diseased and not-diseased patients with cut-offs a, b, c, d, each plotted as sensitivity against 1 - specificity]
Receiver Operating Characteristic (ROC)

The ROC curve allows comparison of different tests for the same condition without (before) specifying a cut-off point.
The test with the largest AUC (area under the curve) is the best.
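A hedged sketch of how an ROC curve and its AUC can be built by sweeping a cut-off over a test result; the marker values below are hypothetical, not data from these slides:

```python
def roc_points(diseased, not_diseased):
    """Sweep every observed value as a cut-off ("positive" if value >= cut-off)
    and return the (1 - specificity, sensitivity) points of the empirical ROC curve."""
    thresholds = sorted(set(diseased) | set(not_diseased))
    points = [(1.0, 1.0)]  # cut-off below every value: everyone tests positive
    for t in thresholds:
        sens = sum(x >= t for x in diseased) / len(diseased)
        spec = sum(x < t for x in not_diseased) / len(not_diseased)
        points.append((1 - spec, sens))
    points.append((0.0, 0.0))  # cut-off above every value: everyone tests negative
    return sorted(points)


def auc(points):
    """Trapezoidal area under the ROC curve."""
    return sum((x1 - x0) * (y0 + y1) / 2
               for (x0, y0), (x1, y1) in zip(points, points[1:]))


# Hypothetical marker levels in diseased and non-diseased patients
diseased = [60, 95, 120, 180, 250, 310]
not_diseased = [20, 35, 40, 55, 70, 110]
print(round(auc(roc_points(diseased, not_diseased)), 2))  # ~0.92
```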
Features of a good diagnosis study

Comparative (compares the new test against the old test).
There should be a "gold standard".
Should include both positive and negative results.
Usually involves "blinding" of the patient, the tester, and the investigator.
USERS' GUIDES TO THE MEDICAL LITERATURE
How to use an article about a diagnostic test?

Are the results of the study valid?
What are the results and will they help me in caring for my patients?
Methodological Questions for Appraising Journal Articles about Diagnostic Tests

1. Was there an independent, 'blind' comparison with a 'gold standard' of diagnosis?
2. Was the setting for the study, as well as the filter through which the study patients passed, adequately described?
3. Did the patient sample include an appropriate spectrum of disease?
4. Was an analysis of the pertinent subgroups carried out?
5. Were the tactics for carrying out the test described in sufficient detail to permit their exact replication?
6. Was the reproducibility of the test result (precision) and its interpretation (observer variation) determined?
7. Was the term 'normal' defined sensibly?
8. Was the precision of the test statistics given?
9. Were indeterminate test results presented?
10. If the test is advocated as part of a cluster or sequence of tests, was its contribution to the overall validity of the cluster or sequence determined?
11. Was the 'utility' of the test determined?
Thank you