Temporal Aspects of Visual Extinction
Chris Rorden
Voxelwise Lesion Symptom Mapping
– Motivation: strengths and limitations.
– Should we examine acute or chronic injury?
– Visualizing injury.
– Mapping brain injury.
– Normalizing lesion maps.
– Lesion mapping statistics.
“George Miller coined the term ‘cognitive neuroscience’…we
already knew that neuropsychology was not what we had
in mind…the bankruptcy and intellectual impoverishment
of that idea seemed self evident.”
-Michael S. Gazzaniga, 2000
1
Lesion symptom mapping
Lesion symptom mapping infers brain function
by observing consequences of brain injury.
Historically, one of the most influential tools in
understanding brain function
– Language: Broca’s and Wernicke’s areas
– Memory: anterograde amnesia
– Vision: Prosopagnosia, achromatopsia
– Emotions, Motor Control, etc.
2
Lesion mapping
Perhaps lesion mapping was historically important
because it was the only tool.
New tools address some of the weaknesses.
Disadvantages of lesion mapping:
– Poor spatial precision: lesions are messy; location and
extent are influenced by vasculature, not functional areas.
– Poor temporal precision: lesions are permanent.
Sequence of information processing difficult to assess.
– Dilemma of acute vs chronic lesion mapping.
3
Acute vs chronic lesion mapping
Acute lesion mapping:
– Initially after a stroke, one sees widespread
dysfunction. Intact areas are disrupted as they
depend on damaged areas.
– Chronically, the brain is plastic, so it is difficult to
infer what a brain region used to do.
Ideally, we would examine both acute and
chronic effects.
– Acute injury is more clinically relevant.
– Chronic deficits are more stable and identify
functions that cannot be compensated.
4
Lesion mapping
Advantages:
– Stronger inference:
Activation techniques (fMRI, ERP): is area involved with task?
Disruption techniques (TMS, lesions): is area required by task?
– Understand function of tightly coupled network.
Activation techniques like listening to whole orchestra: hard to
identify differential contribution of musicians.
• Example: highly connected visual attention network tends to be
activated in concert, even though lesions suggest different functions.
Lesion method: How does music change if individual stops playing?
– Clinically relevant:
can we predict recovery or best treatment?
5
CT versus MRI scans
CT
– Clinically crucial:
Detect acute hemorrhage
Can be conducted when MRI contraindicated
– Limited research potential
Exposes individual to radiation
• Difficult to collect control data
• Typically very thick slices, impossible to normalize
Little contrast between gray and white matter
MRI
– Different contrasts (T1, T2, DWI)
– No radiation, so we can collect thin slices if we have time.
6
Conventional MRI scans
T1 (anatomical): fast to acquire, excellent structural
detail (e.g. white and gray matter).
T2 (pathological): slower to acquire, therefore usually
lower resolution than T1. Excellent for finding lesions.
T1
T2
7
Lesion mapping: T1 vs T2
T1 scans offer good spatial resolution.
T2 scans better for identifying extent of injury, but
poor spatial resolution.
Solutions:
1. Acquire chronic T1 (>8 weeks)
2. Acquire both T1 and T2, use T2 to guide mapping on T1.
3. Acquire T2, map on normalized iconic brain (requires
expert lesion mapper).
4. Acquire high resolution T2 image, use for both mapping
and normalization (e.g. 1x1x1mm T2 ~9min). Requires
latest generation MRI.
Note: Many clinicians like FLAIR as it attenuates
CSF. Lesion signal similar to T2. Normalization
tricky (thick slices, no standard template).
[Figure: example T1, T2, and FLAIR images]
8
Imaging acute stroke
T1/T2 MRI and X-rays cannot visualize
hyperacute ischemic strokes.
– Acute: Subtle low signal on T1, often difficult to
see, and high signal (hyperintense) on spin
density and/or T2-weighted and proton density-weighted
images starting 8 h after onset. Mass
effect maximal at 24 h, sometimes starting 2 h
after onset.
– Subacute (1 wk or older): Low signal on T1, high
signal on T2-weighted images. Follows vascular
distribution. Revascularization and blood-brain
barrier breakdown may cause enhancement with
contrast agents.
– Old (several weeks to years): Low signal on T1,
high signal on T2. Mass effect disappears after 1
mo. Loss of tissue with large infarcts.
Parenchymal enhancement fades after several
months.
www.strokecenter.org/education/ct-mri_criteria/
[Figure: CT and T2 images at acute stage and +3 days; source: www.med.harvard.edu/AANLIB/]
9
Imaging Hyperacute Stroke
T1/T2 scans do not show acute injury.
Diffusion and Perfusion weighted scans show
acute injury:
– Diffusion images show permanent injury. Perhaps
good predictor of eventual recovery.
– Perfusion scans show functional injury. Best correlate
of acute behavior.
– Difference between DWI and PWI is tissue that might
survive.
Diaschisis: regions connected to damaged areas show acute
hypoperfusion and dysfunction.
Hypoperfused regions may have enough collateral blood
supply to survive but not function correctly (misery perfusion).
[Figure: T2 and DWI images of acute stroke]
10
Perfusion imaging
Allows us to measure perfusion
– Static images can detect stenosis and
aneurysms (MRA)
– Dynamic images can measure perfusion (PWI)
Measure latency – acute latency appears to be strong
predictor of functional deficits.
Measure volume
– Perfusion imaging uses either Gadolinium or
blood as contrast agent.
Gd offers strong signal. However, only a few boluses
can be used, and a medical team is required in case of
(very rare) anaphylaxis.
Arterial Spin Labelling can be conducted continuously
(CASL). Good CASL requires good hardware.
11
DTI in stroke
Diffusion tensor imaging is an extension of Diffusion Weighted
Imaging.
DTI allows us to examine integrity and direction of fiber tracts.
This will allow us to examine disconnection syndromes (see Catani).
Analysis of DTI still in infancy.
[Figure: DTI tractography, stroke vs healthy]
12
fMRI in stroke
fMRI analysis of stroke difficult.
Hemodynamic response often disrupted.
– Misery perfusion: system always compensating for
reduced bloodflow, so no dynamic ability to increase.
– Luxury perfusion: destroyed tissue no longer requires
blood, so regulation not required for surviving tissue.
13
Summary
Modality of scanning depends on age of lesion.
Hyperacute imaging will require PWI/DWI.
Older injuries seen on T1 and T2.
Different modalities provide different information: can we
combine information across modalities?
Our analysis should be based on individuals with similar
delay between injury and observation.
14
Thought experiment
What brain injury
leads to visual field
defects?
15
Mapping Lesions
With my software it is easy to trace the injured area.
We can create an overlay plot of the damaged region.
For example: here are the lesion maps for 36
people with visual field defects:
16
The problem with overlay plots
Overlay plots are misleading:
– Highlight areas involved with task (good)
– Highlight areas commonly damaged (bad)
Brain damage is not random: some brain areas
more vulnerable. Overlay plots highlight these
areas of common damage.
Solution: collect data from patients with similar
injury but without target deficit:
17
Value of control data
Solution: collect data from patients with similar
injury but without target deficit:
18
Statistical plots
We can use statistics to identify areas that
reliably predict deficit
E.g. damage that results in visual field cuts
19
Lesion analysis
fMRI – many low-resolution volumes per individual.
Typical stages (SPM):
1. Slice time correct
2. Motion correct
3. Normalize
4. Smooth (spatial, temporal)
5. Statistics
Lesions – one high-quality scan per individual.
Typical stages:
1. Map lesion
2. Normalize
3. Statistics
20
Map lesion
We need to map the
region of brain injury.
We can use MRIcron to
draw the location of the
lesion.
Note that our high-resolution T1 scans may
not show full extent of
injury (must refer to
other scans).
21
Normalization
Normalization adjusts
size and shape of brain.
Aligns brains to
‘template’ image in
stereotaxic space.
Allows us to compare
brains between
individuals.
22
Normalization Transforms
[Figure: linear transforms (translation, zoom, rotation, shear) vs non-linear warping]
23
Lesions disrupt normalization
Normalization works by adjusting the image orientation until
‘difference’ with template is minimized
However, region of injury appears different in image and template
Therefore, normalization will attempt to warp lesioned region.
One solution is to use ‘masked’ normalization (Brett et al., 2001),
however SPM5 and later are very robust (Crinion et al., 2007)
[Figure: image, template, and variance maps]
24
Lesion analysis
Two classes of analysis:
– Binomial analysis: if behavior falls into two mutually
exclusive groups (e.g. Broca’s Aphasia, No Broca’s
Aphasia).
Traditional test: Fisher-exact or Chi-squared test
Alternative test: Liebermeister quasi-exact test
– Continuous analysis: if data is not binomial. (e.g.
number of words starting with ‘b’ spontaneously
reported in two minutes).
Traditional test: t-test
Alternative test: Brunner-Munzel test.
25
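As a concrete sketch of the binomial branch, the 2×2 table for a single voxel can be tallied from binary lesion and deficit labels before any test is applied (the helper name and input format are illustrative, not from any VLSM package):

```python
def contingency(lesioned, deficit):
    """Tally a 2x2 table for one voxel: rows = lesioned/spared,
    columns = deficit/no deficit. Inputs are parallel lists of 0/1."""
    a = sum(1 for l, d in zip(lesioned, deficit) if l and d)          # lesioned, deficit
    b = sum(1 for l, d in zip(lesioned, deficit) if l and not d)      # lesioned, no deficit
    c = sum(1 for l, d in zip(lesioned, deficit) if not l and d)      # spared, deficit
    e = sum(1 for l, d in zip(lesioned, deficit) if not l and not d)  # spared, no deficit
    return [[a, b], [c, e]]

table = contingency([1, 1, 1, 0, 0], [1, 1, 0, 1, 0])
```

The resulting table is what the Fisher, chi-squared, or Liebermeister test then evaluates.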
Binomial Data
Visual neglect: ask patients to find each letter ‘A’ in a
cluttered display (60 possible).
People who detect <55 are considered to have a deficit.
[Figure: cancellation displays, deficit vs control]
26
Statistics
Our goal is to see if brain
anatomy predicts behavior.
We collect scans and
behavioral data from many
stroke patients.
Consider 24 patients – half with
deficits.
Statistics will identify regions that
predict deficit.
[Figure: lesion overlays for 12 people with and 12 without a
cancellation deficit, and the resulting statistical map]
27
Binomial Tests
Traditional tests: the
Fisher and chi-squared tests
tend to be very
conservative (they
assume fixed marginals).
The Liebermeister
measure offers near-optimal performance.
Rorden et al. (2007)
28
T-test lesion analysis.
A t-test requires two groups and one continuous variable.
The VLSM t-test is orthogonal to t-tests used for fMRI/VBM:
– fMRI/VBM t-tests:
Deficit defines two groups.
Voxel intensity provides continuous variable.
– VLSM
Voxel intensity (lesion/no lesion) defines two groups.
Behavioral performance provides continuous variable.
Note VLSM group size varies from voxel to voxel.
Statistical tests provide optimal power when both groups have the same
number of observations (balanced).
– Therefore, VLSM power fluctuates across voxels.
– We cannot make inferences about voxels that are rarely damaged or always
damaged (also true for binomial tests).
29
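The per-voxel logic above can be sketched in plain Python. The function names and the minimum-group cutoff are illustrative assumptions; real VLSM tools such as NPM handle this at scale, and the t statistic here would still need a p-value from the t distribution or from permutation:

```python
import math
import statistics

def welch_t(x, y):
    """Welch two-sample t statistic (no p-value computed here)."""
    vx, vy = statistics.variance(x), statistics.variance(y)
    return (statistics.mean(x) - statistics.mean(y)) / math.sqrt(vx / len(x) + vy / len(y))

def vlsm_t(lesion_maps, behavior, min_group=2):
    """lesion_maps: one binary tuple per patient (one bit per voxel).
    behavior: one continuous score per patient.
    Returns {voxel_index: t}, skipping voxels that are damaged in too few
    or too many patients (groups too unbalanced to test)."""
    n_vox = len(lesion_maps[0])
    results = {}
    for v in range(n_vox):
        lesioned = [b for m, b in zip(lesion_maps, behavior) if m[v]]
        intact = [b for m, b in zip(lesion_maps, behavior) if not m[v]]
        if len(lesioned) >= min_group and len(intact) >= min_group:
            results[v] = welch_t(lesioned, intact)
    return results

maps = [(1,), (1,), (1,), (0,), (0,), (0,)]
res = vlsm_t(maps, [1, 2, 3, 10, 11, 12])  # lesioned patients score lower
```

Note how group membership is defined by voxel intensity, exactly as described above, so group sizes change from voxel to voxel.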
Neuropsychological Data
Visual neglect: ask patients to find each letter ‘A’ in a
cluttered display (60 possible).
Continuous measure is number of ‘A’s detected.
Performance on neuropsychological tasks is rarely normal.
– Skewed distribution: many at ceiling
[Figure: histogram of performance, skewed with many at ceiling]
30
T-test and skewed data
Skewed data causes the t-test to lose power.
If our data is skewed, the t-test may fail to detect real
differences.
Solution: Rorden et al (in
press) propose using the non-parametric Brunner-Munzel
test for skewed data.
[Figure: histogram of skewed data, and hit rate vs number of
observations for t-test and Brunner-Munzel variants,
Gauss(0,1) vs Expo(1,1)]
31
Brunner-Munzel test
Monte-Carlo simulation to compare t-test with BM test.
– Data: cancellation performance from 63 stroke patients.
– Permutations:
Randomly select 30 patients and analyze, then threshold with FDR.
Repeat 1000 times.
– Mean detection shown below: BM test is more sensitive.
Note: t-test slightly more
sensitive for normal data; BM
test slightly more sensitive for
skewed data.
32
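A minimal way to get a rank-style test with no distributional assumptions is to permute the stochastic-superiority statistic that the Brunner-Munzel test is built on. This is a sketch, not the published BM formula (scipy.stats.brunnermunzel implements the real test); the permutation scheme mirrors the Monte-Carlo approach above:

```python
import random

def superiority(x, y):
    """Estimate P(X > Y) + 0.5 * P(X = Y), the quantity the
    Brunner-Munzel test examines; 0.5 means no group difference."""
    wins = sum((a > b) + 0.5 * (a == b) for a in x for b in y)
    return wins / (len(x) * len(y))

def permutation_p(x, y, n_perm=2000, seed=0):
    """Two-sided permutation p-value for the superiority statistic."""
    rng = random.Random(seed)
    obs = abs(superiority(x, y) - 0.5)
    pooled = list(x) + list(y)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        px, py = pooled[:len(x)], pooled[len(x):]
        if abs(superiority(px, py) - 0.5) >= obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one avoids p = 0

p_sep = permutation_p([1, 2, 3, 4, 5], [10, 11, 12, 13, 14])
p_same = permutation_p([1, 2, 3], [1, 2, 3])
```

Because ranks are insensitive to skew, this style of test keeps its power on ceiling-heavy neuropsychological scores.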
Advanced VLSM statistics
Lesion volume always correlates with deficits.
– Small lesions unlikely to knock out entire functional region.
– Large lesions knock out more regions.
Therefore, previous tests cannot distinguish between
equipotentiality and localized function.
Logistic regression can covary out lesion volume: is location
still a good predictor independent of volume? (Karnath et al.,
2004).
As lesion volume correlates well with deficits, LR analysis
offers poor statistical power but strong inference.
33
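A minimal sketch of the idea with made-up data (a real analysis would use a stats package such as statsmodels' Logit): fit deficit ~ voxel damage + lesion volume by plain gradient descent and ask whether the voxel term survives with volume in the model.

```python
import math

def fit_logistic(X, y, lr=0.5, iters=4000):
    """Plain gradient-descent logistic regression.
    Returns weights [bias, w_voxel, w_volume]."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(iters):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))  # predicted probability of deficit
            err = p - yi
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj - lr * g / len(y) for wj, g in zip(w, grad)]
    return w

# Hypothetical patients: (voxel damaged?, lesion volume normalized 0-1) -> deficit?
X = [(1, 0.3), (1, 0.5), (1, 0.8), (1, 0.4),
     (0, 0.3), (0, 0.5), (0, 0.8), (0, 0.4)]
y = [1, 1, 1, 1, 0, 0, 1, 0]
w = fit_logistic(X, y)  # w[1] > 0: location predicts deficit beyond volume
```

In the real analysis the voxel coefficient would be tested against zero; as noted above, because volume itself predicts deficit, power drops but the inference is stronger.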
Problems with lesion-behavior inference
Group lesion studies may have biases
based on vasculature (Nachev)
– Lesions not random, typically middle
cerebral artery territory
Small lesions: red zone
Medium lesions: green zone
Large lesions: blue zone
– Note that VLSM of a task that relies on
frontal cortex may incidentally detect
posterior area as well.
Single patient studies may simply
sample outliers to find ‘double
dissociations’ (Plunkett)
[Figure: scatter plot of Behavior A vs Behavior B (intact to
impaired), showing how outlier sampling can yield ‘double
dissociations’]
34
Statistical thresholding
The bad news:
– Typical fMRI study uses 3x3x3mm voxels.
– Typical VLSM study uses 1x1x1mm voxels.
– x27 the number of voxels.
– Bonferroni correction leads to exceptionally
low power.
The good news:
– Lesions are large, contiguous across
individuals, therefore less within-subject
variability than fMRI.
– Ideally suited for Permutation Thresholding
(Frank et al. 1997; Kimberg et al. in press).
Solution: use either FDR or
permutation thresholds.
– My NPM provides both.
35
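The FDR option can be sketched in a few lines using the Benjamini-Hochberg step-up rule (permutation thresholding instead builds a null distribution of the maximum statistic across shuffled labels):

```python
def fdr_threshold(pvals, q=0.05):
    """Benjamini-Hochberg: find the largest sorted p(k) satisfying
    p(k) <= (k/m) * q. Voxels with p at or below the returned
    threshold survive; 0.0 means no voxel survives."""
    m = len(pvals)
    thresh = 0.0
    for k, p in enumerate(sorted(pvals), start=1):
        if p <= k / m * q:
            thresh = p
    return thresh

t = fdr_threshold([0.001, 0.01, 0.03, 0.5])
```

Unlike Bonferroni, the threshold adapts to the observed p-value distribution, which is why it retains power over the huge voxel counts described above.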
Region of Interest Analysis
Voxelwise Lesion Symptom Mapping has low power.
– Lesions large and variable.
– You will need to test many people to find effects.
You can dramatically increase power by conducting a region of
interest analysis, pooling data within a pre-defined area.
E.G. Aron et al. (2003) examine medial, orbital, inferior, middle
and superior frontal regions. Find IFG (red) strongly predicts go/no-go performance.
36
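The pooling step itself is simple: reduce each patient's lesion map to a proportion-of-ROI-damaged score, then run one test per region instead of one per voxel. A sketch with hypothetical coordinate sets (the ROI definition would really come from an atlas):

```python
def roi_damage(lesion_voxels, roi_voxels):
    """Fraction of a predefined ROI covered by the traced lesion.
    Both arguments are sets of voxel coordinates in template space."""
    return len(roi_voxels & lesion_voxels) / len(roi_voxels)

# Hypothetical 4-voxel 'IFG' ROI and a lesion touching half of it.
ifg = {(1, 1, 0), (1, 2, 0), (2, 1, 0), (2, 2, 0)}
lesion = {(1, 1, 0), (1, 2, 0), (9, 9, 9)}
score = roi_damage(lesion, ifg)  # 0.5
```

One score per patient per ROI means only a handful of comparisons to correct for, which is where the power gain comes from.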
VLSM applied to surgery
Lesion analysis typically used to assess brain function.
Technique can also refine neurosurgery, e.g. epilepsy.
Sustained uncontrolled seizures are a problem:
• Seizures lead to more seizures (vicious circle)
• Leads to brain damage and cognitive decline
• Impairs lifestyle and job options (driving car, tractor, etc.)
If drugs do not stop seizures, surgery can be used.
Surgery attempts to remove origin of seizures.
• Current surgery attempts to remove hippocampus and amygdala
• Fails to stop seizures in around 30% of patients
Can we map lesions to identify brain regions crucial to stopping
seizures?
37
Epilepsy Surgery
Current surgery attempts to
remove hippocampus.
Large anterior portion of
temporal lobe removed.
Some variability in which
regions are removed.
Bonilha et al, Arq Neuropsiquiatr 2004;62(1):15-20
38
Normalizing brains
39
Overlay: regions typically removed
[Figure: overlay of typically resected regions on axial slices
(z = -11, -27, -21, 1, 33); overlap colour bar 1–10]
40
Behavioral Measure: Engel Outcome Scale
Class I - seizure-free;
Class II - rare seizures;
Class III - worthwhile improvement, with a reduction
of more than 90% of seizures;
Class IV – no worthwhile improvement (<90%
reduction in seizure frequency).
41
Statistics
[Figure: for two voxels – A, a hippocampal voxel at (-26, -8, -22), and
B, an entorhinal cortex voxel at (-20, -8, -30) – bar charts of the % of
patients (22–88%) in each Engel Outcome Scale class (1–4), plotted
separately for surgically resected vs not surgically resected tissue]
42
Statistics – hippocampus and entorhinal cortex are crucial
[Figure: statistical map; slice coordinates -15, -30, 0, -20; colour bar peak 3.66]
43
Conclusions
Our behavioural measure (Engel outcome) is not
binomial
– People are not cured/not cured, rather some have a degree
of improvement.
– Data is ordinal, not interval, so the t-test is not appropriate.
– We use the non-parametric Brunner-Munzel test for this
data.
This work shows that the surgery can be refined to
target the entorhinal cortex.
Future work: make sure removal of entorhinal cortex
does not lead to further deficits.
44
Fisher’s Exact Test
Fisher Exact Test gives the precise
probability of an occurrence.
– Psychic claims to predict our behavior.
– To test, we will place four coins on table, two
heads and two tails.
– Psychic will tell us the order we placed the
coins, choosing two heads and two tails.
– There are 6 ways we could order the coins.
– 1/6 chance of guessing all correctly.
– We could also compute probability of getting at
least half the choices correct.
– This is a ‘permutation’ test – we can derive all
possible combinations to estimate odds.
45
Fisher’s Exact Test
Strengths:
– Nonparametric: does not assume normal
distribution.
– Precise: Reports exact probability (as the precise
number of permutations are known).
– Mathematically simple and elegant (solved using
factorials).
4! = 4*3*2*1 = 24
5! = 5*4*3*2*1 = 120
Weaknesses:
– Difficult with large numbers of trials
(factorials become huge quickly).
Use chi-squared for larger values.
– In practice, Fisher’s test is very conservative.
46
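The factorial arithmetic above is enough to implement the one-sided test directly via the hypergeometric form; for the two-heads/two-tails psychic table it reproduces the 1/6 from the earlier slide:

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher exact p for the 2x2 table [[a, b], [c, d]]:
    total probability of tables with cell 'a' at least this large,
    holding every row and column sum fixed."""
    r1, r2, c1 = a + b, c + d, a + c
    denom = comb(r1 + r2, c1)
    p = 0.0
    for k in range(a, min(r1, c1) + 1):
        if c1 - k <= r2:  # skip tables that are not feasible
            p += comb(r1, k) * comb(r2, c1 - k) / denom
    return p

p = fisher_one_sided(2, 0, 0, 2)  # psychic names all four coins correctly: 1/6
```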
Permutation Tests
Permutation tests
attempt to compute
exact probability.
Example: sum of two
dice. There is only one
combination for 2 (1+1),
two combinations for 3
(1+2,2+1), etc.
Probability for rolling 11
or higher is 3/36.
[Figure: number of combinations for each two-dice sum, 2–12]
47
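The dice example can be checked by brute-force enumeration, which is the permutation-test idea in miniature: list every equally likely outcome and count the extreme ones.

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely rolls of two dice.
rolls = [a + b for a, b in product(range(1, 7), repeat=2)]
p_11_up = Fraction(sum(s >= 11 for s in rolls), len(rolls))
# p_11_up == Fraction(1, 12), i.e. 3/36
```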
Fisher’s Exact Test
In our example, we MUST present two heads and two
tails, and the observer MUST name two as being heads
and two as being tails.
Note: The observer can only make 0, 2, or 4 mistakes – they
cannot make exactly one error or exactly three errors.
One combination of guessing all correct, 4 combinations
of 2 errors, and one combination of guessing all
incorrectly.
      “x”  “y”
  x    2    0
  y    0    2

      “x”  “y”
  x    1    1
  y    1    1

      “x”  “y”
  x    0    2
  y    2    0
48
Fisher’s Exact Test
Fisher’s exact test is precise, but assumes fixed
marginals.
– In our example, the sum of each column must equal 2, and
the sum of each row must equal 2.
      “h”  “t”
  h    2    0   2
  t    0    2   2
       2    2

      “h”  “t”
  h    1    1
  t    1    1

      “h”  “t”
  h    0    2
  t    2    0
49
Power of Fisher’s Exact Test
Most experiments do not have fixed
marginals.
For example, if we flip a coin 4 times, we
will not always observe exactly two heads
and two tails.
There are 16 combinations.
Also, observer responses are not typically
fixed. They will not always want to say
heads twice and tails twice.
The chance of a psychic precisely guessing
all four coin tosses correctly is 1/16.
Fisher’s exact test is much more
conservative.
1. hhhh
2. hhht
3. hhth
4. hthh
5. thhh
6. hhtt
7. htht
8. htth
9. thht
10. thth
11. tthh
12. httt
13. thtt
14. ttht
15. ttth
16. tttt
50
Fisher’s Exact Test
In the real world, Fisher’s test is very
conservative. When it reports a 6%
chance, the true probability is often much lower.
51
Lancaster’s mid-P
In the 1960’s, Lancaster
proposed a modification
of Fisher’s exact test.
This mid-P is in practice
much more accurate
than the exact test.
Mathematicians dislike
this test, as it is an
inelegant kludge.
Analogy:
Fisher’s approach reports the chance of rolling a
7 or higher is (6+5+4+3+2+1)/36 = 58%.
Lancaster’s approach says the percentile of 7 or
higher is ((6/2)+5+4+3+2+1)/36 = 50%.
In other words, there are 6 ways to roll a 7, and
Lancaster assumes our observation is in the
middle of these possibilities.
52
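The dice analogy is easy to verify numerically: Fisher counts the whole tail, while Lancaster counts only half the probability of the observed outcome.

```python
from itertools import product

rolls = [a + b for a, b in product(range(1, 7), repeat=2)]
n = len(rolls)

# Fisher-style tail: P(sum >= 7), counting all of the observed outcome.
fisher_tail = sum(s >= 7 for s in rolls) / n  # 21/36, about 58%

# Lancaster's mid-P: count only half the probability of exactly 7.
mid_p = (sum(s > 7 for s in rolls) + 0.5 * sum(s == 7 for s in rolls)) / n  # 18/36 = 50%
```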
Liebermeister’s measure
In 1877 Liebermeister proposed
a permutation test for situations
without fixed marginals.
– Forgotten until around 2001!
Liebermeister was an MD:
did not offer an elegant, easy
mathematical solution.
– Fisher’s factorial algorithm can solve
Liebermeister’s measure.
53
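A sketch of that factorial route, under the assumption (from Seneta and Phipps' 2001 treatment, as I recall it, so treat the exact rule as unverified) that Liebermeister's measure equals the one-sided Fisher p after incrementing the two concordant cells:

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher exact p for [[a, b], [c, d]] with fixed marginals."""
    r1, r2, c1 = a + b, c + d, a + c
    denom = comb(r1 + r2, c1)
    return sum(comb(r1, k) * comb(r2, c1 - k) / denom
               for k in range(a, min(r1, c1) + 1) if c1 - k <= r2)

def liebermeister(a, b, c, d):
    """ASSUMED shortcut: Fisher's formula applied to the table with the
    concordant cells (a and d) incremented by one."""
    return fisher_one_sided(a + 1, b, c, d + 1)

p_fisher = fisher_one_sided(2, 0, 0, 2)  # 1/6
p_lieb = liebermeister(2, 0, 0, 2)       # smaller, i.e. less conservative
```

On this toy table the Liebermeister value is well below the Fisher value, consistent with the claim that Fisher is too conservative.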
Binomial Tests
Binomial Tests: Fisher is
too conservative. Mid-P
and Liebermeister are
more accurate.
54