Catastrophic structural changes
Chris Rorden
Voxelwise Lesion Symptom Mapping
– Motivation: strengths and limitations.
– Should we examine acute or chronic injury?
– Visualizing injury.
– Mapping brain injury.
– Normalizing lesion maps.
– Lesion mapping statistics.
“George Miller coined the term ‘cognitive neuroscience’…we
already knew that neuropsychology was not what we had
in mind…the bankruptcy and intellectual impoverishment
of that idea seemed self evident.”
-Michael S. Gazzaniga, 2000
Lesion symptom mapping
Lesion symptom mapping infers brain function
by observing consequences of brain injury.
Historically, one of the most influential tools in
understanding brain function
– Language: Broca’s and Wernicke’s areas
– Memory: anterograde amnesia
– Vision: prosopagnosia, achromatopsia
– Emotions, Motor Control, etc.
Lesion mapping
Perhaps lesion mapping was historically important
because it was the only tool.
New tools address some of the weaknesses.
Disadvantages of lesion mapping:
– Poor spatial precision: lesions are messy: location and
extent influenced by vasculature, not functional areas.
– Poor temporal precision: lesions are permanent.
Sequence of information processing difficult to assess.
– Dilemma of acute vs chronic lesion mapping.
Acute vs chronic lesion mapping
Acute lesion mapping:
– Initially after a stroke, one sees widespread
dysfunction. Intact areas are disrupted as they
depend on damaged areas.
– Chronically, the brain is plastic. So it is difficult to
infer what a brain region used to do.
Ideally, we would examine both acute and
chronic effects.
– Acute injury is more clinically relevant.
– Chronic deficits: more stable, and identify functions that cannot be compensated.
Lesion mapping
Advantages:
– Stronger inference:
Activation techniques (fMRI, ERP): is area involved with task?
Disruption techniques (TMS, lesions): is area required by task?
– Understand function of tightly coupled network.
Activation techniques are like listening to a whole orchestra: hard to identify the differential contribution of individual musicians.
• Example: highly connected visual attention network tends to be activated in concert, even though lesions suggest different functions.
Lesion method: how does the music change if an individual musician stops playing?
– Clinically relevant:
can we predict recovery or best treatment?
Lesion visualization
Lesion-symptom mapping requires us to
identify which bit of the brain is injured.
Different MRI modalities show different aspects
of injury.
The quality of the MRI scans dramatically
influences the analysis.
CT scans
Crucial clinical tool
– Detect acute hemorrhage
– Can be conducted when MRI contraindicated
Limited research potential
– Exposes individual to radiation
– Difficult to collect control data
– Typically very thick slices, impossible to normalize
– Little contrast between gray and white matter
MRI scans
MRI scans are very flexible
Various protocols can contrast
different properties
– T1: Excellent spatial resolution
– T2/FLAIR: Pathology
– DWI: White matter, acute stroke
– PWI: Acute perfusion deficits
– T2*: ‘fMRI’ ~ brain function
Does not expose participant to
radiation
Conventional MRI scans
T1 (anatomical): fast to acquire, excellent structural
detail (e.g. white and gray matter).
T2 (pathological): slower to acquire, therefore usually
lower resolution than T1. Excellent for finding lesions.
[Figure: example T1 and T2 scans]
Lesion mapping: T1 vs T2
T1 scans offer good spatial resolution.
T2 scans better for identifying extent of injury, but
poor spatial resolution.
Solutions:
1. Acquire chronic T1 (>8 weeks)
2. Acquire both T1 and T2, use T2 to guide mapping on T1.
3. Acquire T2, map on normalized iconic brain (requires
expert lesion mapper).
4. Acquire a high resolution T2 image, use for both mapping and normalization (e.g. 1x1x1mm T2, ~9 min). Requires latest generation MRI.
Note: Many clinicians like FLAIR as it attenuates
CSF. Lesion signal similar to T2. Normalization
tricky (thick slices, no standard template).
[Figure: example T1, T2, and FLAIR scans]
Imaging acute stroke
T1/T2 MRI and x-rays cannot visualize hyperacute ischemic strokes.
– Acute: Subtle low signal on T1, often difficult to see, and high signal (hyperintense) on spin density-, T2-, and/or proton density-weighted images starting 8 h after onset. Mass effect maximal at 24 h, sometimes starting 2 h after onset.
– Subacute (1 wk or older): Low signal on T1, high
signal on T2-weighted images. Follows vascular
distribution. Revascularization and blood-brain
barrier breakdown may cause enhancement with
contrast agents.
– Old (several weeks to years): Low signal on T1,
high signal on T2. Mass effect disappears after 1
mo. Loss of tissue with large infarcts.
Parenchymal enhancement fades after several
months.
www.strokecenter.org/education/ct-mri_criteria/
[Figure: CT and T2 scans, acute and +3 days; images from www.med.harvard.edu/AANLIB/]
Imaging Hyperacute Stroke
T1/T2 scans do not show acute injury.
Diffusion and Perfusion weighted scans show
acute injury:
– Diffusion images show permanent injury. Perhaps
good predictor of eventual recovery.
– Perfusion scans show functional injury. Best correlate
of acute behavior.
– Difference between DWI and PWI is tissue that might
survive.
Diaschisis: regions connected to damaged areas show acute
hypoperfusion and dysfunction.
Hypoperfused regions may have enough collateral blood
supply to survive but not function correctly (misery perfusion).
[Figure: T2 vs diffusion-weighted (DWI) images in acute stroke]
Perfusion imaging
Allows us to measure perfusion
– Static images can detect stenosis and
aneurysms (MRA)
– Dynamic images can measure perfusion (PWI)
Measure latency – acute latency appears to be strong
predictor of functional deficits.
Measure volume
– Perfusion imaging uses either Gadolinium or
blood as contrast agent.
Gd offers strong signal. However, only a few boluses can be used, and a medical team is required in case of (very rare) anaphylaxis.
Arterial Spin Labelling can be conducted continuously
(CASL). Good CASL requires good hardware.
DTI in stroke
Diffusion tensor imaging is an extension of Diffusion Weighted
Imaging.
DTI allows us to examine integrity and direction of fiber tracts.
This will allow us to examine disconnection syndromes (see Catani).
Analysis of DTI is still in its infancy.
[Figure: DTI tractography, stroke vs healthy brain]
fMRI in stroke
fMRI analysis of stroke is difficult.
Hemodynamic response often disrupted.
– Misery perfusion: system always compensating for reduced blood flow, so no dynamic ability to increase.
– Luxury perfusion: destroyed tissue no longer requires
blood, so regulation not required for surviving tissue.
Summary
Modality of scanning depends on age of lesion.
Hyperacute imaging will require PWI/DWI.
Older injuries seen on T1 and T2.
Different modalities provide different information: can we
combine information across modalities?
Our analysis should be based on individuals with similar
delay between injury and observation.
Thought experiment
What brain injury leads to visual field defects?
Mapping Lesions
With my software it is easy to trace the injured area.
We can create an overlay plot of the damaged region.
For example: here are the lesion maps for 36
people with visual field defects:
The problem with overlay plots
Overlay plots are misleading:
– Highlight areas involved with task (good)
– Highlight areas commonly damaged (bad)
Brain damage is not random: some brain areas
more vulnerable. Overlay plots highlight these
areas of common damage.
Solution: collect data from patients with similar
injury but without target deficit:
Value of control data
Solution: collect data from patients with similar
injury but without target deficit:
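To make this concrete, here is a minimal sketch of overlay and subtraction plots, assuming each lesion has already been traced as a binary 3D NumPy array (the file lists `deficit_files` and `control_files` are hypothetical):

```python
import numpy as np

# Hypothetical inputs: binary lesion maps (1 = damaged voxel), one per patient
deficit_maps = np.array([np.load(f) for f in deficit_files])   # with deficit
control_maps = np.array([np.load(f) for f in control_files])   # without deficit

# Overlay plot: at each voxel, count how many patients have damage there
deficit_overlay = deficit_maps.sum(axis=0)
control_overlay = control_maps.sum(axis=0)

# Subtraction plot: damage frequency (%) in the deficit group minus the
# control group, so areas of common-but-irrelevant damage cancel out
subtraction = (100.0 * deficit_overlay / len(deficit_maps)
               - 100.0 * control_overlay / len(control_maps))
```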
Statistical plots
We can use statistics to identify areas that
reliably predict deficit
e.g., damage that results in visual field cuts.
Lesion analysis
fMRI – many low resolution volumes per individual. Typical stages (SPM):
1. Slice time correct
2. Motion correct
3. Normalize
4. Smooth (spatial, temporal)
5. Statistics
Lesions – one high quality scan per individual. Typical stages:
1. Map lesion
2. Normalize
3. Statistics
Map lesion
We need to map the
region of brain injury.
We can use MRIcron to
draw the location of the
lesion.
Note that our high-resolution T1 scans may not show the full extent of injury (must refer to other scans).
Normalization
Normalization adjusts
size and shape of brain.
Aligns brains to
‘template’ image in
stereotaxic space.
Allows us to compare
brains between
individuals.
Linear Transforms
Translation
Zoom
Rotation
Shear
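As a worked illustration (the 2D case, for brevity; 3D normalization uses 4x4 matrices with 12 parameters), each component is one matrix, and a full linear transform is their product:

$$
A =
\underbrace{\begin{bmatrix} 1 & 0 & t_x \\ 0 & 1 & t_y \\ 0 & 0 & 1 \end{bmatrix}}_{\text{translation}}
\underbrace{\begin{bmatrix} z_x & 0 & 0 \\ 0 & z_y & 0 \\ 0 & 0 & 1 \end{bmatrix}}_{\text{zoom}}
\underbrace{\begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}}_{\text{rotation}}
\underbrace{\begin{bmatrix} 1 & s & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}}_{\text{shear}}
$$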
Nonlinear Basis Functions
Basis functions can have local influence.
Allow better fit of scans.
Can cause distortion in some cases (e.g. stroke).
Lesions disrupt normalization
Normalization works by adjusting the image orientation until
‘difference’ with template is minimized
Typically, variance between image and template is used as a measure of difference [variance = Σ (image − template)²]
However, region of lesion appears different in image and
template
Therefore, normalization will attempt to warp lesioned region
[Figure: image, template, and voxelwise variance]
Lesion masked normalization
How to normalize
images with lesions?
Only calculate
normalization on healthy
tissue – ignore regions
of injury.
[Figure: normalization of a healthy brain, a lesioned brain without masking, and a lesioned brain with lesion masking]
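A minimal sketch of the masked cost function, assuming the image, template, and binary lesion mask are already-resampled NumPy arrays (illustrative only, not the SPM implementation):

```python
import numpy as np

def normalization_cost(image, template, lesion_mask=None):
    """Sum of squared differences between image and template.
    With a lesion mask (1 = lesioned voxel), lesioned voxels are
    excluded so the damaged region cannot drive the warp."""
    squared_diff = (image - template) ** 2
    if lesion_mask is not None:
        squared_diff = squared_diff[lesion_mask == 0]  # healthy tissue only
    return squared_diff.sum()
```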
Lesion analysis
Two classes of analysis:
– Binomial analysis: if behavior falls into two mutually
exclusive groups (e.g. Broca’s Aphasia, No Broca’s
Aphasia).
Traditional test: Fisher-exact or Chi-squared test
Alternative test: Liebermeister quasi-exact test
– Continuous analysis: if data is not binomial. (e.g.
number of words starting with ‘b’ spontaneously
reported in two minutes).
Traditional test: t-test
Alternative test: Brunner-Munzel test.
Binomial Data
Visual neglect: ask patients to find each letter ‘A’ in a cluttered display (60 possible).
People who detect <55 are considered to have a deficit.
[Figure: distribution of letters detected, deficit vs control patients]
Statistics
Our goal is to see if brain
anatomy predicts behavior.
We collect scans and behavioral data from many stroke patients.
Consider 24 patients – 12 with a cancellation deficit and 12 without.
Statistics will identify regions that predict the deficit.
Voxelwise statistics
MRIcron tutorial
We will compute statistics at
every voxel of the brain.
The statistical test will differ if
the behavioral data is binomial
(deficit present or absent) or
continuous (performance is
graded continuum).
Binomial Tests
Traditional tests: The
Fisher and Chi-squared
tend to be very
conservative (they
assume fixed marginals).
The Liebermeister measure offers near-optimal performance.
Rorden et al. (in press)
T-test lesion analysis.
A t-test requires two groups and one continuous variable.
The VLSM t-test is orthogonal to t-tests used for fMRI/VBM:
– fMRI/VBM t-tests:
Deficit defines two groups.
Voxel intensity provides continuous variable.
– VLSM
Voxel intensity (lesion/no lesion) defines two groups.
Behavioral performance provides continuous variable.
Note VLSM group size varies from voxel to voxel.
Statistical tests provide optimal power when both groups have the same number of observations (balanced).
– Therefore, VLSM power fluctuates across voxels
– We cannot make inferences about voxels that are rarely damaged or always damaged (also true for binomial tests).
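A minimal sketch of the voxelwise logic, assuming a `lesions` matrix of shape (patients, voxels) with 1 = damaged, and one behavioral score per patient (illustrative, not the NPM implementation):

```python
import numpy as np
from scipy import stats

def vlsm_ttest(lesions, scores, min_per_group=5):
    """At each voxel, compare behavior of lesioned vs spared patients."""
    n_voxels = lesions.shape[1]
    tvals = np.full(n_voxels, np.nan)
    for v in range(n_voxels):
        damaged = scores[lesions[:, v] == 1]
        spared = scores[lesions[:, v] == 0]
        # Skip voxels that are rarely (or almost always) damaged:
        # the groups are too unbalanced for a meaningful test
        if min(len(damaged), len(spared)) < min_per_group:
            continue
        tvals[v] = stats.ttest_ind(spared, damaged).statistic
    return tvals
```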
Neuropsychological Data
Visual neglect: ask patients to find each letter ‘A’ in a cluttered display (60 possible).
Continuous measure is number of ‘A’s detected.
Performance on neuropsychological tasks is rarely normally distributed.
– Skewed distribution: many patients at ceiling.
[Figure: frequency histogram of performance, skewed with most patients near ceiling]
T-test and skewed data
Skewed data causes the t-test to lose power.
If our data is skewed, the t-test may fail to detect real differences.
Solution: Rorden et al. (in press) propose using the non-parametric Brunner-Munzel test for skewed data.
[Figure: “Power Gauss(0,1) vs Expo(1,1)” – hit rate as a function of observations (12-60) for t-test and Brunner-Munzel variants (t5, t1, tP5, tP1, bm5, bm1, bmP5, bmP1), with an inset frequency histogram of skewed scores]
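SciPy ships an implementation of the Brunner-Munzel test; a minimal sketch comparing it with the t-test on hypothetical ceiling-skewed scores:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical scores out of 60: most patients near ceiling, long lower tail
spared = np.clip(60 - rng.exponential(scale=1.0, size=30), 0, 60)
damaged = np.clip(60 - rng.exponential(scale=5.0, size=30), 0, 60)

print("t-test p:", stats.ttest_ind(spared, damaged).pvalue)
print("Brunner-Munzel p:", stats.brunnermunzel(spared, damaged).pvalue)
```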
Brunner-Munzel test
Monte-Carlo simulation to compare t-test with BM test.
– Data: cancellation performance from 63 stroke patients.
– Permutations:
Randomly select 30 patients and analyze, then threshold with FDR.
Repeat 1000 times.
– Mean detection shown below: BM test is more sensitive.
Note: t-test slightly more
sensitive for normal data; BM
test slightly more sensitive for
skewed data.
Advanced VLSM statistics
Lesion volume always correlates with deficits.
– Small lesions unlikely to knock out entire functional region.
– Large lesions knock out more regions.
Therefore, previous tests cannot distinguish between equipotentiality and localized function.
Logistic regression can covary out lesion volume: is location
still a good predictor independent of volume? (Karnath et al.,
2004).
As lesion volume correlates well with deficits, LR analysis
offers poor statistical power but strong inference.
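A minimal sketch of the idea (illustrative, not Karnath et al.’s exact model), assuming a binary `deficit` vector, per-patient `lesion_volume`, and a binary damage indicator for one voxel, using statsmodels:

```python
import numpy as np
import statsmodels.api as sm

def voxel_logistic_p(deficit, lesion_volume, voxel_damaged):
    """Does damage at this voxel predict the deficit after
    covarying out overall lesion volume?"""
    X = sm.add_constant(np.column_stack([lesion_volume, voxel_damaged]))
    fit = sm.Logit(deficit, X).fit(disp=0)
    return fit.pvalues[2]  # p-value for the voxel-damage term
```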
Statistical thresholding
The bad news:
– Typical fMRI study uses 3x3x3mm voxels.
– Typical VLSM study uses 1x1x1mm voxels.
– That is 27x the number of voxels.
– Bonferroni correction leads to exceptionally low power.
The good news:
– Lesions are large, contiguous across
individuals, therefore less within-subject
variability than fMRI.
– Ideally suited for Permutation Thresholding
(Frank et al. 1997; Kimberg et al. in press).
Solution: use either FDR or
permutation thresholds.
– My NPM provides both.
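A minimal sketch of maximum-statistic permutation thresholding, reusing the hypothetical `vlsm_ttest` helper sketched earlier (illustrative, not the NPM code):

```python
import numpy as np

def permutation_threshold(lesions, scores, n_perm=1000, alpha=0.05, seed=0):
    """Familywise-error threshold: shuffle the behavioral scores,
    record the maximum t across voxels each time, and take the
    (1 - alpha) quantile of those maxima."""
    rng = np.random.default_rng(seed)
    max_t = np.empty(n_perm)
    for i in range(n_perm):
        shuffled = rng.permutation(scores)
        max_t[i] = np.nanmax(vlsm_ttest(lesions, shuffled))
    return np.quantile(max_t, 1.0 - alpha)
```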
Region of Interest Analysis
Voxelwise Lesion Symptom Mapping has low power.
– Lesions large and variable.
– You will need to test many people to find effects.
You can dramatically increase power by conducting a region of
interest analysis, pooling data within a pre-defined area.
e.g., Aron et al. (2003) examined medial, orbital, inferior, middle, and superior frontal regions, finding that the IFG (red) strongly predicts go/no-go performance.
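A minimal sketch of ROI pooling, assuming a binary ROI mask and binary lesion maps (the variable names are hypothetical): score each patient by the proportion of the ROI that is damaged, then run a single test per region.

```python
import numpy as np
from scipy import stats

def roi_damage(lesion_map, roi_mask):
    """Proportion of the region of interest that is lesioned."""
    return lesion_map[roi_mask == 1].mean()

# One test per ROI instead of thousands of voxelwise tests
damage = np.array([roi_damage(m, ifg_mask) for m in lesion_maps])
rho, p = stats.spearmanr(damage, behavior_scores)
```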
VLSM applied to surgery
Lesion analysis typically used to assess brain function.
Technique can also refine neurosurgery, e.g. epilepsy.
Sustained uncontrolled seizures are a problem:
• Seizures lead to more seizures (vicious circle)
• Leads to brain damage and cognitive decline
• Impairs lifestyle and job options (driving car, tractor, etc.)
If drugs do not stop seizures, surgery can be used.
Surgery attempts to remove origin of seizures.
• Current surgery attempts to remove hippocampus and amygdala
• Fails to stop seizures in around 30% of patients
Can we map lesions to identify brain regions crucial to stopping
seizures?
Epilepsy Surgery
Current surgery attempts to remove the hippocampus.
Large anterior portion of
temporal lobe removed.
Some variability in which
regions are removed.
Bonilha et al, Arq Neuropsiquiatr 2004;62(1):15-20
Normalizing brains
Overlay: regions typically removed
[Figure: overlay of regions typically removed, shown on axial slices]
Behavioral Measure: Engel Outcome Scale
Class I - seizure-free;
Class II - rare seizures;
Class III - worthwhile improvement, with a reduction
of more than 90% of seizures;
Class IV – no worthwhile improvement (<90%
reduction in seizure frequency).
Statistics
[Figure: percentage of patients at each Engel outcome class (1-4), for tissue surgically resected vs not surgically resected, at (A) a hippocampal voxel (-26,-8,-22) and (B) an entorhinal cortex voxel (-20,-8,-30)]
Statistics – hippocampus and entorhinal cortex are crucial.
[Figure: statistical map overlaid on axial slices]
Conclusions
Our behavioural measure (Engel outcome) is not
binomial
– People are not cured/not cured, rather some have a degree
of improvement.
– Data is ordinal, not interval, so t-test is not appropriate.
– We use the non-parametric Brunner-Munzel test for this
data.
This work shows that the surgery can be refined to
target the entorhinal cortex.
Future work: make sure removal of entorhinal cortex
does not lead to further deficits.
Fisher’s Exact Test
Fisher Exact Test gives the precise
probability of an occurrence.
– Psychic claims to predict our behavior.
– To test, we will place four coins on the table, two heads and two tails.
– The psychic will tell us the order we placed the coins, choosing two heads and two tails.
– There are 6 ways we could order the coins.
– 1/6 chance of guessing all correctly.
– We could also compute probability of getting at
least half your choices correct.
– This is a ‘permutation’ test – we can derive all
possible combinations to estimate odds.
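The enumeration is easy to verify in code; a minimal sketch, assuming the psychic must also answer with exactly two heads and two tails:

```python
from itertools import permutations

# All distinct orders of two heads and two tails
orders = set(permutations("HHTT"))
print(len(orders))        # 6 possible orders
print(1 / len(orders))    # 1/6: chance of naming the exact order

# Probability of getting at least half the coins right: because both
# sides contain two H and two T, mistakes come in pairs (0, 2, or 4)
truth = tuple("HHTT")
hits = [sum(g == t for g, t in zip(guess, truth)) for guess in orders]
print(sum(h >= 2 for h in hits) / len(orders))  # 5/6
```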
Fisher’s Exact Test
Strengths:
– Nonparametric: does not assume normal
distribution.
– Precise: Reports exact probability (as the precise number of permutations is known).
– Mathematically simple and elegant (solved using
factorials).
4! = 4*3*2*1 = 24
5! = 5*4*3*2*1 = 120
Weaknesses:
– Difficult with large numbers of trials (factorials become huge quickly); use chi-squared for larger values.
– In practice, Fisher’s test is very conservative.
Permutation Tests
Permutation tests
attempt to compute
exact probability.
Example: sum of two
dice. There is only one
combination for 2 (1+1),
two combinations for 3
(1+2,2+1), etc.
Probability for rolling 11
or higher is 3/36.
[Figure: number of combinations for each two-dice sum, 2 through 12]
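The same tail probability can be checked by brute force:

```python
from itertools import product

# Enumerate all 36 equally likely outcomes of two dice
sums = [a + b for a, b in product(range(1, 7), repeat=2)]
print(sum(s >= 11 for s in sums) / len(sums))  # 3/36 ~ 0.083
```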
Fisher’s Exact Test
In our example, we MUST present two heads and two
tails, and the observer MUST name two as being heads
and two as being tails.
Note: The observer can only make 0, 2, or 4 mistakes – they cannot make exactly one error or exactly three errors.
One combination of guessing all correct, 4 combinations
of 2 errors, and one combination of guessing all
incorrectly.
All correct:
     “x”  “y”
x     2    0
y     0    2

Two errors:
     “x”  “y”
x     1    1
y     1    1

All incorrect:
     “x”  “y”
x     0    2
y     2    0
Fisher’s Exact Test
Fisher’s exact test is precise, but assumes fixed
marginals.
– In our example, the sum of each column must equal 2, and
the sum of each row must equal 2.
     “h”  “t”
h     2    0   | 2
t     0    2   | 2
      2    2

     “h”  “t”
h     1    1
t     1    1

     “h”  “t”
h     0    2
t     2    0
Power of Fisher’s Exact Test
Most experiments do not have fixed
marginals.
For example, if we flip a coin 4 times, we
will not always observe exactly two heads
and two tails.
There are 16 combinations.
Also, observer responses are not typically
fixed. They will not always want to say
heads twice and tails twice.
The chance of a psychic precisely guessing
all four coin tosses correctly is 1/16.
Fisher’s exact test is much more
conservative.
The 16 equally likely sequences: hhhh, hhht, hhth, hthh, thhh, hhtt, htht, htth, thht, thth, tthh, httt, thtt, ttht, ttth, tttt.
Fisher’s Exact Test
In the real world, Fisher’s test is very conservative. When it reports a 6% chance, the true probability is often much lower (i.e., more significant).
Lancaster’s mid-P
In the 1960s, Lancaster
proposed a modification
of Fisher’s exact test.
This mid-P is in practice
much more accurate
than the exact test.
Mathematicians dislike this test, as it is an inelegant kludge.
Analogy: Fisher’s approach reports the chance of rolling a 7 or higher as (6+5+4+3+2+1)/36 = 58%. Lancaster’s approach says the percentile of 7 or higher is ((6/2)+5+4+3+2+1)/36 = 50%. In other words, there are 6 ways to roll a 7, and Lancaster assumes our observation is in the middle of these possibilities.
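The dice analogy in code, a minimal sketch:

```python
from itertools import product

sums = [a + b for a, b in product(range(1, 7), repeat=2)]
ways_to_roll_7 = sums.count(7)              # 6 combinations
strictly_higher = sum(s > 7 for s in sums)  # 15 combinations (8-12)

fisher_p = (ways_to_roll_7 + strictly_higher) / 36   # 21/36 ~ 58%
mid_p = (ways_to_roll_7 / 2 + strictly_higher) / 36  # 18/36 = 50%
print(fisher_p, mid_p)
```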
Liebermeister’s measure
In 1877 Liebermeister proposed a permutation test for situations without fixed marginals.
– Forgotten until around 2001!
Liebermeister was an MD and did not offer an elegant, easy mathematical solution.
– Fisher’s factorial algorithm can solve Liebermeister’s measure.
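One published formulation (Seneta and Phipps, 2001) computes Liebermeister’s measure as a one-sided Fisher p-value after incrementing the concordant cells; a minimal sketch under that assumption, for a 2x2 table [[a, b], [c, d]]:

```python
from scipy.stats import fisher_exact

def liebermeister(a, b, c, d):
    """Liebermeister's quasi-exact measure, computed (per Seneta and
    Phipps, 2001) as the one-sided Fisher p-value for the table with
    the concordant cells a and d incremented by one."""
    _, p = fisher_exact([[a + 1, b], [c, d + 1]], alternative="greater")
    return p
```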
Binomial Tests
Binomial tests: Fisher is too conservative. Mid-P and Liebermeister are more accurate.