Week 11: The Role of Noradrenaline in Learning and Plasticity – Part 1.
Gabor Stefanics (TNU)
Neuropharmacology
(Computational Psychiatry Seminar: Spring 2014)
Outline
-Noradrenaline (norepinephrine)
-NE activity and effects
-Involvement in cognition
-Theory of locus coeruleus noradrenaline function
-Computational model of decision making
-Twenty-Five Lessons from Computational Neuromodulation
The noradrenergic system
NE is a monoamine: an organic compound that has a catechol group (a benzene ring with two hydroxyl side groups) and a side-chain amine (a basic nitrogen atom with a lone pair).
[Figure: chemical structures of dopamine and norepinephrine]
Neurochemical Modulation of Response
Inhibition and Probabilistic Learning in Humans
Chamberlain et al. investigated the differential involvement of the NA and 5-HT transmitter systems in response inhibition and probabilistic learning in humans, using
1) the selective NA reuptake inhibitor (SNRI) atomoxetine and
2) the selective 5-HT reuptake inhibitor (SSRI) citalopram.
These agents are among the most selective inhibitors of the brain NA and 5-HT reuptake transporters available for human use.
Chamberlain et al. (2006) Science
Neurochemical Modulation of Response
Inhibition and Probabilistic Learning in Humans
Stop-signal task
Subjects respond rapidly to left- or right-facing arrows on screen with corresponding motor responses,
and they attempt to inhibit responses when an auditory stop-signal sounds. Over the course of the task,
the time between stimulus onset and occurrence of the stop-signal is varied by means of a tracking
algorithm. This permits calculation of the stop-signal reaction time (SSRT), an estimate of the time taken to internally suppress prepotent motor responses.
→ sensitive to NA manipulation (atomoxetine, not citalopram)
Chamberlain et al. (2006) Science
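To make the tracking procedure concrete, here is a minimal Python sketch of a 1-up/1-down staircase on the stop-signal delay (SSD) and a mean-method SSRT estimate. The stop-trial rate, the race-model assumption and all parameter values are illustrative choices, not details taken from Chamberlain et al. (2006).

```python
import random

def run_stop_signal_block(n_trials=200, go_rt_mean=0.45, go_rt_sd=0.05,
                          ssrt_true=0.25, ssd_start=0.20, step=0.05):
    """Simulate a stop-signal block with a 1-up/1-down staircase on the
    stop-signal delay (SSD), then estimate SSRT with the mean method.
    All parameters are illustrative, not those of Chamberlain et al. (2006)."""
    ssd = ssd_start
    ssds, go_rts = [], []
    for t in range(n_trials):
        go_rt = random.gauss(go_rt_mean, go_rt_sd)   # latency of the prepotent go response
        if t % 4 == 0:                               # every 4th trial carries a stop signal (~25%)
            ssds.append(ssd)
            stopped = ssd + ssrt_true < go_rt        # race model: stop process finishes before go
            # staircase: make stopping harder after success, easier after failure,
            # so that inhibition converges to ~50% success
            ssd = ssd + step if stopped else max(0.0, ssd - step)
        else:
            go_rts.append(go_rt)
    # mean method: SSRT ~ mean go RT minus mean SSD at ~50% stopping
    return sum(go_rts) / len(go_rts) - sum(ssds) / len(ssds)

print(f"estimated SSRT ~ {run_stop_signal_block():.3f} s")
```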
Stop-signal inhibition disrupted by damage to
right inferior frontal gyrus in humans
Aron et al. (2003) Nat Neurosci
Neurochemical Modulation of Response
Inhibition and Probabilistic Learning in Humans
Probabilistic learning task
Volunteers make a two-alternative forced choice between two
stimuli (one red, one green) on each trial. The ‘‘correct’’ stimulus
(always the first stimulus touched) receives a 4:1 ratio of
positive:negative feedback, and the opposite ratio is given for the
‘‘incorrect’’ stimulus. Feedback is provided in the form of ‘‘CORRECT’’
or ‘‘INCORRECT’’ appearing on screen after each choice. Ability to
acquire the stimulus-reward association on the basis of this degraded
feedback is assessed by the number of errors made before reaching
criterion, defined as eight consecutive correct responses to the
maximally rewarded stimulus. After 40 trials (stage 1), the
contingencies reverse for the subsequent 40 trials (stage 2). The
detrimental effect of misleading negative feedback on learning is
assessed by means of an overall ‘‘feedback sensitivity’’ score. This is
defined as the overall likelihood that the volunteer inappropriately
switched to choose the incorrect stimulus after misleadingly being
informed that his or her correct response on the previous trial was
not correct.
→ sensitive to 5-HT manipulation (citalopram, not atomoxetine)
Chamberlain et al. (2006) Science
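To make the two outcome measures concrete, here is a minimal Python simulation of the task structure described above (4:1 probabilistic feedback, reversal after 40 trials), paired with a simple Q-learning/softmax agent that reports errors to criterion in stage 1 and the feedback-sensitivity score. The agent, its parameters and the learning rule are illustrative assumptions, not the analysis used by Chamberlain et al. (2006).

```python
import math, random

def run_task(alpha=0.3, beta=5.0, n_stage=40, p_valid=0.8, seed=0):
    """Simulate the probabilistic learning/reversal task with a simple
    Q-learning + softmax agent. Agent and parameters are illustrative only."""
    rng = random.Random(seed)
    q = [0.0, 0.0]                  # value estimates for the two stimuli
    correct = 0                     # index of the currently 'correct' stimulus
    errors_to_criterion, streak, criterion_met = 0, 0, False
    switches, misleading = 0, 0
    prev_choice, prev_misled = None, False
    for t in range(2 * n_stage):
        if t == n_stage:
            correct = 1 - correct                    # stage 2: contingency reversal
        p1 = 1.0 / (1.0 + math.exp(-beta * (q[1] - q[0])))
        choice = 1 if rng.random() < p1 else 0
        valid = rng.random() < p_valid               # 4:1 valid:misleading feedback
        positive = (choice == correct) == valid
        if prev_misled:                              # previous feedback: misleading negative
            misleading += 1
            switches += (choice != prev_choice)
        if t < n_stage and not criterion_met:        # stage-1 errors before criterion
            if choice == correct:
                streak += 1
                criterion_met = streak >= 8          # eight consecutive correct choices
            else:
                streak = 0
                errors_to_criterion += 1
        prev_choice, prev_misled = choice, (choice == correct) and not positive
        q[choice] += alpha * ((1.0 if positive else 0.0) - q[choice])
    feedback_sensitivity = switches / max(1, misleading)
    return errors_to_criterion, feedback_sensitivity

print(run_task())
```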
NE activity and effects
Phasic norepinephrine: A neural interrupt signal for unexpected events.
A) Rats solved a sequential decision problem with two sets of cues (spatial
and visual). When the relevant cues were switched after a few days of
learning (from spatial to visual), rats with pharmacologically boosted NE
(idazoxan) learned to use the alternative set of cues faster than the
controls. Adapted from Devauges & Sara (1990).
B) Monkeys solved a vigilance task in which they had to react to rare targets
and ignore common distractors. The trace shows the activity of a single
norepinephrine neuron around the time of a reversal between target and
distractor cues (vertical dashed line). The tonic activity is elevated for a
considerable period. Adapted from Aston-Jones et al. (1997).
C) In the same vigilance task, single NE cells are activated on a phasic
time-scale stimulus locked (vertical line) to the target (upper plot) and not
the distractor (lower plot). Adapted from Rajkowski et al. (2004).
D) Again in the same task, the average responses of a large number of
norepinephrine cells (over a total of 41,454 trials) stimulus locked (vertical
line) to targets or distractors, sorted by the nature and rectitude of the
response. Adapted from Rajkowski et al. (2004).
E) In a GO/NO-GO olfactory discrimination task for rats, single units are
activated by the target odor (and not by the distractor odor), but are
temporally much more tightly locked to the response (right) than the
stimulus (left). Trials are ordered according to the time between stimulus and
response, with the red and blue marks showing the time of the response and
stimulus respectively. From Bouret & Sara (2004).
F) Correlation between the gross fluctuations in the tonic activity of a single
NE neuron (upper) and performance in the simple version of the vigilance
task (lower, measured by false alarm rate).
Dayan & Yu (2006) Network: Computation in Neural Systems
Norepinephrine.
Involvement in cognition
Early suggestions that the noradrenergic system might be involved in learning and memory have been followed by increasingly complex theories concerning the functional role of this system, beginning with vigilance, attention and memory processes, and culminating in complex models concerning prediction errors, decision making and unexpected uncertainty.
Bouret & Sara (2005) TINS
NE and psychopathology
Relationship between depression symptoms and some noradrenergic projections from the locus coeruleus
Multi-stage model of attention-deficit
hyperactivity disorder (ADHD)
The posterior attention system receives dense NE innervation from the LC. NE inhibits the spontaneous discharge of neurons, which enhances the signal-to-noise ratio of target cells and primes the posterior system to orient to and engage novel stimuli. Attentional
function then shifts to the anterior executive system, which consists of the prefrontal
cortex (PFC) and the anterior cingulate gyrus. The responsiveness of the PFC and anterior
cingulate to the incoming signals is modulated primarily by dopaminergic (DA) input from
the ventral tegmental area in the midbrain. Ascending DA fibers stimulate postsynaptic D1
receptors on pyramidal neurons in the PFC and anterior cingulate, which in turn, facilitate
excitatory NMDA receptor inputs from the posterior attention system. Thus, DA selectively
gates excitatory inputs to the PFC and cingulate, thereby reducing irrelevant neuronal
activity during the performance of executive functions. Inability of NE to prime the
posterior attention system could account for the attentional problems seen in children
with ADHD, while the loss of DA's ability to gate inputs to the anterior executive system
may be linked to the deficit in executive functions characteristic of ADHD.
Himelstein et al. (2000) Front Bioscience
Network reset: a simplified overarching theory of
locus coeruleus noradrenaline function
Current state of the art
A new, simplified and overarching theory of
noradrenaline function is inspired by an invertebrate
model: neuromodulators in crustacea abruptly
interrupt activity in neural networks and reorganize
the elements into new functional networks
determining the behavioral output.
Analogously in mammals, phasic activation of
noradrenergic neurons of the locus coeruleus in time
with cognitive shifts could provoke or facilitate
dynamic reorganization of target neural networks,
permitting rapid behavioral adaptation to changing
environmental imperatives.
Bouret & Sara (2005) TINS
A behavioral state is characterized by a given functional
network that could be defined by a specific
spatiotemporal pattern of neuronal activity, here
represented by a pattern of activated neurons (green
circles). Gray circles represent cells that do not
participate in the network. When a stimulus induces a
cognitive shift, activation of LC appears immediately
before the behavioral shift and, through a simultaneous
action on its multiple target structures, can promote the
underlying modification of network interactions. These
modifications are schematized by engagement (red
arrows) or disengagement (red crosses) of several cells.
Such an action, analogous to that described in invertebrates, could underlie the involvement of the noradrenergic system in cognitive and behavioral flexibility.
Modulators of decision making
1. Computational model of decision making
The process of decision making can be decomposed into four steps.
1) one recognizes the present situation (or state).
2) one evaluates action candidates (or options, e.g., going out tonight) in terms of how
much reward or punishment each potential choice would bring.
3) one selects an action in reference to one’s needs.
4) one may reevaluate the action based on the outcome.
Doya (2008) Nat Neurosci
Modulators of decision making
1. Computational model of decision making
1) Perceptual decision making: recognition of the present state depends on veridical perception.
2) Evaluation of action candidates.
The value of a reward given by an action at a state is a function of reward amount, delay and probability:
V = f(amount) × g(delay) × h(probability)
When there are multiple possible outcomes:
V = ∑_i f(amount_i) × g(delay_i) × h(probability_i)
In standard ‘expected utility’ theory, h is assumed to be the identity, resulting in a simpler form
V = E[f(amount) × g(delay)]
where E denotes expectation. However, subjects often undervalue probabilistic outcomes, which is better modeled by a function h that is smaller than the probability itself for probabilities < 1 (except at very small probabilities, which can be overvalued). Such deviations from expected utility theory are summarized in ‘prospect theory’.
Doya (2008) Nat Neurosci
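The valuation above is straightforward to compute. The sketch below plugs in commonly used illustrative forms, a power-law utility f, hyperbolic discounting g and a Prelec-style probability weight h; these particular functional forms and parameter values are assumptions for the example, not ones prescribed by Doya (2008).

```python
import math

# Illustrative functional forms (common modeling choices, not prescribed by Doya 2008):
def f(amount, rho=0.8):            # concave utility of reward amount
    return amount ** rho

def g(delay, k=0.1):               # hyperbolic temporal discounting
    return 1.0 / (1.0 + k * delay)

def h(p, gamma=0.65):              # probability weighting: underweights moderate p, overweights tiny p
    return math.exp(-(-math.log(p)) ** gamma) if p > 0 else 0.0

def value(outcomes):
    """V = sum_i f(amount_i) * g(delay_i) * h(probability_i) over possible outcomes."""
    return sum(f(a) * g(d) * h(p) for a, d, p in outcomes)

# Two possible outcomes: 10 units with p=0.8 after a 2 s delay, or 2 units with p=0.2 immediately.
print(value([(10.0, 2.0, 0.8), (2.0, 0.0, 0.2)]))
```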
Modulators of decision making
Standard models of evaluation of amount, delay and probability of reward
[Figure: standard evaluation models: a utility function f of reward amount; over- or undervaluation of stochastic outcomes (probability weighting h); ‘temporal discounting’ g of delayed rewards]
Doya (2008) Nat Neurosci
Modulators of decision making
1. Computational model of decision making
3) Action selection. After evaluating the value of each action candidate, the next issue is how to select an appropriate one. Given the values of the action candidates V(a_1), …, V(a_n), the most straightforward way is to choose the one with the highest value. This is called greedy action selection.
Another common solution is ‘Boltzmann selection’, in which the selection probabilities are proportional to the exponentials of the estimated values:
p(a_i) = exp(βV(a_i)) / ∑_j exp(βV(a_j))
By analogy with thermodynamics, the scaling parameter β is called the ‘inverse temperature’; β = 0 means all actions are taken with an equal probability of 1/n, and the larger the β, the greedier the selection.
In animal behavior studies, a well-known principle is the matching law, in which an action is selected in proportion to its value:
p(a_i) = V(a_i) / ∑_j V(a_j)
This is a nearly optimal strategy in ‘baited’ tasks, in which a reward becomes available at a
given location with a certain probability and will stay there until it is taken. In such an
environment, a less rewarded action becomes more profitable after a long interval.
Doya (2008) Nat Neurosci
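For concreteness, here is a minimal sketch of the three selection rules (greedy, Boltzmann/softmax with inverse temperature β, and the matching law); the example values and β are arbitrary.

```python
import math, random

def greedy(values):
    """Pick the action with the highest estimated value."""
    return max(range(len(values)), key=lambda i: values[i])

def boltzmann(values, beta):
    """'Boltzmann' (softmax) selection: p(a_i) = exp(beta*V_i) / sum_j exp(beta*V_j).
    beta = 0 gives uniform choice; large beta approaches greedy selection."""
    m = max(values)                                  # subtract max for numerical stability
    w = [math.exp(beta * (v - m)) for v in values]
    probs = [x / sum(w) for x in w]
    return random.choices(range(len(values)), weights=probs)[0]

def matching(values):
    """Matching law: choose each action in proportion to its (non-negative) value."""
    return random.choices(range(len(values)), weights=values)[0]

V = [1.0, 2.0, 0.5]
print(greedy(V), boltzmann(V, beta=2.0), matching(V))
```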
Modulators of decision making
1. Computational model of decision making
4) Learning
In learning the values of actions in dynamic environments, a critical issue is to identify which
action in time caused a given outcome (the problem of ‘temporal credit assignment’).
Three basic ways for learning values in dynamic environments:
-First, keep in memory which action was taken at which state in the form of ‘eligibility traces’,
and when a reward is given, reinforce the state-action associations in proportion to the
eligibility traces.
-Second, use so-called temporal difference learning (sampling the environment). In the case of exponential temporal discounting, this involves using a recursive relationship between the values of successive states and actions to update the previous state-action pair (a code sketch follows below):
V(state, action) = E[reward + γV(new state, new action)]
-Third, learn a model of the action-dependent state-transition probabilities and, given the present state, predict the future rewards for hypothetical actions in order to select the best-evaluated one.
Temporal difference learning is more efficient but depends on the appropriate choice of the
state variable. Model-based planning requires more contrived operations but can provide
flexible adaptation to changes in behavioral goals.
Doya (2008) Nat Neurosci
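A minimal sketch of the recursive update above in SARSA(λ) form, which combines the first two schemes (eligibility traces plus temporal-difference learning). The toy chain environment, the ε-greedy policy and all parameter values are illustrative assumptions, not a model taken from Doya (2008).

```python
import random
from collections import defaultdict

def sarsa_lambda(env_step, n_actions, episodes=200,
                 alpha=0.1, gamma=0.9, lam=0.8, epsilon=0.3):
    """SARSA(lambda): temporal-difference learning of V(state, action) with
    eligibility traces. `env_step(s, a)` must return (reward, next_state, done);
    episodes start in state 0."""
    Q = defaultdict(float)
    def policy(s):                          # epsilon-greedy action selection
        if random.random() < epsilon:
            return random.randrange(n_actions)
        return max(range(n_actions), key=lambda a: Q[(s, a)])
    for _ in range(episodes):
        e = defaultdict(float)              # eligibility traces for state-action pairs
        s, a = 0, policy(0)
        done = False
        while not done:
            r, s2, done = env_step(s, a)
            a2 = policy(s2)
            # TD error: reward + gamma*Q(new state, new action) - Q(state, action)
            delta = r + (0.0 if done else gamma * Q[(s2, a2)]) - Q[(s, a)]
            e[(s, a)] += 1.0
            for key in list(e):
                Q[key] += alpha * delta * e[key]
                e[key] *= gamma * lam       # decay traces toward older state-action pairs
            s, a = s2, a2
    return Q

def chain_env(s, a):
    """Toy 5-state chain: action 1 moves right, action 0 moves left; reward at the right end."""
    s2 = min(4, s + 1) if a == 1 else max(0, s - 1)
    return (1.0 if s2 == 4 else 0.0), s2, s2 == 4

Q = sarsa_lambda(chain_env, n_actions=2)
print(max(range(2), key=lambda a: Q[(0, a)]))   # should prefer moving right (action 1) from state 0
```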
Modulators of decision making
Factors that affect decisions and learning
Needs and desires. The utility curve f should reflect the decision maker’s physiological or economic needs. The utility of any amount exceeding the maximal consumption should also saturate; thus utility functions often have a sigmoid shape with a threshold and saturation. In people, different desires lead to different thresholds of nonlinear valuation.
Risk and uncertainty. Buying insurance (risk-averse) and buying a lottery ticket (risk-seeking) illustrate opposite attitudes toward risk. Deviations from simple linear evaluation can be regarded as ‘risk-averse’ or ‘risk-seeking’ decisions and can be modeled by nonlinearity in either the utility function f or the probability evaluation function h.
Knowledge and uncertainty about the environment are also important in decision making, e.g., expected uncertainty vs. unexpected uncertainty. Stochastic environmental dynamics limit predictability; thus, in reinforcement learning, the temporal horizon needs to be set long enough, but not too long.
Time spent and time remaining. A general issue in learning is how fast one should learn from new experiences and how stably old knowledge should be retained. The appropriate choice of the learning rate depends on both the character of the environment and the experience of the subject. In a constant environment, the theoretically optimal way to learn is to start with rapid memory updating and then to decay the learning rate as the inverse of the number of experiences. When the dynamics of the environment change over time, the setting of the learning rate should depend on an estimate of how long past experiences remain valid. (+ exclusiveness of commitment)
Doya (2008) Nat Neurosci
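A small simulation can illustrate this trade-off: a learning rate decaying as 1/n (the running mean, optimal in a constant environment) versus a fixed learning rate (exponential forgetting) when the outcome probability changes half-way through the session. The tracking task and all parameter values are illustrative assumptions.

```python
import random

def track(probs_per_phase, n_per_phase=200, fixed_alpha=0.1, seed=1):
    """Compare a 1/n-decaying learning rate (optimal if the world is constant)
    with a fixed learning rate (better when the world changes). Illustrative only."""
    rng = random.Random(seed)
    est_decay, est_fixed, n = 0.5, 0.5, 0
    err_decay = err_fixed = 0.0
    for p_true in probs_per_phase:
        for _ in range(n_per_phase):
            x = 1.0 if rng.random() < p_true else 0.0   # binary outcome on this trial
            n += 1
            est_decay += (x - est_decay) / n            # running mean: alpha = 1/n
            est_fixed += fixed_alpha * (x - est_fixed)  # exponential forgetting
            err_decay += abs(est_decay - p_true)
            err_fixed += abs(est_fixed - p_true)
    return err_decay / n, err_fixed / n

# The outcome probability jumps from 0.8 to 0.2 half-way through:
# the fixed rate tracks the change, the 1/n rate lags behind.
print(track([0.8, 0.2]))
```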
Modulators of decision making
Neural substrates modulating decision making
Decisions are made in the circuit linking the cerebral cortex and the basal ganglia. Reward-predictive
neural activities are found in a variety of areas in the cortex, the striatum, the globus pallidus and the
thalamus. Functional brain imaging in humans shows activity related to reward prediction error in the
striatum, which receives strong dopaminergic projections. Dopamine-dependent plasticity in the
striatum seems to be important in learning of reward-predictive neural activities. The dynamic
interaction of these areas composing the cortex–basal ganglia loops, as well as other subcortical
structures, especially the amygdala, is believed to result in reward-dependent selection of particular
actions.
The network is affected by the sensory and contextual information represented in the cortex, as well as by diffuse neurochemical systems such as serotonin, norepinephrine and acetylcholine.
Doya (2008) Nat Neurosci
Modulators of decision making
Neural substrates modulating decision making
-Gains and losses. The amygdala is involved in the processing of aversive stimuli and avoidance learning. Human brain imaging shows responses of the amygdala to the expectation of losses as opposed to gains. FMRI reveals that different parts of the striatum respond to gains and losses. Neurons in the lateral habenula respond to no-reward-predictive cues as well as to reward omission, in exactly the opposite way to dopamine neurons, and stimulation of the lateral habenula inhibits dopamine neurons. These results highlight the lateral habenula as a possible center of aversive learning.
-Cost and effort. In a T-maze with a small reward behind a low wall on one side and a large reward behind a high wall on the other, lesions of the anterior cingulate cortex (ACC) cause rats to choose the small reward obtainable with less effort. Choice of a larger reward requiring larger effort is impaired by a dopamine D2 receptor antagonist.
-Risk and variance. Brain imaging shows that the ventral striatum in particular is involved in the expectation of rewards. Imaging studies show activity in the anterior insula and the lateral orbitofrontal cortex (OFC) in response to variance in the predicted reward. Risk-seeking choice also activates the ventral striatum.
Abbreviations: 5-HT, serotonin; ACC, anterior cingulate cortex; DA, dopamine; DLPFC, dorsolateral prefrontal cortex; DS, dorsal striatum; NE, norepinephrine; OFC, orbitofrontal cortex; VS, ventral striatum.
Doya (2008) Nat Neurosci
Modulators of decision making
Neural substrates modulating decision making
-Delay discounting. Deficits in the serotonergic system are implicated in impulsivity, both in the suppression of maladaptive motor behaviors and in choices of larger but delayed rewards. FMRI studies using a game in a dynamic environment show that the dorsolateral prefrontal cortex, dorsal premotor cortex, parietal cortex and insula are more activated in conditions requiring long-term prediction of rewards than in conditions requiring short-term predictions. Activation of the ventral striatum, medial OFC, ACC and posterior cingulate cortex encodes the expectation of immediate rewards. Serotonin efflux increases in the medial prefrontal cortex while rats perform a delay discounting task.
-Learning and exploration. The optimal setting of the learning rate depends on how quickly the world is changing. Subjects’ learning rates vary depending on the volatility of the task environment, which also correlates with activity in the ACC. After an abrupt change of the environment, it is more appropriate to reset entirely what has been learned (or switch to another learning module) and start over. Norepinephrine is implicated in such ‘resets’ of ongoing activities. NE is also suggested to be important in regulating the decision to explore alternatives versus exploiting a known resource. Deficits in serotonin, especially in the medial prefrontal cortex, disturb adaptation to changes in the required action for a given cue (reversal learning) by making subjects more likely to stick to pre-learned behaviors.
Doya (2008) Nat Neurosci
Modulators of decision making
Neural substrates modulating decision making
Summary
-Expectation of a high reward motivates subjects to choose an action despite a large cost, for
which dopamine in the anterior cingulate cortex is important.
-Uncertainty of action outcomes can promote risk taking and exploratory choices, in which
norepinephrine and the orbitofrontal cortex seem to be involved.
-Predictable environments promote consideration of longer-delayed rewards, for which
serotonin and the dorsal part of the striatum as well as the dorsal prefrontal cortex are key.
-Much work will be required to build quantitative models of how decision parameters should
be regulated depending on the environment and experience, and then to elucidate how they
could be realized by network, cellular and neurochemical mechanisms.
Doya (2008) Nat Neurosci
Twenty-Five Lessons from
Computational Neuromodulation
Neural processing faces three rather different, and perniciously* tied, communication
problems:
1) Computation is radically distributed, yet point-to-point interconnections are limited (small-world network).
2) The bulk of these connections are semantically uniform, lacking differentiation at their
targets that could tag particular sorts of information (but also there are lots of cell types with
different connection patterns).
3) The brain’s structure is relatively fixed (but there are also many forms of plasticity), and yet
different sorts of input, forms of processing, and rules for determining the output are
appropriate under different, and possibly rapidly changing, conditions (context dependence).
Neuromodulators address these problems by their multifarious and broad distribution, by
enjoying specialized receptor types in partially specific anatomical arrangements, and by their
ability to mold the activity and sensitivity of neurons and the strength and plasticity of their
synapses.
* perniciously (adverb): in an exceedingly harmful way; implies causing irreparable or deadly injury through evil or insidious corrupting or undermining
Dayan (2012) Neuron
Diversity of Cortical Interneurons
Multiple dimensions of interneuron diversity
Kepecs & Fishell (2014) Nature 505, 318–326
Diversity of Cortical Interneurons
Adding more interneurons of the same type linearly increases the network’s
combinatorial properties. However, adding novel interneuron types to the old
network, even in small numbers, offers a nonlinear expansion of qualitatively
different possibilities.
Simplified summary of the
salient features of the basic
cortical circuit, consisting of
only one type of pyramidal
cell and a set of local circuit
GABAergic cells
P. Somogyi et al., Brain Research Reviews 26 (1998) 113–135
Twenty-Five Lessons from
Computational Neuromodulation
Dayan (2012) Neuron
Twenty-Five Lessons from Computational
Neuromodulation (in decision making)
A first group of the 25 general lessons about neuromodulation emerges from what we know about
dopamine and reward (utility):
(A) Neuromodulatory neurons can report very selective information (i.e., reward prediction
errors for dopamine) on a
(B) very quick timescale. To put it another way, there is no reason why anatomical breadth should
automatically be coupled with either semantic or temporal breadth. Nevertheless
(C), neuromodulators can also signal over more than one timescale, with at least partially separable tonic
and phasic activity, and different receptor types may be sensitive to the different timescales; additionally
(D) by having different affinities (as do D1 and D2 receptors), different types can respond selectively to
separate characteristics of the signal. Along with their different properties
(E), different receptor types can be localized on different pathways, and these pathways are also
potentially subject to modulation from a variety of other systems, such as the local, tonically active
interneurons in the striatum that release ACh.
(F) observe the multiplexing inherent in having a neuromodulator report a signal (e.g., a reward
prediction error) that has a variety of important, but distinct, functions.
(G) a key role is played by autoreceptors that are typically inhibitory to the release of the neuromodulator
concerned, e.g., dopamine receptors on dopamine neurons and their terminals. An obvious role for these
is feedback control. Autoinhibition is a way for tonic signaling to set a baseline for phasic signaling.
Dayan (2012) Neuron
Twenty-Five Lessons from Computational
Neuromodulation (in decision making)
(H) interneuromodulatory interactions, such as the influence of one set of neuromodulators on others, are very widespread. Structures that drive dopamine activity might themselves be directly sensitive to
motivational state—for instance, it has been suggested that the amygdala’s sensitivity to the
neuromodulator oxytocin will change its responding in the face of social threats or opportunities.
(I) forms of opponency between different neuromodulators are a common motif, both in the central
nervous system and indeed in the periphery. However
(J) this opponency is rarely simple or symmetric: for instance, although it appears as if the dominant
influence of 5-HT on behaviors associated with dopamine in practice is inhibitory, there are many types
of serotonin receptor that have an excitatory net effect on dopamine.
(K) a complex tapestry of (receptor) heterogeneity is revealed, particularly within the serotonin system.
Dayan (2012) Neuron
Twenty-Five Lessons from
Computational Neuromodulation
A second group of the 25 general lessons about neuromodulation emerges from what we know about
uncertainty and learning:
(M) a common scheme is that neuromodulators control the course of activity by regulating which of a number of gross pathways determines the activity of neurons. There are also other potential neuromodulatory routes for this influence: for instance,
(N) ACh helps regulate oscillations that simultaneously affect multiple sub-regions of the hippocampal formation; regulating oscillations is a critical dynamical effect of neuromodulators in many circumstances.
(O) effects of neuromodulators on various timescales of plasticity are among their most
influential.
(P) although there is structure in the loops connecting cholinergic nuclei to sensory processing and
prefrontal cortices, as indeed with other loops between prefrontal regions and neuromodulatory nuclei,
there is rather little work on how the relatively general forms of uncertainty that could be
represented even by a wired neuromodulatory system might interact with the much more specific
uncertainty that could be captured in, say, a cortical population code.
(R) There is evidence for local, presumably glutamatergic, control of the release of neuromodulators in
the cortex, independent of the spiking activity of the neuromodulatory neurons themselves, which could
allow for more specificity in their local effects, but the computational implications of this are not clear.
(S) NE helps organize a massive response to stress, notably in conjunction with cortisol, a steroid hormone that acts as another neuromodulator. This involves, among many other things, changing energy storage and usage via glucocorticoids (involvement with energy regulation is itself a more general principle of neuromodulation).
Dayan (2012) Neuron
Twenty-Five Lessons from
Computational Neuromodulation
(T) a commonly reported finding for neuromodulators, namely an inverted U-shaped curve of efficacy. An
example finding is that drugs that boost a neuromodulator such as dopamine have a beneficial effect for
subjects whose baseline levels are low, but a harmful effect for subjects for whom these levels are high.
(U) a general issue for (relatively slow) phasic activity, namely that the time it takes for the
neuromodulator to be delivered to its site of action (norepinephrine fibers are not myelinated) appears to
be at the margins of the period in which there is a chance to have a suitable effect on the on-going
computation.
(V) problems or manipulations of neuromodulatory systems are tied to debilitating neurological and
psychiatric diseases, such as addiction and Parkinson’s disease, and they are also major therapeutic
targets, as in schizophrenia, depression, Alzheimer’s disease, etc.
(W) individual (e.g., genetic) differences in factors such as the properties of particular receptor types, or
the efficacy of transporters controlling the longevity of neuromodulators following release, have been
associated with differences in decision making behavior, such as the propensity to explore or to learn
from positive or negative feedback.
(X) There are a number of forms of control, including self-regulation by autoreceptors, complex
forms of interneuromodulator interaction, and even the possibility of local glutamatergic control over
release (co-release of glutamate).
(Y) not only do we know very little about the coupling between activity and the blood oxygenation level-dependent (BOLD) signal that is measured in fMRI in areas such as the striatum that are the main targets of key neuromodulators, but also these neuromodulators might be able to affect local blood flow directly themselves.
Dayan (2012) Neuron
Twenty-Five Lessons from
Computational Neuromodulation
Summary
From a computational perspective, there is much work to do to understand the overall network and
systems effects of the changes that we know different neuromodulators lead to in individual elements in
those circuits.
The most compelling computational issue is the relationship between specificity and generality, and between cortical and neuromodulatory contributions to representation and processing.
For utility, this issue centers on the interactions between model-free and model-based systems, with the
former being substantially based on neuromodulators such as dopamine and serotonin, whereas the latter
depends on cortical processing (albeit itself subject to modulation associated with specific stimulus
values).
For uncertainty, the question is how representations of uncertainty associated with cortical population
codes, with their exquisite stimulus discrimination, interact with those associated with
neuromodulators, with their apparent coarseness.
Dayan (2012) Neuron
Reading List
• Dayan, Peter. 2012. “Twenty-Five Lessons from Computational Neuromodulation.” Neuron 76(1): 240–56.
• Chamberlain SR, Müller U, Blackwell AD, Clark L, Robbins TW, Sahakian BJ. 2006. “Neurochemical modulation of response inhibition and probabilistic learning in humans.” Science 311(5762): 861–3.
• Doya K. 2008. “Modulators of decision making.” Nature Neuroscience 11(4): 410–6.
• Preuschoff, Kerstin, Bernard Marius ’t Hart and Wolfgang Einhäuser. 2011. “Pupil Dilation Signals Surprise: Evidence for Noradrenaline’s Role in Decision Making.” Frontiers in Neuroscience 5: 115.
• Yu, Angela J., and Peter Dayan. 2005. “Uncertainty, Neuromodulation, and Attention.” Neuron 46(4): 681–92.
• Eldar, Eran, Jonathan D. Cohen, and Yael Niv. 2013. “The Effects of Neural Gain on Attention and Learning.” Nature Neuroscience 16(8): 1146–53.
Reading List+
• Aron AR, Fletcher PC, Bullmore ET, Sahakian BJ, Robbins TW. 2003. “Stop-signal inhibition disrupted by damage to right inferior frontal gyrus in humans.” Nat Neurosci 6(2): 115–6.
• Bouret S, Sara SJ. 2005. “Network reset: a simplified overarching theory of locus coeruleus noradrenaline function.” Trends Neurosci 28(11): 574–82.
Thank you!