Syllabus P140C (68530) Cognitive Science
COGNITIVE NEUROSCIENCE
Note
• Please read the book to review major brain structures and their functions
• Please read the book to review brain imaging techniques
• See also the additional slides available on the class website
Cognitive Neuroscience
• the study of the relation between cognitive processes
and brain activities
• Potential to measure some “hidden” processes that are part of cognitive theories (e.g., memory activation, attention, “insight”)
• Measuring when and where activity is happening. Different techniques have different strengths: there is a tradeoff between spatial and temporal resolution
Techniques for Studying Brain Functioning
• Single unit recordings
– Hubel and Wiesel (1962, 1979)
• Event-related potentials (ERPs)
• Positron emission tomography (PET)
• Magnetic resonance imaging (MRI and fMRI)
• Magneto-encephalography (MEG)
• Transcranial magnetic stimulation (TMS)
The spatial and temporal ranges of some
techniques used to study brain functioning.
Single Cell Recording
(usually in animal studies)
Measure neural activity with probes. E.g., research by
Hubel and Wiesel:
Hubel and Wiesel (1962)
• Studied the LGN and primary visual cortex in the cat. Found cells with different receptive fields: different ways of responding to light in certain areas of the visual field
[Figure: responses of an LGN on cell (left), an LGN off cell, and a directional cell.]
Action potential frequency of a cell associated with a specific receptive field in a monkey's field of
vision. The frequency increases as a light stimulus is brought closer to the receptive field.
COMPUTATIONAL COGNITIVE SCIENCE
Computer Models
• Artificial intelligence
– Constructing computer systems that produce
intelligent outcomes
• Computational modeling
– Programming computers to model or mimic some aspects of human cognitive functioning. Modeling natural intelligence.
– Simulations of behavior
Why do we need computational models?
• Provides the precision needed to specify complex theories, making vague verbal terms specific
• Provides explanations
• Obtain quantitative predictions
– just as meteorologists use computer models to predict
tomorrow’s weather, the goal of modeling human
behavior is to predict performance in novel settings
Neural Networks
• Alternative to traditional information processing
models
– Also known as: PDP (parallel distributed processing
approach) and Connectionist models
• Neural networks are networks of simple processors
that operate simultaneously
• Some biological plausibility
Idealized neurons (units)
[Figure: inputs feed a processor that sums them (Σ) and produces an output; an abstract, simplified description of a neuron.]
Different ways to represent information with neural networks: localist representation
concept 1: 1 0 0 0 0 0
concept 2: 0 0 0 1 0 0
concept 3: 0 1 0 0 0 0
(activations of units; 0 = off, 1 = on)
Each unit represents just one item (so-called “grandmother” cells)
Coarse Coding / Distributed Representations
concept 1: 1 1 1 0 0 0
concept 2: 1 0 1 1 0 1
concept 3: 0 1 0 1 0 1
(activations of units; 0 = off, 1 = on)
Each unit is involved in the representation of multiple items
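As a concrete illustration (not from the slides), here is a minimal Python sketch of the two coding schemes above, using the exact patterns shown; a concept is recovered by finding the stored pattern with the fewest mismatched units:

```python
import numpy as np

# Localist coding: one dedicated unit per concept (patterns from the slide).
localist = {
    "concept 1": np.array([1, 0, 0, 0, 0, 0]),
    "concept 2": np.array([0, 0, 0, 1, 0, 0]),
    "concept 3": np.array([0, 1, 0, 0, 0, 0]),
}

# Coarse/distributed coding: each unit takes part in several concepts.
distributed = {
    "concept 1": np.array([1, 1, 1, 0, 0, 0]),
    "concept 2": np.array([1, 0, 1, 1, 0, 1]),
    "concept 3": np.array([0, 1, 0, 1, 0, 1]),
}

def decode(pattern, codebook):
    """Return the concept whose stored pattern has the fewest mismatched units."""
    return min(codebook, key=lambda c: int(np.sum(codebook[c] != pattern)))

print(decode(np.array([1, 0, 1, 1, 0, 1]), distributed))  # -> concept 2
```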
Advantage of Distributed Representations
• Efficiency
– Solves the combinatorial explosion problem: with n binary units, 2^n different representations are possible (compare how many English words are built from combinations of just 26 letters). With 6 units, for example, a localist code can represent only 6 concepts, while a distributed code can represent up to 2^6 = 64.
• Damage resistance
– Even if some units do not work, information is still preserved: because information is distributed across the network, performance degrades gradually as a function of damage
– (aka robustness, fault tolerance, graceful degradation)
Suppose we lost unit 6
concept 1: 1 1 1 0 0 0
concept 2: 1 0 1 1 0 1
concept 3: 0 1 0 1 0 1
(activations of units; 0 = off, 1 = on)
Can the three concepts still be discriminated?
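One way to answer is to simulate the damage: drop unit 6 from the patterns above and check that every pair of concepts still differs on at least one of the surviving five units. A minimal sketch:

```python
import numpy as np
from itertools import combinations

patterns = {
    "concept 1": np.array([1, 1, 1, 0, 0, 0]),
    "concept 2": np.array([1, 0, 1, 1, 0, 1]),
    "concept 3": np.array([0, 1, 0, 1, 0, 1]),
}

# Simulate the damage: unit 6 (index 5) no longer contributes.
damaged = {name: p[:5] for name, p in patterns.items()}

# Discriminable if every pair of concepts still differs on some surviving unit.
for a, b in combinations(damaged, 2):
    assert not np.array_equal(damaged[a], damaged[b]), f"{a} and {b} collide"
print("All three concepts are still discriminable after losing unit 6")
```

With the localist code shown earlier, by contrast, losing a unit erases its concept entirely.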
An example calculation for a single neuron
[Figure: the inputs from a number of units are combined to determine the overall net input to unit i. Unit i has a threshold of 1; if its net input exceeds 1 it responds with +1, and if its net input is less than 1 it responds with -1.]
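In symbols, the rule in the caption is: net_i = Σ_j w_j · a_j, with output +1 if net_i exceeds the threshold of 1 and -1 otherwise. A minimal sketch of the computation (the particular weights and activations are made up for illustration):

```python
import numpy as np

def unit_output(activations, weights, threshold=1.0):
    """Threshold unit: +1 if the weighted sum of inputs exceeds the threshold, else -1."""
    net_input = float(np.dot(weights, activations))
    return 1 if net_input > threshold else -1

# Hypothetical example: three input units feeding unit i.
activations = np.array([1.0, 0.0, 1.0])   # current activations of the input units
weights     = np.array([0.8, -0.5, 0.4])  # connection weights onto unit i

# net input = 0.8*1 + (-0.5)*0 + 0.4*1 = 1.2 > 1, so the unit responds with +1.
print(unit_output(activations, weights))  # -> 1
```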
Neural-Network Models
The simplest models include three layers of units:
(1) The input layer is a set of units that receives stimulation from the external environment.
(2) The units in the input layer are connected to units in a hidden layer, so named because these units have no direct contact with the environment.
(3) The units in the hidden layer are in turn connected to those in the output layer.
Multi-layered Networks
• Activation flows from a layer of input units through a set of hidden units to output units
• Weights determine how input patterns are mapped to output patterns
• Network can learn to associate output patterns with input patterns by adjusting weights
• Hidden units tend to develop internal representations of the input-output associations
• Backpropagation is a common weight-adjustment algorithm (see the sketch after this list)
[Figure: a three-layer network with input units at the bottom, hidden units in the middle, and output units at the top.]
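As a concrete (and deliberately tiny) illustration of these bullets, here is a sketch of an input-to-hidden-to-output network trained by backpropagation; the XOR task, layer sizes, learning rate, and iteration count are illustrative assumptions, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic task that a network without hidden units cannot learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights and biases: 2 input units -> 3 hidden units -> 1 output unit.
W1, b1 = rng.normal(0.0, 1.0, (2, 3)), np.zeros(3)
W2, b2 = rng.normal(0.0, 1.0, (3, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):
    # Forward pass: activation flows input -> hidden -> output.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: propagate the error back and adjust every weight.
    out_delta = (output - y) * output * (1 - output)
    hid_delta = (out_delta @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ out_delta
    b2 -= lr * out_delta.sum(axis=0)
    W1 -= lr * X.T @ hid_delta
    b1 -= lr * hid_delta.sum(axis=0)

print(output.round(2))  # typically converges toward [[0], [1], [1], [0]]
```

XOR is the standard minimal example here because no network without a hidden layer can learn it; during training the hidden units develop an internal representation that makes the input-output mapping possible.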
Example of Learning Networks
• http://www.cs.ubc.ca/labs/lci/CIspace/Version3/neural/index.html
Another example: NETtalk
Connectionist network learns to pronounce English words, i.e., it learns spelling-to-sound relationships. Listen to this audio demo.
[Figure: NETtalk architecture (after Hinton, 1989). A seven-letter window of text (e.g., “_a_cat_”, with the target letter “c” in the center) is encoded by 7 groups of 29 input units, which feed 80 hidden units and then 26 output units; the network's output phoneme (e.g., /k/) is compared against the teacher's target output.]
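To make the input encoding concrete: each of the 7 letter positions has its own group of 29 units, with exactly one unit on per group. A hedged sketch (the particular 29-symbol alphabet below is an assumption, not NETtalk's actual symbol set):

```python
import numpy as np

# Hypothetical 29-symbol alphabet: 26 letters plus a word boundary "_"
# and two punctuation marks (an assumption for illustration).
ALPHABET = "abcdefghijklmnopqrstuvwxyz_.,"

def encode_window(window):
    """One-hot encode a 7-character window: 7 groups of 29 units each."""
    assert len(window) == 7
    units = np.zeros((7, len(ALPHABET)))
    for pos, ch in enumerate(window):
        units[pos, ALPHABET.index(ch)] = 1.0
    return units.ravel()  # 7 * 29 = 203 input units

# The network pronounces the middle letter ("c") given its context.
x = encode_window("_a_cat_")
print(x.shape)  # (203,)
```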
Other demos
Hopfield network
http://www.cbu.edu/~pong/ai/hopfield/hopfieldapplet.html
Backpropagation algorithm and competitive learning:
http://www.cs.ubc.ca/labs/lci/CIspace/Version4/neural/
http://www.psychology.mcmaster.ca/4i03/demos/demos.html
Competitive learning:
http://www.neuroinformatik.ruhr-uni-bochum.de/ini/VDM/research/gsn/DemoGNG/GNG.html
Various networks:
http://diwww.epfl.ch/mantra/tutorial/english/
Optical character recognition:
http://sund.de/netze/applets/BPN/bpn2/ochre.html
Brain-wave simulator
http://www.itee.uq.edu.au/%7Ecogs2010/cmc/home.html
Neural Network Models
• Inspired by real neurons and brain organization but are
highly idealized
• Can spontaneously generalize beyond the information explicitly given to the network
• Can retrieve information even when the network is damaged (graceful degradation)
• Networks can be taught: learning is possible by changing
weighted connections between nodes