
HEARING AND LANGUAGE
CHAPTER 9
Hearing
• A receptor is a cell, often a specialized neuron, that is suited by its
structure and function to respond to a particular form of energy, such
as sound.
– A receptor’s function is to convert that energy into a neural response.
• An adequate stimulus is the energy form for which the receptor is
specialized.
• The pattern of the information contained in the sensory stimulus
makes information meaningful.
• Many people consider audition and vision to be our two most
important senses.
Hearing
 Sensation is the acquisition of sensory information.
 Perception is the interpretation of this information.
 The adequate stimulus for audition is vibration in a conducting
medium, such as air.
 Frequency refers to the number of cycles, or waves of compression
and decompression, per second; it is measured in hertz (Hz).
 Frequency provides the perception of pitch.
 A pure tone, produced by striking a tuning fork for example, has
only one frequency.
 Complex sounds, such as that produced by a clarinet, are
composed of multiple frequencies (both kinds of tone are sketched below).
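To make the distinction concrete, here is a minimal NumPy sketch that builds a pure tone (one sine wave) and a clarinet-like complex tone (a fundamental plus odd harmonics); the specific frequencies and amplitudes are chosen only for illustration and are not taken from the chapter.

```python
import numpy as np

rate = 44100                            # samples per second
t = np.arange(0, 0.5, 1 / rate)         # half a second of time points

# Pure tone: a single frequency, like a struck tuning fork.
pure = np.sin(2 * np.pi * 440 * t)

# Clarinet-like complex tone: a fundamental plus odd harmonics,
# each weaker than the last (amplitudes invented for illustration).
complex_tone = (1.0 * np.sin(2 * np.pi * 440 * t)
                + 0.6 * np.sin(2 * np.pi * 1320 * t)
                + 0.3 * np.sin(2 * np.pi * 2200 * t))
```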
Hearing
• The intensity of a sound is perceived as
loudness.
– Intensity refers to the amplitude or size of the wave. It
represents the physical energy of a sound.
– Loudness is influenced by the frequency of the sound.
• For example, we are most sensitive to sounds
between 2000 and 4000 Hz, the range in which most
conversation occurs.
• Sounds of equal intensity outside this range seem less loud.
– Also, the intensity of the sound influences the
perception of pitch.
Hearing
• The outer ear, or pinna:
– captures the sound and amplifies it by funneling it
into the smaller auditory canal.
– selects for sounds in front of and to the side of us,
partially blocking those behind.
Hearing
 The Middle Ear
 The tympanic membrane, or eardrum, collects the vibrations and
transmits them to the ossicles.
 The lever action of the ossicles transfers the vibration to the cochlea.
 Amplification as the vibration passes from the eardrum to the
smaller base of the stirrup compensates for the loss of energy in
passing from air into the liquid inside the cochlea (a back-of-envelope
version of this amplification appears after Figure 9.4).
– We can detect sound when the eardrum vibrates as little as the
diameter of a hydrogen atom.
The Outer, Inner, and Middle Ear
Figure 9.4
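The middle-ear amplification can be estimated from the area ratio of the eardrum to the stapes footplate plus the ossicular lever action. The values below are commonly cited textbook approximations, not figures from this chapter, so treat this as a rough sketch.

```python
# Approximate middle-ear pressure amplification. The areas and lever
# ratio are standard approximations, not values from this chapter.
eardrum_area = 55.0   # mm^2, effective area of the tympanic membrane
stapes_area = 3.2     # mm^2, footplate of the stirrup (stapes)
lever_ratio = 1.3     # mechanical advantage of the ossicular lever

pressure_gain = (eardrum_area / stapes_area) * lever_ratio
print(f"pressure gain ~ {pressure_gain:.0f}x")   # roughly 22x
```

A pressure gain on the order of 20-fold is what offsets the energy otherwise lost at the air-to-fluid boundary of the cochlea.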
Hearing
 The inner ear includes the cochlea.
 The cochlea is divided into three fluid-filled canals: the
vestibular canal, the tympanic canal, and the cochlear canal.
 The stirrup sends vibrations throughout the cochlea and to the
sound-analyzing structure.
Figure 9.5
Hearing
 The organ of Corti rests on the basilar
membrane.
 It consists of four rows of hair cells and the
tectorial membrane above the hair cells.
 Vibration bends the hair cells, opening
potassium and calcium channels.
 This depolarizes the cells and sets off signals
in the neurons.
 The less numerous inner hair cells provide
the majority of information about the
auditory stimulus.
 Lengthening and shortening of the outer hair
cells against the tectorial membrane adjusts
the organ of Corti’s rigidity.
 This amplifies weak signals and provides
adjustable frequency selectivity.
Hearing
 The Auditory Pathway
 Auditory neurons form part of the auditory (8th) cranial nerves.
 The neurons travel to the inferior colliculi, to the medial geniculate
nucleus of the thalamus, and to the auditory cortex.
 Neurons from each ear go to both temporal lobes, but there are
more connections to the opposite side than the same side.
 The auditory cortex is topographically organized: Neurons from
adjacent receptors project to adjacent cells in the cortex, forming
a map of the unrolled cochlea.
Auditory Pathway and Auditory Cortex
Figure 9.7
Hearing
– Secondary auditory areas are involved in analyzing complex
sounds and understanding their meaning.
– The dorsal stream travels from the auditory cortex to the
parietal lobes (for spatial location of sounds) and to the
frontal lobes (for directing eye movements and planning
movements). It is the auditory “where” system.
– The ventral stream passes from the temporal to the frontal
lobes. It is involved in identifying sounds and is the auditory
“what” system.
Hearing
• Frequency Analysis
– Frequency theory assumes that the auditory mechanism
transmits the actual sound frequencies to the auditory cortex
for analysis there.
• Telephone theory: Individual neurons in the auditory nerve
fire at the same frequency as the rate of vibration of the
sound source.
• Volley theory: Groups of neurons together follow the frequency of
a sound at higher frequencies, each neuron firing on only some of
the cycles (a toy simulation appears below).
• Even volleying fails to follow sounds beyond about 5200 Hz,
so the frequency theory is inadequate.
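Here is a toy simulation of the volley principle: no single neuron can fire faster than about 1,000 Hz (a ~1 ms refractory period), yet a small squad of neurons taking turns can mark every cycle of a 3,000 Hz tone. All parameters are illustrative, not physiological measurements.

```python
import numpy as np

freq = 3000.0            # stimulus frequency (Hz)
duration = 0.01          # seconds simulated
max_rate = 1000.0        # one neuron's maximum firing rate (Hz)
refractory = 1.0 / max_rate

cycle_times = np.arange(0, duration, 1.0 / freq)   # one peak per cycle
n_neurons = 4
last_spike = np.full(n_neurons, -np.inf)
volley_spikes = []       # pooled spike times across the squad

for t in cycle_times:
    for i in range(n_neurons):
        if t - last_spike[i] >= refractory:   # neuron i has recovered
            last_spike[i] = t
            volley_spikes.append(t)
            break                             # one neuron fires per cycle

# Alone, each neuron could mark only every third cycle; pooled, the
# volley marks all 30 cycles, matching the 3000 Hz tone.
print(len(volley_spikes), "pooled spikes in", duration, "s")
```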
Hearing
– Place theory: The frequency of a sound is identified
according to the location of maximal vibration on the basilar
membrane and, therefore, which neurons are firing most.
• Higher frequencies cause the base end to vibrate most
and low frequencies cause the apex end to vibrate most.
• The auditory cortex is topographically organized in the form of
a tonotopic map: each successive area responds to successively
higher frequencies (the sketch after Figure 9.12 maps frequency
to basilar-membrane place).
Tonotopic Map
Figure 9.12
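Place coding can be illustrated with the Greenwood function, a standard empirical map between frequency and position along the basilar membrane. The human constants used here (A = 165.4, a = 2.1, k = 0.88) come from the wider hearing literature, not from this chapter.

```python
import numpy as np

A, a, k = 165.4, 2.1, 0.88   # Greenwood constants for the human cochlea

def place_of(freq_hz):
    """Fractional distance from apex (0.0) to base (1.0) where a
    given frequency produces maximal basilar-membrane vibration."""
    return np.log10(freq_hz / A + k) / a

# Higher frequencies map closer to the base, as place theory states.
for f in (100, 1000, 4000, 10000):
    print(f"{f:>6} Hz -> {place_of(f):.2f} of the way toward the base")
```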
Hearing
– However, place theory alone is inadequate.
• The basilar membrane vibrates about equally at the
lowest frequencies.
• Frequency-specific neurons have not been found below
200 Hz.
– Frequency-place theory:
• Neurons follow the sound’s frequency below about
200 Hz.
• Higher frequencies are detected by place analysis.
Hearing
• Analyzing Complex Sounds
– The basilar membrane does a Fourier analysis of a
complex sound, separating it into its sine-wave
components (demonstrated in the sketch after Figure 9.14).
Figure 9.14: Fourier analysis
of a clarinet note
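A minimal sketch of that decomposition: build a complex tone from three sine waves, then recover the components with a fast Fourier transform. The component frequencies and amplitudes are made up for illustration, not taken from a real clarinet recording.

```python
import numpy as np

rate = 8000
t = np.arange(0, 1.0, 1 / rate)
signal = (1.0 * np.sin(2 * np.pi * 220 * t)
          + 0.5 * np.sin(2 * np.pi * 660 * t)
          + 0.25 * np.sin(2 * np.pi * 1100 * t))

spectrum = np.abs(np.fft.rfft(signal)) / (len(t) / 2)  # amplitude spectrum
freqs = np.fft.rfftfreq(len(t), 1 / rate)

# The three strongest components recover the original sine waves.
for i in np.argsort(spectrum)[-3:][::-1]:
    print(f"{freqs[i]:6.0f} Hz  amplitude {spectrum[i]:.2f}")
```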
Hearing
– We also must sort out meaningful sounds embedded in a
confusing background of sounds. This is known as the cocktail
party effect.
• Selective attention enhances some sounds and suppresses
others.
• These separated sounds become auditory objects.
• We must then identify a sound.
 This occurs in the ventral “what” area.
 Voices are identified in the superior temporal area.
 Environmental sounds are identified primarily in posterior
temporal areas and to some extent in frontal areas.
Hearing
 Locating Sounds With Binaural Cues
 Binaural cues permit us to locate sounds quickly and accurately.
 When a source is to one side or the other, the head blocks some
of the sound energy; thus, there is a difference in intensity at the
two ears.
 A sound directly to the left or the right of the listener takes about
0.5 ms to travel the additional distance to the second ear. Thus,
there is also a difference in time of arrival at the two ears.
Differential Intensity & Time of Arrival Cues
Figure 9.17
Hearing
– At low frequencies, a sound arriving from one side of the
body will be at a different phase of the wave at each ear.
• As a result, the rising or falling pressure will be different
at the two eardrums (the sketch below computes both the time
and the phase differences).
Figure 9.18: Phase differences at the two ears
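Both timing cues can be estimated with the standard Woodworth approximation for interaural time difference (ITD). The head radius and speed of sound below are typical values rather than figures from this chapter, which is why the 90-degree result comes out somewhat above the ~0.5 ms cited earlier.

```python
import numpy as np

r = 0.0875   # head radius in meters (approximate adult value)
c = 343.0    # speed of sound in air, m/s

def itd(angle_deg):
    """ITD in seconds for a source at angle_deg from straight ahead
    (Woodworth approximation)."""
    theta = np.radians(angle_deg)
    return (r / c) * (theta + np.sin(theta))

for angle in (0, 45, 90):
    dt = itd(angle)
    # Phase difference between the ears for a 500 Hz tone at this ITD.
    # Below ~1500 Hz the difference stays under 360 degrees, so phase
    # is an unambiguous cue only at low frequencies, as the text notes.
    phase = 360.0 * 500.0 * dt
    print(f"{angle:>2} deg: ITD = {dt * 1000:.2f} ms, "
          f"500 Hz phase difference = {phase:.0f} deg")
```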
Hearing
 Brain Circuits for Locating Sounds
 The “time of arrival” circuit (studied most extensively)
contains coincidence detectors.
 In this circuit, a longer pathway from one ear compensates
for the delay in sound reaching the other ear.
 A coincidence detector fires most when it receives input
from both ears at the same time.
 As a result, each detector is specialized for sounds arriving
at a particular angle to the body (a toy version appears after
Figure 9.19).
Difference in Time of Arrival Circuit
Figure 9.19
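A toy version of such a delay-line circuit (often called the Jeffress model) is sketched below; the delay values and the number of detectors are arbitrary choices for illustration.

```python
import numpy as np

# Each detector pairs a short axonal delay from one ear with a long
# delay from the other, so one particular interaural time difference
# (ITD) makes its two inputs arrive together.
delays = np.linspace(0.0, 0.6, 7)        # internal delays in ms

def best_detector(itd_ms):
    """Index of the detector whose delays best cancel the ITD.
    Left-ear input travels delays[i]; right-ear input travels
    delays[-1 - i]. Coincidence = smallest arrival mismatch."""
    left_arrival = itd_ms + delays       # sound reached the left ear late
    right_arrival = delays[::-1]
    mismatch = np.abs(left_arrival - right_arrival)
    return int(np.argmin(mismatch))

# Which detector fires most shifts systematically with source angle.
for itd in (0.0, 0.2, 0.4, 0.6):         # 0 = source straight ahead
    print(f"ITD {itd} ms -> detector #{best_detector(itd)} fires most")
```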
Hearing
• Next, this directional information must be
integrated with
information from the visual environment
and information about the position of the body in
space.
– This involves the parietal lobes, part of the dorsal
“where” stream.
Language
 Language includes the generation and understanding of
written, spoken, and gestural communication.
 Broca’s area was discovered in 1861 when Paul Broca studied a stroke
patient with injury in the frontal area.
• Symptoms of Broca’s aphasia include:
 nonfluent speech;
 anomia, or trouble finding words;
 articulation problems;
 lack of grammatical, or function, words.
• Reading and writing are impaired as much as speech.
• Comprehension is impaired when the meaning depends on
grammatical words.
Language
– Wernicke’s area is in the posterior portion of the left
temporal lobe.
• Wernicke’s Aphasia
 The individual has trouble understanding spoken
and written language.
 However, the term receptive aphasia is
misleading, because the patient has as much
difficulty producing language as understanding it.
 Speech, for example, is fluent but meaningless
(and is often referred to as “word salad”).
Language-Related Areas of the Cortex
Figure 9.20
Language
• The Wernicke-Geschwind model is an effort to explain how
Broca’s area and Wernicke’s area interact to produce
language.
– Answering a Spoken Question:
AUDITORY CORTEX ➞ WERNICKE’S AREA ➞ BROCA’S AREA
Broca’s area then communicates with the facial area of the motor
cortex to produce speech.
– Reading Aloud: The visual information is first transformed into an
auditory form in the angular gyrus; then
ANGULAR GYRUS ➞ WERNICKE’S AREA ➞ BROCA’S AREA
– What would happen if the response is to be written?
– This model is generally accurate but oversimplified; for example,
functions are not so localized.
Wernicke-Geschwind Model of Language
Figure 9.21
Language
• Reading, Writing, and Their Impairment
– Alexia, the inability to read, and agraphia, the inability to
write, are presumably due to disruption of pathways in the
angular gyrus.
• These pathways connect the visual projection area
with the auditory and visual association areas.
– Dyslexia, an impairment of reading, can be acquired
through damage, but is more often developmental.
Language
Brain Differences in Dyslexia
• The planum temporale, where Wernicke’s area is located,
is typically larger on the left than on the right; in people
with dyslexia it is larger on the right or equal in size.
• Neurons in the left planum temporale lack orderly
arrangement.
• The most reliably identified genes are involved in
neuron guidance and migration.
Anomalies in the Dyslexic Brain
Figure 9.24
Left planum temporale in a normal brain (left)
and in the brain of a person with dyslexia (right)
Language
– People with dyslexia have both auditory and visual-perceptual difficulties.
• They have trouble detecting the frequency and amplitude
changes that distinguish letter sounds.
• Words are read backwards, mirror-image letters
(b and d) are confused, and words appear to move
around on the page.
– According to the magnocellular hypothesis, dyslexia involves
deficiencies in auditory and visual magnocellular neurons.
Language
– According to the phonological hypothesis, the major
problem is an impairment of phoneme processing.
• A phoneme is a small sound unit that distinguishes one
word from another (as in book versus cook).
• fMRI indicates the problem is in an auditory word
analysis area, not in the area that recognizes words by
their visual form.
• Dyslexia occurs much less frequently in countries where
the languages are phonologically simpler.
Language
 Recovery From Aphasia
 The right hemisphere can take over language functions
following left-hemisphere damage, as long as the injury
occurs early in life.
 If damage occurs later in life, language control is more likely
to shift into bordering areas in the left hemisphere.
– The ability of other areas to take over language functions
may be due to their normal participation in language.
For example:
Language
• The right hemisphere contributes prosody to speech;
prosody is the use of intonation, emphasis and
rhythm to convey meaning.
• The right hemisphere also is important in
understanding information from language that is not
specifically communicated by the meaning of the
words:
 when meaning must be inferred from an entire
discourse;
 when the meaning is figurative rather than literal,
as in the moral of a story.
Language
 A Language-Generating Mechanism?
 Children have a remarkable readiness to learn language; on average,
they learn a new word every 90 waking minutes.
 This readiness led researchers to believe there is a language
acquisition device, a part of the brain that is dedicated to learning
and producing language.
– Other researchers agree that there are biological mechanisms
that make language acquisition so easy.
• For example, speaking and signing children follow the same
sequences in learning language.
– Their interpretation: Language has co-opted areas specialized for
abilities that language requires.
Language
• Specializations in the brain suggest that it is innately
well fitted for creating and learning language.
– The left hemisphere is dominant for language in 90% of right-handed people and in most left-handers.
– Broca’s area is larger and the lateral fissure and planum temporale
are longer on the left than on the right.
– Even newborns show left-hemisphere response to speech.
– Sign language activates the left hemisphere, even in individuals
deaf from birth.
Language
 Animal language research has the potential to reveal the roots
of language.
– Because chimps lack an adequate larynx for forming word sounds,
researchers have attempted to communicate with them using
various sign and symbol languages.
• Washoe learned to use 132 signs, but critics said the
behavior was not complex enough to represent true
language.
• Washoe and three other chimps taught her son Loulis 47
signs and they regularly carried on conversations among
themselves.
• To some researchers, the communications of Loulis, the
bonobo Kanzi, and the parrot Alex are evidence of the
evolutionary foundations of our language abilities.
Language
• Other animals also share some of the brain organization
associated with human language.
– Chimps, monkeys, dolphins, and canaries show left-hemisphere
dominance for meaningful sounds or gestures.
– Similar structures in these animals likely provide
prelanguage communicative abilities.
 We share apparent genetic antecedents, such as FOXP2.
 However, the key to language probably lies in slight
modifications of those genes and in the genes that are
turned on or turned off.
Language
• We do share mirror neurons with other species, and they may be
critical to the development of language.
Figure 9.32
Brown areas are active during
imitation of others’ actions.
They overlap Broca’s and
Wernicke’s areas (yellow).