Module 10: Operant & Cognitive Approaches
Introduction to Psychology
Virginia Union University
Friday October 7, 2011
I will be hosting office hours regularly
beginning this afternoon. They will be from
6-8pm every Friday in the Psychology
computer lab of Grays (Room 207). If you
have any questions about material covered in
the class, your performance in class, or need
additional help, please feel free to see me
during that time.
Read the text before class
Attend class and take notes in your own
words
Ask questions if you do not understand
course material
Review the course lectures and rewrite in
your own words
If you continue to have questions, visit me
during office hours to get your questions
addressed
What is Classical Conditioning (2 sentence max
definition)?
Classical Conditioning Example – identify the UCR in
this example:
A student is conditioned to become anxious when
instructed by the teacher to “Clear your desk”, because
this usually suggests that a pop quiz will be given
What is the difference between Classical & Operant
Conditioning?
What is the difference between the responses given in
Classical & Operant Conditioning?
Operant Conditioning
Goal: to increase or decrease the rate of some response
Voluntary Response: A voluntary response must first be
performed before a reward/reinforcer, or punishment is given
Emitted Response: A voluntary response is emitted, or
acted/operated
Contingent on Behavior: Performance of a desired response
depends or is contingent upon what happens next, or the
consequences (whether it be a reinforcer/reward or
punishment)
Consequences: Animals & humans learn that performing or
emitting some behavior is followed by a consequence (reward
or punishment) that increases or decreases the chances of
performing the behavior again
Classical Conditioning
The basic learning process that involves
repeatedly pairing a neutral stimulus with a
response-producing stimulus until the neutral
stimulus elicits the same response
▪ Involves a reflexive behavior (or an automatic behavior)
▪ A new behavior isn’t produced; an existing behavior is elicited by a new stimulus
Ivan Pavlov!
Classical Conditioning Recap
Before conditioning
▪ Bell (NS) = no salivating (NR)
During conditioning
▪ Bell (NS) + Food (UCS) = salivating (UCR)
After conditioning
▪ Bell (CS) = salivating (CR)
20. Todd feels happy whenever he smells chocolate-chip cookies baking because, when he was a
child, his grandmother, whom he loved very much, used to bake chocolate-chip cookies for him
whenever he visited her. In this example, the CS is
A. the smell of chocolate-chip cookies baking.
B. the happiness Todd feels when he smells chocolate-chip cookies baking.
C. Todd's grandmother.
D. the happiness Todd felt when he visited his grandmother.
22. When I was a child, as a joke my mother used to put on a goalie mask, start up her chain saw,
and chase the neighborhood children around until they passed out from fright. Even now I have a
phobia of goalie masks and cannot watch a hockey game without soiling myself. For me, the
goalie mask is a(n)
A. CS.
B. CR.
C. UCS.
D. UCR.
23. A stimulus that causes an automatic (reflexive) response in an organism BEFORE the
organism has been classically conditioned is called a(n)
A. CS.
B. CR.
C. UCS.
D. UCR.
Stimulus Generalization
The occurrence of a learned response not only to the original
stimulus, but also to other similar stimuli. Usually, the more
similar the new stimulus is to the original conditioned stimulus,
the larger the conditioned response will be.
Example: Dog salivating to low & high pitched tones, Response
to Old Spice deodorant generalized to Old Spice body wash
Stimulus Discrimination
The occurrence of a learned response to a specific stimulus, but
not to other similar stimuli
Example: Dog salivating to high but not low pitched tones,
Response to Old Spice deodorant but NOT Dove deodorant
Extinction
The gradual weakening and apparent disappearance of conditioned
behavior.
In classical conditioning, occurs when the CS is repeatedly presented
without the UCS
Example: If the ringing bell is no longer presented with food to
Pavlov’s dogs, eventually they will stop salivating to the sound of a bell
Spontaneous Recovery
The reappearance of a previously extinguished CR after a period of
time without exposure to the CS
Demonstrates that learned response does not become eliminated
during extinction
Example: Dog begins to salivate again to the sound of a bell after a
period of rest following extinction
A guy is conditioned to use corny pick-up lines on women
“Girl, you must be from Tennessee, cause you’re the only ten I see….”
Women have responded in the past, but stop doing so &
begin shooting him down… it becomes downright
embarrassing
The guy learns (hopefully) to stop using his lines on women
because they’re not working
Extinction is in effect
Time passes without the guy using any of his go-to lines. He
encounters another attractive woman, and automatically
starts to use his lines again (“If I could rearrange the alphabet,
I would put U & I together”)
Spontaneous Recovery has taken place!
How do they make you feel?
Clip 1: http://youtu.be/gMLPnk9-6MM
Clip 2: http://youtu.be/QmMQfTJ3gYk
Clip 3: http://youtu.be/C4g8rLShURw
Clip 4: http://youtu.be/VgSMxY6asoE
Behaviorist John Watson (who supported the
study of observable behaviors) believed that
just like dogs reflexively salivated to food,
human emotional responses were also
reflexive
He studied fear, rage and love – three
emotions he believed represented inborn and
natural unconditioned reflexes
The Case of Little Albert
Before conditioning
▪ White Rat (NS) = No Fear/Crying (NR)
During conditioning
▪ White Rat (NS) + Loud Noise of a Hammer (UCS) = Fear/Crying (UCR)
After conditioning
▪ White Rat (CS) = Fear/Crying (CR)
Little Albert was conditioned to fear a white rat, and expressed his fear
by crying. This fear generalized to other furry objects (e.g., rabbits,
cotton, a fur coat & a Santa Claus beard).
Other examples of conditioning emotional responses?
Blood pressure increasing when you hear the name of someone you
strongly dislike?
Operant Conditioning (applying this formula to Bart the
Bear)
Goal: to increase the rate of Bart holding the teddy bear
(response)
Voluntary Response: Bart holding a teddy bear is a voluntary
response b/c he can perform it at will. He must perform this
response before receiving a reward/reinforcement.
Emitted Response: Bart voluntarily emits the response of holding
the teddy bear.
Contingent on Behavior: Bart holds the teddy bear and is
rewarded or reinforced for his actions with an apple.
Consequences: Bart learns that holding the teddy bear means
that he will receive an apple. The apple is desirable, thus
increasing the chances that he will hold the teddy bear again.
Apply the operant conditioning formula that was just discussed to
this example:
Training a puppy to obey the command “sit” by using treats.
Here is the previously mentioned formula:
Goal: to increase or decrease the rate of some response
Voluntary Response: A voluntary response must first be performed
before a reward/reinforcer, or punishment is given
Emitted Response: A voluntary response is emitted, or acted/operated
Contingent on Behavior: Performance of a desired response depends
or is contingent upon what happens next, or the consequences
(whether it be a reinforcer/reward or punishment)
Consequences: Animals & humans learn that performing or emitting
some behavior is followed by a consequence (reward or punishment)
that increases or decreases the chances of performing the behavior
again
The discovery of operant conditioning involved two different
researchers
E.L. Thorndike
Built a series of puzzle boxes from which a cat could learn to escape by
learning to make a specific response. Outside the puzzle box was a
reward for escaping – a piece of fish.
Thorndike graphed his data, and found that over time the cat needed
less time to escape.
Through trial and error, the cat learned to associate certain responses
with successfully escaping the box & gaining the food reward
The Law of Effect: states that behaviors followed by positive
consequences are strengthened, while behaviors followed by
negative consequences are weakened
Thorndike’s emphasis on studying the consequences of
behavior was further developed by B.F. Skinner
B.F. Skinner
Interested in analyzing the ongoing behaviors of animals, but
needed an objective way to measure the ongoing behaviors
Created a unit of behavior called an operant response
Operant response: a response that can be modified by its
consequences and is a meaningful unit of ongoing behavior that
can be easily measured (ex: Bart picking up the teddy bear)
By measuring/recording operant responses, Skinner was able to
analyze animals’ ongoing behaviors during learning. He called
this kind of learning operant conditioning.
The Skinner Box
An empty box for rats that has a bar for the rat to press and an
empty food bowl
The box is automated to record the rat’s bar presses and deliver
food pellets
Used to study how an animal’s ongoing behaviors may be
modified by changing the consequences of what happens after
the bar is pressed
3 factors involved in operantly conditioning the rat to press the
bar in the Skinner Box
1. The rat has not been fed for hours and will be more likely to roam
looking for food and eat food that is rewarded
2. The goal is to condition the rat to press the bar; this is the operant
response
3. A procedure called shaping is used to get the rat to press the bar
Shaping
A procedure in which an experimenter
successively reinforces behavior that leads up to
or approximates the desired behavior
▪ Rat rewarded for first facing the bar
▪ Rat rewarded for touching the bar
▪ Rat rewarded for pressing the bar
Other examples of shaping: teaching a baby how
to talk
Other examples?
Immediate reinforcement
Skinner explains that in shaping behavior, the food
pellet or reinforcer should immediately follow
the desired behavior.
By following immediately, the reinforcer is associated
with the desired behavior & not some other behavior
that happened to occur.
▪ Superstitious behavior: a behavior that increases in frequency
because its occurrence is accidentally paired with the delivery
of a reinforcer
▪ Ex: Wearing a specific item of clothing on the day that you
perform outstandingly in a game
Reinforcement
A consequence that occurs after a behavior &
increases the chance that the behavior will occur
again
Positive Reinforcement
The presentation of a stimulus that increases the
probability that a behavior will occur again
Examples:
▪ Bart receiving an apple for holding the teddy bear
▪ A hungry rat presses a bar in its cage and receives food
▪ A student studies for a course and receives a good
grade.
Other Examples?
Negative Reinforcement
An aversive or unpleasant stimulus whose removal
increases the likelihood that the preceding response will
occur again
Examples:
▪ Taking an aspirin to remove a headache
▪ A rat is placed in a cage and immediately receives a mild electrical
shock on its feet. The rat presses a bar and the shock stops.
▪ Your safe driving record for a period of time leads your car
insurance company to reduce your monthly car insurance premium
Other Examples?
Primary Reinforcer
A stimulus such as food, water, or sex, that is innately
satisfying and requires no learning on the part of the
subject to become pleasurable
▪ You don’t have to learn to like it
Secondary Reinforcer
Any stimulus that has acquired its reinforcing power
through experience; secondary reinforcers are learned through
pairing with primary reinforcers or other secondary reinforcers
▪ Example: coupons, money, grades
Punishment
A consequence that occurs after a behavior and
decreases the chance that the behavior will occur
again
Positive Punishment
Presenting an aversive or unpleasant stimulus
after a response. The aversive stimulus decreases
the chances that the response will occur again.
Examples:
▪ Spanking/scolding a young child for misbehaving
▪ Getting a ticket for speeding
Other Examples?
Negative Punishment
Removing a reinforcing stimulus after a response.
This removal decreases the chances that the
response will occur again.
Examples:
▪ Timeout or getting allowance or toys removed for not obeying
parents
▪ Having your license suspended for not paying fines
Other Examples?
                       Stimulus Presented        Stimulus Removed
Response Increased     Positive Reinforcement    Negative Reinforcement
Response Decreased     Positive Punishment       Negative Punishment
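The four categories form a simple decision table along two dimensions: whether a stimulus is presented or removed, and whether the response increases or decreases. As an illustration (not part of the lecture), a minimal Python sketch of that mapping:

```python
# Sketch: map the two dimensions of an operant consequence to its category.
# "Positive" = a stimulus is presented; "Negative" = a stimulus is removed.
# Reinforcement increases the response rate; punishment decreases it.

def consequence_type(stimulus_presented: bool, response_increased: bool) -> str:
    """Return the operant-conditioning category for a consequence."""
    valence = "Positive" if stimulus_presented else "Negative"
    effect = "Reinforcement" if response_increased else "Punishment"
    return f"{valence} {effect}"

# A hungry rat presses a bar and receives food: stimulus presented, response increases.
print(consequence_type(True, True))    # Positive Reinforcement
# Taking aspirin removes a headache: stimulus removed, response increases.
print(consequence_type(False, True))   # Negative Reinforcement
# Getting a ticket for speeding: stimulus presented, response decreases.
print(consequence_type(True, False))   # Positive Punishment
# Toys taken away for misbehaving: stimulus removed, response decreases.
print(consequence_type(False, False))  # Negative Punishment
```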
Apply your understanding of the four categories of
Reinforcement & Punishment to creating 4 unique scenarios
that all center on one of the common themes listed below
1. School
2. Money
3. Car
Schedule of reinforcement
When reinforcement is given influences how
subsequent behavior is shaped
▪ Continuous Reinforcement
▪ The desired behavior is reinforced every single time it occurs
▪ This schedule is best used during the initial stages of learning
▪ Once the behavior is learned, there is usually a switch to a partial
reinforcement schedule
▪ Partial Reinforcement
▪ The desired behavior is reinforced only part of the time
▪ Behaviors are learned more slowly, but the response is more resistant
to extinction
Partial Reinforcement Schedules
Fixed-ratio (FR) Schedule
▪ Reinforcement occurs after a fixed number of responses
▪ Schedule produces a high, steady rate of responding
with only a brief pause after the delivery of the
reinforcer
▪ Examples:
▪ Rat on a 10-to-1 fixed ratio schedule receives 1 food pellet per 10
bar presses
▪ Frequent buyer punch cards
▪ Other Examples?
Partial Reinforcement Schedules
Variable-ratio (VR) Schedule
▪ Reinforcement occurs after an average number of
responses, which varies from trial to trial
▪ Schedule creates a high, steady rate of responding
▪ Examples:
▪ Rat on a variable ratio 20 schedule might have to press the bar
25X on the 1st trial & 15X on the 2nd trial – the ratio works out to a
predetermined average
▪ Gambling
▪ Other Examples?
Partial Reinforcement Schedules
Fixed-interval (FI) schedule
▪ Reinforcement delivered after a preset time interval has
elapsed
▪ Schedule causes high amounts of responding near the end of
the interval, but much slower responding following the
delivery of the reinforcer
▪ Examples:
▪ Rat on a 2 minute FI schedule would receive food pellets 2 minutes
after the first bar press, independent of the number of additional bar
presses
▪ Paychecks
▪ Other Examples?
Partial Reinforcement Schedules
Variable-interval (VI) schedule
▪ Reinforcement occurs for the first response emitted after an
average amount of time has elapsed, but the interval varies from
trial to trial
▪ Produces slow, steady rate of responding
▪ Examples:
▪ Rat on a VI-30 second schedule might be reinforced for the 1st bar press
after 10 seconds for the 1st trial, after 50 seconds for the 2nd trial, and
after 30 seconds for the 3rd trial – time elapsed works out to a
predetermined average amount of time
▪ Whining kids
▪ Other Examples?
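The ratio schedules above are simple rules deciding, response by response, whether a reinforcer is delivered. As an illustration (not part of the lecture), a minimal Python sketch of FR and VR schedules; the function names and parameter values are assumptions for the example:

```python
import random

def fixed_ratio(n):
    """FR-n: reinforce exactly every n-th response (e.g., bar press)."""
    count = 0
    def respond():
        nonlocal count
        count += 1
        if count == n:
            count = 0
            return True   # reinforcer delivered
        return False
    return respond

def variable_ratio(n, rng=random):
    """VR-n: reinforce after a varying number of responses averaging n."""
    count = 0
    target = rng.randint(1, 2 * n - 1)  # varies per trial; mean is n
    def respond():
        nonlocal count, target
        count += 1
        if count >= target:
            count = 0
            target = rng.randint(1, 2 * n - 1)
            return True
        return False
    return respond

# A rat on a 10-to-1 fixed ratio schedule: 100 bar presses yield 10 pellets.
fr = fixed_ratio(10)
print(sum(fr() for _ in range(100)))  # 10
```

Interval schedules (FI, VI) would be written the same way, except the rule consults elapsed time since the last reinforcer rather than a response count.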