Disequilibrium Approaches

Disequilibrium Approaches
A newer model!
Goal of behavior analysis/operant
conditioning
• Clarify control of human behavior by reinforcement contingencies
– Many techniques have been developed
– Used in wide variety of settings
• Problem: specifying ahead of time what works:
– No a priori way of determining what will be a reinforcer
– Makes for problems in applied settings
– Even lab research affected by this
• What is usually done: reinforcer assessments
– Very time consuming
– Not very accurate
Successful approach to a priori assessment
should satisfy 3 practical requirements
Identification of reinforcing circumstances should involve:
• A small number of simple, nonintrusive procedures
– Must be widely applicable
– Require no special apparatus
– No novel or disruptive stimuli to be introduced
• Must be accurate and complete
• Result should be adaptable to variety of situations,
rather than limited to small number of stimuli,
responses or settings
Transituational Solution:
Conceptual Analysis
• Meehl (1950) assumption:
– Simplest method for figuring out what works: use what circumstances
have worked in the past
– If it works in one setting, it should work in others
• Three important assumptions about reinforcing stimuli and their "setting
conditions"
– Reinforcers and punishers form unique, independent sets of
transituationally effective stimuli
– Essential function of the contingency = produce temporally proximate
pairings between response and reinforcer
– Deprivation schedule specifying long-term denial of access to
reinforcer = critical setting condition
Transituational Solution:
Conceptual Analysis
• Problem:
– None of these holds up to data
– The assumptions are incorrect!
• Reinforcers and punishers are not mutually exclusive or transituational
• Premack: Drinking and wheel running could reinforce each other
• Applied settings: this is observed all the time
• Temporal contiguity not sufficient to produce reinforcement:
• Premack (1965): pairing wheel running w/drinking had no effect in
absence of contingency schedule
• Appears that the contingency is key, not temporal pairing
• Long-term deprivation is neither necessary nor sufficient: short-term deprivation
works
Application problems w/this approach:
• Is STILL the most often used technique
• Assessment techniques are intrusive
• Not very effective: Reinforcers seem to change across
individuals/settings/time
• Does not account for satiation effects, etc.
• Lacks flexibility, accuracy
• Ethical questions when using food, certain punishments
Premack's Probability Differential
Hypothesis: (Grandma’s Law)
• Premack (1959; 1965): distinct improvement over transituational view
– A schedule in which a higher probability response is contingent upon a lower
probability response will result in reinforcement
– If you eat your peas (low prob) then you can have chocolate pudding (high
prob)
• Important change in concept of reinforcement in several ways:
– Reinforcement is related to access to a response
– Probability of response determined by probability (duration) of that response
in FREE BASELINE
• Shows that the transituational view is a special case of the probability differential:
– Highest probability response contingent upon a lower probability response
– As long as it is the highest probability response, it should appear transituational
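A minimal Python sketch of this probability-differential check (illustrative only; the function name, behaviors, and durations are assumptions, not from the slides). Response probabilities are estimated from free-baseline durations, and reinforcement is predicted when the contingent response carries the higher baseline probability:

    # Premack's probability-differential rule, estimated from free-baseline durations.
    # All names and numbers below are illustrative assumptions.
    def premack_predicts_reinforcement(baseline_durations, instrumental, contingent):
        """Return True if making access to `contingent` depend on `instrumental`
        should reinforce, i.e., the contingent response has the higher
        free-baseline probability (share of observed duration)."""
        total = sum(baseline_durations.values())
        p = {resp: dur / total for resp, dur in baseline_durations.items()}
        return p[contingent] > p[instrumental]

    # Example: 30-minute free baseline
    baseline = {"eat_peas": 2, "eat_pudding": 10, "other": 18}
    print(premack_predicts_reinforcement(baseline, "eat_peas", "eat_pudding"))  # True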
Premack's Probability Differential
Hypothesis: (Grandma’s Law)
• Some problems, however:
– Incomplete and unclear about several things:
– Fails to specify conceptual rules for setting values of the
contingency schedule: pair 1:1, 5:1, or what?
• Unclear about role of reduction in contingent
responding relative to baseline that typically
accompanies an increase in instrumental responding
• Unclear about role of long-term deprivation
Application
• Probably most widely used behavioral technique
• Popularity due to several desirable characteristics:
– Procedures for identification are clear, relatively non-disruptive
– More accurate than transituational method
– Allows for far wider choices of Sr's and P's
• Problems even in applied arena:
– Duration of discrete response hard to measure
– Duration not always a good measure
– Problem in that must always use higher probability responses as
reinforcers
– Time consuming to measure baselines
Response Deprivation and
Disequilibrium Approach
• Assumption: reinforcement results from adaptation of motivational
processes underlying free baseline responding to the performance
constraints imposed by a contingency schedule
• What's that?
– Constraining a behavior (that would naturally occur in free baseline) to
a set contingency schedule
– Only allowing free baseline behavior to occur at certain levels, rates,
times
– Restrict free baseline via a contingency schedule
• Really looking at molar equilibrium theory:
– Free baseline = equilibrium state
– Disrupt this equilibrium state via a contingency schedule
– Assess free-baseline of instrumental and contingent responding
before imposition of contingency schedule
Response Deprivation and
Disequilibrium Approach
• Does NOT view baseline as stable hierarchy of
reinforcement value:
– Estimate of relative motivation underlying different
responses
– That is- can change from situation to situation
– Idea that just must disrupt baseline ratio and you create
behavioral effects
• By imposing different contingencies- can create
reinforcement and punishment conditions:
– Response deficit: reinforcement
– Response excess: punishment
Response deficits and satiation
• Response deficit: I/C > Oi/Oc
– If individual maintains instrumental responding at
baseline level, would engage in less of baseline level of
contingent responding
• Baseline pea eating = 0; baseline chocolate pudding eating = 10
– Contingency changes this: Must eat 2 peas for 1 spoon of
chocolate pudding
• Thus: if I continue to eat my baseline level of peas (0), I would
engage in less than baseline chocolate pudding eating (10)
– Must increase pea eating (to 20 peas) to maintain
chocolate pudding eating (of 10 spoonfuls)
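A minimal Python sketch of this deficit test (illustrative only; the function name and the cross-multiplied comparison are my additions to avoid dividing by zero-rate baselines, and the numbers come from the pea/pudding example above):

    # Classify a contingency schedule as creating a response deficit (reinforcement),
    # a response excess (punishment), or equilibrium (no effect).
    # I, C = schedule terms per cycle; Oi, Oc = free-baseline amounts.
    def schedule_effect(I, C, Oi, Oc):
        lhs, rhs = I * Oc, C * Oi  # compares I/C with Oi/Oc via cross-multiplication
        if lhs > rhs:
            return "response deficit -> reinforcement predicted"
        if lhs < rhs:
            return "response excess -> punishment predicted"
        return "equilibrium -> no effect predicted"

    # Slide example: must eat 2 peas (I) per 1 spoon of pudding (C);
    # baselines: 0 peas eaten (Oi), 10 spoons of pudding (Oc)
    print(schedule_effect(I=2, C=1, Oi=0, Oc=10))  # response deficit -> reinforcement predicted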
Response deficits and satiation
• Response excess: I/C < Oi/Oc
– If the individual maintains instrumental responding at
baseline level, would engage in too much of baseline level
of contingent responding
• Baseline sister hitting = 1; baseline spankings = 0
– Contingency changes this: Must receive 1 spanking for
each episode of hitting your sister
• If I hit my sister at baseline levels, I would engage in/receive more
spankings (+1) than I engaged in/received during baseline (0)
– Thus, likely to reduce the number of times I hit my sister
(to 0) to maintain receiving 0 spankings.
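Using the same hypothetical schedule_effect sketch from the previous slide, the spanking contingency comes out as a response excess:

    # Slide example: 1 spanking (C) per episode of hitting (I);
    # baselines: 1 hitting episode (Oi), 0 spankings (Oc)
    print(schedule_effect(I=1, C=1, Oi=1, Oc=0))  # response excess -> punishment predicted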
Why an improvement?
• Improvement for several reasons:
– Specifies rules for setting terms of the schedule (formalized below):
• I/C > Oi/Oc for reinforcement effects
• I/C < Oi/Oc for punishment effects
• I = instrumental response
• C = contingent response
• Oi = baseline rate of instrumental response
• Oc = baseline rate of contingent response
• No limitations on units for measuring baseline behaviors, as long as the same units are kept in the
contingency setting and ratio
• Sets NO restrictions on what can be a reinforcer or a punisher
• Note: a lower probability response can reinforce a higher probability response, as
long as the setting conditions are met
• Shows that long-term denial is NOT necessary:
– Critical: allows for deprivation or disequilibrium within a session
– Immediate deprivation (disruption of ongoing behavior) works
– Long-term denial is a special case of disruption
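Written out in LaTeX, the schedule-setting rules from this slide are as follows (the cross-multiplied forms are an added convenience for zero-rate baselines, not something stated on the slide):

    \[ \frac{I}{C} > \frac{O_i}{O_c} \;\Leftrightarrow\; I\,O_c > C\,O_i \qquad \text{(response deficit: reinforcement predicted)} \]
    \[ \frac{I}{C} < \frac{O_i}{O_c} \;\Leftrightarrow\; I\,O_c < C\,O_i \qquad \text{(response excess: punishment predicted)} \]
    \[ \frac{I}{C} = \frac{O_i}{O_c} \qquad \text{(equilibrium: no change predicted)} \]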
Applications
• Several desirable reasons for using:
– Procedures specific
– Relatively non-disruptive
– More accurate
– Allows incredible flexibility: no set reinforcers or punishers
• Examples: Konarski (1980):
– Grade school kids
– Free baseline of coloring or working simple arithmetic problems
• Konarski (1985): EMH classroom
– Retarded children
– Working arithmetic problems and writing
• Incidental teaching
• Behavior contracting: Dougher study (1983)
• Good behavior game
• Overcorrection: two-part negative punishment technique
– Restitution
– Positive Practice
Incidental Teaching and the
Minimum Bliss Point Model
Farmer-Dougan, 1998
• Bitonic relationship between rate of reinforcement imposed by a
schedule and strength of reinforcement effect
– Response rate first increases, then decreases, as reinforcer rate increases
– When the schedule provides a very high rate of reinforcement (imposes only
slight disequilibrium), there is little change in instrumental responding
– When the schedule provides a very low rate of reinforcement (imposes severe
disequilibrium), there is little net reinforcement effect
• Thus, extreme rates of reinforcement should be less effective than
moderate rates
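A minimal Python sketch of this bitonic prediction, assuming the standard minimum-distance formula for ratio schedules, R1 = (Oi + k*Oc) / (1 + k^2), where k is the amount of contingent access earned per instrumental response (symbols are defined on the next slide); the baseline numbers are made up for illustration:

    # Bitonic prediction under an assumed minimum-distance formula for ratio schedules.
    Oi, Oc = 5.0, 60.0  # illustrative free-baseline instrumental and contingent rates

    def predicted_instrumental(k):
        """Predicted instrumental rate when each response earns k units of contingent access."""
        return (Oi + k * Oc) / (1 + k ** 2)

    for k in (0.05, 0.25, 1.0, 4.0, 20.0):  # very lean -> very rich schedules
        gain = predicted_instrumental(k) - Oi  # net reinforcement effect over baseline
        print(f"k = {k:5.2f}   predicted gain over baseline = {gain:6.2f}")
    # The printed gain rises and then falls: moderate reinforcement rates give the largest effect.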
Can mathematically predict
reinforcement effects!
• Simple FR schedule: according to minimum distance models, the response rate
produced by a ratio schedule is equal to:
– R1 = predicted rate of response
– Oi = rate of unconstrained instrumental response
– Oc = rate of unconstrained contingent response
– k = number of units of reinforcement per response (inverse of the FR requirement)
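One standard minimum-distance result consistent with the symbols above (offered as a sketch; the slide may state a different variant): a fixed-ratio schedule constrains contingent responding to C = k*R1, and the model selects the point on that constraint line closest to the free-baseline bliss point (Oi, Oc):

    \[ \min_{R_1}\;(R_1 - O_i)^2 + (kR_1 - O_c)^2 \quad\Longrightarrow\quad R_1 = \frac{O_i + k\,O_c}{1 + k^2} \]

Differentiating and setting the result to zero gives the closed form; the predicted gain R1 - Oi rises and then falls as k increases, which matches the bitonic pattern described above.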
A Theoretical Bliss Point
[Figure: "Bliss point: Recess vs. Math" – minutes of recess (0–120) plotted against minutes of math problems (0–120)]
Plotted Bliss Point
Incidental teaching
• Accurately identifies reinforcers and increases generalization and
maintenance via use of naturalistic teaching
• Involves capturing a teaching moment (Hart and Risley, 1980)
– Subject initiates (verbally/physically) toward an item or activity
– Teacher immediately imposes contingency such that access to the
item/activity is blocked until the contingent response is emitted
– Immediate assessment of baseline and immediate imposition of momentary
disequilibrium
• Question: how often to disrupt?
– Minimum bliss point models suggest that moderate amounts of disruption should be
better than very high or very low interruption
Method
• 4 Head Start preschoolers
• Worked 1:1 in a workroom at Head Start
• Set of toy items for each child, and a set of 26 flash cards containing letters A to Z
• Task: ID letter expressively to gain access to toy
• Manipulated rate of disruption:
– Baseline (0)
– 25%
– 50%
– 75%
– 100%
Results!
• Little academic
behavior when not
disrupting (baseline)
– surprisingly, there was
some
– but differed by child
– Shows differences in
baseline rates
• Too much disruption =
no academic
responding!
• Moderate levels
worked best!!
Limitations on/Extensions of
Disequilibrium approach
• Not completely accurate
– Issues with measuring baseline
• How long a time horizon?
• Consistency across settings?
– Measuring baseline can be time consuming
– Only takes into account 2 behaviors (I and C), while many
more behaviors occur in any contingency setting
• Question of time frames: do baselines change w/time?
– Does constraining baseline affect or reset baseline?
Conclusions
• Strong need to predict reinforcement ahead of
time
– If we can't, reinforcement is not a very usable concept
– Early theories did not do this very well
• Reinforcers and punishers aren't things:
– No magic wand
– Reinforcement/punishment effects depend upon
extent to which contingency schedule constrains the
free distribution of responding