Unit 7: Base-Level Activation

February 25, 2003
Activation Revisited
 Both latency and probability of recall depend on activation
 Activation equation:

$A_i = B_i + \sum_j W_j S_{ji} + \sum_k P_k M_{ki} + \epsilon$

where $B_i$ is the base-level activation, $\sum_j W_j S_{ji}$ is spreading activation, $\sum_k P_k M_{ki}$ is partial matching, and $\epsilon$ is noise
 (sgp :bll t) turns on base-level learning
Base-Level Activation
 Base-level activation depends on the history of usage of a chunk
 Memory strength depends on
– How recently the chunk was used
– How much it was practiced
Base-Level Learning
[Timeline: presentations 1, 2, …, k, …, n of a chunk between time 0 and now]
Base-Level Learning
[Timeline: presentations 1, 2, …, k, …, n between time 0 and now, annotated with the terms of the equation below]

$B_i = \ln\!\left(\sum_{k=1}^{n} t_k^{-d}\right)$

$t_k$: time since the k-th presentation of the chunk i
$d$: decay parameter, set with (sgp :bll 0.5)

Mathematically, the equation transforms the ages of the presentations to conform to the functions that are optimal in the environment
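To make the equation concrete, here is a minimal Common Lisp sketch (illustrative only, not part of ACT-R; the function name is an assumption) that computes $B_i$ from a list of presentation ages:

(defun base-level-activation (ages &key (d 0.5))
  ;; B_i = ln( sum_k t_k^(-d) ), AGES in seconds since each presentation
  (log (reduce #'+ (mapcar (lambda (tk) (expt tk (- d))) ages))))

;; Example: presentations 10, 50, and 100 seconds ago with d = 0.5:
;; (base-level-activation '(10 50 100))
;;   => ln(10^-0.5 + 50^-0.5 + 100^-0.5) ≈ -0.58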
Power Law of Forgetting
 Strength of memories decreases with time, e.g.:
– Speed to recognize a sentence at various delays
– Number of paired associates that subjects recall
– People's ability to recognize the name of a TV show for varying numbers of years after it has been canceled
 More and more delay produces smaller and smaller losses
 This is the idea that individual events are forgotten according to a power function
 p = probability of recall
– p is a decreasing function of retention time
– p/(1-p) is a power function of retention time with exponent -d
– ln(p/(1-p)) is a linear function of ln(retention time) with slope -d (see the derivation sketched below)
– Accounts for the fact that each event age ($t_k$) decays at rate d
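A one-line derivation (assuming a single presentation and a unit noise scale, $s = 1$, in the retrieval-probability equation given at the end of this unit):

$B = \ln(t^{-d}) = -d\,\ln t, \qquad \ln\frac{p}{1-p} = B - \tau = -d\,\ln t - \tau, \qquad \frac{p}{1-p} = e^{-\tau}\,t^{-d}$

where $\tau$ is the retrieval threshold.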
Power Law of Learning
Memory improves with practice; recall often gets close to perfect, but speed keeps increasing with practice even after that.
This is the idea that the accumulating sum of events is also a power function.
A proof (omitted on the slide, but sketched below) shows that this holds for evenly spaced presentations.
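The omitted step can be recovered by approximating the sum with an integral: for n presentations spread evenly over a lifetime L,

$\sum_{k=1}^{n} t_k^{-d} \;\approx\; \frac{n}{L}\int_0^{L} t^{-d}\,dt \;=\; \frac{n\,L^{-d}}{1-d}$

so $B_i \approx \ln\frac{n}{1-d} - d\,\ln L$, a power function of n, and exactly the optimized-learning equation given later in this unit.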
 p = need probability, n = number of occurrences
– p is a linear function of n
– p/(1-p) is approximately a power function of n
– ln(p/(1-p)) is a linear function of ln(n)
– Accounts for the sum of all the event ages ($t_k$'s) contributing
[Figure: base-level activation (y-axis, -2 to 2) as a function of time (x-axis, 0 to 200)]

How many $t_k$'s are there at times 40, 10, and 100? What are they?
What Is an Event Presentation?
 Creating a new chunk
(p my-production
   =goal>
      isa associate
      term1 vanilla
      term2 3
==>
   +goal>
      isa associate)
 Re-creating an old chunk
 Retrieving and harvesting a chunk
Optimized Learning
 At each moment when chunk i could potentially be retrieved, ACT-R needs to perform n new computations, and for each chunk it needs to store all n presentation times
 Optimized learning is a fast approximation: 1 operation per potential retrieval
 (sgp :ol t) turns optimized learning on
Optimized Learning Equation
Optimized learning works when the n presentations are spaced approximately evenly:

$B_i = \ln\!\left(\frac{n}{1-d}\right) - d\,\ln L$

where n is the number of presentations and L is the lifetime of the chunk (the time elapsed since its creation)

[Timeline: presentations 1, 2, …, k, …, n spaced evenly between time 0 and now]
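Continuing the earlier sketch, the optimized form can be compared with the exact sum (again illustrative only; the function names are assumptions):

(defun base-level-optimized (n life &key (d 0.5))
  ;; B_i = ln( n / (1 - d) ) - d * ln(L)
  (- (log (/ n (- 1 d))) (* d (log life))))

;; Five presentations spaced evenly over a 100-second lifetime
;; (ages 10, 30, 50, 70, 90 seconds):
;; (base-level-activation '(10 30 50 70 90)) => ≈ -0.14 (exact sum)
;; (base-level-optimized 5 100)              => 0.0 (approximation)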
[Figure: exact activation and the optimized-learning approximation (y-axis, -2 to 2) plotted against time (x-axis, 0 to 200)]
Paired-Associates Example
Study and recall word-digit pairs, e.g.:

vanilla    3

Each digit was used as a response twice.
20 paired associates; 8 trials.
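In a model, such a pair could be stored as a chunk of the associate type used earlier (a sketch; the chunk name is hypothetical):

(add-dm (vanilla-3 isa associate term1 vanilla term2 3))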
Paired Associates: Results
[Figure: data vs. model across trials 1-8; accuracy (0 to 1.2) in one panel and latency in seconds (0 to 2.5) in the other]

Accuracy: items fall below the retrieval threshold if they are not rehearsed soon.
Latency: power law of learning.
Homework: Zbrodoff’s Experiment
True or false?
   A + 3 = D   (true)
   G + 2 = H   (false)
Possible addends: 2, 3, or 4
Frequency manipulation:
• Control – each problem × 2
• Standard – 2-add × 3, 3-add × 2, 4-add × 1
• Reverse – 2-add × 1, 3-add × 2, 4-add × 3
3 blocks
Zbrodoff’s Data
Control Group (latencies in seconds):
            Two     Three   Four
   Block 1  1.840   2.460   2.820
   Block 2  1.210   1.450   1.420
   Block 3  1.140   1.420   1.170

Standard Group (smaller problems more frequent):
            Two     Three   Four
   Block 1  1.840   2.650   3.550
   Block 2  1.080   1.450   1.920
   Block 3  0.910   1.080   1.430

Reverse Group (larger problems more frequent):
            Two     Three   Four
   Block 1  2.250   2.530   2.420
   Block 2  1.470   1.460   1.110
   Block 3  1.240   1.120   0.870
Tips
 Compute the addition result when it is not available for retrieval
 You may add extra effort to the productions that do the computation (articulation):
(spp myproduction :effort .1)
 (setallbaselevels <n> <T>) gives all chunks a history of n presentations over time T
 (sgp :ga 0 :pm nil) turns off spreading activation and partial matching
 Change the retrieval threshold, latency factor, and noise (see the example below)
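For instance (parameter values are illustrative only, not recommendations):

(sgp :rt -0.5    ; retrieval threshold
     :lf 0.5     ; latency factor
     :ans 0.25)  ; activation noise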
Activation, Latency, and Recall
Activation:                $A_i = B_i + \sum_j W_j S_{ji} + \sum_k P_k M_{ki} + \epsilon$
Probability of retrieval:  $P_i = \frac{1}{1 + e^{-(A_i - \tau)/s}}$
Base level:                $B_i = \ln\!\left(\sum_{k=1}^{n} t_k^{-d}\right)$
Retrieval latency:         $T_i = F\,e^{-A_i}$

where $\tau$ is the retrieval threshold, s the noise scale, and F the latency factor
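Tying these together, a final piece of the illustrative Common Lisp sketch (parameter values are assumptions, not ACT-R defaults):

(defun retrieval-probability (a &key (tau 0.0) (s 0.25))
  ;; P = 1 / (1 + exp(-(A - tau)/s))
  (/ 1.0 (+ 1.0 (exp (- (/ (- a tau) s))))))

(defun retrieval-latency (a &key (f 0.5))
  ;; T = F * exp(-A), in seconds
  (* f (exp (- a))))

;; Example: a chunk with activation 0.5, threshold 0.0, noise 0.25:
;; (retrieval-probability 0.5) => ≈ 0.88
;; (retrieval-latency 0.5)     => ≈ 0.30 s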