Transcript: Calibration

The Modelling Process
Dr Andy Evans
This lecture
The modelling process:
Identify interesting patterns
Build a model of the elements you think interact and the processes / decide on variables
Verify model
Optimise/Calibrate the model
Validate the model/Visualisation
Sensitivity testing
Model exploration and prediction
Prediction validation
Parallelisation
Preparing to model
Verification
Calibration/Optimisation
Validation
Sensitivity testing and dealing with error
Preparing to model
What questions do we want answering?
Do we need something more open-ended?
Literature review
what do we know about fully?
what do we know about in sufficient detail?
what don't we know about (and does this matter)?
What can be simplified, for example by replacing it with a single number or an AI?
Housing model: detailed modelling of how mortgage rates vary with the economy, vs. a time series of rate data, vs. a single rate figure.
It depends on what you want from the model.
Data review
Outline the key elements of the system, and compare this with
the data you need.
What data do you need, what can you do without, and what
can't you do without?
Data review
Model initialisation
Data to get the model replicating reality as it runs.
Model calibration
Data to adjust variables to replicate reality.
Model validation
Data to check the model matches reality.
Model prediction
More initialisation data.
Model design
If the model is possible given the data, draw it out in detail.
Where do you need detail?
Where might you need detail later?
Think particularly about the use of interfaces to ensure
elements of the model are as loosely tied as possible.
Start general and work to the specifics. If you get the
generalities flexible and right, the model will have a solid
foundation for later.
Model design
[Class diagram: an Agent class with a Step method, specialised into Person (GoHome, GoElsewhere), Thug (Fight), and Vehicle (Refuel)]
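As a minimal sketch of how such a design might be coded, assuming a Python implementation (the lecture does not prescribe a language), the class and method names below follow the diagram above; the scheduling function and behaviours are illustrative only:

from abc import ABC, abstractmethod

class Agent(ABC):
    """The general interface: anything that acts once per model step."""
    @abstractmethod
    def step(self):
        """Advance this agent by one model iteration."""

class Person(Agent):
    def step(self):
        self.go_home()          # placeholder behaviour for the sketch
    def go_home(self): ...
    def go_elsewhere(self): ...

class Thug(Person):
    def step(self):
        self.fight()
    def fight(self): ...

class Vehicle(Agent):
    def step(self):
        self.refuel()
    def refuel(self): ...

def run(agents, iterations):
    # The scheduler relies only on the general Agent interface, so new
    # agent types can be added without touching this code - one way of
    # keeping model elements loosely tied, as suggested above.
    for _ in range(iterations):
        for agent in agents:
            agent.step()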
Preparing to model
Verification
Calibration/Optimisation
Validation
Sensitivity testing and dealing with error
Verification
Does your model represent the real system in a rigorous
manner without logical inconsistencies that aren't dealt with?
For simpler models attempts have been made to automate some of this, but social and environmental models are way too complicated.
Verification is therefore largely by checking rulesets with
experts, testing with abstract environments, and through
validation.
Verification
Test on abstract environments.
Adjust variables to test model elements one at a
time and in small subsets.
Do the patterns look reasonable?
Does causality between variables seem reasonable?
Model runs
Is the system stable over time (if that is expected)?
Do you think the model will run to an equilibrium or fluctuate?
Is that equilibrium realistic or not?
Preparing to model
Verification
Calibration/Optimisation
Validation
Sensitivity testing and dealing with error
Parameters
Ideally we’d have rules that determined behaviour:
If AGENT in CROWD move AWAY
But in most of these situations, we need numbers:
if DENSITY > 0.9 move 2 SQUARES NORTH
Indeed, in some cases, we’ll always need numbers:
if COST < 9000 and MONEY > 10000 buy CAR
Some you can get from data, some you can guess at, some you
can’t.
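As a hedged illustration of how such rules end up as parameterised code, here is a Python sketch; the threshold and distance are the example numbers from above, while the function and the agent.move() call are invented for illustration:

DENSITY_THRESHOLD = 0.9   # might come from data, a guess, or calibration
MOVE_DISTANCE = 2         # squares to move north when the rule fires

def crowd_rule(agent, local_density):
    # If the local crowd density is too high, move the agent north.
    if local_density > DENSITY_THRESHOLD:
        agent.move(0, MOVE_DISTANCE)   # (east, north) offsets; assumed API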
Calibration
Models rarely work perfectly.
Aggregate representations of individual objects.
Missing model elements
Error in data
If we want the model to match reality, we may need to
adjust variables/model parameters to improve fit.
This process is calibration.
First we need to decide how we want to get to a realistic
picture.
What are we going to calibrate against?
Initialisation: do you want your model to:
evolve to a current situation?
start at the current situation and stay there?
What data should it be started with?
You then run it to some condition:
some length of time?
some closeness to reality?
Compare it with reality (we’ll talk about this in a bit).
Calibration methodologies
If you need to pick better parameters, this is tricky. What combination of values best models reality?
Using expert knowledge.
Can be helpful, but experts often don’t understand the
inter-relationships between variables well.
Experimenting with lots of different values.
Rarely possible with more than two or three variables because of the combinatorial solution space that must be explored.
Deriving them from data automatically.
Solution spaces
A landscape of possible variable combinations.
Usually we want to find the minimum value of some optimisation function – usually the error between the model and reality.
[Figure: a solution-space surface plotting the optimisation function against potential solutions, with several local minima and a single global minimum (the lowest point)]
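A minimal sketch of exploring such a solution space by brute force, assuming a hypothetical run_model(a, b) function and some observed data; the optimisation function here is a simple sum of squared differences:

import itertools

def error(model_output, observed):
    # Optimisation function: sum of squared differences from reality.
    return sum((m - o) ** 2 for m, o in zip(model_output, observed))

def grid_search(run_model, observed, a_values, b_values):
    best = None
    for a, b in itertools.product(a_values, b_values):
        score = error(run_model(a, b), observed)
        if best is None or score < best[0]:
            best = (score, a, b)
    return best   # (lowest error found, parameter a, parameter b)

With a coarse grid this can easily settle near a local minimum rather than the global one, which is one reason automatic calibration methods are attractive.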
Calibration
Automatic calibration means sacrificing some of your data to generate the optimisation function scores.
Need a clear separation between calibration data and the data used to check the model is correct, or we could just be modelling the calibration data, not the underlying system dynamics (“overfitting”).
To know we’ve modelled the underlying dynamics, we need independent data to test against. This will show the model can represent similar system states without re-calibration.
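A sketch of that data separation, under the assumption that the observations can simply be shuffled and split; the helper name and the 70/30 split are illustrative:

import random

def split_data(observations, calibration_fraction=0.7, seed=0):
    data = list(observations)
    random.Random(seed).shuffle(data)
    cut = int(len(data) * calibration_fraction)
    return data[:cut], data[cut:]   # (calibration set, independent test set)

# Calibrate on the first part; score the finished model only on the part
# it has never seen.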
Preparing to model
Verification
Calibration/Optimisation
Validation
Sensitivity testing and dealing with error
Validation
Can you quantitatively replicate known data?
Important part of calibration and verification as well.
Need to decide on what you are interested in looking at.
Validation
If we can’t get an exact prediction, what standard can we judge
against?
Randomisation of the elements of the prediction.
eg. Can we do better at geographical prediction of urban areas than randomly throwing them at a map?
Doesn’t seem entirely fair, as the model has a head start if initialised with real data.
Business-as-usual
If we can’t do better than assuming no change, we’re not doing very well.
But this assumes no known growth, an assumption the model may not make.
Validation
Visual or “face” validation
eg. Comparing two city forms.
One-number statistic
eg. Can you replicate average price?
Spatial, temporal, or interaction match
eg. Can you model city growth block-by-block?
Visual comparison
[Figure: price maps for (a) an Agent Model, (b) a Hybrid Model, and (c) Real Data, compared visually]
Comparison stats: space and class
Could compare the number of geographical predictions that are right against the number you would expect to get right by chance: the Kappa statistic.
Construct a confusion matrix / contingency table: for each area,
what category is it in reality, and in the prediction.
            Predicted A    Predicted B
Real A      10 areas       5 areas
Real B      15 areas       20 areas
Fraction of agreement = (10 + 20) / (10 + 5 + 15 + 20) = 0.6
Probability Predicted A = (10 + 15) / (10 + 5 + 15 + 20) = 0.5
Probability Real A = (10 + 5) / (10 + 5 + 15 + 20) = 0.3
Probability of random agreement on A = 0.3 * 0.5 = 0.15
Comparison stats
Equivalents for B:
Probability Predicted B = (5 + 20) / (10 + 5 + 15 + 20) = 0.5
Probability Real B = (15 + 20) / (10 + 5 + 15 + 20) = 0.7
Probability of random agreement on B = 0.5 * 0.7 = 0.35
Total probability of random agreement = 0.15 + 0.35 = 0.5
Total probability of not agreeing randomly = 1 - (0.15 + 0.35) = 0.5
κ = (fraction of agreement - probability of random agreement) / (1 - probability of random agreement)
  = (0.6 - 0.5) / (1 - 0.5) = 0.1 / 0.5 = 0.2
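The worked example above can be reproduced with a short function; this is a sketch for a general square confusion matrix, not code from the lecture:

def kappa(matrix):
    """matrix[i][j] = count of areas really in class i, predicted as class j."""
    total = sum(sum(row) for row in matrix)
    observed = sum(matrix[i][i] for i in range(len(matrix))) / total
    expected = sum(
        (sum(matrix[i]) / total) *                  # P(real class = i)
        (sum(row[i] for row in matrix) / total)     # P(predicted class = i)
        for i in range(len(matrix))
    )
    return (observed - expected) / (1 - expected)

print(kappa([[10, 5], [15, 20]]))   # ≈ 0.2, as in the worked example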
Comparison stats
Tricky to interpret
κ              Strength of Agreement
< 0            None
0.00 – 0.20    Slight
0.21 – 0.40    Fair
0.41 – 0.60    Moderate
0.61 – 0.80    Substantial
0.81 – 1.00    Almost perfect
Comparison stats
The problem is that you are predicting in geographical space
and time as well as categories.
Which is a better prediction?
Comparison stats
The solution is a fuzzy category statistic and/or multiscale examination of the differences (Costanza, 1989).
Scan across the real and predicted maps with a larger and larger window, recalculating the statistic at each scale. The scale with the strongest correlation between the two is arguably the best scale the model predicts at.
The trouble is, scaling correlation statistics up will always
increase correlation coefficients.
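A sketch of that multi-scale scan, assuming both maps are numeric grids of the same shape (NumPy is used for convenience; the window sizes are illustrative and must leave more than one block for a correlation to be defined):

import numpy as np

def aggregate(grid, window):
    # Sum values within non-overlapping window x window blocks.
    rows, cols = grid.shape
    r, c = rows // window * window, cols // window * window
    trimmed = grid[:r, :c]
    return trimmed.reshape(r // window, window, c // window, window).sum(axis=(1, 3))

def correlation_by_scale(real, predicted, windows=(1, 2, 4, 8)):
    results = {}
    for w in windows:
        a = aggregate(real, w).ravel()
        b = aggregate(predicted, w).ravel()
        results[w] = np.corrcoef(a, b)[0, 1]
    return results   # correlation coefficient at each window size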
Correlation and scale
Correlation coefficients tend to increase with the scale of
aggregations.
Robinson (1950) compared illiteracy with the proportion of people classed as ethnic minorities in the US census. He found a high correlation for large geographical zones, less at state level, and none at the individual level.
Ethnic minorities lived in high-illiteracy areas, but weren’t necessarily illiterate themselves.
More generally, areas of effect overlap:
Road accidents
Dog walkers
Comparison stats
So, we need to make a judgement – best possible prediction for
the best possible resolution.
Comparison stats: Graph / SIM flows
Make an origin-destination matrix for model and reality.
Compare the two using some difference statistic.
The only problem is all the zero origins/destinations, which tend to reduce the significance of the statistics, not least if they give an infinite percentage increase in flow.
Knudsen and Fotheringham (1986) test a number of different
statistics and suggest Standardised Root Mean Squared Error
is the most robust.
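A sketch of such a comparison using one common formulation of the standardised root mean squared error (the RMSE divided by the mean observed flow); check the exact formulation against Knudsen and Fotheringham (1986) before relying on it:

import numpy as np

def srmse(observed, predicted):
    # observed and predicted are origin-destination matrices of flows.
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    return rmse / observed.mean()   # 0 = perfect match; larger = worse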
Preparing to model
Verification
Calibration/Optimisation
Validation
Sensitivity testing and dealing with error
Errors
Model errors
Data errors:
Errors in the real world
Errors in the model
Ideally we need to know if the model is a reasonable version of
reality.
We also need to know how it will respond to minor errors in
the input data.
Sensitivity testing
Tweak key variables in a minor way to see how the model
responds.
The model may be ergodic, that is, insensitive to starting conditions after a long enough run.
If the model does respond strongly is this how the real system
might respond, or is it a model artefact?
If it responds strongly what does this say about the potential
errors that might creep into predictions if your initial data isn't
perfectly accurate?
Is error propagation a problem? Where is the homeostasis?
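A minimal sketch of one-at-a-time sensitivity testing, assuming a hypothetical run_model(params) function that returns a single summary output:

def sensitivity(run_model, baseline_params, tweak=0.01):
    base_output = run_model(baseline_params)
    responses = {}
    for name, value in baseline_params.items():
        tweaked = dict(baseline_params)
        tweaked[name] = value * (1 + tweak)       # nudge one parameter by 1%
        responses[name] = run_model(tweaked) - base_output
    return responses   # how strongly the output moves for each parameter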
Prediction
If the model is deterministic, one run will be much like another.
If the model is stochastic (ie. includes some randomisation), you’ll need to run it multiple times.
In addition, if you’re not sure about the inputs, you may need
to vary them to cope with the uncertainty.
Monte Carlo testing
Where inputs have a distribution (error or otherwise), sample
from this using Monte Carlo sampling:
Sample such that the likelihood of getting a value is equal to
its likelihood in the original distribution.
Run the model until the results distribution is clear.
Estimates of how many runs are necessary run from 100 to
1000s.
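A sketch of Monte Carlo testing, assuming a hypothetical run_model(rate) function and a normally distributed uncertain input; both the distribution and its parameters are illustrative stand-ins:

import random

def monte_carlo(run_model, n_runs=1000, seed=0):
    rng = random.Random(seed)
    results = []
    for _ in range(n_runs):
        # Sample the uncertain input in proportion to its likelihood.
        rate = rng.gauss(mu=0.05, sigma=0.01)
        results.append(run_model(rate))
    return results   # inspect this distribution once it has stabilised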
Identifiability
In addition, it may be that multiple sets of parameters
would give a model that matched the calibration data well,
but gave varying predictive results. Whether we can identify
the true parameters from the data is known as the
identifiability problem. Discovering what these parameters
are is the inverse problem.
If we can’t identify the true parameter sets, we may want to
Monte Carlo test the distribution of potential parameter
sets to show the range of potential solutions.
Equifinality
In addition, we may not trust the model form because
multiple models give the same calibration results (the
equifinality problem).
We may want to test multiple model forms against each
other and pick the best.
Or we may want to combine the results if we think different
system components are better represented by different
models.
Some evidence that such ‘ensemble’ models do better.
The frontier of modelling
Individual level modelling is now commonplace.
Data is in excess, including individual-level data.
Network speeds are fast.
Storage is next to free.
So, what is stopping us building a model of everyone/thing in
the world?
Memory.
Processing power.
Memory
To model with any reasonable speed, we need to use RAM.
Gender: 1bit (0 = male; 1 = female)
1 bit = 1 person
1 byte = 8 people
1Kb = 1024 x 8 = 8192 people
1Mb = 1,048,576 x 8 = 8,388,608 (1024² x 8) people
1 Gb = 1,073,741,824 x 8 = 8,589,934,592 people
Seems reasonable, then. Typical models running on a PC have access to ~ a gigabyte of RAM.
Memory
Geographical location (° ′ ″ ‴, N and W): 8 ints = 256 bits
1 Gb = 33,554,432 people
This isn’t including:
a) The fact that we need multiple values per person.
b) That we need to store the running code.
Maximum agents for a PC ~ 100,000 — 1,000,000.
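The same arithmetic as a back-of-the-envelope Python snippet (1 GB of RAM assumed, as above):

BITS_PER_AGENT = 1 + 256           # gender bit + location (8 x 32-bit ints)
RAM_BITS = 1024**3 * 8             # 1 GB of RAM, in bits

print(RAM_BITS // BITS_PER_AGENT)  # ~33 million people, before any other
                                   # attributes or the running code itself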
Processing
Models vary greatly in the processing they require.
a) Individual level model of 273 burglars searching 30000
houses in Leeds over 30 days takes 20hrs.
b) Aphid migration model of 750,000 aphids takes 12 days to
run them out of a 100m field.
These, again, seem ok.
Processing
a) Individual level model of 273 burglars searching 30000
houses in Leeds over 30 days takes 20hrs.
100 runs = 83.3 days
b) Aphid migration model of 750,000 aphids takes 12 days to
run them out of a 100m field.
100 runs = 3.2 years
Ideally, models based on current data would run faster than
reality to make predictions useful!
Issues
Models can therefore be:
Memory limited.
Processing limited.
Both.
Solutions
If a single model takes 20hrs to run and we need to run 100:
a) Batch distribution: Run models on 100 computers, one
model per computer. Each model takes 20hrs. Only suitable
where not memory limited.
b) Parallelisation: Spread the model across multiple computers
so it only takes 12mins to run, and run it 100 times.
c) Somehow cut down the number of runs needed.
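As a single-machine stand-in for option (a), here is a sketch using Python's multiprocessing to farm independent runs out across CPU cores; run_model is hypothetical, and real batch distribution would use 100 separate machines:

from multiprocessing import Pool

def run_model(seed):
    # One full, independent model run for a given random seed.
    ...

if __name__ == "__main__":
    with Pool() as pool:                          # one worker per CPU core
        results = pool.map(run_model, range(100))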
Analysis
Models aren’t just about prediction.
They can be about experimenting with ideas.
They can be about testing ideas/logic of theories.
They can be used to hold ideas.