Transcript Slide 1

Uncertainty and Uncertainty Reduction Measures
How do we classify uncertainties? What are their
sources?
– Lack of knowledge vs. variability.
What type of measures do we take to reduce
uncertainty?
– Design, manufacturing, operations & postmortems
– Living with uncertainties vs. changing them
How do we represent random variables?
– Probability distributions and moments
Classification of uncertainties
Aleatory uncertainty: inherent variability
– Example: What does regular unleaded cost in Gainesville today?
Epistemic uncertainty: lack of knowledge
– Example: What will be the average cost of regular unleaded on January 1, 2014?
(Image source: http://www.ucan.org/News/UnionTrib/)
The distinction is not absolute; knowledge often reduces variability.
– Example: Gas station A averages 5 cents more than the city average, while gas station B averages 2 cents less. Scatter is reduced when measured from the station average!
[Image: British Airways 737-400]
A slightly different uncertainty classification

Type of uncertainty | Definition | Causes | Reduction measures
Error | Departure of average from model | Simulation errors, construction errors | Testing and model refinement
Variability | Departure of individual sample from average | Variability in material properties, construction tolerances | Tighter tolerances, quality control
Distinction between Acknowledged and Unacknowledged errors
Modeling and Simulation
Error modeling
• Model qualification, verification, and validation
often provide estimates of the errors associated
with the use of simulation.
• Experience in modeling similar problems may
provide additional guidance.
• The most common model of the errors is simple
bounds, for example ±10% (see the sketch after this slide).
• We often settle for larger errors than possible
because of computational costs or analysis
complexity.
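As a minimal sketch (Python), the error-bound bullet above can be made concrete: a hypothetical simulation output with a ±10% error bound becomes an interval on the prediction. The stress value and the bound are illustrative assumptions, not values from the slides.

# Sketch: turning a simple +/-10% error bound into an interval on a prediction.
# The predicted stress of 250 MPa is a hypothetical illustration.

def error_interval(simulated_value, relative_bound=0.10):
    """Return the (lower, upper) interval implied by a relative error bound."""
    return (simulated_value * (1.0 - relative_bound),
            simulated_value * (1.0 + relative_bound))

lo, hi = error_interval(250.0, 0.10)
print(f"Prediction 250.0 MPa with a +/-10% error model: true mean in [{lo}, {hi}] MPa")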
Uncertainty reduction
measures
Design: Refined simulation models, building
block tests. Aleatory or epistemic?
Manufacture: Quality control. A or E?
Operation: Licensing of operators, maintenance
and inspections. A or E?
Post-mortem: Accident investigations. A or E?
Living with uncertainties by using safety factors
Representation of
uncertainty
Random variables: Variables that can take
multiple values with probability assigned to
each value
Representation of random variables
– Probability density function (PDF)
– Cumulative distribution function (CDF)
– Moments: Mean, variance, standard
deviation, coefficient of variation (COV)
Probability density function (PDF)
• If the variable is discrete, the probability of each value is given by the probability mass function.
• For example, with a single die toss, the probability of getting 6 is 1/6. If you toss a pair of dice, the probability of getting twelve (two sixes) is 1/36, while the probability of getting 3 is 1/18 (checked in the sketch after this slide).
• The PDF is for continuous
variables. Its integral over a
range is the probability of being in
that range.
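A short Python check of the dice probabilities above: enumerate the 36 outcomes of a pair of dice and build the probability mass function of the sum.

from collections import Counter
from fractions import Fraction

# Probability mass function of the sum of two fair dice; reproduces the
# 1/36 (sum = 12, two sixes) and 1/18 (sum = 3) values quoted on the slide.
counts = Counter(d1 + d2 for d1 in range(1, 7) for d2 in range(1, 7))
pmf = {total: Fraction(c, 36) for total, c in sorted(counts.items())}

for total, p in pmf.items():
    print(total, p)   # 2 -> 1/36, 3 -> 1/18, ..., 12 -> 1/36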
Histograms
• Probability density functions have to be inferred from finite samples. The first step is a histogram.
• Histograms divide the samples into a finite number of ranges and show how many samples fall in each range (box).
• The histograms below were generated from a normal distribution with 50 and 500,000 samples; a sketch for regenerating them follows the figure.
[Figure: two histograms of normal samples, with 50 samples (left) and 500,000 samples (right).]
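A sketch of how such histograms can be regenerated in Python. The mean of 10 and standard deviation of 1 are assumptions read off the axis range of the figure, not values stated on the slide.

import numpy as np
import matplotlib.pyplot as plt

# Histograms of normal samples for two very different sample sizes.
# loc=10, scale=1 are assumed; the slide only says "normal distribution".
rng = np.random.default_rng()
fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for ax, n in zip(axes, (50, 500_000)):
    samples = rng.normal(loc=10.0, scale=1.0, size=n)
    ax.hist(samples, bins=20)
    ax.set_title(f"{n} samples")
plt.show()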
Number of boxes
• Each time we sample, the histogram will be different.
• The standard deviation (from sample to sample) of the height n of a box is approximately √n. Keep that below the change in n from one box to the next.
• The histograms below were generated with 5,000 samples from a normal distribution; a sketch that checks the √n rule follows the figure.
• With 8 boxes the s.d. is relatively small (~25) but the picture is coarse. With 20 boxes it is about right. With 50 boxes the s.d. is too high (~10) relative to the change from one box to the next.
[Figure: three histograms of 5,000 normal samples with 8, 20, and 50 boxes.]
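A quick numerical check of the √n rule of thumb: draw 5,000 normal samples many times, histogram each draw into 20 boxes, and compare the sample-to-sample standard deviation of each box height with the square root of its mean height. The number of repetitions and the bin edges are assumptions made for illustration.

import numpy as np

# Sample-to-sample standard deviation of a histogram box with about n samples
# in it is roughly sqrt(n) (the slide's rule of thumb).
rng = np.random.default_rng(0)
n_samples, n_repeats = 5_000, 200
bins = np.linspace(-4, 4, 21)   # 20 boxes, as in the middle histogram

counts = np.array([np.histogram(rng.normal(size=n_samples), bins=bins)[0]
                   for _ in range(n_repeats)])

mean_height = counts.mean(axis=0)
std_height = counts.std(axis=0)
print(np.c_[mean_height, std_height, np.sqrt(mean_height)])  # column 2 ~ column 3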
Histograms and PDF
How do you estimate the PDF from a histogram? You only need to scale (see the sketch below).
Probability density function f: P(a ≤ x ≤ a + da) = f(x) da
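A sketch of the scaling in Python: dividing each box count by the total number of samples times the box width turns the histogram into an estimate of f(x) whose area is one (the sample size and number of boxes are arbitrary choices).

import numpy as np

# Scale histogram counts into a PDF estimate: count / (n_samples * box_width).
rng = np.random.default_rng(1)
samples = rng.normal(size=5_000)

counts, edges = np.histogram(samples, bins=20)
width = edges[1] - edges[0]
pdf_estimate = counts / (len(samples) * width)   # equivalent to density=True

print(pdf_estimate.sum() * width)                # total area, approximately 1.0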
Cumulative distribution function
Integral of the PDF: F(x) = P(X ≤ x) = ∫_{−∞}^{x} f(t) dt
Experimental CDF from 500 samples, shown in blue, compares well to the exact CDF for the normal distribution.
[Figure: experimental CDF (blue) overlaid on the normal CDF, plotted against x from −4 to 4.]
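One way to reproduce this comparison in Python: sort the samples to obtain the experimental (empirical) CDF and overlay the exact standard normal CDF. The sample size of 500 matches the slide; the seed is arbitrary.

import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt

# Empirical CDF of 500 normal samples versus the exact normal CDF.
rng = np.random.default_rng(2)
samples = np.sort(rng.normal(size=500))
ecdf = np.arange(1, samples.size + 1) / samples.size

x = np.linspace(-4, 4, 400)
plt.step(samples, ecdf, where="post", label="experimental CDF (500 samples)")
plt.plot(x, norm.cdf(x), label="normal CDF")
plt.xlabel("x")
plt.legend()
plt.show()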
Probability plot
• A more powerful way to compare data to a
possible CDF is via a probability plot (500
points here)
[Figure: probability plot for the normal distribution; 500 data points plotted as Data (x axis, −4 to 4) against Probability (y axis, 0.0001 to 0.9999).]
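A sketch of a normal probability plot in Python using scipy.stats.probplot. Note that this routine puts the theoretical quantiles on the horizontal axis and the ordered data on the vertical axis, so the orientation differs from the slide's probability-scale plot; the diagnostic idea (points near a straight line indicate normality) is the same.

import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

# Probability plot of 500 normal samples against the normal distribution.
rng = np.random.default_rng(3)
samples = rng.normal(size=500)

stats.probplot(samples, dist="norm", plot=plt)
plt.show()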
Moments
• Mean: μ(X) = ∫ x f(x) dx = E[X]
• Variance: Var(X) = ∫ (x − μ)² f(x) dx = E[(X − μ)²]
• Standard deviation: σ = √Var(X)
• Coefficient of variation: COV = σ/μ
• Skewness: E[((X − μ)/σ)³]
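Sample estimates of these moments in Python, for an assumed normal variable (mean 10, standard deviation 2, chosen only so the coefficient of variation is meaningful):

import numpy as np

# Sample versions of the moments defined above.
rng = np.random.default_rng(4)
x = rng.normal(loc=10.0, scale=2.0, size=100_000)

mean = x.mean()
var = x.var()
std = x.std()
cov = std / mean                            # coefficient of variation, ~0.2
skew = np.mean(((x - mean) / std) ** 3)     # skewness, ~0 for a normal variable

print(mean, var, std, cov, skew)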
Questions
• Our random variable is the number seen when we roll one die. What is the CDF at 2?
• Our random variable is the sum on a pair of dice. What is the CDF at 2? At 13?
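A quick enumeration in Python for checking the answers (not part of the slide):

from fractions import Fraction
from itertools import product

# F(x) = P(X <= x) evaluated by counting favorable outcomes.
def cdf(outcomes, x):
    favorable = sum(1 for value in outcomes if value <= x)
    return Fraction(favorable, len(outcomes))

one_die = list(range(1, 7))
two_dice = [a + b for a, b in product(range(1, 7), repeat=2)]

print(cdf(one_die, 2))    # 1/3
print(cdf(two_dice, 2))   # 1/36
print(cdf(two_dice, 13))  # 1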