Propagation of errors


Propagation of Errors
(Chapter 3, Taylor)
Introduction
Example: Suppose we measure the current (I) and resistance (R) of a resistor.
Ohm's law relates V to I and R:
V = IR
If we know the uncertainties (e.g. standard deviations) in I and R, what is the uncertainty in V?
More formally, given a functional relationship between several measured variables (x, y, z),
Q=f(x, y, z)
What is the uncertainty in Q if the uncertainties in x, y, and z are known?
To answer this question we use a technique called Propagation of Errors.
Usually when we talk about uncertainties in a measured variable such as x, we write: x ± σ.
In most cases we assume that the uncertainty is “Gaussian” in the sense that 68% of the time
we expect the “true” (but unknown) value of x to be in an interval given by [x − σ, x + σ].
BUT not all measurements can be represented by Gaussian distributions (more on that later)!
Propagation of Error Formula
To calculate the variance in Q as a function of the variances in x and y we use the following:
σ_Q² = σ_x² (∂Q/∂x)² + σ_y² (∂Q/∂y)² + 2 σ_xy (∂Q/∂x)(∂Q/∂y)
If the variables x and y are uncorrelated then σ_xy = 0 and the last term in the above equation is zero.
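As a concrete sketch of how the formula is used, here is a short Python example (the numerical values for I, R, and their uncertainties are my own illustrative choices, not values from the lecture) that applies the uncorrelated form to Ohm's law, V = IR, where ∂V/∂I = R and ∂V/∂R = I:

    import math

    # Hypothetical measurements: I = 2.0 +/- 0.1 A, R = 50.0 +/- 2.0 ohm
    I, sigma_I = 2.0, 0.1
    R, sigma_R = 50.0, 2.0

    V = I * R          # Ohm's law: V = IR
    dV_dI = R          # partial derivative of V with respect to I
    dV_dR = I          # partial derivative of V with respect to R

    # Uncorrelated propagation of errors:
    sigma_V = math.sqrt((dV_dI * sigma_I)**2 + (dV_dR * sigma_R)**2)
    print(f"V = {V:.1f} +/- {sigma_V:.1f} V")   # V = 100.0 +/- 6.4 V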
We can derive the above formula as follows:
Assume we have several measurements of the quantities x (x_1, x_2, …, x_N) and y (y_1, y_2, …, y_N).
The average of x and y:
μ_x = (1/N) Σ_{i=1}^{N} x_i   and   μ_y = (1/N) Σ_{i=1}^{N} y_i
define: Q_i ≡ f(x_i, y_i)
Q ≡ f(μ_x, μ_y), i.e. f evaluated at the average values
expand Qi about the average values:
Q_i = Q(μ_x, μ_y) + (x_i − μ_x)(∂Q/∂x)|_{μ_x, μ_y} + (y_i − μ_y)(∂Q/∂y)|_{μ_x, μ_y} + higher order terms
assume the measured values are close to the average values, and neglect the higher order terms:
Q_i − Q ≈ (x_i − μ_x)(∂Q/∂x)|_{μ_x, μ_y} + (y_i − μ_y)(∂Q/∂y)|_{μ_x, μ_y}
σ_Q² = (1/N) Σ_{i=1}^{N} (Q_i − Q)²
     = (∂Q/∂x)² (1/N) Σ_{i=1}^{N} (x_i − μ_x)² + (∂Q/∂y)² (1/N) Σ_{i=1}^{N} (y_i − μ_y)² + 2 (∂Q/∂x)(∂Q/∂y) (1/N) Σ_{i=1}^{N} (x_i − μ_x)(y_i − μ_y)
Since the derivatives are evaluated at the average values (μ_x, μ_y), we can pull them out of the summations.
The first two sums are just σ_x² and σ_y². If the measurements are uncorrelated, the summation in the cross term is zero, and we obtain:
σ_Q² = σ_x² (∂Q/∂x)²|_{μ_x, μ_y} + σ_y² (∂Q/∂y)²|_{μ_x, μ_y}      (uncorrelated errors)
If x and y are correlated, define σ_xy as:
σ_xy = (1/N) Σ_{i=1}^{N} (x_i − μ_x)(y_i − μ_y)
σ_Q² = σ_x² (∂Q/∂x)²|_{μ_x, μ_y} + σ_y² (∂Q/∂y)²|_{μ_x, μ_y} + 2 σ_xy (∂Q/∂x)(∂Q/∂y)|_{μ_x, μ_y}      (correlated errors)
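The derivation can be checked numerically. The sketch below uses simulated data (the means, widths, correlation, and the choice Q = x·y are my own assumptions for illustration): it estimates σ_x², σ_y², and σ_xy from a sample, propagates them through the correlated-errors formula, and compares the result with the variance computed directly from the Q_i values.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 100_000
    # Simulated correlated measurements (illustrative means, widths, correlation)
    mean = [10.0, 5.0]
    cov = [[0.04, 0.01],
           [0.01, 0.09]]
    x, y = rng.multivariate_normal(mean, cov, size=N).T

    Q = x * y                                 # Q_i = f(x_i, y_i) with f(x, y) = x*y
    mu_x, mu_y = x.mean(), y.mean()
    s2_x, s2_y = x.var(), y.var()
    s_xy = ((x - mu_x) * (y - mu_y)).mean()   # sample covariance

    dQ_dx, dQ_dy = mu_y, mu_x                 # partial derivatives evaluated at the averages
    s2_Q_prop = s2_x * dQ_dx**2 + s2_y * dQ_dy**2 + 2 * s_xy * dQ_dx * dQ_dy
    print(Q.var(), s2_Q_prop)                 # the two variances agree closely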
Example: Power in an electric circuit.
P = I²R
Let I = 1.0 ± 0.1 amp and R = 10. ± 1.0 Ω
P = 10 watts
calculate the variance in the power using propagation of errors assuming I and R are uncorrelated
σ_P² = σ_I² (∂P/∂I)² + σ_R² (∂P/∂R)² = σ_I² (2IR)² + σ_R² (I²)² = (0.1)² (2·1·10)² + (1)² (1²)² = 5 watts²
P = 10 ± 2 watts
If the true value of the power was 10 W and we measured it many times with an uncertainty (σ) of ± 2 W,
and Gaussian statistics apply, then 68% of the measurements would lie in the range [8, 12] watts.
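The same numbers, re-coded in Python as a quick check:

    import math

    I, sigma_I = 1.0, 0.1      # amps
    R, sigma_R = 10.0, 1.0     # ohms

    P = I**2 * R               # P = I^2 R = 10 W
    # Uncorrelated propagation: dP/dI = 2IR, dP/dR = I^2
    var_P = (sigma_I * 2 * I * R)**2 + (sigma_R * I**2)**2
    print(P, var_P, math.sqrt(var_P))   # 10 W, 5 W^2, sigma_P ~ 2.2 W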
Sometimes it is convenient to put the above calculation in terms of relative errors:
σ_P²/P² = (σ_I²/P²)(∂P/∂I)² + (σ_R²/P²)(∂P/∂R)² = 4(σ_I/I)² + (σ_R/R)² = 4(0.1/1)² + (1/10)² = (0.1)²(4 + 1) = 0.05
In this example the uncertainty in the current dominates the uncertainty in the power!
Thus the current must be measured more precisely if we want to reduce the uncertainty in the power.
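A short snippet making that comparison explicit, so the dominant term is easy to see (same illustrative numbers as above):

    I, sigma_I = 1.0, 0.1
    R, sigma_R = 10.0, 1.0

    term_I = 4 * (sigma_I / I)**2     # contribution of the current to (sigma_P/P)^2
    term_R = (sigma_R / R)**2         # contribution of the resistance
    rel_var_P = term_I + term_R
    print(term_I, term_R, rel_var_P)  # 0.04, 0.01, 0.05 -> sigma_P/P ~ 22%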
It can be shown that if a function is of the form f(x, y, z) = x^a y^b z^c, then the relative variance of f(x, y, z) is:
(σ_f/f)² = (a σ_x/x)² + (b σ_y/y)² + (c σ_z/z)²      (generalization of Eq. 3.18)
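A small helper implementing this power-law rule (the function name and the list-of-tuples interface are my own choices, not from the text):

    import math

    def relative_error_power_law(terms):
        """terms = [(exponent, value, sigma), ...] for f = x^a * y^b * z^c * ...
        Returns sigma_f / f from the power-law propagation rule."""
        return math.sqrt(sum((a * s / v)**2 for a, v, s in terms))

    # Example: P = I^2 * R with I = 1.0 +/- 0.1 and R = 10 +/- 1
    rel = relative_error_power_law([(2, 1.0, 0.1), (1, 10.0, 1.0)])
    print(rel)   # ~0.22, i.e. sigma_P/P ~ 22%, as found above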
Example: The error in the average (“error in the mean”).
The average of several measurements, each with the same uncertainty (σ), is given by:
μ = (1/n)(x_1 + x_2 + … + x_n)
σ_μ² = σ_x1² (∂μ/∂x_1)² + σ_x2² (∂μ/∂x_2)² + … + σ_xn² (∂μ/∂x_n)² = σ² (1/n)² + σ² (1/n)² + … + σ² (1/n)² = n σ² (1/n)²
σ_μ = σ/√n      “error in the mean”
We can determine the mean better by combining measurements.
But the precision only increases as the square root of the number of measurements.
Do not confuse σ_μ with σ!
σ is related to the width of the pdf (e.g. Gaussian) that the measurements come from.
σ does not get smaller as we combine measurements.
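A short simulation illustrating the σ/√n behaviour (the parent mean, width, and sample sizes are arbitrary choices for illustration):

    import numpy as np

    rng = np.random.default_rng(1)
    sigma = 2.0                                     # width of the parent Gaussian
    for n in (4, 16, 64, 256):
        # Take many independent n-measurement averages and look at their spread
        means = rng.normal(10.0, sigma, size=(20_000, n)).mean(axis=1)
        print(n, means.std(), sigma / np.sqrt(n))   # the two numbers agree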
The problem with the Propagation of Errors technique:
In calculating the variance using propagation of errors:
We usually assume the error in a measured variable (e.g. x) is Gaussian.
BUT even if x is described by a Gaussian distribution,
f(x) may not be described by a Gaussian distribution!
What does the standard deviation that we calculate from propagation of errors mean?
Example: The new distribution is Gaussian.
Let y = Ax, with A a constant and x a Gaussian variable.
Then μ_y = A μ_x and σ_y = A σ_x.
Let the probability distribution for x be Gaussian:
p(x, μ_x, σ_x) dx = [1/(σ_x √(2π))] e^(−(x − μ_x)²/(2σ_x²)) dx
Substituting x = y/A, μ_x = μ_y/A, σ_x = σ_y/A, and dx = dy/A:
= [1/((σ_y/A) √(2π))] e^(−(y/A − μ_y/A)²/(2(σ_y/A)²)) (dy/A) = [1/(σ_y √(2π))] e^(−(y − μ_y)²/(2σ_y²)) dy = p(y, μ_y, σ_y) dy
Thus the new probability distribution for y, p(y, μ_y, σ_y), is also described by a Gaussian.
[Figure: histogram dN/dy for y = 2x with x = 10 ± 2. Starting with a Gaussian with μ = 10, σ = 2, we get another Gaussian with μ = 20 and σ_y = 2σ_x = 4.]
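The same conclusion can be reached by brute force, transforming Gaussian samples instead of the pdf (sample size chosen arbitrarily):

    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.normal(10.0, 2.0, size=200_000)   # Gaussian x with mu = 10, sigma = 2
    y = 2.0 * x                               # y = Ax with A = 2
    print(y.mean(), y.std())                  # ~20 and ~4: still Gaussian, with mu_y = A*mu_x and sigma_y = A*sigma_x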
Example: When the new distribution is non-Gaussian: y = 2/x.
The transformed probability distribution function for y does not have the form of a Gaussian.
Take y = 2/x with x = 10 ± 2; propagation of errors gives σ_y = 2σ_x/x².
Start with a Gaussian with μ = 10, σ = 2. We DO NOT get another Gaussian!
We get a pdf with μ = 0.2, σ = 0.04.
This new pdf has longer tails than a Gaussian pdf:
Prob(y > μ_y + 5σ_y) = 5×10⁻³, compared with 3×10⁻⁷ for a Gaussian.
[Figure: histogram dN/dy for y = 2/x with x = 10 ± 2.]
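A sampling check of this example (the threshold uses the propagated μ_y and σ_y quoted above; the sample size is an arbitrary choice):

    import numpy as np

    rng = np.random.default_rng(3)
    x = rng.normal(10.0, 2.0, size=1_000_000)   # Gaussian x with mu = 10, sigma = 2
    y = 2.0 / x                                 # nonlinear transformation of a Gaussian
    mu_y, sigma_y = 0.2, 0.04                   # mean and width from propagation of errors (quoted above)
    tail = np.mean(y > mu_y + 5.0 * sigma_y)    # fraction more than 5 sigma above the mean
    print(tail)                                 # a few times 1e-3, far larger than the Gaussian ~3e-7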
Unphysical situations can arise if we use the propagation of errors results blindly!
Example: Suppose we measure the volume of a cylinder: V = πR²L.
Let R = 1 cm (exact), and L = 1.0 ± 0.5 cm.
Using propagation of errors:
σ_V = πR²σ_L = π/2 cm³
V = π ± π/2 cm³
If the error on V (σ_V) is to be interpreted in the Gaussian sense, then there is a
finite probability (≈ 3%) that the volume (V) is < 0, since V is only 2σ_V away from 0!
Clearly this is unphysical!
Care must be taken in interpreting the meaning of σ_V.
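The quoted probability can be reproduced from the Gaussian cumulative distribution, since V = 0 lies 2σ_V below the mean:

    import math

    # V = pi * R^2 * L with R = 1 cm (exact) and L = 1.0 +/- 0.5 cm
    V = math.pi * 1.0**2 * 1.0          # = pi cm^3
    sigma_V = math.pi * 1.0**2 * 0.5    # = pi/2 cm^3
    # If V were Gaussian, the probability of V < 0 would be Phi(-V/sigma_V) = Phi(-2):
    prob_negative = 0.5 * (1.0 + math.erf((0.0 - V) / (sigma_V * math.sqrt(2.0))))
    print(prob_negative)                # ~0.023, consistent with the few-percent probability quoted above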