CHAPTER 10 Uncertainties and Statistics

 10.1 General Remarks

If there is one premise basic to instrumentation
engineering, it is this: no measurement is without error.
Hence neither the exact value of the quantity being
measured nor the exact error associated with the
measurement can be ascertained.
In engineering, as in physics, the uncomfortable principle of indeterminacy exists. Yet as we have seen in our discussion of interpolation methods (Section 9.3), uncertainties can be useful and, like friction, are often a blessing in disguise.
It is toward a methodical use of measurement
uncertainties as a guide to approaching true values that this
chapter is addressed.
The output in most experiments is a measurement. The reliability of the measurement depends not only on variations in controlled inputs, but also, in general, on variations in factors that are uncontrolled and perhaps unrecognized.
Some of the factors that might unwittingly affect a measurement are the experimenter, the supporting equipment, and the conditions of the environment.
Thus, in addition to errors caused by the device under test, and in addition to errors caused by variations in the quantity being measured, extraneous factors (such as the use of different measuring equipment) might introduce errors that would cloud the results.
Effects of those variables that are not part of the study can be further minimized by taking observations in a random order. This is called randomization.
The important task of measuring the remaining significant errors is approached by taking a number of independent observations of the output at fixed values of the controlled input. This is called replication.
Stating the above ideas in mathematical terms, each measurement x can be visualized as being accompanied by an error δ such that the interval

x ± δ    (10.1)

will contain the true value of the quantity being measured.
The measurement error δ, in turn, is usually expressed in terms of two components, a random error ε and a systematic error β, such that

δ = ε + β    (10.2)
10.1.1 RANDOM ERRORS
When repeated measurements are taken, random errors
will show up as scatter about the average of these
measurements.
The scatter is caused by characteristics of the measuring
system and/or by changes in the quantity being measured.
Random errors always will be observed as long as the
readout equipment has adequate discrimination.
The term precision is used to characterize random errors. Precision is quantified by the true standard deviation σ of the whole population of measurements or, more often, by its estimator S, the precision index of the data available.
These statistical terms will be defined shortly by working equations. For now it is sufficient to understand that a large precision index means a lot of scatter in the data and, conversely, a small precision index means high precision.
10.1.2 SYSTEMATIC ERRORS
Over and above the random errors involved in all measurements, there are also errors that are consistently either too high or too low with respect to the accepted true value.
Such errors, which are termed fixed errors or systematic errors, are characterized by the term bias. Systematic error is quantified by the true bias β or, more often, by B, the estimate of the limit of the bias.
When the bias can be quantified, it is used as a correction factor to be applied to all measurements.
A zero bias implies that there is no difference between the true value and the true mean of many observations. However, the zero-bias case is rare indeed, and experience indicates a strong tendency to underestimate systematic errors.
All of these terms and symbols are shown geometrically in
Figure 10.1.
Systematic errors can be minimized by various methods, for example, by calibration (Figure 10.2). Calibrations are usually accomplished by comparing a test instrument with a standard instrument.
Since such comparisons are not always direct or perfect, we may not succeed in totally determining the bias; that is, the bias may have a random component. But the bias is essentially fixed and is never as random as the precision errors.
10.2 STATISTICAL RELATIONS
There are cases in engineering practice, however, when we can presume that the bias has been removed, that all errors are of the random type, and that hence the errors can be treated statistically [1], [2].
In this section we overlook for a time the fixed (bias) errors
and consider only the random (precision) errors.
It is clear that, even in the absence of fixed errors, we are denied by the nature of things the ability to measure directly the true value of a variable.
Thus it becomes our job to extract from the experimental data at least two vital bits of information.
First, we must form an estimate of the best value of the variable. This will be denoted by X̄.
Closely coupled with this requirement, we must give an estimate of the interval, centered on X̄, within which the true value is expected to lie. This will be denoted by the uncertainty margin that we tack on to X̄ [3].
10.2.1 BEST VALUE AT A GIVEN INPUT
When an output X is measured many times at a given input, the mean value of X is simply

\bar{X} = \frac{1}{N} \sum_{k=1}^{N} X_k    (10.3)

where X_k is the value of the kth observation (called interchangeably the kth reading or measurement) and N is the number of observations in the sample.
It is a mathematical fact that the arithmetic average defined by equation (10.3) is the best representation of the given set of X_k.
Note that when the estimated best value of X is taken as X̄, the sum of the squares of the deviations of the data from their estimate is a minimum. (This is essentially the least-squares principle.)
However, whereas X̄ represents an unbiased estimate of the true arithmetic mean μ of all possible values of X, there is no assurance that X̄ is the true value; that is, in any actual measurement the bias would have to be considered. (Thus good agreement, that is, high precision in small-sample replication, does not imply that X̄ is close to μ, that is, high accuracy.)
Nevertheless, from any viewpoint the best estimate of the true value of the population mean at a given input is the average X̄ of the available measurements (Figure 10.3).
10.2.2 CONFIDENCE INTERVALS
Having decided on the best available value of X (which is X̄), we inquire next as to its worth as an estimate of the true value of X (which is μ for the case of zero bias).
C. G. Darwin has noted in this regard [4]: "It seemed to me that there was a defect in the habit of thought of many in the engineering profession; some sort of campaign was needed to inculcate in people's minds the idea that every number has a fringe, that it is not to be regarded as exact but as so much ± a bit, and that the size of this bit is one of its really important quantities."
This plus or minus fringe that accompanies every measurement is called a confidence interval, CI. Thus a confidence interval for the true value can be given by X ± CI (p), where X is the estimated best value of X, CI is the confidence interval, and (p) is the probability statement (and not a multiplier of CI).
To form these confidence intervals we need replicate data,
and we note in this regard that these intervals will differ
according to the size and number of sets available.
Sets of Very Large N
Many times in engineering, a tabulation of how the various values of X occur in replication is well approximated by the Gauss-Laplace normal distribution relation [5]

f(X) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left[-\frac{(X-\mu)^2}{2\sigma^2}\right]    (10.4)

where the factor 1/(σ√(2π)) has the normalizing effect of making the integral of f(X) over all values of X equal unity, and where σ represents the true standard deviation of X, which in turn is well approximated for very large N by

\sigma \cong \left[\frac{1}{N}\sum_{k=1}^{N}\left(X_k - \bar{X}\right)^2\right]^{1/2}    (10.5)
The standard deviation σ of a normal distribution of X has the following characteristics:
1. It measures the scatter of X at a given input; that is, it is a measure of the precision error ε.
2. It has the same units as X.
3. It is the square root of the average of the sum of the squares of the deviations of all possible observations from the true arithmetic mean μ.
About 68.27% of all observations can be expected to fall within the ±1σ interval centered on the mean. For many engineering applications this is not good enough, and wider intervals must be used to express greater confidence. For example, 95.45% of the data can be expected to fall within the ±2σ interval, and 99.73% within ±3σ (Figure 10.4).
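These coverage figures follow directly from the normal distribution of equation (10.4). The short Python sketch below (not part of the text) reproduces them from the error function, using the relation P(|X − μ| ≤ kσ) = erf(k/√2):

```python
import math

# Fraction of a normal population lying within +/- k sigma of the mean.
for k in (1, 2, 3):
    coverage = math.erf(k / math.sqrt(2.0))
    print(f"+/-{k} sigma covers {coverage * 100:.2f}% of all observations")
# prints approximately 68.27%, 95.45%, and 99.73%
```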
We are assured that X̄ is a very good estimate of μ by the large size of the sample.
We may ask, however, how typical a single observation of X is. As we have just seen, one answer is

X ± 3σ    (at 99.73%)    (10.6)

Statement (10.6) indicates that the interval is expected to include μ 99.73% of the time.

It further brings out the important point that, to be most meaningful, a measurement should be given in three parts [6], [7]. These are:
1. A magnitude (the indicated value of X)
2. A confidence interval [your estimate of what the error might be; ±3σ in statement (10.6)]
3. A probability statement [an indication of your confidence that the true value will be within the confidence interval chosen; 99.73% in statement (10.6)].
Single Set of Small N
We now face the unpleasant fact that, in a practical experiment, we usually obtain only a relatively small sample from all possible values of X.
This means that we cannot obtain the true arithmetic mean μ, and hence we cannot form the true standard deviation σ. Instead of the inaccessible deviations (X_k − μ), we can determine only the residuals (X_k − X̄).
We note in this regard that the sum of the squares of the residuals, being always a minimum according to the least-squares principle previously mentioned, is always less than the sum of the squares of the deviations.

The precision index of the single sample is defined in terms of the residuals and is patterned after equation (10.5) as

S_X = \left[\frac{1}{N-1}\sum_{k=1}^{N}\left(X_k - \bar{X}\right)^2\right]^{1/2}    (10.7)

where the factor (N − 1) is used in place of the usual N in an attempt to compensate for the negative bias that results from using X̄ in place of μ in forming the differences.
However, a negative bias unfortunately still remains in the small-sample estimate of the standard deviation, and S, the obtainable, does not equal σ, the desired.
Student's Distribution
Recognizing this deficiency, a method was developed by the English chemist W. S. Gosset (writing in 1907 under the pseudonym "Student") by which confidence intervals could be based on the precision index S of a single small sample.
He introduced the Student t statistic, whose values have been tabulated in terms of the degrees of freedom ν and the desired degree of confidence (quantified by the probability p) (Table 10.1 and Figure 10.5).
Careful perusal of these values will show that the t statistic inflates the confidence interval (i.e., the uncertainty margin) so as to counteract the underestimate of the standard deviation σ that results when a small sample is used to calculate S (Figure 10.6).
Degrees of freedom ν can be defined in general as the number of observations minus the number of constants calculated from the data. According to equation (10.3), X̄ has N degrees of freedom, whereas by equation (10.7) S has N − 1 degrees of freedom, because one constant, X̄, is used to calculate S.
The answer to the question, how typical is a single observation of X, is, in terms of S and t,

X ± t_{ν,p} S    (to a given probability p)    (10.8)

Statement (10.8) indicates that μ will be included in the interval, to the probability p.
The counterpart of statement (10.8) was given in statement (10.6) in terms of σ. The interval of statement (10.6) can be generalized in terms of the normal deviate z as

X ± z_p σ    (to a given probability p)    (10.9)

The plus and minus quantities in statements (10.8) and (10.9), that is, ±tS and ±zσ, should be recognized as the confidence intervals on individual measurements of X. Appropriate values of z are given in Table 10.2. The z and t statistics are compared in Figure 10.6.
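As a brief illustration of equations (10.7) and (10.8), the following Python sketch forms the precision index of a hypothetical set of six replicate readings and quotes the interval expected to contain μ; the data and the tabulated value t(5, 95%) = 2.571 are assumptions for illustration only.

```python
import statistics

# Hypothetical replicate readings at a fixed input (illustration only).
sample = [4.96, 5.03, 5.01, 4.98, 5.07, 4.99]

x_bar = statistics.mean(sample)   # best value, equation (10.3)
S = statistics.stdev(sample)      # precision index, equation (10.7): divides by N - 1

t = 2.571                         # t(nu = N - 1 = 5, p = 95%), read from a table like Table 10.1
print(f"X_bar = {x_bar:.3f}, S = {S:.4f}")
print(f"Individual reading: X +/- {t * S:.4f} (95%), per statement (10.8)")
```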
Several Sets of Small N
A useful measure of scatter in multiple-set experiments is called the precision index of the mean. This is patterned after equations (10.5) and (10.7) and is defined in general as

S_{\bar{X}} = \left[\frac{\sum_{i=1}^{M} N_i\left(\bar{X}_i - \bar{\bar{X}}\right)^2}{\sum_{i=1}^{M} N_i}\right]^{1/2}    (10.10)

where M is the number of sets involved and, for the ith set, X̄_i is the mean and N_i is the number of observations.
The factor \bar{\bar{X}} is the grand average of the M sets, defined in general as

\bar{\bar{X}} = \frac{\sum_{i=1}^{M} N_i \bar{X}_i}{\sum_{i=1}^{M} N_i}    (10.11)

which is naturally to be used in all multiple-set experiments as the best estimate of the true value of the population mean at a given input.
When a common number of observations N applies to all M sets, equations (10.10) and (10.11) reduce to

S_{\bar{X},N} = \left[\frac{1}{M}\sum_{i=1}^{M}\left(\bar{X}_i - \bar{\bar{X}}\right)^2\right]^{1/2}    (10.12)

and

\bar{\bar{X}}_N = \frac{1}{M}\sum_{i=1}^{M}\bar{X}_i = \frac{1}{MN}\sum_{k=1}^{MN} X_k    (10.13)

where the subscript N signifies a common set size.
It is an observable fact that the means X̄_i of different sets of measurements from the same population are always much closer to each other than the values of a single set.
Equation (10.12) defines the precision index of a set of M values of X̄, whereas equation (10.7) defines the precision index of a set of N values of X.
Statistical theory gives us an important relation between these two statistics, namely,

S_{\bar{X}} = \frac{S_X}{\sqrt{N}}    (10.14)

where N is consistently the number of observations common to all M sets.
Equation (10.14) says in effect: the average value of a set has more precision than any of its parts by the factor √N.
Patterned directly after equation (10.14) is the precision index of the grand average of M sets of observations, which can be given as

S_{\bar{\bar{X}}} = \frac{S_{\bar{X}}}{\sqrt{M}} = \frac{S_X}{\sqrt{MN}}    (10.15)
Equation (10.15) presents one apparent problem: which S_X should be used in a multiple-set experiment?
One answer is to choose any one S_{X_i} at random, but this leads to wide variation in S_{\bar{\bar{X}}}.
A more satisfying answer, and the one recommended here, is to define a weighted average of the S_{X_i}^2, the weights being the appropriate degrees of freedom; that is, replace the S_X of equation (10.15) with the pooled precision index S defined as

S = \left[\frac{\sum_{i=1}^{M} \nu_i S_i^2}{\sum_{i=1}^{M} \nu_i}\right]^{1/2} = \left[\frac{\nu_1 S_1^2 + \nu_2 S_2^2 + \cdots + \nu_M S_M^2}{\nu_1 + \nu_2 + \cdots + \nu_M}\right]^{1/2}    (10.16)
Since it is common practice to keep all sets of the same size, that is, to keep N the same for all sets, and since this practice assures us that S_{\bar{\bar{X}}} is a minimum, equation (10.16) can be rewritten for the case of an N common to all M sets as

S_N = \left[\frac{1}{M}\sum_{i=1}^{M} S_i^2\right]^{1/2}    (10.17)
On the basis of the t statistic, best estimates of the interval that contains the true average μ can be given in terms of the mean of a single set of measurements as

\mu = \bar{X} \pm t_{N-1,\,p}\,\frac{S_X}{\sqrt{N}} = \bar{X} \pm t_{N-1,\,p}\, S_{\bar{X}}    (10.18)

and in terms of the mean of M sets of measurements as

\mu = \bar{\bar{X}} \pm t_{MN-M,\,p}\,\frac{S}{\sqrt{MN}}    (10.19)

The subscripts of t indicate the degrees of freedom ν and the probability p, and the ± quantity indicates the confidence interval CI of the best value estimate.
Note that the degrees of freedom is now ν = MN − M, since not one but M means X̄_i were obtained from the data.
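A short sketch of how equations (10.13), (10.17), and (10.19) work together for multiple sets is given below; the three sets of four readings and the table value t(9, 95%) = 2.262 are assumed for illustration only.

```python
import math
import statistics

# Three hypothetical sets of N = 4 readings each (illustration only).
sets = [
    [10.1, 10.4, 9.9, 10.2],
    [10.3, 10.0, 10.2, 10.1],
    [9.8, 10.2, 10.0, 10.3],
]
M, N = len(sets), len(sets[0])

grand_avg = sum(sum(s) for s in sets) / (M * N)                        # equation (10.13)
S_pooled = math.sqrt(sum(statistics.stdev(s) ** 2 for s in sets) / M)  # equation (10.17)

t = 2.262                                # t(nu = MN - M = 9, p = 95%) from Table 10.1
ci = t * S_pooled / math.sqrt(M * N)     # equation (10.19)
print(f"mu is expected in {grand_avg:.3f} +/- {ci:.3f} (95%)")
```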
Several examples are given here to illustrate the ideas
embodied thus far in these statistical relations.
Example 1
For the observations 7, 8, 7, 6, 5, 6, 7, 8, 6, 9, 8, find the best estimate of the mean μ, the precision index, the precision index of the mean, and the 95% confidence interval statement for μ [8].
Solution
By equation (10.3),

\bar{X} = \frac{1}{11}\sum_{i=1}^{11} X_i = \frac{77}{11} = 7

which is the best estimate of μ.
By equation (10.7),

S_X = \left[\frac{1}{N-1}\sum_{i=1}^{11}\left(X_i - 7\right)^2\right]^{1/2} = \left(\frac{14}{10}\right)^{1/2} = 1.18

which is the precision index.
By equation (10.14),

S_{\bar{X}} = \frac{S_X}{\sqrt{11}} = 0.357

which is the precision index of the mean.
By Table 10.1,

t_{\nu,p} = t_{10,\,0.95} = 2.228

which is the t statistic for the 95% confidence interval statement.
Hence, according to equation (10.18), μ should be in the interval

\bar{X} \pm t_{N-1,\,p}\, S_{\bar{X}} = 7 \pm 2.228(0.357) = 7 \pm 0.795
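The arithmetic of Example 1 can be checked with a few lines of Python (the observations are those of the example; 2.228 is the tabulated t(10, 95%) value):

```python
import math
import statistics

X = [7, 8, 7, 6, 5, 6, 7, 8, 6, 9, 8]    # the observations of Example 1

x_bar = statistics.mean(X)               # equation (10.3): 7
S_X = statistics.stdev(X)                # equation (10.7): about 1.18
S_mean = S_X / math.sqrt(len(X))         # equation (10.14): about 0.357

t = 2.228                                # t(10, 95%) from Table 10.1
print(f"mu expected in {x_bar} +/- {t * S_mean:.3f} (95%)")   # 7 +/- 0.795
```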
Example 2
Express the best value and its 95% confidence interval for a single-sample experiment of eight observations.
Solution
According to equation (10.18) and Table 10.1, at N = 8 and ν = 7,

t_{7,\,0.95} = 2.365

Hence,

\mu = \bar{X} \pm \frac{2.365\, S_X}{\sqrt{8}}    (at 95%)
Example 3
If five sets of the type of Example 2 were taken, how much more confidence could be placed in the best value?
Solution
According to equation (10.19) and Table 10.1, at MN = 40 and ν = MN − M = 35, t_{ν,p} = 2.031. Hence

\mu = \bar{\bar{X}} \pm \frac{2.031\, S}{\sqrt{40}}    (at 95%)

When the results of Examples 2 and 3 are ratioed, there results

\frac{2.365/\sqrt{8}}{2.031/\sqrt{40}} = \frac{0.836}{0.321} = 2.6

which indicates that the confidence interval is tightened by a factor of about 2.6 for the multiple-set case.
Example 4
From three sets of five measurements each, the following set means and precision indices are derived:

Set    X̄_i        S_i
1      0.2052     0.6719
2      0.2232     0.6395
3      -0.1030    1.2168

Find the best estimate of μ and its 95% confidence interval.
Solution
By equation (10.13),

\bar{\bar{X}} = \frac{1}{3}\left(0.2052 + 0.2232 - 0.1030\right) = 0.1085

By equation (10.17),

S = \left[\frac{1}{3}\left(0.6719^2 + 0.6395^2 + 1.2168^2\right)\right]^{1/2} = 0.8834

By equation (10.19),

CI_{\bar{\bar{X}}} = \frac{2.179\,(0.8834)}{\sqrt{15}} = 0.4970

Hence μ is within the interval 0.1085 ± 0.4970, 95% of the time.
Range
In addition to the CI statements based on zσ and tS, a third type of CI statement can be based on the range estimate of S.
The range R is defined as the difference between the largest and smallest measurements in a set, and can be used to estimate the precision index S, that is,

S \cong \frac{\bar{R}}{d_2^*}    (10.20)

where R̄ is the average range of M sets, defined in turn by

\bar{R} = \frac{1}{M}\sum_{i=1}^{M} R_i    (10.21)

for N common to all M sets, and where d_2^* is tabulated as a function of N in Table 10.3.
There is a loss of degrees of freedom with this technique, and the estimate of S is less precise than those given above, but the range estimate of S as given by equation (10.20) is often convenient.
Sometimes the range of a single set is used, via equation (10.20), to estimate S [10]-[12], and occasionally d_2 is used in place of d_2^* in equation (10.20), where the degree of approximation can be determined via Table 10.3.
Avoiding the determination of S entirely, confidence interval statements for the case when the range is being used can be given in terms of a substitute t statistic τ as the counterparts of equations (10.18) and (10.19):

\mu = \bar{X} \pm \tau_{N,\,p}\, R    (10.22)

for the single set, and

\mu = \bar{\bar{X}} \pm \frac{\tau_{N,\,p}\,\bar{R}}{\sqrt{M}}    (10.23)

in terms of the mean of M sets of measurements.
Example 5
Estimate the number of ranges by which μ can depart from X̄ by reference to Table 10.4 and equation (10.22), for N = 2, 3, and 4.
Solution
For two observations in the sample,
CI_X = ±τ_2 R = ±6.353 R at 95%
     = ±31.828 R at 99%
For three observations in the sample,
CI_X = ±1.304 R at 95%
     = ±3.008 R at 99%
Here we note how dramatically the confidence interval tightens with one additional measurement.
For four observations in the sample,
CI_X = ±τ_4 R = ±0.717 R at 95%
     = ±1.316 R at 99%
Thus we conclude that the use of three readings over two greatly improves our understanding of the dispersion of X, and the worth of additional measurements becomes primarily an economic question.
Example 6
For two sets of five measurements each, the following
table is derived.
Find the best estimate of and its confidence interval.
Solution
By equation 10.13, X  1  0.4506  0.0478   0.2492
2
By equation 10.21,
1
R  1.516  0.815   1.1655
2
By equation 10.21,
CI X 
N R
M

0.507 1.1655
2
 0.4178
Hence is within the interval 0.2492±0.4178 95% of the
time.
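The range method of Example 6 is easily checked in code; the set means, ranges, and the substitute-t value 0.507 (N = 5, 95%) below are taken from the example as given above.

```python
import math

set_means = [0.4506, 0.0478]    # set means from Example 6
set_ranges = [1.516, 0.815]     # set ranges from Example 6
M = len(set_means)

grand_avg = sum(set_means) / M       # equation (10.13): 0.2492
R_bar = sum(set_ranges) / M          # equation (10.21): 1.1655

tau = 0.507                          # substitute t statistic for N = 5 at 95% (Table 10.4)
ci = tau * R_bar / math.sqrt(M)      # equation (10.23): about 0.4178
print(f"mu is expected in {grand_avg:.4f} +/- {ci:.4f} (95%)")
```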
Sample of Small N with σ Known
Often σ can be considered known in the sense of being established by experience. In such favorable situations, a single small sample can yield reliable estimates.
For example, in terms of σ, the counterparts of equations (10.18) and (10.19) [and of equations (10.22) and (10.23)] are

\mu = \bar{X} \pm \frac{z_p\,\sigma}{\sqrt{N}}    (10.24)

for the single set, and

\mu = \bar{\bar{X}} \pm \frac{z_p\,\sigma}{\sqrt{MN}}    (10.25)

for multiple sets.
All of the confidence intervals developed thus far are summarized in Table 10.5.
Example 7
A certain temperature measurement yields an average value of 150.75°F from four observations. From experience, σ is considered known, such that a single observation lies within ±0.5°F of the true temperature with 95% assurance (that is, 1.96σ = 0.5°F). What confidence interval statement can be made concerning the true temperature?
Solution
Based on a single measurement, according to equation (10.9) and Table 10.2, μ should be in the interval

T ± 1.96σ    95% of the time

Based on the mean of four measurements, according to equation (10.24), μ should be in the interval

150.75 \pm \frac{1.96\,\sigma}{\sqrt{4}}    95% of the time

or, with 1.96σ = 0.5°F, we can write the interval as

150.75 \pm \frac{0.5}{2}

Thus μ can be given as

150.5°F ≤ T ≤ 151.0°F    (95%)

In words, the most believable value of T is 150.75°F; furthermore, 95% of the time the true temperature is believed to lie between 150.5 and 151.0°F.
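The interval of Example 7 follows from equation (10.24) in a couple of lines; the only inputs are the quoted average, the known 1.96σ = 0.5°F, and N = 4.

```python
import math

T_bar = 150.75    # average of N = 4 temperature readings, deg F (Example 7)
z_sigma = 0.5     # 1.96 * sigma, deg F, considered known from experience
N = 4

ci = z_sigma / math.sqrt(N)    # equation (10.24)
print(f"{T_bar - ci:.2f} F <= T <= {T_bar + ci:.2f} F   (95%)")   # 150.50 F to 151.00 F
```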
10.3 UNCERTAINTY OF A SINGLE PARAMETER
In Section 10.1 both systematic and random errors are considered.
In Section 10.2 we dwell at length on the confidence interval method of describing random errors, while systematic errors must be estimated by nonstatistical methods.
It follows that these two types of errors should not be joined together lightly.
Indeed, the best procedure to follow in describing the uncertainty in a measured parameter is to quote the random and systematic errors separately, and let it go at that.
However, there are often times in engineering when a
single number is required to describe the uncertainty of a set
of measurements.
That is, the random and systematic errors accompanying repeated measurements of a given parameter (like temperature or pressure) are often combined to yield a single number for the uncertainty. The number formed by combining the bias error B and the precision error t_{95} S_X is called the uncertainty U.
Since there can be no rigorous basis in statistics for the
required relation defining uncertainty, its formulation must
remain arbitrary.
So it is not surprising that two definitions for uncertainty are in common usage [13].
The most conservative model for uncertainty is

U_{ADD} = B + t_{95}\, S_X    (at 99%)    (10.26)

where the subscript ADD signifies that the bias and precision errors are simply added, and the 99% figure in parentheses indicates the percent coverage of the true value expected of this model.
A more realistic estimate for uncertainty, presuming that there will be some beneficial canceling of the errors or, to say it another way, assuming that all errors will not be in the same direction, is based on the familiar root-sum-square model as

U_{RSS} = \left[B^2 + \left(t_{95}\, S_X\right)^2\right]^{1/2}    (at 95%)    (10.27)

where the subscript RSS signifies that the bias and precision errors are combined by taking the square root of the sum of the squares of these errors, and the 95% figure in parentheses indicates the percent coverage expected of this model.
Either model indicates the expected error limit of a measured parameter for a given coverage; that is, the uncertainty U is our best estimate of the total error δ of equation (10.2).
The coverage indicates the expected probability that the interval defined by the best value of the parameter plus or minus the uncertainty, ±U, will include the true value.
The uncertainty interval is shown in Figure 10.7.
In engineering work, the RSS model of equation (10.27) is used most often and is the approach recommended here for combining systematic and random errors. Sometimes this model is called the probable uncertainty.
Example 8
A given parameter P is measured with an estimated bias B of 2 units and a calculated precision index S of 1 unit when the number of observations N is 10.
Give the uncertainty interval statements based on 99% and 95% coverage.
Solution
By Table 10.1, at ν = 9, t_{95} = 2.262. By equation (10.26),

U_{ADD} = 2 + 2.262(1) = 4.262    (at 99%)

By equation (10.27),

U_{RSS} = \left[2^2 + \left(2.262 \times 1\right)^2\right]^{1/2} = 3.019    (at 95%)
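The two uncertainty models of equations (10.26) and (10.27) are easily compared in code; the numbers below are those of Example 8.

```python
import math

B, S, t95 = 2.0, 1.0, 2.262    # bias, precision index, and t(9, 95%) from Example 8

U_add = B + t95 * S                        # equation (10.26): 4.262, about 99% coverage
U_rss = math.sqrt(B**2 + (t95 * S)**2)     # equation (10.27): 3.019, about 95% coverage
print(f"U_ADD = {U_add:.3f}, U_RSS = {U_rss:.3f}")
```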
When the number of measurements of a given parameter is extremely small, or when no statistical information is available, one must estimate the uncertainty U in place of the calculated uncertainty of equations (10.26) and (10.27).
In such cases the uncertainty represents the experimenter's best estimate of the maximum error to be reasonably associated with the parameter.
Thus one could say, for example, that based on experience the uncertainty of a temperature measurement is a stated amount (95% of the time), without reference to any particular set of measurements and without application of statistics.
Or one could say that all flow measurements made with uncalibrated nozzles can be counted on to a stated tolerance (95% of the time), and use this as the uncertainty.
Although it is true that the uncertainty thus conceived includes both systematic and random errors, the idea of separating these errors and dealing with them separately is too arbitrary to be practical, and we recommend considering such uncertainties as describing systematic errors alone.
10.4 PROPAGATION OF MEASUREMENT ERRORS INTO A RESULT
Often a result r is derived by combining a number of independent parameters according to some functional relationship

r = f\left(\bar{P}_1, \bar{P}_2, \ldots, \bar{P}_J\right)    (10.28)

where

\bar{P}_i = \frac{1}{N}\sum_{k=1}^{N} P_{ik}    (10.29)

and the subscript J indicates the number of parameters involved.
In such cases an important question must be considered, namely: how are the measurement errors of the independent parameters propagated into the result? To ask this in another way, what are the precision and bias errors of a derived result?
The uncertainty interval of interest is now

r \pm U_r    (10.30)

where, patterned after equations (10.26) and (10.27), the uncertainty of the result is defined as either

U_{r,ADD} = B_r + t_{95}\, S_r    (10.31)

or

U_{r,RSS} = \left[B_r^2 + \left(t_{95}\, S_r\right)^2\right]^{1/2}    (10.32)
A recognized concept of statistics provides the answer to one of our questions. The precision index of the result is given by

S_r = \left[\left(\frac{\partial r}{\partial P_1} S_{P_1}\right)^2 + \left(\frac{\partial r}{\partial P_2} S_{P_2}\right)^2 + \cdots + \left(\frac{\partial r}{\partial P_J} S_{P_J}\right)^2\right]^{1/2}    (10.33)

on an absolute basis, and by

\frac{S_r}{r} = \left[\sum_{i=1}^{J}\left(\frac{\partial r}{\partial P_i}\,\frac{P_i}{r}\cdot\frac{S_{P_i}}{P_i}\right)^2\right]^{1/2}    (10.34)

on a relative basis.
Note once again the use of the root-sum-square principle.
The so-called sensitivity factors of equations (10.33) and (10.34), namely, the absolute

\theta_i = \frac{\partial r}{\partial P_i}    (10.35)

and the relative

\theta_i' = \frac{\partial r}{\partial P_i}\,\frac{P_i}{r}    (10.36)

must be evaluated (analytically or numerically) and used as multipliers of the precision indices S_{P_i} of each of the parameters.
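When the functional relationship is awkward to differentiate, the sensitivity factors of equation (10.35) can be evaluated numerically, as noted above. The sketch below does this by central differences for a hypothetical result r = P1·P2²; the function and the parameter values are assumptions for illustration, not equation (10.40).

```python
import math

def result(P1, P2):
    return P1 * P2 ** 2        # hypothetical functional relationship (illustration only)

P = [4.0, 3.0]                 # assumed mean values of the parameters
S_P = [0.02, 0.01]             # assumed precision indices of the parameters

def sensitivity(i, h=1e-6):
    """Central-difference estimate of the absolute sensitivity factor, equation (10.35)."""
    hi = [p + (h if k == i else 0.0) for k, p in enumerate(P)]
    lo = [p - (h if k == i else 0.0) for k, p in enumerate(P)]
    return (result(*hi) - result(*lo)) / (2 * h)

# Combine per equation (10.33): root-sum-square of the (theta_i * S_Pi) terms.
S_r = math.sqrt(sum((sensitivity(i) * S_P[i]) ** 2 for i in range(len(P))))
print(f"S_r = {S_r:.4f}")      # analytically: sqrt((9*0.02)^2 + (24*0.01)^2) = 0.30
```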
When all parameters have sample sizes greater than 30, the number of degrees of freedom of S_r is effectively infinite and t_{95} ≅ 2, as seen by reference to Table 10.1. For all other cases the number of degrees of freedom of S_r is determined by the Welch-Satterthwaite equation [15], [16]:

\nu_r = \frac{S_r^4}{\displaystyle\sum_{i=1}^{J}\frac{\left(\theta_i S_{P_i}\right)^4}{\nu_{P_i}}} = \frac{\left(S_r/r\right)^4}{\displaystyle\sum_{i=1}^{J}\frac{\left(\theta_i'\, S_{P_i}/P_i\right)^4}{\nu_{P_i}}}    (10.37)

where \nu_{P_i} = N_i - 1.
If equation (10.37) results in a non-integer number, it should be rounded downward to the next integer.
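A minimal sketch of the Welch-Satterthwaite estimate of equation (10.37) follows; the sensitivity factors, precision indices, and sample sizes are assumed example values.

```python
import math

theta = [5.0, 2.0, -0.1]    # assumed absolute sensitivity factors
S = [0.005, 0.01, 0.025]    # assumed precision indices of the parameters
N = [20, 10, 5]             # assumed sample sizes, so nu_i = N_i - 1

S_r = math.sqrt(sum((th * s) ** 2 for th, s in zip(theta, S)))                   # equation (10.33)
nu_r = S_r ** 4 / sum((th * s) ** 4 / (n - 1) for th, s, n in zip(theta, S, N))  # equation (10.37)
print(f"S_r = {S_r:.5f}, nu_r = {math.floor(nu_r)}")   # rounded downward per the text
```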
By analogy with equations (10.33) and (10.34), but with little basis in statistics, the bias of a result [17] is given by

B_r = \left[\sum_{i=1}^{J}\left(\theta_i B_{P_i}\right)^2\right]^{1/2}    (10.38)

on an absolute basis, and by

\frac{B_r}{r} = \left[\sum_{i=1}^{J}\left(\theta_i'\,\frac{B_{P_i}}{P_i}\right)^2\right]^{1/2}    (10.39)

on a relative basis.
Example 9
A result r is related to three measured parameters by

r = \frac{P_1 P_2^2}{P_3^{1/2}}    (10.40)

It follows from equations (10.35) and (10.36) that

\theta_1 = \frac{\partial r}{\partial P_1} = \frac{P_2^2}{P_3^{1/2}}, \qquad \theta_1' = \frac{\partial r}{\partial P_1}\,\frac{P_1}{r} = 1

\theta_2 = \frac{\partial r}{\partial P_2} = \frac{2 P_1 P_2}{P_3^{1/2}}, \qquad \theta_2' = 2

\theta_3 = \frac{\partial r}{\partial P_3} = -\frac{P_1 P_2^2}{2 P_3^{3/2}}, \qquad \theta_3' = -\frac{1}{2}

The mean values, the sensitivity factors, and the biases for each of the parameters are tabulated below:

Parameter    Mean    θ_i     θ_i'    B_i
P_1          1       5       1       0.005
P_2          5       2       2       0.05
P_3          25      -0.1    -0.5    0.5

Using the relationship of equation (10.40) and the tabulated values, determine the bias of the result in absolute and relative terms.
Solution
By equation (10.38),

B_r = \left[\left(\theta_1 B_1\right)^2 + \left(\theta_2 B_2\right)^2 + \left(\theta_3 B_3\right)^2\right]^{1/2} = \left[\left(5 \times 0.005\right)^2 + \left(2 \times 0.05\right)^2 + \left(0.1 \times 0.5\right)^2\right]^{1/2} = 0.11456

By equation (10.39),

\frac{B_r}{r} = \left[\left(\theta_1'\,\frac{B_1}{P_1}\right)^2 + \left(\theta_2'\,\frac{B_2}{P_2}\right)^2 + \left(\theta_3'\,\frac{B_3}{P_3}\right)^2\right]^{1/2} = \left[\left(1 \times 0.5\right)^2 + \left(2 \times 1\right)^2 + \left(0.5 \times 2\right)^2\right]^{1/2}\,\% = 2.2913\%

As a check on consistency, B_r can be determined from B_r/r as

B_r = \frac{B_r}{r}\, r = 0.022913 \times 5 = 0.11456
Example 10
Using the relationship (10.40) between result and parameters, and the values tabulated in Example 9, estimate the precision error of the result in absolute and relative terms if the following information applies:

Parameter    S_i      N_i
P_1          0.005    20
P_2          0.01     10
P_3          0.025    5
Solution
By equation (10.33),

S_r = \left[\left(\theta_1 S_1\right)^2 + \left(\theta_2 S_2\right)^2 + \left(\theta_3 S_3\right)^2\right]^{1/2} = \left[\left(5 \times 0.005\right)^2 + \left(2 \times 0.01\right)^2 + \left(0.1 \times 0.025\right)^2\right]^{1/2} = 0.03211

By equation (10.34),

\frac{S_r}{r} = \left[\left(\theta_1'\,\frac{S_1}{P_1}\right)^2 + \left(\theta_2'\,\frac{S_2}{P_2}\right)^2 + \left(\theta_3'\,\frac{S_3}{P_3}\right)^2\right]^{1/2} = \left[\left(1 \times 0.5\right)^2 + \left(2 \times 0.2\right)^2 + \left(0.5 \times 0.1\right)^2\right]^{1/2}\,\% = 0.64226\%

Check:

S_r = \frac{S_r}{r}\, r = 0.0064226 \times 5 = 0.03211

To get the precision error of the result (i.e., t_{95} S_r), it is necessary first to determine the number of degrees of freedom of S_r via equation (10.37):

\nu_r = \frac{S_r^4}{\dfrac{\left(\theta_1 S_1\right)^4}{N_1 - 1} + \dfrac{\left(\theta_2 S_2\right)^4}{N_2 - 1} + \dfrac{\left(\theta_3 S_3\right)^4}{N_3 - 1}} = \frac{0.03211^4}{\dfrac{\left(5 \times 0.005\right)^4}{19} + \dfrac{\left(2 \times 0.01\right)^4}{9} + \dfrac{\left(0.1 \times 0.025\right)^4}{4}} = 27.7

(The same value results from the relative form of equation (10.37), with 0.0064226^4 divided by (1 × 0.005)^4/19 + (2 × 0.002)^4/9 + (0.5 × 0.001)^4/4.) Rounded downward, ν_r = 27.
By Table 10.1, t_{95,27} = 2.052 for the result, and the precision error PE of the result on an absolute basis is

PE_r = t_{95,27}\, S_r = 2.052 \times 0.03211 = 0.06589

On a relative basis,

\frac{PE_r}{r} = t_{95,27}\,\frac{S_r}{r} = 2.052 \times 0.64226\% = 1.3179\%
Example 11
If the instrumentation used in determining the measurements of the three parameters in equation (10.40) was such that both the systematic and the random errors of Examples 9 and 10 applied, what would be the maximum and probable values of the uncertainty of the result on an absolute and a relative basis?
Solution
On an absolute basis, by equation (10.31),

U_{r,ADD} = U_{r,\max} = B_r + t_{95} S_r = 0.11456 + 0.06589 = 0.18045

By equation (10.32),

U_{r,RSS} = U_{r,\text{probable}} = \left[B_r^2 + \left(t_{95} S_r\right)^2\right]^{1/2} = \left[0.11456^2 + 0.06589^2\right]^{1/2} = 0.1322

On a relative basis,

\frac{U_{r,\max}}{r} = \frac{B_r}{r} + \frac{t_{95} S_r}{r} = 2.2913 + 1.3179 = 3.61\%

\frac{U_{r,\text{probable}}}{r} = \left[\left(\frac{B_r}{r}\right)^2 + \left(\frac{t_{95} S_r}{r}\right)^2\right]^{1/2} = 2.64\%

As a check on consistency,

U_{r,\max} = \frac{U_{r,\max}}{r}\, r = 0.0361 \times 5 = 0.1805

and

U_{r,\text{probable}} = \frac{U_{r,\text{probable}}}{r}\, r = 0.0264 \times 5 = 0.132
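As a compact summary, the following sketch reproduces the combination step of Example 11 from the bias and precision errors quoted in Examples 9 and 10 (B_r = 0.11456, t95·S_r = 0.06589, r = 5):

```python
import math

B_r = 0.11456     # bias of the result (Example 9)
PE_r = 0.06589    # precision error t95 * S_r of the result (Example 10)
r = 5.0           # value of the result, as used in the consistency checks

U_max = B_r + PE_r                          # equation (10.31)
U_prob = math.sqrt(B_r ** 2 + PE_r ** 2)    # equation (10.32)
print(f"U_max      = {U_max:.4f}  ({100 * U_max / r:.2f}% of r)")
print(f"U_probable = {U_prob:.4f}  ({100 * U_prob / r:.2f}% of r)")
```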