Valid Analytical Measurement


An Introduction to Quality
Assurance in Analytical
Science
Dr Irene Mueller-Harvey
Mr Richard Baker
Mr Brian Woodget
University of Reading
Part 2 - The Analytical Method
Contents:
• VAM principles (slides 3 & 4)
• Fit for purpose (slide 5)
• Choice of method (slide 6)
• Method development and validation (slide 7)
• Method performance characteristics (slides 8 - 15)
• Method validation (slides 16 - 25)
The presentation contains some animation which will be activated
automatically (no more than a 2 second delay), by mouse click or by use
of the ‘page down’ key on your keyboard.
VAM Principles (1)
The DTI’s programme on Valid Analytical Measurement (VAM) is an
integral part of the UK National Measurement System. The aim of the
VAM programme is to help analytical laboratories to demonstrate the
validity of their data and to facilitate mutual recognition of the results of
analytical measurements. You can find out more about VAM by logging
onto the web site [http://www.vam.org.uk/]
The VAM Bulletin is published half-yearly and is sent free to all registered subscribers. You may subscribe via the web site.
VAM Principles (2)
There are six VAM principles:

• Analytical measurements should be made to satisfy an agreed requirement.
• Analytical measurements should be made using methods and equipment which have been tested to ensure they are fit for their purpose.
• Staff making analytical measurements should be both qualified and competent to undertake the task.
• There should be a regular independent assessment of the technical performance of the laboratory.
• Analytical measurements made in one location should be consistent with those elsewhere.
• Organisations making analytical measurements should have well defined quality control and quality assurance procedures.
Fit for Purpose
You have seen in part 1 of this presentation that it can be expensive
to generate good quality analytical data. The cost of producing data
can often be reduced by selecting analytical methods and technologies
that produce data in accordance with the stated objectives for
carrying out the analysis or test. Consider two examples:
A commercial limescale remover states that it contains between 5 and 15% w/v of sulphamic acid. Therefore any analysis for quality control purposes needs only to show that the quantity present is indeed between these two limits and does not need to be accurate to ±0.1%.

The current EU statutory limit for lead in potable water is 25 μg/l. Thus a method which only provides a 'less than' value of 100 μg/l is not suitable for this purpose.
Choice of Method
It is important to appreciate the difference
between an ‘analytical method’ (combination
of steps illustrated by the ‘analytical process’)
and an ‘analytical technique’ (chemical or
instrumental procedure by which analytical
data is eventually obtained). In selecting a
method we shall need to consider the
following parameters:
• sample type (matrix) and size (a lot or a little);
• data required (qualitative/quantitative);
• expected level(s) of analyte(s);
• precision & accuracy expected;
• likely interferences;
• number and frequency of samples for analysis.
Always use a standard method if one is available, as this will save on development time. However, the method must be checked to prove that it is suitable for your laboratory/situation. Modification may well be required.
Method development and validation
‘Never attempt to re-invent the wheel!’
Before embarking on the development of a new method, always research the chemical literature to see if a suitable one already exists. If a suitable one is found, it will still be necessary to perform some method validation to prove that the method can be successfully adapted to your laboratory, equipment and personnel. More extensive validation is required for a brand new method. Methods in any field of analysis may be defined in terms of 'method performance characteristics', and it is these parameters, plus a few others, that are quantified during a method validation exercise.
Method performance characteristics
A method's performance is defined by a number of important individual characteristics. These are:

• Sensitivity
• Accuracy
• Precision
• Limit of detection (LoD)
• Limit of determination
• Selectivity
• Linear range
• Dynamic range
• Bias
Accuracy and precision
The dictionary definitions of 'accuracy' and 'precision' are roughly the same, suggesting that these words may be used synonymously. However, in analytical science they have two separate meanings; the difference between them is best illustrated by using target diagrams:

[Target diagrams illustrating four cases]
• poor precision, poor accuracy
• good precision, poor accuracy
• good mean accuracy, poor precision
• good accuracy, good precision
Accuracy and precision (2)
As you saw from the previous slide, a set of results can be accurate, precise, both, or neither. Thus accuracy may be defined as:

The closeness of the mean value from a replicate set of results to the true or accepted value

Precision may be defined as:

The spread of results from a replicate set of measurements

The difference between the true value and the mean measured value is termed bias. The spread of replicate data is measured in terms of standard deviation (s) or variance (s²).
Random and systematic errors

There are two types of error for which allowance may be made: random error and systematic error.

Random error arises from variations in parameters which are outside the control of the analyst but which influence the value of the measurement being made. Because these errors are statistically random, the mean error should be zero if sufficient measurements are made.

Systematic error remains constant, or may vary in a predictable way, over a series of measurements and cannot be reduced by making replicate measurements. In theory, if known, this error can be allowed for, e.g. by subtraction of blank values.
Bias and variance

A solution containing copper was analysed 10 times using atomic absorption spectroscopy. The results obtained in ppm were:

10.08, 9.80, 10.10, 10.21, 10.14, 9.88, 10.02, 10.12, 10.11, 10.09

We can now calculate the precision of the data as a standard deviation. If the true value is known to be 10.00 ppm, we can also calculate the bias.

[Figure: Cu by AAS; Cu concentration (ppm) plotted for each of the 10 replicate samples]

Bias = mean value - true value = 10.06 - 10.00 = 0.06 ppm
Standard deviation (SD) = 0.12(4)
Relative SD = 100 × SD/10.00 = 1.2%
Conclusion - the method gives both good accuracy (low bias) and
acceptable precision (RSD of 1.2%)
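These bias and precision figures can be reproduced with a few lines of code. Below is a minimal sketch in Python (not part of the original presentation) using the copper results from this slide; statistics.stdev applies the same (n - 1) denominator as the SD formula quoted later in this part.

from statistics import mean, stdev

# Replicate Cu results (ppm) from the AAS example on this slide
results = [10.08, 9.80, 10.10, 10.21, 10.14,
           9.88, 10.02, 10.12, 10.11, 10.09]
true_value = 10.00  # ppm, the accepted value for the solution

x_bar = mean(results)          # mean of the replicate set
sd = stdev(results)            # sample standard deviation (n - 1 denominator)
bias = x_bar - true_value      # accuracy expressed as bias
rsd = 100 * sd / true_value    # relative SD, calculated against 10.00 as on the slide

print(f"mean = {x_bar:.2f} ppm, bias = {bias:.2f} ppm")
print(f"SD = {sd:.3f} ppm, RSD = {rsd:.1f}%")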
Sensitivity and selectivity

Sensitivity is the change in measured signal for unit change in concentration and can be obtained from the calibration graph:

Sensitivity = dy/dx

[Figure: assessment of method sensitivity; calibration graph of signal against concentration with dy and dx marked]

Selectivity is the ability of a method to discriminate between the target analyte and other constituents of the sample. In many instances selectivity is achieved by high performance separation using chromatographic or electrophoretic techniques.

[Figure: HPLC chromatogram]
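Since sensitivity is simply the slope dy/dx of the calibration line, it can be estimated with a least-squares fit. The short Python sketch below is illustrative only: the concentration and signal values are invented rather than read from the slide's graph, and statistics.linear_regression requires Python 3.10 or later.

from statistics import linear_regression  # Python 3.10+

# Hypothetical calibration data: concentration (ppm) against measured signal
conc = [0, 5, 10, 15, 20]
signal = [2, 255, 498, 748, 1002]

fit = linear_regression(conc, signal)
sensitivity = fit.slope  # change in signal per unit concentration, i.e. dy/dx
print(f"sensitivity = {sensitivity:.1f} signal units per ppm")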
Limits of detection (LoD) and determination

These values refer to the statistical limits below which results of detection or accurate quantitative measurements (determination) should not be reported. The levels of both are dependent upon the variability of the signal when a blank containing none of the analyte is being measured. The signal generated under these conditions is mostly signal noise and is assumed to exhibit a normal distribution pattern. Both the blank signal and the standard deviation of the blank signal need to be measured. From this data we can calculate both limits.

Example: in an analysis of trace Cd by plasma emission spectrometry the following data were obtained:
• mean blank (Bl) signal: 4
• SD of blank signal: 12
• signal for 500 ppb Cd: 2000

LoD = Bl signal + 3(SD of Bl) = 4 + 3(12) = 40

This equates to: [40/2000] × 500 ppb = 10 ppb Cd

The limit of determination uses a similar formula, replacing the 3 SDs by 10. This gives the limit of determination as 31 ppb Cd.
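The two limits follow directly from the blank statistics. A minimal Python sketch (not in the original slides) using the figures quoted for the Cd example:

# Blank and standard data from the Cd plasma-emission example above
blank_mean = 4       # mean blank signal
blank_sd = 12        # SD of the blank signal
std_signal = 2000    # signal measured for the 500 ppb Cd standard
std_conc = 500       # ppb

def signal_to_conc(signal):
    # Convert a signal into a concentration via the single-point 500 ppb standard
    return signal / std_signal * std_conc

lod_signal = blank_mean + 3 * blank_sd    # limit of detection (signal units)
loq_signal = blank_mean + 10 * blank_sd   # limit of determination (signal units)

print(f"LoD = {signal_to_conc(lod_signal):.0f} ppb Cd")                     # 10 ppb
print(f"limit of determination = {signal_to_conc(loq_signal):.0f} ppb Cd")  # 31 ppb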
Linear and dynamic ranges

These terms refer to the extent to which the method may be used to produce accurate quantitative data.

[Figure: graph showing linear & dynamic ranges; signal plotted against concentration over 0 - 200 ppm]

From the graph, it would appear that the data is linear to about 25 ppm and dynamic until about 75 ppm. After 75 ppm there is only minimal increase in signal for increased concentration.

[Figure: expanded lower section of the graph (0 - 60 ppm) showing the extent of the dynamic range and the top of the linear range]

Expanding the lower section of the graph, however, shows that non-linearity starts at around 20 ppm.
Method validation
Validation is the proof needed to ensure that an analytical method can produce results which are reliable and reproducible and which are fit for the purpose intended. The parameters that need to be demonstrated are those associated with the 'performance characteristics', together with robustness, repeatability and reproducibility.

Many analytical methods appearing in the literature have not been through a thorough validation exercise and thus should be treated with caution until full validation has been carried out. Validation of a new method (new to your laboratory) is a costly and time-consuming exercise; however, failure to carry out method validation could result in litigation, failure to gain product approval, costly repeat analyses, and loss of business and prestige.

You can now consider in more detail how validation is carried out.
Method validation - linearity

Most analytical methods are of a comparative type and thus require calibration against accurately known standards to generate quantitative data. Where possible, calibration data should show a linear relationship between analyte concentration and measured signal; however it is acceptable, under some circumstances, to use a non-linear relationship up to the limit of the dynamic range.

Check linearity between 50 - 150% of the expected analyte concentration.

[Figure: calibration graph; signal plotted against concentration over the range 0 - 25]
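One common way to make the 50 - 150% linearity check concrete is to fit a straight line to the calibration standards and inspect the correlation coefficient and the residuals. The Python sketch below is illustrative only: the standard concentrations and signals are invented, acceptance criteria are not specified in this presentation, and statistics.linear_regression and statistics.correlation require Python 3.10 or later.

from statistics import linear_regression, correlation  # Python 3.10+

# Hypothetical standards spanning 50 - 150% of an expected level of 10 ppm
conc = [5.0, 7.5, 10.0, 12.5, 15.0]
signal = [0.151, 0.224, 0.302, 0.371, 0.449]

fit = linear_regression(conc, signal)
r = correlation(conc, signal)                       # Pearson correlation coefficient
residuals = [y - (fit.slope * x + fit.intercept)    # deviations from the fitted line
             for x, y in zip(conc, signal)]

print(f"slope = {fit.slope:.4f}, intercept = {fit.intercept:.4f}, r = {r:.4f}")
print(f"largest residual = {max(abs(e) for e in residuals):.4f}")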
Method validation - specificity

Loss of specificity can be due to interferences and matrix effects.

All likely interferences should be investigated and their effects on analyte response determined over a range of concentrations. Measures can then be put into place to mask, eliminate or separate them from the analyte. Standard addition procedures can be used to identify matrix effects.
Method validation - precision

You have seen already that precision is measured in terms of standard deviation (SD). Assuming that the variability of the measurements is totally random (i.e. it obeys a normal distribution curve), a formula derived from this distribution may be used to calculate the standard deviation.

[Figure: normal distribution curve; frequency of occurrence plotted against the data points, illustrating the estimation of the true mean]
SD = √[ Σ(xᵢ - x̄)² / (n - 1) ]

where:
xᵢ = individual data point
x̄ = mean value of the data
n = the total number of data points
Σ = the sum of

In practice, around 8 - 10 data points are normally used to calculate the SD, although statisticians would recommend 50.
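The formula above translates directly into code. A minimal Python sketch, applied here to the copper data from the earlier bias and variance slide:

from math import sqrt

def sample_sd(data):
    # Standard deviation with the (n - 1) denominator, as in the formula above
    n = len(data)
    x_bar = sum(data) / n    # mean value of the data
    return sqrt(sum((x - x_bar) ** 2 for x in data) / (n - 1))

cu_results = [10.08, 9.80, 10.10, 10.21, 10.14, 9.88, 10.02, 10.12, 10.11, 10.09]
print(round(sample_sd(cu_results), 3))   # 0.124, matching the earlier copper example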
Method validation - repeatability & reproducibility
Methods need to be shown to be both repeatable and
reproducible. A replicate set of data produced at a particular
time point by an operator working with a particular set of
equipment in a given laboratory will verify repeatability. To
show reproducibility, the method must produce similar
results when any of these parameters are changed. The most
likely changes are to time and operator.
[Figure: two different operators analysing milk using different pieces of equipment at different times; the laboratory is the same]
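One simple way to put numbers on the two ideas is to compare the spread within a single operator's replicate set (repeatability) with the spread of the pooled results from both operators (reproducibility). The Python sketch below is only a rough illustration with invented milk-fat values; a rigorous treatment would use analysis of variance.

from statistics import stdev

# Hypothetical fat content results (% w/w) for the same milk sample
operator_a = [3.51, 3.49, 3.53, 3.50, 3.52]   # operator A, instrument 1, day 1
operator_b = [3.56, 3.54, 3.57, 3.55, 3.58]   # operator B, instrument 2, day 2

repeatability_sd = stdev(operator_a)                  # spread under constant conditions
reproducibility_sd = stdev(operator_a + operator_b)   # spread when operator, equipment and time change

print(f"repeatability SD   = {repeatability_sd:.3f}")
print(f"reproducibility SD = {reproducibility_sd:.3f}")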
Method validation - reliability

The reliability of a method can be tested in a number of ways:

• Test results from the new method against an existing method which is known to be accurate.
• Add a known quantity of pure analyte (spike) to a real sample or real sample matrix and check that all of the added substance can be measured (recovered); a worked example follows below.
• The best way of demonstrating accuracy is to analyse a reference material or certified reference material (CRM), if one is available [see part 3 of this presentation].

[Figure: a selection of reference materials from LGC]
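The spike-recovery check in the second option above amounts to a single calculation: analyse the sample before and after adding a known amount of analyte and report the percentage recovered. A minimal Python sketch with invented numbers:

# Hypothetical spike-recovery check
unspiked = 2.1       # analyte found in the original sample (mg/l)
spike_added = 5.0    # known amount of pure analyte added (mg/l)
spiked = 7.0         # analyte found in the spiked sample (mg/l)

recovery = 100 * (spiked - unspiked) / spike_added   # percentage of the spike recovered
print(f"recovery = {recovery:.0f}%")                 # 98%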
Method validation - detection & quantitation limits

A method is not acceptable for accurate detection or quantitation if the analyte level is likely to fall beneath the limit(s) calculated from the blank signal and its standard deviation (refer to the formula given in slide 14 of this part of the presentation). Analyte pre-concentration then becomes necessary.

[Figure: overlapping distributions showing the variation in the blank signal and in the sample signal about their mean values]

The mean sample signal must be sufficiently larger than the blank so that positive detection or accurate quantitation is possible.
Method validation - robustness

Robustness of an analytical method refers to its ability to remain unaffected when subjected to small changes in method parameters.

For example, in an HPLC analysis the mobile phase is defined in terms of % organic modifier, pH of the mobile phase, buffer composition, temperature etc. A perfect mobile phase is one which allows small changes in the composition without affecting the selectivity or the quantitation of the method.

Alter all major parameters in order to ascertain when the method ceases to function in accordance with specifications.
Method validation - establishing stability
In routine analysis where numerous samples and standards
are measured each day, it is essential to assess the stability of
the prepared solutions. Stability of these solutions should be
tested by repeat analysis over at least a 48 hour period.
[Figure: signal plotted against time, showing the apparent onset of solution instability]
Method validation - additional reading

An article entitled "A Practical Guide to Analytical Method Validation" was published in Analytical Chemistry in 1996 [Anal. Chem. (68) 305A-309A]. The article may be downloaded free from the ACS web site:
http://pubs.acs.org/hotartcl/ac/96/may/may.html