10 Aberrant or Atypical Results
Christopher Burgess
10.1 Laboratory Failure Investigation
The purpose of an analysis of a sample for a particular analyte is to predict the value of that property for the entire lot or batch of product from which the sample was taken. Assuming that the sample is both representative and homogeneous, the sample is analysed using an analytical procedure. This procedure is itself a process, just as the manufacturing operation is [1]. All analytical measurements are subject to error. We are therefore faced with the situation of using one process (the analytical one) to judge the performance of another, the manufacturing process. Ideally we would like to use a measurement process which is infinitely precise and of known accuracy. If this were the case, any aberrant or atypical result (AAR) would be attributed to sampling or manufacturing process variation and not to the measurement process itself. From a regulatory perspective, the concern is primarily whether an out-of-specification result relates to the manufacturing process, which would lead to batch rejection, or whether it results from some other assignable cause. The possible assignment of attributable cause is a major part of laboratory failure investigations as required particularly by the FDA [2]. Failure to identify or establish attributable analytical cause within the laboratory triggers a full-scale failure investigation (Fig. 10-1).
Figure 10-1 Stages for the investigation of atypical or aberrant results: analyst identification; supervisor/analyst evaluation; failure investigation, laboratory phase; full failure investigation.
Method Validation in Pharmaceutical Analysis. A Guide to Best Practice. Joachim Ermer, John H. McB. Miller (Eds.) ISBN: 3-527-31255-2
The role and responsibilities of the analyst and the supervisor are critical to the performance of within-laboratory failure investigations. The analyst’s role and responsibilities are as follows:
1. The first responsibility for achieving accurate laboratory testing results lies with the analyst who is performing the test.
2. The analyst should be aware of potential problems that could occur during the testing process and should watch for problems that could create AARs.
3. The analyst should ensure that only those instruments meeting established specifications are used and that all instruments are properly calibrated [3] (see also Chapter 4).
4. Analytical methods with system suitability requirements which are not met should not be used or continued. Analysts should not knowingly continue an analysis they expect to invalidate at a later time for an assignable cause (i.e., analyses should not be completed for the sole purpose of seeing what results can be obtained when obvious errors are known).
5. Before discarding test preparations or standard preparations, analysts should check the data for compliance with specifications.
6. When unexpected results are obtained and no obvious explanation exists, test preparations should be retained and the analyst should inform the supervisor.
The analyst’s direct line manager or supervisor must be informed of an AAR occurrence as soon as possible. The supervisor is then involved in a formal and documented evaluation. Their role and responsibilities are as follows:
1. To conduct an objective and timely investigation and document it.
2. To discuss the test method and confirm the analyst’s knowledge of the procedure.
3. To examine the raw data obtained in the analysis, including chromatograms and spectra, and identify anomalous or suspect information.
4. To confirm the performance of the instruments.
5. To determine that appropriate reference standards, solvents, reagents and other solutions were used and that they met quality control specifications.
6. To evaluate the performance of the testing method to ensure that it is performing according to the standard expected based on method validation data.
7. To document and preserve evidence of this assessment.
8. To review the calculation.
9. To ascertain not only the reliability of the individual value obtained, but also the significance of these AARs in the overall quality assurance program. Laboratory error should be relatively rare. Frequent errors suggest a problem that might be due to inadequate training of analysts, poorly maintained or improperly calibrated equipment, or careless work.
10. When clear evidence of laboratory error exists, the laboratory testing results should be invalidated.
When evidence of laboratory error remains unclear, a laboratory failure investigation should be conducted to determine what caused the unexpected results. This process could include the following points:
1. Re-testing the original solutions.
2. Re-testing a portion of the original laboratory sample – the decision to re-test should be based on sound scientific judgement.
3. Use a different analyst in conjunction with the original analyst.
4. A predetermined testing procedure should identify the point at which the testing ends and the product is evaluated. Testing into compliance is objectionable under the CGMPs.
5. If a clearly identified laboratory error is found, the re-test results would be substituted for the original test results.
6. The original results should be retained, however, and an explanation recorded.
7. The results and conclusions should be documented.

This chapter is concerned not only with out-of-specification analytical measurements, but also those that do not meet expectations or are discordant. In order to discuss whether or not a result is aberrant or atypical, it is firstly necessary to define what a result is and secondly to specify what constitutes typical behaviour. Once these criteria have been defined it is possible to review the methods available for detecting and evaluating atypical behaviour. We need to be concerned about AARs because, when they are included in our calculations, they distort both the measure of location (usually, but not always, the mean or average value) and the measure of dispersion or spread (precision or variance).
10.2 Basic Concepts of Measurement Performance
Analytical measurements are the outcomes of scientifically sound analytical methods and procedures. These methods and procedures are themselves dynamic processes. It is important to recognise that, when analyses are carried out with the objective of measuring manufacturing process performance, the problem is essentially one of one process being used to assess another. For the purposes of this discussion we will ignore the sampling process and assume that the test sample, drawn from a laboratory sample and from which the analytical signal derives, is representative of the lot or batch of material under test. In order to describe the characteristics of analytical measurements and results, a basic vocabulary of unambiguous statistical terms needs to be firmly established. Concepts such as accuracy and precision are widely misused and misunderstood within the analytical community [4]. The importance of a commonly agreed terminology cannot be overestimated. Figure 10-2 illustrates some of the basic concepts and definitions.

Figure 10-2 Basic definitions and concepts for analytical measurements (analytical measurement signal versus time; measured values at fixed times; mean value; precision; bias of the method; standard value; accuracy = measured value − standard value).

All measurements and responses are subject to error. These errors may be random or systematic, or a combination of both. As an example, we will assume that the analytical measurement signal shown in Figure 10-2, represented as a varying black line, is the analogue voltage output from a UV spectrophotometric absorbance measurement of a sample solution. This signal is sampled or recorded as a series of measurement values in time, represented by the dots; this might be by an A/D converter, for example. The amplitude of the natural and inherent variability of the instrument measurement process allows an estimate of the random error associated with the measurement to be made. The random error estimate is a measurement of precision. There are many types of precision (see Section 2.1.2). The one estimated here is the measurement or instrument-response precision. This represents the best capability of the measurement function.
As analytical data are found to be [5], or assumed to be, normally distributed in most practical situations, precision may be defined in terms of the measurement variance V_m, which is calculated from the sum of squares of the differences between the individual measurement values and the average or mean value. For a measurement sequence of n values this is given by Eq. (10-1):

    V_m = Σ_{i=1..n} (x_i − x̄)² / (n − 1)    (10-1)

Hence the standard deviation is given by s_m = √V_m and the relative standard deviation by RSD = 100 s_m / x̄.

Precision is about the spread of data under a set of predetermined conditions. There are other sources of variability within an analytical procedure, and hence different measurements of precision from the one discussed above; these will be discussed later (Section 10.4). However, it should be noted that the instrumental or measurement precision is the best which the analytical process is capable of achieving. With increasing complexity, the additional variance contributions will increase the random component of the error.
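The quantities in Eq. (10-1) are straightforward to compute. The following minimal sketch estimates the mean, sample standard deviation and RSD of a short measurement sequence; the replicate readings are invented for illustration only:

```python
import math

def precision_summary(values):
    """Return mean, sample standard deviation and RSD (%) of a sequence."""
    n = len(values)
    mean = sum(values) / n
    # Sample variance: sum of squared deviations divided by (n - 1), as in Eq. (10-1)
    variance = sum((x - mean) ** 2 for x in values) / (n - 1)
    s = math.sqrt(variance)
    rsd = 100 * s / mean
    return mean, s, rsd

# Six replicate absorbance readings (hypothetical values)
readings = [0.502, 0.498, 0.501, 0.499, 0.500, 0.503]
mean, s, rsd = precision_summary(readings)
print(f"mean = {mean:.4f}, s = {s:.4f}, RSD = {rsd:.2f}%")
```

Note the divisor n − 1 rather than n: s is an estimate of the population standard deviation from a finite sample, a point returned to in Section 10.5.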
Accuracy is defined in terms of the difference between a measured value and a known or standard value. In our example, this would be the difference between the measured absorbance value and the assigned value of a solution or artefact established by, or traceable to, a National Laboratory (for example, NIST or NPL). This definition implies that the accuracy of measurement varies across a measurement sequence and contains elements of both random and systematic error.
For this reason, it is best analytical practice to combine a number of measurements by the process of averaging in order to arrive at a mean value. Conventionally, the difference between this mean value and the standard or known value is called the bias. However, the International Organization for Standardization (ISO) has defined a new term, trueness [6], to mean the closeness of agreement between an average value obtained from a large series of measurements and an accepted reference value. In other words, trueness implies lack of bias [7].
Strictly, accuracy is not a property of analytical methods and procedures. This is because the outcome of such processes is subject to an estimate of measurement uncertainty [8]. This measurement uncertainty estimate contains contributions from both systematic and random errors and is therefore a combination of accuracy and precision components.
Figure 10-3 Accuracy, precision and trueness (redrawn from [4]): improving accuracy (decreasing uncertainty) combines improving trueness with improving precision.
Examination of Figure 10-3 reveals that the traditional method of displaying accuracy and precision, using the well-known target illustration, is not strictly correct. It is trueness (or lack of bias), not accuracy, which should be set alongside precision.
10.3 Measurements, Results and Reportable Values
Thus far we have only considered the instrumental measurement process and basic statements of measurement performance. We need to extend these ideas into the overall analytical process, from the laboratory sample to the end result or reportable value [9]. The purpose of any analysis is to report upon the sample provided. This entails taking the reportable value(s) relating to the sample and comparing it (them) to a set of limits (a specification). This implies that the selected analytical method or procedure is fit for its intended purpose.
Figure 10-4 Analytical process flow: test sample → dispense and weigh → test portion → sample preparation (iterated in accordance with the method) → test solution → calculation of test result(s) and reportable value(s) → data output, recording and reporting.
It is assumed that the analytical methods and procedures are validated, and that this validation has been performed using equipment and systems which have been qualified and calibrated. In addition, all computerised systems involved in generating data and results have been subjected to adequate verification and validation.
Although the analytical measurement is at the heart of the analytical process, it is not the only source of error (systematic or random) which affects the overall trueness of the end result. Consider the analytical process flow shown in Figure 10-4. It is apparent that one analytical measurement does not usually constitute a route to a reportable value. Additionally, there are variance contributions which arise from other parts of the process, particularly in sample preparation and sub-sampling.
Generally speaking, analytical measurements are derived from the sampling of an analytical signal or response function. Analytical results are based upon those analytical measurements given a known (or assumed) relationship to the property which is required, such as a concentration or a purity value. Reportable values are predetermined combinations of analytical results and are the only values that should be compared with a specification.
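Because independent random errors combine as variances, not as standard deviations, the contribution of each stage of Figure 10-4 can be assessed separately and then pooled. A minimal sketch, using hypothetical component standard deviations:

```python
import math

# Hypothetical standard deviations (% of label claim) for each stage of the process
components = {
    "sub-sampling": 0.5,
    "sample preparation": 1.2,
    "instrumental measurement": 0.4,
}

# Independent variances add; standard deviations do not
total_variance = sum(s ** 2 for s in components.values())
total_sd = math.sqrt(total_variance)
print(f"combined standard deviation = {total_sd:.2f}%")
```

Note that the largest component (here, sample preparation) dominates the combined value, which is why reducing the smaller contributions yields little overall improvement.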
An analytical method or procedure is a sequence of explicit instructions that describe the analytical process from the laboratory sample to the reportable value. Reportable values should be based on knowledge of the analytical process capability determined during method validation. This will be discussed in Section 10.5.
10.4 Sources of Variability in Analytical Methods and Procedures
Examination of Figure 10-4 reveals some of the additional sources of variability which affect an analytical method. The ICH guidelines [10] define three levels of precision that need to be established during validation of an analytical procedure: repeatability, intermediate precision and reproducibility. The magnitude of these precisions increases in that order. In the laboratory, a fourth kind of precision is encountered: instrument or measurement precision. This is the smallest of the precisions and is an estimate of the very best that an instrument can perform, for example, the precision obtained from a series of repeated injections of the same solution in a short space of time. This measurement of instrument repeatability is often confused with ICH repeatability, which refers to the complete sample preparation.
The most important factors in the determination of repeatability, intermediate precision and reproducibility are, for a given method: laboratory, time, analyst and instrumentation [1] (Table 10-1).
Repeatability is the closest to the instrument precision discussed earlier. It is determined using a series of replicate measurements over a short time period (at least six at one concentration level, or nine if taken over the concentration range) on the same experimental system and with one operator. Intermediate precision is a measure of the variability within the development laboratory and is best determined using designed experiments. Reproducibility is a measure of the precision found when the method is transferred into routine use in other laboratories. The determination of reproducibility is normally achieved via a collaborative trial. The random error component increases from repeatability to reproducibility as the sources of variability increase (Table 10-1).
Table 10-1 Factors involved in precision determinations.

  Type of precision to be determined                           Factors to vary   Factors to control
  Repeatability                                                –                 L, T, A, I
  Intermediate precision (within-laboratory reproducibility)   T, A, I           L
  Reproducibility (between-laboratory reproducibility)         L, T, A, I        –

Abbreviations: L = laboratory; T = time; A = analyst; I = instrumentation.
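The ordering repeatability ≤ intermediate precision ≤ reproducibility can be illustrated with a small simulation. The variance components below are hypothetical; the point is only that adding a between-run effect (time, analyst or instrument changing) inflates the observed spread beyond the within-run component:

```python
import random
import statistics

random.seed(1)

# Hypothetical standard deviation components (% of label claim)
s_repeat = 0.8       # within-run (repeatability) component
s_between_run = 0.9  # run-to-run component added under intermediate precision

def simulate(n_runs, reps_per_run, between_sd):
    """Generate results around 100% with a run effect plus within-run noise."""
    data = []
    for _ in range(n_runs):
        run_effect = random.gauss(0, between_sd)
        data += [100 + run_effect + random.gauss(0, s_repeat)
                 for _ in range(reps_per_run)]
    return data

repeat_data = simulate(1, 1000, 0)             # single run: repeatability only
inter_data = simulate(1000, 1, s_between_run)  # many runs: intermediate precision

print(statistics.stdev(repeat_data))  # close to 0.8
print(statistics.stdev(inter_data))   # close to sqrt(0.8**2 + 0.9**2), about 1.2
```

The two observed standard deviations recover the input components because independent variances add, as discussed in Section 10.3.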
These measurements of precision are made either at one concentration or over a narrow range of concentrations. In the latter case, it is assumed that the variance does not change over the concentration range studied. This is a reasonable assumption for analytical responses which are large. If the analytical responses approach the limit of quantitation, for example with impurities, then this assumption should be checked using an F test for homogeneity of variances.
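The homogeneity check mentioned above can be sketched as a simple two-sided F test on the variances at the two ends of the concentration range. The variances below are illustrative, and the critical value is the tabulated two-sided 5% point for (5, 5) degrees of freedom; in practice a statistics table or library would supply the value appropriate to the actual replicate numbers:

```python
def f_ratio(var_a, var_b):
    """Two-sided F statistic: larger variance over the smaller."""
    return max(var_a, var_b) / min(var_a, var_b)

# Illustrative variances, each from n = 6 replicates
var_high = 0.0025  # variance near the upper end of the range
var_low = 0.0110   # variance near the limit of quantitation

F = f_ratio(var_high, var_low)
# Tabulated critical value, alpha = 0.05 (two-sided), (5, 5) degrees of freedom
F_crit = 7.15
verdict = "differ" if F > F_crit else "may be treated as homogeneous"
print(f"F = {F:.2f}; the variances {verdict} at the 5% level")
```

If F exceeded the critical value, a weighted treatment of the data (or separate precision statements per level) would be needed rather than a single pooled variance.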
Analytical chemists have long been aware that the relative standard deviation increases as the analyte concentration decreases. Horwitz [11] at the FDA undertook the analysis of approximately 3000 precision values from collaborative trials, which led to the establishment of an empirical function, RSD = ±2^(1 − 0.5 log C), which when plotted yields the Horwitz trumpet. This function is illustrated in Figure 10-5 and clearly shows that the assumption of constant variance with concentration is only reasonable at high concentrations and over narrow ranges.
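The Horwitz function is easy to evaluate. With C expressed as a dimensionless mass fraction, the predicted RSD doubles for roughly every hundredfold decrease in concentration; the concentration levels chosen below are simply representative points on the trumpet:

```python
import math

def horwitz_rsd(mass_fraction):
    """Predicted reproducibility RSD (%) from the empirical Horwitz function."""
    return 2 ** (1 - 0.5 * math.log10(mass_fraction))

for c, label in [(1.0, "pure material (100%)"),
                 (1e-3, "0.1% impurity"),
                 (1e-6, "1 ppm residue")]:
    print(f"{label}: predicted RSD = {horwitz_rsd(c):.1f}%")
```

At C = 1 the function predicts an RSD of 2%, consistent with the 2% often quoted for HPLC assay methods, while at trace levels the predicted spread widens dramatically.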
These considerations lead us to the idea that analytical process capability is critical in defining an aberrant or atypical result.
Figure 10-5 The Horwitz trumpet: relative standard deviation (%) plotted against analyte concentration, showing typical regions for pesticide residues, drugs in feeds, APIs and drug products.
10.5 Analytical Process Capability
Process capability is a statistical concept. It requires two things:
1. a knowledge of the randomness and trueness of the process;
2. a set of boundary conditions under which the process is required to operate.
The first of these requirements has been discussed in the preceding sections. The second requirement is normally called a specification or tolerance limit. Our definitions of AARs will depend upon the type of boundary condition imposed on the process. The different types will be discussed in Section 10.6.
For the moment, let us assume a specification for release of a drug product of 95% – 105% of the labelled claim of an active material. Let us also assume that the analytical method we are using to generate analytical measurements is unbiased, i.e., free from systematic error, and that its random error is characterised by the precision, as defined by a standard deviation, arising from all the sources of variability considered. The analytical process undertaken is shown in Figure 10-4.
In our example we will define that a reportable value is derived from a single analytical result. For our purposes, let us assume that the analytical process standard deviation lies between 1 and 3% (note that 2% is a value often found for HPLC methodologies; see Section 2.1.3.2). We use the symbol s as the estimate of the population standard deviation σ. This estimate is normally obtained from the intermediate precision.
We can now calculate what the distribution of (single) reportable values would look like by generating the normal distribution curves for each of the standard deviations and marking the upper and lower specification limits. The resulting plot is shown in Figure 10-6. By visual inspection it is immediately apparent, without the necessity for further calculation, that if our analytical process had s = 1% then we would be reasonably confident that, if a value lay outside the specification limits, it was unlikely to be due to the inherent variability in the method. In contrast, when s = 3%, such a method would not be suitable because a large percentage of results (in this instance about 9.6%) would lie outside the limits due to the measurement process itself. Clearly it is scientifically unsound to attempt to monitor a manufacturing process with a defined analytical process which is not fit for that purpose. For s = 2% we have the situation where only a small proportion of the data will lie outside the limits (approximately 1.2%). This raises the question: how good does our method have to be?
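These percentages follow directly from the normal distribution. Assuming an unbiased process centred at 100% of label claim with limits of 95 – 105%, the two-tailed probability of a single result falling outside specification can be computed from the error function:

```python
import math

def fraction_outside(s, lower=95.0, upper=105.0, mean=100.0):
    """Two-tailed probability that a normal result falls outside the limits
    (limits assumed symmetric about the mean)."""
    z = (upper - mean) / s
    # Upper-tail probability of the standard normal via the complementary error function
    tail = 0.5 * math.erfc(z / math.sqrt(2))
    return 2 * tail

for s in (1.0, 2.0, 3.0):
    print(f"s = {s:.0f}%: {100 * fraction_outside(s):.3f}% of results outside 95-105%")
```

For s = 1% the fraction is negligible (well below 0.001%), while for s = 3% nearly one result in ten would fail specification through measurement variability alone.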
Figure 10-6 Simulation of (single) reportable values for s = 1, 2 and 3%.

Any analytical method must be capable of generating reportable values which have a sufficiently small uncertainty to be able to identify variations within the manufacturing process. This leads us naturally into measures for process capability. The process capability index, Cp, is calculated from Eq. (10-2).
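Eq. (10-2) itself falls outside this excerpt; on the conventional definition, Cp is the specification width divided by six standard deviations, and on that assumption the three example methods can be compared directly:

```python
def process_capability(lower, upper, s):
    """Conventional Cp index: specification width over six standard deviations."""
    return (upper - lower) / (6 * s)

# Specification 95-105% of label claim, as in the example above
for s in (1.0, 2.0, 3.0):
    print(f"s = {s:.0f}%: Cp = {process_capability(95, 105, s):.2f}")
```

A Cp of 1 corresponds to the ±3s spread just filling the specification window; on this measure only the s = 1% method (Cp about 1.67) comfortably exceeds that threshold.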