Validation of Analytical Assays and Test Methods for the Pharmaceutical Laboratory

By Robert V. Sarrio and Loui J. Silvestri, PhD
AccuReg

Overview

Analytical procedures used to measure the quality of pharmaceutical products span almost the entire range of currently available technologies and techniques. From the immunoassay and electrophoretic techniques used to characterize protein moieties to the chromatographic and potentiometric methods used to evaluate the qualities of small molecules, the variety of procedures (and of the approaches necessary to prove these methods' validity and usefulness) can be overwhelming. However, when evaluating available procedures to determine which are best for your intended use, it is important to keep in mind that the most important aspect of any analytical method is the quality of the data it ultimately produces.

Perhaps the most useful and widely consulted guidance in the industry is the USP's General Chapter 1225, entitled "Validation of Compendial Methods". This Chapter opens by referencing the Federal Food, Drug, and Cosmetic Act (and hence stressing the legal status of USP test procedures), then continues with a formal definition of "validation" as it applies to analytical methods. Directly quoted, the Chapter states that "Validation of an analytical method is the process by which it is established, by laboratory studies, that the performance characteristics of the method meet the requirements for the intended analytical applications."

The most significant point raised by this definition is that the validity of a method can be demonstrated only through laboratory studies. It is not sufficient to simply review historical results; instead, laboratory studies must be conducted which are intended to validate the specific method, and those studies should be pre-planned and described in a suitable protocol. The protocol should clearly indicate the method's intended use and principles of operation, as well as the validation parameters to be studied, and a rationale for why this method and these parameters were chosen. The protocol also must include pre-defined acceptance criteria and a description of the analytical procedure, written with sufficient detail to enable persons "skilled in the art" to replicate the procedure.

Validation Parameters - Assays

USP General Chapter 1225, as well as the ICH Guideline for Industry (Text on Analytical Procedures), provides cursory descriptions of the typical validation parameters, how they are determined, and which subset of parameters is required to demonstrate validity, based on the method's intended use. For example, it would be inappropriate to determine limits of detection or quantitation for an active ingredient using an assay method intended for finished product release. However, if the method were intended to detect trace quantities of the active ingredient for purposes of a cleaning validation study, then knowledge of the detection and quantitation limits is appropriate and necessary. For this reason, validation of each assay or test method should be performed on a case-by-case basis, to ensure that the parameters are appropriate for the method's intended use. This is even more important when validating stability-indicating assay methods, because these validations are more complex - for example, they may require forced degradation, samples spiked with known degradants, literature searches, etc.

The following definitions, taken from the ICH Guideline for Industry (Text on Analytical Procedures), will provide a background for subsequent discussion:

Analytical Procedure.

The analytical procedure refers to the way of performing the analysis. It should describe in detail the steps necessary to perform each analytical test. This may include, but is not limited to, the sample, the reference standard and the reagents preparations, use of the apparatus, generation of the calibration curve, use of the formulae for the calculation, etc.

Specificity.

Specificity is the ability to assess unequivocally the analyte in the presence of components which may be expected to be present. Typically, these might include impurities, degradants, matrix, etc. Lack of specificity of an individual analytical procedure may be compensated by other supporting analytical procedure(s).

Accuracy.

The closeness of agreement between the value which is accepted either as a conventional true value or an accepted reference value, and the value found.

Note: When measuring accuracy, it is important to spike placebo preparations with varying amounts of active ingredient(s). If a placebo cannot be obtained, then a sample should be spiked at varying levels. In both cases, acceptable recovery must be demonstrated.
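The recovery calculation behind such a spiking study is straightforward. The sketch below uses illustrative, made-up amounts (three spike levels, three preparations each) to show how mean percent recovery is computed at each level; actual spike levels, units, and acceptance limits must come from the method's own protocol.

```python
# Percent recovery for a spiked-placebo accuracy study (illustrative data).
# recovery (%) = 100 * (amount found by the assay) / (amount spiked)

spike_levels = {  # spiked amount (mg) -> amounts found (mg), triplicate preps
    50.0: [49.1, 49.8, 50.4],
    100.0: [99.2, 100.6, 98.9],
    150.0: [148.7, 151.2, 149.9],
}

mean_recovery = {}
for spiked, found in spike_levels.items():
    recoveries = [100.0 * f / spiked for f in found]
    mean_recovery[spiked] = sum(recoveries) / len(recoveries)
    print(f"{spiked:6.1f} mg spike: mean recovery = {mean_recovery[spiked]:.1f}%")
```

Each level's mean recovery would then be compared against the protocol's pre-defined acceptance criterion.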

Precision.

The precision of an analytical procedure expresses the closeness of agreement (degree of scatter) between a series of measurements obtained from multiple sampling of the homogeneous sample under the prescribed conditions. Precision may be considered at three levels: repeatability, intermediate precision and reproducibility.

Precision should be investigated using homogeneous, authentic (full scale) samples. However, if it is not possible to obtain a full-scale sample it may be investigated using a pilot-scale or bench-top scale sample or sample solution.

The precision of an analytical procedure is usually expressed as the variance, standard deviation or coefficient of variation of a series of measurements. Refer to this month's "The Regulatory Clinic" for a discussion of AccuReg's consensus interpretations of the following terms that express precision:

a. Repeatability. Repeatability expresses the precision under the same operating conditions over a short interval of time. Repeatability is also termed intra-assay precision.

b. Intermediate Precision. Intermediate precision expresses within-laboratories variations: different days, different analysts, different equipment, etc.

c. Reproducibility. Reproducibility expresses the precision between laboratories (collaborative studies usually applied to standardization of methodology).
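All three statistics named above (variance, standard deviation, coefficient of variation) fall out of the same replicate data set. A minimal sketch, using illustrative repeatability results expressed as percent of label claim:

```python
import statistics

# Repeatability (intra-assay precision) from replicate results
# (illustrative values, % of label claim).
results = [99.6, 100.2, 99.8, 100.5, 99.9, 100.1]

mean = statistics.mean(results)
sd = statistics.stdev(results)   # sample standard deviation
rsd = 100.0 * sd / mean          # coefficient of variation, reported as %RSD
print(f"mean = {mean:.2f}, SD = {sd:.3f}, %RSD = {rsd:.2f}")
```

For intermediate precision and reproducibility the arithmetic is the same; what changes is the source of the replicates (different days, analysts, equipment, or laboratories).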

Detection Limit.

The detection limit of an individual analytical procedure is the lowest amount of analyte in a sample which can be detected but not necessarily quantitated as an exact value.

Quantitation Limit.

The quantitation limit of an individual analytical procedure is the lowest amount of analyte in a sample which can be quantitatively determined with suitable precision and accuracy. The quantitation limit is a parameter of quantitative assays for low levels of compounds in sample matrices, and is used particularly for the determination of impurities and/or degradation products.
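One common way to estimate these two limits, described in the ICH guidance, is from the standard deviation of the response and the slope of the calibration curve: the detection limit is approximately 3.3 sigma/S and the quantitation limit approximately 10 sigma/S. The values below are illustrative assumptions, not data from any real method:

```python
# ICH-style estimates of detection and quantitation limits:
#   LOD ~ 3.3 * sigma / S,  LOQ ~ 10 * sigma / S
# where sigma is the standard deviation of the response (e.g., of blank
# injections or of the regression residuals) and S is the calibration slope.
# All numbers here are illustrative.

sigma = 0.12   # SD of blank response (assumed units: peak area)
slope = 8.5    # calibration slope (assumed units: area per ug/mL)

lod = 3.3 * sigma / slope
loq = 10.0 * sigma / slope
print(f"LOD ~ {lod:.4f} ug/mL, LOQ ~ {loq:.4f} ug/mL")
```

Whatever estimate is used, the quantitation limit should subsequently be confirmed by analyzing samples prepared at that level and demonstrating suitable precision and accuracy.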

Linearity.

The linearity of an analytical procedure is its ability (within a given range) to obtain test results which are directly proportional to the concentration (amount) of analyte in the sample.

Note: Measurements using clean standard preparations should be performed to demonstrate detector linearity, while method linearity should be determined concurrently during the accuracy study. Classical linearity acceptance criteria are 1) that the correlation coefficient of the linear regression line is not less than some value close to 1 (e.g., 0.999), and 2) that the y-intercept does not differ significantly from zero.

When linear regression analyses are performed, it is important not to force the regression line through the origin (0,0) in the calculation. This practice may significantly skew the actual best-fit slope through the physical range of use.

Range.

The range of an analytical procedure is the interval between the upper and lower concentration (amounts) of analyte in the sample (including these concentrations) for which it has been demonstrated that the analytical procedure has a suitable level of precision, accuracy and linearity.

Robustness.

The robustness of an analytical procedure is a measure of its capacity to remain unaffected by small, but deliberate, variations in method parameters and provides an indication of its reliability during normal usage.

Note: Ideally, robustness should be explored during the development of the assay method. By far the most efficient way to do this is through the use of a designed experiment. Such experimental designs might include a Plackett-Burman matrix approach to investigate first-order effects, or a 2^k factorial design that will provide information regarding the first-order (main) and higher-order (interaction) effects.

In carrying out such a design, one must first identify the variables in the method that may be expected to influence the result. For instance, consider an HPLC assay which uses an ion-pairing reagent. One might investigate: sample sonication or mixing time; mobile phase organic solvent composition; mobile phase pH; column temperature; injection volume; flow rate; modifier concentration; concentration of ion-pairing reagent; etc. It is through this sort of development study that the variables with the greatest effects on results may be determined in a minimal number of experiments.
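For a small number of variables, a full 2^k factorial design and its main effects can be laid out in a few lines. The sketch below uses three hypothetical factors (mobile phase pH, column temperature, flow rate) at coded low/high levels, with made-up assay results for the eight runs; a real study would also examine interaction effects and replicate center points:

```python
from itertools import product

# 2^3 full factorial for three method variables at coded -1/+1 levels.
# Factor names and responses are illustrative, not real data.
factors = ["pH", "temp", "flow"]
design = list(product([-1, 1], repeat=len(factors)))  # 8 runs

# Assay results (% label claim) for the 8 runs, in design order.
response = [99.1, 99.5, 100.2, 100.4, 98.7, 99.0, 99.9, 100.1]

# Main effect of a factor = mean response at +1 minus mean response at -1.
effects = {}
for j, name in enumerate(factors):
    hi = [r for run, r in zip(design, response) if run[j] == 1]
    lo = [r for run, r in zip(design, response) if run[j] == -1]
    effects[name] = sum(hi) / len(hi) - sum(lo) / len(lo)

for name, eff in effects.items():
    print(f"{name}: main effect = {eff:+.3f}")
```

Factors whose main effects are large relative to the assay's repeatability are the ones whose ranges must be controlled most tightly in the final procedure.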

The actual method validation will ensure that the final, chosen ranges are robust.


Other points to consider include:

System Suitability

Prior to the start of laboratory studies to demonstrate method validity, some type of system suitability test must be performed to demonstrate that the analytical system is performing properly. Examples include: replicate injections of a standard preparation for HPLC and GC methods; standardization of a volumetric solution followed by assays using the same buret for titrimetric methods; replicate scans of the same standard preparation during UV-VIS assays; etc. When the method in question utilizes an automated system such as a chromatograph or an atomic absorption spectrophotometer, a suitable standard preparation should be measured intermittently during the sample analysis run. The responses generated by the standard should exhibit a reasonable relative standard deviation. This is done primarily to demonstrate the stability of the system during sample measurements. System suitability for dissolution studies should be performed using both USP non-disintegrating and disintegrating tablets prior to the validation of dissolution methods.
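The "reasonable relative standard deviation" check on replicate standard injections reduces to a one-line calculation. The peak areas and the 2.0% limit below are illustrative; the actual acceptance limit must come from the method's own system suitability requirements:

```python
import statistics

# Replicate standard injections at the start of an HPLC run
# (peak areas, illustrative values).
areas = [152310, 152890, 151980, 152540, 152120]

rsd = 100.0 * statistics.stdev(areas) / statistics.mean(areas)
suitable = rsd <= 2.0   # assumed acceptance limit, for illustration only
print(f"%RSD = {rsd:.2f} -> system suitable: {suitable}")
```

The same calculation applied to the bracketing standards injected during the run demonstrates that the system remained stable throughout the sample measurements.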

Validity Checks - General Tests

It is important to realize that assays are not the only tests important in evaluating the qualities of a drug product. The USP contains numerous identity tests of a chemical nature. In these types of tests, one should treat a placebo preparation with the reaction reagent to ensure a negative result is achieved. Otherwise, the test has no meaning. Dissolution tests, for instance, should be evaluated for adequate sink conditions (i.e., adequate solubility in an adequate volume of the dissolution media) prior to development.

Protocols

As mentioned earlier, prior to initiating a validation study, a well-planned validation protocol should be written and reviewed for scientific soundness and completeness by qualified individuals. The protocol should describe the procedure in detail, and should include pre-defined acceptance criteria and pre-defined statistical methods. Following approval by the appropriate corporate and Quality Control authorities, the protocol should be executed in a timely manner. A typical assay validation will require the preparation of product placebo(s), standards, and many samples.

How many times should an assay be repeated to ensure "validity"? Although 3 sequential replicates are often considered the "magic number," a far more definitive number is one produced by a sound scientific rationale, usually with the assistance of statistical analyses.
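One simple statistical rationale for a replicate count is to choose n so that the confidence interval on the mean result is acceptably narrow. Under a normal approximation, n >= (z * sigma / E)^2, where sigma is the expected assay variability and E the acceptable half-width of the interval. All three inputs below are illustrative assumptions:

```python
import math

# Replicate count from a desired confidence-interval half-width (sketch).
# n >= (z * sigma / E)^2, normal approximation; inputs are illustrative.
z = 1.96        # ~95% two-sided confidence
sigma = 0.8     # expected assay SD (% label claim), e.g. from development data
E = 0.5         # acceptable half-width of the CI (% label claim)

n = math.ceil((z * sigma / E) ** 2)
print(f"suggested replicates: n = {n}")
```

The point is not the particular formula but that the replicate count follows from the method's expected variability and the precision the decision requires, rather than from habit.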

Subsequent to the execution of the protocol, the data must be analyzed with results, conclusions and deviations presented in an official validation summary report. Provided the pre-defined acceptance criteria are met, and the deviations (if any) do not affect the scientific interpretation of the data, the method can be considered valid. A statement of the method's validity should be placed at the beginning of the final summary report, along with the signatures and titles of all significant participants and reviewers.


In the final analysis, the purpose of validating methods is to ensure the procurement of high quality data. After all, if the quality of data is questionable, no meaningful conclusions can be reached about the quality of the product - which will have seriously detrimental effects on stability study data reviews, process validation data reviews, and annual batch reviews, to name a few. Time invested in validating analytical methods in the beginning pays big dividends in the long run.


© 1996 BioSearch, Inc. d/b/a AccuReg

