
where n is the number of solutions used to prepare the standard curve and m is the number of measurements made on the test sample. Figure 2.7 shows that the confidence interval for a concentration determined from a linear, unweighted calibration curve is narrowest at the centroid (x̄, ȳ) and increases on either side.

In addition to allowing calculation of the confidence interval for a measured concentration, eq. 2.24 also allows the analyst to calculate the effect of increasing the number of repeat measurements made on the test sample, m (Fig. 2.8). Although Figs. 2.7 and 2.8 show that the standard error of a measurement reaches a minimum at the centroid (x̄, ȳ), pharmaceutical analysts are usually more interested in the relative error (i.e. the RSD) rather than in the absolute value of the variance. Figure 2.9 shows that the RSD of x_calc is relatively constant if the value of x_calc is equal to or greater than about 25% of x̄.

Figure 2.8

Effect of number of replicates (m, eq. 2.25) on the 95% confidence interval of a concentration of an analyte calculated from a linear, unweighted calibration curve. The data shown in Fig. 2.7 have been normalized in this figure by dividing the y values by ȳ and the x values by x̄ (both the x and y values are expressed as a percentage)


Figure 2.9 also highlights the degree of uncertainty that exists in measurements made by extrapolation to values of x_calc that are less than the lowest calibration concentration (x_min, y_min). Extrapolation to values that exceed the highest point on the calibration curve (x_max, y_max) is not recommended and should never be attempted if the linearity of the method has not been established at values greater than x_max. If the concentration of the test sample substantially exceeds the value of x_max then it should be diluted and reanalyzed.

Figure 2.9

Effect of number of replicates (m, eq. 2.25) on the RSD of a concentration of an analyte calculated from measured peak height (y_i) and a linear, unweighted calibration curve (x_calc). These plots are based on the data shown in Fig. 2.7; therefore, the shape of these plots is general but the absolute values are not.


In biological-fluid analysis, the analyst is often faced with very limited samples, raising the question of how much uncertainty is introduced by the analysis of a test sample on the basis of a single measurement. Clearly, eq. 2.24 and Fig. 2.8 show that the confidence interval for a measured concentration will be reduced if the number of measurements made on the test sample is increased. However, the 95% confidence intervals for x_calc at the centroid (x_calc/x̄ = 100%) for m = 1, 2, 5 or ∞ are 5.9, 4.4, 3.7 and 1.7%, respectively, and only modest gains in precision are realized if the number of analytical measurements on the test sample is increased from one to two or from two to five. Therefore, if a large number of test samples are to be measured, a substantial amount of work can be saved at only a small cost to the precision if single rather than multiple determinations are made. In contrast, very little time will be saved if the number of calibration solutions is reduced from the typical value of 12 shown in Fig. 2.7. Furthermore, using a small number of calibration solutions increases the chances of losing the highest or lowest calibration solutions, which define the linear range of the assay. On the other hand, increasing the number of calibration solutions beyond 12 will have a minimal effect on the confidence interval of the concentrations measured. Equation 2.24 indicates that there is no particular advantage in terms of precision if all the calibration solutions are prepared at different concentration levels, compared with the preparation of duplicate solutions at fewer concentrations. However, the preparation of duplicate solutions at least at the upper and lower concentration levels eliminates the possibility of losing one of these important calibration solutions to contamination or spillage (K. Selinger, Glaxo Wellcome, personal communication, 1994).
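The diminishing return from extra replicates can be sketched numerically. Equation 2.24 itself is not reproduced in this excerpt, so the function below assumes its standard form for the standard error of a concentration read from an unweighted linear calibration curve; all numerical inputs are illustrative, not the Fig. 2.7 data.

```python
import math

def s_xcalc(s_yx, b, m, n, y0, ybar, sxx):
    """Standard error of a concentration read from a linear, unweighted
    calibration curve (standard form assumed for eq. 2.24).

    s_yx : residual standard deviation of the regression
    b    : slope of the calibration curve
    m    : number of replicate measurements on the test sample
    n    : number of calibration solutions
    y0   : mean observed response for the test sample
    ybar : mean response of the calibration solutions
    sxx  : sum of squared deviations of the calibration x values from x-bar
    """
    return (s_yx / b) * math.sqrt(1.0/m + 1.0/n + (y0 - ybar)**2 / (b**2 * sxx))

# At the centroid (y0 == ybar) the last term vanishes; increasing m beyond
# a few replicates then gives only modest gains, as the text describes.
for m in (1, 2, 5):
    print(m, round(s_xcalc(s_yx=5.0, b=2.0, m=m, n=12,
                           y0=100.0, ybar=100.0, sxx=500.0), 3))
```

The 1/n term sets a floor on the attainable precision, which is why m = ∞ still leaves a non-zero confidence interval.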

b. Two-point calibration approaches

Having established the linearity of a method and the significance of the intercept, the analyst may choose to assay test samples by comparison with a two-point calibration curve2. This is the preferred method for the analysis of pharmaceutical samples such as drug substance, raw materials and finished formulations, when the target concentration and the allowable limits are well established. This is also the approach used in most pharmacopeial assays.

In this method of calibration the two standards consist of a blank (x_1 = 0) and a second solution containing the analyte or analytes of interest. Ordinarily the second standard is prepared so that it contains a concentration of the analyte (x_2 = C_std) that is as close as possible to the target concentration of the analyte in the test sample. Assays are usually conducted by alternating the measurements of standards and test samples to correct for any instrumental drift that

2 This approach is sometimes referred to as external standardization.

might result in a systematic change in response factor. The response for the standard (y_std) is then taken as the average of readings obtained for standards measured before and after each sample. This method of calibration is often referred to as "bracketing" of standards. If the instrument drift is small then several samples may be measured between the standards. If the intercept is statistically insignificant (i.e. the reading for the blank is close to or effectively zero), which will usually be the case in chromatographic methods of analysis (see Sec. 2.2.3 on specificity and selectivity), then the blank need only be measured once as part of the system suitability test.


Figure 2.10

Two-point calibration curve showing the typical range of 90-110% of target concentration

Figure 2.11

Potential errors associated with the inappropriate use of a two-point calibration curve. Upper graph: errors due to an intercept; lower graph: errors due to curvature of the calibration curve

The principle of the two-point calibration approach is demonstrated in Fig. 2.10, for which the equation for the calculation of the concentration of analyte in the test sample (x_calc = C_calc) is given by:

x_calc = C_calc = C_std × (y_obs − y_0)/(y_std − y_0) × D × P/100    (eq. 2.25)

where y_0 and y_std are the average responses for the blank and the standard bracketing the test sample, respectively; D is the dilution factor and P is the percent purity of the analytical standard. If the intercept is zero, eq. 2.25 reduces to:

x_calc = C_std × (y_obs/y_std) × D × P/100    (eq. 2.26)

This method of calibration requires that measurements be made by extrapolation if y_obs > y_std. However, this is not a cause for concern if prior validation experiments have demonstrated that the method is linear over the range of the assay. On the other hand, if the actual response function is not linear or a significant intercept is ignored, the potential errors (Δx) increase with increasing difference between the measured value of the response for the test sample (y_obs) and the standard (y_std) (Fig. 2.11).
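The two-point calculation can be sketched as a short function. The printed equations 2.25 and 2.26 did not survive extraction, so the form below is reconstructed from the definitions of y_0, y_std, D and P given in the text; all numbers are illustrative.

```python
def two_point_conc(y_obs, y_std, y_blank=0.0, c_std=100.0,
                   dilution=1.0, purity=100.0):
    """Concentration of a test sample by two-point (blank + standard)
    calibration, following the form described for eq. 2.25.

    y_obs   : response measured for the test sample
    y_std   : average response of the bracketing standards
    y_blank : average response of the blank (zero if the intercept is negligible)
    c_std   : concentration of the standard
    dilution: dilution factor D
    purity  : percent purity P of the analytical standard
    """
    return c_std * (y_obs - y_blank) / (y_std - y_blank) * dilution * purity / 100.0

# With a zero blank this reduces to the simple response ratio of eq. 2.26:
print(two_point_conc(y_obs=0.95, y_std=1.00, c_std=100.0))  # 95.0
```

Note that y_obs slightly above y_std is handled by the same expression; that is the extrapolation case discussed in the text.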

Debesis et al. have calculated the maximum method and system precision (RSD_max) allowable, using the two-point calibration approach for LC assays, as a function of the acceptable assay range (Table 2.1, Fig. 2.12). This was accomplished by assuming that the absolute difference between the true mean and the sample mean is no more than 50% of the specified acceptable range. Thus the method precision is given by:

RSD_max = A√n / z    (eq. 2.27)

where A is the half-width of the acceptance range (expressed as a percentage of the target value), n is the number of sample measurements and z is taken from tabulated values for the normal distribution, i.e. 1.96 or 2.58 for 95 or 99% confidence limits, respectively.

Table 2.1

Maximum allowable method RSDs for single or duplicate determinations*

| Acceptance Range | Single (95% CL) | Single (99% CL) | Duplicate (95% CL) | Duplicate (99% CL) |
|---|---|---|---|---|
| 98.5-101.5 | 0.77 | 0.58 | 1.12 | 0.82 |
| 97-103 | 1.53 | 1.16 | 2.23 | 1.64 |
| 95-105 | 2.55 | 1.94 | 3.72 | 2.74 |
| 90-110 | 5.10 | 3.88 | 7.44 | 5.48 |
| 85-115 | 7.65 | 5.81 | 11.1 | 8.22 |
| 75-125 | 12.8 | 9.69 | 18.6 | 13.7 |
| 50-150 | 25.5 | 19.4 | 37.2 | — |


Figure 2.12

Relationship between the maximum allowable method precision and acceptable assay limits for single and duplicate determinations. The four lines represent a) CI = 95%, n = 2; b) CI = 99%, n = 2; c) CI = 95%, n = 1; d) CI = 99%, n = 1

Equation 2.27 is particularly useful in deciding whether single or duplicate determinations are necessary. For example, if the acceptance range for an assay is 98.5-101.5% of labeled claim, then the values of RSD_max are 0.77% (n = 1) and 1.08% (n = 2) at the 95% confidence level. At the 99% level, the values are 0.58% (n = 1) and 0.82% (n = 2). Table 2.1 and Fig. 2.12 show the relationship between the maximum allowable method precision and the acceptable assay ranges for single and duplicate determinations.
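The worked example above can be reproduced directly. The printed form of eq. 2.27 did not survive extraction, so the expression below (half-width of the acceptance range scaled by √n/z) is inferred from the worked values quoted in the text.

```python
import math

def rsd_max(half_range_pct, n, z):
    """Maximum allowable method RSD (form inferred for eq. 2.27).

    half_range_pct : half-width of the acceptance range, % of target
                     (e.g. 1.5 for a 98.5-101.5% range)
    n              : number of sample determinations
    z              : normal deviate (1.96 for 95% CL, 2.58 for 99% CL)
    """
    return half_range_pct * math.sqrt(n) / z

# Reproduce the worked example for a 98.5-101.5% acceptance range:
print(round(rsd_max(1.5, 1, 1.96), 2))  # 0.77 (95% CL, single determination)
print(round(rsd_max(1.5, 1, 2.58), 2))  # 0.58 (99% CL, single determination)
print(round(rsd_max(1.5, 2, 2.58), 2))  # 0.82 (99% CL, duplicate determination)
```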

c. Area Normalization

In the absence of authenticated reference standards, or where the availability of such standards is very limited, the concentration of related substances in a mixture (C_calc,1) may be determined by peak-area normalization:

where P_a is the peak area of the individual components (1, 2, 3, …) in the sample and b is the response function for each component. If authenticated standards are not available, then the assumption must be made that the response function is the same for each component.

Area normalization may be applied to all forms of chromatographic analysis and is particularly suitable for techniques such as gas chromatography with a detector whose response function is independent of the analyte, i.e. b (eq. 2.28) is a constant. This approach is also appropriate for the analysis of data obtained by capillary electrophoresis; however, the peak areas must be corrected for elution time (eq. 2.29) because the velocity of material passing through the detector is inversely proportional to its elution time [11-

C_calc,1 = 100 × (P_a,1/b_1) / (P_a,1/b_1 + P_a,2/b_2 + P_a,3/b_3 + …)    (eq. 2.28)

C_calc,1 = 100 × (P_a,1/(t_1 b_1)) / (P_a,1/(t_1 b_1) + P_a,2/(t_2 b_2) + P_a,3/(t_3 b_3) + …)    (eq. 2.29)
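A minimal sketch of area normalization, with optional response factors and the elution-time correction described in the text for capillary electrophoresis (the data are illustrative):

```python
def area_normalization(peak_areas, response_factors=None, elution_times=None):
    """Percent composition by peak-area normalization.

    Each area is divided by its response factor b and, for capillary
    electrophoresis, additionally by its elution time t (the velocity of
    material passing the detector is inversely proportional to t).
    Omitting either argument assumes equal response / no time correction.
    """
    n = len(peak_areas)
    b = response_factors or [1.0] * n
    t = elution_times or [1.0] * n
    corrected = [p / (bi * ti) for p, bi, ti in zip(peak_areas, b, t)]
    total = sum(corrected)
    return [100.0 * c / total for c in corrected]

# Equal response factors, no time correction: simple percent of total area.
print(area_normalization([980.0, 15.0, 5.0]))  # [98.0, 1.5, 0.5]
```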

### 2.2.2.2 Non-linear response functions and weighted regression analysis

a. Chemical assays

The subject of non-linearity of calibration curves in chromatographic analysis has been treated extensively in the literature [14-24 and Chapter 10], and a number of alternative mathematical models have been proposed, including:

which is equivalent to:

as well as various quadratic and complex polynomial versions of eqs. 2.31 and 2.32 such as:

Despite the availability of several possible mathematical models for assay calibration and the ready access to user-friendly computer programs, Burrows and Watson  have stated in a recent article on calibration that "...to obtain optimum results the regression should not only match the actual shape of the response curve but also include appropriate weighting factors to compensate for the error distribution if it is non-homoscedastic. Non-linear calibration routines are often included as part of the capabilities of (commercial) chromatographic data systems but may only be suitable for assays covering a small dynamic range and exhibiting a marked degree of curvature. As these regressions are often only available in their unweighted forms they usually impart no improvement in assays covering wide dynamic ranges and showing only small deviations from linearity" . In addition to the lack of weighted forms of complex non-linear calibration curves, the solutions to the equations for the calculation of concentrations are complex or non-existent. For example, the solution for xi to the quadratic calibration curve, eq. 2.32, has two roots:

Similarly, Burrows and Watson have proposed the complex nonlinear eq. 2.35 for calibration of methods using gas chromatography with electron-capture detection (ECD), in which the sensitivity of the detector increases with increasing concentration:

for which no exact solution in x_i exists and concentrations must be read from the calibration graph itself or by calculation using the Newton-Raphson iteration procedure.
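The Newton-Raphson inversion mentioned above can be sketched generically. Since eq. 2.35 is not reproduced in this excerpt, the example inverts a hypothetical quadratic calibration curve; any differentiable response function can be substituted for `f`.

```python
def invert_calibration(y_obs, f, dfdx, x0, tol=1e-9, max_iter=100):
    """Newton-Raphson inversion of a non-linear calibration curve:
    find x such that f(x) == y_obs, starting from the guess x0.

    f    : fitted response function y = f(x)
    dfdx : its first derivative
    """
    x = x0
    for _ in range(max_iter):
        step = (f(x) - y_obs) / dfdx(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton-Raphson did not converge")

# Illustration with a hypothetical quadratic calibration y = a + b*x + c*x**2:
a, b, c = 2.0, 50.0, 0.4
x = invert_calibration(y_obs=1042.0,
                       f=lambda v: a + b*v + c*v**2,
                       dfdx=lambda v: b + 2*c*v,
                       x0=10.0)
print(round(x, 3))
```

For a well-behaved calibration function a starting guess anywhere on the calibrated range converges in a handful of iterations.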

Calibration of assays over large concentration ranges is often necessary for the analysis of drugs and related substances in biological samples (see also Chapter 10). In this case, the variance may not be homoscedastic (homogeneous) over the entire concentration range and weighted regression analysis of the calibration data is appropriate [18, 25], in which additional weight is given to the lower concentration values. One of the simplest methods of weighting calibration data is according to the reciprocal of the concentration, in which case eq. 2.12 becomes:

y_i/x_i = b + a(1/x_i)    (eq. 2.36)

The values of the coefficients, a and b, are then obtained by regression analysis of y_i/x_i on 1/x_i. Weighted regression analysis of the data shown in Fig. 2.7 gives values of 144.9 and 502.0 for a and b, respectively, which compares with values of 156.0 and 501.1 obtained by unweighted analysis of the data. Clearly, these changes in the slope and intercept coefficients have a greater effect on the concentrations calculated (x_calc) at the lower end of the curve. However, the main value of weighted regression analysis is that it narrows the confidence interval for the calculated values of concentration (x_calc) at the lower end of the curve (Fig. 2.13). The most appropriate weighting factor is generally determined by computer analysis of the calibration data (e.g. by SAS® or RS-1), which is also used for the estimation of the standard deviations and the confidence intervals of x_calc. Figure 2.14 shows the reduction in the RSD for low values of x_calc obtained by weighting the calibration data. If weighted regression analysis is used, then the concentrations of the calibration solutions should be unevenly spaced (e.g. if the data are weighted by 1/x_i, then the concentrations should be spaced so that the intervals between the values of 1/x_i are approximately equal, i.e. 1, 2, 5, 10, 20, 50 etc.).

Figure 2.13

Comparison of the 95% confidence interval for calculated values of concentration (x_calc) for weighted and unweighted regression analysis. The data in Fig. 2.7 have been normalized by dividing the y values by ȳ and the x values by x̄

Figure 2.14

Comparison of the RSDs of a concentration of an analyte calculated from measured peak height (y_i) and a linear calibration curve (x_calc) for weighted and unweighted regression analysis
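Weighted regression of the kind described above can be sketched as ordinary weighted least squares with w_i = 1/x_i. The data below are hypothetical, with the highest calibration point deliberately biased upward to show how 1/x weighting reduces the influence of the top of the curve.

```python
def linear_fit(x, y, weights=None):
    """Weighted least-squares fit of y = a + b*x.

    Passing weights of 1/x_i gives extra influence to the low-concentration
    calibration points, as described in the text; omitting weights gives
    the ordinary unweighted regression.
    """
    w = weights or [1.0] * len(x)
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    sxy = sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
    sxx = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    b = sxy / sxx
    a = ybar - b * xbar
    return a, b

x = [1, 2, 5, 10, 20, 50]                     # unevenly spaced, as recommended
y = [510, 1010, 2510, 5010, 10010, 26010]     # hypothetical; top point biased high
a_u, b_u = linear_fit(x, y)                           # unweighted
a_w, b_w = linear_fit(x, y, [1.0 / xi for xi in x])   # 1/x weighted
print(round(b_u, 1), round(b_w, 1))  # weighted slope stays closer to 500
```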

b. Receptor binding assays

Receptor-binding assays, such as radio-immunoassays, are based on the competition between an unlabeled analyte and a labeled ligand for a specific receptor, which may be described by eqs. 2.37 and 2.38:

where L is the free ligand, B is the bound ligand, R is the receptor, the subscript, s, refers to specifically bound ligand and the asterisk (*) refers to labeled material.

The concentration of bound labeled ligand, [B*], is given by:

where Kn* is the association constant for the non-specific binding of labeled ligand (Bn*), and R0 is the total receptor concentration. For the purposes of calibration, eq. 2.39 may be simplified to give eq. 2.40, which relates the concentration of bound labeled ligand to the concentration of unlabeled analyte added (L):

where IC50 is the concentration of analyte that causes a 50% decrease in the concentration of bound labeled ligand and [B*]_0 is the concentration of bound labeled ligand in the absence of unlabeled ligand. In the absence of non-specific binding, the bound (f_b*) and free (f_f*) fractions of labeled ligand are given by eqs. 2.41 and 2.42, respectively:


Calibration data are usually plotted in the form of the fraction of bound labeled ligand versus log [L] (Fig. 2.15).

Figure 2.15

Calibration curve for a receptor binding assay. Adapted from  and reproduced by permission from Elsevier Science Ltd.
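The shape of such a binding curve can be sketched numerically. The printed form of eq. 2.40 is not reproduced in this excerpt, so the simple competition form [B*]/[B*]_0 = IC50/(IC50 + [L]) is assumed here; the IC50 value is illustrative.

```python
def fraction_bound(conc, ic50):
    """Fraction of labeled ligand remaining bound as a function of the
    unlabeled analyte concentration [L] (assumed simplified competition
    form of eq. 2.40, neglecting non-specific binding)."""
    return ic50 / (ic50 + conc)

# At [L] = IC50 exactly half of the labeled ligand remains bound; plotted
# against log[L], these points trace the sigmoid curve of Fig. 2.15.
for L in (0.1, 1.0, 10.0):
    print(L, round(fraction_bound(L, ic50=1.0), 3))
```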

### 2.2.2.3 Internal standards

Internal standards are compounds added, in equal concentrations, to all standards and test samples. The original application of internal standards can be traced to gas chromatography, where they were used primarily to correct errors arising from manual injection. The practice of including internal standards was later extended to liquid chromatography, mass spectrometry, capillary electrophoresis and other related techniques. When internal standards are included, the analytical response is defined as the peak height (or area) ratio, P_r,i:

P_r,i = y_i / y_is    (eq. 2.43)

where y_i and y_is are the responses obtained for the analyte and internal standard, respectively. Ideally, the internal standard should have physico-chemical and analytical properties that are similar to those of the analyte. Internal standards can be used in methods employing calibration curves or two-point calibration approaches. For example, if a two-point calibration approach is used with an internal standard, eq. 2.26 becomes:

x_calc = C_std × (P_r,obs/P_r,std) × D × P/100    (eq. 2.44)

Because the internal standard is added at the same concentration to all the standards and test samples, its actual concentration is not needed for the calculation of unknowns. In principle, the analytical response of the internal standard should be linearly related to concentration. However, any errors arising from non-linearity are likely to be small unless the recovery of the analytes and the internal standard varies widely between samples.

Whereas the original intention was to correct for variation in injection volumes in gas chromatography, the present-day automatic injectors used in gas chromatography and liquid chromatography make the use of internal standards unnecessary for many applications. Moreover, the additional steps involved in the volumetric addition of an internal standard may actually reduce the precision of the method (see Sec. 2.2.1). However, the use of internal standards in capillary electrophoresis is strongly recommended because the reproducibility of automated injectors for capillary electrophoresis does not yet approach that achievable in gas chromatography or liquid chromatography.

Applications where internal standards are essential include assays involving sample preparation where the extraction efficiencies are low, or those involving chemical derivatizations with low reaction yields. On the other hand, if the sample preparation step is straightforward and the extraction efficiency is close to 100% (e.g. simple protein precipitation for the pre-treatment of plasma samples), then an internal standard is advisable but, in certain circumstances, may be unnecessary. The appropriateness of adding an internal standard should be determined as part of the method validation.

### 2.2.3 Selectivity and specificity3

Selectivity describes the ability of an analytical method to differentiate various substances in the sample and is applicable to methods in which two or more components are separated and quantitated in a complex matrix. Thus the term selectivity is appropriately applied to chromatographic techniques, in which the components of a mixture are physically separated from each other. Selectivity may also be used to describe spectroscopic or spectrophotometric methods in which separate signals are obtained for the different components in a mixture. In contrast to selectivity, specificity describes the ability of the method to measure unequivocally the analyte of interest in the presence of all other components that may be expected to be present. Thus, the term specificity is appropriately applied to analytical techniques in which only a single parameter can be measured: examples include the measurement of radioactivity in a radioimmunoassay or the volume of titrant in a titration. The selectivity or the specificity of a method is compromised by the presence of potential interferences, including related compounds (degradants, metabolites, impurities etc.) and components of the matrix (formulation excipients, endogenous substances etc.).

3 The descriptions given here represent the International Union of Pure and Applied Chemistry (IUPAC) definitions of specificity and selectivity described in the official journal of the Union (Pure Appl. Chem., 35, 553-556 (1983)) as follows: "It is therefore proposed to use the adjectives selective and specific and the substantives selectivity and specificity, when not followed by other substantives, as means to express, qualitatively, the extent to which other substances interfere with the determination of a substance according to a given procedure. In this connection specificity is considered to be the ultimate of selectivity; it means that no interferences are supposed to occur". Notwithstanding the IUPAC definitions of specificity and selectivity, these terms are often used interchangeably. For example, the USP definition of specificity is "...its ability to measure accurately and specifically the analyte in the presence of components that might be expected to be present" and the recent ICH Guidelines define specificity as "...the ability to assess unequivocally the analyte in the presence of components which may be expected to be present" - definitions more akin to the classical definition of selectivity.

### 2.2.3.1 Chromatographic methods

The selectivity of a chromatographic method may be defined by the use of relative retention indices, of which the most widely used is the selectivity factor, α:

α = k'_2 / k'_1    (eq. 2.45)

where the capacity ratio of a given peak, k'_i, is related to its retention time, t_i, and the time for an unretained compound to elute, t_0, by:

k'_i = (t_i − t_0) / t_0    (eq. 2.46)

Substitution of eq. 2.46 into eq. 2.45 gives:

α = (t_2 − t_0) / (t_1 − t_0)    (eq. 2.47)

Chromatographic resolution is also affected by column efficiency, which is usually defined by the number of theoretical plates (N):

N = (t_i / σ)²    (eq. 2.48)

Since the width of a Gaussian peak at its base (w_i) corresponds to approximately 4σ (Sec. 2.1.3, Fig. 2.3), the number of theoretical plates may be calculated from:

N = 16 (t_i / w_i)²    (eq. 2.49)

Measurement of w_i according to eq. 2.49 requires extrapolation to the baseline of tangents drawn at the inflection points on the trailing and leading edges of the peak (Fig. 2.16), which is subject to error. This has led to an alternative form of eq. 2.49, which relies upon measurement of the peak width at half height, w_0.5 (eq. 2.50):

N = 5.54 (t_i / w_0.5)²    (eq. 2.50)


Figure 2.16

Parameters for the calculation of chromatographic resolution

For Gaussian peaks, eqs. 2.49 and 2.50 are equivalent. However, for peaks that are significantly asymmetric, eq. 2.50 tends to overestimate the true column efficiency, because peak tailing and fronting affect the base of the peak to a greater extent than the center. Equations 2.49 and 2.50 are the definitions of column efficiency preferred by the USP. The BP and the EP [30, 31] define column efficiency in terms of the number of theoretical plates per unit column length (plates/meter), n:

n = N / L = (5.54 / L) (t_i / w_0.5)²    (eq. 2.51)

The resolution of two components in a mixture (Figure 2.16) is defined in the USP and in the BP and EP [30, 31] by eqs. 2.52 and 2.53, respectively:

R_s = 2 (t_2 − t_1) / (w_1 + w_2)    (eq. 2.52)

R_s = 1.18 (t_2 − t_1) / (w_0.5,1 + w_0.5,2)    (eq. 2.53)

The differences in these two equations arise from the different ways in which the widths of the peaks are measured, the USP preferring the width at the base and the BP and the EP preferring the width at half-height. As with the measurement of column efficiency, the two methods for calculating resolution are equivalent if the peaks are Gaussian, but the USP approach (eq. 2.52) is less susceptible to errors arising from peak asymmetry than are the BP or EP approaches (eq. 2.53). Errors arising from the manual measurement of column efficiency and resolution may be reduced by the automatic calculation of these parameters using commercial chromatographic software; however, the analyst should be familiar with the algorithms used because different software packages can give different results. This is a particular problem when transferring chromatographic methods between laboratories.
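The pharmacopeial definitions can be collected into small helper functions; the retention times and peak widths used below are illustrative, not taken from the figures.

```python
def plates_usp(t_r, w_base):
    """Column efficiency from the peak width at base (eq. 2.49, USP)."""
    return 16.0 * (t_r / w_base) ** 2

def plates_half_height(t_r, w_half):
    """Column efficiency from the peak width at half height (eq. 2.50)."""
    return 5.54 * (t_r / w_half) ** 2

def resolution_usp(t1, t2, w1, w2):
    """Resolution from widths at base (eq. 2.52, USP form)."""
    return 2.0 * (t2 - t1) / (w1 + w2)

def resolution_half_height(t1, t2, wh1, wh2):
    """Resolution from widths at half height (eq. 2.53, BP/EP form)."""
    return 1.18 * (t2 - t1) / (wh1 + wh2)

# For a Gaussian peak, w_base = 4*sigma and w_half ~ 2.355*sigma, so the
# two efficiency estimates agree to within the rounding of the 5.54 constant:
sigma = 2.0
print(round(plates_usp(210.0, 4 * sigma)),
      round(plates_half_height(210.0, 2.355 * sigma)))
```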

Figure 2.17 shows the effect of resolution on the detectability of a secondary component eluting close to a major component. If the resolution is equal to or greater than 2, then the trace component is detectable at all concentrations. However, if the separation deteriorates only slightly (Rs<2) then the detectability of the secondary component becomes a function of its concentration. As the resolution approaches a value of one, impurities present at 0.5% or less are indistinguishable from the peak tail. Clearly, the problems associated with the detection of trace impurities shown in Fig. 2.17 are accentuated by any tailing of the major component. Consequently, the preferred elution order in trace analysis in LC is one in which the minor component elutes before the major component.

The use of the peak separation index, S, (Fig. 2.18 and eq. 2.54) represents a useful alternative to resolution for the description of chromatographic or electrophoretic separations. This empirical approach is particularly useful for situations where one or both of the peaks are asymmetric and has been widely used for the optimization of chiral separations and trace analysis. The peak separation index approach is based on the relative depth of the valley between two adjacent peaks:

Two approaches have been described for the calculation of the parameters, A and B (Fig. 2.18). The first approach, which is preferred for separation of two components present in similar proportions, involves drawing a line perpendicular to the x-axis through the valley minimum to intersect a tangent drawn between the apices of the two peaks. For trace analysis, drawing a tangent between the apices of the two peaks is difficult, so a line is drawn parallel to the baseline through the apex of the minor peak to intersect a line drawn through the valley perpendicular to the x-axis (Fig. 2.18).

Figure 2.17

Effect of impurity levels (0.05, 0.10, 0.20, 0.50 and 1.0%) on chromatographic resolution

Figure 2.18

Peak separation indices. Approach 1 is suitable for separations of two components present in similar proportions and approach 2 is more suitable for trace components

Although these approaches are useful for assessing the quality of a chromatographic separation, excessive tailing of peaks is indicative of poor chromatography, which can lead to unreliable integration of peak areas and poor precision. The USP, BP and EP define the tailing factor, T, as:

T = w_0.05 / 2f    (eq. 2.55)

where w_0.05 is the width of the peak at 5% of the peak height and f is measured according to Fig. 2.19. The pharmacopeial definition of the tailing factor varies slightly from the classical definition of the peak asymmetry factor, A_s, which is generally measured at 10% of the peak height and is given by:

A_s = b / a    (eq. 2.56)

where a and b are the front and back half-widths of the peak measured at 10% of the peak height.

Figure 2.19

Parameters for the calculation of peak asymmetry

Both the peak tailing factor, T, and the asymmetry factor, A_s, have values of one for perfectly symmetrical peaks, values greater than one for tailed peaks and values less than one for fronted peaks. It should be noted that Foley and Dorsey have recommended the use of eq. 2.57 for the precise calculation of column efficiency, which includes a correction for peak asymmetry:

N = 41.7 (t_i / w_0.1)² / (b/a + 1.25)    (eq. 2.57)

where w_0.1 and b/a are the peak width and asymmetry factor measured at 10% of the peak height.
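These asymmetry-related quantities can be sketched together (the peak measurements below are illustrative):

```python
def tailing_factor(w_005, f):
    """USP/BP/EP tailing factor (eq. 2.55): width at 5% of the peak height
    divided by twice the front half-width f at the same height."""
    return w_005 / (2.0 * f)

def asymmetry_factor(a, b):
    """Classical peak asymmetry factor (eq. 2.56): back half-width over
    front half-width, measured at 10% of the peak height."""
    return b / a

def plates_foley_dorsey(t_r, w_01, b_over_a):
    """Foley-Dorsey efficiency (eq. 2.57): plate count corrected for peak
    asymmetry using the width (w_01) and asymmetry ratio (b/a) measured
    at 10% of the peak height."""
    return 41.7 * (t_r / w_01) ** 2 / (b_over_a + 1.25)

# A symmetrical peak gives T = As = 1; tailing raises both above one and
# lowers the asymmetry-corrected plate count.
print(tailing_factor(w_005=6.0, f=3.0))  # 1.0 for a symmetrical peak
print(round(plates_foley_dorsey(210.0, 8.58, 1.0)))
```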

The selectivity of chromatographic methods and other techniques such as mass spectrometry is normally assessed by spiking experiments in which the analyte and all known or suspected potential interferences are added to the sample matrix (a placebo) at appropriate concentrations. The spiked placebo is then compared with the representative blanks. The elution times of all significant compounds are then recorded and the resolution of the critical pair or pairs of peaks determined. The critical pair or pairs are then used to set the limits for the resolution checks to be used as part of the system suitability tests.

The main problem in the determination of assay selectivity and specificity arises from unknown interferences that are present in the test samples but not in the placebo. Thus the determination of peak homogeneity plays a key role in the assessment of chromatographic selectivity. Whereas the mass spectrometer provides a very high degree of selectivity in gas chromatography, the diode-array detector (DAD) and the use of spectral deconvolution techniques are the principal tools for the determination of peak purity in liquid chromatography. Unfortunately, spectral deconvolution requires that the UV/visible spectra of the co-eluting peaks be different, and commercially available systems are generally incapable of detecting a co-eluting interference that is present at concentrations much less than 1%. With the increasing availability of reliable interfaces, online mass spectrometric detection is expected to play an increasingly important role in the determination of peak purity in liquid chromatography.

Figure 2.20

Binding curves for the radioimmunoassay of a thyrotropin-releasing hormone analog (TA-0910) in buffer and in plasma. Reproduced from  with permission from Elsevier Science

The specificity of immunoassays and other receptor-binding assays is determined in the same way as the selectivity of chromatographic methods: by spiking experiments and by comparison with the appropriate blanks. For example, Fig. 2.20 shows the receptor-binding curve for the assay of the thyrotropin-releasing hormone analog, TA-0910, in buffer and rat plasma, showing the absence of interference from the matrix. The problem of matrix interference in receptor-binding assays is of particular concern for the analysis of drugs in plasma because it involves plasma-protein binding of both the labeled and the unlabeled ligand (Fig. 2.21). Interference from related substances (metabolites, degradants etc.) is generally assessed by comparison of the IC50 values of potential interferences with the IC50 value for the analyte of interest.