Guest Column | August 1, 2025

The Building Blocks Of A Robust Analytical Assay

By Rishi Ramesh Kothari


An analytical assay is a scientific method or test designed to determine the presence, concentration, or activity of a specific substance, referred to as an analyte, within a sample. These assays play a crucial role across various domains, including medicine, pharmaceuticals, environmental science, and biotechnology, to ensure the quality, quantity, safety, and effectiveness of substances.

Common techniques employed in analytical assays include chromatographic methods such as high-performance liquid chromatography (HPLC) and gas chromatography (GC); spectroscopic techniques like UV-visible spectroscopy, mass spectrometry, and infrared (IR) spectroscopy; immunological methods such as enzyme-linked immunosorbent assay (ELISA) and radioimmunoassay (RIA); and molecular biology techniques like polymerase chain reaction (PCR) and next-generation sequencing (NGS).

The importance of analytical assays arises for multiple reasons:

  • Quality control: Ensure the consistency and quality of products.
  • Regulatory compliance: Meet the requirements of regulatory bodies like the FDA or EMA.
  • Research and development: Support the discovery and development of new drugs, chemicals, and technologies.

The development of an analytical method is closely tied to the drug manufacturing process, as it is essential for guaranteeing the quality, safety, and effectiveness of the final product. The number of analytical assays needed varies based on the type of drug being developed. For instance, small molecules typically require around 10 assays, monoclonal antibodies may need approximately 25, cell therapies often involve about 30 assays, and adeno-associated viruses generally demand around 35 different assays. However, the precise number of assays can differ depending on regulatory requirements, the stage of development, and the unique characteristics of the product being tested.

The time required to develop an analytical assay can vary significantly depending on several factors, including the complexity of the assay, the type of analyte being measured, and the intended application. The characterization and validation of an assay can take anywhere from 12 to 18 months.

Strategy To Develop A Robust Analytical Assay

Define the objective

  • Clearly establish the objective of the assay, such as quantifying, identifying, or detecting specific analytes.
  • Understand the regulatory requirements for the assay's intended use (e.g., clinical diagnostics, quality control, research).

Strategize and execute method development

  • Review and explore existing literature, as there are numerous publications available for each analytical method. This can provide valuable insights for assay development and troubleshooting.
  • After finalizing the strategy, conduct a high-level risk assessment to pinpoint factors that may impact the assay outcomes. Utilize tools like risk matrices, strengths, weaknesses, opportunities, threats (SWOT) analysis, or bowtie analysis for this purpose.
  • Use data generated from risk assessments to design your experiments.

The parameters to be optimized largely depend on the analytical technique selected. The commonly tested parameters are:

  • Sample preparation: Optimize sample extraction, purification, and handling to minimize interference.
  • Assay parameters: Test and optimize parameters such as pH, temperature, buffer conditions, and reagent concentrations.
  • Instrument settings: Determine optimal instrument settings (e.g., wavelength, flow rate, injection volume).

Conduct pre-validation studies

Pre-validation studies for an analytical method involve preliminary experiments carried out prior to the formal validation process. The goal of these studies is to fine-tune and define the method’s parameters, ensuring it is robust, reliable, and prepared for comprehensive validation. This step is essential in analytical method development, as it allows for the early detection and resolution of potential challenges, ultimately conserving time and resources during the validation stage.

Objectives of Pre-validation Studies

  • Optimization: Fine-tune method parameters (e.g., pH, buffer composition, detection wavelength, plate type, column type).
  • Feasibility assessment and robustness testing: Ensure the method is suitable for its intended purpose.
  • Preliminary performance evaluation: Assess key attributes such as accuracy, precision, linearity, and specificity.
  • Troubleshooting: Identify and address potential challenges before full-scale validation.

Perform robustness testing

By intentionally altering assay parameters, such as temperature, pH, or reagent lots, the reliability of the assay is evaluated under varying conditions. These studies, recognized as critical by ICH Q2(R1), are conducted to assess the method's robustness and performance when subjected to non-ideal or deviated conditions.
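A robustness study of this kind can be organized as a grid of deliberately deviated conditions. The sketch below is a minimal illustration in Python; the factor names and the deviation levels are hypothetical examples, not values from the article, and in practice a fractional or Plackett-Burman design is often used instead of a full grid to limit the number of runs.

```python
from itertools import product

# Hypothetical robustness factors and deliberate deviations around nominal
# settings (illustrative only -- choose levels from your own risk assessment).
factors = {
    "temperature_C": [23, 25, 27],      # nominal 25 C, varied by +/- 2 C
    "pH": [6.8, 7.0, 7.2],              # nominal 7.0, varied by +/- 0.2
    "reagent_lot": ["lot_A", "lot_B"],  # two independent reagent lots
}

def robustness_conditions(factors):
    """Enumerate every combination of deviated conditions to run."""
    names = list(factors)
    for values in product(*(factors[n] for n in names)):
        yield dict(zip(names, values))

conditions = list(robustness_conditions(factors))
print(len(conditions))  # 3 x 3 x 2 = 18 experimental runs
```

Each resulting dictionary describes one experimental run; the assay result under each condition is then compared against the result at nominal settings.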

Validation studies

Validation is a formal process to confirm the assay's reliability and reproducibility. ICH Q2(R1) provides detailed guidance for validating analytical methods, including definitions, experimental approaches, and acceptance criteria. Key parameters include:
  • Specificity: Specificity refers to the ability of an analytical method to measure the analyte of interest accurately and specifically in the presence of other components, such as impurities, degradants, matrix components, or other potential interferences.

How to test specificity? For a given analytical assay, identify any sample components that could potentially interfere with the measurement. Then, intentionally add (spike) each such component into a blank matrix or buffer and observe whether it produces a false positive signal. Additionally, calculate the recovery of the analyte in the presence of these components; a recovery within 80%-120% is typically considered acceptable.

  • Linearity: Linearity refers to the ability of an analytical method to produce results that are directly proportional to the concentration of the analyte within a specified range. It ensures that the method provides accurate measurements across different levels of analyte concentration.

How to test linearity? Prepare a series of solutions with known concentrations of the analyte, covering the expected range of concentrations. Plot the analyte concentration (x-axis) against the measured response (y-axis) and ensure the calibration curve is fitted to an appropriate fit model (depending on assay type). An R² ≥ 0.95 is a typical acceptance criterion.
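The linearity check above can be sketched in a few lines of Python. The concentration and response values below are made-up demonstration data, and a simple straight-line fit is assumed as the fit model.

```python
import numpy as np

# Illustrative calibration data: known concentrations vs. measured response
# (values invented for demonstration; use your own standards in practice).
conc = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
resp = np.array([2.1, 4.0, 10.2, 19.8, 40.5, 99.0])

# Fit a straight line (response = slope * concentration + intercept)
slope, intercept = np.polyfit(conc, resp, 1)
pred = slope * conc + intercept

# Coefficient of determination, R^2
ss_res = np.sum((resp - pred) ** 2)
ss_tot = np.sum((resp - resp.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"R^2 = {r_squared:.4f}, meets R^2 >= 0.95: {r_squared >= 0.95}")
```

The same fitted slope and residuals also feed directly into the LOD/LOQ calculations described later in the section.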

  • Limit of Detection and Limit of Quantification: The limit of detection (LOD) is the lowest concentration of an analyte that can be detected, but not necessarily quantified, under the method's defined conditions. The limit of quantification (LOQ) is the lowest concentration of an analyte that can be quantitatively measured with acceptable precision and accuracy.

How to determine LOD and LOQ? From the calibration curve, using an appropriate fit model:

LOD = (3.3 * RMSE of the fit) / Slope of the fit

LOQ = (10 * RMSE of the fit) / Slope of the fit

The lower the LOD and LOQ, the more sensitive the analytical method.
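The two formulas above translate directly into code. The sketch below uses made-up calibration data and a straight-line fit model, computing the RMSE of the residuals and dividing by the slope as the formulas prescribe.

```python
import numpy as np

# Illustrative calibration data (hypothetical values, for demonstration only)
conc = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
resp = np.array([2.1, 4.0, 10.2, 19.8, 40.5, 99.0])

# Straight-line fit and its residuals
slope, intercept = np.polyfit(conc, resp, 1)
residuals = resp - (slope * conc + intercept)
rmse = np.sqrt(np.mean(residuals ** 2))  # RMSE of the fit

# LOD = (3.3 * RMSE) / slope; LOQ = (10 * RMSE) / slope
lod = 3.3 * rmse / slope
loq = 10.0 * rmse / slope

print(f"LOD = {lod:.3f}, LOQ = {loq:.3f} (same units as concentration)")
```

By construction, the LOQ is always about three times the LOD (10 / 3.3), so a method that can detect an analyte at a given level can typically only quantify it at roughly three times that level.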

  • Accuracy: Accuracy refers to the closeness of the measured value obtained by the analytical method to the true value or the accepted reference value. It is essential for ensuring the reliability of the method in quantifying the analyte.

How to test accuracy? Prepare samples with known concentrations of the analyte (spiked samples) or use reference standards. Compare the measured value to the known concentration and calculate the percentage recovery using the formula:

Accuracy (% Recovery) = (Measured Value/True Value) X 100
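The recovery formula is a one-liner; the sketch below wraps it in a small helper and applies the 80%-120% acceptance range mentioned under specificity. The spiked and measured values are hypothetical.

```python
def percent_recovery(measured, true_value):
    """Accuracy (% recovery) = (measured value / true value) * 100."""
    return measured / true_value * 100.0

# Hypothetical example: sample spiked at 10.0 units, 9.6 units measured back
rec = percent_recovery(9.6, 10.0)
verdict = "pass" if 80.0 <= rec <= 120.0 else "fail"
print(f"Recovery = {rec:.1f}% -> {verdict}")
```

In practice, recovery is evaluated at several concentration levels (e.g., low, mid, and high within the validated range), each in replicate.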

  • Precision: Precision refers to the closeness of agreement between a series of measurements obtained from multiple samplings of the same homogeneous sample under prescribed conditions. It evaluates the reproducibility and consistency of the analytical method.

Type of Precision:

  1. Repeatability:
    • Assesses precision under the same operating conditions over a short time (intra-day precision).
    • Example: Same analyst, same instrument, same laboratory.
  2. Intermediate Precision:
    • Evaluates variability within the same laboratory but under different conditions.
    • Example: Different analysts, instruments, or days.
  3. Reproducibility:
    • Assesses precision across different laboratories (inter-laboratory studies).
    • Often required for collaborative studies or regulatory submissions.

How to test precision? Use the same homogeneous sample to prepare multiple aliquots. Measure the analyte concentration in each sample under the same conditions in replicates of three at a minimum. Use the measured values to calculate the standard deviation and relative standard deviation (RSD) or coefficient of variation (CV).

RSD or CV (%) = (Standard Deviation/Mean) X 100
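The RSD calculation can be sketched as follows, using Python's standard library. The replicate values are invented for illustration; note that the sample (n-1) standard deviation is assumed here.

```python
import statistics

# Replicate measurements of one homogeneous sample (illustrative values)
replicates = [10.1, 9.9, 10.0, 10.2, 9.8]

mean = statistics.mean(replicates)
sd = statistics.stdev(replicates)  # sample standard deviation (n - 1)
rsd = sd / mean * 100.0            # RSD (%) = (SD / mean) * 100

print(f"Mean = {mean:.2f}, SD = {sd:.3f}, RSD = {rsd:.2f}%")
```

Typical acceptance criteria for RSD depend on the assay type; chromatographic assays often target a few percent, while bioassays may tolerate wider limits.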

About The Author:

Rishi Ramesh Kothari is a process development engineer on Regeneron’s Pre-Clinical Manufacturing and Process Development team, focusing on the development of analytical assays and achieving process development goals for adeno-associated viruses going into the clinic. He received his master’s degree in biotechnology from Drexel University College of Medicine.