By Anna Rose Welch, Chief Editor, Biosimilar Development
If I had to pick a question manufacturers and members of the global biosimilar industry ask regularly (apart from what in the heavens is going on in the U.S.?), it’s definitely, “How much data is actually too much data?” Over the past 13 years, regulators have reviewed large quantities of data from biosimilar candidates that have since been approved and are now backed by a wealth of positive real-world evidence (think the infliximabs, filgrastims, and perhaps even the adalimumabs). All of this positive biosimilar progress has raised questions about the future of the regulatory pathway for biosimilars and which recommendations may no longer be necessary in the long run.
Are we close to seeing the biosimilar regulatory pathway become more efficient? The jury is still out on that one, and depending on who you talk to, the answer is not exactly black-and-white. But in the U.S., there has been some positive buzz around the FDA’s latest Comparative Analytical Assessment and Other Quality-Related Considerations guidance. In a recent article, Fouad Atouf, the VP of global biologics, science and standards, for U.S. Pharmacopeia (USP), expressed his hope that this guidance is the FDA’s stepping-stone toward eliminating certain preclinical and clinical requirements in the future. To get to that point, however, the FDA is requesting quite a bit of data, which may be off-putting to manufacturers in light of current clinical requirements, as well as the decade-plus of positive experience we’ve had with biosimilars.
But from conversations I’ve had in the past, I think an equally important question for manufacturers to ask is, what data may still be missing, and why? I found myself asking this particular question after attending a recent event and hearing that EU regulators, in particular, have found that the quality data presented in the European Medicines Agency’s pilot program for tailored development was too immature to justify reduced clinical data requirements. As I’m not a regulator, a regulatory professional, or a fly on the FDA’s walls, I can’t know what data is or is not being presented during regulatory reviews that sends scientists back to their labs. But based on anecdotes such as these, it seems there can be a disconnect between what agencies ask for and what companies end up providing. My question, inevitably, is why does this disconnect exist?
During our conversation about the FDA’s most recent guidance, I picked Atouf’s brain about the challenges of establishing an analytical development program, as well as why certain types of data may be more difficult to come by than others. These challenges further complicate the question of which types of data are still necessary, and why.
What Is Enough Data — And How Do We Get There?
Like many other industries, the pharmaceutical industry (rightfully) likes to take a risk-based approach. Establishing an analytical program may not be an easy project, but manufacturers are not typically in the dark about which tools will provide them with the data regulators want to see. In fact, a manufacturer may already have the specific tools and technology available in its labs. But, as Atouf acknowledged, there are a number of good reasons for manufacturers, especially in this industry, to take a more risk-based approach and not provide data that may not be necessary. So, one explanation for immature data could be as simple as a manufacturer choosing not to provide data for each of the nine factors the FDA outlined in the first half of its comparative analytics guidance. As Atouf explained, once a company has characterized the reference product and demonstrated that its molecule has the appropriate amino acid sequence or glycosylation pattern, it may present a rationale that it shouldn’t need data on a certain factor outlined in the guidance (e.g., target-binding, functional activity, etc.).
Another possible explanation is that an insufficient number of lots, whether of the reference product or the biosimilar, is presented. With too few lots of either product, the company and the regulator have limited insight into each product’s variability and, in turn, the degree of similarity between the reference product and its biosimilar.
These are just a few possible explanations for why we have yet to see a successful tailored approach emerge from the EMA pilot and why we may be hearing the frustrating phrase “there was immature data” from regulators. Of course, these are far from the only possibilities, and arguments can be (and are) made that regulators’ demands are too high given the experience and data they have already accumulated with certain biosimilars. Atouf acknowledged that, in its recent guidance, the FDA recommends a hefty list of data. But he believes that, by doing so, the FDA is opening the door for manufacturers to approach the agency with less data in the future and progress more rapidly through development.
Assay Development: Still Far From Standardized, But Possibilities Exist
When we’re discussing the analytical development of biosimilars and what that entails, the term assay is likely to be at the forefront of the discussion. Briefly, for the non-scientists among us, these are the analytical tests used to qualitatively or quantitatively assess the presence, amount, and distribution of a biologic’s critical quality attributes (CQAs). The assays selected will provide either numerical data (quantitative), for instance, data on the potency of a batch, or non-numerical data (qualitative). For example, you’d use qualitative assays to determine the amino acid sequence of a molecule, which is represented as a string of letters, or to identify the glycans (sugars) attached to a protein.
(Very) generally speaking, once a company selects a molecule, for instance, Herceptin, it will characterize the molecule and identify its amino acid sequence and its CQAs. In its guidance, the FDA provides a list of nine factors to consider when developing the analytical methods, including the manufacturing process and the expression system, which Atouf singled out as exceptionally important. These two factors in a biosimilar program are likely to differ from those used to manufacture the originator. In turn, the physical properties of the biosimilar may be slightly different, even if the amino acid sequence is similar. (The type of cell line chosen, for instance, informs the glycosylation pattern of a protein.) To detect and measure any differences, the company first needs to determine which analytical tools or assays to use for comparability, as well as account for the limitations of those assays (e.g., some are too sensitive and others not sensitive enough; think Goldilocks and the Three Bears).
Fortunately, standard assays, tools, and testing procedures exist, in large part because of the work of organizations like USP, other pharmacopeias, and the World Health Organization (WHO). Still, as Atouf pointed out, regardless of whether you’re a biologic or a biosimilar maker, in the absence of publicly available tools and standards, each company is likely to build its own tools. This not only adds time to the development process, but it also introduces inconsistency in measurement approaches, in turn making the comparability exercise more challenging.
That’s not to say USP and other standards-setting organizations aren’t working to develop new standards. But the main challenge is determining which orthogonal methods will demonstrate comparability between the biosimilar and the originator product. As Atouf explained to me, to get the most detailed image of a biosimilar and to compare its physicochemical and functional properties to those of the innovator, you will need to run multiple tests to measure one particular aspect of that molecule, for instance, its secondary structure or protein aggregation (clumping). Just as I could not use a business card alone to identify myself and secure future business partners, a single analytical test is not enough to examine each CQA. The goal is to have multiple tests providing you with the same answer.
For some attributes, the sequence of testing may be simple. For others, it is more complicated, since not all CQAs of a product are easily analyzed using orthogonal approaches. For example, Atouf highlighted the use of circular dichroism (CD) for measuring protein aggregation of monoclonal antibodies. When using this specific tool, a developer may measure the percentage of the protein subject to aggregation. However, using another tool, such as size exclusion chromatography, to measure the same attribute may yield a different aggregation percentage. Though you may be measuring the same attribute, the different assays are not calibrated against each other, meaning they won’t provide you with the same readout.
As Atouf offered, this is where USP, as a standards-setting organization, can support the industry and regulators by creating standards that introduce more of these orthogonal methods, especially for complex attributes where multiple tests yield conflicting results.