I’ll admit, when I read the headlines that broke late last week, I did a double-take. I couldn’t believe I’d actually read the words “FDA Withdraws Draft Guidance on Statistical Approaches to Evaluate Analytical Similarity.” But there it was, published on multiple websites and bouncing around on social media. It immediately buoyed my spirits. In fact, I had been loudly scoffing at an article about yet another ridiculous legal turn of events that could impact biosimilars and was in desperate need of some good news.
A number of articles have already provided a synopsis of this reversal and the FDA’s future plans. But I felt it was necessary to add one more to the conversation, not only because the FDA’s action is unprecedented thus far in the biosimilar industry, but also because it suggests an important turning point in the agency’s biosimilar oversight.
Recap: Why This Guidance Was Concerning
There were a number of concerns highlighted in the comments submitted to the agency, including the challenge of sourcing 10-plus lots of U.S.-licensed comparators in the age of Risk Evaluation and Mitigation Strategy (REMS) abuses and orphan drug development. (After all, these costly drugs with limited lot production will need biosimilars someday.) But the concerns most troubling to me had to do with the recommendations for the statistical analysis of certain critical quality attributes.
I was introduced to this complicated topic when I attended the 2016 DIA Biosimilars conference. During his presentation, Martin Schiestl, CSO of Sandoz, urged the FDA to reconsider publishing the then-unreleased draft guidance on statistical analysis. Despite his argument that the guidance could lead to highly similar biosimilars being rejected for the wrong reasons, the FDA proceeded to release it in September 2017.
Following its publication, Schiestl spoke once more at the 2017 DIA Biosimilars conference, prior to the end of the guidance’s comment period. As he reiterated, the guidance would ultimately leave biosimilar makers “chasing ghosts.” His biggest concern was the FDA’s insistence that tier one critical quality attributes (CQAs), those most important to the molecule’s clinical performance, be analyzed using equivalence testing. (The methods for tiers two and three were less controversial.) This testing would require biosimilar makers to compare the means (averages) of biosimilar and reference product batches to ensure the biosimilar fell within a range derived from the reference product’s variability. The hitch? The reference product mean has been found to shift over time, meaning the biosimilar maker could be caught in a wild goose chase. And this chase not only equates to extra time, effort, and cost; it could also incorrectly suggest a biosimilar is not actually equivalent to its originator.
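To make the mechanics concrete, here is a minimal sketch of the kind of two one-sided tests (TOST) comparison of batch means the draft guidance described. The margin factor of 1.5 times the reference standard deviation reflects the criterion widely reported from the draft; the normal approximation, the function, and the data are illustrative assumptions, not taken from any actual filing.

```python
import statistics
from math import sqrt

def tost_equivalence(biosimilar, reference, margin_factor=1.5, z=1.645):
    """Sketch of a TOST-style equivalence test for one tier 1 attribute.

    The equivalence margin is margin_factor * sd(reference), mirroring the
    draft guidance's reported 1.5-sigma criterion. A normal approximation
    (z) is used for brevity; a real analysis would use the t-distribution
    and handle unequal variances and correlated lots.
    """
    mean_b = statistics.mean(biosimilar)
    mean_r = statistics.mean(reference)
    sd_b = statistics.stdev(biosimilar)
    sd_r = statistics.stdev(reference)

    margin = margin_factor * sd_r  # acceptance range: +/- 1.5 * sd(reference)

    # Standard error of the difference in means (independent samples)
    se = sqrt(sd_b**2 / len(biosimilar) + sd_r**2 / len(reference))

    diff = mean_b - mean_r
    ci_low, ci_high = diff - z * se, diff + z * se

    # Equivalence is concluded only if the entire 90% CI of the mean
    # difference falls inside the margin.
    return -margin < ci_low and ci_high < margin

# Hypothetical lot data: a small mean offset passes, a drifted mean fails.
reference_lots = [100, 101, 99, 100, 102, 98, 100, 101, 99, 100]
biosimilar_lots = [100.5, 100, 101, 99.5, 100, 100.5, 99, 101, 100, 100.5]
```

Note that the margin itself is computed from the reference lots on hand, which is exactly why a drifting reference mean (or an unrepresentative sample of lots) can move the goalposts on an otherwise highly similar product.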
The impact of this lot-to-lot variability is best seen in the FDA’s review of Sandoz’s Enbrel biosimilar candidate, Erelzi. As I’ve written in the past, this molecule faced an interesting scientific and regulatory journey given the reference product’s misfolded protein and the work Sandoz did to reshuffle these variants in its biosimilar and educate the FDA. But on the statistical level, Erelzi did not demonstrate statistical equivalence to the U.S.-sourced Enbrel.
If you look at page 28 of the FDA’s Erelzi briefing document, you’ll see that Sandoz secured 31 lots of U.S.-licensed Enbrel and was unable to demonstrate statistical equivalence. That said, Erelzi was found equivalent to the 43 batches of EU-licensed Enbrel. In terms of lot-to-lot variability, Figure 7 on page 79 also reveals that the 19 batches of Erelzi fell within a much tighter range (strikingly so) than the lots of U.S.- and EU-licensed Enbrel.
As we all know, Erelzi was approved despite its lack of statistical equivalence to U.S.-licensed Enbrel. This decision reinforced the FDA’s dedication to the totality-of-evidence approach. Indeed, the FDA argued the guidance’s recommendations would simply be another facet of the totality of evidence and would not be used to outright reject a biosimilar candidate should the testing fail to give analytical similarity a clear thumbs-up. But at the end of the day, if the results of these recommended analyses needed to be taken with a grain of salt on a case-by-case basis, what was the point of this possible “ghost hunting”?
What Could This Mean In The Long Run?
Overall, there are a number of reasons to celebrate the FDA’s withdrawal of this guidance. But more important than the elimination of the controversial guidance itself is the FDA’s change of heart. The industry has offered its two cents each time a draft guidance has been released, and even in situations where there have been larger concerns (especially over the notion of regulating interchangeability), the FDA has largely stuck to its guns and released a challenging guidance. Given the wealth of long-term safety and efficacy data from abroad, the FDA’s efforts to put an individual stamp on biosimilar regulation, with interchangeability and, until last week, statistical approaches, could one day amount to a striking wall of red tape. The FDA has yet to release a finalized interchangeability guidance following industry comments, so how that guidance will evolve remains to be seen. But the agency’s decision last week, and the language used to describe this move, suggest we’ve reached a notable turning point in the FDA’s role.
In the past year or so, the FDA has done a great job emphasizing the importance of the biosimilars industry, as well as the need to streamline the regulatory pathway. And it seems a large part of this latest decision was to ensure the agency practices what it has been preaching. FDA Commissioner Scott Gottlieb wrote after withdrawing the guidance, “As the cost of developing a single biosimilar product can reach hundreds of millions of dollars, it’s important that we advance policies that help make the development of biosimilar products more efficient and patient and provider acceptance more certain.”
From the sounds of it, the FDA will revisit this guidance in the coming years, with the hope of releasing one that accounts for the “most current and relevant science” and for originator lot-to-lot variability. Biosimilar makers are only growing smarter when it comes to analytical science and finding efficient new ways to work within the abbreviated biosimilar development model. Because of this, I’d argue that a blanket guidance at this stage in the game, as opposed to one-on-one discussion with the FDA, is likely to detract from these advances.
Over the years, the FDA has emphasized its identity as the scientific gatekeeper to the market, and this remains true. Being scientifically challenged myself (outside of the chemistry of mixing cream into coffee), I’m always impressed by the work regulators do. But as an observer, I’ve often felt the agency likes to have all its bases covered — and then some. This attitude is best reflected in the efforts biosimilar makers have to undertake specifically in the U.S. (e.g., interchangeability and a new naming system). After 12 years of biosimilar development, regulation, and safe use abroad, these differentiating regulatory efforts seem extraneous and are unlikely to remain as relevant or necessary in the U.S. as they may seem today.
As such, this latest decision shows the regulator’s increased comfort with making the biosimilar development process more efficient. This isn’t to say I expect FDA officials to announce the dismissal of other contested draft guidances any time soon. But allowing the industry’s concerns to greatly influence the fate of this particular guidance is something I hope to see more of as the FDA continues to fine-tune its biosimilar policies.