From The Editor | June 5, 2019

Why We Should Celebrate The FDA's Biosimilar Comparative Analytics Guidance


By Anna Rose Welch, Editor, Biosimilar Development


Over the last few weeks, I’ve been working my way (slowly) through the FDA’s Comparative Analytical Assessment and Other Quality-Related Considerations draft guidance. Though it certainly doesn’t make for light reading, I quite enjoyed learning more about the comparative analytical assessment, especially since it’s the bread and butter of biosimilar development. (Not to mention, as a poet, with each page I turned, I felt more and more like a rocket scientist — even though I know that the information and processes outlined here are not rocket science for those of you who are scientifically and statistically inclined. #Unicorns)

Despite the importance of the process the FDA is outlining in the guidance, I’ve surprisingly heard little chatter — positive or negative — about what the agency is now outlining and what this may mean for biosimilars and the biosimilar regulatory paradigm moving forward. (And this guidance is much more important than the FDA’s interchangeability guidance, which, as you may have heard, still exists.)

For those of you who have been closely following the FDA’s guidances over the past few years, you’ll know that something unprecedented happened last year right around this time: the agency withdrew its highly contentious Statistical Approaches to Evaluate Analytical Similarity draft guidance. Following this news, I wrote an article unpacking why this guidance raised red flags for many in the industry and why its elimination was worthy of a celebration. For instance, one of the biggest points of contention was the use of equivalence testing for tier one, or the highest ranked, critical quality attributes (i.e., the ones that are likely to have the most significant clinical impact). The poet in me particularly liked the argument that equivalence testing would leave the biosimilar industry “hunting ghosts” as reference products and their quality attributes may shift slightly over time. And, in some cases, this method may have meant highly similar biosimilars would not be able to show statistical equivalence, which was a migraine both the agency and industry were no doubt keen to avoid.  

Fast forward to May 22 and we now have the FDA’s second take on this guidance. And folks, I’m thrilled to report that, from what I’m reading and hearing, the guidance should leave us feeling encouraged. That’s not to say there aren’t certain areas that will require some clarity or that may raise a few concerns depending on how the agency proceeds. But overall, this latest attempt showcases what the FDA has long emphasized for biosimilar development — a sound, scientific approach which could usher us towards a future of more tailored biosimilar development.

Quality Considerations Emphasize FDA’s Growth

As many of you may already be aware, this draft is partially a revision of a final guidance published in 2015 — Quality Considerations in Demonstrating Biosimilarity to a Reference Protein Product. The first half of the guidance (until page 18) comprises the information previously presented to industry with a few revisions added.

Overall, the most important thing to note about the first 18 pages of the guidance for the non-scientists among us is that it essentially functions as a recipe for putting together a solid biosimilar analytical assessment program. The guidance lists nine factors altogether that biosimilar makers should evaluate to demonstrate high similarity to the reference product. These factors include the expression system, the manufacturing process, the physicochemical properties, the functional activities, target binding, impurities, reference product and reference standards, finished drug product, and stability. As such, the FDA is likely to anticipate data providing insight into all of the factors outlined (as appropriate for the molecule in question). These data will not only showcase just how similar the biosimilar molecule is to its originator, but they will also reveal any differences that may exist between the molecules and provide scientific justification as to why those differences will not be clinically relevant.

Now, from what I’ve heard, many of the changes made to the 2015 subject matter were smaller sentence-level clarity changes. That said, I hardly wish to undermine the power of small revisions; for a regulatory document, a slight language change could lead to exciting new worlds of progress or sheer and utter confusion. But none of these verbiage changes were revolutionary, by which I mean they do not create a higher (or lower) barrier for biosimilar makers to overcome in order to attain biosimilarity. In fact, I’m sure many of you would hardly call what the FDA outlines here as a quick and easy, non-resource-intensive task to begin with.

Perhaps the largest change to note in this portion of the guidance was the addition of several paragraphs in the section on the reference product and reference standards. In short, reference standards ensure manufacturers have an analytical method in place that will promote data consistency and control drifts in the product over time. The information added outlines the process biosimilar makers should follow to best use these reference standards. For example, in lines 586-588, the FDA writes, “There should be an evaluation across the history of multiple reference standard qualifications to address potential drift.” As we all know, biosimilar development occurs over the course of a few years. Therefore, it’s critical that biosimilar analytical data be compared to that of the reference product throughout the course of development.

But if we’re looking at these efforts from a high level (my favorite dwelling place when science is involved), the FDA’s efforts to make these changes and clarifications are worthy of praise. For one, this demonstrates the agency’s willingness to take the information it has gleaned since 2015 about the biosimilar analytical and CMC process and tweak certain recommendations as it sees fit. One of our biggest demands of any regulatory agency is that regulators continue to revisit and revise their policies following scientific advancements and their own lessons learned. The FDA’s revision of this older guidance in accordance with its experiences and increasing knowledge of scientific and technological advancement is exactly what we’ve been pushing for all along.

Comparability Recommendations Promote Well-Known Standards, Flexibility

The second half of the guidance discussing comparative analytical assessment (from page 18 to the end) marks the FDA’s new efforts to fill in the blanks on what it tried to accomplish with its withdrawn statistical analysis guidance. Ever since that withdrawal, I’ve been curious to see which direction the FDA would take with its future guidelines. And so far, what it has outlined has been hailed as a more flexible, open approach to analytical assessment. It’s also much more in line with several of the standards with which manufacturers already comply, meaning this portion of the guidance should leave the industry feeling quite pleased overall.

For example, the move from what the FDA originally termed “tiering” of the critical quality attributes (CQAs) to now “ranking” the CQAs may seem like a small, perhaps insignificant change. However, this change is significant, given that this new draft guidance now reflects language that regulators have published in the past and that has been well understood by industry for at least the past decade — that of ranking the CQAs in order of their impact on safety, efficacy, pharmacokinetics and pharmacodynamics, and immunogenicity. One of the issues with the withdrawn statistical analysis guidance was the recommendation of tiering the CQAs, which left many in the industry unclear as to how exactly the FDA was planning on defining such a tiering system. (In fact, I’ve heard this term was a bit thorny for even the FDA to nail down when asked.) As I mentioned above, employing new terms in guidance documents can be a tricky thing for both regulators and manufacturers to maneuver, hence the happiness around this reversion to a standard principle.

It’s also worth noting the FDA’s flexibility in terms of the number of lots biosimilar makers should acquire. Though the FDA specifies it would like to see at least 10 reference product lots spanning a period of several years, this does not limit manufacturers to only 10 or some fixed number above 10 lots. Given that we observe slight shifts in reference products over time, with some products encountering larger shifts, it’s helpful to enable a manufacturer to source fewer or more lots as needed, depending on the product and the variability of the molecule. For those products that typically remain consistent over time, it can be assumed a lower number of lots will be suitable. But had an upper limit been placed on the number needed, this could have hindered the biosimilar manufacturers’ knowledge of the molecule, as well as any scientific discussions with the agency.

The biggest — and most welcome — change in the FDA’s approach, however, is their recommendation to use the quality range (QR) approach as opposed to the highly contested equivalence testing. (That’s not to say equivalence testing isn’t still an approach the FDA would accept, however. It’s still mentioned as an option in this new draft guidance.) The QR approach is to be used when analyzing comparative analytical data to determine that each moderate- to high-risk CQA is similar to those of the reference product.

To visualize this approach, think of a bell curve with data points scattered on it. These data points are gleaned from characterizing the reference product, determining the variabilities of the molecule between different lots, and calculating its standard deviation. The bell curve and these data points essentially specify the lower and upper ranges within which a biosimilar’s own data must fall.

As I mentioned above, this change was welcomed, seeing as the QR approach is consistent with the agency’s current regulation of the manufacturing process. Though the equations needed to determine standard deviation may look like something from a Harry Potter novel, this is actually a simple statistical equation, and the results are easily interpreted and well understood by the industry. This is also a more flexible approach, as the manufacturer can easily justify its methods for analyzing its biosimilar (i.e., whether it chose two or three standard deviations), and the size of the bell curve can be adjusted if more scrutiny of the molecule is deemed necessary.
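To make the QR approach described above concrete, here is a minimal sketch in Python. All of the numbers and the choice of a three-standard-deviation multiplier are hypothetical, chosen purely for illustration — the actual multiplier and the CQAs it applies to would need to be scientifically justified to the agency.

```python
# Illustrative sketch of the quality range (QR) approach: the range is the
# reference product mean plus or minus k standard deviations, and each
# biosimilar lot's measurement for that CQA should fall inside it.
# All values below are hypothetical, not drawn from any FDA example.
from statistics import mean, stdev

def quality_range(reference_lots, k=3):
    """Return (lower, upper) bounds: mean +/- k * sample standard deviation.

    k (commonly 2 or 3 standard deviations) widens or narrows the range,
    adjusting how much scrutiny the attribute receives.
    """
    m = mean(reference_lots)
    s = stdev(reference_lots)
    return (m - k * s, m + k * s)

def lots_within_range(biosimilar_lots, qr):
    """Check each biosimilar lot's value against the quality range."""
    lower, upper = qr
    return [lower <= value <= upper for value in biosimilar_lots]

# Hypothetical measurements of a single CQA (say, % main-peak purity)
# across 10 reference product lots sourced over several years:
reference = [97.1, 96.8, 97.4, 97.0, 96.9, 97.2, 97.3, 96.7, 97.1, 97.0]
qr = quality_range(reference, k=3)

# Hypothetical biosimilar lot measurements for the same CQA:
biosimilar = [97.0, 96.9, 97.2, 96.6]
print(qr)
print(lots_within_range(biosimilar, qr))
```

The appeal the industry sees here is visible even in this toy version: the math is a mean and a standard deviation, the result is a simple in-or-out check per lot, and the width of the range (the choice of `k`) is an explicit, justifiable dial rather than a pass/fail hypothesis test.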

Biosimilar And Biologic Regulatory Equality Must Remain Top Of Mind

It’s a concept we’re well familiar with in this industry and one that enters every regulatory conversation at conferences: it is critical that biosimilars not be faced with additional regulatory scrutiny that is not also applied to the reference product itself. There is only one portion of the guidance that is worth paying attention to with regard to this potential caveat — and much of it will depend on how strictly the FDA plans to oversee this particular requirement.

There is language in this new guidance that recommends biosimilar manufacturers “target the centers of distribution” of the reference product’s CQAs. If we’re picturing our trusty bell curve again, the biosimilar data is going to want to be located as close to the center or peak of the bell curve as possible. As the FDA suggests, should the distribution of the biosimilar’s CQAs sit farther toward the left- or right-hand tails of the reference product’s distribution, there could be a need to see more data — particularly if the CQAs are especially critical to the clinical function of the molecule. So, even though the QR approach is a much more favorable approach compared to equivalence testing, this emphasis on hitting the center of the distribution could be problematic given that the data distribution — and, therefore, the bell curve — of certain reference products shifts over time. This, in turn, could lead to some more ghost-chasing.

Now, whether or not the language can, or even should, be changed is yet to be seen. In principle, the goal is to maintain the center of the bell curve over long periods of time, as it ensures a consistent, quality biologic AND biosimilar manufacturing process. However, everything will come down to how rigorously the FDA interprets this standard for biosimilars. Should the agency be quite strict with its expectations that biosimilars will and should always hit the center of the bell curve, we could run into the danger that biosimilars will be held to a higher standard than their reference products. After all, the biologic has the entire quality range within which to work. If there is one discussion we must, and no doubt will, continue to have with the agency as we move forward, it is ensuring that these policies, favorable on the page, are also applied rationally in practice.