Guest Column | September 25, 2017

How Interested Is The FDA In Real-World Evidence?

When the Clinical Leader team attended the 2017 DIA Annual Meeting in June, the topic we heard discussed more than any other was real-world evidence (RWE) — that is, information about a drug that is collected outside of clinical trials. RWE is not a new concept, but there are good reasons for all the current attention being paid to it in the pharma industry. For one, the 21st Century Cures Act, enacted into law in December 2016, seeks to speed the FDA drug and medical device approval processes by shifting some of the evidentiary requirements, in certain instances, from clinical trials to post-market — in other words, to “the real world.”

For this and other reasons, RWE is an area that an increasing number of CROs have moved into. So we decided to reach out to top executives at four of the largest CROs to get their perspectives on RWE and its growing importance in the industry:

  • Radek Wasiak, Ph.D., VP and GM, real-world evidence, Evidera
  • David Thompson, Ph.D., SVP, real world evidence and insights, INC Research/inVentiv Health
  • Nancy A. Dreyer, Ph.D., SVP and global chief, scientific affairs, QuintilesIMS Real-World Insights
  • Haley Kaplowitz, Ph.D., executive director, safety, epidemiology, registries, and risk management, UBC

In parts 1 and 2 of this three-part roundtable Q&A, these experts shared their insights on the growth of RWE, the 21st Century Cures Act’s role (current and future) in that growth, and the potential for RWE to be incorporated into Phase 2 and 3 trials. In this final installment, they discuss the FDA’s interest in RWE and where pharma companies stand on the RWE experience curve.

Are you seeing a lot of increased interest in this research from the FDA?

Wasiak: Over the last year or so, the FDA has made a focused effort to consider different types of evidence, and RWE is one of the areas it is emphasizing. The agency is currently evaluating how RWE should be incorporated into the drug development process. Although we do not yet see many requests coming directly from the FDA, we are observing pharma and medical device companies preparing to receive such requests in the future. Many already have experience working with other regulators, in particular the EMA, which has taken on a significant RWE role in the post-marketing phase.

Thompson: Yes, without a doubt. As a matter of fact, the Clinical Trials Transformation Initiative (CTTI) — a joint undertaking by the FDA and Duke University — launched an RWE workstream a few months ago.  I’m involved in that initiative, and it’s interesting to see how trialists are thinking about real-world data sources, their use-cases for trials, and so forth.

Dreyer: The FDA is giving strong signals about its interest in real-world research, stimulated in large part, I suspect, by the 21st Century Cures Act. The agency welcomes the opportunity to use real-world data for post-approval study requirements and is interested in understanding their application for label expansions.

By engaging with regulators early in the planning process, biopharma leaders can begin to reframe their conversations and explore new ways to incorporate real-world evidence.  As a senior FDA regulator recently told me, industry leaders need to “give us the chance to say yes.” But these discussions take time, and the use of real-world evidence for regulatory decision support is largely in its infancy. We need to educate regulators on how this data can be used and where it is reliable, as well as where it is too weak to be useful.

We also need to continue to explore the boundaries of real-world evidence. For example, some regulators recently proposed a definition of real-world data in a JAMA opinion piece, arguing that it includes only data created for non-research purposes (like electronic medical records and health insurance claims data), and not structured assessments like pain scores administered systematically for study purposes. It’s not clear that these distinctions promote the use of real-world evidence, but agreement on when a study is interventional is important, since it has broad impact on study requirements, especially in Europe under the new clinical trial regulations. Because large biopharmaceutical companies often conduct global research programs using a harmonized protocol across many regions, these fine distinctions take on increasing importance given varying requirements about when treatments must be provided free of charge to patients and whether blinded products must be used in a study.

Kaplowitz: The FDA is supportive of observational studies, such as registries, to meet a postmarketing requirement related to safety — when such an observational study design is an appropriate way to answer the specific research objective(s). The agency has been understandably skeptical about the use of existing data sources, which are typically created for purposes other than research, as an appropriate alternative. However, as it has gained greater understanding of the advantages and disadvantages of this approach, and as new analytical methods have been developed, the agency has certainly become more interested in, and more accepting of, such studies. In terms of postmarketing commitments, the FDA has essentially created a hierarchy of approaches: spontaneous reporting, use of the FDA pharmacovigilance system, long-term observational study using electronic healthcare databases, observational studies with primary data collection (e.g., registries), and randomized clinical trials.

Does pharma have expertise in conducting this type of research? Will that expertise vary by therapeutic area?

Kaplowitz: Pharma has been conducting real-world studies for a long time, generally through its epidemiology, safety, or outcomes research departments. However, new analytic methods and technological capabilities are being applied to existing healthcare databases to make appropriate use of these data sources. Expertise won’t necessarily vary by therapeutic area per se, but rather by whether the needed information exists in one or more of these data sources. For example, a company specializing in over-the-counter products will not find its products in existing prescription insurance claims databases, since those products are not reimbursed by payers.

Wasiak: RWE as a research area is not new. Studies evaluating treatment in routine clinical practice have been ongoing for many years, led by epidemiologists and statisticians within pharma companies, academia, or consultancies. As such, there is a pool of experts to draw on who either already work in pharma companies or can contribute from other organizations. In response to the growing need for RWE, and to improve internal organization, most pharma companies have restructured and created cross-functional RWE teams aimed at optimizing real-world data use. We expect pharma to continue to rely heavily on the expertise of external partners to design and conduct RWE studies.

Thompson: Top 20 pharmaceutical companies certainly do, but small-to-midsize companies are still struggling to meet this need. Even companies that do have in-house RWE expertise seem to have internal turf battles over jurisdiction: should the RWE function reside within medical affairs, health economics and outcomes research (HEOR), pharmacoepidemiology, or clinical? The problem is that all of these departments can legitimately lay claim to at least a piece of real-world research, so something’s got to give moving forward.

Dreyer: Real-world data has been a mainstay of safety studies, but there is little regulatory experience with its use for labeling outside of rare disease research. The difficult questions all center on when the study design and data are good enough to be relied upon. Real-world data scientists understand the strengths and limitations of Big Data, including how issues come to medical attention and are recorded in various databases — and, equally important, when important events are not likely to be recorded, such as treatments not covered by health insurance.

Those with experience in classical clinical trials often try to force real-world studies into a clinical trial-like paradigm. For example: if some source data verification is good, why not monitor 100 percent of the data, as a classical trial would? Similarly, real-world studies need different methods for addressing missing data, since what can be collected during a typical medical encounter is far more limited than what a lengthy clinical trial visit allows.

One of the largest differences between clinical trials and non-interventional, real-world studies is how differences between groups are managed. Clinical trials use randomization to ensure balance, while real-world studies accept whatever treatments patients and physicians choose, then use statistical tools to quantify how much of the observed effect could be explained by bias or systematic error. While we should be justifiably cautious about claiming that small effects are causal, sometimes an effect is simply too strong to be dismissed just because bias may exist.

Extracting the full value of real-world data for meaningful research requires a more flexible approach than has been traditionally used in drug development.  Rather than measuring an outcome exactly the same way from Phase 2 through 4, biopharma companies need to think about how things are actually evaluated in real-world settings.  For example, a clinical researcher may rate a dermatologic condition by conducting a detailed assessment of the extent and severity of inflammation by body part, whereas in a real-world study, a researcher might screen for disease severity by simply asking, “Does itching keep you awake at night?”

To be successful, biopharma companies must understand that real-world evidence is a foundational tool that is integral to the enterprise, from R&D through to commercial. Like clinical research, real-world research requires expert design, coupled with knowledge of which relevant data are accessible, what needs to be collected, and how to do so efficiently in a manner that regulators, payers, clinicians, and health systems will recognize as “good enough.”