
Two regulators, two data frameworks. One target trial specification.

Adigens Health helps sponsors design evidence strategies that meet the standard, from target trial specification to decision-ready confirmatory evidence packages. Get in touch at info@adigenshealth.com.


Drug developers seeking approval on both sides of the Atlantic have always had to manage regulatory divergence. Different endpoints, different benefit–risk thresholds, different post‑marketing expectations. Most experienced teams have learned to navigate this.


What is less well understood is that the EMA and the FDA have also diverged in how they evaluate the feasibility of real‑world evidence, and that gap appears to be widening. As RWE plays an increasingly prominent role in confirmatory evidence packages, particularly as the FDA continues to explore how RWE can supplement a single pivotal trial in appropriate circumstances, getting this wrong has material consequences.


The good news is that the target trial framework, properly applied, is the one design discipline that speaks credibly to both regulators. The challenge is understanding where the divergence lies and why it matters.

The shared starting point: can this study actually be done?

Before any question of analytical method or comparator choice, both the FDA and the EMA ask a more fundamental question: is the proposed RWE study feasible? Can the available data support the causal question being asked? This feasibility question is the gateway through which all RWE submissions must pass. Both regulators have developed ways of answering it, but they have developed different ones, reflecting different data infrastructures, different institutional priorities, and different views on what rigorous feasibility assessment looks like in practice. The discussion that follows reflects how experienced sponsors and methodologists increasingly interpret these frameworks in practice, rather than any formal policy statement by either regulator.


The FDA's answer: fit for purpose

The FDA's approach to RWE feasibility is organised around a single governing concept: fit for purpose. A data source is fit for purpose if it can reliably support the specific study being proposed: not because it is generally high quality, not because it has been used in other studies, but because its specific characteristics allow the key study variables to be validly measured for this question.


In practice, fit‑for‑purpose assessment requires demonstrating that the data source can support credible operationalisation of four things: the eligibility criteria that define who enters the study, the treatment exposure that defines what is being studied, the outcome that defines what is being measured, and the covariates needed to address confounding. If any of these cannot be reliably derived from the available data, the study is not feasible, regardless of how large the database is or how sophisticated the analytical plan.
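The four elements above can be made concrete as a structured checklist. The sketch below, with purely illustrative variable names, shows one way a team might record the variables its target trial specification demands from a candidate data source; it is not a regulatory template, just a minimal way to make the requirements explicit.

```python
from dataclasses import dataclass

@dataclass
class TargetTrialSpec:
    """Minimal sketch of the data demands of a target trial specification.

    All variable names are hypothetical examples, not a standard vocabulary.
    """
    eligibility: list[str]   # variables needed to operationalise who enters the study
    exposure: list[str]      # variables defining the treatment exposure under study
    outcome: list[str]       # variables defining what is being measured
    covariates: list[str]    # confounders that must be measurable for adjustment

    def required_variables(self) -> set[str]:
        # The study is feasible only if every variable listed here can be
        # reliably derived from the candidate data source.
        return set(self.eligibility + self.exposure + self.outcome + self.covariates)

# Illustrative example of filling in the checklist.
spec = TargetTrialSpec(
    eligibility=["age_at_index", "prior_diagnosis_code"],
    exposure=["dispensing_record"],
    outcome=["hospitalisation_event"],
    covariates=["baseline_comorbidity_score"],
)
print(sorted(spec.required_variables()))
```

Writing the list down in this form makes the feasibility question concrete: each entry either can or cannot be derived from the source, and a single gap in any of the four elements undermines the study.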


This is a more demanding standard than it might initially appear. Electronic health record data may capture diagnoses but miss the clinical nuance needed to define eligibility precisely. Claims data may capture dispensing but not actual consumption. A database that looks adequate at the level of a feasibility checklist may fail when the specific variables required by the target trial specification are examined closely. The FDA expects this assessment to be done rigorously, documented and completed before the study begins. Teams that discover feasibility problems mid‑analysis and attempt to adapt their design in response will likely face significant credibility problems with reviewers.


The target trial framework is particularly well suited to FDA‑style feasibility assessment because it forces the key variables to be specified before any data are examined. A team that has written a target trial protocol knows exactly what it needs from the data. The feasibility question becomes concrete and answerable rather than vague and deferrable.

The EMA's answer: a networked data infrastructure

The EMA has taken a different but complementary approach. It has supplemented sponsor‑led feasibility work by investing in a centralised infrastructure for generating and evaluating real‑world evidence across European health data networks. At the centre of that infrastructure is DARWIN EU (the Data Analysis and Real World Interrogation Network). Established by the EMA and now operational across a growing network of European data sources, DARWIN EU provides a coordinated mechanism for running RWE studies using systematically assessed data sources, standardised analytical environments, and oversight from the EMA itself.


For drug developers, DARWIN EU represents both an opportunity and a constraint. The opportunity is access to high‑quality European data sources that are being evaluated for their ability to support pharmacoepidemiological research. The constraint is that studies conducted through DARWIN EU operate within a framework defined by the EMA, which shapes what questions can be asked, how they can be answered, and on what timeline. DARWIN EU does not replace traditional sponsor‑run European RWE studies, but it creates a structured channel through which some questions can be addressed using a coordinated network. The feasibility question, in the DARWIN EU context, is therefore partly answered in advance by the network's data inventory, but sponsors must still ensure that their specific causal question is compatible with the network’s current data and methods.


Where this creates practical tension

In practice, sponsors still perform feasibility work for both regulators, but the dominant organising concepts differ: fit‑for‑purpose assessment at the FDA, and a networked data infrastructure, including DARWIN EU, in Europe. For teams designing RWE studies with multi‑jurisdictional ambitions, this divergence creates a practical planning challenge.

A study designed around a US claims database, optimised to meet FDA fit‑for‑purpose expectations, will not automatically translate to the European data environment. The variables available, the coding systems used, the populations covered, and the follow‑up periods achievable may all differ in ways that affect the core study design. That much follows naturally from the differences between healthcare systems, although access to a large target population remains valuable whatever the geography. A target trial specification written with US data in mind may require substantial revision before it can be executed using European sources. Conversely, a study designed to run through DARWIN EU, taking advantage of the network's data sources and standardised analytical environment, may not be replicable in the US data landscape in a way that satisfies FDA expectations.

This is not an argument against using RWE in multi‑jurisdictional programmes. It is an argument for treating the data strategy as a design decision that needs to be made early, explicitly, and in light of the specific requirements of each regulator.


Why the target trial framework remains the common language

Despite the divergence in how the FDA and EMA approach feasibility, both regulators respond positively to one thing: a clearly specified causal question, articulated before any data are touched. The target trial framework provides this. By requiring teams to specify, in advance, the hypothetical trial their observational study is emulating, it produces a document that both regulators can evaluate on its own terms.

For FDA reviewers, the target trial specification maps directly onto the fit‑for‑purpose assessment: every element of the specification generates a data requirement that can be checked against the available source. For EMA reviewers and DARWIN EU study teams, the same specification provides the analytical brief against which European data sources can be assessed.

The target trial framework does not eliminate the need for jurisdiction‑specific adaptation. But it provides the stable foundation on which that adaptation can be built, and it demonstrates, to both regulators simultaneously, that the study was designed to answer a specific causal question rather than to mine whatever data happened to be available.
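The mapping from specification element to data requirement can be sketched as a simple gap check. The function below, a minimal illustration with hypothetical variable and element names, compares what a specification demands against what a candidate data source can supply, which is the kind of element-by-element reconciliation described above.

```python
def feasibility_gaps(required: dict[str, list[str]],
                     available: set[str]) -> dict[str, list[str]]:
    """Return, per specification element, the variables the source cannot supply.

    `required` maps each specification element (eligibility, exposure, outcome,
    covariates) to the variables it needs; `available` is the set of variables
    the candidate data source can reliably provide. An empty result means no
    gaps were found for this (illustrative) checklist.
    """
    return {
        element: [v for v in variables if v not in available]
        for element, variables in required.items()
        if any(v not in available for v in variables)
    }

# Hypothetical specification demands and a hypothetical source inventory.
required = {
    "eligibility": ["age_at_index", "prior_diagnosis_code"],
    "exposure": ["dispensing_record"],
    "outcome": ["hospitalisation_event"],
    "covariates": ["baseline_comorbidity_score", "smoking_status"],
}
available = {"age_at_index", "prior_diagnosis_code",
             "dispensing_record", "hospitalisation_event",
             "baseline_comorbidity_score"}

print(feasibility_gaps(required, available))
# {'covariates': ['smoking_status']} -> a confounding variable is unmeasurable,
# so this source is not fit for purpose without addressing that gap.
```

The same reconciliation serves both audiences: against a US source it is a fit‑for‑purpose check, and against a European network inventory it is the brief for assessing whether the network's data can execute the specification.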


What this means in practice

Three principles follow for evidence teams planning RWE studies with ambitions in both jurisdictions.

  1. Start with the target trial specification, not the data. Defining the causal question first — before any database is selected or any feasibility assessment is run — produces a specification that can be evaluated against both FDA fit‑for‑purpose standards and European data inventories, including DARWIN EU where relevant. Starting with the data produces a study that answers the question the data can support, which is rarely the same as the question the regulator needs answered.

  2. Appropriately consider geographical context. The data infrastructure, coding systems, and population coverage differ enough that a feasibility assessment that ignores geography will not land well. Build time into the programme to plan for assessments against both FDA and EMA expectations.

  3. Engage DARWIN EU early if European approval is a priority. The network's study timelines and governance processes are not trivial. Teams that approach DARWIN EU late in the evidence planning process frequently find that the timeline implications are incompatible with their regulatory strategy.


The divergence between EMA and FDA on real‑world evidence feasibility is real, but it is navigable. The target trial framework provides the foundation on which both assessments can be built, and the common language in which both conversations can be had.

