2021-4649

Research Proposal

Project Title: 
A statistical methodology study to evaluate External Comparator Arm study results versus Randomized Controlled Trials in cancer drug development
Scientific Abstract: 

Background: The randomized controlled trial (RCT) is the gold-standard design for drug approval studies. However, there are cases where RCTs are either unethical or unfeasible. In these circumstances single-arm trials (SATs) might be conducted and submitted to regulatory authorities for drug approval. Since SATs lack results for control patients, information from external data sources can be compiled and utilized to provide context for better interpretability of study results.
Objective: In this project, we seek to evaluate current statistical methodologies in the specific context of external comparator arm (ECA) studies to provide evidence-based methodological recommendations. Ultimately, this will also lead to recommendations on when ECA studies would provide sufficient scientific evidence to support regulatory decision-making.
Study Design: The experimental arm of an RCT is taken as a hypothetical SAT. The four requested prostate cancer RCTs were carefully selected using multiple criteria, e.g. a significant overall survival benefit, availability of endpoints, and sufficient sample size. External real-world (RW) data collected for patients with the same indication will be organized in parallel to serve as an ECA. This set-up enables a comparison of RCT treatment effect estimates versus RW treatment effect estimates.
Participants: Prostate cancer patients in selected RCTs and external real-world data collected in parallel.
Main Outcome Measure: Overall survival.
Statistical Analysis: Different analytical methods (e.g. propensity score methods) will be used to assess best approaches to estimate treatment effects.

Brief Project Background and Statement of Project Significance: 

Oncology drug development has evolved in the past 10 years with an increased focus on personalized therapies, sometimes sub-setting a larger oncology indication into a grouping of essentially rare diseases defined by biomarkers or genetic testing. There has also been an active movement in the development of treatments for advanced and specialty cancers. Competition for oncology patients in the research setting has become a barrier to traditional study enrolment and traditional study design. Cancer clinical trial design and methodology have had no choice but to change to keep pace with progress in the clinical sciences and the unmet medical need. One of these evolutions has been the growing use and acceptance of external comparators in this active research space.
A number of new clinical trial designs have been introduced in oncology based on clinical scientific insights (e.g. biomarkers) and there has been interest in exploring ways to augment clinical trial data with existing data from other clinical trials or RW databases. One specific use of such data is to contextualise results of clinical trials by providing external data. This is considered particularly useful in the case of trial designs without a parallel randomised control group, such as single-arm trials, with the generated evidence being used as pivotal evidence of efficacy in the context of a marketing authorisation application. A number of analysis methods have been proposed to address some of the shortcomings of using external comparators in clinical trials for comparison of efficacy in a specific indication, including methods that require individual data, see the given references. These analysis methods aim to incorporate external comparators in the analysis with minimum bias.
Eichler et al., in a paper from 2016, presented a framework for situations in which RCTs are not feasible for establishing the treatment effect in a proposed indication. Simulation studies, including different sensitivity analyses, were put forward as a way to further refine available methodologies and to provide insight into different sources of bias. For these simulation studies, validation would consist of using conventional RCTs: concurrently analysing their results as single-arm trials with historical comparators and then comparing the results of the randomized and nonrandomized analyses. These validation exercises would provide insight into where new methodologies are adequate and where they are not. In recent years, as a result of efforts to increase transparency, access to data from completed clinical trials has increased, making it feasible to start working on this type of simulation study.
To facilitate regulatory decision-making, this study aims to explore the impact of using external comparator data by means of a simulation study. The results of this simulation study will contribute to developing recommendations regarding the methodologies to be applied and the characteristics of situations in which data from a single-arm study with historical controls can form a basis for drug approval.

Specific Aims of the Project: 

The general objective of this research project is to assess various statistical approaches in the specific context of ECA studies and to provide evidence-based recommendations for analysing ECA studies.
Objectives
1. To simulate SATs on the basis of completed randomised-controlled clinical trials, and to compare External Comparator Arm study results with the concurrent control results from the randomised controlled trials.
2. To assess the impact of non-randomised trials in a clinical development program. It is of interest to know how repeated single-arm trials (with a potential external comparator arm) would perform compared to randomised controlled trials with respect to bias, precision, variability of the treatment effect estimate, sample size and the probability of reaching false positive conclusions.
3. To develop recommendations about methods, and about when it is (or is not) informative to use RW ECA data, across various situations of data availability and natural history of the disease.

What is the purpose of the analysis being proposed? Please select all that apply.: 
Develop or refine statistical methods
Research on clinical trial methods
Software Used: 
I am not analyzing participant-level data / plan to use another secure data sharing platform
Data Source and Inclusion/Exclusion Criteria to be used to define the patient sample for your study: 

All patients from the full analysis set from the selected RCTs will be used, therefore following the inclusion/exclusion criteria from the individual study protocols. The same inclusion/exclusion criteria are intended to be used (as available) to select external comparators from RW data sources.

Main Outcome Measure and how it will be categorized/defined for your study: 

Overall Survival, defined as time from randomization until death from any cause. Survival time of living patients will be censored on the last date a patient is known to be alive or lost to follow-up.

Main Predictor/Independent Variable and how it will be categorized/defined for your study: 

The main predictor is the treatment received. Other important baseline covariates will be included in statistical models to adjust for selection bias.

Other Variables of Interest that will be used in your analysis and how they will be categorized/defined for your study: 

Other endpoints to be analysed may include overall response, duration of response, time to next treatment, progression free survival.
Demographic and other baseline covariates will be used for model building (e.g. propensity score modelling).

Statistical Analysis Plan: 

A study protocol detailing the key methodological and operational aspects of the study (clinical trial selection, RW data source selection, the sample size needed for a priori defined hypotheses, analysis methods, study timelines, the analysis plan, and study conduct) is being created and will be further detailed once access to the selected clinical trial data is confirmed. IQVIA will ensure that the study protocol complies with local Acts and Regulations within each country of the participating data partners. This includes obtaining ethics committee approval as appropriate and where necessary.
Specifically, the RCT and RW data used will be outlined, including how to develop recommendations for future ECA studies:
·Develop recommendations for statistical approaches: assess operating characteristics (performance measures) of the approaches, i.e. type I error rate, power (the complement of the type II error rate), bias, mean squared error, and coverage of confidence intervals
·Develop design recommendations
·Develop recommendations for the level of sensitivity analysis needed to ensure robust conclusions
·Develop recommendations for regulatory requirements
The data from the RCT will be re-analysed following the analysis specified in the protocol. That is, distribution of time-to-event variables will be estimated using the Kaplan-Meier product limit method. Median event times with two-sided 95% confidence intervals will be estimated, together with event rate estimates, e.g. at 6 and 12 months. The Cox proportional hazards model or similar methodology will be used for the estimation of treatment effects and the associated 95% confidence interval.
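To make the product-limit method described above concrete, the following is a minimal, purely illustrative sketch in Python with hypothetical survival times (the actual analyses would of course use validated statistical software; the data and the ten-patient arm below are invented for illustration only):

```python
# Minimal Kaplan-Meier product-limit sketch on invented data.
# Inputs: time-to-event in months and an event flag (1 = death, 0 = censored).

def kaplan_meier(times, events):
    """Return (time, S(t)) pairs of the product-limit estimator at event times."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        removed = sum(1 for tt, _ in data if tt == t)
        if deaths > 0:
            # Multiply the survival curve by the conditional survival at t.
            surv *= (n_at_risk - deaths) / n_at_risk
            curve.append((t, surv))
        n_at_risk -= removed
        i += removed
    return curve

def median_survival(curve):
    """First time at which S(t) drops to 0.5 or below (None if never reached)."""
    for t, s in curve:
        if s <= 0.5:
            return t
    return None

# Hypothetical arm: 10 patients, months to death (event = 1) or censoring (0).
times  = [3, 5, 6, 6, 8, 10, 12, 12, 15, 18]
events = [1, 1, 1, 0, 1, 1, 0, 1, 1, 0]
curve = kaplan_meier(times, events)
print("median OS (months):", median_survival(curve))  # prints 10
```

Ties between a death and a censoring at the same time follow the usual convention that deaths occur first, so the censored patient still counts as at risk at that time.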
The data from the treatment arm of the RCT will be used as if it were a single-arm trial and will be analysed together with the External Comparator Arm. Analytical methods such as propensity score (PS) methods will be used to compare different approaches to estimating the treatment effect.
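One such PS approach can be sketched as follows: fit a model for the probability of being in the experimental arm rather than the ECA given baseline covariates, then weight patients by the inverse of that probability (IPTW). The sketch below uses a single simulated covariate and a hand-rolled logistic fit; the covariate, sample size and model are assumptions for illustration, not the study's specification:

```python
# Hedged sketch of a propensity score workflow with inverse-probability-of-
# treatment weighting (IPTW) on simulated data. Illustrative only.
import math
import random

def fit_logistic(x, z, lr=0.1, steps=2000):
    """One-covariate logistic regression by gradient ascent; returns (b0, b1)."""
    b0 = b1 = 0.0
    n = len(x)
    for _ in range(steps):
        g0 = g1 = 0.0
        for xi, zi in zip(x, z):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            g0 += zi - p
            g1 += (zi - p) * xi
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

random.seed(1)
# x: a standardized baseline covariate (e.g. age); arm membership depends on it,
# mimicking the selection bias an ECA must be adjusted for.
x = [random.gauss(0.0, 1.0) for _ in range(500)]
z = [1 if random.random() < 1.0 / (1.0 + math.exp(-xi)) else 0 for xi in x]

b0, b1 = fit_logistic(x, z)
ps = [1.0 / (1.0 + math.exp(-(b0 + b1 * xi))) for xi in x]
# ATE-style IPTW weights: 1/PS for the treated arm, 1/(1-PS) for the ECA.
w = [1.0 / p if zi == 1 else 1.0 / (1.0 - p) for p, zi in zip(ps, z)]
print("fitted slope:", round(b1, 2))
```

Outcome models (e.g. a weighted Cox model) would then be fitted using these weights; alternatives such as PS matching or stratification follow the same first step.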
Substantial additional sensitivity analyses will be performed, including applications of different estimands, different handling of missing values, subgroup analyses (e.g. by country, line of therapy, severity of disease, sex, age group) and sensitivity analyses for unmeasured confounding.
The RCT data will also be used as a basis to simulate artificial data that follows realistic assumptions. The simulated data will then be intentionally modified to measure the effect of different situations on the bias of the estimated treatment effect.
Descriptive analyses of the simulation results will be undertaken to assess the performance of the methods. Further analyses will investigate the distribution of the point estimates of the effect size across the repetitions of the simulation study in addition to the estimated bias to assist in spotting outliers and methods that, while being unbiased, have undesirable properties. Monte Carlo standard errors will be reported for all estimates to quantify the simulation uncertainties and visualizations of the simulation performance results will display 95% CIs. By presenting estimates of uncertainty, the presentation of results will acknowledge that the simulations are themselves an empirical experiment and therefore the performance measures (e.g. bias, mean squared error) are themselves estimated and subject to error. Careful consideration will be given to the number of repetitions needed in each simulation study for minimizing simulation errors.
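The logic of reporting Monte Carlo standard errors alongside estimated performance measures can be illustrated with a toy experiment (the exponential survival model, hazard ratio, and sample sizes below are assumptions chosen for brevity, not the study's simulation design):

```python
# Illustrative Monte Carlo experiment: simulate exponential survival in two
# arms under a known hazard ratio, estimate log(HR) in each repetition via the
# exponential MLE (events / total follow-up), then report the estimated bias
# and its Monte Carlo standard error.
import math
import random
import statistics

random.seed(2021)
TRUE_LOG_HR = math.log(0.7)   # treatment reduces the hazard by 30%
N_PER_ARM, N_REPS = 200, 500

estimates = []
for _ in range(N_REPS):
    t_ctrl = [random.expovariate(1.0) for _ in range(N_PER_ARM)]  # hazard 1.0
    t_trt = [random.expovariate(0.7) for _ in range(N_PER_ARM)]   # hazard 0.7
    # With all events observed, the MLE of each hazard is n / total time, so
    # the estimated log hazard ratio reduces to log(sum_ctrl / sum_trt).
    estimates.append(math.log(sum(t_ctrl) / sum(t_trt)))

bias = statistics.mean(estimates) - TRUE_LOG_HR
mcse = statistics.stdev(estimates) / math.sqrt(N_REPS)
print(f"bias = {bias:.4f}, Monte Carlo SE = {mcse:.4f}")
```

The Monte Carlo SE makes explicit that "bias" here is itself an estimate; halving it requires roughly four times as many repetitions, which is the trade-off behind choosing the number of repetitions per scenario.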
Since contracting with RW database owners is currently work in progress, only data access on the YODA platform (but not outside YODA) is requested at the moment, deviating from the original request. Once contracting with database owners has been finalized, it may become necessary to request data access outside of YODA, but it is hoped that the contracts to be developed will allow moving the RW data to YODA servers. If it is not possible to move the RW data to YODA servers, a request may be made at a later time to move the RCT data to IQVIA's high-security servers.

Narrative Summary: 

The European Medicines Agency (EMA) contracted IQVIA to advance the knowledge around using single-arm trials (SATs) with external comparators for cancer drug development. Based on simulation studies using data from randomized controlled trials (RCTs) and real-world data, the project will develop recommendations on how best to incorporate external comparators into the analysis. This is important because single-arm trials have an increasing role in cancer drug development for (subtypes of) cancers that are rare or have a high unmet medical need.

Project Timeline: 

• Preliminary Study Protocol/Outline: Apr 2021
• Study Start-up (incl. ethics approval, RW data owner contracting): Apr – Jun 2021
• Study Protocol: Apr – May 2021
• Data Collection / Preparation: Jun – Aug 2021
• Start of Data Analyses: Aug – Oct 2021
• Study Report: Oct – Nov 2021
• Manuscript: Nov – Dec 2021

Dissemination Plan: 

Upon completion of the study, IQVIA will create a detailed study report including a description of the methods used, the generated results and their interpretation according to the agreed statistical analysis plan. Further, recommendations as per Section 7 will be derived. The final study report will serve as a basis for the main publication of the study.
IQVIA will draft one or more manuscripts ready to be submitted for publication in cooperation with the EMA.

Bibliography: 

General references to External Comparator Arm studies (not specifically referenced in text).
1. Gray CM, Grimson F, Layton D, Pocock S, Kim J: A framework for methodological choice and evidence assessment for studies using external comparators from real-world data. Drug Safety 43: 623-633, 2020.

2. Arone B: The argument for external comparator adoption. Pharmaceutical Executive 39(6): 30-31. https://www.pharmexec.com/view/argument-external-comparator-adoption. Accessed 19 Jan 2021.
3. Mack C, Pavesio A, Kelly K, et al: Making the most of external comparators. A study of fracture healing in patients at risk of nonunion. Poster presented at: International Society of Pharmacoeconomics & Outcomes Research, 2017.

4. Burcu M, Dreyer NA, Franklin JM, Blum MD, Critchlow CW, Perfetto EM, Zhou W: Real-world evidence to support regulatory decision-making for medicines: Considerations for external control arms. Pharmacoepidemiology and Drug Safety 29: 1228-1235, 2020.

5. Ghadessi M, Tang R, Zhou J, Liu R, Wang C, Toyoizumi K, Mei C, Zhang L, Deng CQ, Beckman RA: A roadmap to using historical controls in clinical trials – by Drug Information Association Adaptive Design Scientific Working Group (DIA-ADSWG). Orphanet Journal of Rare Diseases 15(1): 1-19, 2020.

6. Largent JA and Velentgas P: Abstract #4848 External comparators supporting regulatory submissions 2017-2019. Pharmacoepidemiology & Drug Safety 29(S3): 431-432, 2020.

7. Friends of Cancer Research, Beckers F, Capra W, Cassidy A, Cihon F, Davi R, Jain R, Joshi A, Kanapuru B, Koontz L, Labriola D, LeBlanc M, Mahoney N, Menefee M, Mishra-Kalyani P, Nadpara R, Omel J, Pulkstenis E, Rassen J, Reitsma D, Rosner G, Samant M, Wang C, Yin, Xiang: Characterizing the use of external controls for augmenting randomized control arms and confirming benefit. White Paper, 2019. https://www.focr.org/sites/default/files/Panel-1_External_Control_Arms20.... Accessed 19 Jan 2021.

8. Schmidli H, Häring DA, Thomas M, Cassidy A, Weber S, Bretz F: Beyond Randomized Clinical Trials: use of external controls. Clinical Pharmacology & Therapeutics, 2019.

9. Mack C, Christian JC, Brinkley E, Hall M, Dreyer NA: When context is hard to come by: external comparators and how to use them. Therapeutic Innovation & Regulatory Science, 1-7, 2019.

10. Thorlund K, Dron L, Park JJH, Mills EJ: Synthetic and external controls in clinical trials–a primer for researchers. Clin. Epidemiol. 12: 457–467, 2020.

11. Lodi S, Phillips A, Lundgren J et al: Effect estimates in randomized trials and observational studies: comparing apples with apples. Am. J. Epidemiol. 188: 1569–1577, 2019.

12. Khozin S, Blumenthal GM, Pazdur R: Real-world data for clinical evidence generation in oncology. JNCI J Natl Cancer Inst 109(11): 1-5, 2017.

13. Skovlund E, Leufkens HGM, Smyth JF: The use of real‐world data in cancer drug development. Eur. J. Cancer 101: 69-76, 2018.

14. Cave A, Kurz X, Arlett P: Real‐World Data for Regulatory Decision Making: Challenges and Possible Solutions for Europe. Clinical Pharmacology & Therapeutics 106(1): 36-38, 2019.

15. Eichler HG, Koenig F, Arlett P, Enzmann H, Humphreys A, Pétavy F, Schwarzer-Daum B, Sepodes B, Spiros V, Rasi G: Are novel, nonrandomized analytic methods fit for decision making? The need for prospective, controlled, and transparent validation. Clinical Pharmacology & Therapeutics 107(4): 773-779, 2020.

16. Baumfeld Andre E, Reynolds R, Caubel P, Azoulay L, Dreyer NA: Trial designs using real‐world data: The changing landscape of the regulatory approval process. Pharmacoepidemiology and Drug Safety 29 (10): 1201-1212, 2020.

17. Eichler HG, Bloechl-Daum B, Bauer P, Bretz F, Brown J, Hampson LV, Honig P, Krams M, Leufkens H, Lim R, Lumpkin MM, Murphy MJ, Pignatti F, Posch M, Schneeweiss S, Trusheim M, Koenig F: "Threshold-crossing": A useful way to establish the counterfactual in clinical trials? Clinical Pharmacology & Therapeutics 100(6): 699-712, 2016.

18. Suissa S: Single-arm trials with historical controls. Study designs to avoid time-related biases. Epidemiology, published ahead of print, September 28, 2020.