Around half of the trials that supported new cancer drug approvals in Europe between 2014 and 2016 were judged to be at high risk of bias, meaning treatment effects might have been exaggerated, according to a study published this week in The BMJ.
In a linked editorial, University of South Australia researcher Dr Agnes Vitry and Assoc Prof Barbara Mintzes from the University of Sydney argue that the flawed evidence can cause “direct harm if patients risk severe or fatal adverse effects without likely benefit, or forgo more effective and safer treatments.”
The researchers say that inaccurate evidence also encourages false hope and distracts from needed palliative care. They highlight the need to improve the design, conduct, analysis, and reporting of cancer drug trials.
In the European Union, the European Medicines Agency (EMA) is responsible for evaluating the clinical effectiveness and safety of new medicines.
In 2017, more than a quarter (24 of 92) of EMA approvals were for cancer drugs, most of which were based on evidence from randomised controlled trials, considered to be the “gold standard” for evaluating treatment effectiveness.
However, flaws in the design, conduct, analysis, or reporting of randomised controlled trials can distort estimates of treatment effect, potentially jeopardising the validity of their findings.
To evaluate these flaws in more detail, a team of international researchers examined the design, risk of bias, and reporting of randomised controlled trials that supported European approvals of cancer drugs from 2014 to 2016.
During this period, the EMA approved 32 new cancer drugs on the basis of 54 studies, including 39 randomised controlled trials with available publications. Only 10 trials (26%) measured overall survival as a main (primary) endpoint. The remaining 29 trials (74%) evaluated indirect (surrogate) measures of clinical benefit, which do not always reliably predict whether a patient will live longer or have a better quality of life.
Overall, 19 trials (49%) were judged to be at high risk of bias because of deficits in their design, conduct, or analysis. Trials that evaluated overall survival were at lower risk of bias than those that evaluated surrogate measures of clinical benefit.
The researchers point to several limitations of their study: it is possible that the methodological deficits identified did not lead to biased findings, and some bias may be unavoidable given the complexity of cancer trials.
Dr Vitry says the findings add weight to existing research that raises serious concerns about low standards of evidence supporting new cancer drugs.