Prof. Hughes: Patient Safety at Regulatory Innovation Core


Professor Henrietta Hughes reflects on how the MHRA's strategy must prioritise patient safety by listening to lived experiences and fostering collaboration and innovation.

Foreword

As the MHRA develops its upcoming corporate strategy in an era of rapid medical advancement and technological change, patient safety must remain its unwavering compass.

In the next of our guest blogs, England's Patient Safety Commissioner Professor Henrietta Hughes reflects on the essential truth that the most powerful insights into safety come not only from data, trials, or algorithms, but from the lived experiences of patients themselves. When patients are listened to, when their perspectives are valued from the outset, harm can be prevented and genuine improvement achieved.

This piece calls for a culture of listening - to patients, to professionals, and to evidence - and for a new model of collaboration that places patients at the heart of every regulatory decision. True innovation is not only about what technology can achieve, but about how safely and equitably it serves those who rely on it.

MHRA Strategy Development: Putting Patient Safety at the Heart of Regulatory Innovation

As Patient Safety Commissioner, I have seen how harm can persist when patients aren't listened to and how transformative it can be when regulatory bodies truly listen to those they serve. The Medicines and Healthcare products Regulatory Agency (MHRA) stands at a pivotal moment in healthcare regulation, balancing innovation and access to novel treatments with an unwavering commitment to patient safety. This balance becomes even more critical as we navigate emerging technologies like artificial intelligence and enhanced post-market surveillance.

Patients and the public bring perspectives that no clinical trial or regulatory framework can fully capture. They experience how medications and medical devices perform in diverse populations, across different health conditions, and over many years in real-world settings. Post-market surveillance must evolve to detect subtle safety signals earlier, particularly for vulnerable populations who may experience different risk profiles.

The views of patients and the public must be trusted, respected and acted upon. Their lived experiences help identify safety signals that might otherwise remain hidden in statistical noise or be missed by traditional safety monitoring. But we can't wait until harm has occurred; we must open our mindset to the value of incorporating the patient's perspective as the best way to keep people safe. In my work with patients with sensory impairments on The Safety Gap report, it was clear that involving patients and their representatives from the start can prevent harm and drive improvement.

The Patient Safety Principles I published last year provide a framework for including all patients' perspectives in all healthcare decisions: developing a culture of safety that puts patients at the heart of everything; promoting equity and identifying health inequalities; identifying and mitigating risks; being transparent and accountable; and using data and information to improve patient care and outcomes. These principles set out a way of involving patients not as a tick-box exercise, but as genuine partners in regulatory decision making.

Artificial intelligence as a medical device presents unprecedented regulatory challenges. Systems can learn and evolve, potentially changing their behaviour post-approval in ways traditional devices cannot. From the patient safety perspective, this requires greater vigilance and new approaches to risk proportionate regulation. These questions, amongst many others, will become the focus of the National Commission on the Regulation of AI in Healthcare of which I am the co-chair. The Commission, bringing together patients, researchers, clinicians and tech leaders, will help us to understand the benefits and risks of AI for patients and those who care for them.

Misinformation is rife, and a quote from Lawrence Tallon's blog published last month was a powerful reminder of the vital role of a trusted regulator:

"We must also be ready to speak with authority and clarity in an age of insidious misinformation."

I was pleased to see the MHRA, as a trusted regulator, provide swift, accurate and credible information about the safety of paracetamol-based products.

With the opportunities for artificial intelligence to improve patient safety, device design and medicine design, we need safeguards around which content will drive the algorithms and how bias and misinformation will be removed. As patients and the public turn to AI chatbots for help with health conditions, credible-sounding advice needs to be based on credible evidence. There are already examples of gender bias and unsafe advice provided by AI systems. Patients and healthcare professionals are crucial partners in monitoring, as they may be the first to identify when an AI system's outputs seem inappropriate or potentially harmful.

The Patient Safety Principle of transparency and accountability becomes particularly relevant here. We need transparency about AI system limitations, uncertainty bounds and decision-making processes, and clarity about who takes responsibility for decisions that lead to patient harm. Should it be the clinician, the healthcare organisation or the software developer? Patients have a right to understand when AI influences their care and how systems are monitored for safety and bias.

Collaborative working with patients, healthcare professionals, researchers, and industry partners will create regulatory frameworks that are both protective and enabling. By embedding patient voices throughout the regulatory lifecycle, from initial strategy formulation through post-market monitoring, we can ensure that patient safety remains at the centre of all decisions.

The future of medical regulation lies not in choosing between innovation and safety, but in recognising that true innovation serves patients best when it is built on a foundation of trust, robust safety measures and genuine partnership.
