Professor Alastair Denniston: Future Regulation Of AI In Healthcare


Creating a framework that is safe, fast and trusted.

MHRA foreword

Most generations of healthcare professionals witness a moment that changes the landscape of medicine forever. For some, it was the discovery of antibiotics. For others, the arrival of MRI scanners. Today, we stand at the threshold of another transformative era - one shaped by the power of Artificial Intelligence.

In the next article in our strategy blog series, Professor Alastair Denniston outlines some of the core principles that should be used to guide the regulation of AI in healthcare as it weaves itself into the fabric of modern medicine.

This blog invites a national conversation about the kind of healthcare system we want to build together. It reminds us that the future of AI in healthcare will not be defined by machines, but by the values we bring to their use.

Professor Alastair Denniston is a practising consultant ophthalmologist; Professor of Regulatory Science and Innovation, University of Birmingham; and Executive Director, Centre of Excellence for Regulatory Science in AI & Digital HealthTech (CERSI-AI). He is also the Chair of the MHRA's new National Commission on the Regulation of AI in Healthcare.

Guest blog: Professor Alastair Denniston

One of my greatest privileges as a doctor is to be trusted by patients and their families to be there for them. There at the time of need. There diagnosing and treating disease or injury. There helping them move towards recovery, health and well-being. I have been a doctor for nearly three decades now, and I am so glad about the progress we have made. The diagnostic tests I use now are way more accurate, the treatments more effective (with fewer side effects) and our whole approach to healthcare is more holistic, with the patient much more in the driving seat.

But there is so much more to do. We want diagnostic tests that are faster and more accurate. We want treatments that are safer and more personalised to the patient. We want to put health decisions even more firmly in the hands of the patient, and to make healthcare work around them. And a massive challenge is to make all of this affordable, by helping our precious NHS resources go further.

What's this got to do with AI? Well, every so often in medicine, a new tool comes along that enables us to make a massive step forward in healthcare. Discoveries like X-rays and MRI scans enabled faster and more accurate diagnosis. Discoveries like antibiotics and monoclonal antibodies provided more effective and safer treatments. All of these were massively disruptive to how healthcare was delivered before, but all of these positively transformed healthcare, touching the lives of millions of people around the world.

It is still early days, but it looks like the arrival of AI is the 'X-ray moment' of our time, the new discovery that enables a dramatic step forward in the quality of healthcare we can deliver. When X-rays were discovered 130 years ago, they were unlike anything that had been used in healthcare before. People could see their potential for diagnosis and treatment, but it took time to work out how to use them safely, ensuring we could maximise their benefits, whilst protecting people from the harm of radiation.

We are at a similar stage with AI. We have had enough time to start to see how it can help in healthcare, but not quite long enough to get all our regulatory and safety systems optimised for the AI systems of today and tomorrow. And this is one of the challenges of AI: that it is not just one thing, but rather a whole pipeline of new technologies, with major new advances coming forward at speed. The arrival of X-rays, CT scanners and MRI scanners was spread over decades. The arrival of different levels of AI technology is happening over months and years.

So how do we unlock the benefits of AI, whilst also ensuring patient safety? Earlier this year, the Government published the NHS 10 Year Plan, which included a vision of how AI will improve the quality and efficiency of healthcare. But in order to ensure that such technologies are only brought to patients when they are ready, the Government also recognised the need for a new regulatory framework that addresses the specific requirements of AI in health and established a new 'National Commission' to set out what this new framework should look like. The Commission brings together leading experts from across the UK and beyond, committed to creating a framework that is ready for the needs of today, and the opportunities of tomorrow.

What will this new framework look like? It is early days, so what I am going to present here is more like compass points than a map, more like the frame than the details of the picture itself.

Principle 1: Safe

AI in healthcare needs to be safe, and regulation is a really important tool for achieving this. The future regulatory framework must have the safety of patients and public at its heart. But what does this mean in practice? In normal life we do not try to live in 'absolute safety'. Every day we take risks, even just crossing the road or driving to work. We weigh up the potential benefits and harms of different options, and make a choice.

We might like to have 'absolute safety' in healthcare, but that's not really an option. In life, we make decisions that are 'proportionate to the risk', and we should expect regulation to align to this.

To stand still is not always safer than to move forward. A failure to innovate can be just as much of a risk to patient safety as a failure to regulate.

Principle 2: Fast

The process of bringing beneficial technologies to patients and public should be as fast as possible. Moving from an idea to a product in the NHS can be a slow and challenging process. The new framework should ensure that all parts of that process - including regulation - are as fast and 'frictionless' as possible. Delay is not just an inconvenience: it is a potential harm (or at least loss of benefit) to patients, public and the health service, and can threaten the very existence of the companies that are building these technologies. These are often small companies who cannot survive long delays, and their loss is felt by individuals, the NHS and the UK as a whole.

Principle 3: Trusted

Whether you're a patient or a healthcare professional, you need to know whether you can trust an AI technology to be safe and of high quality. Trust is earned, and any new regulatory framework needs to show how it will robustly ensure safety. The process needs to be transparent, including the reporting of any safety issues that arise after a technology starts being used in routine care. The regulatory system also needs to be trusted to keep pace with advances in AI, so that it can ensure safety for tomorrow's AI technologies as well as today's.

Moving forward

A new technology brings new opportunities and it brings new risks. We have to learn to control the risks, whilst unlocking the opportunities. It was true for X-rays. It was true for antibiotics. It was true for every test or treatment we have in the NHS. And it is true for AI.

So what do you think? Soon, the National Commission will be issuing a 'call for evidence', an opportunity for you to tell us what you think about the regulation of AI in healthcare. You don't need to be an AI expert. We just want to know what you think, and what matters to you. The National Commission is an opportunity for all of us to shape the future of AI in the NHS. Let's do this together. Thank you.
