More than a decade of underfunding by successive governments has left the UK's justice system in crisis. There is now a significant backlog of cases, and court dates are being cancelled due to logistical problems.
Authors
- Morgan Currie
Lecturer in Data & Society, University of Edinburgh
- Alexandra Ba-Tin
PhD Student, School of Social and Political Science, University of Edinburgh
- Ben Collier
Lecturer in Digital Methods, University of Edinburgh
Powerful voices in UK politics, including the Tony Blair Institute and Policy Exchange think tanks, have put their weight behind artificial intelligence (AI) as a potential solution to problems being experienced across the public sector. Some of those voices believe that AI could liberate staff from bureaucratic workloads and give them more time to concentrate on the human aspects of justice, such as face-to-face engagement with clients.
In January, the Labour government announced a plan to "unleash" AI across the UK in a bid to "turbocharge" growth, boost living standards and revolutionise public services.
So how might AI affect the UK's justice system?
The current focus on AI has been largely driven by developments in large language models (LLMs). This is the technology behind AI chatbots such as ChatGPT. But automation, machine learning, and other AI tools are not novel features of the justice system.
Older tools such as Technology Assisted Review used a form of AI to help lawyers predict the probable relevance of documents to a particular case or matter. More controversially, risk-scoring algorithms have been used in probation and immigration cases.
Critics of these risk-scoring systems have warned that they entrench inequalities and affect people in life-altering ways without their knowledge.
However, these automated risk-scoring systems are substantially different in nature from the productivity tools based on LLMs that are aimed at streamlining administrative processes. The latter can draft statements, as well as schedule and transcribe meetings.
They can also retrieve and summarise sources for document reviews and case law. Apparent success stories include the Old Bailey saving £50,000 by using AI to process evidence overviews for court cases.
How and why these tools are implemented - the institutional context - matters enormously. When digital tools are used not to provide more space for the human aspects of justice, but instead to cut costs, the harms fall especially heavily on vulnerable clients.
This is because even these seemingly routine administrative uses of AI require human reviewers to catch plausible but wrong information produced by these tools, and to exercise expert judgment.
Evidence from a small-scale Home Office pilot scheme shows why this is important. The pilot used LLMs to summarise asylum case documents and transcripts to support asylum decisions.
Some 9% of the results were found to be inaccurate, with missing interview references. And 23% of users testing the scheme did not feel fully confident in the summaries, despite significant time savings.
Justice and digitisation
In July 2025, the Ministry of Justice published its AI Action Plan for Justice. While Microsoft's Copilot Chat is already available to judicial office holders, the strategy document promised to roll out AI tools to 95,000 justice staff by December.
The plan acknowledges the many limitations of AI. It also establishes a chief AI officer, creates AI guidelines and emphasises that AI should "support, not substitute" human judgment.
It sets out a cautious approach to the roll-out, including an effort to gather feedback from trade unions and the public, and stresses transparency through a new website and ethics framework.
The plan continues to promote more controversial uses of the technology, including assessing a person's risk of violence in custody. But its heavier focus is on using LLMs for time-saving administrative tasks.
However, could the new strategy lead to the adoption of LLM tools by the justice system before there is a mature understanding of how they are best applied? Decisions based in part on AI-generated evidence are likely to offer new grounds for complaints and challenges. This could add to, rather than reduce, the backlog of cases.
In June 2025, a senior UK judge warned lawyers against the use of LLM tools because of the potential for those tools to "hallucinate" - generate fictitious information. There have been a number of cases elsewhere in the world where fictitious AI-generated material has apparently been filed in court cases.
Given their limitations, any benefits of these tools will generally be seen in those parts of the system where resources and time for human oversight are at their highest. The risks will hit hardest where human time and resources are low and where clients have less money and time to challenge decisions.
This unequal access to justice is not solely an AI issue. Previous waves of digitisation aimed at reducing the bureaucratic load have included allowing some guilty pleas to be lodged online, and introducing automatic online convictions for some offences that would otherwise have required a court hearing.
As Gemma Birkett, lecturer in criminal justice at City St George's, University of London, argues, these automated systems particularly affect marginalised women, who are far more likely to plead guilty to crimes they did not commit.
Papering over the cracks
There are powerful arguments to be made in favour of using bespoke, carefully developed technology to remove the administrative burden on justice system staff, so that they can concentrate on the aspects of their work best delivered by people.
But when the current system is struggling, adopting LLMs (or other forms of rapid digitisation) will not fix the deep underlying problems caused by years of austerity. Rather than reducing bureaucracy, they risk papering over the cracks in a dysfunctional system.
Ben Collier receives funding from the Scottish Institute for Policing Research and is the Chair of the Foundation for Information Policy Research.
Alexandra Ba-Tin and Morgan Currie do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointments.