AI Tools to Revolutionize Policing?

Police officers often work with partial information under severe time constraints in situations that can change in seconds. Whether investigating a crime or patrolling a neighbourhood, they regularly have to make predictions based on instinct.

Authors

  • Federico Iannacci

    Senior Lecturer in Management, University of Sussex Business School, University of Sussex

  • Stan Karanasios

    Professor in Information Systems, The University of Queensland

This "gut policing" isn't just guesswork - it's fast pattern recognition. It comes from training and years of dealing with real incidents, learning from colleagues, and building an instinctive sense of what matters and what doesn't.

But instincts are no longer the only way police connect the dots. Many police forces are investing in AI-enabled tools, including predictive policing algorithms that forecast crime hotspots and offender assessment systems designed to support decision-making.

This reflects a wider global trend: police forces are integrating AI into everyday policing. These AI-enabled tools draw on large volumes of data and patterns that would be impossible for any single officer to analyse in real time. The aim is straightforward: to help ensure decisions are based on strong evidence and reliable data, rather than relying solely on instinct or experience.

Many people appear to accept the use of AI technology by police forces - so long as there are clear guidelines in place first.

AI has long been discussed as a threat to jobs and livelihoods. But what's the reality? In this series, we explore the impact AI is already having on specific occupations - and how people in these jobs feel about their new AI assistants.

In England, police forces are already using AI tools in day-to-day work. These include Untrite Thrive, which helps staff in police control rooms decide how to allocate resources. Another example is Qlik Sense, used by Avon and Somerset Police to monitor the likelihood of individuals reoffending or committing a crime. These developments align with a broader government agenda focused on efficiency and cost reduction.

But once you swap human judgment for more automated predictions, the value of officers' traditional connect-the-dots police logic can be lost. There have been plenty of examples where AI tools have flagged the wrong people, the wrong places, or the wrong risks.

Unverified information

A House of Commons select committee recently highlighted serious failings in West Midlands Police's use of the AI assistant Microsoft Copilot in its decision to stop Israeli fans of Maccabi Tel Aviv football club from travelling to Birmingham for a Europa League match against Aston Villa last November.

Claims made by this force about alleged disorder involving Maccabi fans at past matches were based on inaccurate information generated by Copilot, including a supposed game between the Israeli club and West Ham United that never happened.

"Information that showed the Maccabi fans to be a high risk was trusted without proper scrutiny," explained the committee's chair Karen Bradley. "Shockingly, this included unverified information generated by AI."

This inaccurate AI‑generated information was repeated by senior police officers in safety advisory group meetings and even in oral evidence to MPs, demonstrating a lack of due diligence and overreliance on unverified AI outputs. The case is now subject to an investigation by the Independent Office for Police Conduct.

And this was not an isolated incident. The Harm Assessment Risk Tool deployed by Durham Constabulary was found to have displayed many flaws, from overestimation of the likelihood of reoffending to discrimination in its datasets.

And the Metropolitan Police's now-discontinued Gang Matrix, a database that recorded intelligence related to alleged gang members, was heavily criticised by the Information Commissioner's Office for unfairly labelling young black men as high‑risk based on flawed scoring.

Relying on AI-driven tools can be a double-edged sword in policing. They can improve decisions, but can also reinforce bias and amplify mistakes. In our experience of working with police forces in England, AI‑supported decision‑making works best when police officers combine their operational experience with data‑driven insights.

Reinforcing biases

Our ongoing study of AI use in policing shows that uncritical reliance on AI risks reinforcing existing biases, disproportionately affecting the poorest and most marginalised communities.

Our research, which is yet to be published, suggests that effective use of AI requires a difficult balance: officers must both trust and mistrust AI recommendations at the same time, maintaining a vigilant mindset.

To prevent biases creeping into AI‑supported decisions, police forces should invest in bias‑awareness training that prepares officers to question AI outputs regularly and constructively.

The National Police Chiefs' Council's AI covenant mandates that AI should support rather than replace human judgment. This is a step in the right direction. Yet even this principle can backfire if police officers treat AI recommendations as objective truth, rather than guidance that requires careful scrutiny.

These concerns take on renewed urgency in light of the government's introduction of a national predictive policing prototype, announced in August 2025. The system, scheduled for nationwide deployment by 2030, combines AI‑powered crime mapping with behavioural‑pattern analysis, supported by a £4 million initial investment.

It draws on data from police forces, local councils and social services, and builds directly on the expanding fleet of live facial recognition vans now operating across seven forces in England and Wales.

At the same time, developments inside policing organisations highlight the limits of technological oversight. The Met was recently reported to have begun using AI tools to flag potential officer misconduct by analysing internal data such as sickness records, absences and overtime patterns.

While the Met argues that such systems help raise standards and rebuild public trust, critics warn that such monitoring risks misclassifying workplace pressures as misconduct and eroding accountability rather than strengthening it.

Ultimately, whether AI technology improves policing outcomes depends on the governance surrounding it. Ensuring there is a vigilant human in every AI loop should be a non-negotiable safeguard.

The Conversation

Federico Iannacci has received funding from the British Academy for a small research grant entitled "Investigating the future of work in policing: a Qualitative Comparative Analysis of police forces in England and Wales."

Stan Karanasios does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Courtesy of The Conversation. This material from the originating organization/author(s) may be point-in-time in nature, and edited for clarity, style and length. Mirage.News does not take institutional positions or sides, and all views, positions, and conclusions expressed herein are solely those of the author(s).