Police Facial Recognition Accurate, Public Awareness Low

The UK government's proposed reforms to policing in England and Wales signal an increase in the use of facial recognition technology. The number of live facial recognition vans is set to rise from ten to 50, making them available to every police force in both countries.

Authors

  • Kay Ritchie

    Associate Professor in Cognitive Psychology, University of Lincoln

  • Katie Gray

    Associate Professor, School of Psychology and Clinical Language Sciences, University of Reading

The plan pledges £26 million for a national facial recognition system, and £11.6 million for live facial recognition technology. The announcement came before the end of the government's 12-week public consultation on police use of such technology.

The home secretary, Shabana Mahmood, claims facial recognition technology has "already led to 1,700 arrests in the Met [police force] alone - I think it's got huge potential."

We have been researching public attitudes to the use of this technology around the world since 2020. While accuracy levels are constantly improving, we have found that public awareness has not always kept pace.

In the UK, the technology has so far been used by police in three main ways. All UK forces have the capability to use "retrospective" facial recognition for analysis of images captured from CCTV - for example, to identify suspects. Thirteen of the 43 forces also use live facial recognition in public spaces to locate wanted or missing individuals.

In addition, two forces (South Wales and Gwent) use "operator-initiated facial recognition" through a mobile app, enabling officers to take a photo when they stop someone and then compare their identity against a watchlist containing information about people of interest - either because they have committed a crime or are missing.

In countries such as China, facial recognition technology has been used more widely by the police - for example, by integrating it into real-time mass surveillance systems. In the UK, some private companies, including high-street shops, use facial recognition technology to identify repeat shoplifters.

Despite this widespread use of the technology, our latest survey of public attitudes in England and Wales (yet to be peer reviewed) finds that only around 10% of people feel confident that they know a lot about how and when this technology is used. This is still a jump from our 2020 study, though, when many of our UK focus group participants said they thought the technology was just sci-fi - "something that only exists in the movies".

A longstanding concern has been the issue of facial recognition being less accurate when used to identify non-white faces. However, our research and other tests suggest this is not the case with the systems now being used in the UK, US and some other countries.

How accurate is today's technology?

It's a common misconception that facial recognition technology captures and stores an image of your face. In fact, it creates a digital representation of the face in numbers. This representation is then compared with digital representations of known faces to determine the degree of similarity between them.
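As a rough sketch of that idea, comparing two faces comes down to measuring how similar their numerical representations are. The vectors and threshold below are invented for illustration, not taken from any deployed police system:

```python
import numpy as np

# Invented 4-number "faces" for illustration; real systems use vectors
# with hundreds of dimensions, produced by a neural network.
probe_face = np.array([0.12, -0.45, 0.88, 0.31])  # face seen on camera
known_face = np.array([0.10, -0.40, 0.90, 0.28])  # face on a watchlist

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Degree of similarity between two representations (1.0 = identical)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

similarity = cosine_similarity(probe_face, known_face)

# The match threshold is a tunable design choice, set arbitrarily here.
MATCH_THRESHOLD = 0.95
print(f"similarity = {similarity:.3f}, match = {similarity >= MATCH_THRESHOLD}")
```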

In recent years, we have seen a rapid improvement in the performance of facial recognition algorithms through the use of "deep convolutional neural networks" - artificial networks consisting of multiple layers, designed to mimic a human brain.
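For readers curious what "multiple layers" means in practice, here is a toy example in PyTorch. It is a minimal sketch, not the architecture of any system police use: each convolutional layer picks up progressively more complex visual patterns, and the final layer outputs the numerical representation described above.

```python
import torch
import torch.nn as nn

class TinyFaceNet(nn.Module):
    """Toy convolutional network mapping a face image to 128 numbers."""

    def __init__(self, embedding_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # layer 1: simple edges
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # layer 2: facial features
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.embed = nn.Linear(32, embedding_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.embed(self.features(x).flatten(1))

net = TinyFaceNet()
fake_face = torch.randn(1, 3, 112, 112)  # one random 112x112 RGB image
print(net(fake_face).shape)              # torch.Size([1, 128])
```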

There are two types of mistake a facial recognition algorithm can make: "false negatives", where it fails to recognise a wanted person, and "false positives", where it wrongly identifies someone as a person on a watchlist.
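To make those two error rates concrete, here is a small worked example. The counts are invented for illustration (chosen to echo the sub-1% and 0.3% figures reported below), not drawn from any real deployment:

```python
# Invented counts, for illustration only.
wanted_people_seen = 1_000      # genuinely wanted people who passed a camera
wanted_people_missed = 8        # the system failed to flag them

innocent_people_seen = 100_000  # passers-by not on any watchlist
innocent_people_flagged = 300   # the system wrongly flagged them

false_negative_rate = wanted_people_missed / wanted_people_seen
false_positive_rate = innocent_people_flagged / innocent_people_seen

print(f"false negative rate: {false_negative_rate:.1%}")  # 0.8%
print(f"false positive rate: {false_positive_rate:.1%}")  # 0.3%
```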

The US National Institute of Standards and Technology (Nist) runs the world's gold-standard evaluation of facial recognition algorithms. The 16 algorithms currently topping its leaderboard all show overall false negative rates of less than 1%, with the false positive rate held at 0.3%.

Data from the UK's National Physical Laboratory shows the system being tested and used by UK police to search their databases returns the correct identity in 99% of cases. This accuracy level is achieved by balancing high true identification rates against low false positive rates.
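That balancing act comes down to where the match threshold is set. The sketch below uses invented similarity-score distributions, not real police data, to show the trade-off: raising the threshold cuts false positives but misses more wanted people.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented scores: genuine pairs (same person) tend to score high,
# impostor pairs (different people) tend to score low.
genuine_scores = rng.normal(loc=0.80, scale=0.08, size=10_000)
impostor_scores = rng.normal(loc=0.40, scale=0.10, size=10_000)

for threshold in (0.5, 0.6, 0.7):
    fnr = np.mean(genuine_scores < threshold)    # wanted people missed
    fpr = np.mean(impostor_scores >= threshold)  # innocent people flagged
    print(f"threshold {threshold:.1f}: "
          f"false negatives {fnr:.2%}, false positives {fpr:.2%}")
```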

While some people are uncomfortable with even small error rates, human observers have been found to make far more mistakes when doing the same kinds of tasks. Two of the standard tests of face matching ask people to compare two images side by side and decide whether they show the same person. One test recorded an error rate of up to 32.5%, and the other an error rate of 34%.

Historically, when testing the accuracy of facial recognition technology, higher error rates have been found with non-white faces. In a 2018 study, for example, error rates for darker-skinned women were 40 times higher than for white men.

These earlier systems were trained on small numbers of images, mostly white male faces. Recent systems have been trained on much larger, deliberately balanced image sets. They are actively tested for demographic biases and are tuned to minimise errors.

Nist has published tests showing that although the leading algorithms still have slightly higher false positive rates for non-white faces compared with white faces, these error rates are below 0.5%.

How the public feel about this technology

According to our January 2026 survey of 1,001 people across England and Wales, almost 80% of people now feel "comfortable" with police using facial recognition technology to search for people on police watchlists.

However, only around 55% said they trust the police to use facial recognition responsibly. This compares with 79% and 63% respectively when we put the same questions to 1,107 people across the UK in 2020.

Both times, we asked to what extent people agree with police using facial recognition technology for different purposes. Our results show the public remains particularly supportive of police use of facial recognition in criminal investigations (90% in 2020 and 89% in 2026), to search for missing persons (up from 86% to 89%), and to find people who have committed a crime (down slightly from 90% to 89%).

There are lots of examples of facial recognition's role in helping police to locate wanted and vulnerable people. But as facial recognition technology is more widely adopted, our research suggests the police and Home Office need to do more to make sure the public are informed about how it is - and isn't - being used.

We also suggest the proposed new legal framework should apply to all users of facial recognition, not just the police. If not, public trust in the police's use of this technology could be undermined by other users' less responsible actions.

It is critical that the police are using up-to-date systems to guard against demographic biases. A more streamlined national police service, as laid out in the government's latest white paper, could help ensure the same systems are being used everywhere - and that officers are being trained consistently in how to use these systems correctly and fairly.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointments.
