New Funding For Human-AI Teamwork Project

A new project led by the University of Portsmouth has secured £60,000 in funding to explore how humans and AI can work together in high-pressure security environments.

The project will investigate how humans and AI can form effective teams in security settings, combining insights from cognitive science, behavioural science and human-AI interaction through an evidence review and realistic scenario-based simulations.

Project lead Professor Lorraine Hope, from the University's School of Psychology, Sport and Health Sciences, said: "Security organisations today face an increasingly complex challenge: making critical decisions quickly using large volumes of fragmented and often uncertain information. From identifying potential threats to assessing risks, decision-makers must act under pressure, where delays or errors can have serious consequences.

"This newly funded project aims to address that challenge by examining when and how AI can support human judgement in security settings."


AI technologies are already capable of processing vast amounts of data and identifying patterns at speeds far beyond human capacity. However, when it comes to sensitive areas like security, the question is not just what AI can do, but how it should be used responsibly. Trust, transparency, and human oversight remain essential.

The interdisciplinary project brings together experts in psychology, behavioural science and human-AI interaction with security specialists to understand how people and AI can form effective, trustworthy partnerships.

The team includes Oli Buckley, a Professor of Cyber Security at Loughborough University; Olivia Brown, a lecturer at the University of Bath; and Jim Sauer, an Associate Professor in Psychology at the University of Tasmania.

The project will be delivered in two key stages:

  • Reviewing emerging evidence: Researchers will analyse current knowledge on how humans and AI interact, identifying what works, where gaps remain, and how insights from behavioural science can improve real-world applications.

  • Running realistic simulations: The team will design immersive, scenario-based exercises that reflect real-world security challenges. Participants will take on roles working alongside AI systems that act as advisers or collaborators.

The simulations will focus on two critical areas:

  • Managing information overload: Helping professionals prioritise the most relevant information from large volumes of data.

  • Assessing threats under uncertainty: Supporting better decision-making when information is incomplete or unclear.

A key part of the research will explore communication between AI and human users. For example, the project will examine whether providing explanations, asking reflective questions, or building rapport affects how much people trust and rely on AI recommendations.

Professor Hope added: "By focusing on collaboration, the research aims to strengthen human judgement. Ultimately, it seeks to help government, defence and law enforcement organisations harness the benefits of AI while maintaining the accountability, transparency, and trust that are essential to public confidence."

The project is funded by The NetworkPlus for Analytical Behavioural Science (NABS+) in Security and Defence, a UKRI-funded network focused on developing the next generation of behavioural science and behavioural scientists for national security and defence.

