AI Consortium Boosts National Security Analysis

The University of Warwick is part of a new £1m consortium developing AI methods for national security and defence.

The AiTASHA project, funded by the Engineering and Physical Sciences Research Council (EPSRC), aims to improve the speed and confidence of intelligence analysts' assessments by building new AI tools that can work alongside human analysts.

The £1m research grant has been awarded to The Alan Turing Institute, the University of Warwick and several other universities, which will work closely with UK government defence and national security partners.

Professor Jim Smith, of the Department of Statistics at the University of Warwick, said: "Over several decades now the UK has led the development of predictive models that are structured around human expert judgements so that multimodal data can be fully utilised and interpreted. This project aims to translate these methodologies into the national security arena. It will lead the development of new defensible and ethical AI tools that are able to interpret streaming electronic data through the lens of human analytic expertise to further protect our country."

Intelligence analysts are routinely required to make high-consequence, defensible assessments from vast, complex and uncertain datasets, to identify indicators and warnings of hostile or malicious activities.

Analysts must make these assessments rapidly in a highly pressured and resource-constrained environment, where they face difficult choices about which data to analyse first and whether to gather additional intelligence, potentially at the cost of delay or increased risk. Addressing this challenge is becoming increasingly urgent as both the scale and complexity of intelligence datasets and the threat posed to UK safety continue to grow.

Dr Richard Walters, Lead Research Data Scientist at the Turing's Defence AI Research Centre (DARe), said: "Developing specialised artificial intelligence tools to support national security analysts is a vital area of research, aimed at keeping our country safe. Analysts are often required to make critical assessments under intense pressure. This work explores how machines can help intelligence analysts respond to increasingly complex threats and datasets in a more efficient way, whilst maintaining their existing high legal and ethical standards."

The researchers will build explainable, defensible AI to complement, rather than replace, the work of intelligence analysts, by recommending which existing data should be prioritised for human review and which potential new data should be prioritised for acquisition.
