New Report Warns of Risks in Multi-Agent AI Systems

Department of Industry, Science and Resources

The new report identifies critical emerging risks as organisations begin deploying multiple large language models (LLMs) together in a single system. Such a network of AI agents is known as a multi-agent system.

The report contributes to the growing body of research that helps measure and manage risks associated with frontier AI technologies. It aims to help Australian organisations develop better ways to assess and monitor risks from general-purpose AI systems.

The report, Risk analysis tools for governed LLM-based multi-agent systems, outlines failure modes that arise when multiple AI agents interact, including:

  • inconsistent performance of a single agent derailing complex processes
  • cascading communication breakdowns
  • shared blind spots and repeated mistakes
  • groupthink dynamics
  • coordination failures.

Traditional single-agent testing doesn't capture these risks, which could have serious consequences for critical infrastructure and essential services.

Gradient Institute's Chief Scientist Dr Tiberio Caetano explains: 'A collection of safe agents does not make a safe collection of agents. As multi-agent systems become more prevalent, the need for risk analysis methods that account for agent interactions will only grow.'
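Dr Caetano's point can be illustrated with a back-of-the-envelope calculation (a minimal sketch with hypothetical reliability figures, not numbers from the report): even if each agent in a sequential pipeline is individually reliable, the chance that the whole chain completes without error shrinks as agents are added.

```python
# Hypothetical illustration: individually reliable agents can still form
# an unreliable multi-agent pipeline, assuming independent failures and
# that any single agent's error derails the whole process.

def pipeline_success_rate(per_agent_rate: float, n_agents: int) -> float:
    """Probability that a sequential pipeline of n_agents completes
    without error, given each agent's individual success rate."""
    return per_agent_rate ** n_agents

single = pipeline_success_rate(0.99, 1)   # one agent: 99% reliable
chain = pipeline_success_rate(0.99, 10)   # ten such agents in sequence

print(f"single agent:    {single:.2%}")   # 99.00%
print(f"ten-agent chain: {chain:.2%}")    # ~90.44%
```

Real multi-agent failures are rarely independent (shared blind spots and groupthink correlate errors), so this simple model understates some risks and overstates others; the point is only that per-agent safety does not compose into system safety.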

The report offers practical tools for early-stage risk identification and analysis, including conceptual frameworks for evaluating measurement validity. It emphasises progressive testing to safely explore multi-agent deployments, from controlled simulations to monitored pilot programs.

This work is part of the government's broader effort to strengthen scientific understanding and improve the safety of AI technologies for everyone. It aligns with Australia's role in the International Network of AI Safety Institutes, which includes co-leading a global research agenda on managing risks from AI-generated content and joint testing of frontier AI systems.

/Public Release. This material from the originating organisation/author(s) may be of a point-in-time nature and edited for clarity, style and length. Mirage.News does not take institutional positions or sides, and all views, positions, and conclusions expressed herein are solely those of the author(s).