We met with fellow members of the network in Vancouver to advance artificial intelligence (AI) safety science.
The network is catalysing a new phase of international cooperation on AI safety. Its work affirms and builds on the Seoul Statement of Intent toward International Cooperation on AI Safety Science, released at the AI Seoul Summit on 21 May 2024.
Australia joined representatives from:
- Canada
- the European Commission
- France
- Japan
- the Republic of Korea
- Singapore
- the United Kingdom
- the United States.
Network directors gathered on the sidelines of the 42nd International Conference on Machine Learning (ICML 2025). This leading international academic conference is dedicated to advancing the branch of AI known as machine learning.
The network also published the following papers to inform discussions at ICML.
Australia and Canada co-led the publication of a research agenda, developed by network members, on managing risks from AI-generated content.
Singapore and the United Kingdom co-published findings from the network's most recent joint testing exercise. This third exercise aimed to advance the science of AI agent evaluations and to work towards common best practices for testing AI agents. The exercise was split into 2 strands:
- leakage of sensitive information and fraud (led by Singapore)
- cybersecurity (led by the UK).
Australia contributed to both testing strands, with technical contributions from the following organisations:
- CSIRO's Data61
- Gradient Institute
- Mileva Security Labs
- UNSW's AI Institute.
Our department will continue to support the network's mission to advance AI safety by helping governments and society understand the risks posed by advanced AI systems. We'll also continue to propose solutions that address those risks and minimise harm.