Leading AI Firms Boost Transparency, OECD Reports

Leading AI developers are taking significant steps to make their systems more robust and secure, according to a new OECD report.

The report, How are AI developers managing risks? Insights from responses to the reporting framework of the Hiroshima AI Process Code of Conduct, analyses voluntary transparency reporting under the G7 Hiroshima AI Process from technology and telecommunications companies, as well as advisory, research, and educational institutions, including Anthropic, Google, Microsoft, NTT, OpenAI, Salesforce and Fujitsu.

The analysis shows that many organisations are developing increasingly sophisticated methods to evaluate and mitigate risks, including adversarial testing and AI-assisted tools to better understand model behaviour and improve reliability. Larger technology firms tend to have more advanced practices, particularly in assessing systemic and society-wide risks.

The report also finds that key AI actors increasingly recognise the importance of sharing information about risk management to build trust, enable peer-learning and create more predictable environments for innovation and investment. However, adoption of technical provenance tools such as watermarking, cryptographic signatures, and content credentials remains limited beyond some large firms.
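To illustrate one of the provenance tools named above, the sketch below shows a minimal cryptographic content signature in Python using the third-party cryptography package. It is an illustrative assumption only, not the approach of any firm covered in the report: a publisher signs the content bytes with a private key, and anyone holding the matching public key can check that the content has not been altered since signing.

```python
# Minimal sketch of signature-based content provenance (illustrative only).
# Assumes the third-party "cryptography" package: pip install cryptography
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The publisher generates a key pair; the public key is distributed openly.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

content = b"Example AI-generated media bytes"

# Signing binds these exact content bytes to the publisher's key.
signature = private_key.sign(content)

# A downstream verifier re-checks the signature against the received bytes.
try:
    public_key.verify(signature, content)
    print("Provenance check passed: content matches the publisher's signature.")
except InvalidSignature:
    print("Provenance check failed: content was altered or signed by another key.")
```

In this scheme any change to the content bytes, however small, invalidates the signature, which is what makes such tools useful for tracing the origin and integrity of AI-generated material.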

"Greater transparency is key to building trust in artificial intelligence and accelerating its adoption. By providing common reference points, voluntary reporting can help disseminate best practices, reduce regulatory fragmentation, and promote the uptake of AI across the economy, including by smaller firms" said Jerry Sheehan, Director for Science, Technology and Innovation at the OECD.

"As we define common transparency expectations, the Hiroshima AI Process Reporting Framework can play a valuable role by streamlining the reporting process. Going forward, it could also help align organisations on emerging reporting expectations as AI technology and governance practices continue to advance." Amanda Craig, Senior Director, Responsible AI Public Policy, Microsoft.

Developed under the Italian G7 Presidency in 2024 with input from business, academia and civil society, the OECD's voluntary reporting framework provides a foundation for co-ordinated approaches to safe, secure and trustworthy AI. It supports the implementation of the Hiroshima AI Process initiated under Japan's 2023 G7 Presidency.

The report How are AI developers managing risks? Insights from responses to the reporting framework of the Hiroshima AI Process Code of Conduct is available on the OECD website.
