Landmark Bletchley Declaration Sets Safe AI Development Rules

  • For the first time, 28 countries convened by the UK, including the US, EU and China, agree on the opportunities, risks and need for international action on frontier AI - the systems posing the most urgent and dangerous risks
  • Crucial talks are underway at Bletchley Park, with the Technology Secretary opening the 2-day summit, driving forward key summit objectives: deepening understanding of the risks and establishing further global collaboration
  • Consensus on the need for sustained international co-operation sees the next summit hosts confirmed

Leading AI nations, convened for the first time by the UK and including the United States and China, along with the European Union, have today (Wednesday 1 November 2023) reached a world-first agreement at Bletchley Park establishing a shared understanding of the opportunities and risks posed by frontier AI and the need for governments to work together to meet the most significant challenges.

The Bletchley Declaration on AI safety sees 28 countries from across the globe, including from Africa, the Middle East and Asia, as well as the EU, agree on the urgent need to understand and collectively manage potential risks through a new joint global effort to ensure AI is developed and deployed in a safe, responsible way for the benefit of the global community.

Countries endorsing the Declaration include Brazil, France, India, Ireland, Japan, Kenya, the Kingdom of Saudi Arabia, Nigeria and the United Arab Emirates.

The Declaration fulfils key summit objectives: it establishes shared agreement on, and responsibility for, the risks and opportunities of frontier AI, and sets out a forward process for international collaboration on frontier AI safety and research, particularly through greater scientific collaboration. Talks today with leading frontier AI companies and experts from academia and civil society will see further discussion of understanding frontier AI risks and improving frontier AI safety.

Countries agreed that substantial risks may arise from potential intentional misuse of frontier AI or from unintended issues of control, with particular concern over cybersecurity, biotechnology and misinformation risks. The Declaration sets out agreement that there is "potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models." Countries also noted risks beyond frontier AI, including bias and privacy.

Recognising the need to deepen understanding of risks and capabilities that are not yet fully understood, attendees have also agreed to work together to support a network of scientific research on frontier AI safety. This builds on the UK Prime Minister's announcement last week that the UK will establish the world's first AI Safety Institute, and complements existing international efforts including at the G7, OECD, Council of Europe, United Nations and the Global Partnership on AI. It will ensure the best available scientific research can be used to create an evidence base for managing the risks while unlocking the benefits of the technology, including through the UK's AI Safety Institute, which will examine the range of risks posed by AI.

The Declaration details that the risks are "best addressed through international cooperation". As part of agreeing a forward process for international collaboration on frontier AI safety, the Republic of Korea has agreed to co-host a mini virtual summit on AI within the next 6 months. France will then host the next in-person summit a year from now.
