UK Urges UN for Joint Efforts in Safe, Responsible AI Development

I join others in thanking the UAE and Albania for convening this meeting, and the briefers for their expert insights. Indeed, the impact of artificial intelligence on mis- and disinformation was a common concern when the Security Council held its first meeting on the opportunities and risks of AI to international peace and security, during our Presidency in July. So, we welcome further discussion on this topic today, along with the ongoing work of the Department of Global Communications to establish a Code of Conduct for Information Integrity on Digital Platforms to guide and support national responses to mis- and disinformation.

Disinformation is a familiar topic to the Security Council, from its impact, as the Under-Secretary-General said, on UN peacekeeping operations, to its role in exacerbating conflict. However, as others have noted, advances in AI technologies make it easier, quicker and cheaper for malign actors to spread false information in hundreds of languages. And this creates potentially harmful consequences for public trust in information and institutions and poses grave risks to stability.

With the rapid pace of technological development and billions expected to vote next year in elections around the world, understanding the risks that advances in AI-generated disinformation pose to inclusive and peaceful societies is critical.

We should ensure the right behaviours and response levers exist across government, industry, and the general public to address these risks. The UK continues to promote the design, development, and use of technology in a way that adheres to the following four principles:

Open - supporting personal freedoms and democracy.

Responsible - consistent with the rule of law and human rights, supporting sustainable growth, and ensuring data is used responsibly in a way that is lawful, protected, ethical and accountable.

Secure - with security, safety and predictability built in by design.

Resilient - reliable and trusted by the public.

AI risks are, of course, not limited by national boundaries; they are global and affect us all. Managing these risks requires concerted international action involving all actors: states, international institutions, the private sector, academia, and civil society.

At the first global AI Safety Summit, held in the UK in November, States recognised an urgent need to address concerns around AI's ability to manipulate or generate deceptive content.

We must continue to work together in an inclusive manner to ensure that AI is developed in a way that is human-centric, trustworthy, responsible and safe, and that supports the good of all.

Thank you.
