UK Secretary of State Donelan Inaugurates AI Safety Summit

Secretary of State for Science, Innovation and Technology, Michelle Donelan, opens the AI Safety Summit

Good morning, everybody.    

It is my privilege to welcome you all to the first ever global summit on Frontier AI safety. 

During a time of global conflict eight decades ago, these grounds here in Bletchley Park were the backdrop to a gathering of the United Kingdom's best scientific minds, who mobilised technological advances in service of their country and their values.

Today we have invited you here to address a sociotechnical challenge that transcends national boundaries, and which compels us to work together in service of shared security and also shared prosperity.   

Our task is as simple as it is profound: to develop artificial intelligence as a force for good. 

The release of ChatGPT, not even a year ago, was a Sputnik moment in humanity's history.   

We were surprised by this progress - and we now see accelerating investment into and adoption of AI systems at the frontier, making them increasingly powerful and consequential to our lives.   

These systems could free people everywhere from tedious work and amplify our creative abilities.  

They could help our scientists unlock bold new discoveries, opening the door to a world potentially without diseases like cancer and with access to near-limitless clean energy.   

But they could also further concentrate unaccountable power into the hands of a few, or be maliciously used to undermine societal trust, erode public safety, or threaten international security.   

However, there is a significant debate, and I am sure it will be a robust one among the attendees over the next two days.

About whether these risks will materialise.

How they will materialise.

And, potentially, when they will materialise.

Regardless, I believe we in this room have a responsibility to ensure that they never do.   

Together, we have the resources and the mandate to uphold humanity's safety and security, by creating the right guardrails and governance for the safe development and deployment of frontier AI systems.   

But this cannot be left to chance, to neglect, or to private actors alone.

And if we get this right, the coming years could be what the computing pioneer J.C.R. Licklider foresaw as "intellectually the most creative and exciting in the history of humankind."

This is what we are here to discuss honestly and candidly together at this Summit.   

Sputnik set off a global era of advances in science and engineering that spawned new technologies, institutions, and visions, and led humanity to the moon. 

We, the architects of this AI era - policymakers, civil society, scientists, and innovators - must be proactive, not reactive, in steering this technology towards the collective good.  

We must always remember that AI is not some natural phenomenon that is happening to us, but a product of human creation that we have the power to shape and direct.

And today we will help define the trajectory of this technology, to ensure public safety and to ensure that humanity flourishes in the years to come.

We will work through four themes of risks in our morning sessions, which will include demonstrations by researchers from the UK's Frontier AI Taskforce.   

Risks to global safety and security…   

… Risks from unpredictable advances,   

… from loss of control,   

… and from the integration of this technology within our societies.

Now, some of these risks do already manifest as harms to people today and are exacerbated by advances at the frontier.  

The existence of other risks is more contentious and polarising.

But in the words of mathematician I.J. Good, a codebreaker colleague of Turing himself here at Bletchley Park, "It is sometimes worthwhile to take science fiction seriously."   

Today is an opportunity to move the discussion further from the speculative and philosophical towards the scientific and the empirical.

Delegations and leaders from countries in attendance have already done so much work in advance of arriving… 

…across a diverse geopolitical and geographical spectrum to agree the world's first ever international statement on frontier AI - the Bletchley Declaration on AI Safety.  

Published this morning, the Declaration is a landmark achievement and lays the foundations for today's discussions.   

It commits us to deepening our understanding of the emerging risks of frontier AI.   

It affirms the need to address these risks - as the only way to safely unlock extraordinary opportunities.   

And it emphasises the critical importance of nation states, developers, and civil society working together on our shared mission to deliver AI safety.

But we must not remain comfortable with this Overton window. 

We each have a role to play in pushing the boundaries of what is actually possible. 

And that is what this afternoon will be all about: to discuss what actions different communities will need to take next, to bring out diverse views, and to open up fresh ideas and challenge them.

For developers to discuss emerging risk management processes for AI safety, such as responsible, risk-informed capability scaling.   

For national and international policymakers to discuss pathways to regulation that preserve innovation and protect global stability.   

For scientists and researchers to discuss the sociotechnical nature of [safety], and approaches to better evaluate the risks.

These discussions will set the tone of the Chair's summary, which will be published tomorrow. They will guide our collective actions in the coming year.

And this will lead up to the next summit, which, I am delighted to share with you today, will be hosted by the Republic of Korea in six months' time, and then by France in one year's time.

These outputs and this forward process must be held to a high standard, commensurate with the scale of the challenge at hand.   

We have successfully addressed societal-scale risks in the past. 

In fact, within just two years of the discovery of the hole in the Antarctic ozone layer, governments were able to work together to ratify the Montreal Protocol, and then change the behaviour of private actors to effectively tackle an existential problem.   

We all now look back upon that with admiration and respect. 

But for the challenges posed by frontier AI, how will future generations judge our actions here today?  

Will we have done enough to protect them?

Will we have done enough to develop our understanding to mitigate the risks?  

Will we have done enough to ensure their access to the huge upsides of this technology?   

This is no time to bury our heads in the sand. And I believe that we don't just have a responsibility, we also have a duty to act - and act now. 

So, your presence here today shows that these are challenges we are all ready to meet head on. 

The fruits of this summit must be clear-eyed understanding, routes to collaboration, and bold actions to realise AI's benefits whilst mitigating the risks.

So, I'll end my remarks by taking us back to the beginning. 

Seventy-three years ago, Alan Turing dared to ask if computers could one day think.

From his vantage point at the dawn of the field, he observed that "we can only see a short distance ahead, but we can see plenty there that needs to be done."  

Today we can indeed see a little further, and there is a great deal that needs to be done.

So, ladies and gentlemen, let's get to work. 
