UK's Frontier AI Taskforce Bolstered by Industry, Security Leaders

  • Frontier AI Taskforce to research AI safety, identify new uses for AI in the public sector and strengthen UK capabilities
  • Turing Award laureate and Director of GCHQ to join new expert panel to advise the Taskforce
  • global partnerships launched to assess AI risks, from cybersecurity threats to catastrophic harms

Leading names from national security to computer science will advise the UK government on the risks and opportunities from AI as the Frontier AI Taskforce gathers momentum and appoints a team of experts to accelerate efforts.

Formerly the Foundation Model Taskforce, the group's focus will be on 'Frontier AI' - in particular, systems which could pose significant risks to public safety and global security. Frontier AI models hold enormous potential to power economic growth, drive scientific progress and deliver wider public benefits, while also posing safety risks if not developed responsibly. This includes cutting-edge, large-scale machine learning models trained on vast amounts of data.

Since it launched 11 weeks ago, the Taskforce has made rapid progress, recruiting a team of seven heavy-hitting experts to guide and shape its work. Turing Award laureate Yoshua Bengio and GCHQ Director Anne Keast-Butler will join its newly created External Advisory Board, bringing unparalleled expertise in deep learning and national security. All board members will share evidence-based advice in their respective areas of expertise, helping to develop new approaches to addressing the risks of AI and harnessing its benefits.

Oxford academic Yarin Gal is today announced as the first Taskforce Research Director. Cambridge academic David Krueger will also work with the Taskforce in a consultative role as it scopes its research programme in the run-up to the summit. Together, they will build a team to investigate frontier AI risks such as cyber-attacks.

To kickstart efforts, they will be joined by technical recruits from the AI sector, hundreds of whom stepped forward to apply, and the research team is set to begin evaluating the risks posed by the rapidly advancing frontier of AI. Leading AI companies Anthropic, DeepMind and OpenAI have committed to provide deep access to their AI models so researchers have the tools they need. Over the coming weeks, the Taskforce will continue to recruit industry experts, and those interested are urged to apply.

In the coming months, as its work on safety research gets underway, the Taskforce will build out its capability for delivering the other two parts of its mission: identifying new uses for AI in the public sector and strengthening the UK's capabilities.

Technology Secretary Michelle Donelan said:

When I started as Secretary of State for Science, Innovation and Technology, I was determined to do things differently, by working with experts in government and industry.

These new appointments are a huge vote of confidence in our status as a flagbearer for AI safety as we take advantage of the enormous wealth of knowledge we have both at home and abroad.

The Prime Minister and I created the Frontier AI Taskforce to lead that effort - ensuring that we can continue to harness the opportunities of AI safely, as we strengthen our own capabilities and encourage wider adoption of the technology across society.

We are already seeing how transformative AI can be, whether through new breakthroughs in healthcare or finding fresh approaches to help us tackle climate change. I am determined that my department will make sure that the UK leads the way, as I know we can.

Announced by the Prime Minister in April, the Taskforce is backed with £100 million in funding to lead the safe and reliable development of frontier AI models - a fast-moving type of AI technology which is trained on large amounts of data and can be applied in numerous areas.

Frontier AI Taskforce Chair, Ian Hogarth, said:

I am pleased to confirm the first members of the Taskforce's External Advisory Board, bringing together experts from academia, industry, and government with diverse expertise in AI research and national security. I'm also happy to announce that in just 11 weeks we've rapidly hired an incredible team of AI researchers who will help make sure the UK government is at the cutting edge of AI safety.

We're working to ensure the safe and reliable development of foundation models, but our efforts will also strengthen our leading AI sector and demonstrate the huge benefits AI can bring to the whole country, delivering better outcomes for everyone across society.

External Advisory Board Member and Turing Award Laureate Yoshua Bengio said:

The safe and responsible development of AI is an issue which concerns all of us. We have seen massive investment into improving AI capabilities, but not nearly enough investment into protecting the public, whether in terms of AI safety research or in terms of governance to make sure that AI is developed for the benefit of all.

With the upcoming global AI Safety Summit and the Frontier AI Taskforce, the UK government has taken greatly needed leadership in advancing international coordination on AI, especially on the question of risks and safety.

Others joining the line-up and serving on the External Advisory Board include the Prime Minister's Representative for the AI Safety Summit, Matt Clifford, who will join as Vice-Chair; Deputy National Security Adviser Matt Collins; Chief Scientific Adviser for National Security Alex Van Someren; Academy of Medical Royal Colleges Chair Dame Helen Stokes-Lampard; and Alignment Research Centre Head Paul Christiano. They will turbocharge the Taskforce's work by offering expert insight.

International collaboration forms the backbone of the UK's approach to shared AI safety, and the work of the Taskforce will be no different. The Taskforce is harnessing established industry expertise through long-term partnerships with US-based companies Trail of Bits and ARC Evals. These partnerships will unlock expert advice on the cybersecurity and national security implications of foundation models, as well as broader support in assessing the major risks posed by AI systems. They are complemented by further agreements with the Center for AI Safety and the Collective Intelligence Project, which will advise on areas of AI development and risk.

On 1 and 2 November the UK will host the first major global AI Safety Summit at Bletchley Park, building consensus on rapid, international action to advance safety at the cutting edge of AI technology. The Taskforce is now positioned to play an important role ahead of those discussions. As the only governmental organisation of its kind in the world, it will work to develop a robust system which can analyse the safety of foundation models, while also identifying the areas of 'sovereign capability' the UK should build on to take advantage of the next wave of AI technology and drive economic growth.

Notes

External Advisory Board Terms of Reference

Members join the Board as individuals after a robust due diligence process, and not as representatives of their respective organisations. Members will play an active role in the work of the Taskforce, contributing to vital areas of focus such as safety and risks, capability and public sector use, and the technology's impact on wider society.

All members will be transparent and disclose conflicts of interest both from the outset and throughout the Taskforce's duration.

Members will remove themselves from procurement conversations where they, or their company, have an actual or perceived conflict of interest with the Taskforce.

The External Advisory Board's appointees will actively contribute to all meetings, bringing best practice, providing evidence-based advice in their respective area(s) of expertise whilst avoiding the promotion of company-specific work or direct conflicts of interest.

AI safety research programme

Those interested in supporting the AI safety research programme can apply here.

External Advisory Board Members

  • External Advisory Board Vice-Chair Matt Clifford - Prime Minister's Representative for the AI Safety Summit
  • Yoshua Bengio - Turing Award Laureate, Founder and Scientific Director of Mila Quebec Artificial Intelligence Institute
  • Anne Keast-Butler - Director of GCHQ
  • Alex Van Someren - Chief Scientific Adviser for National Security
  • Matt Collins - Deputy National Security Adviser
  • Paul Christiano - Head of the Alignment Research Centre
  • Dame Helen Stokes-Lampard - Chair of the Academy of Medical Royal Colleges and former Chair of the Royal College of General Practitioners

Read Frontier AI Taskforce: first progress report here.
