Frontier AI Taskforce Taps Key Tech Firms for Risk Research

The UK government's Frontier AI Taskforce is building an AI safety research team that can evaluate risk at the frontier of AI. Doing great AI safety research does not mean starting from scratch or working alone. As set out in its first progress report on 7 September 2023, the taskforce is working with leading technical organisations including ARC Evals, RAND and Trail of Bits.

Since then, the taskforce has partnered with a further three leading technical organisations. These new contracts with Advai, Gryphon Scientific and Faculty AI will tackle important questions about how AI systems can improve human capabilities in specialised fields, and about the risks around current safeguards. The findings of the research will be incorporated into presentations and roundtable discussions with government representatives, civil society groups, leading AI companies and research experts at the AI Safety Summit in November.

Advai is a UK company focused on enabling simple, safe and secure AI adoption. Its technology and research focus on identifying vulnerabilities and limitations in AI to improve and defend these systems. The Frontier AI Taskforce is working with Advai to research vulnerabilities of frontier AI systems.

Faculty is an applied AI company providing software, consulting, and services. It has worked with the UK government for nearly a decade; its other public sector work includes partnering with the NHS to build the COVID-19 early warning system and with the Home Office to detect ISIS online propaganda. The Frontier AI Taskforce is working with Faculty AI to identify the degree to which LLMs can uplift a novice bad actor's capability, and how future systems may increase this risk.

Gryphon Scientific is a physical and life sciences research and consulting company with technical expertise in public health, biodefense, and homeland security. It has experience working at the forefront of scientific advancement alongside governments, including the US and nations in the Middle East and North Africa. Gryphon Scientific is working with the Frontier AI Taskforce to identify the potential for LLMs as a tool to drive rapid progress in the life sciences.

Today's announcement follows the progress report of 7 September, in which the Frontier AI Taskforce announced the establishment of its expert advisory panel, the appointment of two research directors, and several partnerships with organisations.

Notes

Ian Hogarth and DSIT have responsibility to identify and address any actual, potential or perceived personal or business interests which may conflict, or may be perceived to conflict, with the Chair's public duties. Ian has agreed to a series of mitigations to manage potential conflicts of interest. This agreement includes, for example, divestments of personal holdings in companies building foundation models or foundation model safety tools. Mitigations are being put in place to address each of the potential conflicts with effect from the start of his role.

In line with these processes, the Taskforce Chair, Ian Hogarth, had no involvement in the awarding of the Faculty contract.
