King's Joins Project to Boost AI Safety for Users, Regulators

King’s College London

The project is the first of its kind to give people without a technical background the tools to audit AI systems


King's is part of a major project exploring how we can reap the benefits of AI across a range of fields, whilst minimising its potential risks. The £3.5 million project is the first of its kind to involve people without a technical background in auditing AI systems.

Regulators, end-users and people likely to be affected by decisions made by AI systems will play a key role in ensuring that those systems provide fair and reliable outputs, with a focus on the use of AI in health and media content. This includes analysing data sets for predicting hospital readmissions, assessing child attachment for potential bias, and examining fairness in search engines and hate speech detection on social media.
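To give a flavour of what such an audit could involve in practice, the sketch below checks a single, widely used fairness measure, demographic parity, on toy predictions from a hypothetical hospital-readmission model. Everything here, the function names, the data and the groups, is illustrative; it is not code from the PHAWM workbench.

```python
# Minimal sketch of one check a harm audit might run: demographic parity.
# All names and data are illustrative; the PHAWM toolset is not public.

from collections import defaultdict

def selection_rates(predictions, groups):
    """Rate of positive predictions (e.g. 'flagged for readmission') per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(predictions, groups):
    """Largest difference in selection rate between any two groups.
    A large gap suggests the model treats the groups unequally."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values()), rates

# Toy example: model outputs (1 = predicted readmission) and patient group labels.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap, rates = demographic_parity_gap(preds, groups)
print(f"Selection rates: {rates}")   # {'A': 0.6, 'B': 0.4}
print(f"Parity gap: {gap:.2f}")      # 0.20; auditors would judge this against a threshold
```

An auditor without a technical background would presumably interact with checks like this through a visual workbench rather than raw code; the point is only that the underlying measurement can be simple.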

The project, Participatory Harm Auditing Workbenches and Methodologies (PHAWM), is a cross-partner consortium funded as a keystone project by Responsible AI UK, a UKRI-funded multi-partner programme driving research and innovation in responsible AI. Led by the University of Glasgow, PHAWM sees King's academics Professor Elena Simperl, Dr Elizabeth Black and Professor Daniele Quercia from the Department of Informatics, and Professor Dan Hunter, Executive Dean of The Dickson Poon School of Law, join 25 other researchers from seven UK universities and 23 partner organisations.

"The AI revolution affects all of us, yet due to the complexity of these technologies, not everyone has an equal seat at the table when it comes to deciding how to make AI safe and trustworthy."

Professor Elena Simperl

Generative AI tools such as ChatGPT are prone to reinforcing bias and to hallucinating: presenting false or invented information as fact. Auditing AI outputs is therefore vital to developing more accurate and reliable systems. Until now, these audits have mainly been left to AI experts, without involving those who may be better placed to understand the specific impact of AI in their fields, often resulting in a lack of diverse representation and input.

Professor Elena Simperl said, "The AI revolution affects all of us, yet due to the complexity of these technologies, not everyone has an equal seat at the table when it comes to deciding how to make AI safe and trustworthy.

"At King's, we are aiming to develop methods and approaches that empower all users, regardless of their level of AI literacy, to participate in decisions and assessments of AI systems to make sure this technology works for them."

"This project enables us to explore a range of techniques that can make AI more safe and reliable, with input from the people who will be using it or affected by it."

Dr Elizabeth Black

The King's researchers will be working with the Wikimedia Foundation to understand how AI can be used responsibly to diversify knowledge and content in different languages, including in cases where content is limited by a shortage of Wikipedia editors. The team is also exploring how AI models can be made safer and more trusted by combining generative AI with other techniques. These include knowledge representation, a field of AI that organises information so that computer systems can solve complex questions, and argumentation techniques, which provide a structured way to represent diverse viewpoints and conflicting information, allowing for evaluation and reasoning that more closely emulates human discourse.
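To make the argumentation idea more concrete, here is a minimal, self-contained sketch of a Dung-style abstract argumentation framework, the classic formal model behind many argumentation techniques. Arguments attack one another, and the code computes which arguments survive (the grounded extension). The example is purely illustrative and is not drawn from the project.

```python
# Toy Dung-style abstract argumentation framework.
# Arguments are names; attacks are (attacker, target) pairs.
# Illustrative only, not code from the PHAWM project.

def grounded_extension(arguments, attacks):
    """Iteratively accept arguments whose attackers have all been defeated."""
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for arg in arguments:
            if arg in accepted or arg in defeated:
                continue
            attackers = {a for (a, b) in attacks if b == arg}
            # Accept an argument once every one of its attackers is defeated
            # (unattacked arguments are accepted immediately).
            if attackers <= defeated:
                accepted.add(arg)
                # Everything an accepted argument attacks is defeated.
                defeated |= {b for (a, b) in attacks if a == arg}
                changed = True
    return accepted

# Toy debate: a attacks b, b attacks c. 'a' is unattacked, so it is accepted;
# it defeats 'b', which in turn reinstates 'c'.
args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "c")}
print(sorted(grounded_extension(args, attacks)))  # ['a', 'c']
```

In an auditing setting, the arguments might encode conflicting evidence about a model's behaviour, with the surviving set representing the conclusions that withstand every challenge raised against them.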

Dr Elizabeth Black said, "This project enables us to explore a range of techniques that can make AI safer and more reliable, with input from the people who will be using it or affected by it. The workbench of tools developed in the project will allow people without a technical background to audit AI systems, helping to create fairer outcomes for end-users.

"By collaborating in this major consortium of leading experts in responsible AI, we are building on King's long-standing work in this vital area, through the UKRI Trustworthy Autonomous Systems Hub and our UKRI Centre for Doctoral Training in Safe & Trusted AI, equipping the next generation of responsible AI practitioners."

In this story

Professor Elena Simperl

Professor of Computer Science

Dr Elizabeth Black

Reader in Artificial Intelligence

Professor Daniele Quercia

Professor in Urban Informatics

Professor Dan Hunter

Executive Dean, The Dickson Poon School of Law
