Friend or foe? The role of AI in mitigating biases in HR

AI is already widely being used in HR processes, but it's unclear whether these applications contribute to fair and inclusive decision making. Leiden researcher Carlotta Rigotti is involved in BIAS, a big consortium research project that aims to provide answers and develop a new, trustworthy AI app for HR professionals.

Carlotta Rigotti: 'The ultimate aim is to develop a new technology that will identify and mitigate diversity biases in selection and recruitment practices.'

How widely are AI systems used in HR management right now, and in what ways?

'First of all, the use of AI applications in HR recruitment is much more common than we might think, because almost everyone has a LinkedIn account, and LinkedIn is powered by AI. Many HR people use LinkedIn to find the best candidates. HR professionals also use AI applications to draft job vacancies, to make sure that they use gender-neutral and inclusive language, and of course to match the best candidates with their job application. AI helps them save time, because an application can process a huge number of CVs, cover letters, motivation letters, and so forth.

AI is also being used for managing employees: in terms of their productivity, the leave they take, their working hours, and so forth. Of course, this is tricky, because there is a risk of surveillance or discrimination.'

Series: What does AI mean for… HR management

How do HR professionals feel about the current AI applications?

'Within the BIAS project, we managed to interview 70 AI developers and HR executives across Europe. They are much more positive about the use of AI when it comes to the selection and recruitment process. They are more cautious when it comes to HR management decisions such as promotions, sanctions and firing people.'

What is the aim of the BIAS project?

'The ultimate aim is to develop a new technology that will identify and mitigate diversity biases in selection and recruitment practices, especially biases related to gender and race, by highlighting or removing such information. Potentially, the technology could also offer suggestions to address the bias. Some specific design features will be decided during the co-creation workshops that will be held across Europe between August and September 2023.

But as a consortium, we are first doing preliminary work to understand what counts as a fair recruitment process. And that's where Leiden University, project leader Eduard Fosch Villaronga and I come in, because we did desk research to understand how researchers across disciplines are working on the topic. For instance, we have also been in contact with organizational psychologists at Tilburg University and in Amsterdam. In addition, we've issued a survey in which we ask workers and other experts about their experiences with AI-based HR technology.'

How would the BIAS AI technology differ from products that are on the market right now?

'Among many other things, the technology will be built using language models with mitigated bias, with particular attention to language-specific bias in labor contexts. Explainable AI components will help users interpret the tool's output, complemented by a transparent case-based reasoning system. The team that will develop this technology is based at the Bern University of Applied Sciences and at a technical university in Norway.'

How is your own expertise as a criminal lawyer involved in the project?

'My expertise lies in law, gender and technology, especially in terms of their mutual and constitutive relationships. In this project, my research focuses on the question of how diversity can and should inform the design of new technology, and also on what impact legal provisions can have on that design. Because I'm a lawyer, I could identify some legal gaps. For instance, anti-discrimination laws and data protection laws are currently not able to address these diversity biases on their own. But right now, our focus is on the empirical work. At a later stage, when we start to develop the technology, we will of course also make sure that it is compliant with applicable laws, considering also the legislative turmoil the European Union is experiencing in this field.'

What's your personal view on the direction AI development for the HR field is taking?

'I was happy to learn that HR professionals are aware of the risks involved in the use of AI. I feel that we are heading in the right direction because of the widely acknowledged emphasis on the need for diversity and inclusion. If in the future more women and more diverse AI developers are involved in building the technology, that will make a huge positive difference.'

Text: Jan Joost Aten

The image above was created using DALL·E, an artificial intelligence application that creates images from text descriptions. Prompt used: 'employees working in the human resource department data bias'.

Contribute to unbiased AI systems and join the BIAS survey

The BIAS project is gathering as much information as possible on workers' personal experiences of and attitudes towards AI applications in the labor market. 'We need more people to take part', says Carlotta Rigotti, 'because the threshold is 4,000 respondents across Europe. And it's really challenging, because we did different empirical activities. I'm hoping that people understand that their opinion really matters, and that they will take the time to sit down for 10 minutes and fill it out.'

To fill out the survey, please use the following link.

About SAILS

The Leiden University SAILS (Society, Artificial Intelligence and Life Sciences) initiative is a university-wide network of AI researchers from all seven faculties: Law, Science, Archaeology, Social & Behavioural Sciences, Humanities, Governance & Global Affairs and the Leiden University Medical Center.

Our view on AI is human-centred, focusing on the fair and just use of AI in and for society. We aim to expand knowledge of AI and its uses for the benefit of society as a whole, in order to help society withstand the challenges it faces. We are convinced that this can only be achieved by bringing together talents and insights from all academic disciplines. SAILS researchers collaborate in interdisciplinary research projects, share their knowledge, inspire others at events, and educate our students to be the AI professionals of the future. Because at Leiden University, we believe the future of AI is human.

/Public Release. This material from the originating organization/author(s) may be of a point-in-time nature and has been edited for clarity, style and length. Mirage.News does not take institutional positions or sides; all views, positions, and conclusions expressed herein are solely those of the author(s).