Reminding people that human decision-making can be biased may make the use of artificial intelligence seem less problematic, a new study says.
Drawing attention to the limitations of human decision-making may make AI seem more consistent or impartial. This could increase pressure on governments from voters to rely more on algorithmic systems rather than less.
The study shows when people first think about the limitations of human decision-making, AI tends to appear more favourable by comparison. Conversely, when they first consider AI decision-making, they become more critical of human decision-makers.
Researchers examined how people evaluate the risk of discrimination in public-sector hiring decisions. They asked people about their views of a selection process that would either be conducted by AI or by human recruiters.
They asked people how likely they thought they would face discrimination in hiring decisions made either by AI systems or by human recruiters. Half of the respondents evaluated AI first, while the other half evaluated human recruiters first. This allowed some respondents to think about potential human bias before evaluating AI decision-making.
When respondents answered the question about humans first, the potential for human bias became more prominent in their thinking.
The study was carried out by Florian Stoeckel, from the University of Exeter; Ben Lyons, from the University of Utah; and Adrienn Ujhelyi and Monika Kovacs, from ELTE Eötvös Loránd University.
Professor Stoeckel said: "Evaluations of AI do not only depend on the properties of algorithms, but also on whether people compare AI to human decision-making. Once that comparison is made, AI-based decision-making can look better, not just worse. This matters for how citizens respond to the use of AI in hiring, and in the public sector more generally."
"These findings suggest that public concerns about AI bias are not fixed. Instead, they depend on the context in which people evaluate algorithmic systems. When public debates highlight the limitations of human decision making, AI systems may appear more favourable by comparison. The potential problem is that this shift in perception can occur even if an AI system itself still contains biases."
"People seem to rely on general assumptions about algorithms or computers when judging AI. Debates may shift toward the weaknesses of human decision making, which can make AI appear more acceptable even if the fairness of the AI system itself has not been demonstrated."
"There is a risk that those who want to increase public acceptance of AI may therefore emphasise the shortcomings of human decision makers rather than demonstrating that a specific AI system actually operates fairly. If public administrations integrate AI, trust in these systems should be based on actual advantages and performance, rather than on comparisons with human weaknesses."
"The reverse dynamic is also possible. When people first think about AI decision-making, they may begin to evaluate human decision-makers more critically. As AI becomes a visible alternative, attention can shift to the limitations of human decisions. In that situation, AI may not only appear faster or cheaper, but potentially more consistent or impartial. If this dynamic extends more broadly, it could also increase pressure on governments from citizens to rely more on AI in decision-making, rather than less."
The YouGov survey was carried out with 11,000 participants in Austria, Germany,