AI may influence public attitudes by making people more aware of the limitations of human decision-making, a new study shows.
New research suggests that when citizens are prompted to think about AI decision-making, they subsequently judge human decision-makers more critically.
This could have implications for how citizens evaluate decisions made by governments and public institutions. Human decision-making may increasingly be evaluated in the shadow of AI, as algorithmic systems become a visible alternative.
Public debates about AI often focus on whether algorithms might be biased or unfair. It is also often assumed that citizens will be concerned if AI systems are used to make decisions in public administration. But a new study led by the University of Exeter suggests that, in some situations, the comparison with AI could even lead some citizens to support greater use of it.
"Once artificial intelligence becomes a conceivable alternative decision-maker, people may start questioning whether human decision-making is always the best option," said Professor Florian Stoeckel from the University of Exeter.
The study examined how citizens evaluate the risk of discrimination in public-sector hiring decisions. Participants were asked how likely they believed they would face discrimination in a hiring process conducted either by AI systems or by human recruiters.
The key feature of the research was a simple experimental design: the order of the questions was randomly varied. Some respondents evaluated AI decision-making first, while others first considered decisions made by human recruiters.
This allowed the researchers to test whether thinking about AI changes how people evaluate human decision-makers.
The results show that when people first think about AI decision-making, their concerns about discrimination by human recruiters increase. Thinking about algorithmic decision-making appears to make people more attentive to the potential biases and limitations of human decision-makers.
"Our findings show that attitudes toward decision-making depend on comparisons," Professor Stoeckel said. "Once AI becomes part of the reference point people have in mind, they start paying closer attention to the shortcomings of human decision-making."
The study also found the reverse pattern. When respondents first thought about the possibility that human recruiters might discriminate, their concerns about discrimination by AI systems decreased.
The findings are based on a preregistered survey experiment with more than 11,000 respondents in Austria, Germany, Hungary, Italy, the Netherlands, Poland, Spain and the United Kingdom. Participants were recruited through online panels administered by YouGov using samples designed to approximate national populations.
Professor Stoeckel said: "The results could have broader implications for how citizens evaluate decisions made by governments and public institutions.
"Citizens interact with the state in many ways, for example through administrative decisions, the allocation of public services, or when applying for public-sector jobs.
"If people start thinking about AI as a possible alternative, that comparison may change how they evaluate existing systems. It may also draw attention to the possibility of decisions that could be faster, less costly, and potentially even more consistent."
The research suggests that citizens may not only worry about the risks of AI but also ask whether certain decisions could be made differently.
"Our findings show that thinking about AI decision-making can make people more aware of the biases of human decision-makers," Professor Stoeckel said. "In some contexts, that comparison could increase openness toward using AI in decision-making rather than simply producing resistance to it.
"This does not mean citizens will necessarily prefer AI in all situations. But the presence of AI as a possible alternative could change how people judge the performance of existing decision systems."