Artificial intelligence has improved by leaps and bounds over the last few decades and has changed the way many people, including corporate managers, conduct business.
But the use of algorithms in managerial decision-making isn't universal, and a new study co-led by a Cornell researcher identifies two factors that spur greater use of AI: how the manager gets paid, and how the artificial intelligence is framed.
Contrary to highly cited research from more than 30 years ago, the study found that an incentivized pay structure leads to greater reliance on AI in decision-making than flat, fixed compensation. And if the AI is described as combining data with human expert knowledge, people are more likely to use it than if it's framed as strictly algorithmic advice.
Martin Wiernsperger, assistant professor of accounting at the Samuel Curtis Johnson Graduate School of Management, in the Cornell SC Johnson College of Business, is a co-author of "Incentives, Framing, and Reliance on Algorithmic Advice: An Experimental Study," which published May 30 in Management Science. His co-authors are from the Vienna University of Economics and Business.
Two studies from the 1980s and '90s contended that incentivized decision-makers - those paid based on performance, or in a "tournament" setting in which the top performer in a group gets the reward - will rely less on algorithmic advice and instead feel compelled to "earn" the reward through their own effort. In other words, the incentive backfires - a phenomenon known as "algorithm aversion."
"They were pretty famous studies on the paradoxical effects of incentives and decision-making, and I thought it's probably worth re-examining this question now, when we have very different types of decision aids and algorithms," Wiernsperger said. "We wanted to see if this backfiring still holds."
He and co-authors Philipp Grünwald and Georg Lintner were all doctoral students in Vienna in 2021 when they attended a seminar and were assigned to work on a project together. They designed an experiment to test how reward structure and the framing of AI would impact decision-making in a specific task: estimating a per-night rate for an Airbnb apartment. The other co-authors, professors Ben Greiner and Thomas Lindner, met the students at the seminar and suggested they turn their project into a research paper.
For their study, the researchers recruited around 1,500 participants from three large public universities in Austria. Subjects were randomly assigned to one of nine experimental conditions, crossing three pay schemes - fixed pay, performance pay or tournament pay - with three advice conditions: no AI advice, AI advice or human-AI advice.
Whether the advice was framed as coming strictly from an algorithm, or from an algorithm combined with human expert input, proved a key factor in whether decision-makers trusted the AI under certain conditions.
Participants were shown 10 Airbnb listings for apartments in Vienna - extracted from a dataset of approximately 12,000 apartment listings, on which the algorithm was trained - and given all relevant listing information except for the price. They were tasked with estimating a per-night rate for all 10. In the no-AI-advice group, participants had to estimate a price based solely on the listing information they received.
Participants in both AI groups, however, made two estimates: the first with just the listing information, the second with the listing information plus the algorithmic advice. Comparing the two estimates yielded the "weight of advice" - a measure of how much influence a piece of advice (in this case, the algorithmic help) has on a person's decision-making.
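The article doesn't spell out the calculation, but the standard weight-of-advice measure in this line of research compares how far a person's revised estimate moves toward the advice:

weight of advice = (revised estimate - initial estimate) / (advice - initial estimate)

For example, if a participant first estimated a rate of 80 euros, the algorithm advised 100 euros, and the participant then revised to 95 euros, the weight of advice would be (95 - 80) / (100 - 80) = 0.75 - the participant moved three-quarters of the way toward the advice. A value of 0 means the advice was ignored; a value of 1 means it was adopted outright.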
The researchers found that individuals compensated based on either performance or tournament incentives relied significantly more heavily on AI than those who received a fixed payment.
Also, those who used the algorithmic advice - regardless of how it was framed - performed the task of estimating the per-night rate significantly better than those who did not receive AI assistance.
"In general," Wiernsperger said, "paying managers or decision-makers based on their performance has positive effects, and not negative effects, when it comes to the use of AI."
These results have implications for companies trying to introduce AI into decision-making, the researchers said: If you want people to use AI tools, how you motivate them and how you talk about the technology both matter.