An experimental study suggests that people are less likely to behave in a trusting and cooperative manner when interacting with AI than when interacting with other humans. Scientists use experimental games to probe how humans make social decisions that require both rational and moral reasoning. Fabian Dvorak and colleagues compared how people act in classic two-player games when paired with another human versus with a large language model (LLM) acting on behalf of another human. Participants played the Ultimatum Game, the Binary Trust Game, the Prisoner's Dilemma Game, the Stag Hunt Game, and the Coordination Game. The games were played online by 3,552 humans and the LLM ChatGPT. Overall, players showed less fairness, trust, trustworthiness, cooperation, and coordination when they knew they were playing with an LLM, even though any rewards went to the real person the AI was playing for. Prior experience with ChatGPT did not mitigate these adverse reactions. Players who could choose whether or not to delegate their decision-making to the AI often did so, especially when the other player would not know whether they had. When players were unsure whether they were playing with a human or an LLM, their behavior hewed more closely to how they treated human players. According to the authors, the results reflect a dislike of social interaction with AI, an instance of the broader phenomenon of algorithm aversion.
AI Aversion in Social Encounters
PNAS Nexus