Research Uncovers AI Chatbot Political Persuasion Levers

American Association for the Advancement of Science (AAAS)

Even small, open-source AI chatbots can be effective political persuaders, according to a new study. The findings provide a comprehensive empirical map of the mechanisms behind AI political persuasion, revealing that post-training and prompting – not model scale or personalization – are the dominant levers. The study also uncovers evidence of a persuasion-accuracy tradeoff, reshaping how policymakers and researchers should conceptualize the risks of persuasive AI.

Concern is growing that advances in AI – particularly conversational large language models (LLMs) – may soon give machines significant persuasive power over human beliefs at unprecedented scale. However, just how persuasive these systems truly are, and the underlying mechanisms that make them so, remain largely unknown. To explore these risks, Kobi Hackenburg and colleagues investigated three central questions: whether larger and more advanced models are inherently more persuasive; whether smaller models can be made highly persuasive through targeted post-training; and which tactics AI systems rely on when attempting to change people's minds.

Hackenburg et al. conducted three large-scale survey experiments involving nearly 77,000 participants who conversed with 19 different LLMs – ranging from small open-source systems to state-of-the-art "frontier" models – on hundreds of political issues. The researchers also tested multiple prompting strategies and several post-training methods, and assessed how each "lever" affected persuasive impact and factual accuracy.

According to the findings, model size and personalization (providing the LLM with information about the user) produced small but measurable effects on persuasion. Post-training techniques and simple prompting strategies, on the other hand, increased persuasiveness dramatically, by as much as 51% and 27%, respectively. Once post-trained, even small, open-source models could rival large frontier models in shifting political attitudes.

Hackenburg et al. found AI systems are most persuasive when they deliver information-rich arguments; roughly half of the variance in persuasion effects across models and methods could be traced to this single factor. However, the authors also discovered a notable tradeoff: models and prompting strategies that were effective in boosting persuasiveness often did so at the expense of truthfulness, showing that optimizing an AI model for influence may inadvertently degrade accuracy.

In a Perspective, Lisa Argyle discusses this study and its companion study, published in Nature, in greater detail.
