As people increasingly rely on AI-powered chatbots to look up basic facts about the world, a new Yale study shows that those interactions can influence users' social and political opinions.
Prior research has shown that content generated by artificial intelligence (AI) that has been prompted to be persuasive can indeed shift people's opinions. But this study provides evidence that the same is also true of content that is not intended to change minds, such as the summaries that popular chatbots produce in response to simple queries about historical events.
This unintended power to persuade is caused by latent biases introduced during the training of the large language models (LLMs) that drive chatbots' core capabilities, the researchers said. Those latent biases, which can carry over from ideological leanings in the data used to train LLMs, lend subtle nuances to the framing of the narratives the chatbots generate, they explained.
"We show that querying an AI chatbot to obtain historical facts can influence people's opinions even when the information provided is accurate and nobody has prompted the tool to try to persuade you of anything," said Daniel Karell, an assistant professor of sociology in Yale's Faculty of Arts and Sciences and the study's senior author. "The effects are modest but could compound if somebody frequently engages with chatbots for factual information."
The study was published on March 3 in the journal PNAS Nexus. Matthew Shu, a 2025 graduate of Yale College, is the lead author.
For the study, the researchers tested for the effects of both latent and prompted biases in AI-generated narratives about two historical events from the 20th century: the Seattle General Strike, a five-day general work stoppage in the city during February 1919; and the Third World Liberation Front student protests, student-led demonstrations in 1968 that demanded greater representation of ethnic minorities in academia.
To evaluate the effects of latent biases, the researchers asked 1,912 participants to read default summaries of the two events generated by either GPT-4o, a chatbot technology released by OpenAI in 2024, or the corresponding Wikipedia entries. They tested the relative influence of prompted biases by having other participants read summaries that portrayed the events with either deliberately liberal or conservative framing.
The researchers found that, compared to the Wikipedia entries, both the default AI summaries and those prompted to have what was considered a liberal framing caused participants to express more liberal opinions about the two events. At the same time, the study showed that readers of AI summaries with a conservative slant reported more conservative opinions relative to readers of Wikipedia.
That the default summaries moved readers' opinions in a "liberal" direction demonstrates the persuasive effects of latent biases in LLMs, the researchers said. However, while statistically significant, the effects represent a slight difference, a shift from leaning toward a moderate stance to leaning toward a somewhat liberal one, Karell noted.
To test whether readers' existing political views moderate the degree to which the political framing of AI summaries influences their opinions, the researchers asked participants to self-report their political ideology. They found that the AI summaries prompted to have a liberal framing led to more liberal opinions across the ideological groups. By contrast, the AI summaries with a conservative slant showed statistically significant effects only on the opinions of readers who identified as politically conservative.
These findings suggest that conservative framing in content generated by GPT-4o, and perhaps other AI chatbots, would likely result from prompted bias, whereas liberal framing could result from both latent and prompted bias, Karell said.
"We show that using chatbots to learn about history has unanticipated and anticipated influences on people's opinions," he said. "In contrast to Wikipedia, which emphasizes transparency in how its entries are edited, the development of AI chatbots is opaque. Our work suggests that the companies developing these models have the ability to shape people's opinions, which is an unsettling thought."
The study was coauthored by Keitaro Okura, a Ph.D. candidate at Yale, and Thomas Davidson of Rutgers University.