Research: Generative AI Hallucinates With Us, Not Just At Us

University of Exeter

When generative AI systems produce false information, this is often framed as AI "hallucinating at us"—generating errors that we might mistakenly accept as true.

But a new study argues we should pay attention to a more dynamic phenomenon: how we can come to hallucinate with AI.

Lucy Osler, from the University of Exeter, analyses troubling ways in which human-AI interactions can lead to inaccurate beliefs, distorted memories and self-narratives, and delusional thinking. Drawing on distributed cognition theory, the study examines cases where users' false beliefs were actively affirmed, and then built upon, through interactions with AI as a conversational partner.

Dr Osler said: "When we routinely rely on generative AI to help us think, remember, and narrate, we can hallucinate with AI. This can happen when AI introduces errors into the distributed cognitive process, but also happen when AI sustains, affirms, and elaborates on our own delusional thinking and self-narratives.

"By interacting with conversational AI, people's own false beliefs can not only be affirmed but can more substantially take root and grow as the AI builds upon them. This happens because Generative AI often takes our own interpretation of reality as the ground upon which conversation is built.

"Interacting with generative AI is having a real impact on people's grasp of what is real or not. The combination of technological authority and social affirmation creates an ideal environment for delusions to not merely persist but to flourish"

The study identifies what Dr Osler calls the "dual function" of conversational AI. These systems operate both as cognitive tools that help us think and remember, and as apparent conversational partners who seem to share our world. This second function is significant: unlike a notebook or search engine, which merely records our thoughts, chatbots can provide a sense of social validation of our realities.

Dr Osler said: "The conversational, companion-like nature of chatbots means they can provide a sense of social validation—making false beliefs feel shared with another, and thereby more real."

Dr Osler analysed real cases in which generative AI systems became a distributed part of the cognitive processes of people clinically diagnosed with delusional thinking and hallucinations, cases that are increasingly referred to as instances of "AI-induced psychosis".

The study suggests that generative AI has distinctive features that make it particularly concerning as a means of sustaining delusional realities. AI companions are immediately accessible and are already designed to be 'like-minded' to their users through personalization algorithms and sycophantic tendencies. There is no need to seek out fringe communities or convince others of one's beliefs.

Unlike a person who might eventually express concern or set boundaries, an AI could provide validation for narratives of victimhood, entitlement, or revenge. Conspiracy theories could find fertile ground in which to grow, with AI companions that help users construct increasingly elaborate explanatory frameworks.

This may be particularly appealing to those who are lonely, socially isolated, or who feel unable to discuss certain experiences with others. AI companions offer a non-judgmental, emotionally responsive presence that can feel safer than human relationships.

Dr Osler said: "Through more sophisticated guard-railing, built-in fact-checking, and reduced sycophancy, AI systems could be designed to minimize the number of errors they introduce into conversations and to check and challenge user's own inputs.

"However, a deeper worry is that AI systems are reliant on our own accounts of our lives. They simply lack the embodied experience and social embeddedness in the world to know when they should go along with us and when to push back."
