Feral AI Gossip to Spread Damage, Shame, Researchers Warn

University of Exeter

"Feral" gossip spread via AI bots is likely to become more frequent and pervasive, causing reputational damage and shame, humiliation, anxiety, and distress, researchers have warned.

Chatbots like ChatGPT, Claude, and Gemini don't just make things up—they generate and spread gossip, complete with negative evaluations and juicy rumours that can cause real-world harm, according to new analysis by philosophers Joel Krueger and Lucy Osler from the University of Exeter.

The harm caused by AI gossip isn't a hypothetical threat; real-world cases already exist. After publishing an article about how emotionally manipulative chatbots can be, New York Times reporter Kevin Roose found that chatbots were describing his writing as sensational and accusing him of being unscrupulous and of poor journalistic ethics. Other AI bots have falsely detailed people's involvement in bribery, embezzlement, and sexual harassment. These gossipy AI-generated outputs cause real-world harms: reputational damage, shame, and social unrest.

The study outlines how chatbots gossip, both to human users and to other chatbots, but in a different way from humans. This can lead to harm that is potentially wider in scope than the harm caused by fake information spread by chatbots.

Bot-to-bot gossip is particularly dangerous because it operates unconstrained by the social norms that moderate human gossip. It continues to embellish and exaggerate without being checked, spreading quickly in the background, making its way from one bot to the next and inflicting significant harms.

Dr Osler said: "Chatbots often say unexpected things and when chatting with them it can feel like there's a person on the other side of the exchange. This feeling will likely be more common as they become even more sophisticated.

"Chatbot "bullshit" can be deceptive — and seductive. Because chatbots sound authoritative when we interact with them — their dataset exceeds what any single person can know, and false information is often presented alongside information we know is true — it's easy to take their outputs at face value.

"This trust can be dangerous. Unsuspecting users might develop false beliefs that lead to harmful behaviour or biases based upon discriminatory information propagated by these chatbots."

The study shows how the drive to increasingly personalise chatbots may be motivated by the hope that we'll become more dependent on these systems and give them greater access to our lives. Personalisation is also intended to intensify our feelings of trust and to drive us to develop increasingly rich social relationships with them.

Dr Krueger said: "Designing AI to engage in gossip is yet another way of securing increasingly robust emotional bonds between users and their bots.

"Of course, bots have no interest in promoting a sense of emotional connection with other bots, since they don't get the same "kick" out of spreading gossip the way humans do. But certain aspects of the way they disseminate gossip mirror the connection-promoting qualities of human gossip while, simultaneously making bot-to-bot gossip potentially even more pernicious than gossip involving humans."

The researchers predict that user-to-bot gossip may become more common. In these cases, users might seed bots with nuggets of gossip, knowing the bots will, in turn, rapidly disseminate them in their characteristically feral way. Bots might therefore act as intermediaries, responding to user-seeded gossip and rapidly spreading it to others.
