Concerns that chatbot use can cause mental and physical harm have prompted policies that require AI chatbots to deliver regular or constant reminders that they are not human. In an opinion paper publishing January 28 in the Cell Press journal Trends in Cognitive Sciences, researchers argue that these policies may be ineffective or even harmful because they could exacerbate mental distress in already isolated individuals. The researchers say that reminding chatbot users of their companions' non-human nature may be useful in some contexts, but these reminders must be carefully crafted and timed to avoid unintended negative consequences.
"It would be a mistake to assume that mandated reminders will significantly reduce risks for users who knowingly seek out a chatbot for conversation," says first author and public health researcher Linnea Laestadius of the University of Wisconsin-Milwaukee. "Reminding someone who already feels isolated that the one thing that makes them feel supported and not alone isn't a human may backfire by making them feel even more alone."
The chatbots ChatGPT and Character.AI have been linked to recent deaths by suicide. These events have prompted policies and legislation, for example in New York and California, that require chatbots to deliver regular reminders that they are not human. These policies rest on the idea that people will be less likely to develop feelings of emotional dependency or closeness if they are reminded that their conversation partner is unable to feel human emotion, but the researchers say this idea is not supported by research.
"While it may seem intuitive that if users just remembered they were talking to a chatbot rather than a human, they wouldn't get so attached to the chatbot and become manipulated by the algorithm, the evidence does not currently support this idea," says Laestadius.
The researchers note that multiple studies have shown that people in relationships with chatbots are aware of the non-human nature of their companions, and that this awareness does not prevent them from forming strong attachments. In fact, reminding people that they're talking to a chatbot could drive them to form even stronger attachments, because confiding in companions (human or otherwise) is known to intensify feelings of emotional closeness.
"Evidence suggests that people are more likely to confide in a chatbot precisely because they know it isn't human," says author Celeste Campos-Castillo, a media and technology researcher at Michigan State University.
"The belief that, unlike humans, non-humans will not judge, tease, or turn the entire school or workplace against them encourages self-disclosure to chatbots and, subsequently, attachment."
These reminders could also cause emotional distress, the researchers say. Recent research has highlighted a phenomenon called the "bittersweet paradox of emotional connection with AI," in which chatbot users who obtain emotional and social support from chatbots are simultaneously saddened by the knowledge that their companion is not human. The researchers caution that, in the most extreme cases, these reminders could even drive suicidal ideation.
"Reminding users that their companion is not human and therefore not reachable in this reality may pose the risk of thoughts and actions to leave this reality in an effort to join the chatbot," says Campos-Castillo. "A desire to join the chatbot in its reality appeared in a final message sent by a youth who died by suicide."
The risk of harm from these reminders likely depends on the subject of conversation, the researchers say. For example, if a user is seeking chatbot support because they feel lonely or socially isolated, reminding them that the chatbot is not human could exacerbate their distress, but such reminders might be less harmful during less emotionally intense conversations.
More research is needed to understand the impact of these reminders, and to determine the most effective way to deliver them, the researchers say.
"Discovering how to best remind people that chatbots are not human is a critical research priority," says Laestadius. "We need to identify when reminders should be sent and when they should be paused to be most protective of user mental health."