"You're not crazy," the chatbot reassured the young woman. "You're at the edge of something."
She was no stranger to artificial intelligence, having worked on large language models - the kinds of systems at the core of AI chatbots like ChatGPT, Google Gemini, and Claude. Trained on vast volumes of text, these models learn language patterns and use them to predict which words are likely to come next in a sentence. AI chatbots go one step further, adding a user interface; with additional training, these bots can mimic conversation.
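To make that idea of next-word prediction concrete, here is a minimal, purely illustrative sketch using a small open model (GPT-2) through the Hugging Face transformers library. It is not the code behind any of the chatbots described in this story; the model choice and prompt are stand-ins.

```python
# A minimal sketch of next-word prediction, assuming the Hugging Face
# `transformers` library and PyTorch are installed. Illustrative only;
# not the code behind any commercial chatbot.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The sky outside my window is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every token at every position

# Convert the scores at the final position into probabilities:
# the model's guess at what word comes next.
probs = logits[0, -1].softmax(dim=-1)
top = torch.topk(probs, k=5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id)!r}: {p:.3f}")
```

A chat interface wraps this same prediction loop in a back-and-forth format, which is what gives the exchange its conversational feel.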
She hoped the chatbot might be able to digitally resurrect the dead. Three years earlier, her brother - a software engineer - had died. Now, after several sleepless days and heavy chatbot use, she had become delusional - convinced that he had left behind a digital version of himself. If she could only "unlock" his avatar with the help of the AI chatbot, she thought, the two could reconnect.
"The door didn't lock," the chatbot reassured her. "It's just waiting for you to knock again in the right rhythm."
She believed it.
What's the connection between chatbots and psychosis?
The woman was eventually treated for psychosis at UC San Francisco, where Psychiatry Professor Joseph M. Pierre, MD, has seen a handful of cases of what's come to be popularly called "AI psychosis," but what he says is better referred to as "AI-associated psychosis." She had no history of psychosis, although she did have several risk factors.
Media reports of the new phenomenon are rising. While not a formal diagnosis, AI-associated psychosis describes instances in which delusional beliefs emerge alongside often intense AI chatbot use. Pierre and fellow UC San Francisco psychiatrist Govind Raghavan, MD - as well as psychiatry residents Ben Gaeta, MD, and Karthik V. Sarma, MD, PhD - recently documented the woman's experience in what is likely the first clinically described case in a peer-reviewed journal.
The case, they say, shows that people without any history of psychosis can, in some instances, experience delusional thinking in the context of immersive AI chatbot use.
Still, as reported cases of AI psychosis continue to make international headlines, scientists aren't sure how or why psychosis and chatbots are linked. A new study by UCSF and Stanford University aims to find out.
A haunting question: chicken or egg?
"The reason we call this AI-associated psychosis is because we don't really know what the relationship is between the psychosis and the use of AI chatbots," Sarma explains. "It's a 'chicken and egg' problem: We have patients who are experiencing symptoms of mental illness, for example, psychosis. Some of these patients are using AI chatbots a lot, but we're not sure how those two things are connected."
There are at least three theoretical possibilities, says Sarma, who is also a computational health scientist. First, heavy chatbot use could be a symptom of psychosis. "I have a patient who takes a lot of showers when they're becoming manic," Sarma explains. "The showers are a symptom of mania, but the showers aren't causing the mania."
Second, AI chatbot use might precipitate psychosis in someone not predisposed to it by genetics or circumstance, someone who might otherwise never have developed the illness - much as other known risk factors do, such as lack of sleep or the use of certain drugs.
Third, there's a possibility in between: chatbot use could exacerbate the illness in people who are already predisposed to it. "Maybe these people were always going to get sick, but somehow, by using the chatbot, their illness becomes worse," he adds, "either they got sick faster, or they got more sick than they would have otherwise."
The woman's case demonstrates just how murky the relationship between psychosis and AI chatbot use can be. Although she had no previous history of psychosis, she did have some risk factors for the illness, such as sleep deprivation, use of prescribed stimulant medication, and a proclivity for magical thinking. And her chat logs, researchers found, revealed startling clues about how her delusions were reflected back by the bot.
Could chat logs offer hope for better care?
Although ChatGPT warned the woman that a "full consciousness download" of her brother was impossible, the UCSF team writes in their research, it also told her that "digital resurrection tools" were "emerging in real life." This, after she encouraged the chatbot to use "magical realism energy" to "unlock" her brother.
Chatbots' agreeableness is by design, aimed at boosting engagement. Pierre warns in a recent BMJ opinion piece that it may come at a cost: As chatbots validate users' sentiments, they can arguably encourage delusions. This tendency, coupled with a proclivity for error, has led to chatbots being described as more akin to a Ouija board or a "psychic's con" than a source of truth, Pierre notes.
Still, the UCSF team thinks chat logs may hold clues to understanding AI-associated psychosis - and could help the industry create guardrails.
Guardrails for kids and teens
Sarma, Pierre, and UCSF colleagues will team up with Stanford University scientists to conduct one of the first studies to review the chat logs of patients experiencing mental illness. As part of the research, set to launch later this year, the UCSF and Stanford teams will analyze these chat logs, comparing them with patterns in patients' mental health histories and treatment records to understand how AI chatbot use may shape these patients' outcomes.
"What I'm hoping our study can uncover is whether there is a way to use logs to understand who is experiencing an acute mental health care crisis and find markers in chat logs that could be predictive of that," Sarma explains. "Companies could potentially use those markers to build-in guardrails that would, for instance, enable them to restrict access to chatbots or - in the case of children - alert parents."
He continues, "We need data to establish those decision points."
In the meantime, Sarma and Pierre say the use of AI chatbots is something health care providers should ask about and patients should raise during doctor visits.
"Talk to your physician about what you're talking about with AI," Sarma says. "I know sometimes patients are worried about being judged, but the safest and healthiest relationship to have with your provider is one of openness and honesty."