AI Chatbot Health Advice: Essential Tips

Yale University

Shaili Gupta often sees patients who consult chatbots like ChatGPT for health advice.

She finds that some of her patients are much better informed about health-related issues and ask follow-up questions that reflect an awareness they didn't have before. Sometimes, however, that information hardens into conviction, and patients have a harder time hearing a different perspective from their clinician.

In one situation, a person was experiencing chest pain and became convinced something was wrong with their heart after asking a chatbot. That was despite multiple tests telling them otherwise.

"It's very difficult for patients to understand that the kind of pain that they're describing is not cardiac in origin, especially after thorough testing has found other explanations," said Gupta, an associate professor of medicine at Yale School of Medicine. "Chatbots have been trained to highlight the more serious and urgent things first."

Gupta's patients are not alone in turning to chatbots as a medical resource. Globally, more than 40 million people use ChatGPT alone for health information each day, according to a recent report from OpenAI, the company behind the chatbot. Both OpenAI and Anthropic, the maker of the Claude series of language models, recently announced they would launch services specifically to give consumers health advice.

But chatbots can often give false and potentially harmful advice to those trying to self-diagnose or manage their care, especially when it comes to questions related to mental health, experts say. That can make the job of actual doctors like Gupta more difficult.

"On the one hand, these platforms have provided a readily available source of health literacy, which is wonderful," she said. "On the other, patients might get a whole bunch of information that they would come armed with and are sometimes confused about. It becomes a whole aspect of your clinic visit where you're trying to educate, redirect, and cancel out the misinformation overlying the right information."

At Yale, Gupta is also the director of YSM's AI and Innovation in Medicine Distinction Pathway, which provides residents with advanced learning in AI, machine learning, and clinical applications.

In an interview, Gupta explains the benefits and risks of using chatbots for health advice and how users can safeguard against misinformation.

The interview has been edited for length and clarity.

Why do people turn to AI chatbots as a health resource?

Shaili Gupta: When you look at how patients use them, I see chatbots as both simplifiers and amplifiers. They are simplifiers in the sense that they can translate complex language into easily understandable terms. They can simplify what a disease process is like, including what a particular question means, what a symptom means, what you should be asking your doctor, and which tests would need to be done. They're also amplifiers because they extract and summarize information from the large amount of data out there.

Chatbots are also anthropomorphized. They've been trained to use pronouns like "you" and "I" so you can relate to the information as if you're talking to a person. So it's very easy to then see them as a friend, a guide, or an authority. It feels personal, and it feels easy. It feels very protected because you can just have the conversation, and you can then choose whether you want to use it or not use it. For example, you're not bound to go fill that prescription because you had this conversation with a doctor who wanted you to do something. That anthropomorphization, however, also increases the risk of overtrust.

What are the benefits of using chatbots for health advice?

Gupta: In a way, chatbots have the profound power of equalizing the world. It's a good thing in that a lot of people can have access to the same information, and they're not deprived of that information just because they are sitting in a corner of the world where they don't have immediate access to a physician. In that sense, chatbots have the potential to provide health equity, but it comes with a huge amount of responsibility.

Chatbots can also talk to you at your level of language. Many chatbots speak different languages, and they can meet you at your health literacy level. They can explain and educate in user-level language as many times as needed, making them a very patient-centered, interactive entity.

Another good use of chatbots is caregiving support. Some chatbots work almost like a visiting nurse: they can remind you to take a medication, provide lifestyle guidance, and offer triage guidance on what kind of medical specialist to seek care from.
