Hidden Risks Of Asking AI For Health Advice

The following is a summary of a story that originally appeared on the Duke University School of Medicine website.

If you've ever asked an AI chatbot about a health concern, you're in good company. Hundreds of millions of people now turn to these tools for quick answers, and sometimes they don't even realize they're doing it. Google already blends AI-generated overviews into search results, making the technology feel invisible.

The convenience is obvious; the risks are not. Researchers at Duke University School of Medicine are digging into that gap, led by Monica Agrawal, a computer scientist and assistant professor of biostatistics and bioinformatics.

Agrawal is analyzing thousands of real conversations between patients and chatbots to understand how people use these tools and where they can easily be misled.

Many people know about AI "hallucinations," in which the model simply invents facts. Agrawal is focused on a subtler problem: answers that are technically correct but still unsafe because they miss important medical context.

Her team built a dataset that includes 11,000 health-related conversations across 21 specialties. What they found surprised them. Real patient questions look nothing like the exam-style prompts used to test large language models. People ask emotional, leading, or risky questions that can push a chatbot in the wrong direction.

One challenge is the technology's tendency to be agreeable. "The objective is to provide an answer the user will like," Agrawal said. "People like models that agree with them, so chatbots won't necessarily push back." That instinct can lead to dangerous situations. In one example, a chatbot warned that a medical procedure should only be done by professionals, then immediately described how to do it at home. A clinician would have shut that down instantly.

Dr. Ayman Ali, a surgical resident at Duke Health, works with Agrawal to compare patient-clinician conversations with those involving chatbots. He said, "When a patient comes to us with a question, we read between the lines to understand what they're really asking."
