Medical Information Provided To AI Is Often Incomplete

A new study shows that when people describe their symptoms to an AI rather than to medical professionals, the quality of the information they provide decreases. This jeopardizes the accuracy of digital diagnoses and patient safety.

Artificial intelligence is playing an increasingly important role in medicine as well. This presents patients with particular challenges. (Image: elenabsl / Adobe Stock)

It is quite possible that in the near future, people will have to describe their symptoms to an AI before they can get a doctor's appointment. The AI will then decide whether it is an emergency or if treatment can wait, and schedule appointments accordingly.

Fortunately, we are not quite there yet, but digitalization is advancing rapidly in the healthcare sector as well. AI chatbots and digital symptom checkers are playing an increasingly important role and are increasingly serving as the first point of contact for so-called "self-triage" - that is, the initial assessment of the urgency of treatment by the patients themselves.

But while the technical capabilities of these systems are constantly growing, another factor is coming into focus for researchers: how humans communicate with the machine. This is an important topic because even the best technology, especially in medical diagnostics, relies on precise information that users do not always provide in full.

Human reluctance limits the potential of AI

This is the central finding of a study now published in the journal Nature Health. The study was led by Professor Wilfried Kunde, holder of the Chair of Psychology III at the University of Würzburg, and Moritz Reis, a research associate in that department. It involved scientists from Charité - Universitätsmedizin Berlin, the University of Cambridge, as well as Helios Klinikum Emil von Behring and Vivantes Klinikum Neukölln in Berlin.

"The 500 study participants were tasked with writing simulated symptom reports for two common conditions - unusual headaches and flu-like symptoms" describes lead author Moritz Reis the study design. They were led to believe that their reports would be read either by an AI chatbot or a human doctor. The goal was to examine the quality of these reports in terms of their suitability for a medical urgency assessment.

Loss of quality is evident in reduced level of detail

The key finding: When participants believed they were communicating with artificial intelligence, the suitability of their descriptions for an initial medical assessment deteriorated measurably compared to interactions with supposed medical professionals. This effect was even observed among participants who were actually experiencing the relevant symptoms at the time of the survey.

This loss of quality is directly reflected in the level of detail in the reports. While descriptions provided to medical professionals averaged 255.6 characters, those provided to chatbots averaged only 228.7 characters.

Even though a difference of roughly 27 characters may sound small, the research team states that this effect is practically relevant and can result in even high-performance AI models ultimately providing incorrect medical advice. After all, these models cannot make an accurate medical assessment if patients do not provide all essential information. The success of digital initial assessments therefore depends less on computational power than on the patient's willingness to provide a detailed description.

Psychological Barriers: Concerns About a "One-Size-Fits-All Diagnosis"

But why are people so hesitant when it comes to machines? A key reason is likely what's known as "uniqueness neglect." "Many people assume that AI cannot grasp the individual nuances of their personal situation and instead merely matches standardized patterns," explains Wilfried Kunde.

In addition, skepticism about algorithms' diagnostic capabilities, as well as privacy concerns, may lead people to provide abbreviated or vague information. Moritz Reis sums up the human component this way: "If we don't trust a machine to understand our uniqueness, we may unconsciously withhold the information it would need to provide precise assistance." This psychological filter can have the effect that medically relevant details never even reach the system, thereby lowering the quality of the diagnosis.

Improving the dialogue with the machine

In the research team's view, the findings clearly show that the technical advancement of AI alone is not sufficient. They therefore see a potential solution in the intelligent design of user interfaces.

To improve the quality of symptom reports, developers should provide concrete examples of high-quality descriptions and program the AI to actively request missing details. Only when users are encouraged to provide detailed information can misdiagnoses be avoided and the burden on the healthcare system be effectively reduced.

Original publication

Reduced Symptom Reporting Quality During Human-Chatbot Versus Human-Physician Interactions, Moritz Reis, Florian Reis, Yeun Joon Kim, Aylin Demir, Jess Lim, Matthias I. Gröschel, Sebastian D. Boie, Wilfried Kunde. Nature Health, DOI: 10.1038/s44360-026-00116-y

/Public Release. This material from the originating organization/author(s) may be of a point-in-time nature and edited for clarity, style and length. Mirage.News does not take institutional positions or sides, and all views, positions, and conclusions expressed herein are solely those of the author(s).