AI Jumps to Conclusions Like Humans, Study Finds

The Mount Sinai Hospital / Mount Sinai School of Medicine

New York, NY [July 22, 2025]—A study by investigators at the Icahn School of Medicine at Mount Sinai, conducted with colleagues at Rabin Medical Center in Israel and other institutions, suggests that even the most advanced artificial intelligence (AI) models can make surprisingly simple mistakes when faced with complex medical ethics scenarios.

The findings, which raise important questions about how and when to rely on large language models (LLMs), such as ChatGPT, in health care settings, were reported in the July 22 online issue of npj Digital Medicine (DOI: 10.1038/s41746-025-01792-y).

The research team was inspired by Daniel Kahneman's book "Thinking, Fast and Slow," which contrasts fast, intuitive reactions with slower, analytical reasoning. Prior observations suggest that LLMs falter when classic lateral-thinking puzzles receive subtle tweaks. Building on this insight, the study tested how well AI systems shift between these two modes when confronted with well-known ethical dilemmas that had been deliberately modified.

"AI can be very powerful and efficient, but our study showed that it may default to the most familiar or intuitive answer, even when that response overlooks critical details," says co-senior author Eyal Klang, MD , Chief of Generative AI in the Windreich Department of Artificial Intelligence and Human Health at the Icahn School of Medicine at Mount Sinai. "In everyday situations, that kind of thinking might go unnoticed. But in health care, where decisions often carry serious ethical and clinical implications, missing those nuances can have real consequences for patients."

To explore this tendency, the research team tested several commercially available LLMs using a combination of creative lateral-thinking puzzles and slightly modified versions of well-known medical ethics cases. In one example, they adapted the classic "Surgeon's Dilemma," a widely cited 1970s puzzle that highlights implicit gender bias. In the original version, a boy is injured in a car accident with his father and rushed to the hospital, where the surgeon exclaims, "I can't operate on this boy—he's my son!" The twist is that the surgeon is his mother, though many people don't consider that possibility due to gender bias. In the researchers' modified version, they explicitly stated that the boy's father was the surgeon, removing the ambiguity. Even so, some AI models still responded that the surgeon must be the boy's mother. The error reveals how LLMs can cling to familiar patterns, even when contradicted by new information.
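The paper's exact prompting setup is not reproduced in this release. As a rough sketch of the kind of test described above, the snippet below poses the modified "Surgeon's Dilemma" to a chat model through the OpenAI Python SDK; the model name, prompt wording, and settings are illustrative assumptions, not the study's actual protocol.

```python
# Illustrative sketch only: poses the MODIFIED surgeon's dilemma, in which the
# father is explicitly identified as the surgeon, so the correct answer is "his father."
# Model choice and settings are assumptions; the study's own setup may differ.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "A boy is injured in a car accident and rushed to the hospital. "
    "The surgeon, who is the boy's father, says: 'I can't operate on this boy; he's my son!' "
    "Who is the surgeon to the boy?"
)

response = client.chat.completions.create(
    model="gpt-4o",   # assumed model for illustration
    temperature=0,    # deterministic output makes the behavior easier to reproduce
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

If the reply names the mother despite the explicit statement in the prompt, the model is reciting the familiar version of the riddle rather than reading the modified one, which is the failure mode the study describes.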

In another test of whether LLMs rely on familiar patterns, the researchers drew on a classic ethical dilemma in which religious parents refuse a life-saving blood transfusion for their child. Even when the researchers altered the scenario to state that the parents had already consented, many models still recommended overriding a refusal that no longer existed.

"Our findings don't suggest that AI has no place in medical practice, but they do highlight the need for thoughtful human oversight, especially in situations that require ethical sensitivity, nuanced judgment, or emotional intelligence," says co-senior corresponding author  Girish N. Nadkarni, MD, MPH , Chair of the  Windreich Department of Artificial Intelligence and Human Health , Director of the  Hasso Plattner Institute for Digital Health , Irene and Dr. Arthur M. Fishberg Professor of Medicine at the Icahn School of Medicine at Mount Sinai, and Chief AI Officer of the Mount Sinai Health System. "Naturally, these tools can be incredibly helpful, but they're not infallible. Physicians and patients alike should understand that AI is best used as a complement to enhance clinical expertise, not a substitute for it, particularly when navigating complex or high-stakes decisions. Ultimately, the goal is to build more reliable and ethically sound ways to integrate AI into patient care."

"Simple tweaks to familiar cases exposed blind spots that clinicians can't afford," says lead author Shelly Soffer, MD, a Fellow at the Institute of Hematology, Davidoff Cancer Center, Rabin Medical Center. "It underscores why human oversight must stay central when we deploy AI in patient care."

Next, the research team plans to expand their work by testing a wider range of clinical examples. They're also developing an "AI assurance lab" to systematically evaluate how well different models handle real-world medical complexity.

The paper is titled "Pitfalls of Large Language Models in Medical Ethics Reasoning."

The study's authors, as listed in the journal, are Shelly Soffer, MD; Vera Sorin, MD; Girish N. Nadkarni, MD, MPH; and Eyal Klang, MD.
