Human-AI Teams Deliver Most Accurate Medical Diagnoses

Max Planck Institute for Human Development

Diagnostic errors are among the most serious problems in everyday medical practice. AI systems—especially large language models (LLMs) like ChatGPT-4, Gemini, or Claude 3—offer new ways to efficiently support medical diagnoses. Yet these systems also entail considerable risks—for example, they can "hallucinate" and generate false information. In addition, they reproduce existing social or medical biases and make mistakes that are often perplexing to humans.

An international research team, led by the Max Planck Institute for Human Development and in collaboration with partners from the Human Diagnosis Project (San Francisco) and the Institute of Cognitive Sciences and Technologies of the Italian National Research Council (CNR-ISTC Rome), investigated how humans and AI can best collaborate. The result: hybrid diagnostic collectives—groups consisting of human experts and AI systems—are significantly more accurate than collectives consisting solely of humans or AI. This holds particularly for complex, open-ended diagnostic questions with numerous possible solutions, rather than simple yes/no decisions. "Our results show that cooperation between humans and AI models has great potential to improve patient safety," says lead author Nikolas Zöller, postdoctoral researcher at the Center for Adaptive Rationality of the Max Planck Institute for Human Development.

Realistic simulations using more than 2,100 clinical vignettes

The researchers used data from the Human Diagnosis Project, which provides clinical vignettes—short descriptions of medical case studies—along with the correct diagnoses. Using more than 2,100 of these vignettes, the study compared the diagnoses made by medical professionals with those of five leading AI models. In the central experiment, various diagnostic collectives were simulated: individuals, human collectives, AI models, and mixed human–AI collectives. In total, the researchers analyzed more than 40,000 diagnoses. Each was classified and evaluated using SNOMED CT, an international standard for medical terminology.
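The release does not describe the exact rule used to turn individual answers into a collective diagnosis. As a rough illustration only, the following sketch (in Python; the function name, the Borda-style scoring, and the example diagnoses are assumptions for illustration, not the study's actual method) shows one common way to pool ranked differential-diagnosis lists from humans and AI models into a single collective answer:

```python
from collections import Counter

def collective_top_diagnosis(ranked_lists, depth=3):
    # Borda-style pooling: a diagnosis ranked k-th (0-based) in a member's
    # top-`depth` list earns (depth - k) points; the highest total wins.
    scores = Counter()
    for ranking in ranked_lists:
        for k, dx in enumerate(ranking[:depth]):
            scores[dx] += depth - k
    return scores.most_common(1)[0][0]

# Hypothetical panel: two physicians and two LLMs, each returning a short
# ranked differential for the same vignette (labels are illustrative).
panel = [
    ["pulmonary embolism", "pneumonia", "pericarditis"],  # physician 1
    ["pneumonia", "pulmonary embolism", "bronchitis"],    # physician 2
    ["pulmonary embolism", "pericarditis", "pneumonia"],  # LLM 1
    ["myocarditis", "pulmonary embolism", "pneumonia"],   # LLM 2
]
print(collective_top_diagnosis(panel))  # -> pulmonary embolism
```

Plurality voting over each member's single top diagnosis would be an even simpler rule; a rank-weighted scheme like this one also uses information further down each member's differential.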

Humans and machines complement each other—even in their errors

The study shows that combining multiple AI models improved diagnostic quality. On average, the AI collectives outperformed 85% of human diagnosticians. However, there were numerous cases in which humans performed better. Interestingly, when AI failed, humans often knew the correct diagnosis.

The biggest surprise was that combining both worlds led to a significant increase in accuracy. Even adding a single AI model to a group of human diagnosticians—or vice versa—substantially improved the result. The most reliable outcomes came from collective decisions involving multiple humans and multiple AIs. The explanation is that humans and AI make systematically different errors. When AI failed, a human professional could compensate for the mistake—and vice versa. This so-called error complementarity makes hybrid collectives so powerful. "It's not about replacing humans with machines. Rather, we should view artificial intelligence as a complementary tool that unfolds its full potential in collective decision-making," says co-author Stefan Herzog, Senior Research Scientist at the Max Planck Institute for Human Development.
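Why complementary errors help so much can be seen in a toy simulation (a hypothetical sketch, not the study's data): three decision-makers with identical individual accuracy gain nothing from majority voting if they all fail on the same cases, but gain substantially if their errors are independent.

```python
import random

random.seed(0)
N = 100_000  # simulated vignettes

def majority_accuracy(p_members, correlated):
    # Fraction of cases a three-member majority vote gets right.
    # correlated=True: all members fail on the same hard cases;
    # correlated=False: independent (complementary) errors.
    hits = 0
    for _ in range(N):
        if correlated:
            u = random.random()                      # shared case difficulty
            votes = [u < p for p in p_members]
        else:
            votes = [random.random() < p for p in p_members]
        hits += sum(votes) >= 2
    return hits / N

panel = [0.70, 0.70, 0.70]  # hypothetical individual accuracies
print(majority_accuracy(panel, correlated=True))   # ~0.70: errors overlap, no gain
print(majority_accuracy(panel, correlated=False))  # ~0.78: errors cancel out
```

With fully overlapping errors, the panel is no better than any single member; with independent errors, the same 70%-accurate individuals reach roughly 78% as a group. This is the statistical intuition behind pairing humans with AI models whose mistakes differ systematically.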

However, the researchers also emphasize the limitations of their work. The study considered only text-based case vignettes, not actual patients in real clinical settings. Whether the results can be transferred directly to practice remains a question for future studies to address. Likewise, the study focused solely on diagnosis, not treatment, and a correct diagnosis does not guarantee optimal treatment.

It also remains uncertain how AI-based support systems will be accepted in practice by medical staff and patients. The potential risks of bias and discrimination by both AI and humans, particularly in relation to ethnic, social, or gender differences, likewise require further research.

Wide range of applications for hybrid human–AI collectives

The study is part of the Hybrid Human Artificial Collective Intelligence in Open-Ended Decision Making (HACID) project, funded under Horizon Europe, which aims to promote the development of future clinical decision-support systems through the smart integration of human and machine intelligence. The researchers see particular potential in regions where access to medical care is limited. Hybrid human–AI collectives could make a crucial contribution to greater healthcare equity in such areas.

"The approach can also be transferred to other critical areas—such as the legal system, disaster response, or climate policy—anywhere that complex, high-risk decisions are needed. For example, the HACID project is also developing tools to enhance decision-making in climate adaptation" says Vito Trianni, co-author and coordinator of the HACID project.

In brief:

  • Hybrid diagnostic collectives consisting of humans and AI make significantly more accurate diagnoses than either medical professionals or AI systems alone—because they make systematically different errors that cancel each other out.
  • The study analyzed over 40,000 diagnoses made by humans and machines in response to more than 2,100 realistic clinical vignettes.
  • Adding an AI model to a human collective—or vice versa—noticeably improved diagnostic quality; hybrid collective decisions made by several humans and machines achieved the best results.
  • These findings highlight the potential for greater patient safety and more equitable healthcare, especially in underserved regions. However, further research is needed on practical implementation and ethical considerations.