Generative AI Matches Non-specialist Doctors in Diagnosis

Osaka Metropolitan University

The use of generative AI for diagnostics has attracted attention in the medical field, and many research papers have been published on this topic. However, because the evaluation criteria differed from study to study, a comprehensive analysis was needed to determine the extent to which AI could be used in actual medical settings and what advantages it offers compared to doctors.

A research group led by Dr. Hirotaka Takita and Associate Professor Daiju Ueda at Osaka Metropolitan University's Graduate School of Medicine conducted a meta-analysis of generative AI's diagnostic capabilities using 83 research papers published between June 2018 and June 2024 that covered a wide range of medical specialties. Of the large language models (LLMs) that were analyzed, ChatGPT was the most commonly studied.

The comparative evaluation revealed that medical specialists had a 15.8% higher diagnostic accuracy than generative AI. The average diagnostic accuracy of generative AI was 52.1%, though the latest models sometimes showed accuracy on par with that of non-specialist doctors.

"This research shows that generative AI's diagnostic capabilities are comparable to non-specialist doctors. It could be used in medical education to support non-specialist doctors and assist in diagnostics in areas with limited medical resources." stated Dr. Takita. "Further research, such as evaluations in more complex clinical scenarios, performance evaluations using actual medical records, improving the transparency of AI decision-making, and verification in diverse patient groups, is needed to verify AI's capabilities."

The findings were published in npj Digital Medicine.
