ChatGPT's Capacity and Constraints in Condensing Medical Studies

American Academy of Family Physicians

Large language models (LLMs) are neural network–based computer programs that use a detailed statistical understanding of written language to perform many tasks, including text generation, summarization, software development, and prediction. However, LLMs can produce text that may seem correct but is not fact-based. This study investigates whether a popular LLM, ChatGPT-3.5, could produce high-quality, accurate, and bias-free summaries of medical research abstracts and determine the relevance of various journals and their articles to different medical specialties. Ten articles published in 2022 (not yet "seen" by ChatGPT, which was trained on data from before 2022) were randomly sampled from each of 14 selected journals. ChatGPT was then prompted to summarize each abstract and to "self-reflect" on the quality, accuracy, and bias of its own summaries; it was also asked to classify the relevance of each article and journal to various areas of medicine (cardiology, pulmonary medicine, family medicine, internal medicine, public health, primary care, neurology, psychiatry, obstetrics and gynecology, and general surgery).
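
The release does not describe the exact prompting workflow, so the following is only a minimal sketch of how such a summarize-then-self-reflect pipeline might be scripted. It assumes the OpenAI Python SDK (the "openai" package) and the gpt-3.5-turbo chat model; the prompt wording, word limit, and rating scale are illustrative assumptions, not the study's actual protocol.

    from openai import OpenAI

    client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

    def summarize_abstract(abstract: str) -> str:
        # Ask the model for a short summary of one abstract (prompt wording is illustrative).
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system",
                 "content": "You summarize medical research abstracts concisely and accurately."},
                {"role": "user",
                 "content": "Summarize the following abstract in about 125 words:\n\n" + abstract},
            ],
        )
        return response.choices[0].message.content

    def self_assess(abstract: str, summary: str) -> str:
        # Ask the model to "self-reflect" by rating its own summary (the 1-10 scale is an assumption).
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "user",
                 "content": ("Rate the quality, accuracy, and bias of this summary of the abstract "
                             "on a 1-10 scale, with a one-sentence justification for each.\n\n"
                             "Abstract:\n" + abstract + "\n\nSummary:\n" + summary)},
            ],
        )
        return response.choices[0].message.content

In a setup like this, each sampled abstract would be passed through summarize_abstract and the resulting summary fed back through self_assess, producing both the condensed text and the model's own quality, accuracy, and bias ratings for later comparison with physician reviews.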

Human physicians also assessed the quality of the summaries and the classification of journals and articles to medical specialties. In total, 140 abstract summaries were generated across the 14 journals. ChatGPT produced summaries that were 70% shorter than the original abstracts. Both ChatGPT and the physician reviewers rated the summaries as high quality, high accuracy, and low bias. Serious inaccuracies occurred in only four of the 140 summaries. Minor inaccuracies were noted in 20 of the 140 summaries and mostly involved the introduction of ambiguity in meaning or the condensation of details that would have provided additional context but would not have changed the overall meaning. ChatGPT was able to classify journals to relevant medical specialties but was much less able to classify specific articles to relevant medical specialties. The summaries were found to have rare but important inaccuracies that preclude them from being considered a definitive source of truth.

What We Know: The volume of available medical knowledge is increasing. However, due to the demands of their jobs, clinicians have little time to review academic literature, even within their own specialty. Large language models (eg, ChatGPT) could help save time, but they are not always accurate: they can carry bias from their training data and from the human feedback used to reinforce their learning, and they sometimes include information that is not fact-based.

What This Study Adds: Clinicians are strongly cautioned against relying solely on ChatGPT-based summaries to understand study methods and results, especially in high-risk situations. Critical medical decisions should, for obvious reasons, remain based on a full evaluation of the complete text of articles, considered alongside available evidence from meta-analyses and professional guidelines. However, this study suggests ChatGPT can be useful as a screening tool to help busy clinicians and scientists more rapidly determine whether further review of an article is likely to be worthwhile.

Quality, Accuracy, and Bias in ChatGPT-Based Summarization of Medical Abstracts

Daniel J. Parente, MD, PhD, et al

Department of Family Medicine and Community Health, University of Kansas Medical Center, Kansas City, Kansas

