Five Key Questions to Enhance AI in Clinical Decisions

University of California - Los Angeles Health Sciences

While artificial intelligence (AI) can be a powerful diagnostic aid for physicians, with great potential to improve accuracy, efficiency, and patient safety, it has drawbacks. It may distract doctors, instill too much confidence in the answers it provides, and even lead them to lose confidence in their own diagnostic judgment.

To ensure that AI is properly integrated into healthcare practice, a research team has developed a framework of five guiding questions intended to support physicians in patient care without undermining their expertise through over-reliance on AI. The framework was recently published in the peer-reviewed Journal of the American Medical Informatics Association.

"This paper moves the discussion from how well the AI algorithm performs to how physicians actually interact with AI during diagnosis," said senior author Dr. Joann G. Elmore , professor of medicine in the division of general internal medicine and health services research and Director of the National Clinician Scholars Program at the David Geffen School of Medicine at UCLA . "This paper provides a framework that pushes the field beyond 'Can AI detect disease?' to 'How should AI support doctors without undermining their expertise?' This reframing is an essential step toward safer and more effective adoption of AI in clinical practice."

Although AI-related errors are known to occur, it remains unclear why these tools can fail to improve diagnostic decision-making once implemented in clinical practice.

To find out why, the researchers propose five questions to guide research and development aimed at preventing AI-linked diagnostic errors: What type and format of information should AI present? Should it provide that information immediately, after the physician's initial review, or only when the physician toggles it on? How does the AI system show how it arrives at its decisions? How does it affect bias and complacency? And finally, what are the risks of long-term reliance on it?

These questions are important to ask because:

  • Format affects doctors' attention, diagnostic accuracy, and possible interpretive biases
  • Immediate information can bias interpretation, while delayed cues may help maintain diagnostic skills by allowing physicians to engage more fully in making a diagnosis
  • How the AI system arrives at a decision can highlight features that were ruled in or out, provide "what-if" types of explanations, and more effectively align with doctors' clinical reasoning
  • When physicians lean too heavily on AI, they may rely less on their own critical thinking, letting an accurate diagnosis slip by
  • Long-term reliance on AI may erode a doctor's learned diagnostic abilities

The next steps toward improving AI for diagnostic purposes are to evaluate different designs in clinical settings, study how AI affects trust and decision-making, observe doctors' skill development when AI is used in training and clinical practice, and develop systems that self-adjust how they assist physicians.

"AI has huge potential to improve diagnostic accuracy, efficiency, and patient safety, but poor integration could make healthcare worse instead of better," Elmore said. "By highlighting the human factors like timing, trust, over-reliance, and skill erosion, our work emphasizes that AI must be designed to work with doctors, not replace them. This balance is crucial if we want AI to enhance care without introducing new risks."

Co-authors are Tad Brunyé of Tufts University and Stephen Mitroff of George Washington University.

The research was supported by the National Cancer Institute of the National Institutes of Health (R01 CA288824, R01 CA225585, R01 CA172343, and R01 CA140560).
