Studies Explore How AI Can Aid Clinicians in Medical Image Analysis

Stevens Institute of Technology

Hoboken, N.J., December 11, 2025 — In recent years, AI has emerged as a powerful tool for analyzing medical images. Thanks to advances in computing and the large medical datasets from which AI can learn, it has proven a valuable aid in reading and analyzing patterns in X-rays, MRIs and CT scans, enabling doctors to make better and faster decisions, particularly in the diagnosis and treatment of life-threatening diseases such as cancer. In certain settings, these AI tools even offer advantages over their human counterparts.

"AI systems can process thousands of images quickly and provide predictions much faster than human reviewers," says Onur Asan , Associate Professor at Stevens Institute of Technology, whose research focuses on human-computer interaction in healthcare. "Unlike humans, AI does not get tired or lose focus over time."

Yet many clinicians view AI with at least some degree of distrust, largely because they don't know how it arrives at its predictions, an issue known as the "black box" problem. "When clinicians don't know how AI generates its predictions, they are less likely to trust it," says Asan. "So we wanted to find out whether providing extra explanations may help clinicians, and how different degrees of AI explainability influence diagnostic accuracy, as well as trust in the system."

Working with his PhD student Olya Rezaeian and Assistant Professor Alparslan Emrah Bayrak at Lehigh University, Asan conducted a study of 28 oncologists and radiologists who used AI to analyze breast cancer images. The clinicians were also provided with varying levels of explanation for the AI tool's assessments. At the end, participants answered a series of questions designed to gauge their confidence in the AI-generated assessments and the perceived difficulty of the task.

The team found that AI did improve clinicians' diagnostic accuracy compared with the control group, but there were some interesting caveats.

The study revealed that providing more in-depth explanations didn't necessarily produce more trust. "We found that more explainability doesn't equal more trust," says Asan. That's because adding extra or more complex explanations requires clinicians to process additional information, taking their time and focus away from analyzing the images. When explanations were more elaborate, clinicians took longer to make decisions, which decreased their overall performance.

"Processing more information adds more cognitive workload to clinicians. It also makes them more likely to make mistakes and possibly harm the patient," Asan explains. "You don't want to add cognitive load to the users by adding more tasks."

Asan's research also found that in some cases clinicians trusted the AI too much, which could lead them to overlook crucial information in the images and result in patient harm. "If an AI system is not designed well and makes some errors while users have high confidence in it, some clinicians may develop a blind trust, believing that whatever the AI is suggesting is true, and not scrutinize the results enough," says Asan.

The team outlined their findings in two recent studies: "The impact of AI explanations on clinicians' trust and diagnostic accuracy in breast cancer," published in the journal Applied Ergonomics on November 1, 2025, and "Explainability and AI Confidence in Clinical Decision Support Systems: Effects on Trust, Diagnostic Performance, and Cognitive Load in Breast Cancer Care," published in the International Journal of Human–Computer Interaction on August 7, 2025.

Asan believes that AI will continue to be a helpful assistant to clinicians in interpreting medical imaging, but such systems must be built thoughtfully. "Our findings suggest that designers should exercise caution when building explanations into the AI systems," he says, so that they don't become too cumbersome to use. Plus, he adds, proper training will be needed for the users, as human oversight will still be necessary. "Clinicians who use AI should receive training that emphasizes interpreting the AI outputs and not just trusting it."

Ultimately, there should be a good balance between the ease of use and the utility of AI systems, Asan notes. "Research finds that there are two main parameters for a person to use any form of technology — perceived usefulness and perceived ease of use," he says. "So if doctors think that this tool is useful for doing their job, and it's easy to use, they are going to use it."

About Stevens Institute of Technology

Stevens is a premier, private research university situated in Hoboken, New Jersey. Since our founding in 1870, technological innovation has been the hallmark of Stevens' education and research. Within the university's three schools and one college, more than 8,000 undergraduate and graduate students collaborate closely with faculty in an interdisciplinary, student-centric, entrepreneurial environment. Academic and research programs spanning business, computing, engineering, the arts and other disciplines actively advance the frontiers of science and leverage technology to confront our most pressing global challenges. The university continues to be consistently ranked among the nation's leaders in career services, post-graduation salaries of alumni and return on tuition investment.
