In modern hospitals, timely and accurate decision-making is essential—especially in radiology, where contrast media consultations often require rapid answers rooted in complex clinical guidelines. Yet, physicians are frequently forced to make these decisions under pressure, without immediate access to all relevant information. This challenge is particularly critical for institutions that must also safeguard patient data by avoiding cloud-based tools.
In a new study published online in npj Digital Medicine on July 2, 2025, a team of researchers led by Associate Professor Akihiko Wada from Juntendo University, Japan, demonstrated that retrieval-augmented generation (RAG), a technique that enables AI to consult trusted sources during response generation, can significantly improve the safety, accuracy, and speed of locally deployed large language models (LLMs) for radiology contrast media consultations. The study was co-authored by Dr. Yuya Tanaka from The University of Tokyo, Dr. Mitsuo Nishizawa from Juntendo University Urayasu Hospital, and Professor Shigeki Aoki from Juntendo University Graduate School of Medicine.
The team developed a RAG-enhanced version of a local language model and tested it on 100 simulated cases involving iodinated contrast media, agents commonly used in computed tomography (CT) imaging. These consultations typically require real-time risk assessments based on factors like kidney function, allergies, and medication history. The enhanced model was compared with three leading cloud-based AIs (GPT-4o mini, Gemini 2.0 Flash, and Claude 3.5 Haiku) as well as with its own baseline version, the same local LLM without retrieval.
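The simulated cases themselves are not published in this release, but a consultation record of the kind described would need to capture exactly those risk factors. The minimal Python sketch below illustrates one possible structure; the field names, thresholds, and flags are assumptions for illustration, not the study's actual case schema:

```python
# Illustrative structure for one simulated contrast media consultation.
# Field names and the eGFR threshold are assumptions for this sketch,
# not the schema or criteria used in the study.
from dataclasses import dataclass, field

@dataclass
class ContrastConsultation:
    age: int
    egfr: float                                   # kidney function, mL/min/1.73 m^2
    allergies: list[str] = field(default_factory=list)
    medications: list[str] = field(default_factory=list)
    question: str = ""

    def risk_flags(self) -> list[str]:
        """Surface the factors a radiologist would weigh before giving contrast."""
        flags = []
        if self.egfr < 30:                        # common caution threshold (illustrative)
            flags.append("severely reduced kidney function")
        if any("contrast" in a.lower() for a in self.allergies):
            flags.append("prior contrast reaction")
        if "metformin" in (m.lower() for m in self.medications):
            flags.append("metformin interaction")
        return flags

case = ContrastConsultation(
    age=71, egfr=28.0,
    allergies=["iodinated contrast (hives)"],
    medications=["metformin"],
    question="Is contrast-enhanced CT safe for this patient?")
print(case.risk_flags())
```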
The results were striking. The RAG-enhanced model eliminated dangerous hallucinations entirely, cutting the rate from 8% to 0%, and responded significantly faster than the cloud-based systems, averaging 2.6 seconds versus 4.9–7.3 seconds. While the cloud models performed well, the RAG-enhanced system closed the performance gap, delivering safer and faster results while keeping sensitive medical data onsite. "For clinical use, reducing hallucinations to zero is a safety breakthrough," says Dr. Wada. "These hallucinations can lead to incorrect recommendations about contrast dosage or missed contraindications. Our system generated accurate, guideline-based responses without making those mistakes." Notably, the model also ran efficiently on standard hospital computers, making it accessible without costly hardware or cloud subscriptions, which is especially valuable for hospitals with limited radiology staff.
The inspiration for this work came directly from clinical experience. "We frequently encounter complex contrast media decisions that require consulting multiple guidelines under time pressure," recalls Dr. Wada. "For example, cases involving patients with multiple risk factors: reduced kidney function, medication interactions, or allergy histories. We realized that AI could streamline this process, but only if we could keep sensitive patient data within our institution." The RAG-enhanced model operates by dynamically retrieving relevant information from a curated knowledge base, including international radiology guidelines and institutional protocols. This ensures each response is grounded in verified, up-to-date medical knowledge rather than relying solely on pre-trained data.
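The paper's implementation details are not reproduced in this release, but the retrieve-then-generate loop described above can be sketched in a few lines. The example below uses a toy lexical retriever over illustrative guideline snippets; the knowledge-base contents, scoring method, and prompt format are all assumptions for illustration, not the study's actual pipeline:

```python
# Minimal sketch of retrieval-augmented generation (RAG) for a
# contrast media consultation. All passages, names, and the prompt
# format are illustrative, not taken from the study.
from collections import Counter
import math

# A toy "curated knowledge base": guideline passages indexed by source.
KNOWLEDGE_BASE = [
    ("International guideline (illustrative)",
     "For eGFR below 30 mL/min/1.73 m2, iodinated contrast requires "
     "nephrology consultation and a hydration protocol."),
    ("Institutional protocol (illustrative)",
     "Patients with a prior moderate or severe reaction to iodinated "
     "contrast should receive premedication or an alternative agent."),
    ("International guideline (illustrative)",
     "Metformin should be withheld for 48 hours after contrast "
     "administration in patients with reduced kidney function."),
]

def score(query: str, passage: str) -> float:
    """Crude lexical relevance: cosine similarity over term counts."""
    q, p = Counter(query.lower().split()), Counter(passage.lower().split())
    overlap = sum(q[t] * p[t] for t in q)
    norm = (math.sqrt(sum(v * v for v in q.values()))
            * math.sqrt(sum(v * v for v in p.values())))
    return overlap / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Return the k passages most relevant to the consultation question."""
    ranked = sorted(KNOWLEDGE_BASE, key=lambda kb: score(query, kb[1]), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Ground the local LLM's answer in the retrieved guideline text."""
    context = "\n".join(f"[{src}] {text}" for src, text in retrieve(query))
    return (f"Answer using ONLY the guideline excerpts below.\n"
            f"{context}\n\nConsultation question: {query}\nAnswer:")

# The resulting prompt would be passed to the locally deployed LLM,
# keeping patient data and inference entirely on-site.
print(build_prompt("Patient on metformin with reduced kidney function, CT with contrast?"))
```

In a production system the lexical scorer would typically be replaced by a dense embedding model, but the grounding principle is the same: the model answers only from retrieved, verified text rather than from memory alone.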
Beyond radiology, the researchers envision this technology being applied to emergency medicine, cardiology, internal medicine, and even medical education. It could also be a game-changer for rural hospitals and healthcare providers in low-resource settings by offering instant access to expert-level guidance.
Overall, this study represents a major breakthrough in clinical AI—proving that it is possible to achieve expert-level performance without compromising patient privacy. The RAG-enhanced model paves the way for safer, more equitable, and immediately deployable AI solutions in healthcare. As hospitals worldwide seek to balance technological advancement with ethical responsibility, this research offers a practical and scalable path forward.
"We believe this represents a new era of AI-assisted medicine," says Dr. Wada. "One where clinical excellence and patient privacy go hand in hand."
Reference
Authors: Akihiko Wada¹, Yuya Tanaka¹,², Mitsuo Nishizawa³,⁴, Akira Yamamoto⁴, Toshiaki Akashi¹, Akifumi Hagiwara¹, Yayoi Hayakawa¹, Junko Kikuta¹, Keigo Shimoji¹,⁴, Katsuhiro Sano¹, Koji Kamagata¹, Atsushi Nakanishi¹, and Shigeki Aoki¹,⁴
Title of original paper: Retrieval-Augmented Generation Elevates Local LLM Quality in Radiology Contrast Media Consultation
Journal: npj Digital Medicine
DOI:
Affiliations:
¹Department of Radiology, Juntendo University Graduate School of Medicine, Tokyo, Japan
²Department of Radiology, The University of Tokyo School of Medicine, Tokyo, Japan
³Department of Radiology, Juntendo University Urayasu Hospital, Chiba, Japan
⁴Faculty of Health Data Science, Juntendo University Graduate School of Medicine, Chiba, Japan
About Associate Professor Akihiko Wada from Juntendo University Graduate School of Medicine
Associate Professor Akihiko Wada, MD, PhD, is a clinician-scientist in the Department of Radiology at the Juntendo University Graduate School of Medicine. His research focuses on advancing clinical imaging through artificial intelligence and deep learning, particularly in neuroradiology, diffusion MRI harmonization, and pediatric brain imaging. His recent work includes AI-driven tools for improving diagnostic accuracy, prompt engineering for large language models in neuroradiology, and novel approaches to MRI image synthesis. His contributions range from fundamental deep learning methodologies to practical clinical applications, reflecting his commitment to bridging cutting-edge AI technology and clinical practice.