OAK BROOK, Ill. – Renowned physician-scientist Eric J. Topol, M.D., and Harvard artificial intelligence (AI) expert Pranav Rajpurkar, Ph.D., advocate for a clear separation of the roles between AI systems and radiologists in an editorial published today in Radiology, a journal of the Radiological Society of North America (RSNA).
"We're stuck between distrust and dependence, and missing out on the full potential of AI," said Dr. Rajpurkar, associate professor of Biomedical Informatics at Harvard University.
The authors urge a rethinking of the assistive role of AI, which is designed to work alongside human radiologists to improve diagnostic accuracy. But so far, fully integrating AI into radiology workflows has fallen short of expectations.
"It's still too early for a definitive assessment," said Dr. Topol, professor and executive vice president, Scripps Research. "But several recent studies of GenAI have not demonstrated the widely anticipated synergy between AI and physicians."
"Current evidence suggests that neither fully integrated assistive approaches nor complete automation is optimal," Dr. Rajpurkar said. "Radiologists don't know when to trust AI and when to trust themselves. Add AI errors into the mix, and you get a perfect storm of uncertainty."
Implementing assistive AI has presented notable challenges, including cognitive biases that cause radiologists to disregard or over-rely on AI suggestions. Misaligned incentives, unclear workflows, liability concerns, and economic models that don't support AI integration have also slowed its adoption.
"After years of hype, AI penetration in U.S. radiology remains surprisingly low," Dr. Rajpurkar said. "This suggests we've been implementing AI like sprinkling digital fairy dust on broken workflows. The real opportunity isn't marginal accuracy gains, it's workflow transformation."
The authors propose a careful, measured approach to role separation—guided by rigorous clinical validation and real-world evidence—as the most pragmatic path forward. Their framework includes three models:
- AI-First Sequential Model—Where effective, AI processes the initial segment of the workflow (e.g., preparing clinical context from electronic health records), followed by the radiologist providing expert interpretation.
- Doctor-First Sequential Model—The radiologist initiates the diagnostic process while AI performs complementary tasks such as report generation and follow-up recommendations to enhance the workflow.
- Case Allocation Model—Cases are triaged based on complexity and clarity, with some managed entirely by AI, others by a radiologist, and the rest through a combination of both.
"Radiologists are stuck in the worst of both worlds—afraid to trust AI fully, but too reliant to ignore it," Dr. Rajpurkar said. "Clear role separation breaks this cycle."
The authors envision institutions implementing their framework iteratively rather than through strict, sequential processes.
"We're providing a framework, but the real innovation will come from frontline radiologists adapting it to their specific needs," Dr. Rajpurkar said. "Institutions will likely discover hybrid approaches we haven't even imagined yet."
For example, a trauma center might use the AI-First model to review chest X-rays overnight, then switch to a Doctor-First model when teaching residents. Under the Case Allocation model, an AI screening system may identify and 'clear' normal results, escalating only abnormal cases to the radiologist for review.
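The Case Allocation model described above can be sketched in code. The following is a minimal, hypothetical illustration only: the thresholds, field names, and routing rules are assumptions made for the sketch, not details from the authors' framework, and any real system would require the clinical validation the editorial calls for.

```python
# Hypothetical sketch of the Case Allocation model: route each case to AI,
# a radiologist, or combined review based on complexity and AI confidence.
from dataclasses import dataclass

@dataclass
class Case:
    case_id: str
    ai_normal_score: float  # assumed AI confidence (0-1) that the study is normal
    complexity: str         # assumed label, e.g. "routine" or "complex"

def allocate(case: Case, clear_threshold: float = 0.98) -> str:
    """Triage a case under the (hypothetical) Case Allocation rules."""
    if case.complexity == "complex":
        return "radiologist_reads"    # complex studies go straight to the radiologist
    if case.ai_normal_score >= clear_threshold:
        return "ai_clears"            # AI 'clears' high-confidence normal routine studies
    return "combined_review"          # everything else gets AI output plus human review

print(allocate(Case("CXR-001", ai_normal_score=0.99, complexity="routine")))  # ai_clears
print(allocate(Case("CXR-002", ai_normal_score=0.60, complexity="complex")))  # radiologist_reads
print(allocate(Case("CXR-003", ai_normal_score=0.80, complexity="routine")))  # combined_review
```

The key design choice in such a system is the clearing threshold: it directly trades workload reduction against the risk of an AI-cleared miss, which is why the authors stress pilot programs that measure downstream outcomes, not just accuracy.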
"The breakthrough moment comes when practices stop asking 'Which model?' and start asking 'Which model when?'" he said. "That's where the magic happens—adaptive workflows that respond to real-time clinical needs, not rigid theoretical constructs."
Implementing their vision will require carefully designed pilot programs to test the models in real clinical environments, measuring accuracy, workflow efficiency, radiologist satisfaction, and downstream outcomes.
"Results must be shared openly; the field desperately needs honest case studies," Dr. Rajpurkar said. "Our framework gives radiologists not another promise of AI magic, but a concrete, practical roadmap for integration that acknowledges both the current limitations and the inevitable evolution of AI."
The researchers also suggest establishing a clinical certification pathway for AI systems, something no single agency is equipped to handle alone.
"The Food and Drug Administration needs to maintain safety oversight, but clinical certification requires understanding real-world workflow integration, which goes beyond traditional regulatory scope," Dr. Rajpurkar said. "We need new models, perhaps independent certification bodies with input from multiple stakeholders and consortia that bring together clinical expertise, technical knowledge and implementation experience."
The researchers are awaiting the emergence of general medical AI systems capable of handling routine tasks, preparing cases, and drafting reports, all while learning the patterns of the practice.
"We're not there yet," Dr. Rajpurkar said. "But when these systems can competently manage the breadth of tasks a senior medical resident handles, the entire conversation changes. That's the inflection point we're watching for."