An agile, transparent, and ethics-driven oversight system is needed if the U.S. Food and Drug Administration (FDA) is to balance innovation with patient safety in artificial intelligence-driven medical technologies. That is the takeaway from a new report issued to the FDA, published this week in the open-access journal PLOS Medicine by Leo Celi of the Massachusetts Institute of Technology and colleagues.
Artificial intelligence is becoming a powerful force in healthcare, helping doctors diagnose diseases, monitor patients, and even recommend treatments. Unlike traditional medical devices, many AI tools continue to learn and change after they've been approved, meaning their behavior can shift in unpredictable ways once they're in use.
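To make this concrete, here is a minimal sketch of what post-approval performance monitoring could look like in code. It is illustrative only and not drawn from the paper: the function name, the use of AUC as the metric, and the tolerance threshold are all assumptions, but it shows the basic idea of comparing a deployed model's recent behavior against the performance documented at approval.

```python
# Illustrative sketch (not from the paper): flagging post-approval performance drift.
# The metric (AUC), threshold, and data format are hypothetical assumptions.

def flag_performance_drift(baseline_auc: float,
                           recent_outcomes: list[tuple[int, float]],
                           tolerance: float = 0.05) -> bool:
    """Return True if the model's recent discrimination (AUC) has dropped
    more than `tolerance` below the performance documented at approval.

    recent_outcomes: (true_label, predicted_risk) pairs collected in deployment.
    """
    positives = [score for label, score in recent_outcomes if label == 1]
    negatives = [score for label, score in recent_outcomes if label == 0]
    if not positives or not negatives:
        return False  # not enough recent data to judge either way

    # Rank-based AUC estimate: probability a positive case outranks a negative one.
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in positives for n in negatives)
    recent_auc = wins / (len(positives) * len(negatives))

    return (baseline_auc - recent_auc) > tolerance
```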
In the new paper, Celi and his colleagues argue that the FDA's current system is not set up to keep tabs on these post-approval changes. Their analysis calls for stronger requirements on transparency and for guarding against bias, especially to protect vulnerable populations: if an algorithm is trained mostly on data from one group of people, it may make mistakes when used with others. The authors recommend that developers be required to share information about how their AI models were trained and tested, and that the FDA involve patients and community advocates more directly in decision-making. They also suggest practical fixes, including creating public data repositories to track how AI performs in the real world, offering tax incentives for companies that follow ethical practices, and training medical students to critically evaluate AI tools.
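The kind of bias audit the authors allude to can also be sketched simply. The example below is a hypothetical illustration, not the paper's method: the group labels, record format, and disparity threshold are assumptions, but it shows how error rates might be compared across patient subgroups to flag a model that performs well overall yet fails one population.

```python
from collections import defaultdict

# Illustrative sketch (not from the paper): auditing a model's error rate by
# patient subgroup. Group labels, records, and the disparity threshold are
# hypothetical assumptions for illustration only.

def audit_subgroup_error_rates(records, max_gap=0.10):
    """records: iterable of (group, true_label, predicted_label) tuples.
    Returns per-group error rates and whether the worst gap exceeds max_gap."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, y_true, y_pred in records:
        totals[group] += 1
        errors[group] += int(y_true != y_pred)

    rates = {g: errors[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap > max_gap

# Example: a model that errs far more often for group "B" than for group "A".
records = [("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),
           ("B", 1, 0), ("B", 0, 1), ("B", 1, 1), ("B", 0, 0)]
rates, disparate = audit_subgroup_error_rates(records)
print(rates, disparate)  # {'A': 0.0, 'B': 0.5} True
```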
"This work has the potential to drive real-world impact by prompting the FDA to rethink existing oversight mechanisms for AI-enabled medical technologies. We advocate for a patient-centered, risk-aware, and continuously adaptive regulatory approach—one that ensures AI remains an asset to clinical practice without compromising safety or exacerbating healthcare disparities," the authors say.