In Brief: Who's to Blame When AI Makes a Medical Error?

In the realm of gastrointestinal (GI) endoscopy, artificial intelligence (AI) is becoming an essential tool, especially in the computer-aided detection of precancerous colon polyps during screening colonoscopy. This integration marks a significant advancement in gastroenterology care. However, medical errors remain inevitable, and in some cases AI algorithms themselves may contribute to them. To address this, physician-scientists at the Center for Advanced Endoscopy at Beth Israel Deaconess Medical Center (BIDMC), in collaboration with legal experts from Pennsylvania State University and Maastricht University, are pioneering efforts to develop guidelines on medical liability for AI use in GI endoscopy.

A recent paper, led by BIDMC gastroenterologists Sami Elamin, MD, and Tyler Berzin, MD, and published in Clinical Gastroenterology and Hepatology, represents the first international effort to explore the legal implications of AI in GI endoscopy from the perspective of both gastroenterologists and legal scholars. Berzin, an advanced endoscopist at BIDMC and Associate Professor of Medicine at Harvard Medical School, has led several of the early national and international studies exploring the role of AI in precancerous colon polyp detection, a 'level 1' assistive algorithm. AI tools, however, are poised to advance beyond polyp detection and may soon play a role in predicting polyp diagnoses, potentially replacing the need for tissue biopsy in certain cases. The authors suggest that even higher levels of automation are both technically feasible and imminent, potentially providing physicians with automated endoscopy reports and recommendations.

Lead author Sami Elamin, a clinical fellow in Gastroenterology at BIDMC and Harvard Medical School, used hypothetical scenarios to explore the potential legal accountability of individual physicians or healthcare organizations for a variety of AI-generated errors that could occur in GI endoscopy.

The degree of legal responsibility for AI errors, the authors conclude, will depend on how these tools are integrated into clinical practice and the level of automation of the algorithms. To ensure the safety, proper implementation, and monitoring of these AI tools, collaboration among hospitals, medical groups, and gastroenterologists is crucial. Specialty societies and healthcare organizations must establish guidelines for physician oversight of AI tools at various automation levels. For physicians, meticulous clinical documentation—whether they adhere to or deviate from AI recommendations—remains a cornerstone in minimizing liability risks.

Read the full paper in Clinical Gastroenterology and Hepatology

BIDMC study authors: Sami Elamin and Tyler Berzin

COI: Work conducted by Tyler Berzin was funded by the European Union (grant agreement no. 101057099). Tyler Berzin is a consultant for Medtronic, Wision AI, Microtech, Magentiq Eye, RSIP Vision, and Boston Scientific. Please see the publication for a complete list of disclosures.

Citation: Elamin S, et al. (2024). Artificial Intelligence and Medical Liability in Gastrointestinal Endoscopy. Clinical Gastroenterology and Hepatology. DOI: https://doi.org/10.1016/j.cgh.2024.03.011
