Artificial intelligence (AI) is changing the field and practice of medicine, including legal liability and the perception of who is at fault when a patient experiences harm.
"AI holds promise to improve the quality and safety of health care and to reduce errors and patient harm, but the risk of legal liability is a potential barrier for investment and development of this technology as well as the quality of care," said Michael Bruno, professor of radiology and of medicine at Penn State College of Medicine.
Now, Bruno, working with a team of researchers from Brown University and Seton Hall University School of Law, has found that perceptions of physician liability are shaped by how AI is integrated into a clinician's workflow. The study was published today (March 10) in the journal Nature Health.
The researchers presented mock jurors with a hypothetical malpractice case in which a patient suffered irreversible brain damage because a radiologist failed to detect a brain bleed on a computerized tomography (CT) scan, even though AI had correctly identified the scan as abnormal. Mock jurors were almost 50% more likely to side with the plaintiff, and against the radiologist, when the radiologist reviewed the CT scan only once, after the AI flagged it, than when the radiologist read the scan twice: once before receiving the AI feedback and once after.