Medical AI Hides Flaws in Race, Ethnicity Data

University of Minnesota

Inaccurate race and ethnicity data in electronic health records (EHRs) can negatively affect patient care as artificial intelligence (AI) is increasingly integrated into healthcare. Because hospitals and providers collect such data inconsistently and struggle to classify individual patients accurately, AI systems trained on these datasets can inherit and perpetuate racial bias.

In a new publication in PLOS Digital Health, experts in bioethics and law call for immediate standardization of race and ethnicity data collection methods, and for developers to warrant the quality of race and ethnicity data in medical AI systems. The research synthesizes concerns about why patient race data in EHRs may be inaccurate, identifies best practices that healthcare systems and medical AI researchers can use to improve data accuracy, and provides a new template for medical AI developers to transparently warrant the quality of their race and ethnicity data.

Lead author Alexandra Tsalidis, MBE, notes: "If AI developers heed our recommendation to disclose how their race and ethnicity data were collected, they will not only advance transparency in medical AI but also help patients and regulators critically assess the safety of the resulting medical devices. Just as nutrition labels inform consumers about what they're putting into their bodies, these disclaimers can reveal the quality and origins of the data used to train AI-based health care tools."
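To make the nutrition-label analogy concrete, such a disclosure could take the form of a small structured "datasheet" published alongside a model. The sketch below is purely illustrative and is not the template proposed in the PLOS Digital Health article; every field name and value (collection_method, fraction_imputed, the sample figures, and so on) is a hypothetical assumption.

```python
# Illustrative sketch of a "nutrition label" for race/ethnicity training data.
# This is NOT the authors' published template; all field names, categories,
# and values here are hypothetical.
from dataclasses import dataclass, asdict, field
import json


@dataclass
class RaceEthnicityDataLabel:
    source: str                # e.g., "EHR demographic fields, Health System X"
    collection_method: str     # "self-reported", "staff-observed", or "imputed"
    fraction_missing: float    # share of records with no race/ethnicity value
    fraction_imputed: float    # share filled in algorithmically, not by patients
    categories_used: list = field(default_factory=list)  # classification scheme
    last_audited: str = ""     # ISO date of the most recent data-quality audit


label = RaceEthnicityDataLabel(
    source="EHR demographic fields, Health System X",
    collection_method="self-reported",
    fraction_missing=0.12,
    fraction_imputed=0.03,
    categories_used=[
        "American Indian or Alaska Native",
        "Asian",
        "Black or African American",
        "Native Hawaiian or Other Pacific Islander",
        "White",
    ],
    last_audited="2024-11-01",
)

# Publishing the label in a machine-readable form would let patients and
# regulators inspect how the race and ethnicity data were collected.
print(json.dumps(asdict(label), indent=2))
```

The design intuition is the same one the authors invoke for nutrition labels: a fixed, comparable set of fields makes it easy to spot, at a glance, whether race data were self-reported by patients or imputed by an algorithm, and how much of it is missing.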

"Race bias in AI models is a huge concern as the technology is increasingly integrated into healthcare," senior author Francis Shen , JD, PhD, says. "This article provides a concrete method that can be implemented to help address these concerns."

While more work remains to be done, the article offers a starting point, suggests co-author Lakshmi Bharadwaj, MBE. "An open dialogue regarding best practices is a vital step, and the approaches we suggest could generate significant improvements."

The research was supported by the NIH Bridge to Artificial Intelligence (Bridge2AI) program, and by an NIH BRAIN Neuroethics grant (R01MH134144).
