Promises and Challenges of AI in Medical Research

PNAS Nexus

In an editorial, Monica M. Bertagnolli assesses the promise of artificial intelligence and machine learning (AI/ML) for studying and improving health. Dr. Bertagnolli wrote the editorial in her capacity as director of the National Cancer Institute. AI/ML offer powerful new tools for analyzing highly complex datasets, and researchers across biomedicine are taking advantage of them. However, Dr. Bertagnolli argues that human judgment is still required: humans must select and develop the right computational models and ensure that the data used to train machine learning models are relevant, complete, high quality, and sufficiently copious. Many machine learning insights emerge from a "black box," without transparency into the logic underlying the predictions, which can impede the acceptance of AI/ML-informed methods in clinical practice. "Explainable AI" can crack open the box, giving researchers more access to the causal links the methods are capturing. AI/ML-informed methods must also meet patient needs in the real world, so interdisciplinary collaborations should include those engaged in clinical care. Researchers must also watch for bias: unrecognized confounders such as race and socioeconomic status can produce results that discriminate against some patient groups. AI/ML is an exciting new tool that also demands increased responsibility. Ultimately, AI is only as smart and as responsible as the humans who wield it.

In the same issue, Victor J. Dzau, President of the National Academy of Medicine, shares his perspective on the same topic.
