Patients Demand Doctors, Not Just Algorithms

University of Michigan

Study: Factors for Patient Trust and Acceptance of Medical Artificial Intelligence

Artificial intelligence may be reshaping modern medicine, but when it comes to their own health, patients still want to know a real person is watching the screen.

Researchers at the University of Michigan and Michigan State University found that U.S. adults were significantly more likely to trust and choose medical AI in scenarios featuring better AI performance, U.S. Food and Drug Administration approval, national and local certifications, the presence of a clinician, and the use of representative training data.

The federally funded study, which appears in JAMA Network Open, is important because it underscores a central challenge for health systems: To build trust in medical AI, institutions must prioritize performance and transparency while balancing governance and the enduring importance of the clinician-patient relationship.

Rather than simply asking patients what they value, the study measured how specific features influenced choices, revealing that AI accuracy outweighed every other attribute.


"The advantage of a study like this is that it captures patients' choices, rather than just what they say is important in the abstract," said Ana Bracic, lead author and assistant professor of political science at Michigan State University.

In a national survey of 3,000 adults conducted in December 2024, respondents weighed in on hypothetical AI-assisted medical visits. They repeatedly chose between paired diagnostic scenarios that varied by six features, including AI performance level, the presence of a clinician, federal or institutional oversight, and the quality of data used to train the system. In total, they evaluated 12 mock visits each, generating 36,000 observations. The study sample reflected a broad cross-section of U.S. adults, with participants spanning age, race, income and education levels.


When AI systems were described as performing at or above the level of medical specialists, participants were dramatically more likely to select those visits, boosting the probability of choosing that option by up to 32.5%. Patients gravitated toward AI that outperformed general practitioners and matched or exceeded specialists, said co-author Kayte Spector-Bagdady, the George E. Wantz Research Professor of Bioethics at U-M.

But accuracy alone wasn't enough.


The presence of a clinician, a "human in the loop," increased the likelihood of choosing a visit by 18.4%. Patients still wanted reassurance that a trained professional was overseeing their care. But co-author Nicholson Price, U-M professor of law, warns that even though clinicians are important to patient trust, they won't always be available as much as patients might like.

"Health systems engaged in local governance also shouldn't assume that clinicians will always be trained, or have the time, or have the skill, or even be available at all, to perform governance tasks at the level of the individual patient encounter," he said.

That said, performance mattered more to participants than a human in the loop, an interesting area for future policy exploration, Price said.

Oversight also mattered. Participants preferred AI systems that had formal oversight, whether through FDA approval, Mayo Clinic certification or local hospital validation. And when respondents were told the AI had been trained on representative, high-quality data, they were more likely to favor that option as well.

Participants cited performance and clinician involvement as the primary reasons for their choices. Many respondents expressed comfort with AI as a tool, but not as a standalone decision-maker.

The results of the research suggest that public acceptance of medical AI may hinge less on abstract enthusiasm for innovation and more on visible safeguards and demonstrated effectiveness, the researchers said.

"Patients appear willing to embrace AI, especially when it performs at a high level, but they are far more comfortable when it operates within a system that includes both oversight and human judgment," they wrote.

The study's authors also included U-M law students Sophie Towle and Rina Zhang, and Cornelius James, U-M assistant professor of internal medicine. The research was primarily funded by a partnership between the Michigan Institute for Data & AI in Society and the U-M Institute for Healthcare Policy & Innovation.
