How physicians feel about artificial intelligence in medicine has been studied many times. But what do patients think? A team led by researchers at the Technical University of Munich (TUM) has investigated this for the first time in a large study spanning six continents. The central finding: the worse people rate their own health, the more likely they are to reject the use of AI. The study is intended to help align future medical AI applications more closely with patients' needs.
Patient acceptance is essential for the effective use of artificial intelligence in medicine, whether AI serves as a diagnostic tool, creates individualized treatment plans or supports other applications. The international research network behind the COMFORT study therefore surveyed around 14,000 patients at 74 clinics across 43 countries. To cover a wide range of diseases, the survey was conducted in radiology departments, which carry out X-ray, CT and MRI examinations on behalf of other medical specialties.
A majority of respondents (57.6%) viewed the use of AI in medicine positively. However, some differences emerged within the cohort: men were slightly more favorable (59.1%) than women (55.6%). Approval also rose markedly with greater familiarity with technology and higher self-rated understanding of AI. Among respondents who described themselves as highly knowledgeable about AI, 83.3% were positive about its use in medicine.
Negative view of AI in serious illness
The more severe a person's own illness, the more negative their attitude toward AI tended to be. More than half of patients in very poor health rated medical AI as "extremely negative" or "rather negative" (26.6% and 29.2%, respectively). Among those in very good health, the corresponding figures were just 1.3% and 5.3%.
"The exact reasons for negative attitudes among seriously ill patients cannot be determined from our study," says Dr. Felix Busch, assistant physician at TUM's Institute of Diagnostic and Interventional Radiology and the study's first author. "We suspect that experiences with the health care system, illness burden, and psychological factors play a role."
Explainability of medical AI is crucial
Respondents expressed clear preferences for how AI should be used and designed. For 70.2%, it was important that medical AI be "explainable," allowing users to see the steps leading to its conclusions. In addition, 72.9% wanted these technologies to function as tools, with physicians making the final decisions.
Only 4.4% favored diagnoses made exclusively by AI. At the same time, just 6.6% wanted diagnoses to be made entirely without AI. These questions referred to hypothetical scenarios in which human clinicians and AI were equally accurate. "The results show that explainability must be considered from the outset," says Felix Busch.
Basis for further studies
One methodological limitation is that the survey was conducted in 2023. "Since then, large language models in particular have advanced considerably. Patient attitudes may have changed," says Adjunct Teaching Professor Dr. Keno Bressem, who, together with Adjunct Teaching Professor Dr. Lisa Adams, is senior author of the study. "Follow-up surveys are needed to test this and to align the development of medical AI with patients' needs," adds Dr. Adams. A follow-up study by the COMFORT consortium using the same questionnaire is already underway.
Busch F, Hoffmann L, Xu L, et al. Multinational Attitudes Toward AI in Health Care and Diagnostics Among Hospital Patients. JAMA Netw Open. 2025;8(6):e2514452. doi:10.1001/jamanetworkopen.2025.14452