General public supports use of AI in medicine

ETH researchers have conducted an experiment to find out how jurors in a US jury trial view the liability of physicians who have made use of artificial intelligence.

Artificial intelligence (AI) is already being used to help physicians make decisions, and, especially in personalised medicine, this development is likely to increase. The future use of such technologies to directly obtain treatment recommendations will also raise legal questions, however. How will the use of AI affect the liability of physicians if errors are made and patients experience harm as a result?

Legal experiment

A group of ETH researchers led by Alexander Stremitzer, Professor of Law and Economics, examined this question in an experiment modelled on US jury trials. They surveyed a representative sample of 2,000 American adults who could theoretically be called to perform jury duty; each respondent read one of four different scenarios online and was then asked to assess the physician's liability.

The scenarios varied specifically in terms of whether or not the physician followed the AI recommendation, and whether or not they deviated from the standard treatment.

The risks of contradicting AI

The results for one particular scenario are surprising: a physician is not protected from liability by contradicting the AI, even when it recommends a treatment that deviates from the standard. The authors of the study conclude from this that the general public (in this case, potential jurors) is not as critical of the use of AI in medicine as previously thought.

Reference

Tobia K, Nielsen A, Stremitzer A: When Does Physician Use of AI Increase Liability? Journal of Nuclear Medicine 2021, 62: 17. DOI: 10.2967/jnumed.120.256032
