Overreliance on generative AI risks eroding new and future doctors' critical thinking skills, while potentially reinforcing existing data bias and inequity, warns an editorial published in the online journal BMJ Evidence-Based Medicine.
GenAI tools are already in widespread use despite few institutional policies and little regulatory guidance, point out the authors, who urge medical educators to exercise vigilance and adjust curricula and training to mitigate the technology's pitfalls.
AI is now used in a vast array of tasks, but along with its burgeoning potential comes an increasing risk of overreliance, and with it a host of potential problems for medical students and trainee doctors, note the authors from the University of Missouri, Columbia, USA.
These include:
● automation bias—uncritical trust of automated information after extended use
● cognitive off-loading and outsourcing of reasoning—shifting information retrieval, appraisal, and synthesis to AI, so undermining critical thinking and memory retention
● deskilling—the blunting of clinical skills, a particular concern for medical students and newly qualified doctors who are still acquiring those skills in the first place and lack the experience to probe AI's advice
● reinforcing existing data biases and inequity
● hallucinations—fluent and plausible, but inaccurate, information
● breaches of privacy, security, and data governance—a particular issue given the sensitive nature of healthcare data
The authors suggest various changes to help minimise these risks, including grading the process, rather than only the end product, in educational assessments, on the assumption that learners will have used AI.
Assessments of critical skills that exclude AI also need to be designed, using supervised stations or in-person examinations; this is especially important for bedside communication, physical examination, teamwork, and professional judgement, suggest the authors.
And it may be prudent to evaluate AI itself as a competency, because "data literacy and teaching AI design, development, and evaluation are more important now than ever, and this knowledge is no longer a luxury for medical learners and trainees," they add.
Medical trainees need to understand the principles and concepts underpinning AI's strengths and weaknesses, as well as where and how AI tools can be usefully incorporated into clinical workflows and care pathways. Trainees also need to know how to evaluate these tools' performance and potential biases over time, they emphasise.
"Enhanced critical thinking teaching is especially needed, which can be achieved by building cases where the AI outputs are a mix of correct and intentionally flawed responses…. Learners would then accept, amend, or reject, and justify their decision with primary evidence- based sources," suggest the authors.
Regulators, professional societies, and educational associations around the globe also need to play their part, by producing and regularly updating guidance on the impact of AI on medical education, urge the authors.
They conclude: "Generative AI has documented and well-researched benefits, but it is not without pitfalls, particularly to medical education and novice learners. These tools can fabricate sources, encode bias, lead to over-reliance and have negatively disruptive effects on the educational journey.
"Medical programmes must be vigilant about these risks and adjust their curricula and training programmes to stay ahead of them and mitigate their likelihood."