UC Med Students Probe ChatGPT in Qualitative Research

University of Cincinnati

Newly published research from the University of Cincinnati College of Medicine highlights student-led work in medical education and examines how artificial intelligence (AI) can assist with qualitative research.

The study, published in the journal Medical Science Educator, explored whether ChatGPT can support thematic analysis. Corresponding author and third-year medical student Jonathan Bowden, of Pickerington, Ohio, collaborated with fellow third-year medical student Megha Mohanakrishnan, of San Jose, California, to lead the project. Both Bowden and Mohanakrishnan earned their undergraduate degrees in medical sciences at UC.

Both students also served as learning assistants during their first year of medical school for Andrew Thompson, PhD, professor-educator in the Department of Medical Education. Through their work with Thompson, they joined a project analyzing survey responses from fellow first-year medical students about their thoughts and feelings surrounding cadaveric dissection, which is part of their coursework.

The team identified several common themes: "We noted feelings of gratitude toward donors and their families and appreciation and excitement for a valuable learning opportunity," said Bowden. "We also noted some nervousness and apprehension."

With their manual analysis serving as the gold standard, the students then evaluated whether AI could perform similar thematic coding.

"So much of this project came from genuine curiosity about whether we, as medical students, could use AI to work more efficiently," said Bowden. "We chose ChatGPT because it's free and widely accessible."

Three methods

The team tested three methods for prompting the AI, running each method three times:

  • Method one instructed ChatGPT to code responses using only a list of themes and their definitions.

  • Method two added 25 example responses with assigned themes and brief explanations. The AI was told to reference these examples in its coding.

  • Method three asked ChatGPT to code each of the 25 example responses individually. After each attempt, the students provided feedback on incorrect or missing themes, and the AI revised its theme definitions accordingly. Once the 25 examples were complete, ChatGPT coded the remaining responses using the updated definitions.

"We tried to engage more and more with the AI to improve its accuracy," said Bowden. "Method three had the highest accuracy."
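The three methods above correspond to common prompting strategies: zero-shot coding from definitions, few-shot coding with worked examples, and iterative refinement from human feedback. The sketch below illustrates how such prompts might be assembled; the theme names, prompt wording, and helper functions are illustrative assumptions, since the study's actual prompts are not reproduced here.

```python
# Illustrative sketch of the three prompting strategies; the theme
# definitions and prompt text are hypothetical, not the study's own.

THEMES = {
    "gratitude": "Thankfulness toward donors and their families.",
    "excitement": "Appreciation and enthusiasm for the learning opportunity.",
    "apprehension": "Nervousness or anxiety about dissection.",
}


def method_one(response: str) -> list[dict]:
    """Zero-shot: code a response using only themes and definitions."""
    definitions = "\n".join(f"- {name}: {desc}" for name, desc in THEMES.items())
    return [
        {"role": "system",
         "content": "Code the survey response using these themes:\n" + definitions},
        {"role": "user", "content": response},
    ]


def method_two(response: str, examples: list[tuple[str, str, str]]) -> list[dict]:
    """Few-shot: add pre-coded examples (response, theme, brief explanation)."""
    messages = method_one(response)[:1]  # reuse the theme-definition prompt
    shots = "\n".join(f'Response: "{r}" -> {theme} ({why})'
                      for r, theme, why in examples)
    messages[0]["content"] += "\n\nReference these coded examples:\n" + shots
    messages.append({"role": "user", "content": response})
    return messages


def method_three(examples, code_fn, feedback_fn):
    """Iterative: code each example, revise theme definitions after feedback.

    code_fn(response, definitions) -> predicted theme (a model call in practice);
    feedback_fn(definitions, response, gold, predicted) -> revised definitions.
    """
    definitions = dict(THEMES)
    for response, gold_theme in examples:
        predicted = code_fn(response, definitions)
        if predicted != gold_theme:
            definitions = feedback_fn(definitions, response, gold_theme, predicted)
    return definitions
```

In practice each message list would be sent to a chat model, and in method three the revised definitions would be carried forward to code the remaining responses, as the article describes.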

Reflections

Bowden described the research, which earned the students a national award, as a long process that opened up possibilities for future research projects. "I learned how to take on a project from the ground up," he said. "We gained insight into the planning and execution of a research project."

Bowden said he may want to both practice medicine and teach medical students in the future.

"I appreciate how much work current physicians have done for us and how they are helping us find our passions," he said. Bowden is currently considering internal medicine for residency.
