Research Reveals Divergent Views on AI Harms Among Developers and Educators

Teachers are increasingly using educational tools that leverage large language models (LLMs) like ChatGPT for lesson planning, personalized tutoring and more in K-12 classrooms around the world.

Cornell researchers have found that the developers of such tools and the educators who use them have different ideas about the potential harms these tools may cause, a finding the researchers say underscores the need for educators to be more involved in the tools' development.

"Education technology should center educators, and doing that requires researchers, education technology providers, school leaders and policymakers to come together and take action to mitigate potential harms from the use of LLMs in education," said Emma Harvey, a doctoral student in the field of information science and lead author of "'Don't Forget the Teachers': Towards an Educator-Centered Understanding of Harms from Large Language Models in Education." The paper was presented April 28 at the Association for Computing Machinery's Conference on Human Factors in Computing Systems (CHI) in Yokohama, Japan. It received a Best Paper Award. Her coauthors are Allison Koenecke, assistant professor of information science, and Rene Kizilcec, associate professor of information science, both at the Cornell Ann S. Bowers College of Computing and Information Science.

"These harms are not necessarily just the typical ones we hear of about LLMs, like bias or hallucinations," Harvey said. "It's this broader set of sociotechnical harms."

Harvey and her collaborators interviewed six administrators and developers from education technology (or "edtech") companies and nearly two dozen educators who are navigating the increasing use of artificial intelligence-powered LLMs in schools.

The researchers found that developers from these companies tend to focus much of their time and energy on solving technical challenges, like preventing the kinds of hallucinations, privacy violations or toxic content that LLMs sometimes produce.

Meanwhile, educators were more concerned with the broader impacts of using the tools: inhibiting the development of students' critical thinking skills, hampering students' social development, increasing educator workload and exacerbating systemic inequality, since disadvantaged school districts may be less able to purchase licenses for these tools or may shift funding away from other resources to afford them. Educators were less concerned about the technical issues, saying they knew how to work around them.

"I've noticed that as students become more tech aware, they also tend to lose that critical thinking skill. Because they can just ask for answers," one educator said.

"It's hard to feel like it's equitable, or it's going to be used for public good if it's only available if your district can pony up for it," said another.

A good step toward improving these education technologies is correcting the misalignment between what developers and educators see as potentially harmful, Harvey said.

The researchers outlined four recommendations to facilitate the design and development of educator-centered edtech:

  • Companies should design tools to give educators even more agency to question and correct what LLMs produce;
  • Regulators, whether in government or nonprofit agencies, should develop centralized, clear and independent reviews of LLM-based educational technologies;
  • Researchers and developers of education technologies should explore ways to make these tools more customizable for the educators who use them; and
  • Educator input should be prioritized when school district leaders are considering adopting such tools. Additionally, educators should not be penalized if they choose not to use their schools' LLM-based tools.

"Edtech providers are spending a lot of time on reducing the chance of LLM hallucinations," Harvey said. "Our findings suggest they could also design tools so that educators can intervene when hallucinations happen to correct students' misconceptions through their teaching practices. This can free up time to focus on mitigating other types of harm."

The research team hopes their findings will foster more dialogue between builders of edtech systems and the teachers who use them, Koenecke said.

"The potential harms of LLMs extend far past the technical concerns commonly measured by machine-learning researchers," she said. "We need to be prepared to study the higher-stakes, difficult-to-measure social and societal harms arising from LLM use in the classroom."

This research was supported by the Schmidt Futures Foundation and the National Science Foundation.

Louis DiPietro is a writer for the Cornell Ann S. Bowers College of Computing and Information Science.
