Are We Giving AI a Pulse Through Language?

Quick look

Iowa State researchers are studying how we use anthropomorphizing language - words that give human traits to non-human things - when writing about artificial intelligence. Their findings, the researchers report, can help technical and professional communication practitioners reflect on how they think about AI technologies, both as tools in their writing process and in how they write about AI.

AMES, Iowa - Think, know, understand, remember.

These are just a few of the mental verbs we use every day to describe what happens in a person's mind. But when we use these same words to talk about artificial intelligence, we can unintentionally make AI sound human.

"We use mental verbs all the time in our daily lives, so it makes sense that we might also use them when we talk about machines - it helps us relate to them," said Jo Mackiewicz, professor of English at Iowa State. "But at the same time, when we apply mental verbs to machines, there's also a risk of blurring the line between what humans and AI can do."

Mackiewicz and Jeanine Aune, teaching professor of English and director of the advanced communication program at Iowa State, are members of a research team that recently examined how writers use anthropomorphizing language - or words that give human traits to non-human things - when writing about AI systems. The findings of their new study, "Anthropomorphizing Artificial Intelligence: A Corpus Study of Mental Verbs Used with AI and ChatGPT," were published in Technical Communication Quarterly.

The research team also included Matthew J. Baker, associate professor of linguistics at Brigham Young University, and Jordan Smith, assistant professor of English at the University of Northern Colorado. Both Baker and Smith are graduates of Iowa State University.

How mental verbs can be misleading

Anthropomorphizing mental verbs can be misleading when used to describe AI because they suggest that machines have human-like inner lives, Mackiewicz and Aune said. Words like "think," "know," "understand" and "want" imply beliefs, desires or consciousness. But AI systems don't have any of these; they generate outputs based on patterns, not feelings or intentions.

Mackiewicz and Aune also noted that mental verbs can inadvertently exaggerate AI's abilities. For example, writing "AI decided" or "ChatGPT knows" may make the system sound more autonomous or intelligent than it is and distort expectations of what it can safely or reliably do. And if we talk about AI as if it has intentions, the two ISU researchers added, it can become easier to overlook the real decision-makers: the people who design, train, deploy and oversee AI systems.

"Certain anthropomorphic phrases may even stick in readers' minds and can potentially shape public perception of AI in unhelpful ways," Aune said.

Words on words

In their research, Mackiewicz, Aune and their team used the News on the Web (NOW) corpus - a dataset of more than 20 billion words drawn from a continually updated collection of English-language news articles from 20 countries - to study how often news writers pair anthropomorphizing mental verbs, like "learns," "means" and "knows," with the terms AI and ChatGPT.
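
To make the method a bit more concrete: at its simplest, this kind of corpus work means counting how often a target term is followed by a mental verb. The short Python sketch below illustrates that idea on a hypothetical folder of plain-text articles. It is only an illustration - the folder name, verb list and one-word window are assumptions chosen for the example, not a reconstruction of the team's actual NOW corpus queries.

    # Illustrative sketch only: counts how often "AI" or "ChatGPT" is
    # immediately followed by a mental verb in local plain-text files.
    # The folder name, verb list and one-word window are assumptions
    # made for this example; the study itself queried the NOW corpus.
    import re
    from collections import Counter
    from pathlib import Path

    TERMS = {"AI", "ChatGPT"}
    MENTAL_VERBS = {"thinks", "knows", "understands", "learns", "means", "needs", "wants"}

    def count_pairs(folder: str) -> Counter:
        counts = Counter()
        for path in Path(folder).glob("*.txt"):
            tokens = re.findall(r"[A-Za-z']+", path.read_text(encoding="utf-8"))
            for term, nxt in zip(tokens, tokens[1:]):
                if term in TERMS and nxt.lower() in MENTAL_VERBS:
                    counts[(term, nxt.lower())] += 1
        return counts

    if __name__ == "__main__":
        for (term, verb), n in count_pairs("articles").most_common():
            print(f"{term} {verb}: {n}")

As the study's second finding below shows, raw counts like these cannot tell apart the different senses of a verb such as "needs"; that still takes a reader examining each sentence in context.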

Jo Mackiewicz, professor of English at Iowa State University. Photo by Christopher Gannon/Iowa State University.

The results, Mackiewicz and Aune said, surprised the research team.

In their analysis, the team identified three key findings:

1. The terms AI and ChatGPT are infrequently paired with mental verbs in news articles.

While there isn't a single definitive study of overall anthropomorphism in spoken vs. written language, the research we do have offers us some clues, Mackiewicz said. "Anthropomorphism has been shown to be common in everyday speech, but we found there's far less usage in news writing," she said.

In the research team's analysis, "needs" was identified as the mental verb most frequently paired with the term AI, occurring a total of 661 times, while "knows" was the mental verb most frequently paired with the term ChatGPT, occurring just 32 times.

Mackiewicz and Aune also noted that Associated Press guidelines to avoid attributing human emotions or capabilities to AI models may have affected how often news writers used mental verbs with the terms AI and ChatGPT in recent years.

2. When the terms AI and ChatGPT were paired with mental verbs, they weren't necessarily anthropomorphized.

The research team's analysis found that writers used the mental verb "needs," for example, in two main ways when discussing AI. In many instances, "needs" simply described what AI requires to function, such as "AI needs large amounts of data" or "AI needs some human assistance." These uses weren't anthropomorphic because they treated AI the same way we talk about other non‑human systems - "the car needs gas" or "the soup needs salt."

In other instances, writers used "needs" in a way that suggested an obligation to do or be something - "AI needs to be trained" or "AI needs to be implemented." Aune said many of these instances were written in passive voice, which shifted responsibility back to humans, not AI.

3. Anthropomorphization with mental verbs exists on a spectrum.

Mackiewicz and Aune said the research team also discovered there were times the usage of "needs" edged into more human‑like territory. Some sentences - "AI needs to understand the real world," for example - implied expectations or qualities associated with people, such as fairness, ethics or a personal understanding of the world we live in.

"These instances showed that anthropomorphizing isn't all‑or‑nothing and instead exists on a spectrum," Aune said.

Jeanine Aune, teaching professor of English and director of the advanced communication program at Iowa State University. Photo courtesy of Jeanine Aune.

Writing the future

"Overall, our analysis shows that anthropomorphization of AI in news writing is far less common - and far more nuanced - than we might think," Mackiewicz said. "Even the instances that did anthropomorphize AI varied widely in strength."

The study's findings, Mackiewicz and Aune said, underscore the importance of looking beyond surface-level verb counts and considering how meaning comes from context.

"For writers, this nuance matters: the language we choose shapes how readers understand AI systems, their capabilities and the humans responsible for them," Mackiewicz said.

"Our findings can help technical and professional communication practitioners reflect on how they think about AI technologies as tools in their writing process and how they write about AI," the research team wrote in the published study.

And as AI technologies continue to evolve, writers will continually need to consider how word choices may frame those technologies, Mackiewicz and Aune said.

Future research, the team concluded, "could examine the anthropomorphizing impact of different words and their senses" and "look at whether or not infrequent usage has an outsized effect on how people, including news writers and other professional communicators, think about AI."
