Artificial intelligence-based writing assistants are popping up everywhere, from phones to email apps to social media platforms.
But a new study from Cornell, one of the first to show an impact on users, finds these tools have the potential to function poorly for billions of users in the Global South by generating generic language that makes them sound more like Americans.
The study showed that when Indians and Americans used an AI writing assistant, their writing became more similar, mainly at the expense of Indian writing styles. While the assistant helped both groups write faster, Indians got a smaller productivity boost because they frequently had to correct the AI's suggestions.
"This is one of the first studies, if not the first, to show that the use of AI in writing could lead to cultural stereotyping and language homogenization," said senior author Aditya Vashistha, assistant professor of information science at the Cornell Ann S. Bowers College of Computing and Information Science and faculty lead of Cornell's Global AI Initiative. "People start writing similarly to others, and that's not what we want. One of the beautiful things about the world is the diversity that we have."
The study, "AI Suggestions Homogenize Writing Toward Western Styles and Diminish Cultural Nuances," will be presented by first author Dhruv Agarwal, a doctoral student in the field of information science, at the Association for Computing Machinery's Conference on Human Factors in Computing Systems (CHI), April 28 in Yokohama, Japan.
ChatGPT and other popular AI tools powered by large language models (LLMs) are primarily developed by U.S. tech companies, but they are increasingly used worldwide, including by the 85% of the world's population that lives in the Global South.
To investigate how these tools may be affecting people in non-Western cultures, the research team recruited 118 people, about half from the U.S. and half from India, and asked them to write about cultural topics. Half of the participants from each country completed the writing assignments independently, while half had an AI writing assistant that provided short autocomplete suggestions. The researchers logged the participants' keystrokes and whether they accepted or rejected each suggestion.
A comparison of the writing samples showed that Indians were more likely to accept the AI's help, keeping 25% of the suggestions compared to 19% kept by Americans. However, Indians were also significantly more likely to modify the suggestions to fit their topic and writing style, making each suggestion less helpful, on average.
For example, when participants were asked to write about their favorite food or holiday, AI consistently suggested American favorites, pizza and Christmas, respectively. When writing about a public figure, if an Indian entered "S" in an attempt to type Shah Rukh Khan, a famous Bollywood actor, AI would suggest Shaquille O'Neal or Scarlett Johansson.
The use of AI also led to writing that stereotyped Indian culture and omitted cultural details. When writing about the festival of Diwali without AI's help, one Indian said they would "worship goddess Laxmi" and "pop crackers and eat sweets." Another Indian, writing with AI, said they would "eat traditional Indian breakfast items," and that it was "a time filled with happiness and warmth."
"When Indian users use writing suggestions from an AI model, they start mimicking American writing styles to the point that they start describing their own festivals, their own food, their own cultural artifacts from a Western lens," Agarwal said.
This need for Indian users to continually push back against the AI's Western suggestions is evidence of AI colonialism, researchers said. By suppressing Indian culture and values, the AI presents Western culture as superior, and may not only shift what people write, but also what they think.
"These technologies obviously bring a lot of value into people's lives," Agarwal said, "but for that value to be equitable and for these products to do well in these markets, tech companies need to focus on cultural aspects, rather than just language aspects."
Currently, Vashistha and his colleagues at the Global AI Initiative are looking for industry partners who will engage with them on developing global policies and creating AI to serve the Global South.
Mor Naaman, the Don and Mibs Follett Professor of Information Science at the Jacobs Technion-Cornell Institute at Cornell Tech and Cornell Bowers, is a co-author on the paper.
The work received funding from Infosys, the Microsoft Accelerating Foundation Models Research program and Global Cornell.
Patricia Waldron is a writer for the Cornell Ann S. Bowers College of Computing and Information Science.