AI Content Poses Triple Threat to Reddit Moderators

Reddit bills itself as "the most human place on the internet," but the proliferation of artificial intelligence-generated content is threatening to squeeze some of the humanity out of the news-sharing forum.

Content moderators on some of Reddit's most popular boards see some value in artificial intelligence-generated content, but they're generally fearful that AI will reduce the utility and social value of a community that prides itself on authenticity.

"They were concerned about it on three levels: decreasing content quality, disrupting social dynamics and being difficult to govern," said Travis Lloyd, a doctoral student in information science. "And to respond to this, they were enacting rules in their communities, which set norms, but they also then had to enforce those rules, which is challenging."

Lloyd is lead author of "'There Has To Be a Lot That We're Missing': Moderating AI-Generated Content on Reddit," which is being presented at the ACM SIGCHI Conference on Computer-Supported Cooperative Work and Social Computing, Oct. 18-22 in Bergen, Norway. The work received an honorable mention for best paper.

The senior author is Mor Naaman, the Don and Mibs Follett Professor of Information Science at Cornell Tech, the Jacobs Technion-Cornell Institute and the Cornell Ann S. Bowers College of Computing and Information Science. The other co-author is Joseph Reagle, associate professor of communication studies at Northeastern University.

More than 110 million people are active on Reddit each day, discussing virtually any topic one can imagine - from news and politics to sports, entertainment, business, popular culture, Pokémon and pets. Users can post content (news links, photos, videos), comment on others' content, and upvote or downvote items. The criteria for up- and down-voting vary across communities, known as subreddits.

Earlier research from Lloyd, Naaman and colleagues sought to understand how the many Reddit communities were responding to AI content; this paper goes a step further, engaging directly with content moderators to see exactly how they try to preserve Reddit's humanity in an increasingly AI-infused world.

This work began in 2023, a year after the release of ChatGPT. Lloyd said he was curious about how this new tool would affect the information ecosystem.

"We had a hard time studying it because it (AI-generated content) is hard to detect," he said. "And then we realized community moderators would have a hard time with it, too."

For this work, the researchers recruited moderators of popular subreddits that also had rules regarding the use of AI content. The researchers wound up with 15 moderators who collectively oversaw more than 100 different subreddits, with memberships ranging from 10 people to more than 32 million.

Lloyd conducted the interviews and found that most moderators saw AI content as a negative. There were positives: One, for example, saw value in it as a tool for translating into English from one's native tongue. Said the moderator of the "Ask Historians" subreddit (r/AskHistorians): "… perhaps they're an expert on German history, but they don't speak English all that well. So they write their answer in German, and then use ChatGPT to try to translate it … it is their own intellectual contribution or content."

But the moderator of r/WritingPrompts was less flexible: "Let's be absolutely clear: you are not allowed to use AI in this subreddit, you will be banned."

Of the three main concerns, content quality was top of mind. According to one moderator the authors talked to, AI content "tries to meet the substance and depth of a typical post … however, there are frequent glaring errors in both style and content." Poor style, factual inaccuracy and divergence from the intended topic were the chief issues.

Regarding social dynamics, several respondents expressed fear that AI would negatively impact meaningful one-to-one interactions, citing decreased opportunities for human connection, strained relationships and violation of community values as potential byproducts.

Moderators also feared that their already difficult job of patrolling content on their individual subreddits would be made even tougher with the increase in AI-generated content. Said the moderator of r/explainlikeimfive: "I would rate it as the most threatening concern … It's often hard to detect and we do see it as very disruptive to the actual running of the site."

Naaman said it is currently left up to the moderators - all volunteers - to help Reddit preserve the humanity it cherishes.

"It remains a huge question of how they will achieve that goal," he said. "A lot of it will inevitably go to the moderators, who are in limited supply and are overburdened. Reddit, the research community, and other platforms need to tackle this challenge or these online communities will fail under the pressure of AI."

"This study showed us there is an appetite for human interaction, too," Lloyd said. "And as long as there is that desire, which I don't see going away, I think people will try to create these human-only spaces. I don't think it's hopeless."

This work was supported in part by funding from the National Science Foundation.
