Extremists Exploit Gaming Platforms for Recruitment

Anglia Ruskin University

New research published in the journal Frontiers in Psychology reveals how extremist groups are exploiting the popularity of video games to recruit and radicalise impressionable users.

The study shows that gaming-adjacent platforms, which allow users to chat and live stream while playing, are being used as "digital playgrounds" for extremist activity. It also finds that video game players are being deliberately "funnelled" by extremists from mainstream social media platforms to these sites, in part because of the challenges involved in moderating them.

The research was carried out by Dr William Allchorn and Dr Elisa Orofino, senior research fellows at Anglia Ruskin University's International Policing and Public Protection Research Institute (IPPPRI), and includes interviews with platform content moderators, tech industry experts and those involved in preventing and countering violent extremism.

It found that far-right extremism is the most common ideology shared on these gaming-adjacent platforms. This includes content promoting white supremacy, neo-Nazism and anti-Semitism, often accompanied by misogyny, racism, homophobia and conspiracy theories, including references to QAnon.

Islamist extremism was also reported, though less frequently, alongside "extremist-adjacent" material such as the glorification of school shootings – all content that violates the terms of service of mainstream platforms but often evades detection.

The study explains that hyper-masculine gaming titles, such as first-person shooter games, have particular appeal to extremists, and highlights how the unique nature of online gaming brings together strangers with a common interest.

After initial contact, "funnelling" takes place: interactions move to less regulated gaming-adjacent platforms, which provide an environment where extremists can socialise, share propaganda and subtly recruit.

One interviewee in the study explained how grooming might start: "That's where you have matchmaking. It's where you can build quick rapport with people. But that's the stuff that very quickly moves to adjacent platforms, where there's sort of less monitoring."

A recurring concern among participants was the danger of younger users coming under the influence of extremist influencers, who combine live-streamed gameplay with extremist narratives.

Participants highlighted that law enforcement needs to better understand how these platforms and their subcultures operate, and emphasised the importance of educating parents, teachers and children about the risks of online radicalisation.

Moderators who took part in the study expressed frustration at inconsistent enforcement policies on their platforms and the burden of deciding whether content or users should be reported to local law enforcement agencies.

Although in-game chat is unmoderated, moderators still report being overwhelmed by the volume and complexity of harmful content, including hidden symbols often used to circumvent banned words.

AI tools are being used to assist with moderation, but they struggle to interpret memes and language that is ambiguous or sarcastic. Phrases such as "I'm going to kill you" may be common in gameplay, but are difficult for automated systems to interpret in context.

Co-author of the study Dr William Allchorn, Senior Research Fellow at Anglia Ruskin University (ARU), said: "These gaming-adjacent platforms offer extremists direct access to large, often young and impressionable audiences and they have become a key tool for extremist recruitment.

"Social media platforms have attracted most of the attention of lawmakers and regulators over the last decade, but these platforms have largely flown under the radar, while at the same time becoming digital playgrounds for extremists to exploit.

"The nature of radicalisation and the dissemination of extremist content is not confined to any single platform and our research identified a widespread lack of effective detection and reporting tools.

"Many users don't know how to report extremist content, and even when they do, they often feel their concerns aren't taken seriously. Strengthening moderation systems, both AI and human, is essential, as is updating platform policies to address content that is harmful but technically lawful. Decisive action works and platforms can be doing more to help curb the spread of extremism."
