A new analysis suggests that posts in hate speech communities on the social media website Reddit share speech-pattern similarities with posts in Reddit communities for certain psychiatric disorders. Dr. Andrew William Alexander and Dr. Hongbin Wang of Texas A&M University, U.S., present these findings July 29th in the open-access journal PLOS Digital Health.
The ubiquity of social media has raised concerns about its role in spreading hate speech and misinformation, potentially contributing to prejudice, discrimination and real-world violence. Prior research has uncovered associations between certain personality traits and the act of posting online hate speech or misinformation.
However, whether any associations exist between psychological wellbeing and online hate speech or misinformation has been unclear. To help clarify this question, Alexander and Wang used artificial intelligence tools to analyze posts from 54 Reddit communities relevant to hate speech, misinformation, or psychiatric disorders, or, for neutral comparison, none of those categories. Selected groups included r/ADHD, a community for discussing attention-deficit/hyperactivity disorder; r/NoNewNormal, dedicated to COVID-19 misinformation; and r/Incels, a community banned for hate speech.
The researchers used the large language model GPT-3 to convert thousands of posts from these communities into numerical representations capturing the posts' underlying speech patterns. These representations, or "embeddings," could then be analyzed through machine-learning techniques and a mathematical approach known as topological data analysis.
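The embedding-and-compare step can be sketched in a few lines of Python. This is a minimal illustration, not the authors' actual pipeline: the random vectors below are stand-ins for the high-dimensional embeddings a model such as GPT-3 would return, and the centroid cosine-similarity comparison is just one simple way such embeddings can be compared (the study itself used machine learning and topological data analysis).

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)

# Stand-in embeddings: in the study, each post is converted to a numerical
# vector by a large language model; here we use random 256-dimensional
# vectors purely to show the shape of the comparison.
community_posts = {
    "community_a": rng.normal(size=(100, 256)),  # 100 posts x 256 dims
    "community_b": rng.normal(size=(100, 256)),
}

# Average each community's post embeddings into a centroid, then measure
# how similar the two communities' overall speech patterns are.
centroids = {name: posts.mean(axis=0) for name, posts in community_posts.items()}
sim = cosine_similarity(centroids["community_a"], centroids["community_b"])
print(f"centroid cosine similarity: {sim:.3f}")
```

Communities whose posts occupy nearby regions of the embedding space, as hate speech and certain psychiatric-disorder communities did here, would score higher on comparisons of this kind.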
This analysis showed that speech patterns in hate speech communities were similar to speech patterns in communities for complex post-traumatic stress disorder, and borderline, narcissistic, and antisocial personality disorders. Links between misinformation and psychiatric disorders were less clear, though some connections to anxiety disorders emerged.
Importantly, these findings do not at all suggest that people with psychiatric disorders are more prone to hate speech or misinformation. For one, there was no way of knowing if the analyzed posts were made by people actually diagnosed with disorders. More research is needed to understand the links and explore such possibilities as hate speech communities mimicking speech patterns seen in psychiatric disorders.
The authors suggest their findings could help inform new strategies to combat online hate speech and misinformation, such as treating them using elements of therapy developed for psychiatric disorders.
The authors add, "Our results show that the speech patterns of those participating in hate speech online have strong underlying similarities with those participating in communities for individuals with certain psychiatric disorders. Chief among these are the Cluster B personality disorders: Narcissistic Personality Disorder, Antisocial Personality Disorder, and Borderline Personality Disorder. These disorders are generally known for either lack of empathy/regard towards the wellbeing of others, or difficulties managing anger and relationships with others."
Alexander notes, "While we looked for similarities between misinformation and psychiatric disorder speech patterns as well, the connections we found were far weaker. Besides a potential anxiety component, I think it is safe to say at this point in time that most people buying into or spreading misinformation are actually quite healthy from a psychiatric standpoint."
Alexander concludes, "I want to emphasize that these results do not mean that individuals with psychiatric conditions are more likely to engage in hate speech. Instead, it suggests that people who engage in hate speech online tend to have similar speech patterns to those with cluster B personality disorders. It could be that the lack of empathy for others fostered by hate speech influences people over time and causes them to exhibit traits similar to those seen in Cluster B personality disorders, at least with regards to the target of their hate speech. While further studies would be needed to confirm this, I think it is a good indicator that exposing ourselves to these types of communities for long periods of time is not healthy and can make us less empathetic towards others."
In your coverage, please use this URL to provide access to the freely available paper in PLOS Digital Health: http://plos.io/4028vQ5
Citation: Alexander AW, Wang H (2025) Topological data mapping of online hate speech, misinformation, and general mental health: A large language model based study. PLOS Digit Health 4(7): e0000935. https://doi.org/10.1371/journal.pdig.0000935
Author countries: United States
Funding: AWA was a Burroughs Wellcome Fund Scholar supported by a Burroughs Wellcome Fund Physician Scientist Institutional Award (G-1020069) to the Texas A&M University Academy of Physician Scientists (https://www.bwfund.org/funding-opportunities/biomedical-sciences/physician-scientist-institutional-award/grant-recipients/). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. HW received no specific funding for this work.