A new tool shows it is possible to turn down the partisan rancor in an X feed – without removing political posts and without the direct cooperation of the platform.
The Stanford-led research, published in Science, also indicates that it may one day be possible to let users take control of their own social media algorithms.
A multidisciplinary team created a seamless, web-based tool that reorders a user's feed, moving posts lower when they express antidemocratic attitudes or extreme partisan animosity, such as advocating violence against, or the jailing of, supporters of the opposing party.
In an experiment using the tool with about 1,200 participants over 10 days during the 2024 election, those who had antidemocratic content downranked showed more positive views of the opposing party. The effect was also bipartisan, holding true for people who identified as liberals or conservatives.
"Social media algorithms directly impact our lives, but until now, only the platforms had the ability to understand and shape them," said Michael Bernstein, a professor of computer science in Stanford's School of Engineering and the study's senior author. "We have demonstrated an approach that lets researchers and end users have that power."
The tool could also open ways to create interventions that not only mitigate partisan animosity, but also promote greater social trust and healthier democratic discourse across party lines, added Bernstein, who is also a senior fellow at the Stanford Institute for Human-Centered Artificial Intelligence.
For this study, the team drew on previous Stanford sociology research identifying categories of antidemocratic attitudes and partisan animosity that can threaten democracy. In addition to advocating extreme measures against the opposing party, these attitudes include rejection of any bipartisan cooperation, skepticism of facts that favor the other party's views, and a willingness to forgo democratic principles to help the favored party.
Preventing emotional hijacking
There is often an immediate, unavoidable emotional response to seeing this kind of content, said study co-author Jeanne Tsai.
"This polarizing content can just hijack their attention by making people feel bad the moment they see it," said Tsai, a professor of psychology in the Stanford School of Humanities and Sciences .
The study brought together researchers from the University of Washington and Northeastern University, as well as Stanford, to tackle the problem from a range of disciplines, including computer science, psychology, information science, and communication.
The study's first author, Tiziano Piccardi, a former postdoctoral fellow in Bernstein's lab, built a browser extension coupled with an artificial intelligence large language model that scans posts for these antidemocratic and extreme negative partisan sentiments. The tool then reorders the posts in the user's X feed in a matter of seconds.
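The core idea lends itself to a short illustration. The Python sketch below shows one plausible shape for such a reranker: score each post, then stable-sort the feed so flagged posts sink without being removed. Everything here is a hypothetical stand-in for demonstration purposes, including the `Post` class, the `animosity_score` stub, and the keyword check; the actual tool relies on an LLM classifier, and its real code is the version the team has released.

```python
# Minimal sketch of the downranking idea, not the published tool's code.
# Score each post for antidemocratic or hostile-partisan content, then
# stable-sort so flagged posts sink while all other posts keep their order.
from dataclasses import dataclass

@dataclass
class Post:
    id: str
    text: str

def animosity_score(post: Post) -> float:
    """Placeholder classifier. The real system queries a large language model
    to rate a post for antidemocratic attitudes or extreme partisan animosity;
    this trivial keyword check merely fakes a score for demonstration."""
    hostile_markers = ("jail them", "destroy the", "enemies of")
    return 1.0 if any(m in post.text.lower() for m in hostile_markers) else 0.0

def rerank(feed: list[Post], threshold: float = 0.5) -> list[Post]:
    # Stable sort on a boolean key: posts scoring above the threshold move
    # below the rest, relative order within each group is preserved, and
    # no post is ever removed from the feed.
    return sorted(feed, key=lambda p: animosity_score(p) >= threshold)

feed = [
    Post("1", "Jail them all, the other party are enemies of the people"),
    Post("2", "New poll out today on the Senate race"),
    Post("3", "Great turnout at the local debate last night"),
]
print([p.id for p in rerank(feed)])  # -> ['2', '3', '1']
```

The stable sort is the key design choice: it changes only the relative position of flagged posts, mirroring the study's intervention, which demoted incendiary content rather than deleting it.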
Then, in separate experiments, the researchers had participants who consented to have their feeds modified view X for 10 days with this type of content either downranked or upranked, and compared their reactions with those of a control group. No posts were removed; the more incendiary political posts simply appeared lower or higher in participants' content streams.
The impact on polarization was clear, said Piccardi, who is now an assistant professor of computer science at Johns Hopkins University.
"When the participants were exposed to less of this content, they felt warmer toward the people of the opposing party," he said. "When they were exposed to more, they felt colder."
Small change with a potentially big impact
Before and after the experiment, the researchers surveyed participants on their feelings toward the opposing party on a scale of 1 to 100. Among participants who had the negative content downranked, attitudes improved by an average of two points – a shift equivalent to the estimated change in attitudes across the general U.S. population over a period of three years.
Previous studies on social media interventions to mitigate this kind of polarization have shown mixed results. Those interventions have also been rather blunt instruments, the researchers said, such as ranking posts chronologically or stopping social media use altogether.
This study shows that a more nuanced approach is possible and effective, Piccardi said. It can also give people more control over what they see, which might improve their social media experience overall: downranking this content not only decreased participants' polarization but also reduced their feelings of anger and sadness.
The researchers are now exploring other interventions that use a similar method, including ones aimed at improving mental health. The team has also made the tool's code available so that other researchers and developers can build their own ranking systems independent of a social media platform's algorithm.