A new study points to algorithm design as a potential way to reduce echo chambers, and the polarization that comes with them, online.
Scroll through social media long enough and a pattern emerges. Pause on a post questioning climate change or taking a hard line on a political issue, and the platform is quick to respond, serving up more of the same viewpoints, delivered with growing confidence and certainty.
That feedback loop is the architecture of an echo chamber: a space where familiar ideas are amplified, dissenting voices fade, and beliefs can harden rather than evolve.
But new research from the University of Rochester has found that echo chambers might not be an inevitable fact of online life. Published in IEEE Transactions on Affective Computing, the study argues that they are partly a design choice, one that could be softened with a surprisingly modest change: introducing more randomness into what people see.
The interdisciplinary team of researchers, led by Professor Ehsan Hoque from the Department of Computer Science, created experiments to identify belief rigidity and assess whether introducing more randomness into a social network could help reduce it. The researchers studied how 163 participants reacted to statements about topics like climate change after using simulated social media channels, some with feeds modeled on more traditional social media outlets and others with more randomness.
Importantly, "randomness" in this context doesn't mean replacing relevant content with nonsense. Rather, it means loosening the usual "show me more of what I already agree with" logic that drives many algorithms today. In the researchers' model, users were periodically exposed to opinions and connections they did not explicitly choose, alongside those they did.
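To make that idea concrete, here is a minimal, hypothetical sketch of how such a feed might be assembled. It is not the algorithm the researchers used; the function names and the `randomness` mixing parameter are illustrative assumptions. The point is simply that a fraction of feed slots can be filled by uniform sampling rather than by relevance ranking, without discarding personalization altogether.

```python
import random

def build_feed(candidate_posts, relevance_score, feed_size=20, randomness=0.3):
    """Blend a relevance-ranked feed with randomly chosen posts.

    `randomness` is a hypothetical mixing parameter: 0.0 reproduces a purely
    personalized feed, 1.0 a fully random one. Illustrative sketch only.
    """
    # Rank everything the usual way: most "relevant to this user" first.
    ranked = sorted(candidate_posts, key=relevance_score, reverse=True)

    n_random = round(feed_size * randomness)
    n_personalized = feed_size - n_random

    # Keep the top personalized picks...
    feed = ranked[:n_personalized]

    # ...then fill the remaining slots by sampling uniformly from everything
    # else, so users also see posts they did not explicitly choose.
    remaining = ranked[n_personalized:]
    feed += random.sample(remaining, min(n_random, len(remaining)))

    random.shuffle(feed)  # interleave rather than append at the bottom
    return feed
```

In a design like this, the mixing parameter could even be a user-facing control, which is in the spirit of the researchers' suggestion that variety be introduced while users keep control over their feeds.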
A tweak to the algorithm, a crack in the echo chamber
"Across a series of experiments, we find that what people see online does influence their beliefs, often pulling them closer to the views they are repeatedly exposed to," says Adiba Mahbub Proma, a computer science PhD student and first author of the paper. "But when algorithms incorporate more randomization, this feedback loop weakens. Users are exposed to a broader range of perspectives and become more open to differing views."
The authors, who also include Professor Gourab Ghoshal from the Department of Physics and Astronomy; James Druckman, the Martin Brewer Anderson Professor of Political Science; PhD student Neeley Pate; and Raiyan Abdul Baten '16, '22 (PhD), say that the recommendation systems social media platforms use can drive people into echo chambers that make divisive content more attractive. As an antidote, the researchers recommend simple design changes that do not eliminate personalization but introduce more variety while still giving users control over their feeds.
The findings arrive at a moment when governments and platforms alike are grappling with misinformation, declining institutional trust, and polarized responses to elections and public health guidance. Proma recommends that users keep the results in mind when reflecting on their own social media habits.
"If your feed feels too comfortable, that might be by design," says Proma. "Seek out voices that challenge you. The most dangerous feeds are not the ones that upset us, but the ones that convince us we are always right."
The research was partially funded through the Goergen Institute for Data Science and Artificial Intelligence Seed Funding Program.