Flagging potentially misleading posts on social media significantly reduces the number of reposts, likes, replies, and views such content generates, according to a new study co-authored by Yale researchers. The finding, they say, suggests that crowd-sourced fact-checking can be a useful tool for curbing online misinformation.
The study, which was co-authored by researchers at the University of Washington and Stanford, appears in the journal Proceedings of the National Academy of Sciences.
"We've known for a while that rumors and falsehoods travel faster and farther than the truth," said Johan Ugander, an associate professor of statistics and data science in Yale's Faculty of Arts and Sciences, deputy director of the Yale Institute for Foundations in Data Science, and co-author of the new study.
"Rumors are exciting, and often surprising," he added. "Flagging such content seems like a good idea. But what we didn't know was if and when such interventions are actually effective in keeping it from spreading."
For the study, the researchers focused on Community Notes, a misinformation management framework adopted in 2021 by X (the social media platform formerly known as Twitter). Community Notes enables X users to propose and vet fact-checking notes that are attached to potentially misleading posts. Earlier this year, TikTok and Meta announced that they, too, are adopting the same type of misinformation management framework on their platforms.