Facebook Labels Lower User Engagement on Gov-Controlled Media

Carnegie Mellon University

Propaganda is an information quality concern on social media. Despite considerable focus on mis- and disinformation, government-controlled social media pages have quietly shared information designed to nudge users toward particular beliefs, often without posting content that is clearly false. To counter these persuasion attempts by foreign governments, Facebook debuted a "state-controlled media" label in June 2020 to alert users when a post comes from a page associated with certain governments, including those of Russia and China.

In a new set of studies, researchers explored the causal impact of these labels on users' intentions to engage with Facebook content. They found that the labels reduced engagement on social media if users noticed them or were trained to notice them, and if the labels were associated with a country that was perceived negatively.

The studies were conducted by researchers at Carnegie Mellon University (CMU), Indiana University (IU), and the University of Texas at Austin (UT Austin). An article on the studies is published in Information Systems Research.

"Propaganda is a major concern on social media, but it has not received the same attention that mis- and disinformation have received, and it can be more insidious and even less obvious," explains Avinash Collis, professor of digital economy at CMU's Heinz College, who coauthored the research. "By understanding the impact of labeling propaganda, social media companies, news media companies, and users will be able to implement and respond to the labels more appropriately."

Propaganda efforts on social media include authoritarian countries such as Russia, China, and Iran amplifying antidemocratic narratives. To determine whether the labels slow the spread of this content, the researchers examined how labeling affects users' beliefs and behaviors on social media. They conducted two online randomized experiments and, in a third study, analyzed field data from Facebook from before and after the company began applying the labels.

In the first experiment, 1,200 individuals with U.S. Facebook accounts were shown posts with and without state-controlled media labels. Users who saw headlines labeled as coming from Chinese and Russian state-controlled media were less likely to believe, like, read, share, and comment on the posts than users who saw unlabeled headlines, but only if they actively noticed the label.

In the second experiment, the researchers tested whether it was the label itself or the country listed in the label that influenced users. Nearly 2,000 individuals with U.S. Facebook accounts were shown posts with and without state-controlled media labels. Users' behavior was tied to public sentiment toward the country listed on the label. For example, they responded positively toward content labeled as coming from Canadian state-controlled media and negatively toward content labeled as coming from Chinese and Russian state-controlled media.

In addition, training users on the labels, by notifying them of their presence and testing them on their meaning, significantly increased the likelihood that users noticed the labels and believed posts when they were labeled as coming from Canadian state-controlled media.

In the third study, which analyzed field data, the researchers tested the efficacy of the labels by examining users' engagement before and after June 4, 2020, when Facebook began using labels to identify Chinese and Russian state-controlled pages. The labeling policy had a significant effect: after it took effect, labeled posts were shared 34% less and liked 46% less than before the labels were added, consistent with the results of the two online experiments.

"Our three studies suggest that state-controlled media labels reduced the spread of misinformation and propaganda on Facebook, depending on which countries were labelled," says Patricia L. Moravec, assistant professor of operations and decision technologies at IU's Kelley School of Business, who led the study.

Among the studies' limitations, the authors say they were unable to distinguish whether their findings were due to the labels themselves or to Facebook's black-box newsfeed algorithms, which may downrank labeled posts. In addition, the online experiments measured users' beliefs, intentions to share, and intentions to like pages, rather than actual behavior.

"Although efforts are being made to reduce the spread of misinformation on social media platforms, efforts to reduce the influence of propaganda may be less successful," suggests Nicholas Wolczynski, a PhD student in computational data science and machine learning at UT Austin's McCombs School of Business, who coauthored the study. "Given that Facebook debuted the new labels quietly without informing users, many likely did not notice the labels, reducing their efficacy dramatically."

The authors suggest that social media platforms clearly alert users to labeling policy changes, explain what the labels mean, and display them in ways that users will notice.
