Philosopher shines light on dark world of misinformation

A consequence of the social media business model is that their algorithms often serve us extreme content that arouses emotion, whether or not the content is based in fact.

Deception, conspiracy theories, and misleading information topics of UBCO event

Misinformation isn’t hard to find.

It pollutes news feeds on social media, fills the pages of numerous websites, and is even alive and well on some cable television channels.

But how can we combat the spread of misinformation and conspiracy theories while preserving free speech?

Dr. Dan Ryder is an associate professor of philosophy in the Irving K. Barber Faculty of Arts and Social Sciences and organizer of this year’s Roger Gale Symposium, a virtual event called the Misinformation Age, taking place Thursday, March 4 and Friday, March 5.

You’ve expressed growing concern over how authoritarian regimes are using mass media to misinform the public. Can you explain what you mean?

When I think about authoritarian regimes in relation to misinformation, two very different things come to mind. First, we have regimes like those in Russia and China, which exert careful control over social media content and spread misinformation within their own countries and abroad.

Second, we have these partial democracies, or democracies with authoritarian leanings, using the problem of misinformation as an excuse to interfere with free speech.

For example, the free press was already quite restricted in Singapore, but now they’re forced to operate within new laws created to stop the spread of ‘false statements of fact,’ with rule-breakers facing stiff penalties and government powers in place that can require retractions and corrections.

This means the Singapore government has added a new tool to its arsenal to control the media. It’s part of a worrisome worldwide trend.

As you mentioned, social media platforms are tools increasingly used to spread misinformation. Do owners of these platforms have a moral obligation to monitor and remove this type of content?

This is a tricky issue. Facebook’s CEO Mark Zuckerberg quite plausibly says that it isn’t his company’s place to decide what’s true, and that their role is only to provide an unfettered venue for the free exchange of ideas.

But on the other hand, Facebook, Twitter, YouTube and others aren’t just allowing people to post their ideas online without interference. Their opaque newsfeeds and search algorithms determine what users see; and they want us to keep reading, watching or scrolling so they can collect our data and use it to sell advertising.

A consequence of this business model is that their algorithms often serve us extreme content that arouses emotion, whether or not the content is based in fact. It’s Facebook that’s suggesting we join this group, or YouTube recommending this video. And there’s research to suggest these algorithms have pulled people down rabbit holes of falsehood, conspiracy and hatred. So it’s hard to argue the tech companies are blameless.

That said, they seem to have taken more responsibility lately. For example, Twitter removed Donald Trump’s account after ample evidence he was communicating in bad faith. But the question of how to fix the problem without harming free speech remains to be answered.

In your opinion, is it possible to combat misinformation while preserving free speech?

There are three different places to take action against misinformation: production, distribution and consumption. We’ve talked a bit about the first two, but it’s actually the third one, information consumption, where I think intervening is least likely to harm free speech.

The idea is to make sure people are resistant to misinformation: that they are media literate and have good critical thinking skills.

If people are more media literate, it’s harder for misinformation to grow and spread. For example, good thinkers rightly scoff at QAnon nonsense and don’t share it.

Finland has been a bit of a poster child for this strategy. Their efforts to integrate media literacy and critical thinking beginning from the earliest ages are much lauded.

Is it enough? I’m not sure. And since it’s a pretty long-term solution, does it leave our short-term problems unaddressed? I’m hoping the panellists at this week’s event can cast some light on these questions.

Public Release. This material comes from the originating organization and may be of a point-in-time nature, edited for clarity, style and length.