From hashtags to deepfakes, Dr Cameron Edmond from the UNSW Faculty of Art & Design addresses how to navigate the murky waters of online misinformation.
Fake news. Misinformation. Disinformation. When does a lie become the truth? False information shared online is so powerful, it is believed to have influenced the 2016 US presidential election, the Brexit referendum and the 2019 Australian federal election.
Dr Cameron Edmond from the UNSW Faculty of Art & Design says it is now harder than ever to sort the truth from the lies online.
“Misinformation and edited images have been around for a long time,” he says. “As a society, we’ve learnt ways to gauge that kind of thing, not perfectly, but to a point. But with bots, we’ve automated some of the process and found ways to weave this information into the conversation. It becomes a lot more convincing when you see something as part of the discussion around you, rather than just stemming from a single source.”
Is it a bot? Is it a human?
Dr Edmond says the word ‘bot’ originally referred to something automated, such as a piece of software posting to a Twitter account.
“Now you see the term being applied to anyone who seems a bit dodgy,” he says. “A troll account might be ‘bot-like’ in that it is pushing a particular narrative or rhetoric, but it is actually human. It’s not uncommon for people to call these accounts ‘bots’, and while I see how people are making that leap, it once again adds fuel to the fire.”
He says people have become so sceptical and pessimistic about social media that they assume automation is behind every troll or piece of misinformation.
“Again, that is all part of the plan: to derail the conversation and stop people from trusting one another.”
Deepfakes are also of concern because they add an audio-visual element that can be believable, says Dr Edmond.
Deepfakes refer to an AI-based technology that allows individuals to produce or alter content such as video or audio, generating realistic footage or sound of events that never actually took place.
The most common victims of deepfakes are politicians and celebrities. One of the most circulated deepfakes online is a 2018 video of former President Barack Obama swearing and calling President Donald Trump a dipsh*t.
“Not only do we have fake news sources able to generate this content, we then see it shared around, either by bots, trolls, or just people who have been tricked,” Dr Edmond says. “If a friend of yours who you trust has been fooled by a bot or a deepfake and promulgates this content online, you’re more likely to believe it.”
Dr Edmond says recent social media misinformation campaigns, such as #ArsonEmergency, which claimed the Australian bushfires were started by arsonists, derail the conversation.
“People end up focused on anecdote rather than fact,” he says. “But what is really disturbing, and really where bots and trolls earn their keep, is they help cultivate distrust and resentment between different ideologies. If you look at these bot accounts, they tend to follow similar narratives.
“The arson emergency posts generally sport an Aussie flag in their username, or their bios hark back to the ‘good old days’ in some way. They claim to be sick of ‘the left’ telling them what to do. The narrative is going to resonate with certain people within society and nudge them closer to a more extreme view.”
The result, Dr Edmond says, is a bigger political divide and a conversation thrown wildly off course.
“Instead of discussing facts, we end up having conversations about something made up. That is what makes these bots so sinister.
“If we think of social media as a space where people congregate, then these bots are basically troublemakers. They are there to rile others up, stir the pot and make a fight break out. They aren’t doing the fighting themselves. Once people are agitated, sharing the content and buying into the narrative, they’ll do the fighting themselves.”
Deepfakes add a level of believability
Dr Edmond says deepfakes add another layer to this.
“It’s still possible to tell a deepfake from the real deal – generally by looking at the edges of the face. But that’s a small comfort given how real some of them look, especially if someone is viewing the video at a lower resolution or on a smaller screen.
“We know videos can spread like wildfire and while a fake news article re-tweeted by a bot might cultivate a little bit of legitimacy, audio and visuals add more layers of believability. Once again, the real problem comes back to breeding mistrust and political divides. When a fake video seems real, it makes it harder to trust any outlets, so why believe the person trying to present you with facts?”
How do we discern the truth?
Dr Edmond says individuals need to be more careful about what they share online and to look at where that information is coming from.
“It isn’t enough to just see that an account was made recently – as cyber security organisation Symantec found, it is not uncommon for bot accounts to be set up early on for later use, as that helps build their credibility. Individuals need to use more judgement. You should look at the kind of narratives these bots build in their bios. If you look at the average Twitter user, their tweets are generally a bit of a mixed bag. It’s rare to find legitimate accounts that are only interested in one specific conspiracy or political movement.”
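The warning signs Dr Edmond describes – a bio built around a single grievance narrative, a feed that is anything but a mixed bag, and repetitive amplification – can be sketched as a simple checklist. The function below is purely illustrative: the account fields (`bio`, `tweet_topics`, `recent_tweets`) and the keyword list are assumptions for the sketch, not part of any real platform API.

```python
def bot_likeness_score(account: dict) -> int:
    """Count how many of the article's warning signs (0-3) an account shows.

    All field names are hypothetical stand-ins for data a platform
    API might return; this is a heuristic sketch, not a real detector.
    """
    score = 0
    bio = account.get("bio", "").lower()

    # Sign 1: a bio pushing a single nationalist/grievance narrative
    # (flags, "good old days" rhetoric, complaints about "the left").
    narrative_markers = ["good old days", "the left", "🇦🇺"]
    if any(marker in bio for marker in narrative_markers):
        score += 1

    # Sign 2: tweets are not a "mixed bag" – nearly all on one topic.
    topics = account.get("tweet_topics", [])
    if topics:
        dominant = max(set(topics), key=topics.count)
        if topics.count(dominant) / len(topics) > 0.9:
            score += 1

    # Sign 3: bursts of near-identical posts, i.e. amplification
    # rather than conversation.
    texts = account.get("recent_tweets", [])
    if texts and len(set(texts)) < len(texts) / 2:
        score += 1

    return score
```

A real system would weigh many more signals (posting cadence, follower graphs, account age used carefully, as the Symantec finding warns), but the checklist shape is the same: no single sign is conclusive, so evidence is accumulated.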
Another thing to be aware of, he says, is that the lead is often buried. Bots work because they make the misinformation seem like part of the conversation. Fake news bots, like those Twitter recently shut down during the Hong Kong protests, are used to generate the content, and might be attached to a website or other figure that grants them a bit of legitimacy. The posts are then retweeted and shared by other bots that appear to be regular people.
“If you’re unsure about someone, follow the trail of retweets.”
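“Following the trail of retweets” amounts to walking back from a shared post to whoever originated it. A minimal sketch of that walk, assuming a hypothetical mapping from each post ID to the post it retweeted (roughly what a platform API exposes as referenced-tweet data):

```python
def trace_to_source(post_id: str, retweeted_from: dict) -> list:
    """Return the chain of post IDs from a retweet back to the original.

    `retweeted_from` maps a post's ID to the ID of the post it shared;
    it is a hypothetical stand-in for real platform API data.
    """
    chain = [post_id]
    seen = {post_id}
    while post_id in retweeted_from:
        post_id = retweeted_from[post_id]
        if post_id in seen:  # guard against cycles in malformed data
            break
        chain.append(post_id)
        seen.add(post_id)
    return chain
```

The last entry in the chain is the original source – the account worth scrutinising, rather than the friendly-looking account that put the post in your feed.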
Better digital literacy required
Dr Edmond says there is a need for better digital literacy. People have believed the tabloids and other false information for years, so it will always be a problem. He suggests we need to look at how we are educating not just kids, but everyone on how they digest information they find online.
“We need to teach people how to vet their sources and follow leads. It’s a difficult task, but not an impossible one. If we can instil in people the need to ask, ‘okay, where is this information coming from? Why aren’t more places reporting on it?’, we’re half-way there.
“The solution doesn’t need to be about building distrust, either. We can establish a positive mindset of, ‘well, this news is concerning or of interest to me. I’m going to find out more about it.'”
“Basically, the tools and technology are changing, but the solution is the same as it ever was. Don’t believe everything you read.”
An Australian Parliamentary inquiry into fake news will examine how social media can be used to undermine Australia’s democracy and values, with the final report due to be released by May 2020.