AI Deepfakes Invade Social Media

Macquarie University/The Lighthouse
AI-generated content is getting harder to detect on social media, but there are still some tricks you can use to work out what's real.

It's happened to most of us – you're scrolling on social media when you find a video that delights or surprises you.

Maybe it's a sassy grandma walking her pet alligator on a leash, or perhaps it's a flock of hens bouncing solemnly on a trampoline in the middle of the night.

You send it to a friend only for them to reply with two letters: AI. You feel a little foolish, even ashamed.

You're not alone.

This grandmother with her pet alligator fooled plenty of people online.

"I got tricked a couple of months back," admits Professor Dali Kafaar, Executive Director of the Macquarie University Cyber Security Hub.

"These interesting documentary-style videos about animal behaviour started appearing on social media, and I was tricked by the first one."

The cyber security researcher started thinking about why he'd been so engaged by the video and why so many people are being fooled by what they see online.

The answer, he realised, was as simple as it was troubling: artificial intelligence is getting too good at producing believable content online.

"AI-generated generated content is becoming so sophisticated you can barely distinguish what's real or genuine from the artificially generated," Kafaar says.

The end of reality?

Until very recently, we could tell something was made by AI because it looked, well, incredibly fake. But those days are already well behind us – AI technology has evolved that quickly.

"Many of us would remember, not so long ago, that when we were looking at images or videos, we could look for useful cues like extra fingers, or teeth that were too symmetrical," Kafaar says.

Even AI voices, once distinguishable because they were too emotionally flat or lacked the hesitations and interjections of normal human conversation, are hard to pick now.

As the founder and CEO of tech startup Apate.AI, Kafaar uses conversational AI bots to interact with online scammers. The more time fraudsters spend engaging the bots, the less time they have to swindle victims. Meanwhile, the data the bots collect is used to help organisations stay ahead of the latest scams.

"If I tested 100 people on our bots, more than 90 per cent of them would be fooled," he says. "Human speech is becoming inefficient as a metric."

Dr Stephen Collins, from the School of Communication, Society and Culture at Macquarie University, is the co-editor of the Handbook of AI-based media disruption.

Real or fake? This cow at the beach raises questions.

"We're in such a strange time in society where we don't know whether anything's real or not," he says.

Collins admits he too has been tricked into thinking AI-generated content is real. For him, it was a clip of English physicist Brian Cox sharing breaking news about evidence of extraterrestrial life. It turned out to be a deepfake.

"Using generative AI is no longer an underground outlaw activity," he says. "We're even starting to see major news organisations using AI to generate stuff . It's going to create havoc with people's understanding of the world."

Collins cites the dead internet theory, a decade-old conspiracy theory claiming that most online activity, especially on social media, is dominated by bots, AI and other non-humans. Could this idea be shaping our experience of social media? And if so, what are the implications?

"Social media platforms already enable people to live in these echo chambers that constantly reinforce their own biases and prejudices," Collin says.

"When you're also starting to see deepfakes of leaders like Obama, Trump or Albo and you don't know what's real, that's deeply disturbing."

Spotting the truth

So, if we're scrolling social media and something catches our eye, how can we tell if it's real or not?

Kafaar suggests that our default now should be to assume posts are AI-generated unless we know otherwise.

"We're moving from a world where we try to spot the fake from the genuine to one where we have to assume that content may be fake unless there is a reason to trust it," he says.

"AI generated content is becoming really good at bypassing the rational doubts or scepticism we generally have."

Appealing to people's emotions is one of the ways AI content slips past our radar for what's real. Kafaar describes this as using emotional shortcuts – leveraging human trust and familiarity.

"When I was fooled by those animal behaviours, I was probably hoping that the animals really were behaving in that spectacular way," he says, referring to the video that got past his own defences. "That's the emotional hook that got me."

Videos and images that move us in some way should be viewed with caution.

"When content pushes you to feel something urgently, when you're flooded with emotions, that's when you want to step back," Kafaar says. "Stories that are way too perfect in their narrative also raise red flags."

Collins agrees. "People need to be critical of what they see," he says. "All the arts and humanities subjects that teach us how to ask questions about what we're seeing are super important now."

Kafaar also advises looking at the accounts posting the content you see in your feed.

"Look beyond the content itself," he said. "Some accounts post too regularly – like a machine – or have no organic history. Vague bios on their profiles can also be a sign. Maybe old posts don't quite align with the current story the account is telling."

Captions and comments can also provide clues – pay attention to phrases that seem pre-canned or emoji use that seems over the top.

"The giveaway often isn't the video itself but the account behind it," says Kafaar.

But individuals shouldn't have to take on sole responsibility for discerning what's AI-generated in their social media feeds.

"How are the general public supposed to know what's real and what's not?" Collins asks. "And how do you get companies using AI technologies to acknowledge this?"

Kafaar and Collins note that the platforms must play a more active role.

Cats with jobs are a big AI trend.

"Social media platforms can do a lot in terms of prevention," Kafaar says. "They have a massive role to play, even beyond government. You can't rely on users to solve this massive systematic and very profound problem."

Kafaar points to the existence of detection tools and valuable metadata that social media platforms can access to identify AI content and stop it from spreading.

"There's really a need for them to act and do provenance tracing or watermarking," he says.

Policymaking also has a role to play. The Australian Government has already ruled out introducing a copyright exemption for AI companies hoping to train their large language models on works by Australian creators.

"We're also at the very cutting edge when it comes to governance and policy with the Scam Prevention Framework , which is a world-first," Kafaar says. "We could reproduce this kind of policy for online content too."

Collins believes that while AI won't go away, it may not have the world-changing impact tech companies claim.

"It's a fascinating time to be alive," he says. "But where this all ends up, who knows? It will take years for the dust to settle."

Kafaar, on the other hand, argues that AI is a gamechanger.

"Current AI developments represent a genuine technological revolution," he says. "Arguably one of the most significant shifts we've seen in decades, with profound implications for society, industry and our digital world."
