Australians' Skill at Detecting AI Deepfake Scams

Commonwealth Bank

Nearly nine in ten Australians (89%) are at least somewhat confident they can spot an AI-generated scam, but new research from CommBank shows the opposite is true: Australians correctly distinguished between real and AI-generated images only 42% of the time, worse than a random guess. Australians aged over 65 were only 6% less accurate than younger Australians, showing that deepfakes can fool people of all ages.

At the same time, fewer than half of Australians (42%) are familiar with AI-enhanced scams, even as deepfakes spread rapidly across social media platforms, websites, messaging apps and telecommunications channels.

Deepfakes are new but the steps to protect yourself haven't changed

James Roberts, General Manager of Group Fraud, said: "The findings reveal a growing gap between confidence and reality - and that gap is exactly what scammers are looking to exploit as they increasingly turn to AI to target everyday Australians and small businesses."

He said Australians should not feel overwhelmed by the pace of technological change.

"The good news is that the steps that keep people safe don't need to evolve at the same speed as the technology does. Deepfakes might be new, but the same tried-and-tested habits - slowing down, checking details and speaking with someone you know and trust, such as a family member, remains your best defence - even against AI-powered scams."

Some of the images below are real and some are generated by AI. Can you tell which are real and which are AI?


Deepfakes work because they exploit trust

Monica Whitty, Professor of Human Factors in Cyber Security at Monash University and well known for her work on the prevention, disruption and detection of cyber fraud, says deepfakes tap into people's natural instincts.

"Humans tend to trust faces, voices and familiar people. Deepfakes take advantage of that instinct."

She said lack of open discussion increases vulnerability.

"The data shows that many Australians don't talk openly about deepfake scams - with only a third discussing AI-generated scams with their relatives or friends. That means fewer opportunities to share warning signs or learn from others' experiences."

Despite nearly three-quarters of Australians (74%) agreeing that they should set up a safe word with their loved ones to confirm it's really them, only one in five (20%) say they have set one up.

Roberts says having a simple way to verify who you're speaking with is becoming increasingly important. "Scammers can fake voices now, so it's okay to double-check. In fact, it's smart."

That's also why CommBank introduced CallerCheck, allowing customers to verify whether a caller claiming to be from the bank is legitimate by triggering a security message in their CommBank app.

"Be vigilant. Educate yourself. And if things look suspicious talk with others about it," Professor Whitty added.

What Australians and small businesses are experiencing

Around one in four Australians (27%) say they have witnessed a deepfake scam in the past year. The most common types were:

  • Investment scams (59%)
  • Business email compromise scams (40%)
  • Relationship scams (38%)

Around four in ten (41%) small business owners are familiar with deepfake scams.

Small businesses reported that half of all deepfake scam attempts (50%) arrived by email, yet only 55% of small businesses had cross-checked supplier payment details in the past six months.

Roberts said more open conversations at home and work are essential.

"Scammers are using AI to create fake investment videos, deepfake celebrities, and even voice and text clones of loved ones, senior executives and government officials. Talking openly about this technology is one of the easiest ways to help stay ahead of it."

A national cross-sector effort is needed

Roberts says deepfakes require coordinated action across the scams ecosystem.

"We recognise the impact of scams on Australians and support the Australian Government's Scam Prevention Framework to introduce obligations initially across banks, telcos and digital platforms. Deepfakes are showing up on social media, messaging platforms, websites and even through phone calls - and we welcome stronger protections across those industries, as well as banking.

"Deepfakes are new, but protecting yourself hasn't changed - and with stronger protections across all channels, we can help keep more Australians safe," Roberts added.

How to help protect yourself from deepfake scams

Roberts says the core approach remains unchanged.

"The principles of 'Stop. Check. Reject.' can still help beat even the most convincing AI-enhanced scams," Roberts said.

Investment scams - deepfake celebrities and experts with 'don't-miss-out' success stories.

Deepfake videos imitate well-known people to promote fake investments.

  • Stop: Avoid investing through a social media link and be especially cautious of any investment ad featuring a celebrity.
  • Check: Speak with someone you trust, like your independent financial advisor, before transferring money and check ASIC's Moneysmart Investor Alert List.
  • Reject: If you're unsure, block, delete and report suspicious content to the platform where you saw the deepfake.

"Hey Mum/Dad" phishing scams - urgent calls and texts from someone you love

Voice and text cloning technology can mimic a family member perfectly.

  • Stop: Slow down - urgency is a tactic used to create panic.
  • Check: Set up a safe word for your family to use to help protect each other.
  • Reject: Hang up and call back via their usual number.