The 48-hour window: Scam and cyber threat peaks around elections

Microsoft Source Asia

Most Australians who stumble across The Birdsville Herald wouldn't think twice about whether it's an authentic news site. It may carry real news stories from reputable sources, but, unbeknown to readers, it may also carry fake ones.
As it turns out, adversaries create these sites specifically by picking a town – often a regional or remote one – and adding words like 'Herald' or 'Times' to the name. They then mix real news with fabricated stories that can quickly start circulating through social media and potentially even end up on more mainstream, authentic news sites.
"Foreign influence campaigns use deceptive sites, known as pink slime sites, that appear credible but seek to trick readers into sharing false narratives," said Mark Anderson, National Security Officer, Microsoft Australia and New Zealand. "While they're not a new tactic, generative AI has made it easier and faster for threat actors to spin up these sites. AI's language translation capabilities have also made them more convincing."
The culprits? State-sponsored threat actors seeking to spread narratives through society. During elections, the number of these sites can increase as threat actors ramp up their activity.
"It's also the 48 hours either side of the election when we are most likely to see an increase in activity, so Australians should remain vigilant during this period," said Anderson.
The deceptive use of AI in elections

Heading into the 2024 elections, there was significant concern globally about AI being used to manipulate voters.
Ginny Badanes leads Microsoft's election protection work. "We didn't see this happen on the scale many feared, but there were still notable instances of AI-driven deception – some of which were incredibly difficult to detect," said Badanes.
In addition to pink slime sites like the Birdsville Herald example, AI was used to create voice, video and image deepfakes, and to ramp up scams.
Badanes says voice deepfakes are one of the most convincing and hardest-to-detect forms of AI deception.
"Voice deepfakes were used in elections last year to manipulate public opinion by making real people appear to say things they never did," said Badanes.
In Australia, the ABC recently produced an AI-generated voice recording of Senator Jacqui Lambie (with her permission) to demonstrate how convincing it can be.
Transparency around this type of deepfake is key to helping citizens question what they are seeing and hearing.
AI-edited images can also be highly convincing.
"Last year we didn't see as many completely AI-generated image deepfakes; it was more around careful editing of mostly authentic images," said Badanes. "Tiny edits can completely change the meaning of an image and fuel the spread of false narratives."
Scams also ramp up during elections as cybercriminals exploit public interest, preying on urgency and tricking people into clicking on malicious links or handing over personal information.
"Australians should be aware of calls, texts or emails that ask them to urgently click a link to do things like update their electoral roll details or risk fines," said Anderson. "These very well could be malicious links, and the best advice is always to pause, verify and go directly to official sources if you're unsure."
Why do we see an increase in scams and cyber threats around elections?

Any major event – whether it's the Olympics, peak shopping season or an election – creates an opportunity for cybercriminals. They thrive on heightened public interest, using it to ramp up phishing scams, disinformation campaigns and efforts to manipulate public perception.
Badanes says last year there was a steady stream of threat actor activity throughout the election cycle, with noticeable spikes at key moments.
"The exact motives behind these attacks aren't always clear. Often, it's about creating chaos and confusion. Other times, it's about influencing voter opinion in favour of a particular candidate," said Badanes. "Some attacks are espionage-driven, aimed at securing information, while others are financially motivated, preying on voters through election-related scams. And, in some cases, it's a combination."
Remaining vigilant to deceptive content

During times of heightened emotion, conflict and competition, manipulated images, audio and video often travel further and faster across audiences than during an average news cycle. Foreign actors have proven nimble and capable of inserting deceptive content and distributing it rapidly during these moments. The final 48 hours ahead of an election is often one of these times.
Badanes explains that combating foreign influence campaigns starts with helping our society develop a healthy level of scepticism.
"If something you see online fits a narrative too perfectly, it's worth pausing to question if the source is credible or if the content could have been manipulated by AI or clever editing," said Badanes.
"Developing this level of critical thinking can make deceptive content less effective and helps slow the spread of misinformation and disinformation."
"Remember those 'Nigerian prince' scams? Most of us don't fall for them anymore because we've learned to question suspicious emails and links. We don't need to distrust everything – just make sure we're verifying information sources."