AI Bots in Social Media Wargame Can Sway Elections

Authors

  • Hammond Pearce

    Senior Lecturer, School of Computer Science & Engineering, UNSW Sydney

  • Alexandra Vassar

    Senior Lecturer, School of Computer Science & Engineering, UNSW Sydney

  • Rahat Masood

    Senior Lecturer, School of Computer Science & Engineering, UNSW Sydney

On December 14, 2025, a terrorist attack occurred at Bondi Beach in Sydney, Australia, leaving 15 civilians and one gunman dead. While Australia was still reeling in shock, social media saw the rapid spread of misinformation created and amplified by generative artificial intelligence (AI).

For example, a manipulated video of New South Wales Premier Chris Minns claimed one of the terrorists was an Indian national. X (formerly Twitter) was awash with celebrations of the hero defender "Edward Crabtree". And a deepfake photo of Arsen Ostrovsky, a noted human rights lawyer and survivor of Hamas' October 7 attack in Israel, depicted him as a crisis actor with makeup artists applying fake blood.

This is an unfortunately common occurrence. From Bondi to Venezuela, Gaza and Ukraine, AI has supercharged the spread of online misinformation. In fact, around half of the content you see online is now made and spread by AI.

Generative AI can also create fake online profiles, or bots, which try to legitimise this misinformation through realistic-looking social media activity.

The goal is to deceive and confuse people - usually for political and financial reasons. But how effective are these bot networks? How hard is it to set them up? And crucially, can we mitigate their false content through cyber literacy?

To answer these questions, we set up Capture the Narrative - the world's first social media wargame in which students build AI bots to influence a fictional election, deploying tactics that mirror the manipulation of real social media.

Online confusion and the 'liar's dividend'

Generative AI, the technology behind services such as ChatGPT, can be prompted to quickly create realistic text and images. The same capability can be used to generate highly persuasive fake content.

Once that fake content exists, realistic and relentless AI-driven bots create the illusion of consensus around it by making hashtags or viewpoints trend.

Even if you know content is exaggerated or fake, it still has an impact on your perceptions, beliefs and mental health.

Worse, as bots evolve to become indistinguishable from real users, we all start to lose confidence in what we see. This creates a "liar's dividend", where even real content is approached with doubt.

Authentic but critical voices can be dismissed as bots, shills, and fakes, making it harder to have real debates on difficult topics.

How hard is it to capture a narrative?

Our Capture the Narrative wargame offers rare, measurable evidence of how small teams armed with consumer-grade AI can flood a platform, fracture public debate and even swing an election - fortunately, all inside a controlled simulation rather than the real world.

In this first-of-its-kind competition, we challenged 108 teams from 18 Australian universities to build AI bots to secure victory for either "Victor" (left-leaning) or "Marina" (right-leaning) in a presidential election. The effects were stark.

Over the four-week campaign on our in-house social media platform, competitor bots generated more than 60% of all content - over 7 million posts.

The bots from both sides battled to produce the most compelling content, diving freely into falsehoods and fiction.

This content was consumed by complex "simulated citizens" that interacted with the social media platform much like real-world voters. Then, on election night, each of these citizens cast their vote, leading to a (very marginal!) win for "Victor".

We then simulated the election again, without interference. This time, "Marina" won with a swing of 1.78%.

This means the misinformation campaign - built by students starting from simple tutorials and with inexpensive, consumer-grade AI - succeeded in changing the election result.
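To make the headline figure concrete, here is a minimal sketch of how a swing like this is calculated: it is simply the change in a candidate's vote share between the two simulated runs. The vote shares below are hypothetical placeholders (the competition's exact tallies are not given here); only the 1.78% figure comes from our results.

```python
# Minimal sketch of the swing calculation. The vote shares below are
# HYPOTHETICAL placeholders; only the 1.78% swing is from the competition.

def swing(share_before: float, share_after: float) -> float:
    """Swing: change in a candidate's vote share, in percentage points."""
    return share_after - share_before

# Run 1: competitor bots active -> a very marginal win for "Victor".
manipulated_run = {"Victor": 50.10, "Marina": 49.90}  # illustrative shares

# Run 2: same simulated citizens, no interference -> "Marina" wins.
clean_run = {"Victor": 48.32, "Marina": 51.68}  # illustrative shares

print(f"Swing to Marina: {swing(manipulated_run['Marina'], clean_run['Marina']):+.2f}%")
# -> Swing to Marina: +1.78%
```

In other words, the entire margin of the manipulated result sits inside that swing: remove the bots, and enough simulated voters change sides to flip the outcome.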

A need for digital literacy

Our competition reveals that online misinformation is both easy and fast to create with AI. As one finalist said,

It's scarily easy to create misinformation, easier than truth. It's really difficult to distinguish between genuine and manufactured posts.

We saw competitors identify topics and targets suited to their goals, in some cases even profiling which citizens were "undecided voters" suitable for micro-targeting.

At the same time, competitors quickly identified emotional language as a powerful tactic - negative framing became a shortcut to provoke online reactions. As another finalist put it,

We needed to get a bit more toxic to get engagement.

Ultimately, just as on real social media, our platform became a "closed loop" where bots talked to bots to trigger emotional responses from humans, creating a manufactured reality designed to shift votes and drive clicks.

What our game shows is that we urgently need digital literacy education to raise awareness of online misinformation, so Australians can recognise when they, too, are being exposed to fake content.

The Conversation

Hammond Pearce has received funding from Australia's Department of Education Australian Economic Accelerator (IGNITE), the Australian Research Council (Discovery Project), the Cybersecurity CSCRC, Google (Google Research Scholar), and Intel (Research Award). Capture the Narrative was achieved with support and funding from Day of AI Australia and M&C Saatchi World Services.

Alexandra Vassar receives funding from the Department of Education's Australian Economic Accelerator (IGNITE) grant. She has also received funding from Google Academic Research Awards.

Rahat Masood receives funding from Australia's Defence Innovation Network (DIN), the Office of National Intelligence (ONI), the Cybersecurity CSCRC, and ASCA. She was a co-organiser of Capture the Narrative, which received support and funding from Day of AI Australia and M&C Saatchi World Services.
