New Zealand Urged to Regulate AI Fake Images in Elections

Seeing is no longer believing in the age of images and videos generated by artificial intelligence (AI), and this is having an impact on elections in New Zealand and elsewhere.

Author

  • Bronwyn Isaacs

    Lecturer, Anthropology, University of Waikato

Ahead of the 2025 local body elections, overseas politicians and local experts are warning voters not to automatically trust that what they are looking at is real.

Deepfakes - images, video or audio created with AI to mislead or spread false information - were used in last year's United States presidential election. Early in the campaign, a deepfake voice clip impersonating then-president Joe Biden told voters not to cast ballots in New Hampshire's primaries.

There have also been concerns about the role of deepfakes on the campaign trail in Australia. The Labor Party, for example, released on its TikTok account an AI-generated video of opposition leader Peter Dutton dancing.

But the worry is not just that deepfakes will spread lies about politicians or other real people. AI is also used to create "synthetic deepfakes" - images of fake people who do not exist.

Using artificially generated images and videos of both real and fake people raises questions around transparency and the ethical treatment of cultural and ethnic groups.

Cultural offence with AI isn't a hypothetical concern. Australian voters have found some AI-generated political advertising to be "cringe" and culturally clumsy, with one white female politician using auto-tuned rapping in her campaign.

Australians have also reported an increase in deepfake political content. Most, however, were unable to detect AI-generated content.

Several countries including Australia and Canada are considering laws to manage the harms of AI use in political messaging.

Others have already passed legislation banning or limiting AI in elections. South Korea, for example, has banned the use of deepfakes in political advertising in the 90 days before an election. Singapore has banned digitally altered material that misrepresents political candidates.

While New Zealand has several voluntary frameworks to address the growing use of AI in media, there are no explicit rules to prevent its use in political campaigns. To avoid cultural offence and to offer transparency, it is crucial for political parties to establish and follow clear ethical standards on AI use in their messaging.

Existing frameworks

The film industry is a good starting point for policymakers looking to establish a clear framework for AI in political messaging.

In my ongoing research on culture and technology in film production, industry workers have spoken about New Zealand's world-leading standards for culturally aware film production processes, and the positive impact these have had on shaping AI standards.

Released in March 2025, the New Zealand Film Commission's Artificial Intelligence Guiding Principles take a "people first" approach to AI, prioritising the needs, wellbeing and empowerment of individuals when AI systems are developed and implemented.

The principles also stress respect for mātauranga Māori and transparency in the use of AI, so that audiences are "informed about the use of AI in screen content they consume".

The government's Public Service AI framework, meanwhile, requires government agencies to publicly disclose how AI systems are used and to uphold human-centred values such as dignity and self-determination.

AI in NZ politics

The use of AI by some of New Zealand's political parties has already raised concerns.

During the 2023 election campaign, the National Party admitted using AI in its attack advertisements. And recent AI-assisted social media posts by New Zealand's ACT party were criticised for their lack of transparency and cultural sensitivity.

An ACT Instagram post about interest rate cuts featured an AI-generated image of a Māori couple from software company Adobe's stock photo collection.

ACT whip Todd Stephenson responded that using stock imagery or AI-generated imagery was not inherently misleading, but said the party "would never use an actor or AI to impersonate a real person".

My own search of the Adobe collection turned up other images used by ACT in its Instagram posts, including an AI-generated image labelled as "studio photography portrait of a 40 years old Polynesian woman".

There are two key concerns with using AI in this way. The first is that ACT didn't declare the use of AI in its Instagram posts. A lack of transparency around the use of deepfakes of any kind can undermine trust in the political system. Voters end up uncertain about what is real and what is fake.

Secondly, the images were synthetic fakes of ethnic minorities in New Zealand. Academics and technology experts have long been concerned that AI-generated images reproduce harmful stereotypes of diverse communities.

Legislation needed

While the potential for cultural offence and misinformation with faked content is not new, AI alters the scale at which such fakes can be created. It makes it easier and quicker to produce manipulative, fake and culturally offensive images.

At a minimum, New Zealand needs to introduce legislation that requires political parties to acknowledge the use of AI in their advertising. And as the country moves into a new election season, political parties should commit to combating misinformation and cultural misrepresentation.


Bronwyn Isaacs is a member of the Association of Social Anthropologists of Aotearoa/New Zealand.

Courtesy of The Conversation. This material from the originating organisation/author(s) may be of a point-in-time nature and has been edited for clarity, style and length. Mirage.News does not take institutional positions or sides, and all views, positions and conclusions expressed herein are solely those of the author(s).