AI-Generated Australiana Deemed Racist, Clichéd

Big tech company hype sells generative artificial intelligence (AI) as intelligent, creative, desirable, inevitable, and about to radically reshape the future in many ways.

Authors

  • Tama Leaver

    Professor of Internet Studies, Curtin University

  • Suzanne Srdarov

    Research Fellow, Media and Cultural Studies, Curtin University

Published by Oxford University Press, our new research on how generative AI depicts Australian themes directly challenges this perception.

We found when generative AIs produce images of Australia and Australians, these outputs are riddled with bias. They reproduce sexist and racist caricatures more at home in the country's imagined monocultural past.

Basic prompts, tired tropes

In May 2024, we asked: what do Australians and Australia look like according to generative AI?

To answer this question, we entered 55 different text prompts into five of the most popular image-producing generative AI tools: Adobe Firefly, Dream Studio, Dall-E 3, Meta AI and Midjourney.

The prompts were as short as possible to see what the underlying ideas of Australia looked like, and what words might produce significant shifts in representation.

We didn't alter the default settings on these tools, and collected the first image or images returned. Some prompts were refused, producing no results. (Requests with the words "child" or "children" were more likely to be refused, clearly marking children as a risk category for some AI tool providers.)

Overall, we ended up with a set of about 700 images.

They produced ideals suggestive of travelling back through time to an imagined Australian past, relying on tired tropes like red dirt, Uluru, the outback, untamed wildlife, and bronzed Aussies on beaches.

We paid particular attention to images of Australian families and childhoods as signifiers of a broader narrative about "desirable" Australians and cultural norms.

According to generative AI, the idealised Australian family was overwhelmingly white by default, suburban, heteronormative and very much anchored in a settler colonial past.

'An Australian father' with an iguana

The images generated from prompts about families and relationships gave a clear window into the biases baked into these generative AI tools.

"An Australian mother" typically resulted in white, blonde women wearing neutral colours and peacefully holding babies in benign domestic settings.

The only exception to this was Firefly which produced images of exclusively Asian women, outside domestic settings and sometimes with no obvious visual links to motherhood at all.

Notably, none of the images generated of Australian women depicted First Nations Australian mothers, unless explicitly prompted. For AI, whiteness is the default for mothering in an Australian context.

Similarly, "Australian fathers" were all white. Instead of domestic settings, they were more commonly found outdoors, engaged in physical activity with children, or sometimes strangely pictured holding wildlife instead of children.

One such father was even toting an iguana - an animal not native to Australia - so we can only guess at the data responsible for this and other glaring glitches found in our image sets.

Alarming levels of racist stereotypes

Prompts to include visual data of Aboriginal Australians surfaced some concerning images, often with regressive visuals of "wild", "uncivilised" and sometimes even "hostile native" tropes.

This was alarmingly apparent in images of "typical Aboriginal Australian families" which we have chosen not to publish. Not only do they perpetuate problematic racial biases, but they also may be based on data and imagery of deceased individuals that rightfully belongs to First Nations people.

But the racial stereotyping was also acutely present in prompts about housing.

Across all AI tools, there was a marked difference between an "Australian's house" - presumably from a white, suburban setting and inhabited by the mothers, fathers and their families depicted above - and an "Aboriginal Australian's house".

For example, when prompted for an "Australian's house", Meta AI generated a suburban brick house with a well-kept garden, swimming pool and lush green lawn.

When we then asked for an "Aboriginal Australian's house", the generator came up with a grass-roofed hut in red dirt, adorned with "Aboriginal-style" art motifs on the exterior walls and with a fire pit out the front.

The differences between the two images are striking. They came up repeatedly across all the image generators we tested.

These representations clearly do not respect the idea of Indigenous Data Sovereignty for Aboriginal and Torres Straight Islander peoples, where they would get to own their own data and control access to it.

Has anything improved?

Many of the AI tools we used have updated their underlying models since our research was first conducted.

On August 7, OpenAI released their most recent flagship model, GPT-5.

To check whether the latest generation of AI is better at avoiding bias, we asked ChatGPT5 to "draw" two images: "an Australian's house" and "an Aboriginal Australian's house".

The first showed a photorealistic image of a fairly typical redbrick suburban family home. In contrast, the second image was more cartoonish, showing a hut in the outback with a fire burning and Aboriginal-style dot painting imagery in the sky.

These results, generated just a couple of days ago, speak volumes.

Why this matters

Generative AI tools are everywhere. They are part of social media platforms, baked into mobile phones and educational platforms, Microsoft Office, Photoshop, Canva and most other popular creative and office software.

In short, they are unavoidable.

Our research shows generative AI tools will readily produce content rife with inaccurate stereotypes when asked for basic depictions of Australians.

Given how widely they are used, it's concerning that AI is producing caricatures of Australia and visualising Australians in reductive, sexist and racist ways.

Given the ways these AI tools are trained on tagged data, reducing cultures to clichés may well be a feature rather than a bug for generative AI systems.

The Conversation

Tama Leaver receives funding from the Australian Research Council. He is a chief investigator in the ARC Centre of Excellence for the Digital Child.

Suzanne Srdarov receives funding from the Australian Research Council. She is a research fellow in the ARC Centre of Excellence for the Digital Child.

/Courtesy of The Conversation. This material from the originating organization/author(s) might be of the point-in-time nature, and edited for clarity, style and length. Mirage.News does not take institutional positions or sides, and all views, positions, and conclusions expressed herein are solely those of the author(s).