On social media, in web searches and on posters: AI-generated images can now be found everywhere. Generative AI tools such as ChatGPT can turn a simple text prompt into deceptively realistic images. Researchers have now demonstrated that the generation of such artificial images not only reproduces gender biases, but actually magnifies them.
Models investigated across nine languages
The study explored models across nine languages and compared the results; previous studies had generally focused only on English-language models. As a benchmark, the team developed the Multilingual Assessment of Gender Bias in Image Generation (MAGBIG), which is based on carefully controlled occupational designations. The study investigated four different types of prompts: direct prompts that use the 'generic masculine', i.e. the grammatically masculine occupational term that serves as the generic form in many languages ('doctor'); indirect descriptions ('a person working as a doctor'); explicitly feminine prompts ('female doctor'); and 'gender star' prompts, the German convention intended to create a gender-neutral designation by using an asterisk, e.g. 'Ärzt*innen' for doctors.
To make the results comparable, the researchers included languages in which the names of occupations are gendered, such as German, Spanish and French. In addition, the benchmark incorporated languages such as English and Japanese, which do not assign grammatical gender to nouns but do use gendered pronouns ('her', 'his'). Finally, it included languages without grammatical gender: Korean and Chinese.
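To illustrate the idea, the following minimal Python sketch assembles the four prompt variants described above for a single occupation in German, plus the variants that apply in English. The template wordings and the neutral description are illustrative assumptions, not the actual MAGBIG prompts.

```python
# Illustrative sketch only (not the authors' MAGBIG code): assembling the
# prompt variants for one occupation. All wordings are stand-ins.

OCCUPATION = {
    "de": {
        "masculine":   "Arzt",          # generic masculine form
        "feminine":    "Ärztin",        # explicitly feminine form
        "star":        "Ärzt*innen",    # gender-star form (as in the article)
        "description": "die Patienten medizinisch behandelt",  # neutral description (stand-in)
    },
    "en": {
        "masculine":   "doctor",
        "description": "working as a doctor",
    },
}

def build_prompts(lang: str) -> dict:
    """Assemble the prompt variants available for one language."""
    t = OCCUPATION[lang]
    prompts = {
        "direct":   f"A photo of a {t['masculine']}" if lang == "en"
                    else f"Ein Foto von einem {t['masculine']}",
        "indirect": f"A photo of a person {t['description']}" if lang == "en"
                    else f"Ein Foto von einer Person, {t['description']}",
    }
    if lang == "de":
        prompts["feminine"]    = f"Ein Foto von einer {t['feminine']}"
        prompts["gender_star"] = f"Ein Foto von {t['star']}"
    else:
        prompts["feminine"] = f"A photo of a female {t['masculine']}"
    return prompts

if __name__ == "__main__":
    for lang in ("de", "en"):
        for kind, prompt in build_prompts(lang).items():
            print(f"{lang} | {kind:12s} | {prompt}")
```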
AI images perpetuate and magnify role stereotypes
The results of the study reveal that direct prompts using the generic masculine produce the strongest biases. For example, occupations such as 'accountant' yield mostly images of white males, while prompts referring to caregiving professions tend to generate female-presenting images. Gender-neutral and 'gender-star' forms only slightly mitigated these stereotypes, while explicitly feminine prompts produced images showing almost exclusively women. Along with the gender distribution, the researchers also analyzed how well the models understood and executed the various prompts: while neutral formulations reduced gender stereotypes, they also led to poorer alignment between the text input and the generated image.
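The article does not spell out how the gender distribution and prompt fidelity were measured. As a rough illustration, both quantities could be estimated with a zero-shot classifier and a text-image similarity score based on an open CLIP model, along the following lines; the model choice, the label wordings and the metric are assumptions for demonstration, not the study's actual pipeline.

```python
# Minimal sketch of this kind of evaluation: for each generated image, estimate
# (a) the perceived gender distribution and (b) how well the image matches its
# prompt. Using CLIP for both steps is an assumption for illustration only.

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

GENDER_LABELS = ["a photo of a man", "a photo of a woman"]

def perceived_gender_scores(image: Image.Image) -> dict:
    """Zero-shot estimate of the perceived gender in one generated image."""
    inputs = processor(text=GENDER_LABELS, images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        probs = model(**inputs).logits_per_image.softmax(dim=-1)[0]
    return {"male": probs[0].item(), "female": probs[1].item()}

def text_image_alignment(prompt: str, image: Image.Image) -> float:
    """Cosine similarity between prompt and image embeddings (CLIP score)."""
    inputs = processor(text=[prompt], images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
        img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
        txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return float((img @ txt.T).item())
```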
"Our results clearly show that the language structures have a considerable influence on the balance and bias of AI image generators," says Alexander Fraser, Professor for Data Analytics & Statistics at TUM Campus in Heilbronn. "Anyone using AI systems should be aware that different wordings may result in entirely different images and may therefore magnify or mitigate societal role stereotypes."
"AI image generators are not neutral—they illustrate our prejudices in high resolution, and this depends crucially on language. Especially in Europe, where many languages converge, this is a wake-up call: fair AI must be designed with language sensitivity in mind," adds Prof. Kristian Kersting, co-director of hessian.AI and co-spokesperson for the "Reasonable AI" cluster of excellence at TU Darmstadt.
Remarkably, bias varies across languages without a clear link to grammatical structures. For example, switching from French to Spanish prompts leads to a substantial increase in gender bias, despite both languages distinguishing in the same way between male and female occupational terms.