Kellogg Study: Some Intersectional Groups Over-Represented Online

By mining billions of words of online text, a new Northwestern University study of intersectionality has found a cavernous divide between the visibility of white men and the invisibility of Black women, a divide likely to produce bias in artificial intelligence.

The research team, led by Tessa Charlesworth, assistant professor of management and organizations at Northwestern's Kellogg School of Management, developed a natural language processing approach called Flexible Intersectional Stereotype Extraction (FISE) and applied it to billions of words of English internet text. They found that while white men are associated with approximately 59% of the studied traits in everyday English, Black women are associated with only about 5%.

The study was published March 19 in PNAS Nexus, a sibling journal to the Proceedings of the National Academy of Sciences (PNAS).

"This dominance, versus invisibility, of some groups in language is likely to have serious consequences for the increasing application of large language models and AI," Charlesworth said. "When women of color are made invisible in the data, it is likely that the applications of large language models and AI will be particularly inaccurate for any applications involving those groups."

Charlesworth says these biases are visible in Google or ChatGPT results when a user searches for coveted occupations such as "doctor" and the results return photos that are mostly of white men. "Even in the simplest cases, like the text or image generation we use with DALL-E or ChatGPT, you can see these kinds of imbalances coming out in the outputs," she said. "AI is a reflection of human behavior because we're always training these AI systems on data that's developed by humans. The problem really is if the data is biased in some way, then AI is going to learn those biases as well."

Intersectionality, the way gender, race and social class intersect and relate to systems of oppression, domination or discrimination, has long been a subject of scholarly exploration. Charlesworth's team merged intersectional theory and AI to reveal stereotypes about gender, race and class prevalent in English-speaking societies.

"Some of the first studies done on the biases and artificial intelligence, for instance, on image models and the ability to capture facial recognition showed that there was underrepresentation of Black women specifically in the training data," Charlesworth said. "We can see that this underrepresentation and, correspondingly, the overrepresentation of white men have downstream consequences."

They discovered that white men were not only associated throughout internet texts with more positive traits (strong, intellectual, powerful, rich) than Black women were, but were also associated with jobs seen as more desirable, including architect, engineer and manager.

According to the authors, the imbalances in trait frequencies indicate a pervasive male- and white-centric bias in English. Class was also a decisive factor, as all identities paired with "rich" had more positive connotations.

"The research tells us that even in these intersectional spaces where we are having race, gender and social class interacting, social class seems to be an overwhelming factor on how a person is represented," Charlesworth said. "It's the main wave that's shaping what we're seeing as positive and negative. We don't talk about social class because we're often focused on race and gender."

FISE is built on word embeddings, an earlier natural language processing technique that predates more advanced large language models such as ChatGPT. The approach lets researchers scan large quantities of text and then analyze how trait words are used relative to words for social groups.
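For readers curious about the mechanics, the sketch below illustrates the embedding-projection idea underlying approaches like FISE. It is a minimal sketch, not the authors' procedure: the word lists, the choice of GloVe model and the quadrant rule are all illustrative assumptions.

```python
# A minimal sketch of the embedding-projection idea behind approaches
# like FISE; NOT the authors' code. Word lists are illustrative only.
# Requires gensim; api.load() downloads the GloVe vectors on first use.
import numpy as np
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # pretrained word embeddings

def group_axis(pole_a, pole_b):
    """Direction in embedding space pointing from group B toward group A."""
    a = np.mean([vectors[w] for w in pole_a], axis=0)
    b = np.mean([vectors[w] for w in pole_b], axis=0)
    return a - b

# Hypothetical group directions built from small, illustrative word lists.
gender_axis = group_axis(["man", "he", "him"], ["woman", "she", "her"])
race_axis = group_axis(["white"], ["black"])

def projection(word, axis):
    """Cosine of the angle between a trait word and a group direction."""
    v = vectors[word]
    return float(v @ axis / (np.linalg.norm(v) * np.linalg.norm(axis)))

# Assign each trait to the intersectional quadrant it leans toward.
for trait in ["strong", "intellectual", "powerful", "rich"]:
    g = projection(trait, gender_axis)
    r = projection(trait, race_axis)
    lean = ("men" if g > 0 else "women") + ", " + ("white" if r > 0 else "Black")
    print(f"{trait:>12}: gender={g:+.3f} race={r:+.3f} -> leans {lean}")
```

In the published procedure, the traits landing in each group's region are tallied, which is how frequency contrasts like the 59% versus 5% figures arise; the cosine projections above are only the simplest version of that ordering.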

Of the 840 billion words of internet text analyzed, 59% of the studied traits were associated with white men, while only 5% were associated with Black women. In addition, 78% of the traits associated with the white, rich group were positive, compared with only 21% of those associated with the Black, poor group. In general, these imbalances were largest across class.

"The influence of class and how it is overwhelmingly perceived in media was surprising," Charlesworth said. "It could even make a group that's typically seen as more negative or more subordinate in society, like women, be perceived as more powerful. The connotation of 'rich' changes the perception."

As the scholars expand their research, they hope to use this technology to analyze language across different cultures, media and history. "Let's look at the variation of these intersectional patterns across history and across language," Charlesworth said. "Let's continue to add nuance to these conversations."

Charlesworth hopes that the expansion of analytics will continue to draw out stereotypes at the intersections of social groups. "Intersectionality remains empirically understudied," she said. "Our findings and methods can show industries the societal significance of how language embodies, propagates and even intensifies stereotypes of underrepresented social groups."

The study is titled "Extracting Intersectional Stereotypes from Embeddings: Developing and Validating the Flexible Intersectional Stereotype Extraction Procedure." In addition to Charlesworth, co-authors of the study include Kshitish Ghate, Harvard University; Aylin Caliskan, University of Washington; and Mahzarin R. Banaji, Harvard University.

The study was supported by a Social Sciences and Humanities Research Council of Canada Postdoctoral Fellowship; the Rand Innovation Fund from the Harvard Department of Psychology, awarded to Tessa Charlesworth; and the Hodgson Innovation Fund from the Harvard Department of Psychology, awarded to Mahzarin R. Banaji. This work is also supported by a U.S. National Institute of Standards and Technology (NIST) grant.
