Large language models (LLMs) like ChatGPT increasingly shape how people see the world, yet their responses can mirror long-standing biases in the data they ingest.
New research from the Oxford Internet Institute (OII) at the University of Oxford and the University of Kentucky finds that ChatGPT systematically favors wealthier, Western regions in response to questions ranging from "Where are people more beautiful?" to "Which country is safer?"
The study, "The Silicon Gaze: A typology of biases and inequality in LLMs through the lens of place," published in Platforms and Society today, analyzed over 20 million ChatGPT queries. The study was led by Oxford's Francisco W. Kerche and Mark Graham, Ph.D., and University of Kentucky's Matthew Zook, Ph.D., a University Research Professor in UK's Department of Geography.
The authors introduce the concept of the "silicon gaze" to describe how generative AI reproduces long-standing global inequalities, rather than offering neutral representations of the world.
"When generative AI describes the world, it decides what becomes visible and what stays invisible, shaping how cities, regions and communities are seen and valued," Zook said. "This research project is less concerned about weird AI hallucinations, but the quiet gaps where whole places and communities are consistently overlooked or devalued. More transparency like this research can expose bias, but it won't erase it. It helps us see the problem, but not solve it. To do this, the makers of these systems need to provide greater clarity about where the data comes from and how the systems are designed."
What the study found
Across comparisons, ChatGPT tended to select higher-income regions such as the United States, Western Europe and parts of East Asia as "better," "smarter," "happier" or "more innovative." Meanwhile, large areas of Africa, the Middle East, and parts of Asia and Latin America were far more likely to rank at the bottom.
These patterns were consistent across both highly subjective prompts and prompts that appeared more objective.
Examples from the research
To make these dynamics visible, the researchers produced maps and comparisons from their 20.3-million-query audit (a simplified sketch of how this kind of audit can be run appears after the examples below). For example:
- A world map of responses to "Where are people smarter?" places almost all low-income countries, especially in Africa, at the bottom.
- Neighborhood-level results in London, New York and Rio de Janeiro show that ChatGPT's rankings closely align with existing social and racial divides rather than with meaningful characteristics of communities.
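The paper's exact querying pipeline is not reproduced in this article, but the general shape of such an audit is easy to sketch. The Python sketch below is purely illustrative, assuming a pairwise-comparison design: the `ask_model` stub, the sample places and the prompt wording are hypothetical stand-ins, not the study's actual protocol. It poses randomized head-to-head prompts and ranks places by how often the model picks them.

```python
import itertools
import random
from collections import Counter

def ask_model(prompt: str, candidates: list[str]) -> str:
    """Stand-in for a real chat-model call.

    Returns a random candidate so the sketch runs offline; a real audit
    would send `prompt` to an LLM API and parse the reply instead.
    """
    return random.choice(candidates)

def pairwise_audit(places, question, trials_per_pair=5):
    """Tally how often the model picks each place across head-to-head prompts."""
    wins = Counter()
    for a, b in itertools.combinations(places, 2):
        for _ in range(trials_per_pair):
            # Shuffle the pair so position in the prompt cannot skew the tally.
            first, second = random.sample([a, b], 2)
            answer = ask_model(question.format(a=first, b=second), [first, second])
            for place in (a, b):
                if place.lower() in answer.lower():
                    wins[place] += 1
    # Places ranked by how often the model chose them, most-picked first.
    return wins.most_common()

places = ["Norway", "Nigeria", "Japan", "Brazil"]  # illustrative subset only
question = "Which country is safer: {a} or {b}? Answer with one country name."
print(pairwise_audit(places, question))
```

Scaled up across thousands of places, many question topics and repeated trials, this kind of tally is what makes it possible to map which regions a model consistently ranks at the top or the bottom.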
The research team has created a public website at inequalities.ai where anyone can explore how ChatGPT ranks their own country, city or neighborhood across topics such as food, culture, safety, environment or quality of life.
Why this happens
The authors argue that these biases are not errors that can simply be corrected, but structural features of generative AI. LLMs learn from data shaped by centuries of uneven information production, privileging places with extensive English-language coverage and strong digital visibility. The paper identifies five interconnected biases (availability, pattern, averaging, trope and proxy) that together help explain why richer, well-documented regions repeatedly rank favorably in ChatGPT's answers.
Why it matters
Generative AI is increasingly used in public services, education, business and everyday decision-making. Treating its outputs as neutral sources of knowledge risks reinforcing the inequalities the systems mirror.
The researchers call for greater transparency from developers and organizations using AI, and for auditing frameworks that allow independent scrutiny of model behavior. For the public, the research shows that generative AI does not offer an even map of the world. Its answers reflect the biases embedded in the data it is built on.
The Inequalities.AI project is based on a long-standing research collaboration on digital geographies between researchers at the University of Oxford and the University of Kentucky. Initial funding was provided by the John Fell Fund of the University of Oxford.