Research: ChatGPT Worsens Global Inequalities

New research from the Oxford Internet Institute at the University of Oxford, and the University of Kentucky, finds that ChatGPT systematically favours wealthier, Western regions in response to questions ranging from 'Where are people more beautiful?' to 'Which country is safer?' - mirroring long-standing biases in the data they ingest.

The study, The Silicon Gaze: A typology of biases and inequality in LLMs through the lens of place , by Francisco W. Kerche, Professor Matthew Zook and Professor Mark Graham , published in Platforms and Society on Tuesday 20th January, analysed over 20 million ChatGPT queries.

When AI learns from biased data, it amplifies those biases further and can broadcast them at scale. That is why we need more transparency and more independent scrutiny of how these systems make claims about people and places, and why users should be skeptical about using them to form opinions about communities.

Professor Mark Graham, Professor of Internet Geography, Oxford Internet Institute

Across comparisons, the researchers found that ChatGPT tended to select higher-income regions such as the United States, Western Europe, and parts of East Asia as 'better', 'smarter', 'happier', or 'more innovative'. Meanwhile, large areas of Africa, the Middle East, and parts of Asia and Latin America were far more likely to rank at the bottom.

These patterns were consistent across both highly subjective prompts and prompts that appear more objective.

To make these dynamics visible, the researchers produced maps and comparisons from their 20.3-million-query audit. For example:

  • A world map ranking 'Where are people smarter?' places almost all low-income countries, especially Africa, at the bottom.
  • Neighbourhood-level results in London, New York and Rio show ChatGPT's rankings closely align with existing social and racial divides, rather than meaningful characteristics of communities.

The research team has created a website at inequalities.ai where anyone can explore how ChatGPT ranks their own country, city or neighbourhood across topics such as food, culture, safety, environment, or quality of life.

Mark Graham, Professor of Internet Geography, said: 'When AI learns from biased data, it amplifies those biases further and can broadcast them at scale. That is why we need more transparency and more independent scrutiny of how these systems make claims about people and places, and why users should be skeptical about using them to form opinions about communities. If an AI system repeatedly associates certain towns or cities or countries with negative labels, those associations can spread quickly and start to shape perceptions, even when they are based on partial, messy or outdated information.'

Generative AI is increasingly used in public services, education, business and everyday decision-making. Treating its outputs as neutral sources of knowledge risks reinforcing the inequalities the systems mirror.

If an AI system repeatedly associates certain towns or cities or countries with negative labels, those associations can spread quickly and start to shape perceptions, even when they are based on partial, messy or outdated information.

Professor Mark Graham, Professor of Internet Geography, Oxford Internet Institute

The authors argue that these biases are not errors that can simply be corrected, but structural features of generative AI.

LLMs learn from data shaped by centuries of uneven information production, privileging places with extensive English-language coverage and strong digital visibility. The paper identifies five interconnected biases - availability, pattern, averaging, trope and proxy - that together help explain why richer, well-documented regions repeatedly rank favourably in ChatGPT's answers.

The researchers call for greater transparency from developers and organisations using AI, and for auditing frameworks that allow independent scrutiny of model behaviour. For the public, the research shows that generative AI does not offer an even map of the world: its answers reflect the biases embedded in the data it is built on.

The paper, The Silicon Gaze: A typology of biases and inequality in LLMs through the lens of place, by Francisco W. Kerche and Mark Graham, Oxford Internet Institute, University of Oxford and Matthew Zook, Department of Geography, University of Kentucky is published in Platforms and Society .

/University Release. This material from the originating organization/author(s) might be of the point-in-time nature, and edited for clarity, style and length. Mirage.News does not take institutional positions or sides, and all views, positions, and conclusions expressed herein are solely those of the author(s).View in full here.