Everyday AI Fuels Overconsumption Concerns

From automatically generated overviews to chatbots in spreadsheets, so-called artificial intelligence is increasingly being integrated into our watches, phones, home assistants and other smart devices.

Authors

  • Jutta Haider

    Professor in Information Studies, Swedish School of Library and Information Science, University of Borås

  • Björn Ekström

    Lecturer in Information Studies, University of Borås

  • James White

    Postdoctoral Researcher, Sociology and Digital Tech, Lund University

AI-in-everything is becoming so ordinary and everyday that it is easy to overlook. But this normalisation is having a dangerous effect on the environment, the planet and our response to climate change.

AI's direct environmental costs are undeniable. Data centres consume large amounts of electricity and water, and AI queries use far more energy than a conventional internet search.

The same companies that develop and promote consumer AI - including Microsoft, Google and Amazon - also use it to help corporations find and extract oil and gas as quickly as possible. But when it comes to the indirect effects of AI, the environment remains a huge blind spot for most people.

Our research identifies hidden costs and draws attention to how AI encourages hyperconsumption and large carbon footprint lifestyles. We also study how the cultural values embedded within widely available AI applications emphasise individualism and commodification, while ignoring or downplaying the relevance of environmental issues.

Consumption-based emissions must fall to avoid runaway climate change, so how environmental values are expressed matters. Our research shows that many AI companies do not consider the environmental harm caused by their products to be something worth worrying about.

AI is embedded in the digital tools and platforms people use in their everyday lives. Search engines, social media and online marketplaces have all incorporated what they call "AI features" into their applications.

These are often default settings that are hard to disable or opt out of. Many people are unaware these functions are switched on, let alone that their ultimate purpose is to encourage purchases from which platform owners can extract a profit.

Such a business model accomplishes two things at once, generating both financial profits and data to be used as business intelligence. And it means emissions are generated twice: through the direct use of widely available applications, and in the additional emissions encouraged by the content being delivered to users. This is a double whammy for the environment.

As part of our research into big tech, we prompted Microsoft's prominent chatbot Copilot with the simple term "children's clothes". This generated a list of links to online shops and department stores. Our prompt did not say we wanted to buy new children's clothes.

To understand how the chatbot had turned our prompt into a web search, we asked it to describe its decisions. Copilot provided three phrases, all referring to consumption: children's clothing stores, best places to buy kids' clothes, and popular children's clothing brands.

Copilot's response could have been about typical materials and colours, sewing, or swapping and buying secondhand children's clothes. In fact, Ecosia, the search engine that uses profits to fund climate action, foregrounds buying sustainable alternatives and shows options for renting, borrowing and buying secondhand.

However, Copilot's AI search focused on shopping for new clothes - indirectly encouraging overconsumption. The same prompts in OpenAI's SearchGPT produced near-identical results, interpreting the user's intent as that of a shopper. Google's AI Overviews gave us the same results, as did another search engine, Perplexity.

Nobody takes responsibility for these indirect emissions. They don't come from the producers of the children's clothes or the consumers. They fall outside most mechanisms for attributing, measuring and reporting environmental impacts.

By naming this phenomenon for the first time, we can bring greater attention to it. We use the term "algorithmically facilitated emissions" - and believe platform owners, whose profits depend on connecting producers with consumers and extracting value from their exchange, should bear the responsibility for them.

'Acceptable' environmental harm

We can tell that most AI developers do not pay attention to the environment by analysing what these companies allow and restrict. We studied the acceptable use policies they have for their AI models, which specify the kinds of queries, prompts and activities that users are not allowed to perform with their services. Very few of these AI policies include the environment or nature - and when they do, it is usually superficial.

For instance, only one-sixth of the 30 use policies we investigated mention animals at all. Where they do appear, animals are listed as individuals after humans, not as species that need protection or are valuable to ecosystems.

Misinformation is frequently cited in these policies as unacceptable. But the policies tend to be human-centred: the environment is largely disregarded, both in how misinformation is framed and in how rarely it is mentioned at all. Contributing to climate change or other environmental harms does not feature as a risk to be avoided.

Tech companies, policymakers, governments and business organisations should acknowledge that the continued growth of AI is having systemic consequences which harm the environment. These include direct effects of energy and resource use, plus indirect effects pertaining to consumption-focused lifestyles and social norms that disregard the environment.

But the normalisation of AI-in-everything helps bury these consequences - just when environmental awareness is needed most, and pressure on governing bodies to pass climate-forward policies should be maintained.

New language can help these dynamics be seen, talked about and measured. Platforms that connect producers and consumers play an important role in deciding what gets produced and consumed - yet the way we, as a society, typically think about consumption does not allow for this. New terms, such as algorithmically facilitated emissions, can hopefully help people rethink and redesign our information infrastructure.

If AI can be built to increase consumption, then the opposite is also possible: AI could instead promote environmental values and reduce consumption.


The Conversation

Jutta Haider receives funding from Mistra - the Foundation for Environmental Strategic Research and from Formas - Swedish Research Council for Sustainable Development.

Björn Ekström receives funding from Mistra - the Foundation for Environmental Strategic Research.

James White receives funding from The Swedish Research Council.

Courtesy of The Conversation. This material from the originating organisation/author(s) may be of a point-in-time nature and has been edited for clarity, style and length. Mirage.News does not take institutional positions or sides, and all views, positions, and conclusions expressed herein are solely those of the author(s).