Machine Learning engineers' own perceptions of their environmental impact could be an impediment on the path towards greener AI.

A survey of twenty-three Machine Learning (ML) practitioners in the UK and other parts of the world, conducted by King's College London, highlighted feelings of alienation from the sustainability of their models, suggesting that environmental credentials were not viewed as part of an AI's performance.
Respondents offered comments such as "As an individual, I suppose you can't really do much", despite a growing number of tools for tracking the environmental impact of AI, suggesting more needs to be done to empower developers to create greener models.
Dr Georgia Panagiotidou, paper author and Lecturer in Visualisation, said: "We highlight a fundamental lack of agency at the heart of the AI and sustainability conversation. Despite tracking tools and information about the effect of ML on the environment, many developers feel that their industry doesn't value sustainability and that what they do won't matter.
"This work isn't about bashing individuals but working with them to identify blockers to achieving change in the space. By integrating sustainable thinking into all of AI practice, we can help address the lack of knowledge practitioners feel and give them the tools to make sustainable decisions in the face of the climate crisis."
"I need to do my research and if I was to tell my supervisor no, I'm not going to use the HPC (high performance computer) because I feel bad for the penguins in the Antarctic, then that wouldn't go down so well."
Anonymous PhD study participant
ML, a subset of AI, has been rolled out widely in recent years as AI tools play a larger part in the world economy. This has come at a significant environmental cost: global greenhouse gas emissions from the ICT industry have doubled in the past decade, and the energy demands of data centres training ML models have delayed the retirement of coal power plants.
To address concerns over the Scope 3 emissions of ICT companies training AI, tools like CodeCarbon have been developed to estimate the emissions produced when executing code or training a model, giving developers a window into the sustainability of their work.
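As a rough illustration, the sketch below shows how such eco-feedback can wrap an ordinary piece of code. It assumes CodeCarbon's Python EmissionsTracker interface, and the workload in the middle is a hypothetical stand-in for a real training run:

```python
# A minimal sketch of eco-feedback with CodeCarbon (pip install codecarbon).
# The loop below is a placeholder workload, not a real training run.
from codecarbon import EmissionsTracker

tracker = EmissionsTracker(project_name="demo-training-run")
tracker.start()
try:
    # Stand-in for model training: any CPU/GPU-intensive work goes here.
    total = sum(i * i for i in range(10_000_000))
finally:
    emissions_kg = tracker.stop()  # estimated emissions in kg CO2-equivalent

print(f"Estimated emissions for this run: {emissions_kg:.6f} kg CO2eq")
```

By default the tracker also logs its estimate to a local emissions.csv file, giving developers a per-run record of the environmental footprint of their code.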
While research has analysed how the information given by smart meters influences consumers to make more sustainable decisions, little work had been conducted on how eco-feedback tools affect the decisions of ML practitioners, until now.
Participants in the paper, presented at the 2025 ACM Conference on Fairness, Accountability, and Transparency, highlighted technical, individual, regulatory and cultural approaches to cutting carbon emissions, such as using eco-feedback tools and requiring workplaces to rationalise energy use in AI training.
Despite this, most felt they had a limited responsibility to deal with environmental concerns, and that the blame lay with large tech providers of Large Language Models like ChatGPT.
Moreover, participants described how their own fields, whether academia or industry, saw sustainability as "not one of our results, it's not a metric of performance" and treated it as secondary in cultures that prized high model accuracy and speed in producing papers and new products.
"Responsible AI taught us that something at the periphery of the conversation can become deeply embedded in practice; our research highlights there is work to be done to repeat that success story with sustainability."
Sinem Görücü
A PhD student described sustainability taking a back seat in the competitive environment of research publishing: "I need to do my research and if I was to tell my supervisor no, I'm not going to use the HPC (high performance computer) because I feel bad for the penguins in the Antarctic, then that wouldn't go down so well."
Sinem Görücü, a PhD candidate at King's and first author of the paper, said: "Qualitative interviews were vital to capture how each individual thought about their work, and we were surprised to find people so disempowered. This was a self-selecting group of climate-conscious people who nevertheless struggled to be climate active as developers.
"Responsible AI taught us that something at the periphery of the conversation can become deeply embedded in practice; our research highlights there is work to be done to repeat that success story with sustainability."
In the future, the team hopes to conduct a large-scale quantitative study of Machine Learning practitioners' perceptions of environmental sustainability.