Design Tweaks Drive AI's Eco-Friendly Evolution

Oregon State University

CORVALLIS, Ore. – Artificial intelligence systems that ask users to pause to consider AI's energy consumption and environmental impacts are likely to reduce unnecessary AI use, new research by Oregon State University suggests.

The findings, published in Science Communication, are important as AI is already using electricity on scales that can be meaningfully compared to households, factories and towns. For example, the electricity needed to train a large language model would power 120 homes for a year, the researchers note; one AI-generated image has roughly the same energy cost as charging a smartphone.

With about 85% of the world's energy still coming from fossil fuels, every megawatt-hour that can be carved from AI's electricity profile is significant, says the study's leader, Cheng "Chris" Chen of the OSU College of Liberal Arts.

"Despite AI's substantial environmental impacts, information about those impacts is rarely disclosed or effectively communicated to everyday users of AI systems," said Chen, assistant professor in the School of Communication. "That means people tend to be severely limited in their ability to make environmentally conscious decisions during their interactions with these systems, which often prod you to continue using them even after you've already gotten what you originally asked for."

Chen and collaborators at the University of Illinois and the University of Virginia sought to determine whether "design friction" – basically speed bumps for software users – would help people pause to consider the environmental aspects of asking AI to generate an image.

They found that action-based friction, which required the user to search for existing image resources and to specify details about the image they wanted to generate, increased users' intentions to be more ecologically responsible in their AI use.

Additionally, the researchers looked at cue-based friction – in this case, persuasive messaging about AI's effects on the environment. The study showed that cue-based friction increased users' trust but didn't tend to affect their intentions to use AI responsibly.

"Most generative AI systems prioritize efficiency for the user, with interfaces that focus primarily on functionality and output quality, meaning many AI users remain unaware of their ecological footprint," Chen said. "We've shown that when users are prompted to slow down and reflect, it creates an opening for more responsible AI use."

With artificial intelligence already ubiquitous and growing more so by the day – it's been estimated that high-performance computing may account for one-fifth of the world's energy consumption by 2030 – mechanisms for encouraging responsible AI use are critical, Chen says.

Rules of thumb for responsible use, she says, include relying on AI only when comparably effective tools don't exist; avoiding redundancies across multiple AI projects; and closing the tool when your needs have been met rather than allowing yourself to be prompted to remain in the system.

"It's mainly a matter of people understanding that the image of happy pandas eating bamboo shoots that they want AI to make for them doesn't come for free from an environmental standpoint," she said. "And if you do decide to have the image made, save it. That way, if you need a similar image later, you won't have to have another one generated."
