Artificial intelligence is getting more powerful, but it's also racking up a massive energy bill. By some estimates, a single maximum-length ChatGPT query can consume roughly twice as much electricity as an average U.S. home uses in a minute. Multiply that by billions of queries, plus the enormous energy required to train AI models, and the energy impact is staggering.
As researchers race to find greener ways to power AI, a new study led by Tianyi Chen, associate professor of electrical and computer engineering at Cornell Tech, with collaborators from IBM and Rensselaer Polytechnic Institute, explores a promising solution: analog in-memory computing (AIMC), which runs AI computations directly on analog chips.
Unlike traditional architectures, which constantly move data back and forth between memory and processors, AIMC stores and processes data in a single location. "This leverages physics to perform the math calculation instantly without moving the data, potentially slashing power consumption by 1,000 times and making the next generation of AI sustainable," said Chen, who is also an associate professor at Cornell Engineering.
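To make the contrast concrete, here is a minimal conceptual sketch in Python; it is our illustration, not code from the study. It compares the same matrix-vector multiplication, the workhorse of every neural-network layer, computed the digital way and the idealized analog way, where weights are held as device conductances and the circuit's physics performs the multiply-and-sum in place.

```python
import numpy as np

rng = np.random.default_rng(0)

# The workhorse operation of every neural-network layer: y = W @ x.
W = rng.normal(size=(256, 512))   # weight matrix
x = rng.normal(size=512)          # layer input

# Digital chip: W is fetched from memory and multiplied in a separate
# processor; that data movement dominates the energy cost.
y_digital = W @ x

# Idealized analog crossbar: weights sit in the array as conductances G and
# the input arrives as voltages V. Ohm's law multiplies, Kirchhoff's current
# law sums, so each output current is I[i] = sum_j G[i, j] * V[j]: the same
# dot product, computed where the weights live, with no data movement.
G, V = W, x
y_analog = np.einsum("ij,j->i", G, V)

print(np.allclose(y_digital, y_analog))   # same math, very different energy bill
```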
Although analog chips can run AI models with far less energy once those models are trained, training the models on analog hardware has been a major hurdle.
"Analog hardware behaves imperfectly," Chen said. "The electrical pulses that update model parameters can be uneven or distorted by noise, which leads to inaccurate gradients and degraded learning."
Chen's team tackled this challenge in work presented in an oral session on Dec. 4 at the 2025 Annual Conference on Neural Information Processing Systems (NeurIPS). The researchers developed an analog counterpart of backpropagation, the standard algorithm for training AI models, which they call "Residual Learning." It systematically corrects hardware imperfections and keeps training on track.
Rather than simply patching errors, the method takes a systematic approach to the quirks of analog hardware. Residual Learning introduces an extra layer that tracks these imperfections and corrects them in real time, keeping the learning process steady and unbiased. The result: analog chips can train AI models with accuracy close to that of digital systems while consuming far less energy, paving the way for greener, large-scale AI.
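The sketch below conveys the flavor of such a correction using a simple error-feedback scheme of our own devising; it is not the paper's Residual Learning algorithm, and the noisy_write model and its parameters are illustrative assumptions carried over from the previous sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_write(w, intended, noise=0.05, asymmetry=0.5):
    """Same toy hardware model as above: the pulse actually applied to the
    weight is both distorted (uneven up/down) and noisy."""
    applied = np.where(intended > 0, (1.0 - asymmetry) * intended, intended)
    return w + applied + noise * rng.standard_normal(np.shape(w))

lr = 0.1
w = np.full(2000, 2.0)        # 2,000 independent runs on f(w) = 0.5 * w**2
residual = np.zeros(2000)     # digital bookkeeping of uncorrected error

for _ in range(200):
    intended = -lr * w + residual       # intended update, plus leftover error
    w_new = noisy_write(w, intended)    # imperfect analog write
    residual = intended - (w_new - w)   # the part the hardware failed to apply
    w = w_new

print(f"average result with correction: {np.mean(w):+.3f}   (ideal optimum: 0.000)")
# The systematic bias from uneven pulses cancels out; only zero-mean noise is
# left, so training stays centered on the true optimum.
```

In this toy, a single extra variable does the bookkeeping; Residual Learning, as described above, instead introduces an extra layer inside its analog version of backpropagation so that full models can be trained this way.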
"In essence, we dynamically compensate for the hardware's natural imperfections so the model can be trained with accuracy comparable to digital hardware, but at a fraction of the energy cost," Chen said.
This breakthrough could be transformative, Chen said, making it practical to train and fine-tune large AI models at far lower energy and cost. That could bring AI to applications currently out of reach for power-hungry systems, from health care devices and wearable technology to industrial sensors and autonomous robots. The development could also lead to entirely new AI model architectures designed specifically for analog hardware.
The team's next steps include tailoring the approach to open-source frontier models and exploring industry collaborations to scale the technology.
"This opens the door to models that are inherently far more energy-efficient than anything we have today," said Chen. "Our research could trigger a genuine change in how we build and use AI."
Grace Stanley is a staff writer-editor for Cornell Tech.