
<(From left) Professor Sang Wan Lee, Myoung Hoon Ha, and Dr. Yoondo Sung>
Artificial intelligence now plays Go, paints pictures, and even converses like a human. However, there remains a decisive difference: AI requires far more electricity than the human brain to operate. Scientists have long asked the question, "How can the brain learn so intelligently using so little energy?" KAIST researchers have moved one step closer to the answer.
KAIST (President Kwang Hyung Lee) announced on the 29th that a research team led by Distinguished Professor Sang Wan Lee of the Department of Brain and Cognitive Sciences has developed a new technology that applies the learning principles of the human brain to deep learning, enabling stable training even in deep artificial intelligence models.
Our brain does not passively receive the world as it is. Rather than merely perceiving what is happening now, it first predicts what will happen next and, when reality differs from that prediction, adjusts itself to reduce the difference (the prediction error). This is like anticipating an opponent's next move in Go and changing strategy when the prediction turns out to be wrong. This mode of information processing is known as "predictive coding."
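The core loop of predictive coding can be sketched in a few lines. The toy below is purely illustrative, not the research team's implementation: a layer holds a "belief" `z`, predicts an observation through assumed weights `W`, and repeatedly adjusts the belief to shrink the prediction error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy predictive-coding step for a single layer. All names and shapes
# are illustrative assumptions, not the paper's architecture.
W = rng.normal(scale=0.1, size=(4, 8))   # generative weights: belief -> observation
x = rng.normal(size=8)                   # observed signal
z = np.zeros(4)                          # the layer's current "belief"

lr = 0.1
for _ in range(50):
    prediction = z @ W                   # top-down prediction of the observation
    error = x - prediction               # prediction error
    z += lr * (error @ W.T)              # adjust the belief to reduce the error

# After this inference loop, the prediction error should have shrunk
# relative to the starting belief z = 0 (where the error equals ||x||).
final_error = np.linalg.norm(x - z @ W)
print(final_error < np.linalg.norm(x))   # → True
```

Each update is simply gradient descent on the squared prediction error with respect to the belief, which is the "adjust yourself to reduce the difference" step described above.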

< Predictive Coding (PC) Module >
Scientists have attempted to apply this principle to AI, but encountered difficulties. As neural networks become deeper, errors tend to concentrate in specific layers or vanish altogether, repeatedly leading to performance degradation.
The research team mathematically identified the cause of this problem and proposed a new solution. The key idea is simple: instead of predicting only the final outcome, the AI is designed to also predict how its prediction errors will change in the future. The team refers to this as "Meta Prediction." In simple terms, it is an AI that "thinks once more about its mistakes." When this method was applied, learning proceeded stably in deep neural networks without halting.
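The "meta prediction" idea can be illustrated with a second, higher-level predictor that forecasts the layer's own error signal. This is a conceptual sketch only, under assumed names and shapes; the paper's actual mechanism may differ substantially.

```python
import numpy as np

rng = np.random.default_rng(1)

# Conceptual sketch: alongside predicting the signal, the model also
# predicts how its own prediction error will evolve, and learns from the
# meta-error too ("thinking once more about its mistakes"). Names, shapes,
# and update rules here are illustrative assumptions, not the paper's.
W = rng.normal(scale=0.1, size=(4, 8))   # predicts the observation
M = np.zeros((8, 8))                     # meta-predictor: previous error -> next error
x = rng.normal(size=8)
z = rng.normal(size=4)

lr = 0.1
prev_error = x - z @ W
errors = []
for _ in range(30):
    error = x - z @ W
    meta_pred = prev_error @ M           # forecast of the current error
    meta_error = error - meta_pred       # how wrong that forecast was
    z += lr * (error @ W.T)              # ordinary predictive-coding update
    M += lr * np.outer(prev_error, meta_error)  # train the meta-predictor
    prev_error = error
    errors.append(np.linalg.norm(error))

print(errors[-1] < errors[0])            # → True: the prediction error decreased
```

The point of the sketch is the extra bookkeeping: the model maintains not only a prediction of the data but also a prediction of its own error trajectory, giving it a second, stabilizing learning signal.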

The experimental results were also impressive. In 29 out of 30 experiments, the proposed method achieved higher accuracy than backpropagation, the current standard AI training method. Backpropagation is the representative learning method in which AI "propagates the error backward through the network and corrects itself by that amount."
Conventional AI training (backpropagation) requires tightly interconnected layers: the entire network must be computed and updated as one unit. In contrast, the new approach shows that, like the brain, large AI models can be trained effectively even when learning is distributed and partially independent across layers.
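The contrast with backpropagation can be made concrete with a small two-layer sketch in the predictive-coding style: every weight update below uses only quantities available at that layer (its input, its activity, its local error), with no global backward chain. The two-layer setup and all names are assumptions for illustration, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)

x = rng.normal(size=16)                    # input
target = rng.normal(size=4)                # desired output
W1 = rng.normal(scale=0.1, size=(16, 8))   # input -> hidden
W2 = rng.normal(scale=0.1, size=(8, 4))    # hidden -> output
z1 = x @ W1                                # hidden activity, free to adjust

def energy():
    # Total squared prediction error across both layers.
    e1 = z1 - x @ W1                       # local error at the hidden layer
    e2 = target - z1 @ W2                  # local error at the output layer
    return 0.5 * (e1 @ e1) + 0.5 * (e2 @ e2)

start = energy()
lr = 0.02
for _ in range(300):
    e1 = z1 - x @ W1
    e2 = target - z1 @ W2
    z1 += lr * (e2 @ W2.T - e1)            # inference: purely local signals
    W2 += lr * np.outer(z1, e2)            # local, Hebbian-style weight update
    W1 += lr * np.outer(x, e1)             # no global error chain needed
end = energy()

print(end < start)                         # → True: total error decreased
```

Because each update reads only its own layer's inputs and errors, the layers could in principle be computed and updated in a distributed, partially independent fashion, which is the property the press release highlights.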

This technology is expected to expand into various fields where power efficiency is critical, including neuromorphic computing, robot AI that must adapt to changing environments, and edge AI operating within devices.
Distinguished Professor Sang Wan Lee stated, "The key to this research is not simply imitating the structure of the brain, but enabling AI to follow the brain's learning principles themselves," adding, "We have opened the possibility of artificial intelligence that learns efficiently like the brain."
This study was conducted with Dr. Myoung Hoon Ha as the first author and Professor Sang Wan Lee as the corresponding author. The paper was accepted to the International Conference on Learning Representations (ICLR 2026) and was published online on January 26.
※ Paper title: "Stable and Scalable Deep Predictive Coding Networks with Meta Prediction Errors"
Original paper: https://openreview.net/forum?id=kE5jJUHl9i&noteId=e6T5T9cYqO
This research was supported by the Ministry of Science and ICT and the Institute of Information & Communications Technology Planning & Evaluation (IITP) through the Digital Global Research Support Program (joint research with Microsoft Research), the Samsung Electronics SAIT NPRC Program, and the SW Star Lab Program.