AI Mimics Brain: Temporal Growth Boosts Learning

Science China Press

How does artificial intelligence continue to improve its capabilities? For a long time, expanding model size has been regarded as an important way to enhance the performance of artificial neural networks, but it has also brought rising energy consumption and growing computational costs. In contrast, the developing human brain does not simply increase connection density; it continuously acquires new cognitive abilities through selective pruning.

Inspired by these observations, the research team proposed a temporally developmental continual learning framework for spiking neural networks. By establishing and reorganizing connections across different regions over time, the approach achieves continual learning from simple to complex across perception, motor, and interaction tasks while network size is progressively reduced, offering a new pathway toward low-energy, sustainably evolving general cognitive intelligence.

Continual Learning Mechanism Inspired by Temporal Brain Development

Studies show that brain development follows clear temporal principles: neural connectivity first increases and then becomes refined, with cross-regional long-range connections gradually strengthening while local connections are selectively pruned. Primary brain regions mature earlier, laying the foundation for higher cognition, and feedback from higher cognitive functions in turn optimizes lower-level structures. Through this process, infants progressively acquire multiple cognitive functions, from simple to complex. Building on these principles, the researchers proposed a continual learning method modeled on this temporal development.

The approach allows cognitive modules in spiking neural networks to grow progressively following the learning sequence of perception, motor control, and interaction, while evolving cross-regional long-range connections to promote knowledge reuse across tasks. At the same time, feedback mechanisms are introduced to inhibit and prune redundant local connections from earlier tasks, enabling the network to become increasingly compact as learning progresses.
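The release describes this mechanism only at a conceptual level; the sketch below is a minimal, hypothetical Python illustration of the same idea, not the authors' implementation. Everything in it is an assumption: the class and method names, the module sizes, and the magnitude-based pruning rule that stands in for the feedback mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

class DevelopmentalSNN:
    """Toy structural model: region modules with local and long-range weights.

    Spiking dynamics, the training loop, and the feedback signal that drives
    pruning are all omitted; magnitude-based pruning stands in for the latter.
    """

    def __init__(self):
        self.modules = {}      # region name -> local weight matrix
        self.long_range = {}   # (src, dst) -> cross-region weight matrix

    def grow_module(self, name, size):
        # Grow a new cognitive module when a new task stage begins.
        self.modules[name] = rng.normal(0.0, 0.1, (size, size))

    def add_long_range(self, src, dst):
        # Establish long-range connections so the new module can reuse
        # representations learned by an earlier one.
        n_src, n_dst = self.modules[src].shape[0], self.modules[dst].shape[0]
        self.long_range[(src, dst)] = rng.normal(0.0, 0.1, (n_src, n_dst))

    def prune_local(self, name, keep_ratio):
        # Stand-in for feedback-driven pruning: zero out all but the
        # strongest fraction of a module's local connections.
        w = self.modules[name]
        threshold = np.quantile(np.abs(w), 1.0 - keep_ratio)
        self.modules[name] = np.where(np.abs(w) >= threshold, w, 0.0)

    def n_connections(self):
        local = sum(np.count_nonzero(w) for w in self.modules.values())
        remote = sum(np.count_nonzero(w) for w in self.long_range.values())
        return local, remote

regions = ["perception", "motor", "interaction"]
net = DevelopmentalSNN()
for stage, region in enumerate(regions):
    net.grow_module(region, size=64)
    for earlier in regions[:stage]:
        net.add_long_range(earlier, region)
    # ... train on the current task here ...
    # Feedback from the newly trained module then prunes redundant local
    # connections in earlier modules, so the network shrinks over time.
    for earlier in regions[:stage]:
        net.prune_local(earlier, keep_ratio=0.7)
    print(region, net.n_connections())
```

In this toy, each stage grows a new module, wires long-range connections from every earlier module into it, and then prunes the weakest local weights in those earlier modules, so the total connection count can fall even as new capabilities are added.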

Energy-Efficient Cross-Domain Continual Learning

The research team found that the proposed method demonstrates stable and strong continual learning performance across multiple cognitive domains, including perception, motor control, and interaction, and achieves leading results on several widely used continual learning benchmarks. Experimental results show that the model learns complex tasks progressively along a "simple-to-complex" trajectory, clearly outperforming direct training or direct pruning approaches. Even as the network scale is continuously reduced, the model effectively preserves memory of previously learned tasks, significantly mitigating catastrophic forgetting while continuing to acquire new cognitive capabilities.

Further analysis indicates that this performance gain arises from brain-like dynamic changes within the network. As learning progresses, local connections first grow rapidly and are then selectively inhibited and pruned, reducing interference from irrelevant or outdated knowledge, while cross-regional long-range connections are continuously strengthened to support the selective reuse of prior knowledge with shared structure and semantics. Importantly, this process does not rely on conventional continual learning strategies such as regularization, experience replay, or weight freezing. The researchers note that this brain-inspired developmental mechanism enhances learning and memory in an efficient, low-energy manner, highlighting the potential of brain developmental principles to drive the next generation of artificial intelligence.
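As a purely illustrative aside, the qualitative trajectory described above, in which local connectivity first rises and is then pruned back while long-range connectivity steadily strengthens, can be mimicked in a few lines of Python; the constants below are invented for the printout and are not taken from the study.

```python
import numpy as np

# Invented schedule: local connectivity expands for a few epochs, then is
# selectively pruned away; long-range connectivity only ever strengthens.
epochs = np.arange(10)
local = np.minimum(1.0, 0.3 * epochs) * np.exp(-0.25 * np.maximum(0, epochs - 3))
long_range = 1.0 - np.exp(-0.3 * epochs)

for e, (lo, lr) in enumerate(zip(local, long_range)):
    print(f"epoch {e}: local density {lo:.2f}, long-range strength {lr:.2f}")
```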
