Neuroanatomy Boosts Hybrid AI in Acoustic Target Detection

Beijing Institute of Technology Press Co., Ltd

"Current automated STD methods perform well under controlled conditions but degrade sharply at low SNR or with unseen targets, while standalone BCI systems suffer from high false alarm rates. To overcome these limitations, we propose a hybrid approach that combines the complementary strengths of neural perception and acoustic feature learning," explained study author Luzheng Bi, a researcher at the Beijing Institute of Technology. The core innovations include (a) Tri-SDANet, an EEG decoding model incorporating neuroanatomical priors from source analysis, (b) a confidence-driven fusion strategy that adaptively integrates BCI and automatic detection outputs, and (c) a streaming-mode experimental validation simulating real-world scenarios. "This integrated solution achieves robust detection performance with high generalization, offering a practical tool for security protection and environmental reconnaissance," Bi added.
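The confidence-driven fusion idea described in point (b) can be illustrated with a minimal sketch. The function name, thresholds, and the simple probability-averaging rule below are illustrative assumptions, not the authors' implementation; the only property taken from the article is that the BCI branch is consulted only when the automatic detector is uncertain.

```python
# Hypothetical sketch of confidence-driven fusion: the automatic acoustic
# detector runs first, and the (slower, human-in-the-loop) BCI branch is
# queried only when the detector's confidence falls in an uncertain band.
# Thresholds and the averaging rule are assumptions for illustration.

def fuse(auto_prob, query_bci, low=0.3, high=0.7):
    """Return (target_present, used_bci).

    auto_prob : automatic detector's probability that a target is present.
    query_bci : callable returning the BCI decoder's target probability;
                invoked only when the detector is uncertain.
    """
    if auto_prob >= high:   # detector confident: target present
        return True, False
    if auto_prob <= low:    # detector confident: no target
        return False, False
    # Uncertain band: consult the EEG/BCI branch and combine the evidence.
    bci_prob = query_bci()
    return (auto_prob + bci_prob) / 2 > 0.5, True
```

Because the BCI callable is only invoked inside the uncertain band, operator workload scales with detector uncertainty rather than with the total number of audio events, which is the workload-reduction effect the authors describe.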

The hybrid system leverages several technical advances. The Tri-SDANet model adopts a neuroanatomy-based spatial partitioning strategy, dividing 60-channel EEG signals into temporal, frontal, and parieto-occipital regions, each processed with dedicated spatiotemporal filters. The automatic detection module integrates state-of-the-art models trained on log-Mel spectrogram features. "The fusion framework invokes BCI only when the automatic detector is uncertain, reducing human workload while maintaining accuracy," said Jianting Shi, the lead author.
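The region-partitioning step can be sketched as grouping electrodes by their standard 10-20/10-10 system name prefixes, so that each anatomical group can be routed to its own spatiotemporal filter bank. The prefix table below is an assumption based on conventional electrode naming (central-line channels are folded into the parieto-occipital group here), not the authors' exact channel assignment.

```python
# Illustrative sketch: group EEG channels into the three anatomical regions
# mentioned in the article by 10-20/10-10 name prefixes (longest match wins).
# The prefix-to-region table is an assumption, not the paper's assignment.

REGION_PREFIXES = {
    "frontal":           ("Fp", "AF", "F"),
    "temporal":          ("FT", "T", "TP"),
    "parieto-occipital": ("C", "CP", "P", "PO", "O"),  # central folded in
}

def partition_channels(channel_names):
    """Map each channel name to one anatomical region (longest prefix wins)."""
    regions = {name: [] for name in REGION_PREFIXES}
    for ch in channel_names:
        best, best_len = None, 0
        for region, prefixes in REGION_PREFIXES.items():
            for p in prefixes:
                if ch.upper().startswith(p.upper()) and len(p) > best_len:
                    best, best_len = region, len(p)
        if best is not None:
            regions[best].append(ch)
    return regions
```

Longest-prefix matching matters because, for example, "FT7" must fall to the temporal group (prefix "FT") rather than the frontal group (prefix "F"); once partitioned, each region's channel subset can be passed to its own spatial filter, as in the Tri-SDANet design the article describes.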

"While the hybrid system shows promising results, it still faces challenges: EEG decoding latency, operator fatigue, and adaptation to more diverse sound targets. Future work will focus on algorithm and hardware optimization to reduce latency, develop user-friendly training protocols, and expand the dataset to cover broader acoustic scenarios," said Shi. Overall, this brain–machine hybrid intelligence framework provides a generalizable solution for robust STD, bridging the gap between laboratory performance and real-world application demands.

Authors of the paper include Jianting Shi, Jiaqi Wang, Weijie Fei, Aberham Genetu Feleke, and Luzheng Bi.

This work was supported by the National Natural Science Foundation of China under Grant 62573053 and the Beijing Natural Science Foundation of China under Grant IS23064.

The paper, "Neuroanatomy-Informed Brain–Machine Hybrid Intelligence for Robust Acoustic Target Detection," was published in the journal Cyborg and Bionic Systems on Oct. 17, 2025 (DOI: 10.34133/cbsystems.0438).
