
< Professor Jun Han >
From smartphone facial recognition to autonomous vehicles, Artificial Intelligence (AI) has long been regarded as a "black box." However, a joint research team from KAIST and international institutions has uncovered a new security threat capable of "peeking" at AI blueprints from behind walls. The team also presented corresponding defense technologies. This discovery is expected to be utilized in strengthening AI security across various sectors, including autonomous driving, healthcare, and finance.
On the 31st, Professor Jun Han's research team from the KAIST School of Computing announced that, in collaboration with the National University of Singapore (NUS) and Zhejiang University, it had developed "ModelSpy," an attack system capable of remotely extracting AI model architectures using only a small antenna.
This technology works much like a bugging device, capturing and analyzing minute signals emitted while an AI is operational to reconstruct its internal structure. The research team focused on the electromagnetic (EM) waves generated by Graphics Processing Units (GPUs), which handle AI computations.
When an AI performs complex calculations, the GPU emits subtle electromagnetic signals. By analyzing the patterns of these signals, the team successfully restored the layer configurations and detailed parameter settings of the AI model.
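The idea of recovering layer information from emitted signals can be illustrated with a deliberately simplified toy sketch. This is not the team's actual method; the signal model, layer types, and frequencies below are all hypothetical, standing in for the principle that different computations leave distinguishable spectral fingerprints that a classifier can match back to layer types.

```python
# Toy sketch (hypothetical, not the ModelSpy pipeline): classify synthetic
# "EM traces" by their dominant frequency to recover a layer sequence.
import numpy as np

rng = np.random.default_rng(0)

def synth_trace(layer_type, n=1024, fs=1000.0):
    """Generate a toy trace: each hypothetical layer type gets its own tone."""
    freq = {"conv": 50.0, "dense": 120.0, "pool": 200.0}[layer_type]
    t = np.arange(n) / fs
    return np.sin(2 * np.pi * freq * t) + 0.3 * rng.standard_normal(n)

def dominant_freq(trace, fs=1000.0):
    """Feature extraction: the frequency bin carrying the most energy."""
    spectrum = np.abs(np.fft.rfft(trace))
    spectrum[0] = 0.0  # ignore the DC component
    return np.fft.rfftfreq(len(trace), 1 / fs)[np.argmax(spectrum)]

def classify(trace):
    """Nearest-centroid classifier over the single spectral feature."""
    centroids = {"conv": 50.0, "dense": 120.0, "pool": 200.0}
    f = dominant_freq(trace)
    return min(centroids, key=lambda k: abs(centroids[k] - f))

# A toy "model run" emits one trace per layer; the eavesdropper recovers
# the layer sequence from the traces alone.
true_layers = ["conv", "pool", "conv", "dense"]
recovered = [classify(synth_trace(lt)) for lt in true_layers]
print(recovered)
```

In the real attack the features and classifiers would be far richer, but the structure is the same: observe emissions, extract features, and map them back to architectural components.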
Experimental results showed that the structure of AI models could be identified with high accuracy from up to 6 meters away, or through walls, on five recent GPU models. Notably, the team estimated the layer structure, the core of a deep learning model, with up to 97.6% accuracy.

< AI model structures can be stolen through walls using an antenna hidden in a bag >
This technology is considered a significant security threat because, unlike traditional hacking, it does not require direct server infiltration or malware installation. An attack can be carried out using only a portable antenna small enough to fit in a bag.
Recognizing that this technology could lead to the leakage of a company's core AI assets, the research team also proposed defensive measures, such as electromagnetic interference and computational obfuscation. This is being hailed as a responsible security study that goes beyond demonstrating an attack to suggesting realistic protection methods.
"This research demonstrates that AI systems can be exposed to new types of attacks even in physical environments," said Professor Jun Han. "To protect critical AI infrastructure, such as autonomous driving and national facilities, it is essential to establish 'cyber-physical security' systems that encompass both hardware and software."

< Research Image (AI-generated) >
Professor Jun Han of the KAIST School of Computing participated as a co-corresponding author. The study was presented at the NDSS (Network and Distributed System Security Symposium) 2026, a top-tier academic conference in computer security, where it received the Distinguished Paper Award in recognition of its innovation.
Paper Title: Peering Inside the Black-Box: Long-Range and Scalable Model Architecture Snooping via GPU Electromagnetic Side-Channel