Drones Navigate Blind: New Tech Unveiled

Aerospace Information Research Institute, Chinese Academy of Sciences

When Global Positioning System (GPS) signals fail, drones can quickly lose their sense of place, especially in tunnels, dense cities, forests, or hostile environments. This study presents a new artificial intelligence framework, called CLAK (CNN-LSTM-Attention-KAN), that helps unmanned aerial vehicles (UAVs) estimate latitude, longitude, and elevation without relying on cameras or satellite navigation. Instead, it learns from non-visual onboard sensors, including LiDAR, barometric altitude, and inertial measurements. By combining spatial encoding, time-sequence learning, attention-based feature selection, and flexible nonlinear regression, the system sharply improves localization accuracy and remains lightweight enough for practical deployment in demanding conditions where visual methods may struggle.

Accurate localization is essential for drone autonomy, but conventional Global Navigation Satellite System (GNSS)-based navigation is vulnerable to signal blockage, multipath interference, and spoofing. Visual localization methods can be highly accurate, yet they often require heavy computation, careful calibration, and favorable lighting and texture conditions. Their performance can deteriorate in low-light scenes, texture-poor landscapes, or on resource-constrained platforms. Recent research has therefore moved toward lighter, sensor-efficient approaches that can still recover position under degraded conditions. These challenges motivate deeper research into robust, non-visual, learning-based localization strategies for UAVs operating in GNSS-denied environments.

A team from Prince Sultan University in Saudi Arabia reported (DOI: 10.1186/s43020-026-00192-1) in Satellite Navigation in 2026 that its CLAK model can localize UAVs in GNSS-denied environments using only non-visual sensor streams. The study by Imen Jarraya and colleagues shows that by fusing LiDAR, barometric altitude, and inertial data through a hybrid CNN-LSTM-attention-KAN architecture, drones can recover global position with markedly improved precision across challenging simulated flight paths over the Taif region.

The researchers built CLAK as a four-stage deep learning pipeline. First, one-dimensional convolutional layers extracted short-term patterns from six input modalities derived from fused sensor streams. Next, stacked bidirectional LSTM layers modeled temporal dependencies in motion data. An attention module then highlighted the most informative moments in the sequence, while a Kolmogorov-Arnold Network, or KAN, performed the final nonlinear regression to predict latitude, longitude, and elevation. The model was trained on synthetic UAV data generated in a ROS2-based simulation environment integrating Gazebo, PX4, and QGroundControl, with terrain information taken from a digital elevation map of the Taif region in Saudi Arabia. In five-fold cross-validation, CLAK reduced mean absolute error (MAE) from 3.1953 m to 0.9042 m and root mean square error (RMSE) from 6.9293 m to 2.3621 m compared with the LSTM baseline, while raising the coefficient of determination (R²) to 0.9979. In trajectory-level testing, it achieved an average MAE of 0.800 m, an RMSE of 1.915 m, and an R² of 0.998, with up to 78.35% MAE reduction and 75.40% RMSE reduction on some routes.
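For readers unfamiliar with the three reported metrics, they can be computed directly from predicted versus ground-truth coordinates. Below is a minimal sketch in plain Python; the sample elevation values are invented for illustration only and are not the paper's data.

```python
import math

def mae(y_true, y_pred):
    # Mean absolute error: average |error|, here in metres
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    # Root mean square error: penalizes large misses more than MAE does
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def r2(y_true, y_pred):
    # Coefficient of determination: 1 - (residual sum of squares / total sum of squares)
    mean_t = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Illustrative ground-truth and predicted elevations in metres (invented values)
truth = [100.0, 102.0, 104.0, 106.0, 108.0]
pred = [100.5, 101.8, 104.3, 105.9, 108.4]

print(round(mae(truth, pred), 3))   # MAE ≈ 0.3 m
print(round(rmse(truth, pred), 3))  # RMSE ≈ 0.332 m
print(round(r2(truth, pred), 4))    # R² close to 1 means predictions track the truth
```

An R² of 0.998, as reported for CLAK, means the model explains nearly all of the variance in the true positions, which is why it pairs naturally with the sub-metre MAE figures.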

According to the research team, CLAK stands out because it combines high-accuracy localization, millisecond-scale inference, and minimal sensor requirements in a single framework. The authors argue that its ability to generalize across different terrain types and flight patterns, without depending on vision, makes it a strong candidate for real-world deployment on resource-constrained UAVs. They also emphasize that the model's unified design reflects the growing promise of learning-based autonomy for operational navigation in environments where satellite signals are unreliable or unavailable.

The implications extend well beyond navigation alone. Reliable non-visual localization could support drone missions in urban canyons, disaster zones, mountainous terrain, indoor facilities, and other places where GPS is weak or absent and cameras may fail. Because the model avoids heavy visual processing while preserving strong accuracy, it may help enable safer, more energy-efficient autonomy on smaller aerial platforms. The authors also point to future work on lightweight KAN designs, pruning, quantization, knowledge distillation, broader field trials, and cooperative localization among multiple UAVs. Together, these directions suggest a path toward practical, scalable drone navigation systems that remain resilient when conventional positioning breaks down.

/Public Release. This material from the originating organization/author(s) may be of a point-in-time nature and edited for clarity, style and length. Mirage.News does not take institutional positions or sides, and all views, positions, and conclusions expressed herein are solely those of the author(s).