
A KAIST research team has developed quadrupedal robot control technology that not only walks by estimating terrain without visual information, but also perceives its surroundings through cameras and LiDAR sensors and makes its own decisions while walking, much like animals that visually examine the terrain and adjust their steps. The technology is also expected to extend to other robotic platforms such as wheeled-legged robots and humanoid robots.
KAIST (President Kwang Hyung Lee) announced that a research team led by Professor Hyun Myung from the School of Electrical Engineering, in collaboration with the lab's startup EuRoboTics Co., Ltd., has developed "DreamWaQ++," a quadrupedal robot control technology that recognizes terrain based on visual information and adjusts locomotion strategies in real time.
The previously developed "DreamWaQ" by this research team is a "blind locomotion" technology that estimates terrain using only proprioceptive sensing such as joint encoders and inertial sensors, enabling robust movement even without visual information. It allows stable walking even in environments where visual information is difficult to obtain, such as disaster situations, but has the limitation that the robot can only adjust its movement after its legs directly contact obstacles.
The newly developed DreamWaQ++ overcomes this limitation by combining proprioceptive sensing with exteroceptive sensing based on cameras and LiDAR. The key is that it enables "perception-based locomotion," in which the robot recognizes obstacles in advance and proactively adjusts its walking strategy, going beyond simple reactive control to understanding and making decisions about the environment.

< (Representative image) (a) DreamWaQ++ walking on stairs (b) Terrain predicted by DreamWaQ++ compared with the ground truth (gray) >
To achieve this, the research team designed a multimodal reinforcement learning architecture and implemented it with lightweight computation so that it runs in real time. The design also secures both stability, by automatically falling back to the remaining sensory modalities when a sensor fails or produces errors, and scalability, allowing the controller to be applied to a variety of robotic platforms.
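The fallback behavior described above can be sketched in miniature. The toy policy below is purely illustrative and is not the paper's actual network: the dimensions, random stand-in weights, and the idea of substituting a proprioception-derived latent when cameras or LiDAR drop out are all assumptions made for the example, loosely mirroring how a blind-locomotion estimator could back up an exteroceptive encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- illustrative only, not the paper's actual sizes.
PROPRIO_DIM, EXTERO_DIM, LATENT_DIM, NUM_JOINTS = 45, 187, 32, 12

# Random matrices stand in for trained encoder/policy weights.
W_proprio = rng.standard_normal((LATENT_DIM, PROPRIO_DIM)) * 0.1
W_extero = rng.standard_normal((LATENT_DIM, EXTERO_DIM)) * 0.1
W_policy = rng.standard_normal((NUM_JOINTS, 2 * LATENT_DIM)) * 0.1

def encode(x, W):
    """Tiny stand-in for a learned encoder: one linear layer + tanh."""
    return np.tanh(W @ x)

def act(proprio, extero, extero_valid=True):
    """Produce joint targets from both modalities; fall back if needed."""
    z_p = encode(proprio, W_proprio)
    if extero_valid:
        z_e = encode(extero, W_extero)
    else:
        # Sensor dropout: estimate the terrain latent from proprioception
        # alone, as in blind locomotion (a stand-in for a learned estimator).
        z_e = encode(proprio, W_proprio)
    return W_policy @ np.concatenate([z_p, z_e])

proprio = rng.standard_normal(PROPRIO_DIM)
extero = rng.standard_normal(EXTERO_DIM)

a_full = act(proprio, extero, extero_valid=True)    # cameras/LiDAR healthy
a_blind = act(proprio, extero, extero_valid=False)  # exteroception dropped
```

The point of the sketch is the control flow, not the numbers: the policy keeps producing actions of the same shape whether or not exteroception is available, which is what lets a single controller degrade gracefully to blind locomotion.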
This performance was demonstrated experimentally: a robot equipped with DreamWaQ++ outperformed existing technologies across a range of challenging environments.
In stair locomotion experiments, it completed a course of 50 steps (30.03 m horizontally, 7.38 m vertically) in just 35 seconds, outperforming both blind locomotion controllers and commercial perception-based controllers.

< Locomotion controller trained with DreamWaQ++ >
In steep slope environments, it stably climbed a 35° incline, 3.5 times steeper than the 10° slopes seen in training, and actively adjusted its posture to reduce rear-leg motor torque by a factor of about 1.5 compared with existing methods.
In addition, across various obstacle scenarios it demonstrated learning-based perception by autonomously selecting more efficient paths without a separate path planner, and on uncertain drop-off terrain it exhibited "exploration behavior," voluntarily stopping to inspect the ground before moving on.
It also demonstrated high agility, clearing 41 cm obstacles, taller than the robot itself, even while carrying a 2.5 kg payload. In simulation, it handled obstacles up to 1.0 m with ANYmal-C (a representative quadrupedal robot developed at ETH Zurich) and up to 1.5 m with KAIST HOUND (a quadrupedal robot developed by Professor Hae-Won Park's group at KAIST).

< DreamWaQ++ training process >
Notably, although it was trained only on relatively low obstacles (27 cm), it achieved a success rate of about 80% on real stairs 42 cm high. This indicates that the robot is not simply replaying learned situations but can adapt to new environments on its own.
The research team expects that this technology can be applied in environments where conventional wheeled robots have difficulty accessing, such as disaster response, industrial facility inspection, forestry, and agriculture.

< Racing and experiment scenes >
Professor Hyun Myung said, "This research shows that robots have advanced beyond simply moving to a level where they understand the environment and make decisions on their own," adding, "We will further expand this into intelligent mobility technologies applicable in various real-world environments."
This study was led by I Made Aswin Nahrendra (first author, current researcher at Krafton, KAIST PhD graduate), with co-authors Byeongho Yu (EuRoboTics Co., Ltd. CEO), Minho Oh (EuRoboTics Co., Ltd. CTO), Dongkyu Lee (EuRoboTics Co., Ltd. CTO), Seunghyun Lee (KAIST), Hyeonwoo Lee (KAIST), and Dr. Hyungtae Lim (MIT postdoctoral researcher). The study was published in February in the world-renowned robotics journal IEEE Transactions on Robotics (T-RO).
※ Paper title: DreamWaQ++: Obstacle-Aware Quadrupedal Locomotion With Resilient Multi-Modal Reinforcement Learning (link to original paper: https://arxiv.org/abs/2409.19709)
※ Videos demonstrating DreamWaQ++ operation and locomotion
● Main video: https://youtu.be/DECFbMdpfps
● Additional video: https://youtu.be/Img5a_yKjMs
● Humanoid application video of improved DreamWaQ: https://youtu.be/Kt5PgEiOijQ?si=I4O0flDSOV8ccX3d, https://www.youtube.com/watch?v=sWQY6prcQXw
● Wheeled-legged robot application video of improved DreamWaQ: https://youtu.be/7ruz6u5IhUE
● Project page: https://dreamwaqpp.github.io
This research was supported by the Korea Evaluation Institute of Industrial Technology (KEIT) under the Ministry of Trade, Industry and Energy (Project No. 20018216, "Development and Field Deployment of Mobility Intelligence Software for Autonomous Locomotion of Walking Robots in Dynamic and Unstructured Environments"), and by the Korea Forest Service (Korea Forestry Promotion Institute) through the Forest Science and Technology R&D Program (Project No. RS-2025-25424472).