Enabling autonomous exploration

Carnegie Mellon University

A research group in Carnegie Mellon University's Robotics Institute is creating the next generation of explorers — robots.

The Autonomous Exploration Research Team has developed a suite of robotic systems and planners enabling robots to explore more quickly, probe the darkest corners of unknown environments, and create more accurate and detailed maps. The systems allow robots to do all this autonomously, finding their way and creating a map without human intervention.

"You can set it in any environment, like a department store or a residential building after a disaster, and off it goes," said Ji Zhang, a systems scientist in the Robotics Institute. "It builds the map in real-time, and while it explores, it figures out where it wants to go next. You can see everything on the map. You don't even have to step into the space. Just let the robots explore and map the environment."

The team has worked on exploration systems for more than three years. They've explored and mapped several underground mines, a parking garage, the Cohon University Center, and several other indoor and outdoor locations on the CMU campus. The system's computers and sensors can be attached to nearly any robotic platform, transforming it into a modern-day explorer. The group uses a modified motorized wheelchair and drones for much of its testing.

Using the group's systems, robots can explore in three modes. In the first, a person controls the robot's movements and direction while autonomous systems keep it from crashing into walls, ceilings or other objects. In the second, a person selects a point on a map and the robot navigates to that point. The third mode is pure exploration: the robot sets off on its own, investigates the entire space and creates a map.
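To make the three modes concrete, here is a minimal sketch of how such a mode-switching control loop might be organized. It is illustrative only: the mode names and the robot methods (clamp_for_safety, plan_to, next_exploration_goal) are hypothetical stand-ins, not the group's actual interface.

```python
from enum import Enum, auto

class ExploreMode(Enum):
    """Hypothetical operating modes mirroring the three described above."""
    ASSISTED_TELEOP = auto()  # human drives; autonomy only prevents collisions
    WAYPOINT = auto()         # human picks a map point; robot navigates to it
    AUTONOMOUS = auto()       # robot explores and maps the space on its own

def control_step(mode, robot, operator_cmd=None, goal=None):
    """One control-loop tick, dispatching on the selected mode.

    `robot` is a placeholder assumed to expose drive(), clamp_for_safety(),
    plan_to() and next_exploration_goal(); none of these names come from
    the group's published code.
    """
    if mode is ExploreMode.ASSISTED_TELEOP:
        # Pass the human's command through a collision-avoidance filter.
        robot.drive(robot.clamp_for_safety(operator_cmd))
    elif mode is ExploreMode.WAYPOINT:
        # Navigate autonomously to the operator-selected map point.
        robot.drive(robot.plan_to(goal))
    else:
        # Pure exploration: the planner chooses each goal itself.
        robot.drive(robot.plan_to(robot.next_exploration_goal()))
```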

"This is a very flexible system to use in many applications, from delivery to search-and-rescue," said Howie Choset, a professor in the Robotics Institute.

The group combined a 3D scanning lidar, a forward-looking camera and an inertial measurement unit (IMU) with an exploration algorithm so the robot knows where it is, where it has been and where it should go next. The resulting systems are substantially more efficient than previous approaches, producing more complete maps while cutting the algorithm's run time in half.
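For intuition, a classic way to answer "where should it go next" is frontier-based exploration: find free map cells that border unknown space and head for the nearest one. The sketch below illustrates that idea on a 2D occupancy grid. It is a simplified stand-in for intuition only; the group's published planners (such as TARE) use a hierarchical approach rather than this simple grid scan.

```python
import numpy as np

# Occupancy-grid cell states (values chosen for this example).
FREE, OCCUPIED, UNKNOWN = 0, 1, -1

def frontier_cells(grid):
    """Return free cells adjacent to unknown space in an occupancy grid."""
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != FREE:
                continue
            # A free cell bordering unknown space is a frontier candidate.
            neighbors = grid[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            if (neighbors == UNKNOWN).any():
                frontiers.append((r, c))
    return frontiers

def next_goal(grid, pose):
    """Pick the frontier nearest the robot's current (row, col) pose."""
    frontiers = frontier_cells(grid)
    if not frontiers:
        return None  # nothing left to explore; the map is complete
    return min(frontiers,
               key=lambda f: np.hypot(f[0] - pose[0], f[1] - pose[1]))
```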

The new systems work in low-light, treacherous conditions where communication is spotty, like caves, tunnels and abandoned structures. A version of the group's exploration system powered Team Explorer, an entry from CMU and Oregon State University in DARPA's Subterranean Challenge. Team Explorer placed fourth in the final competition but won the Most Sectors Explored Award for mapping more of the route than any other team.

"All of our work is open-sourced. We are not holding anything back. We want to strengthen society with the capabilities of building autonomous exploration robots," said Chao Cao, a Ph.D. student in robotics and the lead operator for Team Explorer. "It's a fundamental capability. Once you have it, you can do a lot more."

The group's most recent work appeared in Science Robotics, which published "Representation Granularity Enables Time-Efficient Autonomous Exploration in Large, Complex Worlds" online this week. Past work has received top awards at prestigious robotics conferences. "TARE: A Hierarchical Framework for Efficiently Exploring Complex 3D Environments" won the Best Paper and Best Systems Paper awards at the Robotics: Science and Systems (RSS) conference in 2021. It was the first time in the conference's history that a paper received both awards. "FAR Planner: Fast, Attemptable Route Planner Using Dynamic Visibility Update" won the Best Student Paper Award at the International Conference on Intelligent Robots and Systems in 2022.

More information is available on the group's website.
