New Study Paves Way for Large-Scale Geospatial Exploration

Image caption: Comparison of search pathways using visual active search (VAS) (left) and the most competitive state-of-the-art approach, greedy selection (right). The VAS framework developed by McKelvey engineers quickly learns to take advantage of visual similarities between regions.

When combating complex problems like illegal poaching and human trafficking, efficient yet broad geospatial search tools can provide critical assistance in finding and stopping such activity. A visual active search (VAS) framework for geospatial exploration, developed by researchers at the McKelvey School of Engineering at Washington University in St. Louis, uses a novel visual reasoning model and aerial imagery to learn how to search for objects more effectively.

The team, led by Yevgeniy Vorobeychik and Nathan Jacobs, professors of computer science and engineering, aims to shift computer vision, a field typically concerned with how computers learn from visual information, toward real-world applications and impact.

The team's approach to VAS builds on prior work by collaborator Roman Garnett, an associate professor of computer science and engineering at McKelvey Engineering. It marries active search, an area in which Garnett did pioneering research, with visual reasoning and relies on teamwork between humans and artificial intelligence.
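The release does not detail the researchers' model, but the general active search idea it describes can be illustrated with a toy sketch: inspect a region, observe the outcome, and let that feedback steer later queries toward visually similar regions, in contrast to a one-shot greedy ranking. The Python sketch below is a minimal, hypothetical illustration of that loop; the function names, the prototype-based scorer, and the synthetic data are assumptions for exposition, not the authors' implementation.

```python
import numpy as np


def toy_adaptive_search(features, labels, budget, rng):
    """Illustrative adaptive search over a grid of regions (not the paper's method).

    features : (n_regions, d) per-region image features
    labels   : (n_regions,) 0/1 ground truth, revealed only when a region is inspected
    budget   : number of regions the searcher may inspect
    """
    # Hypothetical scorer: a running "prototype" of features seen in confirmed targets.
    prototype = np.zeros(features.shape[1])
    unexplored = list(range(len(features)))
    found = []
    for _ in range(budget):
        # Score remaining regions by similarity to the prototype
        # (tiny random noise breaks ties before any target has been found).
        scores = features[unexplored] @ prototype + 1e-6 * rng.random(len(unexplored))
        query = unexplored.pop(int(np.argmax(scores)))
        outcome = labels[query]  # "inspect" the region and observe the result
        if outcome:
            found.append(query)
            # Adapt: pull the prototype toward confirmed targets so visually
            # similar regions rise to the top of the queue on later steps.
            prototype += features[query]
    return found


def toy_greedy_search(labels, budget, fixed_scores):
    """Non-adaptive baseline: rank all regions once, inspect the top-k, never update."""
    order = np.argsort(-fixed_scores)[:budget]
    return [int(i) for i in order if labels[i]]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic data: target regions cluster in feature space, mimicking visual similarity.
    n, d = 200, 16
    features = rng.normal(size=(n, d))
    labels = (features @ rng.normal(size=d) > 2.0).astype(int)
    print("adaptive hits:", len(toy_adaptive_search(features, labels, budget=30, rng=rng)))
    print("one-shot hits:", len(toy_greedy_search(labels, budget=30,
                                                  fixed_scores=rng.random(n))))
```

In this toy setting the adaptive loop tends to find more targets within the same budget because each confirmed find reshapes the ranking of the remaining regions, which is the intuition behind exploiting visual similarity that the figure above illustrates.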

First author Anindya Sarkar, a doctoral student in Vorobeychik's lab, presented the findings Jan. 6 at the Winter Conference on Applications of Computer Vision in Waikoloa, Hawaii.
