Drones, AI Struggle to Quickly Locate Flood Victims

For search and rescue, AI is not more accurate than humans, but it is far faster.

Authors

  • Robin R. Murphy

    Professor of Computer Science and Engineering, Texas A&M University

  • Thomas Manzini

    Ph.D. Student in Robotics, Texas A&M University

Recent successes in applying computer vision and machine learning to drone imagery for rapidly determining building and road damage after hurricanes or shifting wildfire lines suggest that artificial intelligence could be valuable in searching for missing persons after a flood.

Machine learning systems typically take less than one second to scan a high-resolution image from a drone, versus one to three minutes for a person. Plus, drones often produce more imagery than responders can possibly review in the critical first hours of a search, when survivors may still be alive.

Unfortunately, today's AI systems are not up to the task.

We are robotics researchers who study the use of drones in disasters. Our experiences searching for victims of flooding and numerous other events show that current implementations of AI fall short.

However, the technology can play a role in searching for flood victims. The key is AI-human collaboration.

AI's potential

Searching for flood victims is a type of wilderness search and rescue that presents unique challenges. The goal for machine learning scientists is to rank which images have signs of victims and indicate where in those images search-and-rescue personnel should focus. If the responder sees signs of a victim, they pass the GPS location in the image to search teams in the field to check.

The ranking is done by a classifier, which is an algorithm that learns to identify similar instances of objects - cats, cars, trees - from training data in order to recognize those objects in new images. For example, in a search-and-rescue context, a classifier would spot instances of human activity such as garbage or backpacks to pass to wilderness search-and-rescue teams, or even identify the missing person themselves.

A classifier is needed because of the sheer volume of imagery that drones can produce. For example, a single 20-minute flight can produce over 800 high-resolution images. If there are 10 flights - a small number - there would be over 8,000 images. If a responder spends only 10 seconds looking at each image, it would take over 22 hours of effort. Even if the task is divided among a group of "squinters," humans tend to miss areas of images and show cognitive fatigue.
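
To make that arithmetic concrete, here is a minimal sketch of the calculation, using the illustrative figures above rather than field measurements:

```python
# Back-of-the-envelope triage workload, using the figures from the text.
images_per_flight = 800
flights = 10
seconds_per_image = 10

total_images = images_per_flight * flights              # 8,000 images
total_hours = total_images * seconds_per_image / 3600   # seconds -> hours
print(f"{total_images} images -> {total_hours:.1f} hours of review")
# 8000 images -> 22.2 hours of review
```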

The ideal solution is an AI system that scans the entire image, prioritizes images that have the strongest signs of victims, and highlights the area of the image for a responder to inspect. It could also decide whether the location should be flagged for special attention by search-and-rescue crews.
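
As a rough illustration of what such a triage step might look like, the sketch below ranks a folder of drone images by the strongest "person" detection from an off-the-shelf, COCO-pretrained detector. The directory name and the choice of torchvision's Faster R-CNN are our assumptions for illustration only; as the next section explains, a generic detector like this would struggle badly on real flood imagery.

```python
# A minimal sketch of image triage with an off-the-shelf detector.
# torchvision's COCO-pretrained Faster R-CNN is a stand-in here; a real
# deployment would need a model trained on disaster imagery.
from pathlib import Path

import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
PERSON = 1  # COCO class index for "person"

def person_score(path: Path) -> float:
    """Return the strongest 'person' detection score for one image."""
    img = convert_image_dtype(read_image(str(path)), torch.float)
    with torch.no_grad():
        out = model([img])[0]
    scores = out["scores"][out["labels"] == PERSON]
    return scores.max().item() if len(scores) else 0.0

# Rank all drone images so responders see the likeliest candidates first.
paths = sorted(Path("drone_images").glob("*.jpg"))  # hypothetical folder
for p in sorted(paths, key=person_score, reverse=True)[:20]:
    print(p)
```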

Where AI falls short

While this seems to be a perfect opportunity for computer vision and machine learning, modern systems have a high error rate. If the system is programmed to overestimate the number of candidate locations in hopes of not missing any victims, it will likely produce too many false candidates. That would mean overloading squinters or, worse, the search-and-rescue teams, which would have to navigate through debris and muck to check the candidate locations.
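
The trade-off can be sketched with a simple workload estimate. All of the numbers below are invented for illustration; the point is how quickly false candidates consume field time:

```python
# Hypothetical cost of false alarms; all rates here are invented.
def wasted_hours(n_images: int, false_positives_per_image: float,
                 minutes_per_field_check: float) -> float:
    """Field-team hours spent checking candidate sites that hold no victim."""
    return n_images * false_positives_per_image * minutes_per_field_check / 60

# A threshold tuned to miss nothing might flag two false sites per image:
print(wasted_hours(8000, 2.0, 15))   # 4000.0 hours
# A stricter threshold flags far fewer, but risks missing real victims:
print(wasted_hours(8000, 0.05, 15))  # 100.0 hours
```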

Developing computer vision and machine learning systems for finding flood victims is difficult for three reasons.

One is that while existing computer vision systems are certainly capable of identifying people visible in aerial imagery, the visual indicators of a flood victim are often very different from those of a lost hiker or fugitive. Flood victims are often obscured, camouflaged, entangled in debris or submerged in water. These visual challenges increase the possibility that existing classifiers will miss victims.

Second, machine learning requires training data, but there are no datasets of aerial imagery where humans are tangled in debris, covered in mud and not in normal postures. This lack also increases the possibility of errors in classification.

Third, many of the drone images captured by searchers are oblique views rather than looking straight down. This means the GPS location of a candidate area is not the same as the GPS location of the drone. It is possible to compute the GPS location if the drone's altitude and camera angle are known, but unfortunately those attributes rarely are. The imprecise GPS location means teams have to spend extra time searching.
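
When the altitude and camera angle are recorded, the projection is straightforward trigonometry. The sketch below is a rough flat-terrain estimate with example values of our own choosing; a real pipeline would also have to account for lens geometry, gimbal roll and terrain elevation:

```python
import math

def ground_offset(altitude_m: float, camera_pitch_deg: float, heading_deg: float):
    """
    Flat-terrain estimate of where the image center falls on the ground,
    as a (north, east) offset in meters from the drone's GPS position.
    camera_pitch_deg is the angle below horizontal (90 = straight down).
    Ignores lens distortion, terrain relief and gimbal roll - a rough sketch.
    """
    distance = altitude_m / math.tan(math.radians(camera_pitch_deg))
    north = distance * math.cos(math.radians(heading_deg))
    east = distance * math.sin(math.radians(heading_deg))
    return north, east

# A drone at 60 m looking 40 degrees below horizontal, facing due east:
print(ground_offset(60, 40, 90))  # roughly (0.0, 71.5) meters east
```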

How AI can help

Fortunately, with humans and AI working together, search-and-rescue teams can successfully use existing systems to help narrow down and prioritize imagery for further inspection.

In the case of flooding, human remains may be tangled among vegetation and debris. Therefore, a system could identify clumps of debris big enough to contain remains. A common search strategy is to identify the GPS locations of where flotsam has gathered, because victims may be part of these same deposits.

An AI classifier could find debris commonly associated with remains, such as artificial colors and construction debris with straight lines or 90-degree corners. Responders find these signs as they systematically walk the riverbanks and flood plains, but a classifier could help prioritize areas in the first few hours and days, when there may be survivors, and later could confirm that teams didn't miss any areas of interest as they navigated the difficult landscape on foot.
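
One simple cue such a classifier could use is the prevalence of long straight edges, which are rare in natural river debris but common in lumber, roofing and other construction material. The sketch below, with a hypothetical file name, counts straight-line segments in an image tile using standard OpenCV routines:

```python
# Sketch of one debris cue: straight lines suggest man-made material.
import cv2
import numpy as np

img = cv2.imread("flood_tile.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical tile
edges = cv2.Canny(img, 50, 150)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                        threshold=80, minLineLength=40, maxLineGap=5)
n_lines = 0 if lines is None else len(lines)
print(f"{n_lines} strong straight-line segments detected")
# Tiles with many long straight segments would be ranked higher for review.
```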

The Conversation

Robin R. Murphy receives funding from the National Science Foundation. She is affiliated with the Center for Robot-Assisted Search and Rescue.

Thomas Manzini is affiliated with the Center for Robot Assisted Search & Rescue (CRASAR), and his work is funded by the National Science Foundation's AI Institute for Societal Decision Making (AI-SDM).
