Patricia DeLucia has spent decades studying something many of us never think about: the collision judgments that are crucial for safety. But the roots of her research stretch back to her childhood, long before she became a professor of psychological sciences at Rice University.
"I grew up playing sports, and when you're on the field, collision judgment is everything — whether a ball is coming at you, whether a player is about to run into you, whether you have time to move," she said. "I didn't know it then, but that experience shaped my entire research path."
That lifelong curiosity eventually led DeLucia to conduct research on collision judgments made by people with visual impairment, specifically those with age-related macular degeneration (AMD).
Published in PLOS ONE, the new study used a virtual reality system to examine how adults with and without AMD estimate when an approaching vehicle would reach them. The system, based on one developed by Daniel Oberfeld at Johannes Gutenberg University Mainz, paired visual simulations with realistic car sounds, allowing participants to experience an approaching vehicle through sight, sound or both before it disappeared. They then pressed a button to indicate when the vehicle would have reached their location. The project was carried out by a multidisciplinary team from multiple sites in the United States and Europe and was funded by a grant from the National Eye Institute at the National Institutes of Health.
"We wanted to understand whether people with impaired vision rely more heavily on sound and whether having both sight and sound provides an advantage compared to having vision alone," DeLucia said. "There are few studies that look specifically at collision judgments in people with visual impairments, even though tasks like crossing a street or navigating busy environments depend on this ability."
The researchers expected that even with impaired central vision, people with AMD would continue to rely at least partly on their remaining vision rather than depend solely on sound.
"Surprisingly, the people with AMD in both eyes performed very similarly to the people who had normal vision when estimating when the vehicle would reach them," DeLucia said. "They were able to achieve comparable performance but showed higher importance of the less reliably accurate cues."
Even when central vision was impaired, participants still relied on visual information and continued to use both modalities when available.
"People with impaired vision didn't use just auditory information. They used both vision and audition," she said.
When sight or sound was presented alone, both groups showed perceptual biases reported by DeLucia and Oberfeld in their earlier studies:
• Louder vehicles were judged to arrive sooner than quieter vehicles.
• Larger vehicles were judged to arrive sooner than smaller vehicles.
These "heuristic" shortcuts appeared slightly more often in the AMD group, which the researchers expected due to reduced access to detailed visual information. But the effect size was small.
"Thanks to our advanced audiovisual simulation system and customised data analysis, we gained an almost microscopic insight into how pedestrians use auditory and visual information to estimate the arrival time of an approaching vehicle," Oberfeld said. "This goes beyond what we knew from previous studies."
The team also predicted that combining sight and sound would improve accuracy, but it didn't.
"That multimodal advantage didn't happen with either group. Having both vision and the hearing was no better than just having vision," DeLucia said.
DeLucia emphasized that clinical measures like visual acuity do not always predict real-world functioning.
"There's not this one-to-one relationship between the severity of eye disease and visual acuity or daily function," she said. "For example, some may have severe retinal damage but still have reasonably good visual acuity and yet encounter deficits in daily tasks requiring vision."
That disconnect may help explain why some AMD participants performed nearly on par with the control group.
The team cautions that these results don't mean people with impaired vision should assume they can navigate traffic as safely as people with no impairment.
"This was just VR simulations of traffic scenes — a very simple scenario with one vehicle approaching on a single-lane road," DeLucia said. "We need to see if these results generalize to more complex situations, for example with multiple cars that accelerate or decelerate, and it would be interesting to include quieter electric vehicles."
Still, she said she hopes the findings will help steer future work on mobility, rehabilitation and safety.
"Ultimately, we want to understand how people with visual impairment make judgments about collisions that are crucial for safety, so we can enhance their mobility and independence," she said.
This work was supported by the National Eye Institute of the National Institutes of Health under award number R01EY030961. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.