Contrary to long-standing beliefs, the image motion created by our own eye movements helps the brain perceive depth, a finding that could enhance virtual reality.
When you go for a walk, how does your brain know the difference between a parked car and a moving car? This seemingly simple distinction is challenging because eye movements, such as the ones we make when watching a car pass by, make even stationary objects move across the retina, motion that has long been thought of as visual "noise" the brain must subtract out.
Now, researchers at the University of Rochester have discovered that, far from being meaningless interference, the image motion caused by eye movements helps us understand the world. The specific patterns of visual motion created by eye movements help the brain figure out how objects move and where they are located in 3D space.
"The conventional idea has been that the brain needs to somehow discount, or subtract off, the image motion that is produced by eye movements, as this motion has been thought to be a nuisance," says Greg DeAngelis, George Eastman Professor; professor in the Departments of Brain and Cognitive Sciences, Neuroscience, and Biomedical Engineering and the Center for Visual Science; member of the Del Monte Institute for Neuroscience; and lead author of the new research, published in Nature Communications. "But we found that the visual motion produced by our eye movements is not just a nuisance variable to be subtracted off; rather, our brains analyze these global patterns of image motion and use this to infer how our eyes have moved relative to the world."
The research team developed a new theoretical framework that predicts how people should perceive an object's motion and depth during different types of eye movements. They tested these predictions by having participants view 3D virtual environments in which a target object moved through a scene while they kept their eyes fixed on a single point. In one task, participants estimated the target's direction of motion by using a dial to match a second object's motion to it. In a second task, which measured depth perception, participants reported whether the target appeared nearer or farther than the fixation point. Across both tasks, the researchers found consistent, predictable patterns of errors that closely matched the theoretical predictions.
"We show that the brain considers many pieces of information to understand the 3D structure of the world through vision, including the patterns of image motion caused by eye movements," says DeAngelis. "Contrary to conventional ideas, the brain doesn't ignore or suppress image motion produced by eye movement. Instead, it uses this image motion to understand a scene and accurately estimate an object's motion and depth."
This research has important implications for understanding visual perception, which underlies everyday activities like reading and recognizing faces. But it could also provide insights, and suggest new applications, for visual technologies such as virtual reality headsets.
"VR headsets don't factor in how the eyes are moving relative to the scene when they compute the images to show to each eye. There may be a stark mismatch between the image motion that is shown to the observer in VR and what the brain is expecting to receive based on the eye movements that the observer is making," says DeAngelis. This could be what causes some people to experience motion sickness while using a VR headset.
Additional authors include first author Zhe-Xin Xu '25 (PhD), a former graduate student in the DeAngelis lab who is now a postdoctoral fellow at Harvard University; Jiayi Pang '25 (BS), who is now a graduate student at Brown University; and Akiyuki Anzai, a research associate at the University of Rochester. The National Institutes of Health supported this research.