NEW YORK, NY — Scientists at Columbia University's Zuckerman Institute have reported the first direct evidence from human studies that brain-controlled hearing technology can help people single out a voice in a crowd. These early findings suggest that researchers may one day develop a hearing augmentation device that can, among other feats, overcome the problems that conventional hearing aids have with noisy surroundings. Their research was published online today in Nature Neuroscience.
"We have developed a system that acts as a neural extension of the user, leveraging the brain's natural ability to filter through all the sounds in a complex environment to dynamically isolate the specific conversation they wish to hear," said senior author Nima Mesgarani, PhD, a principal investigator at Columbia's Zuckerman Institute and an associate professor of electrical engineering at Columbia's Fu Foundation School of Engineering and Applied Science.
"This science empowers us to think beyond traditional hearing aids, which simply amplify sound, toward a future where technology can restore the sophisticated, selective hearing of the human brain," Dr. Mesgarani added.
In the new study, Columbia researchers teamed up with surgeons and their epilepsy patients who were undergoing brain surgery to better pinpoint the sources of their seizures. The hospital patients, who volunteered to be part of this study, already had electrodes implanted in their brains. Dr. Mesgarani's system used the electrodes to measure the brain activity of the patients as they focused on one of two overlapping conversations played simultaneously. The system then automatically detected which conversation a patient was paying attention to and adjusted the volume in real time, turning up that conversation while quieting the other.
For one volunteer, the experience of controlling the system with her brain was literally unbelievable. She accused the researchers of secretly adjusting the volumes. Others told stories about friends and family with hearing impairments who could benefit from such a technology. One person said: "It seems like science fiction."
Modern hearing aids excel at amplifying speech while suppressing certain kinds of background noise, such as traffic. But they cannot separate and enhance particular voices of interest; they boost every voice coming into the microphone indiscriminately. This makes it difficult for people to concentrate on a specific talker amidst a jumble of voices.
A promising solution to this problem is a hearing device that could mimic the way in which the human brain can typically identify and focus on just one speaker in a crowd, a phenomenon sometimes called the cocktail party effect.
In 2012, Dr. Mesgarani and his colleagues discovered ways to identify which sets of brain signals are linked with specific conversations amidst crowds of speakers. For example, the timing of peaks and valleys of brain waves can match up with the sounds and silences within a conversation. They also found that a distinct pattern of brain activity can reveal which conversation a person is focusing on and which they are filtering out.
These discoveries could one day lead to real-world hearing assistance and augmentation devices that can monitor brain waves to detect and amplify the conversation a person is most interested in. Over hundreds of further studies in the past decade or so, Dr. Mesgarani and others have overcome a host of challenges in making this dream a reality, such as developing computer algorithms to automatically separate the voices of multiple speakers in a group and then compare each speaker's voice to the brain waves of a listener.
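The paper's algorithms are not reproduced here, but the comparison step described above — matching each separated voice against a listener's brain waves — resembles envelope-correlation approaches common in this research area. The sketch below is a hypothetical illustration, not the authors' implementation; the function name, signature, and use of Pearson correlation are assumptions:

```python
import numpy as np

def decode_attended_speaker(neural_envelope, speech_envelopes):
    """Guess which speaker a listener attends to, by envelope correlation.

    neural_envelope: 1-D array, an amplitude envelope estimated from brain
        recordings (how this is derived from electrodes is beyond this sketch)
    speech_envelopes: list of 1-D arrays, one amplitude envelope per
        automatically separated speaker, time-aligned with the neural signal

    Returns the index of the speaker whose speech envelope correlates
    most strongly with the neural envelope.
    """
    scores = [np.corrcoef(neural_envelope, env)[0, 1]
              for env in speech_envelopes]
    return int(np.argmax(scores))
```

In practice such a decoder would run on short sliding windows so the system can track shifts of attention, which is one reason speed and stability matter so much for real-time use.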
"The central unanswered question was whether brain-controlled hearing technology could move beyond incremental advances, towards a prototype that could help someone hear better in real time," said Vishal Choudhari, the paper's first author, who received his PhD in electrical engineering while in Dr. Mesgarani's lab and who led the development and evaluation of the system. "For the first time, we have shown that such a system that reads brain signals to selectively enhance conversations can provide a clear real-time benefit. This moves brain-controlled hearing from theory toward practical application."
The researchers partnered with physicians and patients who volunteered to be part of the study at the Hofstra Northwell School of Medicine and the Feinstein Institutes for Medical Research; the New York University School of Medicine; and the University of California San Francisco's Department of Neurological Surgery.
The scientists developed real-time machine-learning algorithms that could examine the brainwaves and identify which conversation the patients were paying attention to. Once deployed, their system could rapidly deduce which conversation each listener was paying attention to and make it easier for them to hear it. This happened both when the researchers guided the subjects toward a particular conversation, and when the subjects chose freely, as would be necessary in a real-world conversation.
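The real-time behavior described above — turning up the attended conversation while quieting the others — can be illustrated as a gain-remixing step applied to the separated speech streams. This is a minimal, hypothetical sketch, not the authors' system; the function name and the specific boost and cut levels are assumptions:

```python
import numpy as np

def apply_attention_gains(streams, attended_idx, boost_db=9.0, cut_db=-9.0):
    """Remix separated speech streams: boost the attended one, soften the rest.

    streams: list of 1-D float arrays (separated speaker signals, same length)
    attended_idx: index of the stream the decoder says the listener attends to
    boost_db / cut_db: illustrative gains in decibels

    Returns a single-channel mix for playback to the listener.
    """
    boost = 10 ** (boost_db / 20.0)  # convert dB to linear amplitude gain
    cut = 10 ** (cut_db / 20.0)
    out = np.zeros_like(streams[0])
    for i, stream in enumerate(streams):
        out += stream * (boost if i == attended_idx else cut)
    return out
```

A deployed system would smooth these gains over time so that shifts of attention do not produce abrupt, jarring volume jumps, consistent with the emphasis on a pleasant listening experience quoted below.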
"For this to work in real time, the system has to be very fast, accurate and stable for the experience to feel pleasant for the listener," Dr. Mesgarani said.
The scientists found that their new system correctly identified which conversation the volunteers were paying attention to. The assistance dramatically improved the intelligibility of the speech the volunteers focused on and reduced listening effort, and the volunteers consistently preferred assisted conversations over those the system left unassisted.
One volunteer recalled her uncle, who had hearing problems. "Can you imagine if this technology existed in a world [where] … he could access it? He might actually live a much more peaceful… life."
According to the World Health Organization, more than 430 million people worldwide live with disabling hearing loss, many of whom struggle most in noisy social environments. Untreated hearing loss is a leading modifiable risk factor for dementia, as well as a primary contributor to depression and social isolation. Scientists say this research lays the groundwork for future wearable systems that could one day integrate brain sensing with advanced audio processing. This would assist people with hearing loss, and could potentially augment hearing and reduce listening fatigue for anyone in everyday challenging environments such as restaurants, classrooms, busy workplaces and family gatherings.
The scientists note that a great deal of work is needed before this technology is available in a wearable, minimally invasive form that can handle more complicated real-world scenarios. For instance, they would like to see how well their system performs under more complex, real-world listening conditions, Dr. Mesgarani said.
"The results mark an important step toward a new generation of brain-controlled hearing technologies that align with the listener's intent, potentially transforming how people navigate noisy, multi-talker environments," Dr. Choudhari added.