Researchers at Xi'an Jiaotong-Liverpool University developing the cognitive hearing aid

Researchers at Xi'an Jiaotong-Liverpool University (XJTLU) in Suzhou, China, are investigating how technology can replicate the brain's natural ability to process speech. Their work has the potential to greatly improve hearing aids and has applications in other areas.

(Caption: Mouth-tracking methods developed by XJTLU researchers)

Dr Andrew Abel and his fellow researchers are moving towards what's being called a 'cognitive hearing aid' - one inspired by the brain. "Many conventional hearing aids work by amplifying certain frequencies that the user has trouble hearing," says Dr Abel.

"Some hearing aids have noise-cancelling algorithms, reducing the volume of frequencies not used in human speech, or directional microphones to detect sound only from specific directions. The next step is to design hearing aids that can use additional information, for example visual information provided by a camera, to improve the filtering."

Ultimately, Dr Abel and his colleagues hope to incorporate word recognition and prediction-based speech processing, as well as environment recognition and other visual information, into an improved hearing aid that thinks like we do.

"When we talk to each other, we don't just rely on sound," says Dr Abel, who is from the Department of Computer Science and Software Engineering at XJTLU. "We look at each other's faces, we look at each other's body language, and we all lip-read to an extent. So far, we've been unable to incorporate these things into hearing aid technology. That's ultimately what we're looking to change."

"When we can understand and replicate what actually happens when we listen, we will not only be able to improve hearing aids, but we will learn so much about ourselves, and how our minds work."

Dr Abel is currently focussed on the fundamentals of how visual input can be used effectively. The first step is processing images to isolate relevant information about lip movement.

A new system devised by Dr Abel, XJTLU graduate Chengxiang Gao, and researchers from the University of Stirling can track the movements of a subject's mouth, determining whether the mouth is open or closed, and the width and depth of the mouth when it is open (referred to as 'lip features').
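As an illustration of what such 'lip features' might look like computationally, the sketch below derives mouth width, opening height, and an open/closed flag from four two-dimensional lip landmark points. The point layout, the openness threshold and the example coordinates are hypothetical; the article does not describe the details of the XJTLU and Stirling system.

```python
# Illustrative only: compute simple lip features (width, opening, open/closed)
# from four 2-D lip landmarks, e.g. as produced by an off-the-shelf face
# landmark detector. This is not the researchers' actual pipeline.
import numpy as np

def lip_features(left_corner, right_corner, upper_inner, lower_inner,
                 open_threshold=0.05):
    """Return mouth width, opening height, and whether the mouth is open.

    Each argument is an (x, y) pair in image coordinates; openness is judged
    relative to mouth width using an assumed threshold.
    """
    left, right = np.asarray(left_corner), np.asarray(right_corner)
    upper, lower = np.asarray(upper_inner), np.asarray(lower_inner)

    width = float(np.linalg.norm(right - left))     # corner-to-corner distance
    opening = float(np.linalg.norm(lower - upper))  # inner-lip separation
    is_open = width > 0 and (opening / width) > open_threshold
    return {"width": width, "opening": opening, "is_open": is_open}

# Usage with made-up landmark coordinates (in pixels):
print(lip_features((120, 200), (180, 202), (150, 195), (150, 210)))
```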

"The information is used to build up three-dimensional representations of what is going on in the mouth," explains Dr Abel. "This 3D representation can be used to estimate volume and pitch characteristics of speech, which could then potentially be applied to the noise-reduction function of a hearing aid. They can also be used for lip reading, and this is also part of our research."

Xi'an Jiaotong-Liverpool University (XJTLU), a partnership between Xi'an Jiaotong University and the University of Liverpool, UK, is an international university that aims to combine the best practices of eastern and western education to produce creative thinkers and global citizens. All programmes are delivered in English.

Read the full story here: https://www.xjtlu.edu.cn/.../12/research-developing-the-cognitive-hearing-aid

Source: Xi'an Jiaotong-Liverpool University
