Personalised immersive audio will help virtual social interaction feel more real

Imperial College London researchers have won a €5.7 million EU Horizon 2020 grant to develop AI-informed immersive audio techniques.

The project, called SONICOM, will see researchers develop immersive 3D sound for virtual and augmented reality settings such as online meetings, lectures, and gaming.

Immersive audio creates sounds that are perceived as coming from different directions, emulating how we experience sound in the real world. Although similar technologies are used in settings like cinemas, virtual spaces such as video chat, gaming, and online doctor appointments lack them.

During the COVID-19 pandemic, when many people are working from home, emulating real-life scenarios more accurately could help restore the conversational nuances and social cues that are often lost in online communication.

SONICOM will use artificial intelligence (AI) to make online spaces more realistic – for example, by having colleagues sound as though they are speaking to your right or left, and possibly at different distances, as they would in a real meeting space.

Lead investigator Dr Lorenzo Picinali, of Imperial's Dyson School of Design Engineering, said: "Our current online interactions are very different to real-life scenarios: social cues and tone of voice can be lost, and all the sound comes from one direction. Our technology could help users have more real, more immersive experiences that convey the nuanced feelings and intentions of face-to-face conversations."

This is one of the first times AI will be used to personalise sound. Everyone perceives 3D, or spatial, sound differently thanks to the particular shape of their ears and head, and so high-quality simulations must be uniquely tailored to each individual.

The researchers will employ AI and a data-driven approach to link the physiological characteristics of the ear to the individual acoustic filters, known as head-related transfer functions (HRTFs), that allow people to hear where a sound is coming from. This will provide personalised audio spatialisation from only a small amount of data acquired from each user.
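To illustrate the basic principle (not SONICOM's actual pipeline, which the article does not detail): placing a sound at a given direction amounts to convolving it with the listener's pair of head-related impulse responses (HRIRs, the time-domain form of the HRTF) for that direction. A minimal Python sketch with placeholder HRIRs; in a real system these would be measured, or, as SONICOM proposes, predicted from the listener's ear and head geometry:

```python
# Minimal binaural spatialisation sketch. The HRIR arrays below are
# illustrative placeholders, not real measured filters.
import numpy as np
from scipy.signal import fftconvolve

def spatialise(mono, hrir_left, hrir_right):
    """Render a mono signal binaurally: convolve with one HRIR per ear."""
    left = fftconvolve(mono, hrir_left, mode="full")
    right = fftconvolve(mono, hrir_right, mode="full")
    return np.stack([left, right], axis=-1)  # shape (N, 2): left/right

fs = 48_000
mono = np.random.default_rng(0).standard_normal(fs)  # 1 s of noise as a stand-in source

# Placeholder HRIRs for a source roughly 90 degrees to the right: the
# right-ear response arrives earlier and louder (interaural time and level
# differences); real HRIRs would also encode the pinna's spectral filtering.
hrir_r = np.zeros(256); hrir_r[0] = 1.0
hrir_l = np.zeros(256); hrir_l[32] = 0.4   # ~0.7 ms later and attenuated

stereo = spatialise(mono, hrir_l, hrir_r)  # heard as coming from the right
```

Because every listener's HRTFs differ, swapping in filters tailored to the individual, rather than a generic set, is what makes the rendering convincing.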

They will also alter spatial sound rendering depending on context. Doctor and therapy appointments might sound 'closer up' for greater intimacy, whereas lectures and talks could sound farther away to emulate the real-world lecture experience.
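The article does not say how SONICOM will render distance, but one common approach (an assumption here, not the project's stated method) is to attenuate the direct sound with distance while keeping the simulated room reverberation roughly constant, so that distant sources have a lower direct-to-reverberant ratio. A hypothetical sketch:

```python
# Hypothetical distance rendering; all names and values are illustrative.
import numpy as np

def render_at_distance(direct, reverb, distance_m):
    """Mix direct and reverberant paths for a source at a given distance.

    The 1/distance gain on the direct path (inverse-distance law) lowers
    the direct-to-reverberant ratio as the source moves away, a cue
    listeners read as distance.
    """
    gain = 1.0 / max(distance_m, 1.0)  # clamp so nearby sources don't clip
    return gain * direct + reverb

# e.g. a therapy session rendered 'close up' vs. a lecture farther away:
speech = np.random.default_rng(1).standard_normal(48_000)
reverb = 0.1 * np.random.default_rng(2).standard_normal(48_000)  # stand-in tail
intimate = render_at_distance(speech, reverb, distance_m=1.0)
lecture = render_at_distance(speech, reverb, distance_m=10.0)
```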

Once the personalised immersive audio techniques have been developed, the researchers will explore, map and model how their use influences listeners' behaviour and physiology during social interactions.

Dr Picinali said: "Imagine a virtual meeting space where you see colleagues to your right, left, and across from you. We want to make this possible in audio form, using AI not only to improve and personalise sound, but also to track the reactions of the listeners and predict how these could influence the conversation."

The researchers hope to develop the techniques within three years, and will make them, along with the accompanying AI, available to researchers and companies who want to use them in new hardware.

The project includes researchers from seven research and academic institutions and three companies across six European countries: Imperial (UK), Sorbonne University (France), Austrian Academy of Sciences (Austria), University of Milan (Italy), National and Kapodistrian University of Athens (Greece), University of Malaga (Spain), University of Glasgow (UK), Dreamwaves (Austria), Reactify (UK), and USound (Austria).

Dr Picinali and the SONICOM consortium will be part of a cross-collaborative initiative with three other projects awarded under the same EU Horizon 2020 call known as 'Artificial Intelligence for extended social interaction.' The aim is to identify synergies within their projects and refine a joint vision to maximise the impact of each project's research in their respective emerging technological paradigms.

SONICOM builds on several previous Imperial research projects, such as the 3D Tune-In project, also funded by EU Horizon 2020 and coordinated by the Audio Experience Design team at Imperial. 3D Tune-In laid the groundwork for advanced techniques and tools for immersive audio in virtual reality, and SONICOM takes this work forward at a larger scale, engaging most of the major players in the field within Europe.
