Which Speaker Are You Listening to? Hearing Aid of the Future Uses Brainwaves to Find Out

Summary: Combining EEG and artificial intelligence, researchers can decode which direction a person is listening to directly from their brainwaves, without having to link them to the actual sounds.

Source: KU Leuven

In a noisy room with many speakers, hearing aids can suppress background noise, but they have difficulty isolating one voice – that of the person you’re talking to at a party, for instance. KU Leuven researchers have now addressed that issue with a technique that uses brainwaves to determine, within one second, whom you’re listening to.

Having a casual conversation at a cocktail party is a challenge for someone with a hearing aid, says Professor Tom Francart from the Department of Neurosciences at KU Leuven: “A hearing aid may select the loudest speaker in the room, for instance, but that is not necessarily the person you’re listening to. Alternatively, the system may take into account your viewing direction, but when you’re driving a car, you can’t look at the passenger sitting next to you.”

Researchers have been working on solutions that take into account what the listener wants. “An electroencephalogram (EEG) can measure brainwaves that develop in response to sounds. This technique allows us to determine which speaker someone wants to listen to. The system separates the sound signals produced by different speakers and links them to the brainwaves. The downside is that you have to allow for a delay of ten to twenty seconds to get it right with reasonable certainty.”
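To make the envelope-linking idea concrete, here is a minimal Python sketch of that stimulus-reconstruction approach, assuming a pre-trained linear “backward” decoder is already available; the names and shapes are illustrative, not the researchers’ actual implementation.

    import numpy as np

    def decode_attended_speaker(eeg, envelopes, decoder):
        """Sketch of stimulus-reconstruction auditory attention decoding.

        eeg:       (samples, channels) EEG segment
        envelopes: list of (samples,) speech envelopes, one per speaker
        decoder:   (channels,) pre-trained linear backward model (assumed)
        """
        # Reconstruct an estimate of the attended speech envelope from the EEG
        reconstruction = eeg @ decoder
        # Correlate the reconstruction with each speaker's actual envelope
        scores = [np.corrcoef(reconstruction, env)[0, 1] for env in envelopes]
        # The best-matching speaker is taken to be the attended one; reliable
        # correlations need long windows, hence the 10-20 second delay
        return int(np.argmax(scores))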

Artificial intelligence to speed up the process

A new technique makes it possible to step up the pace, Professor Alexander Bertrand from the Department of Electrical Engineering at KU Leuven continues: “Using artificial intelligence, we found that it is possible to directly decode the listening direction from the brainwaves alone, without having to link them to the actual sounds.”

“We trained our system to determine whether someone is listening to a speaker on their left or their right. Once the system has identified the direction, the acoustic camera redirects its aim, and the background noise is suppressed. On average, this can now be done in less than one second. That’s a big leap forward, as one second constitutes a realistic timespan to switch from one speaker to the other.”
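As a rough illustration of that control loop, the Python sketch below decodes a left/right decision once per second and re-aims the device; the sampling rate, classifier, and beamformer interface are hypothetical placeholders, not details from the study.

    FS = 128                # assumed EEG sampling rate in Hz
    WINDOW = FS             # ~1 s decision window, the timespan quoted above

    def steer(eeg_stream, classify_direction, beamformer):
        """Hypothetical loop: decode left/right attention once per second
        and re-aim the hearing aid's beamformer accordingly."""
        n_samples = eeg_stream.shape[1]     # eeg_stream: (channels, samples)
        for start in range(0, n_samples - WINDOW + 1, WINDOW):
            window = eeg_stream[:, start:start + WINDOW]
            direction = classify_direction(window)   # 'left' or 'right'
            beamformer.point(direction)              # suppress the other side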

From lab to real life

However, it will take at least another five years before we have smart hearing aids that work with brainwaves, Professor Francart continues. “To measure someone’s brainwaves in the lab, we make them wear a cap with electrodes. This method is obviously not feasible in real life. But research is already being done into hearing aids with built-in electrodes.”

The new technique will be further improved as well, PhD student Simon Geirnaert adds. “We’re already conducting further research, for instance into the problem of combining multiple speaker directions at once. The current system simply chooses between two directions. While first experiments show that we can expand this to other possible directions, we need to refine our artificial intelligence system by feeding it more brainwave data from users who are listening to speakers in those directions.”

About this neurotech and auditory neuroscience research news

Source: KU Leuven
Contact: Alexander Bertrand – KU Leuven

Original Research: Closed access.
“Fast EEG-based decoding of the directional focus of auditory attention using common spatial patterns” by Simon Geirnaert, Tom Francart, and Alexander Bertrand. IEEE Transactions on Biomedical Engineering.


Abstract

Fast EEG-based decoding of the directional focus of auditory attention using common spatial patterns

Objective: Noise reduction algorithms in current hearing devices lack information about the sound source a user attends to when multiple sources are present. To resolve this issue, they can be complemented with auditory attention decoding (AAD) algorithms, which decode the attention using electroencephalography (EEG) sensors. State-of-the-art AAD algorithms employ a stimulus reconstruction approach, in which the envelope of the attended source is reconstructed from the EEG and correlated with the envelopes of the individual sources. This approach, however, performs poorly on short signal segments, while longer segments yield impractically long detection delays when the user switches attention.

Methods: We propose decoding the directional focus of attention using filterbank common spatial pattern filters (FB-CSP) as an alternative AAD paradigm, which does not require access to the clean source envelopes.
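For the technically inclined, here is a minimal Python sketch of the CSP step for a single frequency band: spatial filters that maximize the variance ratio between “attend left” and “attend right” EEG trials, plus the log-variance features a simple classifier would use. In the full FB-CSP method this is repeated for every band of a filterbank; the shapes and helper names below are illustrative assumptions, not the authors’ code.

    import numpy as np
    from scipy.linalg import eigh

    def csp_filters(trials_left, trials_right, n_pairs=3):
        """CSP for one band: filters maximizing the variance ratio
        between the two attention classes (illustrative shapes)."""
        def mean_cov(trials):
            # Average trace-normalized covariance over (channels, samples) trials
            return np.mean([t @ t.T / np.trace(t @ t.T) for t in trials], axis=0)

        C_left, C_right = mean_cov(trials_left), mean_cov(trials_right)
        # Generalized eigenproblem: C_left w = lambda (C_left + C_right) w
        vals, vecs = eigh(C_left, C_left + C_right)
        # Filters at both ends of the eigenvalue spectrum are most discriminative
        order = np.argsort(vals)
        keep = np.r_[order[:n_pairs], order[-n_pairs:]]
        return vecs[:, keep]                # (channels, 2 * n_pairs)

    def log_var_features(trial, W):
        """Log-variance of the CSP-filtered trial: the classifier input."""
        return np.log(np.var(W.T @ trial, axis=1))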

Results: The proposed FB-CSP approach outperforms both the stimulus reconstruction approach on short signal segments and a convolutional neural network approach on the same task. We achieve a high accuracy (80% for 1 s windows and 70% for quasi-instantaneous decisions), which is sufficient to reach minimal expected switch durations below 4 s. We also demonstrate that the decoder can adapt to unlabeled data from an unseen subject and works with only a subset of EEG channels located around the ear, emulating a wearable EEG setup.

Conclusion: The proposed FB-CSP method provides fast and accurate decoding of the directional focus of auditory attention.

Significance: The high accuracy on very short data segments is a major step forward towards practical neuro-steered hearing devices.
