First Real-Time Brain-Controlled Hearing Device

Summary: Researchers provided the first direct evidence that brain-controlled technology can help listeners isolate a single voice in a crowded environment. The study demonstrates a system that acts as a “neural extension,” utilizing real-time brain signals to identify which speaker a person is focusing on and automatically amplifying that specific voice.

This breakthrough addresses the “cocktail party problem,” a major limitation of conventional hearing aids, which often struggle to distinguish between overlapping conversations in noisy settings.

Key Research Findings

  • The Brain-First Approach: Unlike traditional hearing aids that indiscriminately amplify all incoming sounds, this system leverages the brain’s natural ability to filter complex environments.
  • Real-Time Identification: Using machine-learning algorithms, the system monitors the timing of brain wave “peaks and valleys” to match them with the specific patterns of a conversation.
  • Direct Human Evidence: The study involved epilepsy patients with pre-implanted electrodes; the system correctly identified their focus and adjusted volumes instantly, significantly improving speech intelligibility and reducing “listening effort”.
  • Dynamic Flexibility: The technology functioned successfully both when subjects were guided to a speaker and when they chose a conversation freely, mirroring real-world social dynamics.
  • Practical Application: This research marks the transition of brain-controlled hearing from theoretical science to a functional prototype that provides immediate, real-time benefits.

Source: Columbia University

Scientists at Columbia University’s Zuckerman Institute have obtained the first direct evidence from human studies that brain-controlled hearing technology can help people single out a voice in a crowd.

These early findings suggest that researchers may one day develop a hearing augmentation device that can, among other feats, overcome the problems that conventional hearing aids have with noisy surroundings.

By monitoring the synchronization of brain waves with the rhythms of specific voices, this real-time system acts as a neural extension that amplifies a listener’s intended conversation while silencing competing background noise. Credit: Neuroscience News

Their research was published online today in Nature Neuroscience.

“We have developed a system that acts as a neural extension of the user, leveraging the brain’s natural ability to filter through all the sounds in a complex environment to dynamically isolate the specific conversation they wish to hear,” said senior author Nima Mesgarani, PhD, a principal investigator at Columbia’s Zuckerman Institute and an associate professor of electrical engineering at Columbia’s Fu Foundation School of Engineering and Applied Science. 

“This science empowers us to think beyond traditional hearing aids, which simply amplify sound, toward a future where technology can restore the sophisticated, selective hearing of the human brain,” Dr. Mesgarani added.

In the new study, Columbia researchers teamed up with surgeons and their epilepsy patients who were undergoing brain surgery to better pinpoint the sources of their seizures. The hospital patients, who volunteered to be part of this study, already had electrodes implanted in their brains.

Dr. Mesgarani’s system used the electrodes to measure the brain activity of the patients as they focused on one of two overlapping conversations played simultaneously. The system then automatically detected which conversation a patient was paying attention to and adjusted the volume in real time, turning up that conversation while quieting the other.

For one volunteer, the experience of controlling the system with her brain was literally unbelievable. She accused the researchers of secretly adjusting the volumes. Others told stories about friends and family with hearing impairments who could benefit from such a technology. One person said: “It seems like science fiction.”

Modern hearing aids excel at amplifying speech while suppressing certain kinds of background noise, such as traffic. But they cannot separate and enhance particular voices of interest; they boost every voice coming into the microphone indiscriminately. This makes it difficult for people to concentrate on a specific talker amidst a jumble of voices. 

A promising solution to this problem is a hearing device that could mimic the way in which the human brain can typically identify and focus on just one speaker in a crowd, a phenomenon sometimes called the cocktail party effect.

In 2012, Dr. Mesgarani and his colleagues discovered ways to identify which sets of brain signals are linked with specific conversations amidst crowds of speakers. For example, the timing of peaks and valleys of brain waves can match up with the sounds and silences within a conversation. They also found that a distinct pattern of brain activity can reveal which conversation a person was focusing on and which they were filtering out.
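The envelope-matching idea described above can be sketched in a few lines of Python. This is an illustrative toy, not the authors’ actual decoding model: it correlates a listener’s low-frequency neural signal with the amplitude envelope of each candidate speech stream and picks the best match.

```python
import numpy as np

def amplitude_envelope(audio, win=160):
    """Crude amplitude envelope: RMS over consecutive windows of `win` samples."""
    n = len(audio) // win
    frames = audio[: n * win].reshape(n, win)
    return np.sqrt((frames ** 2).mean(axis=1))

def decode_attended(neural, envelopes):
    """Return the index of the speech stream whose envelope correlates
    best with the neural signal (all inputs are equal-length 1-D arrays)."""
    scores = [np.corrcoef(neural, env)[0, 1] for env in envelopes]
    return int(np.argmax(scores))
```

In the real system the comparison is done by trained machine-learning models on intracranial recordings, but the core intuition is the same: attention makes the brain’s activity track the attended voice’s rhythm more closely than the ignored one’s.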

These discoveries could one day lead to real-world hearing assistance and augmentation devices that can monitor brain waves to detect and amplify the conversation a person is most interested in.

Over hundreds of subsequent studies in the past decade or so, Dr. Mesgarani and others have overcome a host of challenges in making this dream a reality, such as developing computer algorithms that automatically separate the voices of multiple speakers in a group and then compare each voice to the brain waves of a listener.

“The central unanswered question was whether brain-controlled hearing technology could move beyond incremental advances, towards a prototype that could help someone hear better in real time,” said Vishal Choudhari, the paper’s first author, who received his PhD in electrical engineering while in Dr. Mesgarani’s lab and who led the development and evaluation of the system.

“For the first time, we have shown that such a system that reads brain signals to selectively enhance conversations can provide a clear real-time benefit. This moves brain-controlled hearing from theory toward practical application.”

The researchers partnered with physicians and patients who volunteered to be part of the study at the Hofstra Northwell School of Medicine and the Feinstein Institutes for Medical Research; the New York University School of Medicine; and the University of California, San Francisco’s Department of Neurological Surgery.

The scientists developed real-time machine-learning algorithms that examine brain waves to identify which conversation a patient is paying attention to. Once deployed, the system rapidly deduced each listener’s focus and made that conversation easier to hear.
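Once the attended conversation is identified, the remixing step itself is conceptually simple. The sketch below assumes the separated speech streams and the decoded attention target are already available; the gain values are illustrative, not those used in the study.

```python
import numpy as np

def remix(streams, attended, boost_db=9.0, cut_db=-9.0):
    """Mix separated speech streams, boosting the attended stream
    and attenuating the others (gains given in decibels)."""
    out = np.zeros_like(streams[0], dtype=float)
    for i, stream in enumerate(streams):
        gain_db = boost_db if i == attended else cut_db
        out += (10.0 ** (gain_db / 20.0)) * stream  # dB -> linear gain
    return out
```

A practical system would also smooth gain changes over time so that shifts of attention do not produce abrupt, jarring volume jumps, which is part of why the authors emphasize speed, accuracy, and stability.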

This happened both when the researchers guided the subjects toward a particular conversation, and when the subjects chose freely, as would be necessary in a real-world conversation.

“For this to work in real time, the system has to be very fast, accurate and stable for the experience to feel pleasant for the listener,” Dr. Mesgarani said.

The scientists found that their new system correctly identified which conversation the volunteers were paying attention to. This dramatically improved the intelligibility of the speech the volunteers focused on, reduced listening effort, and led the volunteers to consistently prefer assisted listening over unassisted listening.

One volunteer recalled her uncle, who had hearing problems. “Can you imagine if this technology existed in a world [where] … he could access it? He might actually live a much more peaceful… life.”

According to the World Health Organization, more than 430 million people worldwide live with disabling hearing loss, many of whom struggle most in noisy social environments. Untreated hearing loss is a leading modifiable risk factor for dementia, as well as a primary contributor to depression and social isolation.

Scientists say this research lays the groundwork for future wearable systems that could one day integrate brain sensing with advanced audio processing. This would assist people with hearing loss and potentially augment hearing and reduce fatigue from listening for anyone in everyday challenging environments such as restaurants, classrooms, busy workplaces and family gatherings.

The scientists note that a great deal of work remains before this technology is available in a wearable, minimally invasive form that works in more complicated real-world scenarios. For instance, they would like to see how well the system performs in more complex real-world listening conditions, Dr. Mesgarani said.

“The results mark an important step toward a new generation of brain-controlled hearing technologies that align with the listener’s intent, potentially transforming how people navigate noisy, multi-talker environments,” Dr. Choudhari added.

The full list of authors includes Vishal Choudhari, Maximilian Nentwich, Sarah Johnson, Jose L. Herrero, Stephan Bickel, Ashesh D. Mehta, Daniel Friedman, Adeen Flinker, Edward F. Chang, and Nima Mesgarani.

Funding: This work was funded by grants from the Marie-Josee and Henry R. Kravis Foundation and the National Institutes of Health’s National Institute on Deafness and Other Communication Disorders.

Key Questions Answered:

Q: Will I need brain surgery to use this in the future?

A: While this study used surgical electrodes for precision, the ultimate goal is to develop wearable, minimally invasive systems that integrate brain sensing with audio processing for everyday use.

Q: How does the system “know” which person I’m listening to?

A: Every voice has a unique “rhythm” of sound and silence. Your brain activity syncs up with those specific rhythms when you focus. The system’s algorithms compare the brain waves of the listener to the separate voice streams it detects and “matches” them.

Q: Why is this considered a major health breakthrough?

A: Disabling hearing loss affects over 430 million people and is a primary risk factor for dementia, depression, and social isolation. Technology that makes social environments “peaceful” again could significantly reduce the cognitive fatigue and isolation associated with hearing impairment.

Editorial Notes:

  • This article was edited by a Neuroscience News editor.
  • Journal paper reviewed in full.
  • Additional context added by our staff.

About this auditory neuroscience and neurotech research news

Author: Nima Mesgarani, PhD
Source: Columbia University
Contact: Nima Mesgarani, PhD – Columbia University
Image: The image is credited to Neuroscience News

Original Research: Open access.
“Real-time brain-controlled selective hearing enhances speech perception in multi-talker environments” by Vishal Choudhari, Maximilian Nentwich, Sarah Johnson, Jose L. Herrero, Stephan Bickel, Ashesh D. Mehta, Daniel Friedman, Adeen Flinker, Edward F. Chang & Nima Mesgarani. Nature Neuroscience.
DOI: 10.1038/s41593-026-02281-5


Abstract

Real-time brain-controlled selective hearing enhances speech perception in multi-talker environments

Understanding speech in noisy environments is difficult for many people, and current hearing aids often fail because they amplify all sounds rather than the talker of interest.

Auditory attention decoding (AAD) offers a potential solution by using the listener’s brain signals to identify and enhance the attended speaker, but it has been unclear whether this can provide real-time perceptual benefits.

Here we used high-resolution intracranial electroencephalography in patients undergoing neurosurgical procedures to implement a closed-loop system that achieves the decoding fidelity necessary to dynamically amplify the attended talker.

Across multiple experiments, the system improved speech intelligibility, reduced listening effort and was consistently preferred by subjects. It also tracked both instructed and self-initiated attention shifts.

By providing direct evidence that a real-time, brain-controlled hearing system can enhance perception, this work establishes a key performance benchmark for future auditory brain–computer interfaces and advances AAD from a theoretical concept to a validated solution for personalized assistive hearing.
