Researchers Translate Brain Signals Directly Into Speech

Summary: Researchers have developed a new system that uses artificial intelligence to turn brain signals into recognizable speech. The breakthrough could help restore a voice to people with limited or no ability to speak.

Source: Zuckerman Institute.

In a scientific first, Columbia neuroengineers have created a system that translates thought into intelligible, recognizable speech. By monitoring someone’s brain activity, the technology can reconstruct the words a person hears with unprecedented clarity. This breakthrough, which harnesses the power of speech synthesizers and artificial intelligence, could lead to new ways for computers to communicate directly with the brain. It also lays the groundwork for helping people who cannot speak, such as those living with amyotrophic lateral sclerosis (ALS) or recovering from stroke, regain their ability to communicate with the outside world.

These findings were published today in Scientific Reports.

“Our voices help connect us to our friends, family and the world around us, which is why losing the power of one’s voice due to injury or disease is so devastating,” said Nima Mesgarani, PhD, the paper’s senior author and a principal investigator at Columbia University’s Mortimer B. Zuckerman Mind Brain Behavior Institute. “With today’s study, we have a potential way to restore that power. We’ve shown that, with the right technology, these people’s thoughts could be decoded and understood by any listener.”

Decades of research have shown that when people speak — or even imagine speaking — telltale patterns of activity appear in their brains. Distinct (but recognizable) patterns of signals also emerge when we listen to someone speak, or imagine listening. Experts, trying to record and decode these patterns, see a future in which thoughts need not remain hidden inside the brain — but instead could be translated into verbal speech at will.

But accomplishing this feat has proven challenging. Early efforts by Dr. Mesgarani and others to decode brain signals focused on simple computer models that analyzed spectrograms, which are visual representations of sound frequencies.
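
A spectrogram is computed directly from the audio waveform. As a minimal illustration — not the study’s code, and using Python’s SciPy library with made-up parameters — the sketch below shows the kind of time-frequency representation those early models tried to reconstruct from neural activity:

    import numpy as np
    from scipy.signal import spectrogram

    fs = 16000                                  # assumed sample rate in Hz
    t = np.arange(0, 1.0, 1 / fs)
    waveform = np.sin(2 * np.pi * 440 * t)      # stand-in for a recorded sentence

    # 25 ms windows with 10 ms hops are common choices for speech analysis
    freqs, times, sxx = spectrogram(waveform, fs=fs, nperseg=400, noverlap=240)
    log_spectrogram = np.log(sxx + 1e-10)       # log compression; roughly the target such models regress onto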

But because this approach failed to produce anything resembling intelligible speech, Dr. Mesgarani’s team turned instead to a vocoder, a computer algorithm that can synthesize speech after being trained on recordings of people talking.

“This is the same technology used by Amazon Echo and Apple Siri to give verbal responses to our questions,” said Dr. Mesgarani, who is also an associate professor of electrical engineering at Columbia’s Fu Foundation School of Engineering and Applied Science.

To teach the vocoder to interpret brain activity, Dr. Mesgarani teamed up with Ashesh Dinesh Mehta, MD, PhD, a neurosurgeon at Northwell Health Physician Partners Neuroscience Institute and co-author of today’s paper. Dr. Mehta treats epilepsy patients, some of whom must undergo regular surgeries.

“Working with Dr. Mehta, we asked epilepsy patients already undergoing brain surgery to listen to sentences spoken by different people, while we measured patterns of brain activity,” said Dr. Mesgarani. “These neural patterns trained the vocoder.”

Next, the researchers asked those same patients to listen to speakers reciting digits from 0 to 9, while recording brain signals that could then be run through the vocoder. The sound produced by the vocoder in response to those signals was analyzed and cleaned up by neural networks, a type of artificial intelligence that mimics the structure of neurons in the biological brain.
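
To give a concrete, hypothetical sense of this kind of decoder, the sketch below trains a small feedforward neural network to map frames of neural activity to speech-synthesizer parameters. It uses PyTorch with random stand-in data and invented dimensions; it is not the authors’ implementation, whose architecture and features are described in the paper.

    import torch
    import torch.nn as nn

    # Hypothetical dimensions: 128 electrode features per time frame in,
    # 32 synthesizer (or spectrogram) parameters per frame out.
    N_NEURAL, N_SPEECH = 128, 32

    decoder = nn.Sequential(              # small frame-by-frame regressor
        nn.Linear(N_NEURAL, 256),
        nn.ReLU(),
        nn.Linear(256, 256),
        nn.ReLU(),
        nn.Linear(256, N_SPEECH),
    )

    # Stand-in training data: neural activity frames and the matching speech parameters
    neural = torch.randn(10000, N_NEURAL)
    speech = torch.randn(10000, N_SPEECH)

    optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(100):                  # toy training loop
        optimizer.zero_grad()
        loss = loss_fn(decoder(neural), speech)
        loss.backward()
        optimizer.step()

    # At test time, decoded parameters would be handed to a vocoder to synthesize audio.
    predicted_params = decoder(neural[:1])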


The end result was a robotic-sounding voice reciting a sequence of numbers. To test the accuracy of the recording, Dr. Mesgarani and his team asked individuals to listen to the recording and report what they heard.

“We found that people could understand and repeat the sounds about 75% of the time, which is well above and beyond any previous attempts,” said Dr. Mesgarani. The improvement in intelligibility was especially evident when comparing the new recordings to the earlier, spectrogram-based attempts. “The sensitive vocoder and powerful neural networks represented the sounds the patients had originally listened to with surprising accuracy.”

Dr. Mesgarani and his team plan to test more complicated words and sentences next, and they want to run the same tests on brain signals emitted when a person speaks or imagines speaking. Ultimately, they hope their system could be part of an implant, similar to those worn by some epilepsy patients, that translates the wearer’s thoughts directly into words.

“In this scenario, if the wearer thinks ‘I need a glass of water,’ our system could take the brain signals generated by that thought, and turn them into synthesized, verbal speech,” said Dr. Mesgarani. “This would be a game changer. It would give anyone who has lost their ability to speak, whether through injury or disease, the renewed chance to connect to the world around them.”

About this neuroscience research article

Funding: This research was supported by the National Institutes of Health (DC014279), the Pew Charitable Trusts and the Pew Biomedical Scholars Program.

The authors report no financial or other conflicts of interest.

Source: Anne Holden – Zuckerman Institute
Publisher: Organized by NeuroscienceNews.com.
Image Source: NeuroscienceNews.com image is in the public domain.
Original Research: Open access research for “Towards reconstructing intelligible speech from the human auditory cortex” by Hassan Akbari, Bahar Khalighinejad, Jose L. Herrero, Ashesh D. Mehta & Nima Mesgarani in Scientific Reports. Published January 29, 2019.
doi:10.1038/s41598-018-37359-z



Abstract

Towards reconstructing intelligible speech from the human auditory cortex

Auditory stimulus reconstruction is a technique that finds the best approximation of the acoustic stimulus from the population of evoked neural activity. Reconstructing speech from the human auditory cortex creates the possibility of a speech neuroprosthetic to establish a direct communication with the brain and has been shown to be possible in both overt and covert conditions. However, the low quality of the reconstructed speech has severely limited the utility of this method for brain-computer interface (BCI) applications. To advance the state-of-the-art in speech neuroprosthesis, we combined the recent advances in deep learning with the latest innovations in speech synthesis technologies to reconstruct closed-set intelligible speech from the human auditory cortex. We investigated the dependence of reconstruction accuracy on linear and nonlinear (deep neural network) regression methods and the acoustic representation that is used as the target of reconstruction, including auditory spectrogram and speech synthesis parameters. In addition, we compared the reconstruction accuracy from low and high neural frequency ranges. Our results show that a deep neural network model that directly estimates the parameters of a speech synthesizer from all neural frequencies achieves the highest subjective and objective scores on a digit recognition task, improving the intelligibility by 65% over the baseline method which used linear regression to reconstruct the auditory spectrogram. These results demonstrate the efficacy of deep learning and speech synthesis algorithms for designing the next generation of speech BCI systems, which not only can restore communications for paralyzed patients but also have the potential to transform human-computer interaction technologies.
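
For orientation, the baseline method mentioned in the abstract — linear regression from neural responses onto the auditory spectrogram — can be sketched as follows. The data, feature dimensions, and ridge penalty here are placeholders, not values from the study.

    import numpy as np

    # Synthetic stand-ins: R = neural responses (frames x electrodes),
    # S = target auditory spectrogram (frames x frequency bands).
    rng = np.random.default_rng(0)
    R = rng.standard_normal((5000, 128))
    S = rng.standard_normal((5000, 32))

    # Ridge-regularized linear mapping W minimizing ||R W - S||^2 + lam * ||W||^2
    lam = 1.0
    W = np.linalg.solve(R.T @ R + lam * np.eye(R.shape[1]), R.T @ S)

    # Reconstruct the spectrogram from neural activity (in practice, from held-out test recordings)
    S_hat = R @ W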
