Scientists translate brain signals into speech sounds

Summary: A neural decoder uses kinematic and sound representations encoded in human cortical activity to synthesize audible speech. The technology could one day help people who are unable to speak to communicate freely.

Source: NIH/NINDS

Scientists used brain signals recorded from epilepsy patients to program a computer to mimic natural speech, an advance that could one day have a profound effect on the ability of certain patients to communicate. The study was supported by the National Institutes of Health's Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative.

“Speech is an amazing form of communication that has evolved over thousands of years to be very efficient,” said Edward F. Chang, M.D., professor of neurological surgery at the University of California, San Francisco (UCSF) and senior author of this study published in Nature. “Many of us take for granted how easy it is to speak, which is why losing that ability can be so devastating. It is our hope that this approach will be helpful to people whose muscles enabling audible speech are paralyzed.”

In this study, speech scientists and neurologists from UCSF recreated many vocal sounds with varying accuracy using brain signals recorded from epilepsy patients with normal speaking abilities. The patients were asked to speak full sentences, and the data obtained from these brain recordings were then used to drive computer-generated speech. Furthermore, simply miming the act of speaking provided enough information for the computer to recreate several of the same sounds.

The loss of the ability to speak can have devastating effects on patients whose facial, tongue and larynx muscles have been paralyzed due to stroke or other neurological conditions. Technology has helped these patients to communicate through devices that translate head or eye movements into speech. Because these systems involve the selection of individual letters or whole words to build sentences, the speed at which they can operate is very limited. Instead of recreating sounds based on individual letters or words, the goal of this project was to synthesize the specific sounds used in natural speech.

“Current technology limits users to, at best, 10 words per minute, while natural human speech occurs at roughly 150 words/minute,” said Gopala K. Anumanchipalli, Ph.D., a speech scientist at UCSF and first author of the study. “This discrepancy is what motivated us to test whether we could record speech directly from the human brain.”


The researchers took a two-step approach to solving this problem. First, by recording signals from patients’ brains while they were asked to speak or mime sentences, they built maps of how the brain directs the vocal tract, including the lips, tongue, jaw, and vocal cords, to make different sounds. Second, the researchers applied those maps to a computer program that produces synthetic speech.
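The published paper describes this pipeline as recurrent neural networks that first decode recorded cortical activity into representations of articulatory movement and then transform those representations into speech acoustics. The sketch below is a minimal illustration of that two-stage structure in PyTorch; the electrode count, feature dimensions, choice of GRU layers, and hidden sizes are assumptions made for the example, not the authors' published implementation, and the vocoder that turns acoustic features into a waveform is omitted.

```python
# Illustrative sketch of a two-stage neural decoder: cortical activity ->
# articulatory kinematics -> acoustic features. Layer types and sizes are
# assumptions for illustration, not the published architecture.
import torch
import torch.nn as nn

class BrainToArticulation(nn.Module):
    """Stage 1: map recorded cortical activity to vocal-tract kinematics."""
    def __init__(self, n_electrodes=256, n_articulatory=33, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(n_electrodes, hidden, num_layers=2,
                          batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_articulatory)

    def forward(self, ecog):               # ecog: (batch, time, electrodes)
        h, _ = self.rnn(ecog)
        return self.out(h)                 # (batch, time, articulatory dims)

class ArticulationToAcoustics(nn.Module):
    """Stage 2: map articulatory kinematics to acoustic features
    (which an external vocoder would turn into audible speech)."""
    def __init__(self, n_articulatory=33, n_acoustic=32, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(n_articulatory, hidden, num_layers=2,
                          batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_acoustic)

    def forward(self, kinematics):
        h, _ = self.rnn(kinematics)
        return self.out(h)

# Chain the two stages on a dummy 500-timestep recording.
stage1 = BrainToArticulation()
stage2 = ArticulationToAcoustics()
ecog = torch.randn(1, 500, 256)
acoustics = stage2(stage1(ecog))
print(acoustics.shape)                     # torch.Size([1, 500, 32])
```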

Volunteers were then asked to listen to the synthesized sentences and to transcribe what they heard. More than half the time, the listeners were able to correctly determine the sentences being spoken by the computer.
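One common way to score such listening tests is to compare each listener's transcript against the sentence the decoder was asked to produce, for example with a word error rate. The snippet below is a self-contained illustration of that comparison; the example sentences are invented, and the scoring procedure in the published study may differ in its details.

```python
# Word error rate (WER) between an intended sentence and a listener's
# transcript, computed as word-level Levenshtein distance divided by
# the reference length. The sentences here are invented examples.
def wer(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("the ship was torn apart on the rocks",
          "the ship was torn apart on the sharp rocks"))  # 0.125
```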

By breaking the problem of speech synthesis into two parts, the researchers appear to have made it easier to apply their findings to multiple individuals. In particular, the second step, which translates vocal tract maps into synthetic sounds, appears to be generalizable across patients.

“It is much more challenging to gather data from paralyzed patients, so being able to train part of our system using data from non-paralyzed individuals would be a significant advantage,” said Dr. Chang.
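In practice, that kind of transfer could look like keeping the shared second-stage model fixed and fitting only the brain-to-articulation stage on each new participant's recordings. The sketch below builds on the two classes defined in the earlier example to illustrate the idea; the optimizer, loss function, and dummy data are assumptions for the example, not the study's training procedure.

```python
# Continuing the two-stage sketch above: reuse a stage-2 model trained on
# other participants and fit only stage 1 for a new participant.
# Assumes BrainToArticulation and ArticulationToAcoustics are defined
# as in the earlier sketch.
import torch
import torch.nn as nn

stage1_new = BrainToArticulation()          # trained per participant
stage2_shared = ArticulationToAcoustics()   # pretrained, shared weights
stage2_shared.requires_grad_(False)         # freeze the shared stage

optimizer = torch.optim.Adam(stage1_new.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(ecog, target_acoustics):
    """One gradient step: only stage-1 parameters are updated."""
    optimizer.zero_grad()
    pred = stage2_shared(stage1_new(ecog))
    loss = loss_fn(pred, target_acoustics)
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch standing in for real recordings and acoustic targets.
loss = train_step(torch.randn(1, 500, 256), torch.randn(1, 500, 32))
```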

The researchers plan to design a clinical trial involving paralyzed, speech-impaired patients to determine how best to gather brain-signal data that can then be applied to the previously trained computer algorithm.

“This study combines state-of-the-art technologies and knowledge about how the brain produces speech to tackle an important challenge facing many patients,” said Jim Gnadt, Ph.D., program director at the NIH’s National Institute of Neurological Disorders and Stroke. “This is precisely the type of problem that the NIH BRAIN Initiative is set up to address: to use investigative human neuroscience to impact care and treatment in the clinic.”

Funding: This research was funded by the NIH BRAIN Initiative (DP2 OD008627 and U01 NS098971-01), the New York Stem Cell Foundation, the Howard Hughes Medical Institute, the McKnight Foundation, the Shurl and Kay Curci Foundation, and the William K. Bowes Foundation.

About this neuroscience research article

Source:
NIH/NINDS
Media Contacts:
Carl P. Wonders – NIH/NINDS
Image Source:
The image is in the public domain.

Original Research: Closed access.
“Speech synthesis from neural decoding of spoken sentences”
Gopala K. Anumanchipalli, Josh Chartier & Edward F. Chang. Nature 568, 493–498 (2019). doi:10.1038/s41586-019-1119-1

Abstract

Speech synthesis from neural decoding of spoken sentences

Technology that translates neural activity into speech would be transformative for people who are unable to communicate as a result of neurological impairments. Decoding speech from neural activity is challenging because speaking requires very precise and rapid multi-dimensional control of vocal tract articulators. Here we designed a neural decoder that explicitly leverages kinematic and sound representations encoded in human cortical activity to synthesize audible speech. Recurrent neural networks first decoded directly recorded cortical activity into representations of articulatory movement, and then transformed these representations into speech acoustics. In closed vocabulary tests, listeners could readily identify and transcribe speech synthesized from cortical activity. Intermediate articulatory dynamics enhanced performance even with limited data. Decoded articulatory representations were highly conserved across speakers, enabling a component of the decoder to be transferrable across participants. Furthermore, the decoder could synthesize speech when a participant silently mimed sentences. These findings advance the clinical viability of using speech neuroprosthetic technology to restore spoken communication.
