Unlocking the Secret of How the Brain Encodes Speech

Summary: Researchers report that the brain controls speech production in a manner similar to how it controls arm and hand movements. The findings could help build better speech decoders for brain-machine interfaces (BMIs), helping those who are unable to speak find their voice.

Source: Northwestern University.

People like the late Stephen Hawking can think about what they want to say, but are unable to speak because their muscles are paralyzed. In order to communicate, they can use devices that sense a person’s eye or cheek movements to spell out words one letter at a time. However, this process is slow and unnatural.

Scientists want to help these completely paralyzed, or “locked-in,” individuals communicate more intuitively by developing a brain machine interface to decode the commands the brain is sending to the tongue, palate, lips and larynx (articulators).

The person would simply try to say words, and the brain machine interface (BMI) would translate those attempts into speech.

New research from Northwestern Medicine and Weinberg College of Arts and Sciences has moved science closer to creating speech-brain machine interfaces by unlocking new information about how the brain encodes speech.

Scientists have discovered that the brain controls speech production in a similar manner to how it controls the production of arm and hand movements. To reach this conclusion, researchers recorded signals from two parts of the brain and decoded what those signals represented. They found the brain represents both the goals of what we are trying to say (speech sounds, like “pa” and “ba”) and the individual movements we use to achieve those goals (how we move our lips, palate, tongue and larynx). These two representations occur in two different parts of the brain.

“This can help us build better speech decoders for BMIs, which will move us closer to our goal of helping people who are locked in speak again,” said lead author Dr. Marc Slutzky, associate professor of neurology and of physiology at Northwestern University Feinberg School of Medicine and a Northwestern Medicine neurologist.

The study will be published Sept. 26 in the Journal of Neuroscience.

The discovery could also potentially help people with other speech disorders, such as apraxia of speech, which is seen in children as well as after stroke in adults. In speech apraxia, an individual has difficulty translating speech messages from the brain into spoken language.

How words are translated from your brain into speech

Speech is composed of individual sounds, called phonemes, that are produced by coordinated movements of the lips, tongue, palate and larynx. However, scientists didn’t know exactly how these movements, called articulatory gestures, are planned by the brain. In particular, it was not fully understood how the cerebral cortex controls speech production, and no evidence of gesture representation in the brain had been shown.
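To make the phoneme/gesture distinction concrete, the relationship can be pictured as a simple lookup: each speech sound maps to the coordinated articulator movements that produce it. The sketch below is illustrative only; the gesture labels are invented and far simpler than real articulatory-phonology inventories.

```python
# A hypothetical, simplified mapping from phonemes (speech sounds) to the
# articulatory gestures that produce them. These entries are illustrative
# only, not a real gesture inventory.
PHONEME_TO_GESTURES = {
    "pa": ["bilabial closure", "lip release", "tongue lowering"],
    "ba": ["bilabial closure", "vocal fold vibration", "lip release"],
}

for phoneme, gestures in PHONEME_TO_GESTURES.items():
    print(f"{phoneme}: {', '.join(gestures)}")
```

Note that “pa” and “ba” share most of their gestures and differ mainly in voicing, which is part of why the brain might plausibly encode gestures and phonemes separately.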

“We hypothesized speech motor areas of the brain would have a similar organization to arm motor areas of the brain,” Slutzky said. “The precentral cortex would represent movements (gestures) of the lips, tongue, palate and larynx, and the higher level cortical areas would represent the phonemes to a greater extent.”

That’s exactly what they found.


“We studied two parts of the brain that help to produce speech,” Slutzky said. “The precentral cortex represented gestures to a greater extent than phonemes. The inferior frontal cortex, which is a higher level speech area, represented both phonemes and gestures.”

Chatting up patients in brain surgery to decode their brain signals

Northwestern scientists recorded brain signals from the cortical surface using electrodes placed in patients undergoing brain surgery to remove brain tumors. The patients had to be awake during their surgery, so researchers asked them to read words from a screen.

After the surgery, scientists marked the times when the patients produced phonemes and gestures. Then they used the recorded brain signals from each cortical area to decode which phonemes and gestures had been produced, and measured the decoding accuracy. Decoding from the precentral cortex was more accurate for gestures than for phonemes, while decoding from the inferior frontal cortex was equally accurate for both. This information helped support linguistic models of speech production. It will also help guide engineers in designing brain machine interfaces to decode speech from these brain areas.
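As a rough illustration of the decoding comparison described above (not the study’s actual method or data), the sketch below simulates cortical feature vectors whose class structure is stronger or weaker, then compares nearest-centroid decoding accuracy. The classifier choice, trial counts and separation values are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trials(n_trials, n_channels, n_classes, separation):
    """Simulate per-trial cortical feature vectors whose class separability
    is controlled by `separation` (a stand-in for how strongly a cortical
    area represents a given speech unit)."""
    labels = rng.integers(0, n_classes, n_trials)
    centers = rng.normal(0, 1, (n_classes, n_channels))
    features = centers[labels] * separation + rng.normal(0, 1, (n_trials, n_channels))
    return features, labels

def nearest_centroid_accuracy(features, labels):
    """Fit class centroids on the first half of the trials and report
    classification accuracy on the held-out second half."""
    half = len(labels) // 2
    train_X, train_y = features[:half], labels[:half]
    test_X, test_y = features[half:], labels[half:]
    classes = np.unique(train_y)
    centroids = np.array([train_X[train_y == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(test_X[:, None, :] - centroids[None, :, :], axis=2)
    preds = classes[np.argmin(dists, axis=1)]
    return float((preds == test_y).mean())

# Toy stand-in for the study's comparison: in "precentral-like" data the
# gesture signal is strong; the "phoneme-like" signal is much weaker.
X_g, y_g = simulate_trials(400, 32, 4, separation=2.0)   # strong gesture coding
X_p, y_p = simulate_trials(400, 32, 4, separation=0.15)  # weak phoneme coding

gesture_acc = nearest_centroid_accuracy(X_g, y_g)
phoneme_acc = nearest_centroid_accuracy(X_p, y_p)
print("gesture decoding accuracy:", gesture_acc)
print("phoneme decoding accuracy:", phoneme_acc)
```

The point of the sketch is only the shape of the analysis: decode each unit type from the same recordings, then compare held-out accuracies to infer which representation dominates in each area.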

The next step for the research is to develop an algorithm for brain machine interfaces that would not only decode gestures but also combine those decoded gestures to form words.
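One hypothetical way such an algorithm could assemble words is to match a decoded gesture sequence against a small gesture lexicon, picking the word whose canonical gesture sequence is closest in edit distance. Everything below, the lexicon entries and gesture labels included, is an invented sketch, not the planned algorithm.

```python
# Hypothetical word lexicon: each word maps to the gesture sequence that
# produces it. Labels are invented for illustration.
LEXICON = {
    "pat": ["bilabial closure", "lip release", "tongue tip closure"],
    "bat": ["bilabial closure", "voicing", "lip release", "tongue tip closure"],
}

def edit_distance(a, b):
    """Levenshtein distance between two gesture sequences, so that noisy
    or dropped gestures still allow an approximate match."""
    dp = list(range(len(b) + 1))
    for i, ga in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, gb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ga != gb))
    return dp[-1]

def decode_word(decoded_gestures):
    """Return the lexicon word whose gesture sequence best matches."""
    return min(LEXICON, key=lambda w: edit_distance(LEXICON[w], decoded_gestures))

decoded = ["bilabial closure", "lip release", "tongue tip closure"]
print(decode_word(decoded))  # closest match: "pat"
```

A real BMI would likely combine such sequence matching with a probabilistic language model rather than a bare lexicon lookup.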

About this neuroscience research article

Funding: This work was supported in part by the Doris Duke Charitable Foundation, Northwestern Memorial Foundation Dixon Translational Research Award (including partial funding from National Center for Advancing Translational Sciences, UL1TR000150 and UL1TR001422), NIH grants F32DC015708 and R01NS094748 and National Science Foundation 1321015.

This was an interdisciplinary, cross-campus investigation; authors included a neurosurgeon, a neurologist, a computer scientist, a linguist, and biomedical engineers. In addition to Slutzky, Northwestern authors are Emily M. Mugler, Matthew C. Tate (neurological surgery), Jessica W. Templer (neurology) and Matthew A. Goldrick (linguistics).

Source: Marla Paul – Northwestern University
Original Research: Abstract for “Differential Representation of Articulatory Gestures and Phonemes in Precentral and Inferior Frontal Gyri” by Emily M. Mugler, Matthew C. Tate, Karen Livescu, Jessica W. Templer, Matthew A. Goldrick and Marc W. Slutzky in Journal of Neuroscience. Published September 26, 2018.
doi:10.1523/JNEUROSCI.1206-18.2018



Abstract

Differential Representation of Articulatory Gestures and Phonemes in Precentral and Inferior Frontal Gyri

Speech is a critical form of human communication and is central to our daily lives. Yet, despite decades of study, an understanding of the fundamental neural control of speech production remains incomplete. Current theories model speech production as a hierarchy from sentences and phrases down to words, syllables, speech sounds (phonemes) and the actions of vocal tract articulators used to produce speech sounds (articulatory gestures). Here, we investigate the cortical representation of articulatory gestures and phonemes in ventral precentral and inferior frontal gyri in men and women. Our results indicate that ventral precentral cortex represents gestures to a greater extent than phonemes, while inferior frontal cortex represents both gestures and phonemes. These findings suggest that speech production shares a common cortical representation with that of other types of movement, such as arm and hand movements. This has important implications both for our understanding of speech production and for the design of brain machine interfaces to restore communication to people who cannot speak.

SIGNIFICANCE STATEMENT

Despite being studied for decades, the production of speech by the brain is not fully understood. In particular, the most elemental parts of speech, speech sounds (phonemes) and the movements of vocal tract articulators used to produce these sounds (articulatory gestures) have both been hypothesized to be encoded in motor cortex. Using direct cortical recordings, we found evidence that primary motor and premotor cortices represent gestures to a greater extent than phonemes. Inferior frontal cortex (part of Broca’s area) appears to represent both gestures and phonemes. These findings suggest that speech production shares a similar cortical organizational structure with movement of other body parts.
