Neural Implant Translates Brainwaves Into Words

Summary: A new speech prosthetic offers hope for those with speech-impairing neurological disorders.

By converting brain signals into speech with high-density sensors and machine learning, the technology represents a significant advance over current, much slower communication aids.

Though still in early stages, the device achieved 40% accuracy in decoding speech sounds from brain activity during limited trials, and a cordless version is now in development.

Key Facts:

  1. The device uses 256 brain sensors on a postage stamp-sized piece of flexible plastic to read brain activity related to speech and translate it into actual words.
  2. Initial tests with patients undergoing unrelated brain surgery achieved 40% accuracy in decoding speech sounds from brain activity.
  3. A future cordless version of the device is being developed, offering more freedom for users.

Source: Duke University

A speech prosthetic developed by a collaborative team of Duke neuroscientists, neurosurgeons, and engineers can translate a person’s brain signals into what they’re trying to say.

Appearing Nov. 6 in the journal Nature Communications, the new technology might one day help people unable to talk due to neurological disorders regain the ability to communicate through a brain-computer interface.

“There are many patients who suffer from debilitating motor disorders, like ALS (amyotrophic lateral sclerosis) or locked-in syndrome, that can impair their ability to speak,” said Gregory Cogan, Ph.D., a professor of neurology at Duke University’s School of Medicine and one of the lead researchers involved in the project.

“But the current tools available to allow them to communicate are generally very slow and cumbersome.”

Imagine listening to an audiobook at half-speed. That’s the best speech decoding rate currently available, which clocks in at about 78 words per minute. People, however, speak around 150 words per minute.

The lag between spoken and decoded speech rates is partly due to the relatively few brain activity sensors that can be fused onto a paper-thin piece of material that lies atop the surface of the brain. Fewer sensors provide less decipherable information to decode.

To improve on past limitations, Cogan teamed up with fellow Duke Institute for Brain Sciences faculty member Jonathan Viventi, Ph.D., whose biomedical engineering lab specializes in making high-density, ultra-thin, and flexible brain sensors.

For this project, Viventi and his team packed an impressive 256 microscopic brain sensors onto a postage stamp-sized piece of flexible, medical-grade plastic. Neurons just a grain of sand apart can have wildly different activity patterns when coordinating speech, so it’s necessary to distinguish signals from neighboring brain cells to help make accurate predictions about intended speech.
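
For a rough sense of what that density means in practice, here is a back-of-envelope sketch of the electrode spacing such an array implies. The grid layout and dimensions below are illustrative assumptions, not the published specifications of the Duke device.

```python
# Back-of-envelope sketch (not from the paper): what 256 channels on a
# postage stamp-sized array implies for electrode spacing.
# Assumptions: a 16 x 16 grid and a ~20 mm x 20 mm active area are hypothetical
# stand-ins; the actual Duke array geometry may differ.

GRID_ROWS, GRID_COLS = 16, 16                  # 256 channels in a square grid (assumed)
ARRAY_WIDTH_MM, ARRAY_HEIGHT_MM = 20.0, 20.0   # roughly postage stamp-sized (assumed)

pitch_x = ARRAY_WIDTH_MM / GRID_COLS           # center-to-center spacing, one axis
pitch_y = ARRAY_HEIGHT_MM / GRID_ROWS

print(f"Approximate electrode pitch: {pitch_x:.2f} mm x {pitch_y:.2f} mm")
# -> Approximate electrode pitch: 1.25 mm x 1.25 mm
# Millimeter-or-finer spacing is what lets neighboring contacts pick up distinct
# activity from nearby patches of speech motor cortex instead of blurring them together.
```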

After fabricating the new implant, Cogan and Viventi teamed up with several Duke University Hospital neurosurgeons, including Derek Southwell, M.D., Ph.D., Nandan Lad, M.D., Ph.D., and Allan Friedman, M.D., who helped recruit four patients to test the implants.

The experiment required the researchers to place the device temporarily in patients who were undergoing brain surgery for some other condition, such as treating Parkinson’s disease or having a tumor removed. Time was limited for Cogan and his team to test-drive their device in the OR.

“I like to compare it to a NASCAR pit crew,” Cogan said. “We don’t want to add any extra time to the operating procedure, so we had to be in and out within 15 minutes. As soon as the surgeon and the medical team said ‘Go!’ we rushed into action and the patient performed the task.”

The task was a simple listen-and-repeat activity. Participants heard a series of nonsense words, like “ava,” “kug,” or “vip,” and then spoke each one aloud. The device recorded activity from each patient’s speech motor cortex as it coordinated nearly 100 muscles that move the lips, tongue, jaw, and larynx.

Afterwards, Suseendrakumar Duraivel, the first author of the new report and a biomedical engineering graduate student at Duke, took the neural and speech data from the surgery suite and fed it into a machine learning algorithm to see how accurately it could predict what sound was being made, based only on the brain activity recordings.
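
The sketch below shows the general shape of that decoding step: train a classifier to map each trial’s neural features to the speech sound that was produced. It uses synthetic data and an off-the-shelf model as stand-ins, so it illustrates the idea rather than reproducing the team’s actual pipeline.

```python
# Illustrative sketch only, not the authors' pipeline: a toy classifier that
# predicts which speech sound was produced from per-trial neural features
# (e.g., high-gamma power on each of the 256 sensors). All data here are synthetic.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

N_TRIALS, N_CHANNELS = 200, 256              # trials x sensors (synthetic)
PHONEMES = ["a", "k", "g", "v", "p", "b"]    # hypothetical label set

X = rng.normal(size=(N_TRIALS, N_CHANNELS))  # stand-in neural features
y = rng.choice(PHONEMES, size=N_TRIALS)      # stand-in phoneme labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

accuracy = accuracy_score(y_test, clf.predict(X_test))
print(f"Decoding accuracy on held-out trials: {accuracy:.0%}")
# With random features this hovers near chance (~17% for six classes); the point of
# the high-density array is to supply features informative enough to beat chance.
```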

For some sounds and participants, like /g/ in the word “gak,” the decoder got it right 84% of the time when it was the first sound in a string of three that made up a given nonsense word.

Accuracy dropped, though, as the decoder parsed out sounds in the middle or at the end of a nonsense word. It also struggled if two sounds were similar, like /p/ and /b/.
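
That kind of breakdown can be computed by scoring predictions separately by position within each nonsense word and tallying which sounds get confused with one another. The snippet below is a minimal illustration with made-up labels; the variables true_sounds, predicted_sounds, and positions are hypothetical outputs of a decoding pipeline, not the study’s data.

```python
# Minimal sketch (not the study's analysis code): accuracy by position within a
# three-sound nonsense word, plus a confusion matrix to spot mix-ups between
# similar sounds such as /p/ and /b/. The arrays below are made-up examples.

import numpy as np
from sklearn.metrics import confusion_matrix

true_sounds      = np.array(["g", "a", "k", "p", "i", "b", "b", "a", "p"])
predicted_sounds = np.array(["g", "a", "g", "b", "i", "b", "p", "a", "p"])
positions        = np.array([ 1,   2,   3,   1,   2,   3,   1,   2,   3 ])  # 1 = first sound

for pos in (1, 2, 3):
    mask = positions == pos
    acc = np.mean(true_sounds[mask] == predicted_sounds[mask])
    print(f"Position {pos}: {acc:.0%} correct")

labels = sorted(set(true_sounds) | set(predicted_sounds))
print(confusion_matrix(true_sounds, predicted_sounds, labels=labels))
# Off-diagonal counts in the rows/columns for similar phonemes (e.g., p vs. b)
# show exactly the kind of confusions described above.
```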

Overall, the decoder was accurate 40% of the time. That may seem like a humble test score, but it was quite impressive given that similar brain-to-speech technical feats require hours’ or days’ worth of data to draw from. The speech decoding algorithm Duraivel used, however, was working with only 90 seconds of spoken data from the 15-minute test.

Duraivel and his mentors are excited about making a cordless version of the device with a recent $2.4M grant from the National Institutes of Health.

“We’re now developing the same kind of recording devices, but without any wires,” Cogan said. “You’d be able to move around, and you wouldn’t have to be tied to an electrical outlet, which is really exciting.”

While their work is encouraging, there’s still a long way to go before Viventi and Cogan’s speech prosthetic hits the shelves.

“We’re at the point where it’s still much slower than natural speech,” Viventi said in a recent Duke Magazine piece about the technology, “but you can see the trajectory where you might be able to get there.”

Funding: This work was supported by grants from the National Institutes of Health (R01DC019498, UL1TR002553), Department of Defense (W81XWH-21-0538), Klingenstein-Simons Foundation, and an Incubator Award from the Duke Institute for Brain Sciences.

About this neurotech research news

Author: Dan Vahaba
Source: Duke University
Contact: Dan Vahaba – Duke University
Image: The image is credited to Neuroscience News

Original Research: Open access.
“High-resolution Neural Recordings Improve the Accuracy of Speech Decoding” by Gregory B. Cogan et al. Nature Communications


Abstract

High-resolution Neural Recordings Improve the Accuracy of Speech Decoding

Patients suffering from debilitating neurodegenerative diseases often lose the ability to communicate, detrimentally affecting their quality of life. One solution to restore communication is to decode signals directly from the brain to enable neural speech prostheses.

However, decoding has been limited by coarse neural recordings which inadequately capture the rich spatio-temporal structure of human brain signals.

To resolve this limitation, we performed high-resolution, micro-electrocorticographic (µECoG) neural recordings during intra-operative speech production.

We obtained neural signals with 57× higher spatial resolution and 48% higher signal-to-noise ratio compared to macro-ECoG and SEEG. This increased signal quality improved decoding by 35% compared to standard intracranial signals.

Accurate decoding was dependent on the high-spatial resolution of the neural interface. Non-linear decoding models designed to utilize enhanced spatio-temporal neural information produced better results than linear techniques.

We show that high-density µECoG can enable high-quality speech decoding for future neural speech prostheses.
