Similar brain glitch found in slips of signing and speaking

Summary: An EEG study reveals that both hearing people and deaf sign language users share a common neural mechanism when it comes to language errors.

Source: San Diego State University

When we speak, we give little thought to how the words form in our brain before we say them. It’s similar for deaf people using sign language.

Speaking and signing come naturally, except when we stumble over words or swap one word for another because we are speaking or signing too quickly, are tired, or are preoccupied.

Fluency, and the occasional disfluency, both arise from how we choose what to say or sign: a neural mechanism in our brains is at work as we make those decisions and monitor how we communicate.

It’s this mechanism that fascinates San Diego State University researchers Stephanie Ries and Karen Emmorey in the School of Speech, Language and Hearing Sciences. Their analysis could help inform rehabilitation therapy for those relearning how to speak or sign after a stroke.

Using electroencephalogram (EEG) recordings, they studied how hearing and deaf signers process the act of signing and found that the same monitoring mechanism was at work in the brains of both groups. Among deaf signers, it was most pronounced in those for whom American Sign Language (ASL) is their first language.

“When we are doing an action, whether it’s speaking, signing, pressing buttons or typing, we see the same mechanism,” Ries said. “Any time we are making a decision to do something, this neural mechanism comes into play.”

Their study, published by MIT Press in the Journal of Cognitive Neuroscience on April 30, may advance our understanding of how deaf individuals recover the ability to sign after a traumatic brain injury or stroke leaves them with aphasia: the impaired ability to understand others or express oneself due to brain damage.

“When stroke victims are more aware of their speech errors and have a better functioning speech monitoring mechanism, they have a better chance of recovering than those who don’t have that awareness,” Ries said. “This study helped us extend that understanding to signing ability for deaf people.”

Melding speech with sign language expertise

The work also represents a long-held dream to combine the skills and training of two researchers with niche expertise in complementary fields – speech monitoring and sign monitoring.

Ries is an assistant professor specializing in the neuroscience of speech and language disorders. She first met Emmorey in 2007 at a workshop on language production, when Ries was a Ph.D. student in Marseille. Emmorey, a distinguished professor, sign language expert and director of the Laboratory for Language and Cognitive Neuroscience at SDSU, presented a study on sign monitoring that sparked an abiding interest in Ries, who hoped to work with her. When the two crossed paths at another conference five years ago, Emmorey urged Ries to apply for an assistant professorship at SDSU, and they eventually began working together.

“I’ve always been interested in what inner signing would be like, and if it’s similar to inner speech,” said Emmorey, the study’s senior author. “It’s an internal process. When you speak, you can hear yourself. But if you’re signing, are you seeing yourself like in a mirror, or is it a mental image of you signing, or a motor representation so you can feel how you sign?”

These were the underlying aspects of signing no one quite understood, and it has long been Emmorey’s goal to tease them apart so we truly understand what sign language processing is like. Knowing this will help sign language educators figure out the best learning strategy for signers, much like the techniques used to teach hearing people foreign languages.

Ries had already been studying speech monitoring in hearing people in France, so when she joined SDSU, the two researchers combined their expertise to study sign monitoring in hearing and deaf signers.

Monitoring for self-editing

They used EEG data recorded from 21 hearing signers and 26 deaf signers in the Neurocognition Lab of Phillip Holcomb and Katherine Midgley, colleagues in the psychology department. Participants were shown pictures to identify by signing while wearing a 32-channel EEG cap with tin electrodes, allowing the researchers to track the neural mechanism behind signing.
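
To give a concrete sense of how such recordings are typically segmented for analysis, the sketch below shows how response-locked epochs could be extracted with the open-source MNE-Python library. This is not the authors' pipeline: the file name, trigger codes, filter settings and epoch window are illustrative assumptions. The only detail taken from the study is that ASL naming responses were timed from keyboard release, so the epochs here are locked to a hypothetical response trigger.

```python
# Minimal sketch (not the authors' pipeline): response-locked epoching of
# 32-channel EEG around the moment a participant begins to sign.
# File name, trigger codes, filter settings and epoch window are assumptions.
import mne

raw = mne.io.read_raw_fif("signer_01_raw.fif", preload=True)  # hypothetical file
raw.filter(l_freq=0.1, h_freq=30.0)  # typical band-pass for slow ERP components

events = mne.find_events(raw, stim_channel="STI 014")
event_id = {"response/correct": 1, "response/error": 2}  # assumed trigger codes

# Epoch from 500 ms before to 300 ms after keyboard release (response onset),
# since the study measured ASL naming latencies via keyboard release.
epochs = mne.Epochs(raw, events, event_id=event_id,
                    tmin=-0.5, tmax=0.3, baseline=(-0.5, -0.4),
                    preload=True)

# Averaging correct and error trials separately yields the waveforms in which
# the monitoring component described in the article can be compared.
evoked_correct = epochs["response/correct"].average()
evoked_error = epochs["response/error"].average()
```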

“We wanted to study sign monitoring in-depth to understand the underlying mechanism and whether it’s universal,” Ries said. “Before people start to sign, you see this component rising, and we observed it happen with hearing signers as well, except it wasn’t as clear.”

TraciAnn Hoglind, a researcher in the SDSU Laboratory for Language and Cognitive Neuroscience, demonstrates the EEG cap worn by study participants while they identified pictures by signing. Image is credited to SDSU.

This difference was possibly because the deaf signers were more proficient in ASL than the hearing signers. Both groups are bilingual in English and ASL, but ASL is the dominant language for deaf signers.

“When we’re speaking, we catch ourselves when we are about to make an error. That’s thanks to this monitoring process, which is located in the medial frontal cortex of the brain,” Ries said. “It peaks 40 milliseconds after you begin speaking, so it’s extremely fast. We make an error because we may not have selected the right word when semantically related words are competing in our brain.”
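
The pattern Ries describes, a negativity that rises before output, peaks shortly after it, and is larger on error than on correct trials, can be illustrated with a toy simulation. The numbers below are invented purely for illustration and do not come from the study; only the rough peak timing (about 40 milliseconds after output begins) follows the quote above.

```python
# Toy illustration with simulated data (not from the study): a monitoring
# negativity is present on both correct and error trials but larger on errors,
# rising before output and peaking roughly 40 ms after response onset.
import numpy as np

rng = np.random.default_rng(0)
times = np.linspace(-0.5, 0.3, 801)  # seconds relative to response onset

def simulated_trial(peak_amp_uv):
    """One fronto-central trial: a negative deflection peaking ~40 ms after
    response onset, plus background noise (amplitudes in microvolts)."""
    component = peak_amp_uv * np.exp(-((times - 0.04) ** 2) / (2 * 0.05 ** 2))
    return component + rng.normal(0.0, 2.0, times.size)

correct = np.mean([simulated_trial(-2.0) for _ in range(100)], axis=0)
errors = np.mean([simulated_trial(-6.0) for _ in range(20)], axis=0)

# Mean amplitude in a window straddling response onset: more negative on
# error trials, the signature of the error-monitoring component.
window = (times >= -0.05) & (times <= 0.10)
print("mean amplitude, correct trials:", round(correct[window].mean(), 2), "µV")
print("mean amplitude, error trials:  ", round(errors[window].mean(), 2), "µV")
```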

Words with similar meanings, such as ‘oven’ and ‘fridge’, may be switched in the brain, as may names (e.g., swapping your children’s names). Other times, syllables get transposed.

Such errors can happen in signing too: signs for different words get mixed up, or an incorrect handshape is substituted for the intended one. This indicates that signers actually assemble phonological units during language production, much as speakers assemble the phonemes of a spoken word.

“Learning how sign production is represented in the brain will help us understand sign language disorders, and if a signer needs epileptic surgery we will know which part of the brain processes sign,” Emmorey said.

The study’s co-authors include Linda Nadalet and Soren Mickelsen, who were master’s students in speech-language pathology, and Megan Mott, who was a master’s student in psychology.

Funding: Funding came from a grant from the SDSU Center for Cognitive and Clinical Neuroscience, designed to encourage interdisciplinary collaborations across campus. Emmorey and Ries are also funded by grants from the National Institute on Deafness and Other Communication Disorders, part of the National Institutes of Health.

About this neuroscience research article

Source:
San Diego State University
Media Contacts:
Padma Nagappan – San Diego State University
Image Source:
The image is credited to SDSU.

Original Research: Open access
“Pre-output Language Monitoring in Sign Production” by Stephanie K. Riès, Linda Nadalet, Soren Mickelsen, Megan Mott, Katherine J. Midgley, Phillip J. Holcomb, and Karen Emmorey.
Journal of Cognitive Neuroscience. doi:10.1162/jocn_a_01542

Abstract

Pre-output Language Monitoring in Sign Production

A domain-general monitoring mechanism is proposed to be involved in overt speech monitoring. This mechanism is reflected in a medial frontal component, the error negativity (Ne), present in both errors and correct trials (Ne-like wave) but larger in errors than correct trials. In overt speech production, this negativity starts to rise before speech onset and is therefore associated with inner speech monitoring. Here, we investigate whether the same monitoring mechanism is involved in sign language production. Twenty deaf signers (American Sign Language [ASL] dominant) and 16 hearing signers (English dominant) participated in a picture–word interference paradigm in ASL. As in previous studies, ASL naming latencies were measured using the keyboard release time. EEG results revealed a medial frontal negativity peaking within 15 msec after keyboard release in the deaf signers. This negativity was larger in errors than correct trials, as previously observed in spoken language production. No clear negativity was present in the hearing signers. In addition, the slope of the Ne was correlated with ASL proficiency (measured by the ASL Sentence Repetition Task) across signers. Our results indicate that a similar medial frontal mechanism is engaged in preoutput language monitoring in sign and spoken language production. These results suggest that the monitoring mechanism reflected by the Ne/Ne-like wave is independent of output modality (i.e., spoken or signed) and likely monitors prearticulatory representations of language. Differences between groups may be linked to several factors including differences in language proficiency or more variable lexical access to motor programming latencies for hearing than deaf signers.
