This shows a woman surrounded by speech bubbles. Credit: Neuroscience News

How We Recognize Words in Real-Time

Summary: A recent study has identified three distinct strategies people use to recognize words: “Wait and See,” “Sustained Activation,” and “Slow Activation.” These strategies were observed in both individuals with normal hearing and those with cochlear implants, revealing that word-recognition processes are highly individualized.

This discovery offers new insights into language processing and could help improve interventions for those with hearing impairments. The findings also suggest that differences in word recognition may be more widespread than previously thought, extending beyond those with hearing difficulties.

Key Facts:

  1. Researchers identified three word-recognition strategies: “Wait and See,” “Sustained Activation,” and “Slow Activation.”
  2. These strategies were observed in both normal-hearing individuals and cochlear implant users, highlighting individualized language processing.
  3. The study could lead to improved interventions for hearing impairments by better understanding how people recognize words.

Source: University of Iowa

University of Iowa researchers have defined how people recognize words.

In a new study with people who use cochlear implants to hear, the researchers identified three main approaches that people with or without hearing impairment use to recognize words, an essential building block for understanding spoken language.

Which approach a person uses depends on the individual, regardless of hearing aptitude or ability: some wait a bit before identifying a word, while others tussle between two or more candidates before deciding which word they have heard.

When a person hears a word, the brain briefly considers hundreds, if not thousands, of options and rules out most of them in less than a second. When someone hears “Hawkeyes,” for example, the brain might briefly consider “hot dogs,” “hawk,” “hockey,” and other similar-sounding words before settling on the target word.
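This "cohort" dynamic can be pictured with a toy sketch: as each successive sound arrives, candidate words whose onsets no longer match drop out of contention. The snippet below is purely illustrative and is not the study's model; it uses letters as a crude stand-in for speech sounds and a tiny made-up lexicon.

```python
# Toy illustration (not the study's model) of incremental cohort narrowing:
# as each sound arrives, words whose onsets no longer match drop out.
lexicon = ["hawkeyes", "hawk", "hockey", "hot", "hotdog", "hammer", "dog"]

def cohort_over_time(word, lexicon):
    """Yield the still-viable candidates after each successive sound."""
    for i in range(1, len(word) + 1):
        prefix = word[:i]
        yield prefix, [w for w in lexicon if w.startswith(prefix)]

for prefix, candidates in cohort_over_time("hawkeyes", lexicon):
    print(f"{prefix!r}: {candidates}")
# 'h': every h-word competes; 'ha': hawkeyes, hawk, hammer;
# 'haw': hawkeyes, hawk; 'hawk': hawkeyes, hawk; ... 'hawkeyes': hawkeyes
```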

While the brain operates quickly and differences in word-recognition strategies may be subtle, the findings in this study are important because they could open new ways for hearing specialists to identify word-recognition difficulties in early childhood or in older adults, who often experience hearing loss, and to manage those conditions more effectively.

“With this study, we found people don’t all work the same way, even at the level of how they recognize a single word,” says Bob McMurray, F. Wendell Miller Professor in the Department of Psychological and Brain Sciences and the study’s corresponding author.

“People seem to adopt their own unique solutions to the challenge of recognizing words. There’s not one way to be a language user. That’s kind of wild when you think about it.” 

McMurray has been studying word recognition in children and in older adults for three decades. His research has shown differences in how people across all ages recognize spoken language. But those differences tended to be so slight that they were difficult to categorize precisely.

So, McMurray and his research team turned to people who use cochlear implants — devices used by the profoundly deaf or severely hard-of-hearing that bypass the normal pathways by which people hear, using electrodes to deliver sound. 

“It’s like replacing millions of hair cells and thousands of frequencies with 22 electrodes. It just smears everything together. But it works, because the brain can adapt,” McMurray says.

The research team enlisted 101 participants from the Iowa Cochlear Implant Clinical Research Center at University of Iowa Health Care Medical Center. The participants listened through loudspeakers as a word was spoken, then selected among four images on a computer screen the one that matched the word they had heard.

The hearing and selection activities were recorded with eye-tracking technology, which allowed the researchers to follow, in a fraction of a second, how and when each participant decided on a word they had heard.
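The press release does not detail the analysis, but data from this kind of task are typically summarized as fixation-proportion curves: for each moment after word onset, the proportion of trials in which the listener is looking at each image. The sketch below illustrates that idea with a toy data layout and hypothetical object labels; it is an assumption about the general approach, not the study's actual pipeline.

```python
# Minimal sketch of summarizing eye-tracking data from a four-image
# word-recognition task as fixation proportions over time.
# The data layout and labels here are illustrative assumptions.
from collections import defaultdict

# Each trial is a list of (time_ms, fixated_object) samples from the tracker;
# objects might be "target", "cohort" (similar onset), "rhyme", "unrelated".
trials = [
    [(0, "unrelated"), (250, "cohort"), (500, "target"), (750, "target")],
    [(0, "cohort"), (250, "cohort"), (500, "cohort"), (750, "target")],
]

def fixation_proportions(trials, objects=("target", "cohort", "rhyme", "unrelated")):
    """Return {object: {time_ms: proportion of trials fixating it}}."""
    counts = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    for trial in trials:
        for time_ms, obj in trial:
            counts[obj][time_ms] += 1
            totals[time_ms] += 1
    return {
        obj: {t: counts[obj][t] / totals[t] for t in sorted(totals)}
        for obj in objects
    }

curves = fixation_proportions(trials)
print(curves["target"])  # e.g. {0: 0.0, 250: 0.0, 500: 0.5, 750: 1.0}
```

How quickly the target curve rises, and how long competitor curves stay elevated, is what distinguishes the strategies described below.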

The experiments revealed that the cochlear implant users, even with a different way of hearing, employed the same basic process when choosing spoken words as normal-hearing people.

The researchers named three word-recognition dimensions:

  • Wait and See
  • Sustained Activation
  • Slow Activation 

Most cochlear implant participants used Wait and See to some degree, the researchers found, waiting as much as a quarter of a second after hearing a word before firmly deciding which word they had heard.

Previous research in McMurray’s lab has shown that children with early hearing loss have Wait and See tendencies, but this hasn’t been observed more generally.

“Maybe it’s a way for them to avoid a bunch of other word competitors in their heads,” McMurray says. “They can kind of slow down and keep it simple.”

The researchers also learned that some cochlear implant participants tended toward Sustained Activation, in which listeners tussle for a bit between candidate words before settling on the one they think they heard, while others leaned on Slow Activation, meaning they are simply slower to recognize words. Importantly, every listener seems to adopt a hybrid, relying on each strategy to a different degree.

The dimensions match the patterns by which people without hearing impairment, from youth to older ages, tend to recognize words, as shown in a previous study by McMurray’s team.

“Now that we’ve identified the dimensions with our cochlear implant population, we can look at people without hearing impairment, and we see that the exact same dimensions apply,” McMurray says. “What we see very clearly with how cochlear implant users recognize words is also going on under the hood in lots of people.” 

The researchers now hope to apply the findings to develop strategies that may help people who are at the extreme ends of a particular word-recognition dimension. About 15% of adults in the United States have hearing loss, which can cascade into cognitive decline, fewer social interactions, and greater isolation.

“We aim to have a more refined way than simply asking them, ‘How well are you listening; do you struggle to perceive speech in the real world?’” McMurray says.

The study, “Underlying dimensions of real-time word recognition in cochlear implant users,” was published online Aug. 29 in the journal Nature Communications.

Contributing authors, all from Iowa, include Francis Smith, Marissa Huffman, Kristin Rooff, John Muegge, Charlotte Jeppsen, Ethan Kutlu, and Sarah Colby.

Funding: The National Institutes of Health and the U.S. National Science Foundation funded the research, as part of 30 years of funding for the Iowa Cochlear Implant Clinical Research Center.

About this language and neuroscience research news

Author: Richard Lewis
Source: University of Iowa
Contact: Richard Lewis – University of Iowa
Image: The image is credited to Neuroscience News

Original Research: Open access.
“Underlying dimensions of real-time word recognition in cochlear implant users” by Bob McMurray et al. Nature Communications


Abstract

Underlying dimensions of real-time word recognition in cochlear implant users

Word recognition is a gateway to language, linking sound to meaning. Prior work has characterized its cognitive mechanisms as a form of competition between similar-sounding words. However, it has not identified dimensions along which this competition varies across people.

We sought to identify these dimensions in a population of cochlear implant users with heterogeneous backgrounds and audiological profiles, and in a lifespan sample of people without hearing loss. Our study characterizes the process of lexical competition using the Visual World Paradigm.

A principal component analysis reveals that people’s ability to resolve lexical competition varies along three dimensions that mirror prior small-scale studies. These dimensions capture the degree to which lexical access is delayed (“Wait-and-See”), the degree to which competition fully resolves (“Sustained-Activation”), and the overall rate of activation.
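As a rough illustration of the dimensionality reduction the abstract describes, the sketch below runs a principal component analysis over per-participant summaries of the recognition timecourse. The feature matrix, its size, and the feature names are hypothetical stand-ins; the study's actual inputs came from its eye-tracking analyses, not from this code.

```python
# Rough sketch of the PCA step described in the abstract: reduce
# per-participant word-recognition summaries to a few dimensions.
# The random feature matrix below is a hypothetical placeholder.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Rows: participants (101, as in the study); columns: hypothetical
# curve parameters, e.g. target-fixation delay, slope, competitor peak.
X = rng.normal(size=(101, 6))

X_std = StandardScaler().fit_transform(X)  # PCA is scale-sensitive
pca = PCA(n_components=3)                  # three dimensions, per the paper
scores = pca.fit_transform(X_std)          # each participant's position
                                           # along each extracted dimension
print(pca.explained_variance_ratio_)       # variance each dimension captures
print(scores.shape)                        # (101, 3)
```

In the study, the three extracted dimensions were interpreted as Wait-and-See, Sustained-Activation, and the overall rate of activation.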

Each dimension is predicted by different auditory skills and demographic factors (onset of deafness, age, cochlear implant experience). Moreover, each dimension predicts outcomes (speech perception in quiet and noise, subjective listening success) over and above auditory fidelity. Higher degrees of Wait-and-See and Sustained-Activation predict poorer outcomes.

These results suggest the mechanisms of word recognition vary along a few underlying dimensions which help explain variable performance among listeners encountering auditory challenge.
