Neural ‘Auto-Correct’ Feature We Use to Process Ambiguous Sounds Discovered

Summary: Researchers report that the brain re-evaluates its interpretation of a speech sound the moment each subsequent sound is heard, updating that interpretation as necessary.

Source: NYU.

Our brains have an “auto-correct” feature that we deploy when re-interpreting ambiguous sounds, a team of scientists has discovered. The findings, which appear in the Journal of Neuroscience, point to new ways we use information and context to aid speech comprehension.

“What a person thinks they hear does not always match the actual signals that reach the ear,” explains Laura Gwilliams, a doctoral candidate in NYU’s Department of Psychology, a researcher at the Neuroscience of Language Lab at NYU Abu Dhabi, and the paper’s lead author. “This is because, our results suggest, the brain re-evaluates the interpretation of a speech sound at the moment that each subsequent speech sound is heard in order to update interpretations as necessary.

“Remarkably, our hearing can be affected by context occurring up to one second later, without the listener ever being aware of this altered perception.”

“For example, an ambiguous initial sound, such as one between ‘b’ and ‘p,’ is heard one way or the other depending on whether it occurs in the word ‘parakeet’ or ‘barricade,’” adds Alec Marantz, principal investigator of the project, a professor in NYU’s departments of Linguistics and Psychology, and co-director of NYU Abu Dhabi’s Neuroscience of Language Lab, where the research was conducted. “This happens without conscious awareness of the ambiguity, even though the disambiguating information doesn’t come until the middle of the third syllable.”

The study–the first to unveil how the brain uses information gathered after an initial sound is detected to aid speech comprehension–also included David Poeppel, a professor of Psychology and Neural Science, and Tal Linzen, an assistant professor in Johns Hopkins University’s Department of Cognitive Science.

It’s well known that the perception of a speech sound is determined by its surrounding context–in the form of words, sentences, and other speech sounds. In many instances, this contextual information is heard later than the initial sensory input.

This plays out in everyday life–when we talk, the actual speech we produce is often ambiguous. For example, when a friend says she has a “dent” in her car, you may hear “tent.” Although this kind of ambiguity happens regularly, we, as listeners, are hardly aware of it.

“This is because the brain automatically resolves the ambiguity for us–it picks an interpretation and that’s what we perceive,” explains Gwilliams. “The way the brain does this is by using the surrounding context to narrow down the possibilities of what the speaker may mean.”

In the Journal of Neuroscience study, the researchers sought to understand how the brain uses this subsequent information to modify our perception of what we initially heard.

To do this, they conducted a series of experiments in which the subjects listened to isolated syllables and similar-sounding words (e.g., barricade, parakeet). To gauge the subjects’ brain activity, the scientists used magnetoencephalography (MEG), a technique that maps neural activity by recording the magnetic fields generated by the electrical currents produced by the brain.
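The article does not describe the analysis pipeline in detail, but a minimal sketch of how MEG recordings of this kind are commonly epoched and averaged, using the open-source MNE-Python library, is shown below; the file name, trigger channel, and event codes are hypothetical and only illustrate the general workflow, not the study’s actual analysis.

```python
# Illustrative sketch only: epoching MEG data around word onsets with MNE-Python.
# File name, stim channel, and event codes are hypothetical, not from the study.
import mne

# Load a raw MEG recording (FIF is a common MEG file format)
raw = mne.io.read_raw_fif("subject01_speech_task_raw.fif", preload=True)
raw.filter(l_freq=0.1, h_freq=40.0)  # band-pass filter to reduce drift and noise

# Events marking stimulus onsets (e.g., trigger value 1 = word onset)
events = mne.find_events(raw, stim_channel="STI 101")

# Epoch from 200 ms before to 1 s after each word onset, baseline-corrected
epochs = mne.Epochs(
    raw, events, event_id={"word_onset": 1},
    tmin=-0.2, tmax=1.0, baseline=(None, 0), preload=True,
)

# Average over trials to obtain the evoked response
evoked = epochs.average()
evoked.plot()  # e.g., inspect activity around 50 ms after sound onset
```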

Image caption: Throughout the experiment, the volunteer hears syllables and words through specially made plastic earphones. Credit: Kate Lord/New York University.

Their results yielded three primary findings:

  • The brain’s primary auditory cortex is sensitive to how ambiguous a speech sound is at just 50 milliseconds after the sound’s onset.
  • The brain “re-plays” previous speech sounds while interpreting subsequent ones, suggesting that earlier sounds are re-evaluated as the rest of the word unfolds.
  • The brain makes commitments to its “best guess” of how to interpret the signal after about half a second.

“What is interesting is the fact that this context can occur after the sounds being interpreted and still be used to alter how the sound is perceived,” Gwilliams adds.

For example, the same sound will be perceived as “k” at the onset of “kiss” and “g” at the onset of “gift,” even though the difference between the words (“ss” vs. “ft”) comes after the ambiguous sound.

“Specifically, we found that the auditory system actively maintains the acoustic signal in auditory cortex, while concurrently making guesses about the identity of the words being said,” says Gwilliams. “Such a processing strategy allows the content of the message to be accessed quickly, while also permitting re-analysis of the acoustic signal to minimize hearing mistakes.”
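The paper does not publish a computational model of this strategy, but the idea Gwilliams describes (keeping graded acoustic evidence available while word-level guesses are updated) can be illustrated with a toy Bayesian sketch in which later lexical context re-weights an earlier ambiguous “b”/“p” onset. All probabilities below are invented purely for illustration.

```python
# Toy illustration (not the study's model): later lexical context re-weights
# an earlier ambiguous phoneme. All probabilities are invented for the example.

# Graded acoustic evidence for the ambiguous onset: slightly "p"-like
likelihood = {"p": 0.55, "b": 0.45}

# Prior over the two candidate words before any disambiguating context
prior_word = {"parakeet": 0.5, "barricade": 0.5}

# Which onset each candidate word requires
onset_of = {"parakeet": "p", "barricade": "b"}

def onset_posterior(word_evidence):
    """Combine the maintained acoustic evidence with current word-level evidence."""
    joint = {w: word_evidence[w] * likelihood[onset_of[w]] for w in word_evidence}
    total = sum(joint.values())
    return {onset_of[w]: joint[w] / total for w in joint}

# Before the disambiguating third syllable: the onset stays ambiguous
print(onset_posterior(prior_word))   # roughly {'p': 0.55, 'b': 0.45}

# After later context strongly favors "barricade", the earlier sound flips to "b"
print(onset_posterior({"parakeet": 0.05, "barricade": 0.95}))
```

The point the sketch makes concrete is that the acoustic likelihood for the onset is held unchanged, so the interpretation of the earlier sound can still flip once later evidence favors one word over the other.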

About this neuroscience research article

Funding: This research was supported by the NYU Abu Dhabi Research Institute (G1001), the European Research Council (ERC-2011-AdG 295810 BOOTPHON), France’s National Research Agency (ANR-10-IDEX-0001-02 PSL, ANR-10-LABX-0087 IEC), and the National Institutes of Health (2R01DC05660).

Source: James Devitt – NYU
Publisher: Organized by NeuroscienceNews.com.
Image Source: NeuroscienceNews.com image is credited to Kate Lord/New York University.
Original Research: Abstract for “In spoken word recognition the future predicts the past” by Laura Gwilliams, Tal Linzen, David Poeppel and Alec Marantz in Journal of Neuroscience. Published July 16, 2018.
doi:10.1523/JNEUROSCI.0065-18.2018



Abstract

In spoken word recognition the future predicts the past

Speech is an inherently noisy and ambiguous signal. In order to fluently derive meaning, a listener must integrate contextual information to guide interpretations of the sensory input. While many studies have demonstrated the influence of prior context on speech perception, the neural mechanisms supporting the integration of subsequent context remain unknown. Using magnetoencephalography (MEG) to record from human auditory cortex, we analysed responses to spoken words with a varyingly ambiguous onset phoneme, the identity of which is later disambiguated at the lexical uniqueness point. Fifty participants (both male and female) were recruited across two MEG experiments. Our findings suggest that primary auditory cortex is sensitive to phonological ambiguity very early during processing — at just 50 ms after onset. Subphonemic detail is preserved in auditory cortex over long timescales, and re-evoked at subsequent phoneme positions. Commitments to phonological categories occur in parallel, resolving on the shorter time-scale of ∼450 ms. These findings provide evidence that future input determines the perception of earlier speech sounds by maintaining sensory features until they can be integrated with top-down lexical information.

Significance statement

The perception of a speech sound is determined by its surrounding context, in the form of words, sentences, and other speech sounds. Often, such contextual information becomes available later than the sensory input. The present study is the first to unveil how the brain uses this subsequent information to aid speech comprehension. Concretely, we find that the auditory system actively maintains the acoustic signal in auditory cortex, while concurrently making guesses about the identity of the words being said. Such a processing strategy allows the content of the message to be accessed quickly, while also permitting re-analysis of the acoustic signal to minimise parsing mistakes.
