Mechanisms of real-time speech interpretation in the human brain revealed

Summary: Study reveals the dynamic patterns of information flow between critical language regions of the brain.

Source: University of Cambridge

Scientists have come a step closer to understanding how we're able to make sense of spoken language so rapidly, a process that involves a huge and complex set of computations in the brain.

In a study published today in the journal PNAS, researchers at the University of Cambridge developed novel computational models of the meanings of words, and tested these directly against real-time brain activity in volunteers.
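The paper's semantic models were derived with corpus-based methods far richer than anything shown here, but the general idea behind a "context-independent" model of word meaning can be sketched in a few lines. The toy example below (the mini-corpus, window size, and function name are all invented for illustration, and are not the authors' pipeline) builds a crude meaning vector for a word from its co-occurrence counts:

```python
# Toy sketch (not the authors' pipeline): one common way to model the
# "context-independent" meaning of a word is to count which other words
# it co-occurs with in a corpus. The mini-corpus and window size below
# are invented purely for illustration.
from collections import Counter

def cooccurrence_vector(target, sentences, window=2):
    """Count words appearing within `window` positions of `target`."""
    counts = Counter()
    for sentence in sentences:
        tokens = sentence.lower().split()
        for i, tok in enumerate(tokens):
            if tok == target:
                lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
                counts.update(tokens[j] for j in range(lo, hi) if j != i)
    return counts

corpus = [
    "the elderly man ate the apple",
    "she ate the ripe apple slowly",
    "the apple fell from the tree",
]
print(cooccurrence_vector("apple", corpus))
# Counter({'the': 3, 'ate': 1, ...}) -- a crude stand-in for the much
# richer distributional models used in research.
```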

“Our ability to put words into context, depending on the other words around them, is an immediate process and it’s thanks to the best computer we’ve ever known: the brain in our head. It’s something we haven’t yet managed to fully replicate in computers because it is still so poorly understood,” said Lorraine Tyler, Director of the Centre for Speech, Language and the Brain at the University of Cambridge, which ran the study.

Central to understanding speech are the processes involved in what is known as ‘semantic composition’ – in which the brain combines the meaning of words in a sentence as they are heard, so that they make sense in the context of what has already been said. This new study has revealed the detailed real-time processes going on inside the brain that make this possible.

By playing volunteers the phrase “the elderly man ate the apple” and watching how their brains responded, the researchers could track the dynamic patterns of information flow between critical language regions in the brain.

As the word ‘eat’ is heard, it primes the brain to place constraints on how it interprets the next word in the sentence: the object of ‘eat’ is likely to be something to do with food. The study shows how these constraints directly affect how the meaning of the next word is understood, revealing the neural mechanisms underpinning an essential property of spoken language: our ability to combine sequences of words into meaningful expressions, millisecond by millisecond, as the speech is heard.
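As a loose, purely illustrative sketch of this kind of constraint (the feature names and numbers below are invented, and the study's actual models were corpus-derived rather than hand-built), one can think of the verb as re-weighting the semantic features expected of its object:

```python
# Illustrative sketch only: the idea that a verb's selectional
# preferences can re-weight ("constrain") the semantic features of the
# noun that follows it. Features and weights are invented for this toy.
import numpy as np

features = ["edible", "round", "man-made", "alive"]

# Context-independent meaning of "apple" over these toy features.
apple = np.array([0.9, 0.8, 0.1, 0.3])

# Selectional preferences of "eat": which features its object tends to have.
eat_prefers = np.array([1.0, 0.2, 0.0, 0.1])

# A contextually constrained noun representation: the verb boosts the
# features it expects and suppresses the rest (element-wise re-weighting).
apple_given_eat = apple * eat_prefers
apple_given_eat /= apple_given_eat.sum()  # normalize for comparison

for name, w in zip(features, apple_given_eat):
    print(f"{name:>8}: {w:.2f}")
# "edible" now dominates: hearing "eat" has sharpened the expected
# semantics of whatever noun comes next.
```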

“The way our brain enables us to understand what someone is saying, as they’re saying it, is remarkable,” said Professor Tyler. “By looking at the real-time flow of information in the brain we’ve shown how word meanings are being rapidly interpreted and put into context.”

About this neuroscience research article

Source:
University of Cambridge
Media Contacts:
Jacqueline Garget – University of Cambridge
Image Source:
The image is in the public domain.

Original Research: Open access
“Neural dynamics of semantic composition” by Bingjiang Lyu, Hun S. Choi, William D. Marslen-Wilson, Alex Clarke, Billi Randall, and Lorraine K. Tyler. PNAS. doi:10.1073/pnas.1903402116

Abstract

Neural dynamics of semantic composition

Human speech comprehension is remarkable for its immediacy and rapidity. The listener interprets an incrementally delivered auditory input, millisecond by millisecond as it is heard, in terms of complex multilevel representations of relevant linguistic and nonlinguistic knowledge. Central to this process are the neural computations involved in semantic combination, whereby the meanings of words are combined into more complex representations, as in the combination of a verb and its following direct object (DO) noun (e.g., “eat the apple”). These combinatorial processes form the backbone for incremental interpretation, enabling listeners to integrate the meaning of each word as it is heard into their dynamic interpretation of the current utterance. Focusing on the verb-DO noun relationship in simple spoken sentences, we applied multivariate pattern analysis and computational semantic modeling to source-localized electro/magnetoencephalographic data to map out the specific representational constraints that are constructed as each word is heard, and to determine how these constraints guide the interpretation of subsequent words in the utterance. Comparing context-independent semantic models of the DO noun with contextually constrained noun models reflecting the semantic properties of the preceding verb, we found that only the contextually constrained model showed a significant fit to the brain data. Pattern-based measures of directed connectivity across the left hemisphere language network revealed a continuous information flow among temporal, inferior frontal, and inferior parietal regions, underpinning the verb’s modification of the DO noun’s activated semantics. These results provide a plausible neural substrate for seamless real-time incremental interpretation on the observed millisecond time scales.
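The model-comparison logic the abstract describes can be sketched in the spirit of representational similarity analysis, one common multivariate pattern approach. The example below is a simplified, hypothetical version: the "brain" data are simulated so that they follow the contextually constrained model, whereas the study used source-localized EEG/MEG and more sophisticated spatiotemporal methods.

```python
# Minimal sketch of the model-comparison idea, in the spirit of
# representational similarity analysis (RSA). The neural data here are
# simulated; this is not the study's actual analysis pipeline.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_words, n_dims, n_sensors = 20, 10, 50

# Two candidate semantic models for the same direct-object nouns:
context_free = rng.normal(size=(n_words, n_dims))   # noun in isolation
constrained = rng.normal(size=(n_words, n_dims))    # noun given the verb

# Simulate brain patterns that actually follow the constrained model,
# plus measurement noise.
brain = constrained @ rng.normal(size=(n_dims, n_sensors))
brain += 0.5 * rng.normal(size=brain.shape)

def rdm(patterns):
    """Representational dissimilarity matrix (condensed form)."""
    return pdist(patterns, metric="correlation")

# Model fit = rank correlation between each model RDM and the neural RDM.
for name, model in [("context-free", context_free),
                    ("constrained", constrained)]:
    rho, _ = spearmanr(rdm(model), rdm(brain))
    print(f"{name:>12} model fit: rho = {rho:.2f}")
# The constrained model fits better, mirroring the paper's finding that
# only contextually constrained noun models matched the brain data.
```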
