Summary: A new study found that the timing of brain waves shapes how we perceive speech. Researchers discovered that perception of ambiguous speech is biased toward more probable sounds and words when stimuli arrive during less excitable brain wave phases, and toward less probable ones during more excitable phases.
Using ambiguous speech stimuli and MEG recordings, the team showed how neural timing affects language comprehension. The findings have significant implications for theories of predictive coding in speech perception.
Key Facts:
- Brain wave timing influences the perception of speech sounds and words.
- Perception of ambiguous speech is biased toward more probable sounds and words during less excitable brain wave phases, and toward less probable ones during more excitable phases.
- Findings support the role of neural timing in language comprehension and predictive coding.
Source: Max Planck Institute
The timing of our brain waves shapes how we perceive our environment. We are more likely to perceive events when their timing coincides with the timing of relevant brain waves.
Lead scientist Sanne ten Oever and her co-authors set out to determine whether neural timing also shapes speech perception. Is the probability of speech sounds or words encoded in our brain waves, and is this information used to recognise words?
The team first created ambiguous stimuli for both sounds and words. For instance, the initial sounds in da and ga differ in probability: ‘d’ is more common than ‘g’.
The Dutch words dat (“that”) and gat (“hole”) also differ in word frequency: dat is more common than gat. For each stimulus pair, the researchers created a spoken stimulus that was acoustically in between the two.
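The article does not describe the morphing procedure itself, but the idea of an ambiguity continuum can be sketched with a simple linear mix of two equal-length recordings, where the midpoint is maximally ambiguous. The `morph` function and the random stand-in waveforms below are hypothetical; real speech morphing is considerably more sophisticated.

```python
# Toy illustration of an ambiguity continuum between two spoken
# endpoints (e.g., "dat" and "gat"). NOT the study's stimulus-creation
# method; only a sketch of the "in between" idea.
import numpy as np

def morph(wave_a: np.ndarray, wave_b: np.ndarray, alpha: float) -> np.ndarray:
    """Linearly mix two equal-length waveforms.
    alpha=0.0 returns pure A, alpha=1.0 returns pure B."""
    assert wave_a.shape == wave_b.shape and 0.0 <= alpha <= 1.0
    return (1.0 - alpha) * wave_a + alpha * wave_b

# Random noise as stand-ins for recordings of "dat" and "gat".
rng = np.random.default_rng(seed=0)
dat = rng.standard_normal(16_000)   # ~1 s at 16 kHz
gat = rng.standard_normal(16_000)
ambiguous = morph(dat, gat, alpha=0.5)  # maximally ambiguous midpoint
```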
Next, participants were exposed to each ambiguous stimulus and asked to select what they thought they heard (for instance, dat or gat). The team used magnetoencephalography (MEG) to record the timing of brain waves.
Excitable phases
The researchers found that brain waves biased perception towards more probable sounds or words when stimuli were presented during a less ‘excitable’ brain wave phase, and towards less probable sounds or words when stimuli were presented during a more ‘excitable’ phase.
This means that both the probability of an event and its timing influenced what people perceived. Brain regions classically associated with processing speech sounds were sensitive to the probability of sounds, while regions associated with word processing were sensitive to the probability of words. Computational modeling confirmed the relationship between neural timing and perception.
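The study's computational model is not reproduced here, but the core mechanism can be illustrated with a minimal sketch: the unit coding a frequent word is assumed to have a lower activation threshold than the unit coding a rare word, and an oscillation adds a phase-dependent excitability boost. The threshold values, the cosine excitability curve, and the simple winner-take-all rule are illustrative assumptions, not the authors' model, which derives the bias from the relative timing of population activity.

```python
# Minimal sketch (assumptions, not the study's model) of phase-dependent
# perception of an ambiguous word: the probable word's unit is more
# sensitive (lower threshold), so it alone responds at less-excitable
# phases, while at more-excitable phases the rare word's unit also
# reaches threshold and, under this toy rule, wins.
import numpy as np

THRESH_PROBABLE = 0.4  # hypothetical threshold for a frequent word (e.g., "dat")
THRESH_RARE = 0.8      # hypothetical threshold for an infrequent word (e.g., "gat")

def perceive(phase_rad: float, evidence: float = 0.5) -> str:
    """Percept for ambiguous input arriving at a given oscillatory phase."""
    excitability = 0.5 * (1.0 + np.cos(phase_rad))  # 1 at peak, 0 at trough
    drive = evidence + excitability                 # shared input + phase boost
    if drive >= THRESH_RARE:
        return "rare word"      # excitable phase: both units respond
    if drive >= THRESH_PROBABLE:
        return "probable word"  # less-excitable phase: only the sensitive unit
    return "no percept"

# Sweeping the cycle shows the reported flip: the rare word is heard near
# the excitability peak (phase 0), the probable word toward the trough.
for phase in np.linspace(0.0, np.pi, 5):
    print(f"phase {phase:.2f} rad -> {perceive(phase)}")
```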
“We conclude that brain waves provide a temporal structure that enhances the brain’s ability to predict and process speech based on the probability of linguistic units”, says Ten Oever.
“Predictable speech sounds and words have a lower threshold for activation, and our brain waves reflect this. Knowledge about how probable something is and about what it is (which phoneme or which word) work hand in hand to create language comprehension.”
Predictive coding
“Our study has important consequences for theories of predictive coding,” adds senior author Andrea Martin.
“We show that the time (or phase) of information processing has direct consequences for whether something is interpreted as a more or less likely event, determining which words or sounds we hear.
“In the fields of speech and language processing, most emphasis has been put on the neural communication role of neural oscillations. However, we show that properties of phase coding are also used for interpreting speech input and recognising words.”
About this speech processing and neuroscience research news
Author: Anniek Corporaal
Source: Max Planck Institute
Contact: Anniek Corporaal – Max Planck Institute
Image: The image is credited to Neuroscience News
Original Research: Open access.
“Brain waves shape the words we hear” by Sanne ten Oever et al. PNAS
Abstract
Brain waves shape the words we hear
Neural oscillations reflect fluctuations in excitability, which biases the percept of ambiguous sensory input. Why this bias occurs is still not fully understood.
We hypothesized that neural populations representing likely events are more sensitive, and thereby become active on earlier oscillatory phases, when the ensemble itself is less excitable.
Perception of ambiguous input presented during less-excitable phases should therefore be biased toward frequent or predictable stimuli that have lower activation thresholds.
Here, we show such a frequency bias in spoken word recognition using psychophysics, magnetoencephalography (MEG), and computational modelling.
With MEG, we found a double dissociation, where the phase of oscillations in the superior temporal gyrus and medial temporal gyrus biased word-identification behavior based on phoneme and lexical frequencies, respectively. This finding was reproduced in a computational model.
These results demonstrate that oscillations provide a temporal ordering of neural activity based on the sensitivity of separable neural populations.