Summary: A new study comparing stroke survivors with healthy adults reveals that post-stroke language disorders stem not from slower hearing but from weaker integration of speech sounds. While patients detected sounds as quickly as controls, their brains processed speech features with far less strength, especially when words were unclear.
Healthy listeners extended processing during uncertainty, but stroke survivors did not, suggesting they may abandon sound analysis too early to fully grasp difficult words. The findings highlight neural patterns essential for verbal comprehension and point to faster, story-based diagnostic tools for language impairments.
Key Facts
- Weakened Integration: Stroke survivors process speech sound features with much lower neural strength despite normal sound detection speed.
- Reduced Persistence: When words are unclear, they do not sustain processing long enough to resolve ambiguity.
- Diagnostic Potential: Simple story-listening tasks may replace lengthy behavioral tests for language disorders.
Source: SfN
Following stroke, some people experience a language disorder that hinders their ability to process speech sounds. How does stroke change their brains?
Researchers led by Laura Gwilliams, faculty scholar at the Wu Tsai Neuroscience Institute and Stanford Data Science and assistant professor at the Stanford School of Humanities and Sciences, and Maaike Vandermosten, associate professor in the Department of Neurosciences at KU Leuven, compared the brains of 39 patients following stroke with those of 24 healthy age-matched controls to uncover the brain mechanisms of language processing.
As reported in their Journal of Neuroscience paper, the researchers recorded brain activity while volunteers listened to a story.
People with verbal speech processing issues after stroke were not slower to process speech sounds than healthy participants, but their processing was much weaker.
According to the researchers, this suggests that people with this language disorder can hear sounds of all kinds as well as healthy people do but have trouble integrating speech sounds to understand language.
Additionally, when there was uncertainty about which words were being said, healthy people processed speech sound features longer than those who had experienced a stroke.
This could mean that, following stroke, people do not process speech sounds long enough to successfully comprehend words that are difficult to recognize.
This work points to brain activity patterns that may be crucial for understanding verbal language, according to the authors.
First author Jill Kries expresses excitement about continuing to explore how this simple approach, listening to a story, can be used to improve diagnostics for conditions characterized by language processing issues, whose diagnosis currently involves hours of behavioral tasks.
Key Questions Answered:
Q: What goes wrong in the brains of stroke survivors with this language disorder?
A: Their brains detect sounds normally but integrate speech features with reduced strength, making comprehension harder even when hearing is intact.
Q: How does processing differ when words are hard to make out?
A: Healthy listeners process sound features longer to resolve ambiguity, but stroke survivors stop too soon, leading to missed meaning.
Q: What could this mean for diagnosing language impairments?
A: Story-listening brain recordings may provide a quick, naturalistic alternative to hours of behavioral language testing.
About this stroke and speech processing research news
Author: SfN Media
Source: SfN
Contact: SfN Media – SfN
Image: The image is credited to Neuroscience News
Original Research: Closed access.
“The Spatio-Temporal Dynamics of Phoneme Encoding in Aging and Aphasia” by Laura Gwilliams et al. Journal of Neuroscience
Abstract
The Spatio-Temporal Dynamics of Phoneme Encoding in Aging and Aphasia
During successful language comprehension, speech sounds (phonemes) are encoded within a series of neural patterns that evolve over time.
Here we tested whether these neural dynamics of speech encoding are altered for individuals with a language disorder. We recorded EEG responses from human brains of 39 individuals with post-stroke aphasia (13♀/26♂) and 24 healthy age-matched controls (i.e., older adults; 8♀/16♂) during 25 minutes of natural story listening.
We estimated the duration of phonetic feature encoding, speed of evolution across neural populations, and the spatial location of encoding over EEG sensors.
First, we establish that phonetic features are robustly encoded in EEG responses of healthy older adults.
Second, when comparing individuals with aphasia to healthy controls, we find significantly decreased phonetic encoding in the aphasic group after a shared initial processing pattern (0.08–0.25 s after phoneme onset).
Phonetic features were less strongly encoded over left-lateralized electrodes in the aphasia group compared to controls, with no difference in speed of neural pattern evolution.
Finally, we observed that healthy controls, but not individuals with aphasia, encode phonetic features longer when uncertainty about word identity is high, indicating that this mechanism – encoding phonetic information until word identity is resolved – is crucial for successful comprehension.
Together, our results suggest that aphasia may entail failure to maintain lower-order information long enough to recognize lexical items.
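For readers who want a concrete sense of how the "duration of phonetic feature encoding" mentioned in the abstract can be estimated from EEG, the sketch below shows one common, generic approach: train a classifier at each time point after phoneme onset to decode a binary phonetic feature (here, voiced vs. voiceless), and treat the latency window with above-chance accuracy as a rough estimate of how long that feature stays decodable. This is an illustrative sketch using MNE-Python and scikit-learn under assumed inputs (phoneme-locked epochs and feature labels with hypothetical file names), not the authors' actual analysis pipeline.

```python
# Illustrative sketch (not the authors' pipeline): time-resolved decoding of a
# binary phonetic feature from phoneme-locked EEG epochs.
import numpy as np
import mne
from mne.decoding import SlidingEstimator, cross_val_multiscore
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Hypothetical inputs: EEG epochs time-locked to phoneme onsets and a binary
# label per phoneme (1 = voiced, 0 = voiceless).
epochs = mne.read_epochs("phoneme_epochs-epo.fif")  # hypothetical file name
voiced = np.load("voiced_labels.npy")               # hypothetical file name

X = epochs.get_data()   # shape: (n_phonemes, n_channels, n_times)
y = voiced

# Fit one classifier per time point; above-chance accuracy at a latency means
# the feature is linearly decodable from the EEG at that latency.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
time_decoder = SlidingEstimator(clf, scoring="roc_auc", n_jobs=1)
scores = cross_val_multiscore(time_decoder, X, y, cv=5, n_jobs=1).mean(axis=0)

# The window where scores exceed chance (AUC 0.5) gives a rough estimate of
# encoding duration; in practice this would be tested statistically.
above = epochs.times[scores > 0.5]
if above.size:
    print(f"Feature decodable from ~{above.min():.2f}s to ~{above.max():.2f}s")
```

Running such an analysis separately for the aphasia and control groups, and comparing where and for how long decoding stays above chance, would be one way to operationalize the group differences in encoding strength and duration described in the abstract.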

