The brain’s ability to process consonants in noisy environments may reflect a child’s literacy potential.
A quick biological test may be able to identify children who have literacy challenges or learning disabilities long before they learn to read, according to new research from Northwestern University.
The study, published July 14 in PLOS Biology, centers on the child’s ability to decipher speech — specifically consonants — in a chaotic, noisy environment. Preliterate children whose brains inefficiently process speech against a background of noise are more likely than their peers to have trouble with reading and language development when they reach school age, the researchers found.
This newfound link between the brain’s ability to process spoken language in noise and reading skill in pre-readers “provides a biological looking glass into a child’s future literacy,” said study senior author Nina Kraus, director of Northwestern’s Auditory Neuroscience Laboratory.
“There are excellent interventions we can give to struggling readers during crucial pre-school years, but the earlier the better,” said Kraus, a professor of communication sciences, neurobiology and physiology in the School of Communication. “The challenge has been to identify which children are candidates for these interventions, and now we have discovered a way.”
Noisy environments, such as homes with blaring televisions and wailing children, loud classrooms or urban streetscapes, can disrupt brain mechanisms associated with literacy development in school-age children.
The Northwestern study, which directly measured the brain’s response to sound using electroencephalography (EEG), is one of the first to find this deleterious effect in preliterate children. This suggests that the brain’s ability to process the sounds of consonants in noise is fundamental for language and reading development.
Speech and communication often occur in noisy places, environments that tax the brain. Noise particularly affects the brain’s ability to hear consonants, rather than vowels, because consonants are said very quickly and vowels are acoustically simpler, Kraus said.
“If the brain’s response to sound isn’t optimal, it can’t keep up with the fast, difficult computations required to process in noise,” Kraus said.
“Sound is a powerful, invisible force that is central to human communication. Everyday listening experiences bootstrap language development by cluing children in on which sounds are meaningful. If a child can’t make meaning of these sounds through the background noise, he or she won’t develop the linguistic resources needed when reading instruction begins.”
In the study, EEG electrodes were placed on children’s scalps; this allowed the researchers to assess how the brain reacted to consonant sounds. In the right ear, the young study participants heard the sound ‘da’ superimposed over the babble of six talkers. In the left ear, they heard the soundtrack of the movie of their choice, which was shown to keep them still.
“Every time the brain responds to sound it gives off electricity, so we can capture how the brain pulls speech out of the noise,” Kraus said. “We can see with extreme granularity how well the brain extracts each meaningful detail in speech.”
The researchers captured three different aspects of the brain’s response to sound: the stability with which the circuits were responding; the speed with which the circuits were firing; and the quality with which the circuits represented the timbre of the sound.
Using these three pieces of information, they developed a statistical model to predict children’s performance on key early literacy tests.
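The paper’s actual modeling pipeline is not described in this article, but the general approach — deriving a few scalar measures from each child’s neural response and regressing literacy scores on them — can be sketched as follows. This is a minimal illustration using simulated data; the feature names, weights, and noise levels are hypothetical, not the study’s data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulation: three EEG-derived measures per child
# (response stability, neural timing, timbre representation) plus a
# literacy score that depends on them with added noise. None of these
# numbers come from the study.
n_children = 112
stability = rng.normal(0.0, 1.0, n_children)
timing = rng.normal(0.0, 1.0, n_children)
timbre = rng.normal(0.0, 1.0, n_children)
literacy = (0.5 * stability + 0.3 * timing + 0.2 * timbre
            + rng.normal(0.0, 0.3, n_children))

# Ordinary least squares: predict the literacy score from the three
# neural measures (plus an intercept column).
X = np.column_stack([np.ones(n_children), stability, timing, timbre])
coef, *_ = np.linalg.lstsq(X, literacy, rcond=None)
predicted = X @ coef

# R^2: the share of variance in the literacy score that the three
# neural measures jointly explain.
ss_res = np.sum((literacy - predicted) ** 2)
ss_tot = np.sum((literacy - np.mean(literacy)) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(f"R^2 of the illustrative model: {r_squared:.2f}")
```

In a real analysis the fitted model would then be evaluated on held-out children — as the study did by testing whether the preschool model also predicted outcomes a year later and in school-aged children.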
In a series of experiments with 112 children between the ages of 3 and 14, Kraus’ team found that their 30-minute neurophysiological assessment predicts with high accuracy how a 3-year-old child will perform on multiple pre-reading tests and how, a year later at age 4, he or she will perform across multiple language skills important for reading.
The model also proved its breadth by accurately predicting reading ability in school-aged children, as well as whether they had been diagnosed with a learning disability.
“The importance of our biological approach is that we can see how the brain makes sense of sound and its impact for literacy, in any child,” Kraus said. “It’s unprecedented to have a uniform biological metric we can apply across ages.”
Other Northwestern co-authors include Travis White-Schwoch, Kali Woodruff Carr, Elaine C. Thompson, Samira Anderson, Trent Nicol, Ann R. Bradlow, and Steven G. Zecker, all of the Auditory Neuroscience Laboratory and department of communication sciences at Northwestern.
The team will continue to follow these children in its “Biotots” project as they progress through school.
About this psychology research
Background noise disrupts brain mechanisms involved in literacy development
One of the first studies to establish brain-behavior links in pre-readers
Results provide ‘a biological looking glass into a child’s future literacy’
New way to identify which children are candidates for reading interventions
Funding: The research was funded by the NIH.
Source: Julie Deardorff – Northwestern University
Image Credit: Image credited to Auditory Neuroscience Laboratory at Northwestern University
Original Research: Full open access research for “Auditory Processing in Noise: A Preschool Biomarker for Literacy” by Travis White-Schwoch, Kali Woodruff Carr, Elaine C. Thompson, Samira Anderson, Trent Nicol, Ann R. Bradlow, Steven G. Zecker, and Nina Kraus in PLOS Biology. Published online July 14 2015 doi:10.1371/journal.pbio.1002196
Auditory Processing in Noise: A Preschool Biomarker for Literacy
Learning to read is a fundamental developmental milestone, and achieving reading competency has lifelong consequences. Although literacy development proceeds smoothly for many children, a subset struggle with this learning process, creating a need to identify reliable biomarkers of a child’s future literacy that could facilitate early diagnosis and access to crucial early interventions. Neural markers of reading skills have been identified in school-aged children and adults; many pertain to the precision of information processing in noise, but it is unknown whether these markers are present in pre-reading children. Here, in a series of experiments in 112 children (ages 3–14 y), we show brain–behavior relationships between the integrity of the neural coding of speech in noise and phonology. We harness these findings into a predictive model of preliteracy, revealing that a 30-min neurophysiological assessment predicts performance on multiple pre-reading tests and, one year later, predicts preschoolers’ performance across multiple domains of emergent literacy. This same neural coding model predicts literacy and diagnosis of a learning disability in school-aged children. These findings offer new insight into the biological constraints on preliteracy during early childhood, suggesting that neural processing of consonants in noise is fundamental for language and reading development. Pragmatically, these findings open doors to early identification of children at risk for language learning problems; this early identification may in turn facilitate access to early interventions that could prevent a life spent struggling to read.