How Do Children Hear Anger?

Summary: Researchers use neuroimaging technology and acoustical analysis to better understand how kids process emotion in speech.

Source: Acoustical Society of America.

Researchers pair acoustical analysis with brain mapping to understand how children process emotion in speech and how it might influence their development.

Even if they don’t understand the words, infants react to the way their mother speaks and the emotions conveyed through her speech. Exactly what they react to, and how, has yet to be fully deciphered, but it could have a significant impact on a child’s development. Researchers in acoustics and psychology teamed up to better define and study this impact.

Peter Moriarty, a graduate researcher at Pennsylvania State University, will present the results of these studies, conducted with Michelle Vigeant, professor of acoustics and architectural engineering, and Pamela Cole, professor of psychology, at the joint meeting of the Acoustical Society of America and the Acoustical Society of Japan, held Nov. 28-Dec. 2 in Honolulu, Hawaii.

The team used functional magnetic resonance imaging (fMRI) to capture real-time information about the brain activity of children while they listened to samples of their mothers’ voices with different affects, or non-verbal emotional cues. Acoustic analysis of the voice samples was performed in conjunction with the fMRI data to correlate brain activity with quantifiable acoustical characteristics.

“We’re using acoustic analysis and fMRI to look at the interaction and specifically how the child’s brain responds to specific acoustic cues in their mother’s speech,” Moriarty said. Children in the study heard 15-second voice samples of the same words or sentences, each conveying anger, happiness, or a neutral affect for control purposes. The emotional affects were defined and predicted quantitatively by a set of acoustic parameters.

“Most of these acoustic parameters are fairly well established,” Moriarty said. “We’re talking about things like the pitch of speech as a function of time … They have been used in hundreds of studies.” In a more general sense, they are looking at what’s called prosody, or the intonations of voice.
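For readers curious what such a measurement looks like in practice, the short Python sketch below tracks pitch as a function of time for a single voice sample and reports simple summary statistics. The file name and the librosa-based approach are illustrative assumptions, not the research team’s actual pipeline.

```python
import librosa
import numpy as np

# Load a (hypothetical) 15-second voice sample.
y, sr = librosa.load("mother_voice_sample.wav", sr=None)

# Estimate the fundamental frequency (pitch) frame by frame.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)

# Simple summary statistics over the voiced frames -- the kind of
# quantitative descriptors that prosody studies commonly report.
voiced_f0 = f0[voiced_flag]
print("mean pitch (Hz):", np.nanmean(voiced_f0))
print("pitch range (Hz):", np.nanmax(voiced_f0) - np.nanmin(voiced_f0))
```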

However, there are many acoustic parameters relevant to speech. Understanding patterns within various sets of these parameters, and how they relate to emotion and emotional processing, is far from straightforward.

“You can’t just talk to Siri [referring to Apple’s virtual assistant] and Siri knows that you’re angry or not. There’s a very complicated model that you have to produce in order to make these judgements,” Moriarty explained. “The problem is that there’s a very complicated interaction between these acoustic parameters and the type of emotion … and the negativity or positivity we’d associate with some of these emotions.”

This work is a pilot study, an early stage of a larger project called the Processing of the Emotional Environment Project (PEEP). In this early stage, the team is looking for the best set of variables to predict these emotions, as well as the effects these emotions have on processes in the brain. “[We want] an acoustic number or numbers doing a good job at predicting that we’re saying, ‘yes, we can say quantitatively that this was angry or this was happy,’” Vigeant said.
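As a rough illustration of what a set of acoustic numbers predicting an emotion category might look like, the sketch below fits a toy classifier to a handful of made-up prosodic feature vectors. It is not the PEEP model; the feature names and values are invented for demonstration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is one voice sample described by a few invented prosodic features:
# [mean pitch (Hz), pitch range (Hz), loudness variability (dB), speaking rate (syll/s)]
X = np.array([
    [220.0, 140.0, 8.5, 5.1],   # labelled angry
    [250.0, 180.0, 6.0, 4.8],   # labelled happy
    [200.0,  60.0, 3.0, 4.0],   # labelled neutral
    [230.0, 150.0, 9.0, 5.3],   # labelled angry
    [255.0, 170.0, 6.5, 4.9],   # labelled happy
    [195.0,  55.0, 2.8, 3.9],   # labelled neutral
])
y = np.array(["angry", "happy", "neutral", "angry", "happy", "neutral"])

# A simple multinomial logistic regression stands in for the "very
# complicated model" mentioned above; a real study would need far more
# data, features, and validation.
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Predict the affect category of a new, unseen feature vector.
print(clf.predict([[245.0, 165.0, 6.2, 4.7]]))
```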

In the work to be presented, the team has demonstrated the importance of looking at lower frequency characteristics in voice spectra; the patterns that appear over many seconds of speech or the voice sample as a whole. These patterns, they report, may play a significant role in understanding the resulting brain activity and differentiating the information relevant to emotional processing.
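One plausible way to quantify such slow, sample-wide patterns is to examine the spectrum of the amplitude envelope below a few hertz, as in the sketch below. This is an assumption about the kind of analysis involved, not the method being presented, and the file name is again hypothetical.

```python
import numpy as np
import librosa

y, sr = librosa.load("mother_voice_sample.wav", sr=None)

# Short-time RMS energy gives a slowly varying amplitude envelope.
hop = 512
env = librosa.feature.rms(y=y, hop_length=hop)[0]
env_sr = sr / hop  # sampling rate of the envelope, in Hz

# Spectrum of the envelope; bins below ~4 Hz describe patterns that
# unfold over seconds of speech rather than within single syllables.
spectrum = np.abs(np.fft.rfft(env - env.mean()))
freqs = np.fft.rfftfreq(env.size, d=1.0 / env_sr)
low = freqs < 4.0
print("fraction of envelope energy below 4 Hz:", spectrum[low].sum() / spectrum.sum())
```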

With effective predictors and fMRI analysis of effects on the brain, the ultimate goal of PEEP is to learn how a toddler who has not yet developed language processes emotion through prosody, and how the environment affects their development. “A long-term goal is really to understand prosodic processing, because that is what young children are responding to before they can actually process and integrate the verbal content,” Cole said.


Toddlers, however, are harder to image in an fMRI scanner, which requires them to remain mostly motionless for long periods of time. For now, the team is studying older children, aged 6-10, though even they present some challenges with wriggling.

“We’re essentially trying to validate this type of procedure and look at whether or not we’re able to get meaningful results out of studying children that are so young. This really hasn’t been done at this age group in the past and that’s largely due to the difficulty of having children remain somewhat immobile in the scanner.”

About this psychology research article

Source: Acoustical Society of America
Original Research: The findings will be presented at The 172nd Meeting of the Acoustical Society of America between Nov. 28-Dec. 2, 2016 in Honolulu, Hawaii. Presentation 4aAA11, “Low frequency analysis of acoustical parameters of emotional speech for use with functional magnetic resonance imaging,” by Peter M. Moriarty is at 11:15 a.m. HAST, Dec. 1, 2016 in Room Lehua.

