Mom’s Voice Activates Many Different Regions in Children’s Brains

Summary: A new study reports that several different areas of children’s brains are activated when they hear their mother’s voice. This brain response predicts a child’s social communication ability.

Source: Stanford.

A far wider swath of brain areas is activated when children hear their mothers than when they hear other voices, and this brain response predicts a child’s social communication ability, a new study finds.

Children’s brains are far more engaged by their mother’s voice than by voices of women they do not know, a new study from the Stanford University School of Medicine has found.

Brain regions that respond more strongly to the mother’s voice extend beyond auditory areas to include those involved in emotion and reward processing, social functions, detection of what is personally relevant and face recognition.

The study, which is the first to evaluate brain scans of children listening to their mothers’ voices, was published online May 16 in the Proceedings of the National Academy of Sciences. The strength of connections between the brain regions activated by the voice of a child’s own mother predicted that child’s social communication abilities, the study also found.

“Many of our social, language and emotional processes are learned by listening to our mom’s voice,” said lead author Daniel Abrams, PhD, instructor in psychiatry and behavioral sciences. “But surprisingly little is known about how the brain organizes itself around this very important sound source. We didn’t realize that a mother’s voice would have such quick access to so many different brain systems.”

Preference for mom’s voice

Decades of research have shown that children prefer their mothers’ voices: In one classic study, 1-day-old babies sucked harder on a pacifier when they heard the sound of their mom’s voice, as opposed to the voices of other women. However, the mechanism behind this preference had never been defined.

“Nobody had really looked at the brain circuits that might be engaged,” senior author Vinod Menon, PhD, professor of psychiatry and behavioral sciences, said. “We wanted to know: Is it just auditory and voice-selective areas that respond differently, or is it more broad in terms of engagement, emotional reactivity and detection of salient stimuli?”

The study examined 24 children ages 7 to 12. All had IQs of at least 80, none had any developmental disorders, and all were being raised by their biological mothers. Parents answered a standard questionnaire about their child’s ability to interact and relate with others. And before the brain scans, each child’s mother was recorded saying three nonsense words.

“In this age range, where most children have good language skills, we didn’t want to use words that had meaning because that would have engaged a whole different set of circuitry in the brain,” said Menon, who is the Rachael L. and Walter F. Nichols, MD, Professor.

Two mothers whose children were not being studied, and who had never met any of the children in the study, were also recorded saying the three nonsense words. These recordings were used as controls.

MRI scanning

The children’s brains were scanned via magnetic resonance imaging while they listened to short clips of the nonsense-word recordings, some produced by their own mother and some by the control mothers. Even from very short clips, less than a second long, the children could identify their own mothers’ voices with greater than 97 percent accuracy.

The brain regions that were more engaged by the voices of the children’s own mothers than by the control voices included auditory regions, such as the primary auditory cortex; regions of the brain that handle emotions, such as the amygdala; brain regions that detect and assign value to rewarding stimuli, such as the mesolimbic reward pathway and medial prefrontal cortex; regions that process information about the self, including the default mode network; and areas involved in perceiving and processing the sight of faces.

“The extent of the regions that were engaged was really quite surprising,” Menon said.

“We know that hearing mother’s voice can be an important source of emotional comfort to children,” Abrams added. “Here, we’re showing the biological circuitry underlying that.”

Brain activity in response to mother’s voice. Compared to female control voices, mother’s voice elicits greater activity in auditory brain structures in the midbrain and superior temporal cortex (Upper Left), including the bilateral IC and primary auditory cortex (mHG) and a wide extent of voice-selective STG (Upper Center) and STS. Mother’s voice also elicited greater activity in occipital cortex, including fusiform gyrus (FG) (Lower Left), and in heteromodal brain regions serving affective functions, anchored in the amygdala (Upper Right), core structures of the mesolimbic reward system, including NAc, OFC, and vmPFC (Lower Center), and structures of the salience network, including the AI and dACC (Lower Right). No voxels showed greater activity in response to female control voices compared to mother’s voice. NeuroscienceNews image is credited to Abrams et al./PNAS.

Children whose brains showed a stronger degree of connection between all these regions when hearing their mom’s voice also had the strongest social communication ability, suggesting that increased brain connectivity between the regions is a neural fingerprint for greater social communication abilities in children.

‘An important new template’

“This is an important new template for investigating social communication deficits in children with disorders such as autism,” Menon said. His team plans to conduct similar studies in children with autism, and is also in the process of investigating how adolescents respond to their mother’s voice to see whether the brain responses change as people mature into adulthood.

“Voice is one of the most important social communication cues,” Menon said. “It’s exciting to see that the echo of one’s mother’s voice lives on in so many brain systems.”

About this neuroscience research article

Other Stanford authors of the study are Tianwen Chen, research associate; clinical research coordinators Paola Odriozola, Katherine Cheng and Amanda Baker; Aarthi Padmanabhan, PhD, postdoctoral scholar in psychiatry and behavioral sciences; Srikanth Ryali, PhD, instructor in psychiatry and behavioral sciences; John Kochalka, research assistant; and Carl Feinstein, MD, professor emeritus of psychiatry and behavioral sciences. Menon and Feinstein are members of Stanford’s Child Health Research Institute.

Funding: The study was funded by the National Institutes of Health (grants K01MH102428, K25HD074652, DC011095 and MH084164), as well as by the Singer Foundation and the Simons Foundation. Stanford’s Department of Psychiatry and Behavioral Sciences also supported the work.

Source: Erin Digitale – Stanford
Image Source: This NeuroscienceNews.com image is credited to Abrams et al./PNAS.
Original Research: Full open access research for “Neural circuits underlying mother’s voice perception predict social communication abilities in children” by Daniel A. Abrams, Tianwen Chen, Paola Odriozola, Katherine M. Cheng, Amanda E. Baker, Aarthi Padmanabhan, Srikanth Ryali, John Kochalka, Carl Feinstein, and Vinod Menon in PNAS. Published online May 16, 2016. doi:10.1073/pnas.1602948113

Cite This NeuroscienceNews.com Article

MLA: Stanford. “Mom’s Voice Activates Many Different Regions in Children’s Brains.” NeuroscienceNews. NeuroscienceNews, 17 May 2016. <https://neurosciencenews.com/emotion-brain-area-child-mom-4235/>.

APA: Stanford. (2016, May 17). Mom’s Voice Activates Many Different Regions in Children’s Brains. NeuroscienceNews. Retrieved May 17, 2016 from https://neurosciencenews.com/emotion-brain-area-child-mom-4235/

Chicago: Stanford. “Mom’s Voice Activates Many Different Regions in Children’s Brains.” NeuroscienceNews. https://neurosciencenews.com/emotion-brain-area-child-mom-4235/ (accessed May 17, 2016).


Abstract

Neural circuits underlying mother’s voice perception predict social communication abilities in children

The human voice is a critical social cue, and listeners are extremely sensitive to the voices in their environment. One of the most salient voices in a child’s life is mother’s voice: Infants discriminate their mother’s voice from the first days of life, and this stimulus is associated with guiding emotional and social function during development. Little is known regarding the functional circuits that are selectively engaged in children by biologically salient voices such as mother’s voice or whether this brain activity is related to children’s social communication abilities. We used functional MRI to measure brain activity in 24 healthy children (mean age, 10.2 y) while they attended to brief (<1 s) nonsense words produced by their biological mother and two female control voices and explored relationships between speech-evoked neural activity and social function. Compared to female control voices, mother’s voice elicited greater activity in primary auditory regions in the midbrain and cortex; voice-selective superior temporal sulcus (STS); the amygdala, which is crucial for processing of affect; nucleus accumbens and orbitofrontal cortex of the reward circuit; anterior insula and cingulate of the salience network; and a subregion of fusiform gyrus associated with face perception. The strength of brain connectivity between voice-selective STS and reward, affective, salience, memory, and face-processing regions during mother’s voice perception predicted social communication skills. Our findings provide a novel neurobiological template for investigation of typical social development as well as clinical disorders, such as autism, in which perception of biologically and socially salient voices may be impaired.

