Speech or Song? Identifying How the Brain Perceives Music

Summary: New research explores the different ways in which the brain distinguishes between music and speech.

Source: Cognitive Neuroscience Society

Most neuroscientists who study music have something in common: they play a musical instrument, in many cases from a young age. Their drive to understand how the brain perceives and is shaped by music springs from a deep love of music.

This passion has translated to a wealth of discoveries about music in the brain, including recent work that identifies the ways in which the brain distinguishes between music and speech, as will be presented today at the annual meeting of the Cognitive Neuroscience Society (CNS) in San Francisco. 

“Over the past two decades, many excellent studies have shown similar mechanisms between speech and music across many levels,” says Andrew Chang of New York University, a lifelong violinist, who organized a symposium on music and speech perception at the CNS meeting.

“However, a fundamental question, often overlooked, is what makes the brain treat music and speech signals differently, and why humans need two distinct auditory signals.”

New work, enabled in part by computational advances, points toward differences in pitch and rhythm as key factors that enable people, starting in infancy, to distinguish speech from music, and toward the brain’s predictive capabilities as an underpinning of both speech and music perception.

Exploring acoustical perception in infants

From a young age, cognitive neuroscientist Christina Vanden Bosch der Nederlanden of the University of Toronto Mississauga has been singing and playing the cello, pursuits that have helped to shape her research career.

“I remember sitting in the middle of the cello section and we were playing some particularly beautiful music – one where the whole cello section had the melody,” she says, “and I remember having this emotional response and wondering ‘how is it possible that I can have such a strong emotional response from the vibrations of my strings traveling to my ear? That seems wild!’” 

That experience started der Nederlanden on a long journey of wanting to understand how the brain processes music and speech in early development. Specifically, she and colleagues are investigating whether babies, who are learning about communicative sounds through experience, even know the difference between speech and song. 

“These are seemingly simple questions that actually have a lot of theoretical importance for how we learn to communicate,” she says.

“We know that from age 4, children can and readily do explicitly differentiate between music and language. Although that seems pretty obvious, there has been little to no data asking children to make these sorts of distinctions.”

At the CNS meeting, der Nederlanden will be presenting new data, collected right before and during the COVID-19 pandemic, about the acoustic features that shape music and language during development. In one experiment, 4-month-old infants heard speech and song, delivered both in a sing-songy infant-directed manner and in a monotone speaking voice, while the researchers recorded the infants’ electrical brain activity with electroencephalography (EEG).

“This work novelly suggests that infants are better at tracking infant-directed utterances when they’re spoken compared to sung, and this is different from what we see in adults, who are better at neurally tracking sung compared to spoken utterances,” she says.

They also found that pitch and rhythm each affected brain activity differently for speech than for song; for example, exaggerated pitch was related to better neural tracking of infant-directed speech, identifying the lack of “pitch stability” as an important acoustic feature for guiding attention in babies.
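
Neural tracking of this sort is often quantified by relating the brain signal to the slow amplitude envelope of what the listener hears. The Python sketch below is only a minimal illustration of that general idea, not the authors’ analysis pipeline: it uses synthetic data, a single simulated EEG channel, and a simple lagged correlation, and all sampling rates, lags, and variable names are illustrative assumptions.

```python
import numpy as np
from scipy.signal import hilbert

# --- Minimal sketch of stimulus-envelope "neural tracking" ---
# Assumes a single EEG channel and a mono stimulus already at the same
# sampling rate. Real analyses (e.g., temporal response functions) are
# far more involved; everything here is synthetic and illustrative.

fs = 128                                   # shared sampling rate in Hz (assumed)
rng = np.random.default_rng(0)

# Synthetic stimulus: a 20 Hz tone modulated at ~3 Hz, standing in for the
# syllable- or note-rate fluctuations of speech and song.
t = np.arange(0, 30, 1 / fs)               # 30 seconds of signal
modulation = 0.5 * (1 + np.sin(2 * np.pi * 3 * t))
stimulus = modulation * np.sin(2 * np.pi * 20 * t)

# Amplitude envelope via the Hilbert transform.
envelope = np.abs(hilbert(stimulus))

# Synthetic "EEG": a delayed, noisy copy of the envelope (100 ms lag assumed).
delay = int(0.1 * fs)
eeg = np.roll(envelope, delay) + rng.normal(0.0, 0.5, envelope.size)

def tracking_score(eeg, envelope, fs, max_lag_s=0.3):
    """Peak correlation between EEG and stimulus envelope across candidate lags."""
    scores = []
    for lag in range(int(max_lag_s * fs)):
        brain = eeg[lag:]
        env = envelope[:envelope.size - lag] if lag else envelope
        scores.append(np.corrcoef(brain, env)[0, 1])
    best = int(np.argmax(scores))
    return scores[best], best / fs

r, lag_s = tracking_score(eeg, envelope, fs)
print(f"peak envelope-tracking correlation r = {r:.2f} at lag {lag_s * 1000:.0f} ms")
```

In practice, researchers use multichannel EEG and regression-based methods such as temporal response functions, but the core intuition, how faithfully the brain signal follows the stimulus envelope, is the same.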

While the exaggerated, unstable pitch contours of infant-directed speech have been well established as a feature infants love, this new research shows they also help to signal whether someone is hearing speech or song.

Pitch stability is a feature, der Nederlanden says, that “might signal to a listener ‘oh this sounds like someone singing,’” while a lack of pitch stability can conversely signal to infants that they are hearing speech rather than someone playing with sounds in song.

In an online experiment, der Nederlanden and colleagues asked kids and adults to qualitatively describe how music and language are different.

“This gave me a rich dataset that tells me a lot about how people think music and language differ acoustically and also in terms of how the functional roles of music and language differ in our everyday lives,” she explains.

“For the acoustic differences, kids and adults described features like tempo, pitch, and rhythm as important for differentiating speech and song.”

In future work, der Nederlanden hopes to move toward more naturalistic settings, including using mobile EEG to test music and language processing outside of the lab.

“I think the girl sitting in the orchestra pit, geeking out about music and emotion, would be pretty excited to find out that she’s still asking questions about music and finding results that could have answered her questions from over 20 years ago!”

Identifying the predictive code of music

Guilhem Marion of Ecole Normale Supérieure has two passions that drive his research: music and computer science. He has combined those interests to create novel computational models of music that are helping researchers understand how the brain perceives music through “predictive coding,” similar to how people predict patterns in language.

“Predictive coding theory explains how the brain tries to predict the next note while listening to music, which is exactly what computational models of music do when generating new music,” he explains. Marion is using those models to better understand how culture affects music perception, by pulling in knowledge based on a listener’s individual musical environment.

In new work conducted with Giovanni Di Liberto and colleagues, Marion recorded the EEG activity of 21 professional musicians who were either listening to or imagining four Bach chorale pieces.

In one study, they were able to identify the amount of surprise for each note, using a computational model based on a large database of Western music. This surprise was a “cultural marker of music processing,” Marion says, showing how closely the notes were predicted based on a person’s native musical environment. 
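
To make the notion of per-note “surprise” concrete, the toy sketch below scores each note of a melody by its surprisal under a simple bigram (Markov) model trained on a tiny invented corpus, then correlates those scores with simulated note-locked EEG amplitudes. The actual study used a far richer statistical model trained on a large database of Western music, so the corpus, melody, and simulated EEG values here are purely illustrative assumptions.

```python
import math
from collections import defaultdict

import numpy as np

# --- Toy sketch of per-note surprisal under a bigram (Markov) model ---
# Surprisal of a note given its predecessor: -log2 P(note | previous note).
# The miniature corpus below is invented for illustration; the study described
# here used a model trained on a large database of Western music.

corpus = [
    ["C", "D", "E", "F", "G", "F", "E", "D", "C"],
    ["C", "E", "G", "E", "C", "D", "E", "D", "C"],
    ["G", "F", "E", "D", "C", "D", "E", "F", "G"],
]

alphabet = sorted({note for melody in corpus for note in melody})
counts = defaultdict(lambda: defaultdict(int))
for melody in corpus:
    for prev, nxt in zip(melody, melody[1:]):
        counts[prev][nxt] += 1

def surprisal(prev, nxt):
    """-log2 P(nxt | prev), with add-one smoothing over the observed alphabet."""
    total = sum(counts[prev].values()) + len(alphabet)
    return -math.log2((counts[prev][nxt] + 1) / total)

# Surprisal profile of a new melody ("A" never appears in the corpus,
# so the model finds it highly surprising).
melody = ["C", "D", "E", "C", "G", "A", "G", "C"]
surprisals = [surprisal(p, n) for p, n in zip(melody, melody[1:])]

# Simulated note-locked EEG amplitudes that loosely follow the surprise values,
# standing in for the measured responses to heard or imagined notes.
rng = np.random.default_rng(1)
eeg_amplitude = np.array(surprisals) + rng.normal(0.0, 0.3, len(surprisals))

for note, s in zip(melody[1:], surprisals):
    print(f"note {note}: surprisal {s:.2f} bits")
r = np.corrcoef(surprisals, eeg_amplitude)[0, 1]
print(f"correlation between model surprisal and simulated EEG amplitude: r = {r:.2f}")
```

Swapping the bigram model for one trained on a large corpus, and the simulated amplitudes for real note-locked EEG responses, gives the kind of surprisal-to-brain comparison described above.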

“Our study showed for the first time the average EEG response to imagined musical notes and showed that they were correlated with the musical surprise computed using a statistical model of music,” Marion says.

“This work has broad implications in music cognition but more generally in cognitive neuroscience, as it will illuminate the way the human brain learns new languages or other structures that will later shape its perception of the world.”

Chang says that such computational work is enabling a new type of music cognition study that balances good experimental control with ecological validity, a balance that is hard to strike given the complexity of music and speech sounds.

“You often either make the sounds unnatural, if everything is well controlled for your experimental purpose, or preserve the natural properties of speech or music, but then it becomes difficult to fairly compare the sounds between experimental conditions,” he explains.

“Marion and Di Liberto’s groundbreaking approach enables researchers to investigate, and even isolate, neural activity while a person listens to a continuous, natural speech or music recording.”

Chang, who has been playing violin since he was 8 years old, is excited to see the progress that has been made in music cognition studies just in the last decade. “When I started my PhD in 2013, only a few labs in the world were focusing on music,” he says.

“But now there are many excellent junior and even well-established senior researchers from other fields, such as speech, around the globe who are getting involved in, or even devoting themselves to, music cognitive neuroscience research.”

Understanding the relationship between music and language “can help us explore the fundamental questions of human cognition, such as why humans need music and speech, and how humans communicate and interact with each other via these forms,” Chang says.

“Also, these findings are the basis for potential applications in clinical and child-development domains, such as whether music can be used as an alternative form of verbal communication for individuals with aphasia, and how music facilitates infants’ learning of speech.”

About this music and neuroscience research news

Author: Lisa M.P. Munoz
Source: Cognitive Neuroscience Society
Contact: Lisa M.P. Munoz – Cognitive Neuroscience Society

Original Research: The findings will be presented at the Cognitive Neuroscience Society 29th Annual Meeting
