
How Our Brains Process Music

Summary: Researchers unlocked how the brain processes melodies, creating a detailed map of auditory cortex activity. Their study reveals that the brain engages in dual tasks when hearing music: tracking pitch with neurons used for speech and predicting future notes with music-specific neurons.

This breakthrough clarifies the longstanding mystery of melody perception, demonstrating that some neural processes for music and speech are shared, while others are uniquely musical. The discovery enhances our understanding of the brain’s complex response to music and opens avenues for exploring music’s emotional and therapeutic impacts.

Key Facts:

  1. Dual Processing for Music: The brain tracks melody pitch using neurons also involved in speech processing and employs a unique set of neurons for predicting musical notes.
  2. Unique Neurons for Music: For the first time, researchers identified neurons specifically dedicated to anticipating the sequence of melodic notes, separate from speech processing.
  3. Shared and Unique Neural Pathways: The study illustrates that while music and speech share some neural pathways for processing pitch, music also activates distinct neural mechanisms for predicting melody sequences.

Source: UCSF

Music has been central to human cultures for tens of thousands of years, but how our brains perceive it has long been shrouded in mystery.

Now, researchers at UC San Francisco have developed a precise map of what is happening in the cerebral cortex when someone hears a melody.

The cortex, it turns out, is doing two things at once: following the pitch of a note, using two sets of neurons that also follow the pitch of speech, and trying to predict which notes will come next, using a set of neurons that are specific to music.

The study, published Feb. 16 in Science Advances, resolves long-standing questions about how melody is processed in the brain’s auditory cortex.

“We found that some of how we understand a melody is entwined with how we understand speech, while other important aspects of music stand alone,” said Edward Chang, MD, chair of neurosurgery and a member of the Weill Institute for Neurosciences at UCSF.

Predicting the next note

The first two groups of neurons turned out to be the same ones that Chang identified in a 2017 study of how we process the changes in vocal pitch that lend meaning and emotion to speech.

The third group of neurons, however, is solely devoted to predicting melodic notes and is described here for the first time.

Chang’s team knew that something similar happens in speech: specialized neurons in the auditory cortex anticipate the next speech sound, or phoneme, based on what the brain has already learned about words and their context, much like the word-prediction function of a cell phone.
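The cell-phone word-prediction analogy can be made concrete with a toy next-note predictor built from bigram counts. This is a minimal sketch for intuition only; the melodies, MIDI note numbers, and function names are invented here, and the brain's actual predictive machinery is far richer than a lookup table:

```python
from collections import Counter, defaultdict

def train_bigram(melodies):
    """Count how often each note is followed by each other note."""
    counts = defaultdict(Counter)
    for melody in melodies:
        for prev, nxt in zip(melody, melody[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, prev):
    """Return the most frequently observed follower of `prev`, or None."""
    followers = counts.get(prev)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# Toy training melodies (MIDI note numbers: 60 = middle C).
melodies = [[60, 62, 64, 65, 64, 62, 60],
            [60, 62, 64, 65, 67]]
model = train_bigram(melodies)
print(predict_next(model, 62))  # 64: the most common note after D4 in training
```

After hearing a D (62), the model expects an E (64), because that transition occurred most often in its "experience"; an unfamiliar note yields no prediction at all, loosely mirroring the surprise of an unexpected note.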

The researchers hypothesized that a similar group of neurons must exist for predicting melody.

Chang’s team tested this on eight participants who volunteered for research studies during their surgical workup for epilepsy. The team recorded direct brain activity from the auditory cortex while the participants listened to a variety of melodic phrases from Western music.

Then, they listened to sentences spoken in English.

The hypothesis proved correct. The recordings showed that the participants’ brains were using the same neurons to assess the qualities of pitch in both speech and music, but that each of these modes had specific neurons devoted to prediction.

In other words, the auditory cortex wasn’t just looking for notes. It also had a specialized set of neurons that was trying to predict which notes would come next, using what it already knew of melodic patterns.

“When we’re listening to music, two things are happening simultaneously,” Chang explained. “There’s a low-level processing of the individual notes of the melody, and then this high-level, abstract processing of the context of these notes.”

This makes sense because our brains evolved to anticipate upcoming information, said Narayan Sankaran, Ph.D., a postdoctoral scholar in the Chang Lab, who led the work. Listening to a melody can sway our emotions because the auditory neurons that process music are in conversation with emotional centers in the brain.

“Composers talk about musical tension and resolution,” Sankaran said. “Our ability to expect and anticipate these features of music explains how it can set an upbeat tone or bring us to tears.”

But much remains to be learned about those connections.

“It’s obvious that exposure to music enriches our social, emotional and intellectual lives and has potential to treat a broad range of conditions,” Sankaran said. “To understand why music is able to confer all these benefits, we need to answer some fundamental questions about how music works in the brain.”

About this music and neuroscience research news

Author: Robin Marks
Source: UCSF
Contact: Robin Marks – UCSF
Image: The image is credited to Neuroscience News

Original Research: Open access.
“Encoding of melody in the human auditory cortex” by Narayan Sankaran et al. Science Advances


Abstract

Encoding of melody in the human auditory cortex

Melody is a core component of music in which discrete pitches are serially arranged to convey emotion and meaning. Perception varies along several pitch-based dimensions: (i) the absolute pitch of notes, (ii) the difference in pitch between successive notes, and (iii) the statistical expectation of each note given prior context.

How the brain represents these dimensions and whether their encoding is specialized for music remains unknown. We recorded high-density neurophysiological activity directly from the human auditory cortex while participants listened to Western musical phrases.

Pitch, pitch-change, and expectation were selectively encoded at different cortical sites, indicating a spatial map for representing distinct melodic dimensions. The same participants listened to spoken English, and we compared responses to music and speech.

Cortical sites selective for music encoded expectation, while sites that encoded pitch and pitch-change in music used the same neural code to represent equivalent properties of speech.

Findings reveal how the perception of melody recruits both music-specific and general-purpose sound representations.
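The three melodic dimensions the abstract defines can be computed for a toy note sequence, with surprisal (negative log probability) standing in for statistical expectation. This is an illustrative sketch only; the transition counts, function name, and note values are invented and are not the authors' analysis pipeline:

```python
import math

def melodic_dimensions(notes, transition_counts):
    """For each note, return (absolute pitch, pitch-change from the
    previous note, surprisal in bits given the previous note)."""
    rows = []
    prev = None
    for n in notes:
        change = None if prev is None else n - prev
        if prev is None or prev not in transition_counts:
            surprisal = None
        else:
            followers = transition_counts[prev]
            p = followers.get(n, 0) / sum(followers.values())
            surprisal = None if p == 0 else -math.log2(p)
        rows.append((n, change, surprisal))
        prev = n
    return rows

# Toy transition counts and a short phrase (MIDI note numbers).
counts = {60: {62: 2}, 62: {64: 2, 60: 1}, 64: {65: 2, 62: 1}}
rows = melodic_dimensions([60, 62, 64], counts)
print(rows)
```

In this toy run, the second note has zero surprisal (it was the only note ever observed after C), while the third carries some surprisal because an alternative continuation existed, echoing the paper's distinction between a note's pitch and its expectedness in context.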
