Why Certain Types of Music Make Our Brains Sing, and Others Don’t

Summary: Music can induce a range of emotions and help us to better understand different cultures. But what is it that makes us tune in to some songs more than others? Researchers say when we listen to a song, our brains predict what happens next, and that prediction dictates whether we like that song or not.

Source: The Conversation

A few years ago, Spotify published an online interactive map of musical tastes, sorted by city. At the time, Jeanne Added prevailed in Paris and Nantes, and London was partial to local hip hop duo Krept and Konan. It is well established that music tastes vary over time, by region and even by social group.

However, most brains look alike at birth, so what happens inside them that leaves us with such disparate music tastes?

Emotions – a story of prediction

If you were presented with an unfamiliar melody that suddenly stopped, you would probably be able to sing the note you think fits best. At least, professional musicians could! In a study published in the Journal of Neuroscience in September 2021, we showed that similar prediction mechanisms take place in the brain every time we listen to music, without us necessarily being conscious of them.

Those predictions are generated in the auditory cortex and compared with the note that is actually heard, resulting in a “prediction error”. We used this prediction error as a sort of neural score to measure how well the brain could predict the next note in a melody.
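
To make the idea concrete, here is a minimal sketch (not the model used in the study) of how a prediction error can be quantified: each note’s surprisal, -log2 P(note | previous note), under a simple first-order Markov model. All melodies and probabilities below are invented for illustration.

```python
import math
from collections import Counter, defaultdict

def train_transitions(melodies):
    """Count note-to-note transitions across a corpus of melodies."""
    counts = defaultdict(Counter)
    for melody in melodies:
        for prev, nxt in zip(melody, melody[1:]):
            counts[prev][nxt] += 1
    return counts

def surprisal(counts, prev, nxt, alpha=1.0, vocab_size=12):
    """-log2 P(nxt | prev), with add-alpha smoothing so unseen
    transitions get a small but nonzero probability."""
    total = sum(counts[prev].values()) + alpha * vocab_size
    p = (counts[prev][nxt] + alpha) / total
    return -math.log2(p)

# Toy corpus: melodies as pitch classes (0 = C, 2 = D, 4 = E, ...).
corpus = [[0, 2, 4, 5, 7], [0, 4, 7, 4, 0], [7, 5, 4, 2, 0]]
model = train_transitions(corpus)

melody = [0, 2, 4, 11]  # the jump to 11 (B) never occurs in this corpus
for prev, nxt in zip(melody, melody[1:]):
    print(f"{prev} -> {nxt}: surprisal = {surprisal(model, prev, nxt):.2f} bits")
```

The rare transition at the end yields a higher surprisal than the familiar ones, which is the sense in which a note can be “less well predicted”.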

Back in 1956, the US composer and musicologist Leonard Meyer theorised that music induces emotion through a sense of satisfaction or frustration derived from the listener’s expectations. Since then, academic advances have helped identify a link between musical expectations and other more complex feelings.

For instance, participants in one study were able to memorize tone sequences much better if they could first accurately predict the notes within.

Now, basic emotions (e.g., joy, sadness or annoyance) can be broken down into two fundamental dimensions, valence and psychological activation, which measure, respectively, how positive an emotion is (e.g., sadness versus joy) and how exciting it is (boredom versus anger). Combining the two helps us define these basic emotions.
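
As a toy illustration of how the two dimensions combine, the snippet below places a few basic emotions in that valence/activation space. The coordinates are invented for the example, not taken from the studies.

```python
# Hypothetical coordinates on a -1..1 scale; for illustration only.
emotions = {
    "joy":     {"valence":  0.8, "activation":  0.7},
    "anger":   {"valence": -0.7, "activation":  0.8},
    "sadness": {"valence": -0.8, "activation": -0.5},
    "boredom": {"valence": -0.3, "activation": -0.9},
}

def quadrant(e):
    """Describe an emotion by which side of each axis it falls on."""
    v = "positive" if e["valence"] >= 0 else "negative"
    a = "high" if e["activation"] >= 0 else "low"
    return f"{v} valence, {a} activation"

for name, coords in emotions.items():
    print(f"{name:8s} -> {quadrant(coords)}")
```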

Two studies from 2013 and 2018 showed that when participants were asked to rate these two dimensions on a sliding scale, there was a clear relationship between prediction error and emotion. For instance, in those studies, notes that were less accurately predicted led to emotions with greater psychological activation.

Throughout the history of cognitive neuroscience, pleasure has often been linked to the reward system, particularly with regard to learning processes. Studies have shown that there are particular dopaminergic neurons that react to prediction error.

Among other functions, this process enables us to learn about and predict the world around us. It is not yet clear whether pleasure drives learning or vice versa, but the two processes are undoubtedly connected. This also applies to music.

When we listen to music, the greatest amount of pleasure stems from events predicted with only a moderate level of accuracy. In other words, overly simple and predictable events – or, indeed, overly complex ones – do not necessarily induce new learning and thus generate only a small amount of pleasure.

Most pleasure comes from the events falling in between – those that are complex enough to arouse interest but consistent enough with our predictions to form a pattern.
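
One simple way to picture this inverted-U relationship (sometimes called a Wundt curve) is to model pleasure as peaking at a moderate prediction error. The functional form and parameters below are assumptions made for illustration, not values from the studies cited.

```python
import math

def pleasure(prediction_error, optimum=2.0, width=1.0):
    """A Gaussian bump: pleasure is maximal at a moderate prediction
    error and falls off for events that are too predictable or too
    surprising."""
    return math.exp(-((prediction_error - optimum) ** 2) / (2 * width ** 2))

for err in [0.0, 1.0, 2.0, 3.0, 4.0, 6.0]:
    bar = "#" * round(20 * pleasure(err))
    print(f"prediction error = {err:.1f}  pleasure = {pleasure(err):.2f}  {bar}")
```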

Predictions dependent on our culture

Nevertheless, our prediction of musical events remains inexorably bound to our musical upbringing. To explore this phenomenon, a group of researchers met with the Sámi people, who inhabit the region stretching from the northernmost reaches of Sweden to the Kola Peninsula in Russia. Their traditional singing, known as yoik, differs vastly from Western tonal music, in part because of the Sámi’s historically limited exposure to Western culture.

For a study published in 2000, musicians from Sámi regions, Finland and the rest of Europe (the latter coming from various countries unfamiliar with yoik singing) were asked to listen to excerpts of yoiks that they had never heard before. They were then asked to sing the next note in the song, which had been intentionally left out.

Interestingly, the spread of data varied greatly between groups; not all participants gave the same response, but certain notes were more prevalent than others within each group.

Those who most accurately predicted the next note in the song were the Sámi musicians, followed by the Finnish musicians, who had had more exposure to Sámi music than those from elsewhere in Europe.

Learning new cultures through passive exposure

This brings us to the question of how we learn about cultures, a process known as enculturation. For example, musical time can be divided in different ways. Western musical traditions generally use quadruple meters such as 4/4 (as often heard in classic rock ‘n’ roll) or triple meters such as 3/4 (as heard in waltzes).

However, other cultures use what Western music theory calls asymmetrical meters. Balkan music, for instance, is known for asymmetrical meters of seven or nine beats, such as 7/8 or 9/8.
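
As a rough illustration, meters can be represented as groupings of short beats: a symmetrical 4/4 bar divides evenly, while Balkan 7/8 and 9/8 group beats unevenly (for example 2+2+3). The sketch below also mimics the “accident” manipulation used in the study described next, removing a beat from one group; the groupings are standard textbook examples, but the code itself is only illustrative.

```python
# Meters as groups of eighth notes.
METERS = {
    "4/4 (symmetrical)":  [2, 2, 2, 2],
    "7/8 (asymmetrical)": [2, 2, 3],
    "9/8 (asymmetrical)": [2, 2, 2, 3],
}

def with_accident(grouping, group_index):
    """Remove one beat from the chosen group, in the spirit of the
    'accidents' used in the infant study described below."""
    altered = list(grouping)
    altered[group_index] -= 1
    return altered

for name, grouping in METERS.items():
    print(f"{name}: groups {grouping} = {sum(grouping)} eighth notes per bar")

print("7/8 with an accident:", with_accident(METERS["7/8 (asymmetrical)"], 2))
```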

To explore these differences, a 2005 study looked at folk melodies with either symmetrical or asymmetrical meters.

In each one, beats were added or removed at a specific moment – something referred to as an “accident” – and then participants of various ages listened to them. Regardless of whether the piece had a symmetrical or asymmetrical meter, infants aged six months or younger listened for the same amount of time.

However, 12-month-olds spent considerably more time watching the screen when the “accidents” were introduced into the symmetrical meters compared to the asymmetrical ones.

We could infer from this that the subjects were more surprised by an accident in a symmetrical meter because they interpreted it as a disruption to a familiar pattern.

To test this hypothesis, the researchers had a CD of Balkan music (with asymmetrical meters) played to the infants in their homes. The experiment was repeated after one week of listening, and this time the infants spent an equal amount of time watching the screen when the accidents were introduced, regardless of whether the meter was symmetrical or asymmetrical.

This means that, through passive listening to the Balkan music, they were able to build an internal representation of the musical meter, which allowed them to predict the pattern and detect accidents in both meter types.
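
Reusing the toy surprisal model sketched earlier, one can caricature this kind of enculturation as statistical learning: after “passive exposure” (adding melodies from an unfamiliar idiom to the training corpus), a transition that was once surprising becomes easier to predict. The melodies are invented for the example.

```python
# Requires train_transitions() and surprisal() from the earlier sketch.
home = [[0, 2, 4, 5, 7], [7, 5, 4, 2, 0]]        # familiar musical culture
foreign = [[0, 1, 3, 4, 6], [6, 4, 3, 1, 0]]     # unfamiliar idiom

model_before = train_transitions(home)
model_after = train_transitions(home + foreign)  # after passive exposure

prev, nxt = 0, 1  # a transition common only in the foreign idiom
print("before exposure:", round(surprisal(model_before, prev, nxt), 2), "bits")
print("after exposure: ", round(surprisal(model_after, prev, nxt), 2), "bits")
```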

A 2010 study found a strikingly similar effect among adults – in this case, not for rhythm but for pitch. These experiments show that passive exposure to music can help us learn the specific musical patterns of a given culture – formally known as the process of enculturation.

Throughout this article, we have seen how passive music listening can change the way we predict musical patterns when presented with a new piece. We have also looked at the myriad ways in which listeners predict such patterns depending on their culture, and how culture shapes perception, leading listeners to feel pleasure and emotion differently. While more research is needed, these studies have opened new avenues toward understanding why there is such diversity in our music tastes.

What we know for now is that our musical culture (that is, the music we have listened to throughout our lives) shapes our perception and leads us to prefer certain pieces over others, whether through their similarity or their contrast to pieces we have already heard.

About this music and neuroscience research news

Author: Guilhem Marion
Source: The Conversation
Contact: Guilhem Marion – The Conversation
Image: The image is in the public domain
