A Fraction of a Second Is All You Need to Feel the Music

Summary: Our perception of musical timing is closely linked to the quality of the sound.

Source: University of Oslo

The brain does not necessarily perceive the sounds in music at the same moment they are played. New research sheds light on musicians’ implicit knowledge of sound and timing.

“It is very important for our overall impression of music that the details are right,” says musicologist Anne Danielsen at the RITMO Center for Interdisciplinary Studies in Rhythm, Time and Motion at the University of Oslo.

Together with her research colleague Guilherme Schmidt Câmara, she is looking for answers to what these details are. They know there are some basic rules relating to sound and timing which most creators of music comply with. Few, however, are aware of what they actually do in order to make it sound right.

“When we talk to musicians and producers, it becomes clear that they simply adjust sounds automatically in order to get the right timing—it’s a form of implicit knowledge,” says Câmara.

In order to make this knowledge more explicit, the researchers have studied the factors that influence when we perceive a sound as happening. They have found a pattern: our perception of timing is closely related to the quality of the sound, whether it is soft or sharp, short or long and wobbly.

When does a sound happen?

Timing the sounds of all the instruments so that the music sounds good is essential, but the different notes are not necessarily played at the moment you hear them.

“Scientists have previously assumed that we perceive the timing at the beginning of a sound but have not reflected critically on what happens when the sounds have different shapes,” says Danielsen.

A sound has a rhythmic center. If you imagine a sound wave, this center is located near the peak of the wave, and it is there, rather than at the beginning, that you perceive the sound to be located in time.

“If the sound is sharp, the beginning coincides with this rhythmic center. With a longer and more wobbly sound, however, we perceive that the center falls long after the sound has actually begun.”
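The article does not give a formal definition of this rhythmic center, but a minimal sketch can illustrate the idea. The Python snippet below, which is only an illustration and not the researchers’ method, builds two synthetic tones, one with a sharp attack and one with a slow “wobbly” attack, and reports how far the energy peak of each sound lags behind its physical onset. All names and parameters here are invented for the example.

# Illustrative sketch (not the researchers' method): compare the physical onset
# of two synthetic tones with a rough estimate of where their energy peaks.
import numpy as np

sr = 44100                          # sample rate in Hz
t = np.arange(int(0.5 * sr)) / sr   # half a second of time stamps

def tone(attack_s):
    """A 440 Hz tone whose amplitude rises over `attack_s` seconds, then decays."""
    env = np.minimum(t / attack_s, 1.0) * np.exp(-3 * t)
    return env * np.sin(2 * np.pi * 440 * t)

for name, attack in [("sharp", 0.005), ("wobbly", 0.120)]:
    y = tone(attack)
    envelope = np.abs(y)
    peak_time_ms = 1000 * t[np.argmax(envelope)]   # crude proxy for the "rhythmic center"
    print(f"{name:6s} sound: physical onset at 0 ms, energy peak near {peak_time_ms:.0f} ms")

For the slow attack, this crude estimate of the center lags the onset by roughly a tenth of a second, while for the sharp attack it sits within a few milliseconds of it.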

In order to hit a beat, or to play together in a band, musicians have to tune in to each other to get it right.

“If you have a soft sound and you want it to be heard exactly on the beat, then you need to place it a little early so that it can be experienced like that,” says Danielsen.
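Danielsen’s point can be expressed as a simple offset calculation. The sketch below assumes you already have an estimate of how far a sound’s perceived center lags its onset (for instance, the envelope-peak delay above); the function name and the 50 ms figure are hypothetical.

def trigger_time_ms(beat_time_ms, perceptual_lag_ms):
    # Trigger the sound early so its perceived center, not its onset, lands on the beat.
    return beat_time_ms - perceptual_lag_ms

# A soft pad whose center lags its onset by ~50 ms would be triggered at 950 ms
# in order to be heard "on" a beat that falls at 1000 ms.
print(trigger_time_ms(1000.0, 50.0))   # -> 950.0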

Experiments reveal musicians’ strategies

In order to investigate this, Câmara has conducted controlled experiments with skilled guitarists, bassists and drummers.

“They were all given a rhythmic reference, a simple groove pattern that can be found in many genres. Then they were asked to play along with it in three different ways: either right on the beat, a little behind, or a little ahead,” he explains.

This way, he could test their perception of the timing of different sounds, and how they play in order to time the sounds to a beat. After the experiments, they asked the musicians what they had been trying to do.

“In their own words, they say they play slower or more heavily when they are aiming behind the beat. This accords well with the pattern we see of them shaping the sound rather than just shifting its position.”

Danielsen points out that timing one’s own playing to a beat is something that all musicians practice, so it is something that everyone thinks about.

“However, they are much less aware of how they use sound to communicate timing differences,” she says.

Musicians manipulate sound and time

The researchers believe our perception of sound in time is based on fundamental psychoacoustic rules, that is, on how the brain perceives sound signals. All musicians take these more or less explicitly stated rules into account, but how they do it depends on the genre they are playing in.

“Each genre has a characteristic microrhythmic profile. Samba has its own, EDM has its own, hip-hop has another,” says Danielsen.

In music production, the producer sees the sound in front of her on the screen and can twist and turn the music by adjusting how the sounds are placed in relation to each other.

“Producers who create a groove on a computer know this. They move sounds back and forth on the beat and think: ‘if I put it there it works, and if I put it there it doesn’t.’ So, they learn through experience, and if something needs to sound precise, they need to juggle the sounds around a bit.”

AI strives to give music human qualities

The researchers believe that our knowledge about how different types of sound affect timing could be used to develop software that uses artificial intelligence to create music.


“We can already make a sequence groovier and more human so that it doesn’t sound completely mechanical. If we start with a programmed beat, then the algorithms can move the sounds slightly to affect the style. If the algorithm also takes the shape of the sound into account, we can obtain an even broader palette of rhythmic conditions that can shape the music in a more esthetically pleasing way,” says Câmara.
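Câmara does not describe a specific algorithm, but a toy version of the idea might look like the sketch below: each programmed event gets a small random jitter, a stylistic “feel” offset, and an extra shift that depends on how slowly the sound’s attack develops. The event list, the attack_ms field and the 0.5 weighting are all invented for illustration.

# Minimal sketch of humanizing a programmed beat while taking the shape of each
# sound into account. All fields and weights are assumptions for illustration.
import random

events = [
    {"name": "kick",  "beat_ms": 0,   "attack_ms": 2},    # sharp attack
    {"name": "pad",   "beat_ms": 0,   "attack_ms": 80},   # slow, wobbly attack
    {"name": "snare", "beat_ms": 500, "attack_ms": 4},
]

def humanize(events, jitter_ms=6, feel_ms=0):
    """Shift each event by random jitter plus a stylistic 'feel' offset,
    minus part of its attack time so slow sounds are triggered early."""
    out = []
    for e in events:
        shift = random.gauss(feel_ms, jitter_ms) - 0.5 * e["attack_ms"]
        out.append({**e, "play_ms": e["beat_ms"] + shift})
    return out

for e in humanize(events, jitter_ms=5, feel_ms=10):   # feel_ms > 0 leans "laid-back"
    print(f'{e["name"]:5s} notated {e["beat_ms"]:4.0f} ms -> played {e["play_ms"]:6.1f} ms')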

When you listen to music, it doesn’t take much before something sounds wrong. It’s about context, and about the type of music involved.

“When we play live, we want a margin of error; we’re not machines. There is always a certain amount of asynchronicity,” says Câmara, who is a musician himself.

Although we are talking about tiny shifts, humans have a well-trained ear for placing sounds in time.

“In some contexts, 10 to 20 milliseconds may be enough to hear a difference. We don’t need to be completely aware of this, but we can feel it.”

Anne Danielsen points out that this does not just apply to people who work with music.

“Compared to what we perceive with our eyes, our sense of time in sound is extremely precise. This makes us very sensitive to spatial sound differences. But also, when listening to differences in voices, whether someone is angry, sad, happy or annoyed, we use fine-grained audio information to interpret what that voice is actually communicating,” she says.

“It may seem incredibly small and insignificant, but it’s actually very important information for us.”

Music challenges our sensory boundaries

Danielsen believes that the fact that music research has uncovered psychoacoustic rules about how the human brain perceives sound says something about the importance of conducting research on music.

“We do extreme things in music. By testing the boundaries of what we may find esthetically pleasing, we are also testing our perceptual apparatus,” she says.

“You could say that music is constantly experimenting with our senses. That’s why music is a good research topic for finding out how we perceive sound, how we listen and how we structure it in time.”

About this music and neuroscience research news

Source: University of Oslo
Contact: Mari Lilleslåtten – University of Oslo
Image: The image is in the public domain

Original Research: Closed access.
“Effects of instructed timing on electric guitar and bass sound in groove performance” by Guilherme Schmidt Câmara et al. in The Journal of the Acoustical Society of America.


Abstract

Effects of instructed timing on electric guitar and bass sound in groove performance

This paper reports on two experiments that investigated the expressive means through which musicians well versed in groove-based music signal the intended timing of a rhythmic event. Data were collected from 21 expert electric guitarists and 21 bassists, who were instructed to perform a simple rhythmic pattern in three different timing styles—“laid-back,” “on-the-beat,” and “pushed”—in tandem with a metronome. As expected, onset and peak timing locations corresponded to the instructed timing styles for both instruments. Regarding sound, results for guitarists revealed systematic differences across participants in the duration and brightness [spectral centroid (SC)] of the guitar strokes played using these different timing styles. In general, laid-back strokes were played with a longer duration and a lower SC relative to on-the-beat and pushed strokes. Results for the bassists indicated systematic differences in intensity (sound-pressure level): pushed strokes were played with higher intensity than on-the-beat and laid-back strokes. These results lend further credence to the hypothesis that both temporal and sound-related features are important indications of the intended timing of a rhythmic event, and together these features offer deeper insight into the ways in which musicians communicate at the microrhythmic level in groove-based music.
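The sound features named in the abstract (onset timing, duration, spectral centroid, sound-pressure level) are standard audio descriptors. The sketch below shows one possible way to extract similar measures from a recorded stroke with the librosa library; it is only an illustration under assumed settings (the file name and the -40 dB duration threshold are invented) and not the authors’ actual analysis pipeline.

# Illustrative sketch only (not the paper's analysis pipeline): extract onset time,
# a rough duration, and spectral centroid from a recorded guitar stroke with librosa.
import librosa
import numpy as np

y, sr = librosa.load("guitar_stroke.wav", sr=None)   # hypothetical file name

# Onset time of the stroke, in seconds
onsets = librosa.onset.onset_detect(y=y, sr=sr, units="time")
onset_s = onsets[0] if len(onsets) else 0.0

# Crude duration estimate: time until the RMS envelope falls 40 dB below its peak
rms = librosa.feature.rms(y=y)[0]
times = librosa.times_like(rms, sr=sr)
above = times[rms > rms.max() * 10 ** (-40 / 20)]
duration_s = above[-1] - onset_s if len(above) else 0.0

# "Brightness": mean spectral centroid over the recording
sc_hz = librosa.feature.spectral_centroid(y=y, sr=sr).mean()

print(f"onset: {onset_s*1000:.1f} ms, duration: {duration_s*1000:.0f} ms, "
      f"spectral centroid: {sc_hz:.0f} Hz")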
