Summary: When anticipating how a melody will unfold, the human brain draws on the tones that came before.
Source: APS
Whether listening to a concerto by Bach or the latest pop tunes on Spotify, the human brain does not wait passively for the song to unfold. Instead, when a musical phrase has an unresolved or uncertain quality about it, our brains automatically predict how the melody will end.
Past accounts of how the human brain processes music suggested that musical phrases are perceived only in retrospect, by looking backward once the next phrase has begun. New research published in the journal Psychological Science, however, suggests that the human brain uses what has come before to anticipate where a phrase will end.
“The brain is constantly one step ahead and matches expectations to what is about to happen,” said Niels Chr. Hansen, a fellow at the Aarhus Institute of Advanced Studies and one of two lead authors on the paper. “This finding challenges previous assumptions that musical phrases feel finished only after the next phrase has begun.”
Hansen and his colleagues focused their research on one of the basic units of music, the musical phrase—a sequence or pattern of sounds that form a distinct musical “thought” within a melody. Like a sentence, a musical phrase is a coherent and complete part of a larger whole, but it may end with some uncertainty about what comes next in the melody.
The new research shows that listeners use these moments of uncertainty, or high entropy, to determine where one phrase ends and another begins.
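To make "entropy" concrete, the short Python sketch below computes the Shannon entropy of a predictive model's distribution over possible next tones. The pitches and probabilities are made up for illustration and are not taken from the study: when one continuation dominates, entropy is low and the phrase feels unfinished; when several continuations are about equally likely, entropy is high and the moment is a plausible phrase boundary.

    import math

    # Hypothetical next-tone probabilities from a predictive model of melody.
    # Pitches and values are illustrative only, not drawn from the study's stimuli.
    mid_phrase = {"G4": 0.70, "A4": 0.20, "F#4": 0.05, "B4": 0.05}   # one continuation dominates
    phrase_end = {"G4": 0.25, "A4": 0.25, "F#4": 0.25, "E4": 0.25}   # many continuations equally likely

    def entropy(dist):
        """Shannon entropy in bits: the model's uncertainty about the next tone."""
        return -sum(p * math.log2(p) for p in dist.values() if p > 0)

    print(entropy(mid_phrase))   # ~1.26 bits: low uncertainty, phrase feels unfinished
    print(entropy(phrase_end))   # 2.00 bits: high uncertainty, a plausible boundary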
“We only know a little about how the brain determines when things start and end,” said Hansen. “Here, music provides a perfect domain to measure something that is otherwise difficult to measure—namely, uncertainty.”
To study the brain’s musical predictive power, the researchers had 38 participants listen, note by note, to chorale melodies by Bach. Participants could pause and restart the music by pressing the space bar on a computer keyboard.
The participants were told that they would be tested afterward on how well they remembered the melodies. This allowed the researchers to use the time participants dwelled on each tone as an indirect measure of their understanding of musical phrasing.
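The sketch below illustrates, in simplified Python, the logic of such a self-paced listening task. It is not the researchers' software: audio playback and spacebar timing are stubbed out with console prompts, and only the per-tone dwell times are recorded.

    import time

    def self_paced_listening(melody):
        """Present a melody one tone at a time; the listener advances with a keypress.
        The time spent on each tone (dwell time) is the dependent measure.
        Playback is stubbed out; a real experiment would use audio and precise key timing."""
        dwell_times = []
        for tone in melody:
            print(f"Playing {tone} ... press Enter for the next tone")
            start = time.monotonic()
            input()                                   # stand-in for the spacebar press
            dwell_times.append(time.monotonic() - start)
        return dwell_times

    # Illustrative melody fragment; pitches are not from the study's stimuli.
    print(self_paced_listening(["C4", "D4", "E4", "F4", "G4"]))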
In a second experiment, 31 different participants listened to the same musical phrases and rated how complete they sounded. They judged melodies that ended on high-entropy tones to be more complete, echoing the first experiment, in which listeners lingered longer on those high-uncertainty tones.
“We were able to show that people have a tendency to experience high-entropy tones as musical-phrase endings. This is basic research that makes us more aware of how the human brain acquires new knowledge not just from music, but also when it comes to language, movements, or other things that take place over time,” said Haley Kragness, a postdoctoral researcher at the University of Toronto Scarborough and the paper’s second lead author.
Over the long term, the researchers hope that the results can be used to optimize communication and interactions between people—or, alternatively, to understand how artists are able to tease or trick audiences.
“This study shows that humans harness the statistical properties of the world around them not only to predict what is likely to happen next, but also to parse streams of complex, continuous input into smaller, more manageable segments of information,” said Hansen.
Other collaborators on the study were Laurel Trainor (McMaster University), Peter Vuust (Aarhus University), and Marcus Pearce (Queen Mary, University of London).
About this music and neuroscience research news
Author: Press Office
Source: APS
Contact: Press Office – APS
Original Research: Closed access.
“Predictive Uncertainty Underlies Auditory Boundary Perception” by Niels Chr. Hansen et al. Psychological Science
Abstract
Predictive Uncertainty Underlies Auditory Boundary Perception
Anticipating the future is essential for efficient perception and action planning. Yet the role of anticipation in event segmentation is understudied because empirical research has focused on retrospective cues such as surprise. We address this concern in the context of perception of musical-phrase boundaries.
A computational model of cognitive sequence processing was used to control the information-dynamic properties of tone sequences. In an implicit, self-paced listening task (N = 38), undergraduates dwelled longer on tones generating high entropy (i.e., high uncertainty) than on those generating low entropy (i.e., low uncertainty). Similarly, sequences that ended on tones generating high entropy were rated as sounding more complete (N = 31 undergraduates).
These entropy effects were independent of both the surprise (i.e., information content) and phrase position of target tones in the original musical stimuli.
Our results indicate that events generating high entropy prospectively contribute to segmentation processes in auditory sequence perception, independently of the properties of the subsequent event.
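For reference, the two information-theoretic quantities the abstract contrasts can be written as follows (standard textbook definitions; the notation is introduced here, not taken from the paper). Entropy is computed from the full distribution over possible next tones, so it is available before the next tone sounds, whereas information content (surprise) depends on which tone actually occurs:

    H(X_{n+1} \mid x_1,\dots,x_n) = -\sum_{t} p(t \mid x_1,\dots,x_n)\,\log_2 p(t \mid x_1,\dots,x_n)

    \mathrm{IC}(x_{n+1}) = -\log_2 p(x_{n+1} \mid x_1,\dots,x_n)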