Summary: Researchers reveal how the auditory cortex reacts to “wrong” sounds and shed light on auditory memory recall.
Whether it’s a car door not properly closed, a shanked kick in football, or a misplaced note in music, our ears tell us when something doesn’t sound right.
A team of neuroscientists has recently uncovered how the brain works to make distinctions between “right” and “wrong” sounds—research that provides a deeper understanding of how we learn complex audio-motor tasks like speaking or playing music.
“We listen to the sounds our movements produce to determine whether or not we made a mistake,” says David Schneider, an assistant professor in New York University’s Center for Neural Science and the senior author of the paper, which appears in the journal Current Biology.
“This is most obvious for a musician or when speaking, but our brains are actually doing this all the time, such as when a golfer listens for the sound of her club making contact with the ball. Our brains are always registering whether a sound matches or deviates from expectations. In our study, we discovered that the brain is able to make precise predictions about when a sound is supposed to happen and what it should sound like.”
The researchers focused their work on better understanding everyday phenomena. For instance, we know what car doors should sound like because we have shut them countless times.
However, on those occasions when we leave the seat belt in the door jamb of the car and try to shut it, we hear something different—a “clank” rather than a “thump.” It’s the same with a batter in baseball who hits a pitched ball squarely as opposed to merely tipping it—or when a musician hears a note that fits the melody rather than one that disrupts it.
Yet how the brain distinguishes “right” from “wrong” sounds has been unclear. Understanding how it does so may offer insights into how the healthy brain learns to speak and play music, as well as into what goes wrong in neural disorders such as schizophrenia.
To address this, Schneider and his colleagues studied neural activity in mice as they performed tasks akin to closing a car door. The scientists trained mice to push a lever with their paws—like the shutting of a car door—and played a tone every time the lever reached a particular position.
Eventually, the mice learned exactly what the lever was supposed to sound like. If the researchers removed the sound, played the wrong sound, or played the correct sound at the wrong time, the mice adjusted their behavior, just as humans would do if a car door did something unexpected.
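The closed-loop task described above can be sketched in a few lines of code. This is a hypothetical illustration, not the study's actual experimental software; all names and values (EXPECTED_FREQ_HZ, TRIGGER_POSITION, the manipulation labels) are assumptions chosen for clarity.

```python
# Hypothetical sketch of the closed-loop lever task: a tone is triggered
# when the lever crosses a set position, and the experimenter can omit
# the tone, swap its frequency, or shift its timing.

EXPECTED_FREQ_HZ = 8000   # tone the mouse learns to expect (illustrative value)
TRIGGER_POSITION = 0.8    # normalized lever position that triggers the tone

def run_trial(lever_trajectory, manipulation=None):
    """Return the (time_step, frequency) of the tone played on this trial,
    or None if the tone is omitted or the lever never reaches the trigger."""
    for t, position in enumerate(lever_trajectory):
        if position >= TRIGGER_POSITION:
            if manipulation == "omit":
                return None                        # no sound at the expected moment
            if manipulation == "wrong_frequency":
                return (t, 4000)                   # the "clank" instead of the "thump"
            if manipulation == "delayed":
                return (t + 10, EXPECTED_FREQ_HZ)  # correct sound, wrong time
            return (t, EXPECTED_FREQ_HZ)           # expected sound at the expected time
    return None

trajectory = [i / 100 for i in range(100)]  # smooth push from 0 toward 1
print(run_trial(trajectory))                     # (80, 8000)
print(run_trial(trajectory, "wrong_frequency"))  # (80, 4000)
print(run_trial(trajectory, "omit"))             # None
```

Because the tone is yoked to the animal's own movement, each mouse effectively learns a precise audio-motor expectation, which the manipulations then violate.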
The scientists recorded the mice’s brain activity during these behaviors—specifically, how the neurons responded in the auditory cortex, one of the brain’s “hearing centers.” Overall, these neurons were only minimally active when a mouse pushed a lever and heard the expected sound.
However, if the researchers changed the sound to the wrong frequency—similar to a car door’s “clank”—or even slightly shifted the timing of the sound, these neurons responded vigorously.
“The auditory cortex seems to signal not what was heard, but whether what was heard matched or violated its expectations,” observes Nicholas Audette, the lead author on the study and a postdoctoral fellow in the Schneider lab.
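The response pattern Audette describes can be captured in a toy model: a signal that stays near zero when the heard sound matches the prediction and becomes large when the frequency or timing deviates, or when the sound is omitted. This is a minimal sketch of the idea, not the study's analysis; the tolerance values and return magnitudes are illustrative assumptions.

```python
# Toy model of expectation matching in auditory cortex: suppressed response
# to the predicted sound, vigorous response to any violation.

def prediction_error_response(expected, heard,
                              freq_tolerance_hz=100, time_tolerance_ms=20):
    """expected/heard are (time_ms, freq_hz) tuples; heard may be None (omission).
    Returns a notional response magnitude: small for a match, large for a violation."""
    if heard is None:
        return 1.0  # omission: activity at the time the sound should have occurred
    dt = abs(heard[0] - expected[0])
    df = abs(heard[1] - expected[1])
    if dt <= time_tolerance_ms and df <= freq_tolerance_hz:
        return 0.1  # expected sound at the expected time: minimal response
    return 1.0      # wrong frequency or wrong time: vigorous response

expected = (500, 8000)
print(prediction_error_response(expected, (500, 8000)))  # 0.1 (match)
print(prediction_error_response(expected, (500, 4000)))  # 1.0 (wrong frequency)
print(prediction_error_response(expected, (560, 8000)))  # 1.0 (shifted timing)
print(prediction_error_response(expected, None))         # 1.0 (omission)
```

The key design point, mirroring the paper's framing, is that the output encodes the mismatch between prediction and input rather than the sound itself.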
The researchers also found that if they omitted the sound altogether—similar to not shutting a door hard enough—a select group of neurons became active at the time the sound should have happened.
“Because these were some of the same neurons that would have been active if the sound had actually been played, it was as if the brain was recalling a memory of the sound that it thought it was going to hear,” notes Schneider.
In addition to its role in predicting self-generated sounds during everyday behaviors, the same brain circuitry that Schneider and his colleagues are studying is thought to malfunction in diseases such as schizophrenia, leading to the perception of “phantom voices” that aren’t actually there. Their hope is that by understanding these brain circuits in the healthy brain, they can begin to understand what might go wrong during disease.
The study’s other authors were Alessandro La Chioma, a postdoctoral fellow at the Center for Neural Science, and WenXi Zhou, an NYU doctoral student.
Funding: This research was supported by grants from the National Institutes of Health (T32-MH019524, 1R01-DC018802).
About this auditory neuroscience research news
Author: James Devitt
Contact: James Devitt – NYU
Image: The image is in the public domain
Original Research: Closed access.
“Precise movement-based predictions in the mouse auditory cortex” by Nicholas Audette et al. Current Biology
Precise movement-based predictions in the mouse auditory cortex
- Mice learn to expect the acoustic consequences of a simple forelimb movement
- Auditory cortex activity is strong when a self-generated sound violates expectation
- During silent movements, auditory cortex activity reflects precise expectations
- Movement, expectation, and prediction signals are distributed across cortical layers
Many of the sensations experienced by an organism are caused by its own actions, and accurately anticipating both the sensory features and timing of self-generated stimuli is crucial to a variety of behaviors.
In the auditory cortex, neural responses to self-generated sounds exhibit frequency-specific suppression, suggesting that movement-based predictions may be implemented early in sensory processing.
However, it remains unknown whether this modulation results from a behaviorally specific and temporally precise prediction, and whether corresponding expectation signals are present locally in the auditory cortex.
To address these questions, we trained mice to expect the precise acoustic outcome of a forelimb movement using a closed-loop sound-generating lever.
Dense neuronal recordings in the auditory cortex revealed suppression of responses to self-generated sounds that was specific to the expected acoustic features, to a precise position within the movement, and to the movement that was coupled to sound during training.
Prediction-based suppression was concentrated in L2/3 and L5, where deviations from expectation also recruited a population of prediction-error neurons that was otherwise unresponsive.
Recording in the absence of sound revealed abundant movement signals in deep layers that were biased toward neurons tuned to the expected sound, as well as expectation signals that were present throughout the cortex and peaked at the time of expected auditory feedback.
Together, these findings identify distinct populations of auditory cortical neurons with movement, expectation, and error signals consistent with a learned internal model linking an action to its specific acoustic outcome.