How the Brain Encodes Sounds

Summary: Researchers report that the auditory cortex may encode sounds differently than previously thought.

Source: WUSTL.

When you are out in the woods and hear a cracking sound, your brain needs to quickly work out whether the sound is coming from, say, a bear or a chipmunk. In new research published in PLOS Biology, a biomedical engineer at Washington University in St. Louis offers a new interpretation of an old observation, debunking an established theory in the process.

Dennis Barbour, MD, PhD, associate professor of biomedical engineering in the School of Engineering & Applied Science who studies neurophysiology, found in an animal model that auditory cortex neurons may be encoding sounds differently than previously thought. Sensory neurons, such as those in the auditory cortex, on average respond relatively indiscriminately at the beginning of a new stimulus but rapidly become much more selective. The few neurons that keep responding for the duration of a stimulus were generally thought to encode its identity, while the many neurons responding at the beginning were thought to encode only its presence. This theory makes a prediction that had never been tested: that the indiscriminate initial responses would encode stimulus identity less accurately than the selective responses that follow over the sound's duration.

“At the beginning of a sound transition, things are diffusely encoded across the neuron population, but sound identity turns out to be more accurately encoded,” Barbour said. “As a result, you can more rapidly identify sounds and act on that information. If you get about the same amount of information for each action potential spike of neural activity, as we found, then the more spikes you can put toward a problem, the faster you can decide what to do. Neural populations spike most and encode most accurately at the beginning of stimuli.”
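Barbour's point that information scales with spike count can be sketched with a toy simulation. This is a minimal illustration, not the study's analysis: the neuron count, tuning profile, and decoder below are all invented for the example. It only shows that, for a Poisson-spiking population in which each spike carries a fixed amount of information, maximum-likelihood decoding accuracy grows with the expected number of population spikes in the read-out window.

```python
import numpy as np

rng = np.random.default_rng(0)

def population_rates(stim, n_neurons=40, total_rate=1.0):
    # Hypothetical tuning: neurons alternate in preferring stimulus 0 or 1,
    # firing 50% faster for their preferred stimulus (weak selectivity, as
    # in the onset epoch). Rates are normalized so the whole population
    # fires `total_rate` expected spikes in the read-out window.
    pref = np.arange(n_neurons) % 2
    r = np.where(pref == stim, 1.5, 1.0)
    return r / r.sum() * total_rate

def decode_accuracy(total_rate, n_trials=4000):
    # Draw Poisson spike counts for each stimulus, then decode by comparing
    # Poisson log-likelihoods under the two candidate rate profiles.
    r = [population_rates(s, total_rate=total_rate) for s in (0, 1)]
    w = np.log(r[0]) - np.log(r[1])   # per-neuron log-likelihood weight
    correct = 0
    for stim in (0, 1):
        counts = rng.poisson(r[stim], size=(n_trials, len(r[stim])))
        llr = counts @ w              # total log-likelihood ratio per trial
        correct += np.sum((llr > 0) == (stim == 0))
    return correct / (2 * n_trials)

for spikes in (5, 20, 80):
    print(f"{spikes:3d} expected population spikes -> "
          f"accuracy {decode_accuracy(spikes):.3f}")
```

With per-spike informativeness held fixed, accuracy rises steadily as the expected spike count grows, which is the intuition behind onset responses supporting fast identification.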

Barbour’s study involved recording individual neurons. To make similar kinds of measurements of brain activity in humans, researchers must use noninvasive techniques that average many neurons together. Event-related potential (ERP) techniques record brain signals through electrodes on the scalp and reflect neural activity synchronized to the onset of a stimulus. Functional MRI (fMRI), on the other hand, reflects activity averaged over several seconds. If the brain were using fundamentally different encoding schemes for onsets versus sustained stimulus presence, these two methods might be expected to diverge in their findings. Both reveal the neural encoding of stimulus identity, however.


“If function is localized, with small numbers of neurons bunched together doing similar things, that’s consistent with sparse coding, high selectivity, and low population spiking rates. But if you have distributed activity, or lots of neurons contributing all over the place, that’s consistent with dense coding, low selectivity, and high population spiking rates. Depending on how the experiment is conducted, neuroscientists see both. Our evidence suggests that it might just be both, depending on which data you look at and how you analyze it.”

Barbour said the research is the most fundamental work to build a theory for how information might be encoded for sound processing, yet it implies a novel sensory encoding principle potentially applicable to other sensory systems, such as how smells are processed and encoded. Earlier this year, Barbour worked with Barani Raman, associate professor of biomedical engineering, to investigate how the presence and absence of an odor or a sound is processed. While the response times of the olfactory and auditory systems differ, the neurons respond in the same ways. The results of that research also gave strong evidence for a stored set of signal-processing motifs potentially shared by different sensory systems and even different species.

About this neuroscience research article

Funding: This research was funded by the National Institutes of Health (R01-DC009215).

Source: Judy Martin – WUSTL
Publisher: Organized by NeuroscienceNews.com.
Image Source: NeuroscienceNews.com image is adapted from the WUSTL news release.
Original Research: Full open access research for “Rate, not selectivity, determines neuronal population coding accuracy in auditory cortex” by Wensheng Sun and Dennis L. Barbour in PLOS Biology. Published online November 1, 2017. doi:10.1371/journal.pbio.2002459

Cite This NeuroscienceNews.com Article

[cbtabs][cbtab title=”MLA”]WUSTL “How the Brain Encodes Sounds.” NeuroscienceNews. NeuroscienceNews, 11 November 2017.
<https://neurosciencenews.com/sound-encoding-7925/>.[/cbtab][cbtab title=”APA”]WUSTL (2017, November 11). How the Brain Encodes Sounds. NeuroscienceNews. Retrieved November 11, 2017 from https://neurosciencenews.com/sound-encoding-7925/[/cbtab][cbtab title=”Chicago”]WUSTL “How the Brain Encodes Sounds.” https://neurosciencenews.com/sound-encoding-7925/ (accessed November 11, 2017).[/cbtab][/cbtabs]


Abstract

Rate, not selectivity, determines neuronal population coding accuracy in auditory cortex

The notion that neurons with higher selectivity carry more information about external sensory inputs is widely accepted in neuroscience. High-selectivity neurons respond to a narrow range of sensory inputs, and thus would be considered highly informative by rejecting a large proportion of possible inputs. In auditory cortex, neuronal responses are less selective immediately after the onset of a sound and then become highly selective in the following sustained response epoch. These 2 temporal response epochs have thus been interpreted to encode first the presence and then the content of a sound input. Contrary to predictions from that prevailing theory, however, we found that the neural population conveys similar information about sound input across the 2 epochs in spite of the neuronal selectivity differences. The amount of information encoded turns out to be almost completely dependent upon the total number of population spikes in the read-out window for this system. Moreover, inhomogeneous Poisson spiking behavior is sufficient to account for this property. These results imply a novel principle of sensory encoding that is potentially shared widely among multiple sensory systems.

