The Brain Processes Sight and Sound in the Same Manner

Summary: A new study reveals that both auditory and visual learning follow similar principles. The findings, researchers report, could help in the development of new approaches to restore sensory deficits.

Source: GUMC.

Although sight is a very different sense from sound, Georgetown University Medical Center neuroscientists have found that the human brain learns to make sense of both kinds of stimuli in the same way.

The researchers say that in a two-step process, neurons in one area of the brain learn the representation of the stimuli, and another area categorizes that input so as to ascribe meaning to it: first seeing just a car without a roof, for example, and then analyzing that stimulus in order to place it in the category of “convertible.” Similarly, a child learning a new word first has to learn the new sound and then, in a second step, learn that different versions of the word (different accents and pronunciations, spoken by different family members or friends) all mean the same thing and need to be categorized together.

“A computational advantage of this scheme is that it allows the brain to easily build on previous content to learn novel information,” says the study’s senior investigator, Maximilian Riesenhuber, PhD, a professor in Georgetown University School of Medicine’s Department of Neuroscience. Study co-authors include first author Xiong Jiang, PhD; graduate student Mark A. Chevillet; and Josef P. Rauschecker, PhD, all Georgetown neuroscientists.
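
To make that two-step scheme concrete, here is a minimal sketch in Python (the study does not specify any implementation; the data, the use of PCA as the representation learner, and the logistic readout are all illustrative assumptions). Stage one learns a representation of the stimuli without category labels; stage two trains a small category readout on top of it. The computational advantage Riesenhuber describes shows up in the last lines: a new category reuses the stage-one representation unchanged, so only the small readout must be retrained.

```python
# Toy sketch of a two-stage learning scheme; not the authors' model.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
stimuli = rng.normal(size=(200, 64))       # 200 stimuli, 64 raw features (synthetic)
labels = (stimuli[:, :8].sum(axis=1) > 0)  # hypothetical category rule

# Stage 1: learn a stimulus representation (no labels needed).
encoder = PCA(n_components=16).fit(stimuli)
features = encoder.transform(stimuli)

# Stage 2: learn categories on top of the learned representation.
readout = LogisticRegression().fit(features, labels)

# A new category reuses stage 1 as-is; only the cheap readout is retrained.
new_labels = (stimuli[:, 8:16].sum(axis=1) > 0)
new_readout = LogisticRegression().fit(features, new_labels)
print(readout.score(features, labels), new_readout.score(features, new_labels))
```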

Their study, published in Neuron, is the first to provide strong evidence that learning in vision and audition follows similar principles. “We have long tried to make sense of senses, studying how the brain represents our multisensory world,” says Riesenhuber.

In 2007, the investigators were the first to describe the two-step model in human learning of visual categories, and the new study now shows that the brain appears to use the same kind of learning mechanisms across sensory modalities.

The findings could also help scientists devise new approaches to restore sensory deficits, says Rauschecker.

“Knowing how senses learn the world may help us devise workarounds in our very plastic brains,” he says. “If a person can’t process one sensory modality, say vision, because of blindness, there could be substitution devices that allow visual input to be transformed into sounds. So one disabled sense would be processed by other sensory brain centers.”
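
As a rough illustration of the kind of substitution device Rauschecker describes, the sketch below converts an image into a soundscape by scanning it left to right, mapping row position to pitch and pixel brightness to loudness. This is a generic sonification scheme with made-up parameters, not the algorithm of any particular device mentioned in the study.

```python
# Hedged sketch of vision-to-audio sensory substitution (illustrative only).
import numpy as np

sr = 16_000                                   # sample rate in Hz (assumed)
image = np.random.rand(16, 32)                # placeholder 16x32 grayscale image
freqs = np.geomspace(200.0, 4000.0, image.shape[0])  # one pitch per image row

t = np.linspace(0, 0.05, int(sr * 0.05), endpoint=False)  # 50 ms per column
cols = []
for col in image.T:                           # scan the image left to right
    # Each row contributes a tone; brightness sets its loudness.
    tones = [b * np.sin(2 * np.pi * f * t) for f, b in zip(freqs, col)]
    cols.append(np.sum(tones, axis=0) / len(tones))
soundscape = np.concatenate(cols)             # the audio encoding of the image
```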

The 16 participants in this study were trained to categorize monkey communication calls: real sounds that mean something to monkeys but whose meaning is alien to humans. The investigators divided the sounds into two categories labeled with nonsense names, based on prototypes of two call types, so-called “coos” and “harmonic arches.” Using an auditory morphing system, the investigators created thousands of monkey-call combinations from the prototypes, including some very similar calls that required the participants to make fine distinctions. Learning to correctly categorize the novel sounds took about six hours.
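
The article does not describe the morphing algorithm itself; as a crude sketch of the idea, a continuum between two prototypes can be generated by weighted interpolation, with stimuli near the midpoint being the ones that force fine distinctions near the category boundary. The waveforms below are synthetic placeholders, not actual monkey calls.

```python
# Crude morphing sketch: a graded continuum between two prototype sounds.
import numpy as np

sr = 16_000                                    # sample rate in Hz (assumed)
t = np.linspace(0, 0.5, int(sr * 0.5), endpoint=False)
coo = np.sin(2 * np.pi * 600 * t)              # placeholder "coo" prototype
arch = np.sin(2 * np.pi * 600 * (1 + t) * t)   # placeholder "harmonic arch"

# alpha = 0 is a pure "coo", alpha = 1 a pure "arch"; values near 0.5
# yield the ambiguous stimuli hardest to categorize.
continuum = [(1 - a) * coo + a * arch for a in np.linspace(0, 1, 11)]
```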

Before and after training, fMRI data were obtained from the volunteers to investigate changes in neuronal tuning induced by the categorization training. Advanced fMRI techniques, functional magnetic resonance imaging rapid adaptation (fMRI-RA) and multi-voxel pattern analysis (MVPA), were used along with conventional fMRI and functional connectivity analyses. In this way, the researchers were able to see two distinct sets of changes: sharpened tuning to the features of the monkey calls in the left auditory cortex, and category selectivity for the different types of calls in the lateral prefrontal cortex.
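
A toy version of the MVPA side of such an analysis might look like the following, a minimal sketch using synthetic “voxel” patterns rather than real fMRI data: a linear classifier is trained to decode call category from multi-voxel activity, and cross-validated decoding accuracy is compared before and after training. The signal strengths are invented purely to illustrate the logic.

```python
# Minimal MVPA-style decoding sketch with synthetic data (illustration only).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_voxels = 120, 50
category = rng.integers(0, 2, n_trials)        # 0 = "coo", 1 = "harmonic arch"

# Pre-training: patterns carry little category information.
pre = rng.normal(size=(n_trials, n_voxels)) + 0.1 * category[:, None]
# Post-training: tuning has sharpened, so patterns separate more.
post = rng.normal(size=(n_trials, n_voxels)) + 0.8 * category[:, None]

for name, patterns in [("pre", pre), ("post", post)]:
    acc = cross_val_score(SVC(kernel="linear"), patterns, category, cv=5).mean()
    print(f"{name}-training decoding accuracy: {acc:.2f}")
```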


“In our study, we used four different techniques, in particular fMRI-RA and MVPA, to independently and synergistically provide converging results. This allowed us to obtain strong results even from a small sample,” says co-author Jiang.

Processing the monkey calls requires acoustic discrimination and tuning changes at the level of the auditory cortex, processes that the researchers say are shared between human and animal communication systems. Using monkey calls instead of human speech forced the participants to categorize the sounds purely on the basis of acoustics rather than meaning.

“At an evolutionary level, humans and animals need to understand who is friend and who is foe, and sight and sound are integral to these judgments,” Riesenhuber says.

About this neuroscience research article

Funding: The work was supported by a grant from the National Science Foundation (BCS 0749986). The authors report having no personal financial interests related to the study.

Source: Karen Teber – GUMC
Publisher: Organized by NeuroscienceNews.com.
Image Source: NeuroscienceNews.com image is in the public domain.
Original Research: Abstract for “Training Humans to Categorize Monkey Calls: Auditory Feature- and Category-Selective Neural Tuning Changes” by Xiong Jiang, Mark A. Chevillet, Josef P. Rauschecker, and Maximilian Riesenhuber in Neuron. Published April 18, 2018.
doi:10.1016/j.neuron.2018.03.014


Abstract

Training Humans to Categorize Monkey Calls: Auditory Feature- and Category-Selective Neural Tuning Changes

Highlights
• Human subjects learned to categorize morphed monkey calls
• Training sharpened neural selectivity to auditory features in left auditory cortex
• Training induced auditory category selectivity in lateral prefrontal cortex
• This indicates similar principles of learning in the visual and auditory domains

Summary
Grouping auditory stimuli into common categories is essential for a variety of auditory tasks, including speech recognition. We trained human participants to categorize auditory stimuli from a large novel set of morphed monkey vocalizations. Using fMRI-rapid adaptation (fMRI-RA) and multi-voxel pattern analysis (MVPA) techniques, we gained evidence that categorization training results in two distinct sets of changes: sharpened tuning to monkey call features (without explicit category representation) in left auditory cortex and category selectivity for different types of calls in lateral prefrontal cortex. In addition, the sharpness of neural selectivity in left auditory cortex, as estimated with both fMRI-RA and MVPA, predicted the steepness of the categorical boundary, whereas categorical judgment correlated with release from adaptation in the left inferior frontal gyrus. These results support the theory that auditory category learning follows a two-stage model analogous to the visual domain, suggesting general principles of perceptual category learning in the human brain.
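
As an aside on how the “steepness of the categorical boundary” mentioned in the abstract can be quantified, a common approach (not necessarily the authors’ exact procedure) is to fit a logistic psychometric function to the proportion of one category’s responses along the morph continuum; the fitted slope measures boundary steepness. A sketch with made-up response data:

```python
# Fit a logistic psychometric function to synthetic categorization data.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, x0, k):
    """Probability of an 'arch' response at morph level x (boundary x0, slope k)."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

morph_levels = np.linspace(0, 1, 11)
# Hypothetical proportion of 'arch' responses at each morph level.
p_arch = np.array([0.02, 0.03, 0.05, 0.1, 0.3, 0.55,
                   0.8, 0.92, 0.96, 0.98, 0.99])

(x0, k), _ = curve_fit(psychometric, morph_levels, p_arch, p0=[0.5, 10.0])
print(f"boundary at morph level {x0:.2f}, steepness k = {k:.1f}")
```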
