New Theory of How the Brain Generates Our Perceptions of the World

New research out of the University of Pennsylvania is filling in gaps between two prevailing theories about how the brain generates our perception of the world.

One, called Bayesian decoding theory, says that to best recognize what’s in front of us, the brain combines the sensory signals it receives (the scene we’re looking at, for example) with our preconceived notions. Experience says that a school bus is typically yellow, so the one in front of us is more likely to be yellow than blue.
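
To make the Bayesian-decoding idea concrete, here is a minimal sketch in Python. It treats bus color as a single “hue” number, gives the observer a Gaussian prior (experience) and a noisy Gaussian measurement (the current glimpse), and computes the posterior mean; the hue scale and all of the numbers are illustrative choices, not values from the study.

```python
# Illustrative only: a 1-D "hue" scale where lower = yellower, higher = bluer.
prior_mean, prior_sd = 0.2, 0.1        # experience: buses are usually yellow-ish
measurement, meas_sd = 0.5, 0.3        # a brief, noisy glimpse of this bus

# With a Gaussian prior and Gaussian measurement noise, the posterior mean is a
# precision-weighted average of the two, so the percept is pulled toward what
# experience predicts.
prior_prec = 1.0 / prior_sd**2
meas_prec = 1.0 / meas_sd**2
post_mean = (prior_prec * prior_mean + meas_prec * measurement) / (prior_prec + meas_prec)
post_sd = (prior_prec + meas_prec) ** -0.5

print(f"percept (posterior mean): {post_mean:.3f}")   # lands between 0.2 and 0.5
print(f"posterior uncertainty:    {post_sd:.3f}")
```

The noisier the glimpse relative to the prior, the more the percept is drawn toward the expected yellow; that attraction toward prior beliefs is the standard Bayesian prediction the new work qualifies below.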

The second theory, called efficient coding, says that for limited sensory resources, such as the neurons of the brain and the retinas of the eyes, to do their jobs efficiently, they should devote more precise processing and storage to the objects viewed most frequently. If you see a yellow bus every day, your brain encodes an accurate picture of the vehicle, so on the next viewing you’ll know its color. Conversely, the brain won’t expend many resources remembering a blue bus.
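
One common way to formalize that allocation, sketched below, is to let the internal measurement scale follow the cumulative distribution of the stimuli the observer encounters, so that equal shares of a fixed coding budget cover equal amounts of probability mass. The Gaussian stimulus distribution and the 16-level budget here are assumptions made up for the example.

```python
import numpy as np
from scipy.stats import norm

# Toy stimulus statistics: most buses we see have a "hue" near 0.2 (yellow-ish).
stimulus_dist = norm(loc=0.2, scale=0.1)

# Efficient-coding assumption: split a fixed budget of coding levels so that each
# level covers an equal slice of probability mass (the CDF acts as the internal scale).
n_levels = 16
edges = stimulus_dist.ppf(np.linspace(0.01, 0.99, n_levels + 1))
widths = np.diff(edges)   # stimulus range covered by each coding level

# Frequent hues (near 0.2) fall into narrow levels -> encoded precisely.
# Rare hues (far from 0.2) fall into wide levels -> encoded coarsely.
print("narrowest level:", widths.min().round(4))
print("widest level:   ", widths.max().round(4))
```

In this scheme a frequently seen yellow is represented with fine resolution while an unusual blue gets only a coarse slot, which is exactly the resource trade-off efficient coding describes.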

Alan Stocker, an assistant professor in the psychology and the electrical and systems engineering departments, and Xue-Xin Wei, a psychology graduate student, combined these two concepts to create a new theory about how we perceive our world: How often we observe an object or scene shapes both what we expect of something similar in the future and how accurately we’ll see it.

“There are two forces that determine what we perceive,” Stocker said.

Image: the outline of a human head in blue next to the words perception, encoding and decoding. Credit: The researchers/Research Square.

According to the research, those forces sometimes work against each other, and we end up perceiving the opposite of what our experience says should happen. Keeping with the bus example, a bus that is actually blue would look even bluer than it is.
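
One way to see how the combination can push a percept away from the prior is a small Monte Carlo sketch: encode the stimulus on the prior’s cumulative-distribution scale (efficient coding), add noise to that internal value, then decode with a Bayesian posterior-mean estimate back in stimulus space. The Gaussian prior, the noise level, and the squared-error readout are illustrative assumptions, not the published model’s exact formulation.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
prior = norm(loc=0.0, scale=1.0)          # prior peak at 0: the "expected" stimulus

def percept(theta_true, internal_noise=0.05, n_grid=2001):
    """Efficient encoding -> noisy internal measurement -> Bayesian decode."""
    x_true = prior.cdf(theta_true)                    # map onto the internal scale
    m = np.clip(x_true + rng.normal(0.0, internal_noise), 1e-4, 1 - 1e-4)
    x = np.linspace(1e-4, 1 - 1e-4, n_grid)           # grid over the internal variable
    posterior = norm.pdf(m, loc=x, scale=internal_noise)   # uniform prior on [0, 1]
    posterior /= posterior.sum()
    theta = prior.ppf(x)                              # map back to stimulus space
    return np.sum(posterior * theta)                  # posterior-mean estimate

theta_true = 1.0                                      # a stimulus off the prior peak
estimates = [percept(theta_true) for _ in range(2000)]
bias = np.mean(estimates) - theta_true
print(f"average bias at theta = {theta_true}: {bias:+.3f}")  # positive: pushed away from the peak
```

Because the mapping back from the internal scale stretches rare stimulus values more than common ones, the decoded estimate lands, on average, slightly farther from the prior peak than the true stimulus: the repulsive, “anti-Bayesian” bias described above.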

Stocker and Wei also discovered that the source of the uncertainty, whether it lies in the object or scene itself or in the brain’s response to it, changes the viewer’s perception. “If [the stimulus] is cloudy, for example, that would have a different effect than if the ‘noise’ was at the neural level,” Stocker said. “If I show you something briefly, the neurons in the brain only have time to respond with a few spikes. That gives a low-accuracy representation in the brain.”

Given the prevalence of the Bayesian model, Stocker said there’s sometimes hesitance to discuss the notion of repulsive bias, something he said he hopes their research makes more acceptable.

“It’s a big step forward bringing efficient coding into the picture,” said Stocker. “It opens up the door to a whole new class of data considered ‘anti-Bayesian’ before.”

About this neuroscience research

Source: Michele Berger – University of Pennsylvania
Image Source: The image is credited to the researchers and is adapted from the Research Square video
Video Source: The video is available at the Research Square Vimeo page
Original Research: Abstract for “A Bayesian observer model constrained by efficient coding can explain ‘anti-Bayesian’ percepts” by Xue-Xin Wei and Alan A. Stocker in Nature Neuroscience. Published online September 7, 2015. doi:10.1038/nn.4105


Abstract

A Bayesian observer model constrained by efficient coding can explain ‘anti-Bayesian’ percepts

Bayesian observer models provide a principled account of the fact that our perception of the world rarely matches physical reality. The standard explanation is that our percepts are biased toward our prior beliefs. However, reported psychophysical data suggest that this view may be simplistic. We propose a new model formulation based on efficient coding that is fully specified for any given natural stimulus distribution. The model makes two new and seemingly anti-Bayesian predictions. First, it predicts that perception is often biased away from an observer’s prior beliefs. Second, it predicts that stimulus uncertainty differentially affects perceptual bias depending on whether the uncertainty is induced by internal or external noise. We found that both model predictions match reported perceptual biases in perceived visual orientation and spatial frequency, and were able to explain data that have not been explained before. The model is general and should prove applicable to other perceptual variables and tasks.

