Summary: Our perception of the world isn’t necessarily the most accurate, but rather the most beneficial for our survival. By manipulating the context and rewards of visual tasks, researchers found that our visual perception, even at the retinal level, changes to maximize personal benefits.
This finding suggests that cognitive biases may not only affect our decision-making process but also alter our fundamental perception. The results of this study may impact our understanding of human biases and also help refine AI perception algorithms.
Key Facts:
- This study provides evidence that cognitive biases aren’t just errors causing inaccurate judgments, but in fact, are integral to our survival strategy, as they allow us to perceive the world in a selective way due to our limited cognitive abilities.
- In addition to visual perception, the study’s results may extend to other sensory perceptions, suggesting that our overall interaction with the world is significantly influenced by the context and potential benefits, not just an objective interpretation of sensory inputs.
- The fact that these biases kick in even before we consciously think about what we see might explain why these distortions are difficult to identify and change. It highlights the deep-rooted nature of cognitive biases in our perceptual systems.
Source: ETH Zurich
Are our senses there to provide us with the most complete representation of the world, or do they serve our survival?
For a long time, the former was the dominant view in neuroscience. “Was” is the operative word here. In the last 50 years, psychologists such as Nobel Prize winner Daniel Kahneman and his longtime collaborator Amos Tversky have shown that human perception is often anything but complete and is instead highly selective.
Experiments have since documented a long list of cognitive biases. One of the most important is confirmation bias: we often process new information in a way that confirms our beliefs and expectations.
But up until now, researchers haven’t been able to fully explain under what conditions these distortions come into play and when exactly in the perceptual process they begin.
A study by researchers led by ETH Professor Rafael Polania and University of Zurich Professor Todd Hare, recently published in the journal Nature Human Behaviour, now shows that the brain adjusts visual perception as early as the retina when it is in our interest to do so. Put another way, we unconsciously see things in a distorted way when our survival, well-being, or other interests are at stake.
How slanted are the stripe patterns?
Through a series of experiments, Polania and his coauthors showed that people perceive the same things differently when the decision context changes. The study’s 86 participants were asked to repeatedly compare two black-and-white striped patterns – known as Gabor patches – and to say which pattern was closer to a 45-degree angle. The aim was to score as many points as possible.
In the first round, they received 15 points for every correct answer. In the second round, however, the decision context changed: it no longer mattered whether the answer was right or wrong. Instead, the score increased continuously with the chosen angle, from 0 up to 45 degrees. The participants saw the same pairs in both rounds.
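The two incentive schemes can be sketched as follows. This is a minimal illustration, not the study's published payoff rule: the 15-point reward comes from the article, while the linear angle-to-points mapping in the second round is an assumption made for clarity.

```python
def reward_accuracy(chosen_angle, other_angle):
    """Round 1: 15 points for correctly picking the patch whose
    stripes are closer to 45 degrees, 0 points otherwise."""
    if abs(chosen_angle - 45) < abs(other_angle - 45):
        return 15
    return 0

def reward_graded(chosen_angle):
    """Round 2 (assumed linear): the payoff no longer depends on
    being right; it simply grows with the chosen patch's angle,
    from 0 points at 0 degrees up to 45 points at 45 degrees."""
    return max(0.0, min(float(chosen_angle), 45.0))

# The same pair of patches scored under both incentive schemes:
a, b = 30, 40
print(reward_accuracy(b, a))               # 15: b is closer to 45 degrees
print(reward_graded(a), reward_graded(b))  # 30.0 40.0
```

Because the stimuli are identical across rounds, an observer with a fixed, objective percept would report the same angles in both; the study found instead that reported perception shifted toward whichever readings paid more.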
Strictly speaking, they ought to have reached the same conclusion both times. When we look at something, our retinas convert the reflected light into visual information that is transmitted to our brain via nerve pathways.
There, it is matched with our prior knowledge and experience and processed into a three-dimensional image. The visual information was the same in both rounds.
What we see depends on the context
When the researchers evaluated the experiment, they realised that the participants had adjusted their perceptions in the second round to score as many points as possible. If they actually saw the world objectively, there shouldn’t be any differences between the two rounds.
Participants’ assessments of the Gabor patches’ angles ought to have been the same each time, irrespective of the decision context. But this wasn’t the case: “People flexibly and unconsciously adjust their perceptions when it works to their advantage,” Polania says.
For Polania and his coauthors, calling cognitive distortions errors that lead us to inaccurate or irrational judgements and decisions misses the point. “Since our cognitive abilities are limited, it actually makes sense that we perceive the world in a distorted or selective way,” he says.
Even the retina prioritises useful information
Our visual perception seems to depend more strongly on the potential utility of information than previously thought. In another experiment, the researchers were able to show that our retinas already try to process information in the most advantageous way possible.
“As soon as we look at something, we try to maximise our own benefit. This means that cognitive bias starts long before we consciously think about something,” Polania says.
This is because a lot of information is lost in perception. It’s therefore more efficient for the brain to filter, prioritise and select information as early on as possible.
AI filters visual information like humans
To determine when visual information is distorted, a group of participants repeated the test with a variable score. Unlike the first experiment, however, the Gabor patch pairs were displayed at the top of the visual test field.
After this training round came the real task: the participants repeatedly saw a single Gabor patch at the top or bottom of the test area and had to estimate the angle of the stripes.
The researchers found that the participants assessed each patch differently depending on whether it appeared at the bottom or at the top of the test field. When subjects saw the patch at the top, their perceptions immediately adapted to the utility maximisation logic they had applied during the training round. This wasn’t the case when the patch appeared at the bottom.
The study’s authors also tested these results on an artificial intelligence (AI) agent that underwent the same experiments as the human subjects. To achieve the highest possible score, the AI agent likewise gave up on representing the world accurately from the moment it started processing the information. The agent exhibited the same perceptual biases observed in humans.
Biases are more deeply rooted than previously thought
The results of the study may also shed new light on the discussion of biases in humans and AI agents. Perhaps these distortions are so difficult to identify and change because they are an unconscious part of vision. They kick in long before we can think about what we see.
The fact that our perception is programmed to increase utility rather than to fully represent the world doesn’t make things any easier. Yet, the results of the study can also help us find new ways to identify and correct biases.
About this visual neuroscience and AI research news
Author: Christoph Elhardt
Source: ETH Zurich
Contact: Christoph Elhardt – ETH Zurich
Image: The image is credited to Neuroscience News
Original Research: Open access.
“Sensory perception relies on fitness-maximizing codes” by Schaffner J et al. Nature Human Behaviour
Abstract
Sensory perception relies on fitness-maximizing codes
Sensory information encoded by humans and other organisms is generally presumed to be as accurate as their biological limitations allow.
However, perhaps counterintuitively, accurate sensory representations may not necessarily maximize the organism’s chances of survival.
To test this hypothesis, we developed a unified normative framework for fitness-maximizing encoding by combining theoretical insights from neuroscience, computer science, and economics.
Behavioural experiments in humans revealed that sensory encoding strategies are flexibly adapted to promote fitness maximization, a result confirmed by deep neural networks with information capacity constraints trained to solve the same task as humans.
Moreover, human functional MRI data revealed that novel behavioural goals that rely on object perception induce efficient stimulus representations in early sensory structures.
These results suggest that fitness-maximizing rules imposed by the environment are applied at early stages of sensory processing in humans and machines.