How the brain infers causal structure for visual awareness.
Our brains must deal with a great deal of uncertainty: incoming sensory information is noisy and incomplete, and our environment is continuously changing and unpredictable. Researchers at Radboud University show that the brain creates a coherent story by weighing multiple probabilities about the state of the body and the environment. PLOS Computational Biology published their results on March 11.
Our eyes continuously make very fast movements (saccades) to scan the environment. Watch a friend's eyes and you will notice that they are never still. Yet when you watch your own eyes in a mirror, you never see them move: during these rapid eye movements we are virtually blind. How, then, do we detect changes in the environment that occur during an eye movement?
Researchers at the Donders Institute, the brain research center of Radboud University, performed an experiment to determine how the brain deals with this uncertainty. Participants watched dots on a screen, and at the moment an eye movement was made, one of the dots jumped to a slightly different location. Participants were then asked to report the initial location of the displaced dot. As you might expect, when the dot made a large jump, participants perceived the change in location; when the jump was small, they often did not.
The researchers show that the brain uses causal inference to solve this problem. Jeroen Atsma, one of the authors of the study, explains what this means: ‘After the eye movement, the brain considers two possible causes of the new visual image simultaneously: the dot has remained stationary, or the dot has moved. But because the visual input is noisy, the brain can never determine with absolute certainty which is the case, so both possibilities are kept in play. The brain balances the two causal probabilities to determine the initial location of the dot, which is in fact the statistically optimal strategy.’
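This balancing of two causal hypotheses can be sketched numerically. The following is a minimal illustration, not the authors' actual model or parameter values: given a remembered presaccadic location and a noisy postsaccadic observation, the estimate mixes an "integration" hypothesis (same object, no jump: fuse memory and new vision) and a "segregation" hypothesis (the object moved: trust memory alone), weighted by the posterior probability of each cause. All noise parameters below are hypothetical.

```python
import math

def causal_inference_estimate(pre_mem, post_obs, sigma_mem=2.0, sigma_obs=1.0,
                              sigma_jump=5.0, p_stationary=0.5):
    """Bayesian causal-inference sketch for one trial (illustrative values).

    pre_mem:  remembered presaccadic target location (deg)
    post_obs: observed postsaccadic location (deg)
    Cause 1: target stayed put -> integrate the two signals.
    Cause 2: target jumped     -> postsaccadic input is irrelevant
             to the initial location, so rely on memory alone.
    """
    # Likelihood of the observed discrepancy under each cause
    d = post_obs - pre_mem
    var_same = sigma_mem**2 + sigma_obs**2   # no jump: sensory/memory noise only
    var_jump = var_same + sigma_jump**2      # jump adds displacement variance
    like_same = math.exp(-d**2 / (2 * var_same)) / math.sqrt(2 * math.pi * var_same)
    like_jump = math.exp(-d**2 / (2 * var_jump)) / math.sqrt(2 * math.pi * var_jump)

    # Posterior probability that the target remained stationary
    p_same = (like_same * p_stationary) / (
        like_same * p_stationary + like_jump * (1 - p_stationary))

    # Cause-conditional estimates of the presaccadic location
    w = sigma_obs**-2 / (sigma_obs**-2 + sigma_mem**-2)
    est_integrated = w * post_obs + (1 - w) * pre_mem   # fuse memory + vision
    est_segregated = pre_mem                            # trust memory alone

    # Model averaging: weight the two estimates by the causal posterior
    return p_same * est_integrated + (1 - p_same) * est_segregated, p_same
```

With these toy parameters, a small jump yields a high probability of the "stationary" cause, so the reported initial location is pulled toward the postsaccadic image and the displacement goes unnoticed; a large jump yields a low probability, so the two signals are kept separate. Individual differences of the kind the study reports could be captured by varying a single parameter such as `p_stationary`.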
‘We found large differences among participants,’ says Atsma, who is working on a PhD thesis that includes this study. ‘All participants considered both causes simultaneously, but they differed in how they weighted the two. A model predicted these individual differences by changing just one parameter. It is now apparent that different causes and their probabilities are considered simultaneously, something that does not follow directly from intuition.’
‘The brain determines the causes of events, which we call causal inference,’ says Pieter Medendorp, professor of Sensorimotor Integration at the Donders Institute. ‘We have captured something important about how the brain deals with conflicting information and uncertainty in signals. You won’t notice the causal inference process; it is entirely unconscious. With this discovery we can measure how signals are combined and quantify differences between people and situations.’
About this neuroscience research
Source: Donders Institute
Image Source: The image is adapted from the Donders Institute press release.
Original Research: Full open access research for “Causal Inference for Spatial Constancy across Saccades” by Jeroen Atsma, Femke Maij, Mathieu Koppen, David E. Irwin, and W. Pieter Medendorp in PLOS Computational Biology. Published online March 11, 2016. doi:10.1371/journal.pcbi.1004766
Causal Inference for Spatial Constancy across Saccades
Our ability to interact with the environment hinges on creating a stable visual world despite the continuous changes in retinal input. To achieve visual stability, the brain must distinguish between retinal image shifts caused by eye movements and shifts due to movements of the visual scene. This process appears not to be flawless: during saccades, we often fail to detect whether visual objects remain stable or move, which is called saccadic suppression of displacement (SSD). How does the brain evaluate the memorized information of the presaccadic scene and the actual visual feedback of the postsaccadic visual scene in the computations for visual stability? Using an SSD task, we test how participants localize the presaccadic position of the fixation target, the saccade target, or a peripheral non-foveated target that was displaced parallel or orthogonal to a horizontal saccade, and subsequently viewed for three different durations. Results showed different localization errors for the three targets, depending on the viewing time of the postsaccadic stimulus and its spatial separation from the presaccadic location. We modeled the data through a Bayesian causal inference mechanism, in which at the trial level an optimal mixing of two possible strategies, integration vs. separation of the presaccadic memory and the postsaccadic sensory signals, is applied. Fits of this model generally outperformed other plausible decision strategies for producing SSD. Our findings suggest that humans exploit a Bayesian inference process with two causal structures to mediate visual stability.