Summary: An fMRI study conducted by University of Glasgow researchers reveals how our brains can predict what our eyes will see next.
Source: University of Glasgow.
Neuroscientists at the University of Glasgow have shown how the human brain can predict what our eyes will see next, using functional magnetic resonance imaging (fMRI).
In a new study published in the Nature journal Scientific Reports, researchers have gained a greater understanding of visual mechanisms, and how seeing is a constant two-way dialogue between the brain and the eyes.
The research, led by Professor Lars Muckli of the University of Glasgow, used fMRI and a visual illusion to show that the brain anticipates the information it will see when the eyes next move.
The illusion involves two stationary flashing squares that appear to the observer as a single square moving between the two locations, because the brain predicts motion. During these flashes, the authors instructed participants to move their eyes. Imaging the visual cortex, the researchers found that the prediction of motion updated to a new spatial position in the cortex along with the eye movement.
We move our eyes approximately four times per second, meaning our brains have to process new visual information every 250 milliseconds. Nevertheless, the world appears stable. If you were to move a video camera that frequently, the footage would appear jumpy. The reason we still perceive the world as stable is that our brains think ahead: the brain predicts what it is going to see after you have moved your eyes.
Professor Lars Muckli, of the Institute of Neuroscience & Psychology, said: “This study is important because it demonstrates how fMRI can contribute to this area of neuroscience research. Further to that, finding a feasible mechanism for brain function will contribute to brain-inspired computing and artificial intelligence, as well as aid our investigation into mental disorders.”
The study also reveals the potential for fMRI to contribute to this area of neuroscience research, as the authors were able to detect a processing difference of only 32 ms, much faster than is typically thought possible with fMRI.
Dr Grace Edwards said: “Visual information is received from the eyes and processed by the visual system in the brain. We call this visual information “feedforward” input. At the same time, the brain also sends information to the visual system; this information is called “feedback”.
“Feedback information influences our perception of the feedforward input using expectations based on our memories of similar perceptual events. Feedforward and feedback information interact with one another to produce the visual scenes we perceive every day.”
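The interaction Dr Edwards describes is the core idea of predictive coding. As a rough illustrative sketch (not the authors' actual model), the loop can be written in a few lines of Python: the brain's feedback is a prediction of the feedforward input, the mismatch between them is the prediction error, and the prediction is nudged toward the input on each cycle, so well-predicted input produces a small error:

```python
def predictive_coding_step(prediction, sensory_input, learning_rate=0.5):
    """One update of a toy predictive-coding loop.

    feedforward: the sensory input arriving from the eyes
    feedback:    the brain's current prediction of that input
    The mismatch (prediction error) drives the update.
    """
    error = sensory_input - prediction               # unpredicted input -> large error
    new_prediction = prediction + learning_rate * error
    return new_prediction, error

# A repeated stimulus: the internal model converges on it, so the
# error shrinks with each cycle -- analogous to predicted input
# evoking attenuated responses in V1.
prediction = 0.0
for stimulus in [1.0, 1.0, 1.0, 1.0]:
    prediction, error = predictive_coding_step(prediction, stimulus)
    print(f"prediction={prediction:.3f}  error={error:+.3f}")
```

Running this shows the error halving on every step as the prediction converges on the stimulus; a sudden change in the stimulus would make the error jump again, which is the hypothetical analogue of the amplified V1 response to unpredicted input described in the abstract below.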
Funding: The study, “Predictive feedback to V1 dynamically updates with sensory input” is published in Scientific Reports. The research was funded by the BBSRC and a Human Brain Project grant.
Source: Ali Howard – University of Glasgow
Publisher: Organized by NeuroscienceNews.com.
Image Source: NeuroscienceNews.com image is adapted from University of Glasgow news release.
Original Research: Full open access research for “Predictive feedback to V1 dynamically updates with sensory input” by Grace Edwards, Petra Vetter, Fiona McGruer, Lucy S. Petro & Lars Muckli in Scientific Reports. Published online November 28 2017 doi:10.1038/s41598-017-16093-y
Predictive feedback to V1 dynamically updates with sensory input
Predictive coding theories propose that the brain creates internal models of the environment to predict upcoming sensory input. Hierarchical predictive coding models of vision postulate that higher visual areas generate predictions of sensory inputs and feed them back to early visual cortex. In V1, sensory inputs that do not match the predictions lead to amplified brain activation, but does this amplification process dynamically update to new retinotopic locations with eye-movements? We investigated the effect of eye-movements in predictive feedback using functional brain imaging and eye-tracking whilst presenting an apparent motion illusion. Apparent motion induces an internal model of motion, during which sensory predictions of the illusory motion feed back to V1. We observed attenuated BOLD responses to predicted stimuli at the new post-saccadic location in V1. Therefore, pre-saccadic predictions update their retinotopic location in time for post-saccadic input, validating dynamic predictive coding theories in V1.