If your eyes deceive you, blame your brain. Many optical illusions work because what we see clashes with what we expect to see.
That 3D movie? Give credit to filmmakers who exploit binocular vision: the way the brain merges the slightly different images from the two eyes to create a sense of depth.
These are examples of the brain making sense of the information coming from the eyes in order to produce what we “see.” The brain combines the signals reaching your retina with learned internal models that predict what to expect as you move through the world. It infers the most likely cause of any given image on your retina, based on prior knowledge and experience.
Scientists have explored the complex puzzle of visual perception with increasing precision, discovering that individual neurons are tuned to detect very specific motions: up but not down, right but not left, and so on for every direction. These same neurons, located in the brain’s middle temporal visual area (MT), are also sensitive to relative depth.
Now a Harvard Medical School team led by Richard Born has uncovered key principles about the way those neurons work, explaining how the brain uses sensory information to guide the decisions that underlie behaviors. Their findings, reported in Neuron, illuminate the nature and origin of the neural signals used to solve perceptual tasks.
Based on their previous work, the researchers knew that they could selectively interfere with signals concerning depth, while leaving the signals for direction of motion intact. They wanted to learn what happened next, after the visual information was received and used to make a judgment about the visual stimulus.
Was the next step based on “bottom-up” information coming from the retina as sensory evidence? Or, as in optical illusions, did top-down information originating in the brain’s decision centers influence what happened in response to a visual stimulus?
“We were able to show that there’s a direct bottom-up contribution to these signals,” said Born, HMS professor of neurobiology and senior author of the paper. “It’s told us some very interesting things about how the brain makes calculations and combines information from different sources, and how that information influences behaviors.”
In their experiments with nonhuman primates, the researchers cooled specific brain areas to temporarily block their neurons’ signals, much the way ice makes a sprained ankle feel better by keeping pain neurons from firing.
The team selectively blocked pathways that carry information about visual depth (how far something is from the viewer) but not about the direction of motion. The animals were trained to watch flickering dots on a screen, something like “snow” on an old television, and to detect when the dots suddenly lined up and moved in one direction or changed in depth.
If the animal detected motion or a change in depth, it made an eye movement toward the changed stimulus and received a reward.
When the pathways were inactivated, the animals were less likely to detect depth changes, but their ability to detect motion was unaffected. This told the scientists that the animal was using feedforward information, not feedback, to make its decision. The findings help explain how relative motion and depth work together.
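The logic of that result can be illustrated with a toy simulation. This is not the authors’ model; all gains and names below are hypothetical. It assumes motion signals reach MT directly, while much of the depth signal arrives via the cooled V2/V3 pathway, so silencing that pathway should impair depth detection but leave motion detection intact.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 20000

def detect_rate(signal_gain, noise_sd=1.0, threshold=1.0):
    """Fraction of trials where a noisy pooled readout exceeds threshold.
    The readout is a single number standing in for MT population activity."""
    readout = signal_gain + rng.normal(0.0, noise_sd, n_trials)
    return (readout > threshold).mean()

# Hypothetical signal gains (illustrative values only):
motion_gain_direct = 2.0   # motion signal, survives V2/V3 cooling
depth_gain_v2v3 = 1.2      # depth signal routed through V2/V3
depth_gain_direct = 0.3    # small residual depth signal

# Control condition: all pathways active
motion_hit = detect_rate(motion_gain_direct)
depth_hit = detect_rate(depth_gain_v2v3 + depth_gain_direct)

# V2/V3 "cooled": the depth input via V2/V3 is silenced
motion_hit_cooled = detect_rate(motion_gain_direct)
depth_hit_cooled = detect_rate(depth_gain_direct)

print(f"motion detection: {motion_hit:.2f} -> {motion_hit_cooled:.2f}")
print(f"depth detection:  {depth_hit:.2f} -> {depth_hit_cooled:.2f}")
```

Under these assumptions, depth detection drops sharply during "cooling" while motion detection stays essentially unchanged, mirroring the pattern the team observed.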
“Combining two pathways that compute two different things in the same neurons is essential for vision, we think,” Born said. “But for these two particular calculations, first you have to compute them separately before you can put them together.”
Born believes there are other implications of their work.
“We think that the same operations that are happening in the visual system are happening at higher levels of the brain, so that by understanding these circuits that are easier to study we think we will gain traction on those higher level questions,” Born said.
About this visual neuroscience research
Funding: This work was supported by the Sackler Scholarship, Quan Fellowship, the Natural Sciences and Engineering Research Council of Canada, National Eye Institute grant R01 EY11379 and the Core Grant for Vision Research EY12196.
Source: Elizabeth Cooney – Harvard
Image Credit: Born lab
Original Research: Abstract for “A Modality-Specific Feedforward Component of Choice-Related Activity in MT” by Alexandra Smolyanskaya, Ralf M. Haefner, Stephen G. Lomber, and Richard T. Born in Neuron. Published online June 10 2015. doi:10.1016/j.neuron.2015.06.018
A Modality-Specific Feedforward Component of Choice-Related Activity in MT
Highlights
• V2/V3 inactivation reduces MT decision signals in a depth task but not a motion task
• Observations suggest that MT decision signals are partly due to input correlations
• A purely feedforward computational model can account for the data
Summary
The activity of individual sensory neurons can be predictive of an animal’s choices. These decision signals arise from network properties dependent on feedforward and feedback inputs; however, the relative contributions of these inputs are poorly understood. We determined the role of feedforward pathways to decision signals in MT by recording neuronal activity while monkeys performed motion and depth tasks. During each session, we reversibly inactivated V2 and V3, which provide feedforward input to MT that conveys more information about depth than motion. We thus monitored the choice-related activity of the same neuron both before and during V2/V3 inactivation. During inactivation, MT neurons became less predictive of decisions for the depth task but not the motion task, indicating that a feedforward pathway that gives rise to tuning preferences also contributes to decision signals. We show that our data are consistent with V2/V3 input conferring structured noise correlations onto the MT population.
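The summary’s last claim, that feedforward input can confer choice-related activity through shared noise correlations, can be sketched with a toy simulation. This is not the paper’s analysis; the setup below is hypothetical. A single recorded neuron and the decision pool both inherit a common noise source (standing in for V2/V3 depth input), which makes the neuron’s trial-to-trial fluctuations predictive of the choice. Removing the shared source drops choice probability back toward chance (0.5).

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 5000

def choice_probability(neuron, choice):
    """ROC area comparing a neuron's responses on trials ending in one
    choice vs the other (0.5 = activity unrelated to choice)."""
    a, b = neuron[choice == 1], neuron[choice == 0]
    # Normalized Mann-Whitney U statistic equals the ROC area
    greater = (a[:, None] > b[None, :]).mean()
    ties = (a[:, None] == b[None, :]).mean()
    return greater + 0.5 * ties

shared = rng.normal(0.0, 1.0, n_trials)      # noise shared via the feedforward input
private = rng.normal(0.0, 1.0, n_trials)     # noise private to the recorded neuron
pool_noise = rng.normal(0.0, 1.0, n_trials)  # noise in the rest of the decision pool

choice = (shared + pool_noise > 0).astype(int)   # the animal's judgment

neuron_intact = shared + private                  # neuron inherits the shared noise
neuron_cooled = rng.normal(0.0, 1.0, n_trials)    # shared input silenced

cp_intact = choice_probability(neuron_intact, choice)
cp_cooled = choice_probability(neuron_cooled, choice)
print(f"CP intact: {cp_intact:.2f}, CP during inactivation: {cp_cooled:.2f}")
```

In this sketch the choice probability is well above 0.5 only while the shared feedforward noise is present, illustrating how a purely feedforward mechanism can produce, and inactivation can reduce, choice-related activity.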