Summary: For decades, neuroscientists believed the brain became more efficient during learning by making neurons act more independently—like a factory line where everyone does one job perfectly. However, a new study has flipped this theory on its head.
By tracking the visual cortex over several weeks, researchers discovered that as we master a new skill, our neurons actually become more coordinated, sharing more information rather than less. This “neural teamwork” allows the brain to blend what we see with what we expect to see, creating a flexible, “inference-based” model of the world that makes our perception more robust and adaptive.
Key Facts
- The Coordination Shift: As subjects learned to differentiate complex visual patterns, their sensory neurons shifted from acting independently to working as a highly coordinated team.
- Information Redundancy: Learning increases the amount of shared information among neurons, a direct contradiction to the long-held “efficiency” theory that favored neural independence.
- Active Engagement Only: This neural “teaming up” only occurs when the brain is actively performing a task and making decisions. It disappears when the brain is passively viewing the same stimuli.
- Feedback Loops: The researchers believe this coordination is driven by feedback from higher-level brain areas, allowing the brain to incorporate prior knowledge into sensory perception.
- AI Inspiration: These findings suggest that artificial intelligence could become more human-like by incorporating “generative feedback loops” that allow for faster learning from limited data.
Source: University of Rochester
When you get better at a skill—recognizing a familiar face in a crowd, spotting a typo at a glance, or anticipating the next move in a game—sensory neurons in your brain become more coordinated, sharing information rather than acting more independently.
That’s the conclusion of a new study by researchers at the University of Rochester and its Del Monte Institute for Neuroscience, published in Science, which challenges a long-held assumption in neuroscience that learning improves efficiency by minimizing repetition across neural signals.
Led by Shizhao Liu, a graduate student in the labs of Ralf Haefner and Adam Snyder, both faculty members in the Department of Brain and Cognitive Sciences, the study shows that learning instead increases shared activity among neurons. The findings could provide insights into learning disorders and inspire more flexible, human-like artificial intelligence tools.
“The dominant view in neuroscience has been that learning makes the brain more efficient by pushing neurons to act more independently, so information can be read out more cleanly,” Liu says. “Our results support a different idea, that sensory areas of the brain aren’t just passively encoding the world. They’re actively performing inference by combining what’s coming in with what the brain has learned to expect.”
How learning reshapes neural teamwork
For decades, researchers believed that learning streamlined how the brain processes information by reducing shared activity among neurons, allowing information to be read out more efficiently. The idea shaped how researchers thought about everything from perception to decision-making.
But the research from Liu, Haefner, Snyder, and their team suggests a different mechanism. Rather than becoming more independent, neurons become more coordinated as learning unfolds, increasing the amount of information they share, particularly when the brain is actively engaged in a task and making decisions.
This coordination reflects the brain’s growing reliance on internal expectations. As learning progresses, feedback from higher-level brain areas appears to shape how sensory neurons respond, allowing perception to incorporate both incoming information and what the brain has learned from past experiences.
Tracking neurons as learning unfolds
The researchers tracked the activity of the same small networks of neurons in the visual cortex over several weeks as subjects learned to tell apart different visual patterns. The team measured whether neurons were increasingly acting on their own or sharing more information as learning progressed.
The researchers discovered that before learning, neurons mostly worked independently. But as subjects honed their visual skills, the neurons started to behave more like a well-trained sports team, communicating and working together in a coordinated way.
“It’s a bit like a group of people solving a problem,” Snyder says. “Instead of everyone working in isolation as efficiently as possible, learning makes them communicate more. That shared information makes each individual better informed and potentially makes the group more flexible and adaptive.”
Importantly, this coordinated effect only appeared when subjects were actively performing a task and making decisions based on what they saw. When they passively looked at the same images without needing to respond, the effect disappeared.
The neurons most important for the task showed the biggest boost in coordination, especially at the moments when decisions were made.
But these are flexible, not permanent, changes. The researchers believe these shifts are guided by feedback signals from higher-level brain areas, allowing neurons to adjust their behavior on the fly, depending on the task.
The results support a growing idea in neuroscience that the brain isn’t a simple conveyor belt that passes information forward. Instead, it constantly blends what we see with what we expect to see, creating a richer, more informed picture of the world. And that blending requires groups of neurons to act together, not separately.
Insights for health and AI
Understanding how the brain coordinates neurons during learning could provide new insights into learning disorders and conditions that affect perception. It could also help scientists design artificial intelligence systems that generalize better by taking inspiration from the way the brain flexibly blends prior expectations with new sensory information.
“Most current artificial intelligence systems are built on discriminative architectures that map sensory inputs directly to outputs,” Haefner says.
“Our new research suggests that incorporating generative feedback loops—in which internal models shape sensory representations—may lead to systems that learn faster from limited data, are more robust to uncertainty, and adapt more flexibly to changing tasks.”
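To make the contrast in this quote concrete, here is a minimal toy sketch in Python, not the study's model or any particular AI system: a one-way discriminative mapping alongside a small generative feedback loop in which an internal estimate repeatedly predicts the input and is updated from the prediction error. All names, sizes, and update rules are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (illustrative only)
n_input, n_latent = 16, 4

# Discriminative route: a single feedforward mapping from input to output.
W_ff = rng.normal(size=(n_latent, n_input)) / np.sqrt(n_input)

def discriminative(x):
    """One-way mapping: sensory input -> output, no feedback."""
    return W_ff @ x

# Generative feedback loop (predictive-coding flavor): an internal estimate z
# predicts the input through a generative matrix G; the prediction error is
# fed back to update z, so the belief and the sensory representation
# (the residual) shape each other over a few iterations.
G = rng.normal(size=(n_input, n_latent)) / np.sqrt(n_latent)

def generative_inference(x, n_steps=50, lr=0.1):
    z = np.zeros(n_latent)                  # prior belief (flat prior here)
    for _ in range(n_steps):
        prediction = G @ z                  # top-down prediction of the input
        error = x - prediction              # bottom-up prediction error
        z = z + lr * (G.T @ error)          # gradient step on ||x - G z||^2
    return z, x - G @ z                     # final belief and residual

x = rng.normal(size=n_input)
print("feedforward output:", discriminative(x))
z_hat, residual = generative_inference(x)
print("inferred latent:", z_hat)
print("remaining prediction error norm:", np.linalg.norm(residual))
```

The point of the loop is that the representation handed onward depends on the current belief, which is one way to read "internal models shape sensory representations."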
Key Questions Answered:
Q: Isn't the brain more efficient when each neuron works independently?
A: That was the old way of thinking! We used to think the brain was like a lean startup where every neuron did its own thing to save energy. But this study shows the brain is more like a world-class sports team. By communicating and “repeating” key information, the neurons ensure that the message is loud, clear, and flexible enough to adapt if the situation changes.
Q: What could this mean for learning disorders?
A: If learning is about neurons learning to “speak the same language” and coordinate, then learning disorders might be a breakdown in that communication. Instead of a coordinated team, the neurons might be staying “isolated,” making it harder for the brain to turn sensory input into a useful internal model.
Q: How could these findings shape artificial intelligence?
A: Most current AI is “one-way”—it takes data and gives an answer. This study suggests that “two-way” AI (where the system’s expectations influence how it perceives new data) would be much more powerful, robust, and capable of learning from very few examples, just like a human brain.
Editorial Notes:
- This article was edited by a Neuroscience News editor.
- Journal paper reviewed in full.
- Additional context added by our staff.
About this neuroscience and learning research news
Author: Lindsey Valich
Source: University of Rochester
Contact: Lindsey Valich – University of Rochester
Image: The image is credited to Neuroscience News
Original Research: Closed access.
“Task learning increases information redundancy of neural responses in macaque visual cortex” by Shizhao Liu, Anton Pletenev, Ralf M. Haefner, and Adam C. Snyder. Science
DOI: 10.1126/science.adw7707
Abstract
Task learning increases information redundancy of neural responses in macaque visual cortex
INTRODUCTION
How does the brain transform sensory input into perception and behavior? The classic model guiding most of neuroscience and modern deep learning views perception as a largely feedforward process: Sensory signals are transformed from early to higher visual areas to make behaviorally relevant information more explicit. Feedback connections are thought to merely fine-tune this process—enhancing relevant features or suppressing noise during attention and learning.
An alternative framework, generative inference, posits that sensory processing is fundamentally bidirectional. In this view, neurons represent beliefs about causes in the external world, continuously updated by the exchange of information between sensory evidence (feedforward) and prior expectations (feedback).
RATIONALE
A recent theoretical prediction from the generative inference framework offers a decisive way to empirically distinguish these two models. Generative inference models predict an increased sharing of task-related information among sensory neurons while learning a perceptual decision-making task—manifesting as higher redundancy in their responses. This prediction directly opposes the classic model, which holds that learning and attention reduce redundancy and correlated variability to improve coding efficiency.
To test these conflicting predictions, we measured changes in information redundancy among neurons in visual area V4 of two macaque monkeys as they learned to discriminate between two orientations in two separate tasks (cardinal and oblique). Neural activity was recorded chronically using Utah arrays over weeks of training. We quantified information redundancy as the difference between the linear Fisher information carried by the intact population activity and that carried by the same population after removing correlations.
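For reference, the linear Fisher information mentioned here has a standard closed form, shown below together with one common way of modeling "removing correlations" by keeping only the diagonal of the noise covariance. This is a sketch under those standard definitions; the paper's exact estimator, bias corrections, and sign convention are not given in this summary, so those details are assumptions.

```latex
% Standard linear Fisher information for a neural population with
% stimulus-dependent mean response f(s) and noise covariance \Sigma(s):
I_{\mathrm{lin}}(s) = f'(s)^{\top}\,\Sigma(s)^{-1}\,f'(s)

% One common way to model "removing correlations": keep only the diagonal
% of the covariance (independent noise), so information sums across neurons:
I_{\mathrm{diag}}(s) = f'(s)^{\top}\,\mathrm{diag}(\Sigma(s))^{-1}\,f'(s)
                     = \sum_i \frac{f_i'(s)^2}{\sigma_i^2(s)}

% Redundancy, as described in this abstract, is computed from the difference
% between these two quantities; the sign convention used in the paper is not
% specified here.
```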
RESULTS
At the start of learning, redundancy was near zero, indicating largely independent neural responses. Over the course of training, redundancy increased, ultimately reaching levels where roughly half of each neuron’s information was shared with other recorded neurons. Redundancy also increased dynamically within trials, over hundreds of milliseconds, consistent with the gradual accumulation and sharing of information.
Increased redundancy did not reduce the information carried by the population; instead, the information carried by individual neurons increased. Both outcomes are predicted by generative inference. Learning-related changes in redundancy were stronger during task performance than during passive viewing on the same day, suggesting that the increase in redundancy, which reflects a redistribution of information across neurons, depends on active task engagement.
CONCLUSION
Learning a perceptual task increased information redundancy among sensory neurons—a result that contradicts conventional understanding of the roles of learning and attention. Rather than eliminating correlated variability, learning appears to redistribute information across neurons through feedback and recurrent interactions, enabling consistent beliefs about the sensory world.
These findings suggest that cortical sensory processing is best understood as a dynamic inference process—one that integrates prior expectations and sensory evidence—challenging the long-held assumption of a fundamentally unidirectional information flow during sensory processing in the brain.

