Why Learning Makes Brain Cells Work Together, Not Apart

Summary: For decades, neuroscientists believed that the brain became more efficient during learning by making neurons act more independently—reducing “redundancy” to clean up the signal. However, a new study has flipped this theory on its head.

Researchers found that as we master a skill, our sensory neurons actually become more coordinated, sharing more information rather than acting in isolation. This “teamwork” allows the brain to blend incoming sensory data with internal expectations, making our perception more robust and flexible.

Key Facts

  • The Coordination Shift: Learning increases shared activity among neurons, particularly when the brain is actively making decisions.
  • Predictive Inference: The brain isn’t just a passive recorder; it uses coordinated neural activity to combine what we see with what it expects to see based on past experience.
  • Active Engagement Required: This neural “teaming up” only occurs during active tasks. When subjects passively viewed the same stimuli without needing to respond, the coordination disappeared.
  • Flexible Systems: These changes are temporary and guided by feedback from higher-level brain areas, allowing the brain to adjust neural behavior on the fly.
  • AI Evolution: The findings suggest that AI could become more human-like by incorporating “generative feedback loops” that allow systems to learn faster from less data.

Source: University of Rochester

When you get better at a skill—recognizing a familiar face in a crowd, spotting a typo at a glance, or anticipating the next move in a game—sensory neurons in your brain become more coordinated, sharing information rather than acting more independently.

That’s the conclusion of a new study by researchers at the University of Rochester and its Del Monte Institute for Neuroscience, published in Science, which challenges a long-held assumption in neuroscience that learning improves efficiency by minimizing repetition across neural signals.

New research shows that as learning unfolds, neurons become more coordinated, sharing information to improve task performance. Credit: Neuroscience News

Led by Shizhao Liu, a graduate student in the labs of Ralf Haefner and Adam Snyder, both faculty members in the Department of Brain and Cognitive Sciences, the study shows that learning instead increases shared activity among neurons. The findings could provide insights into learning disorders and inspire more flexible, human-like artificial intelligence tools.

“The dominant view in neuroscience has been that learning makes the brain more efficient by pushing neurons to act more independently, so information can be read out more cleanly,” Liu says.

“Our results support a different idea, that sensory areas of the brain aren’t just passively encoding the world. They’re actively performing inference by combining what’s coming in with what the brain has learned to expect.”

How learning reshapes neural teamwork

For decades, researchers believed that learning streamlined how the brain processes information by reducing shared activity among neurons, allowing information to be read out more efficiently. The idea shaped how researchers thought about everything from perception to decision-making.

But the research from Liu, Haefner, Snyder, and their team suggests a different mechanism. Rather than becoming more independent, neurons become more coordinated as learning unfolds, increasing the amount of information they share, particularly when the brain is actively engaged in a task and making decisions.

This coordination reflects the brain’s growing reliance on internal expectations. As learning progresses, feedback from higher-level brain areas appears to shape how sensory neurons respond, allowing perception to incorporate both incoming information and what the brain has learned from past experiences.

Tracking neurons as learning unfolds

The researchers tracked the activity of the same small networks of neurons in the visual cortex over several weeks as subjects learned to tell apart different visual patterns. The team measured whether neurons were increasingly acting on their own or sharing more information as learning progressed.

The researchers discovered that before learning, neurons mostly worked independently. But as subjects honed their visual skills, the neurons started to behave more like a well-trained sports team, communicating and working together in a coordinated way.

“It’s a bit like a group of people solving a problem,” Snyder says. “Instead of everyone working in isolation as efficiently as possible, learning makes them communicate more. That shared information makes each individual better informed and potentially makes the group more flexible and adaptive.”

Importantly, this coordinated effect only appeared when subjects were actively performing a task and making decisions based on what they saw. When they passively looked at the same images without needing to respond, the effect disappeared.

The neurons most important for the task showed the biggest boost in coordination, especially at the moments when decisions were made.

But these are flexible, not permanent, changes. The researchers believe these shifts are guided by feedback signals from higher-level brain areas, allowing neurons to adjust their behavior on the fly, depending on the task.

The results support a growing idea in neuroscience that the brain isn’t a simple conveyor belt that passes information forward. Instead, it constantly blends what we see with what we expect to see, creating a richer, more informed picture of the world. And that blending requires groups of neurons to act together, not separately.

Insights for health and AI

Understanding how the brain coordinates neurons during learning could provide new insights into learning disorders and conditions that affect perception. It could also help scientists design artificial intelligence systems that generalize better by taking inspiration from the way the brain flexibly blends prior expectations with new sensory information.

“Most current artificial intelligence systems are built on discriminative architectures that map sensory inputs directly to outputs,” Haefner says.

“Our new research suggests that incorporating generative feedback loops—in which internal models shape sensory representations—may lead to systems that learn faster from limited data, are more robust to uncertainty, and adapt more flexibly to changing tasks.”
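As an illustration of what such a “generative feedback loop” might look like computationally, here is a minimal predictive-coding-style sketch. This is not the authors’ model: the linear generative model, the mixing matrix `A`, and all parameter values are invented for the example. The point is that the internal model’s top-down prediction shapes the sensory representation, with the prediction error fed back to update the internal belief.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear generative model: sensory input x is produced by
# latent causes z through a mixing matrix A (all values invented).
A = rng.normal(size=(20, 5))                  # internal model's weights
z_true = rng.normal(size=5)                   # true latent causes
x = A @ z_true + 0.01 * rng.normal(size=20)   # noisy sensory input

# Feedback loop: the top-down prediction A @ z is compared with the input,
# and the prediction error is fed back to refine the latent representation.
z = np.zeros(5)
lr = 0.01
for _ in range(500):
    error = x - A @ z        # feedforward prediction error
    z += lr * (A.T @ error)  # feedback update of the internal belief

# The inferred causes approach the true ones as the loop settles.
print("max error:", np.abs(z - z_true).max())
```

Each iteration is a gradient step that reduces the prediction error, so the representation converges toward the causes that best explain the input under the internal model—a toy version of “internal models shaping sensory representations.”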

Key Questions Answered:

Q: I thought the brain got “leaner” as we learned. Isn’t redundancy bad?

A: That was the old “efficiency” model. But think of it like a sports team: if every player works in total isolation, the team is fragile. By communicating and sharing information, the group becomes more adaptive. This “redundancy” is actually the brain’s way of ensuring the message is loud and clear even in uncertain conditions.

Q: Does this happen every time I learn something new?

A: Only if you’re actively engaged! The study showed that if you’re just “zoning out” while looking at something, your neurons don’t coordinate. The “teamwork” only kicks in when you are focused and making decisions based on that information.

Q: How could this change how we build AI?

A: Most AI today is “one-way”—input goes in, answer comes out. This research suggests that adding “feedback loops”—where the AI’s internal model helps shape how it sees new data—could make machines much better at handling uncertainty, just like humans.


About this learning and neuroscience research news

Author: Lindsey Valich
Source: University of Rochester
Contact: Lindsey Valich – University of Rochester
Image: The image is credited to Neuroscience News

Original Research: Open access.
“Task learning increases information redundancy of neural responses in macaque visual cortex” by Shizhao Liu, Anton Pletenev, Ralf M. Haefner, and Adam C. Snyder. Science.
DOI: 10.1126/science.adw7707


Abstract

Task learning increases information redundancy of neural responses in macaque visual cortex

INTRODUCTION

How does the brain transform sensory input into perception and behavior? The classic model guiding most of neuroscience and modern deep learning views perception as a largely feedforward process: Sensory signals are transformed from early to higher visual areas to make behaviorally relevant information more explicit. Feedback connections are thought to merely fine-tune this process—enhancing relevant features or suppressing noise during attention and learning.

An alternative framework, generative inference, posits that sensory processing is fundamentally bidirectional. In this view, neurons represent beliefs about causes in the external world, continuously updated by the exchange of information between sensory evidence (feedforward) and prior expectations (feedback).
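The core computation behind generative inference—combining feedforward sensory evidence with a feedback prior expectation—can be illustrated with a toy Gaussian Bayesian update. This is a didactic sketch, not the paper’s model; all numerical values are invented for the example.

```python
import numpy as np

def gaussian_posterior(prior_mean, prior_var, obs, obs_var):
    """Combine a Gaussian prior belief with a Gaussian sensory likelihood.

    The posterior mean is a precision-weighted average: the more reliable
    source (lower variance) pulls the estimate harder.
    """
    prior_prec = 1.0 / prior_var
    obs_prec = 1.0 / obs_var
    post_var = 1.0 / (prior_prec + obs_prec)
    post_mean = post_var * (prior_prec * prior_mean + obs_prec * obs)
    return post_mean, post_var

# Learned expectation (feedback): orientations near 45 degrees are likely.
# Noisy measurement of the current stimulus (feedforward).
mean, var = gaussian_posterior(prior_mean=45.0, prior_var=4.0,
                               obs=50.0, obs_var=4.0)
print(mean, var)  # 47.5 2.0 — equal reliabilities, so the belief lands halfway
```

Note that the posterior variance (2.0) is lower than either source alone (4.0): blending prior and evidence yields a more certain estimate, which is the sense in which inference makes perception “more robust.”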

RATIONALE

A recent theoretical prediction from the generative inference framework offers a decisive way to empirically distinguish these two models. Generative inference models predict an increased sharing of task-related information among sensory neurons while learning a perceptual decision-making task—manifesting as higher redundancy in their responses. This prediction directly opposes the classic model, which holds that learning and attention reduce redundancy and correlated variability to improve coding efficiency.

To test these conflicting predictions, we measured changes in information redundancy among neurons in visual area V4 of two macaque monkeys as they learned to discriminate between two orientations in two separate tasks (cardinal and oblique). Neural activity was recorded chronically using Utah arrays over weeks of training.

We quantified information redundancy as the difference between the linear Fisher information carried by the intact population activity and that carried by the same population after removing correlations.
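In code, that measure might be sketched as follows, using synthetic data rather than the study’s recordings or analysis pipeline. Linear Fisher information is I = f′ᵀ Σ⁻¹ f′, where f′ holds the tuning-curve slopes and Σ the noise covariance; “removing correlations” is approximated here by keeping only Σ’s diagonal. The sign convention (shuffled minus intact, so positive values indicate redundancy) is an assumption of this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic population of N neurons discriminating two orientations.
# All numbers are invented; this is not the study's data or pipeline.
N = 30
f_prime = rng.normal(size=N)   # tuning-curve slopes df/ds

# Noise covariance with a component aligned with f' — the
# "information-limiting" correlation structure that creates redundancy.
noise = rng.normal(size=(N, N)) * 0.1
sigma = noise @ noise.T + np.eye(N) + 0.5 * np.outer(f_prime, f_prime)

def linear_fisher(f_prime, sigma):
    """Linear Fisher information: I = f'^T Sigma^{-1} f'."""
    return f_prime @ np.linalg.solve(sigma, f_prime)

I_intact = linear_fisher(f_prime, sigma)                      # with correlations
I_shuffled = linear_fisher(f_prime, np.diag(np.diag(sigma)))  # correlations removed

# Positive redundancy: neurons share information, so the intact population
# carries less independent information than the decorrelated one.
redundancy = I_shuffled - I_intact
print(f"intact={I_intact:.2f}  shuffled={I_shuffled:.2f}  redundancy={redundancy:.2f}")
```

With correlations aligned to the signal direction, the shuffled (decorrelated) population carries more linearly decodable information than the intact one, so redundancy comes out positive—the qualitative signature the study reports emerging with learning.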

RESULTS

At the start of learning, redundancy was near zero, indicating largely independent neural responses. Over the course of training, redundancy increased, ultimately reaching levels where roughly half of each neuron’s information was shared with other recorded neurons.

Redundancy also increased dynamically within trials, over hundreds of milliseconds, consistent with the gradual accumulation and sharing of information. Increased redundancy did not result in a loss of information in the population but, instead, the individual-neuron information increased—both predicted by generative inference.

Learning-related changes in redundancy were stronger during task performance compared with passive viewing on the same day, which suggests that the increase in redundancy owing to a redistribution of information depends on active task engagement.

CONCLUSION

Learning a perceptual task increased information redundancy among sensory neurons—a result that contradicts conventional understanding of the roles of learning and attention. Rather than eliminating correlated variability, learning appears to redistribute information across neurons through feedback and recurrent interactions, enabling consistent beliefs about the sensory world.

These findings suggest that cortical sensory processing is best understood as a dynamic inference process—one that integrates prior expectations and sensory evidence—challenging the long-held assumption of a fundamentally unidirectional information flow during sensory processing in the brain.
