Summary: Researchers have discovered that the brain begins summarizing complex visual scenes much earlier than previously believed. The study demonstrates that the primary visual cortex (V1), traditionally thought to process only simple edges and lines, actually computes "ensemble perception."
This process allows the brain to ignore individual moving dots and instead extract a statistical “gist,” such as the average direction and the level of “noise” (variance) in a scene. These early summaries are then passed to the posterior parietal cortex (PPC), where they are transformed into abstract categories to guide decision-making.
Key Facts
- Early Compression: The primary visual cortex (V1) doesn’t just relay raw data; it compresses complex input into useful statistical summaries (mean and variance) at the very first stage of cortical processing.
- The “Untuned” Secret: Researchers found that even neurons that seemed “untuned” (showing no clear individual preference) contributed significantly to the overall population code, showing that the brain relies on collective neural activity.
- Task-Driven Bias: When mice were actively performing a task, their V1 representations became “biased” toward learned categories, showing that behavioral context can reshape even the earliest levels of vision.
- Hierarchy of Labor: While V1 handles the “math” of the scene (averages and spreads), the PPC handles the “logic,” turning those numbers into behavioral choices like “moving left” or “moving right.”
Source: Institute for Basic Science
When animals move through complex visual environments, the brain cannot afford to analyze every detail one by one. Instead, it rapidly extracts the overall structure of the scene—for example, the mean (average) direction of motion across many moving elements.
This ability, known as ensemble perception, allows the brain to capture the “gist” of a scene at a glance. Yet where, and how, this statistical summary is computed in the brain has remained unclear.
A research team led by LEE Doyun and KIM Yee-Joon at the Center for Memory and Glioscience within the Institute for Basic Science (IBS) has now shown that this process begins much earlier in the visual system than previously thought.
Co-corresponding author LEE Doyun said, “What is especially striking is that this transformation begins already in primary visual cortex. The brain starts compressing complex sensory input into useful statistical summaries at a very early stage.”
In the brain, visual information is processed step by step along a hierarchy of regions. The primary visual cortex (V1) is the first cortical stage that receives visual input from the eyes and is traditionally thought to process simple features such as edges or motion direction. Further downstream, the posterior parietal cortex (PPC) integrates this information into more abstract representations that are linked to perception and decision-making.
The researchers found that V1 already encodes not only the mean direction of complex motion patterns, but also their variance—how dispersed or uncertain the motion is. This information is then carried forward to PPC, where it is reorganized into more abstract category representations that can guide behavior.
To investigate how the brain extracts these visual summaries, the team trained head-fixed mice to classify random-dot motion stimuli according to their overall direction.
Unlike conventional motion displays, in which many dots move coherently in a single direction, the stimuli in this study were designed so that each dot moved in a different direction sampled from a controlled distribution. This allowed the researchers to independently manipulate the mean motion direction and its variability.
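As a rough illustration of this stimulus design (not the authors' actual code), each dot's direction can be drawn from a circular distribution such as a von Mises, whose mean sets the global motion direction and whose concentration parameter controls the variability. The function names and parameter choices below are illustrative assumptions:

```python
import numpy as np

def make_dot_directions(mean_deg, kappa, n_dots=100, rng=None):
    """Sample one motion direction per dot from a von Mises distribution.

    mean_deg : global mean motion direction in degrees (the signal).
    kappa    : concentration; large kappa -> low variance (coherent motion),
               small kappa -> high variance (noisy motion).
    """
    if rng is None:
        rng = np.random.default_rng()
    mean_rad = np.deg2rad(mean_deg)
    dirs = rng.vonmises(mean_rad, kappa, size=n_dots)  # radians in (-pi, pi]
    return np.rad2deg(dirs) % 360

def circular_mean_deg(dirs_deg):
    """Vector-average (circular) mean, handling wrap-around at 0/360."""
    rad = np.deg2rad(np.asarray(dirs_deg))
    return np.rad2deg(np.arctan2(np.sin(rad).mean(), np.cos(rad).mean())) % 360
```

Sampling each dot independently like this is what makes the mean and the spread of the display separately controllable: the same global mean can be presented with tightly clustered or widely scattered individual dot directions.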
The mice successfully learned to group eight possible mean motion directions into two motion categories. Even when the motion of individual dots varied widely, the animals could still categorize the overall direction, indicating that they were not simply following a few prominent local signals. Instead, they were extracting a true statistical summary of the scene.
“We showed that the brain does not process complex visual input by tracking each element individually,” said LEE Young-Beom, first author of the study. “Instead, it extracts stable summary information such as mean and variance to rapidly capture the overall structure of the environment.”
Using miniscope calcium imaging, the researchers recorded neural activity in both V1 and PPC while the mice performed the task or passively viewed the stimuli. At the level of individual neurons, only a relatively small subset showed clear selectivity for the global mean motion direction.
At the population level, however, neural activity in both regions robustly encoded the mean motion direction—even though most single neurons did not appear strongly tuned on their own.
The study revealed a clear division of labor across the cortical hierarchy. In V1, population activity encoded both the mean and the variance of motion direction, indicating that early visual cortex already computes summary statistics rather than merely relaying local signals. In PPC, by contrast, the representation shifted toward more abstract category information, suggesting that sensory summaries are progressively transformed into task-relevant signals.
The researchers also found that task demands could reshape early visual representations. During active categorization, the neural representation of mean motion direction in V1 became systematically biased toward the center of the learned category. This suggests that even early visual cortex is not purely stimulus-driven, but can be influenced by learning and behavioral context.
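A toy way to picture this category bias (purely a sketch of the concept, not the authors' model) is a shrinkage rule that pulls the represented direction part of the way toward the learned category center; the function and the `lam` parameter below are hypothetical:

```python
import numpy as np

def biased_estimate_deg(stim_deg, category_center_deg, lam=0.3):
    """Toy bias model: pull the represented direction toward the learned
    category center by a fraction lam (0 = no bias, 1 = full collapse).
    Works on unit vectors so wrap-around at 0/360 degrees is handled."""
    s = np.deg2rad(stim_deg)
    c = np.deg2rad(category_center_deg)
    v = (1 - lam) * np.array([np.cos(s), np.sin(s)]) \
        + lam * np.array([np.cos(c), np.sin(c)])
    return np.rad2deg(np.arctan2(v[1], v[0])) % 360
```

With `lam = 0` the estimate matches the stimulus exactly; as `lam` grows, a 30° stimulus near a 90° category center is reported as progressively closer to 90°, mirroring the attraction toward the category center observed in V1 during active categorization.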
Another notable finding was that seemingly “untuned” neurons still contributed substantially to the population code. Even neurons that did not meet conventional selectivity criteria helped support accurate representation of global motion direction when analyzed collectively, highlighting the importance of distributed population coding in the brain.
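The idea that weakly selective neurons can still carry a readable population signal can be demonstrated with a minimal simulation (an illustrative sketch under assumed numbers, not the study's analysis): each simulated neuron has a tiny category signal buried in noise, yet a simple linear readout over the whole population decodes the category well above chance.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_trials = 200, 400

# Binary motion category per trial (e.g., "left" vs "right")
category = rng.integers(0, 2, n_trials)

# Each neuron carries a weak signal swamped by trial-to-trial noise,
# so no single neuron looks convincingly "tuned" on its own.
weights = rng.normal(0.0, 0.1, n_neurons)
activity = np.outer(category, weights) + rng.normal(0.0, 1.0, (n_trials, n_neurons))

# Per-neuron decoding: threshold each neuron at its own mean response
single_accs = []
for i in range(n_neurons):
    pred = (activity[:, i] > activity[:, i].mean()).astype(int)
    acc = (pred == category).mean()
    single_accs.append(max(acc, 1.0 - acc))
single_acc = float(np.mean(single_accs))

# Population decoding: project trials onto the category mean-difference axis
axis = activity[category == 1].mean(0) - activity[category == 0].mean(0)
score = activity @ axis
pop_pred = (score > score.mean()).astype(int)
pop_acc = float((pop_pred == category).mean())

print(f"mean single-neuron accuracy: {single_acc:.2f}")  # near chance (0.5)
print(f"population accuracy:         {pop_acc:.2f}")     # well above chance
```

Note this sketch evaluates the readout in-sample for brevity; the qualitative point is that pooling many individually near-chance neurons yields a far more reliable code than any single cell.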
Co-corresponding author KIM Yee-Joon added, “Our findings suggest that visual information is progressively reorganized—from summary statistics in early visual cortex to more abstract category representations in higher cortical areas. This provides an important clue to how the brain efficiently makes sense of complex scenes.”
By revealing how the brain converts noisy sensory input into stable statistical summaries and then into abstract category signals, the study provides new insight into a fundamental principle of perception. The findings may help explain how the brain rapidly extracts meaningful structure from complex environments and could also inform future work in artificial intelligence and computer vision.
Key Questions Answered:
Q: Does your brain track every single element in a busy scene, like each dot in a moving swarm?
A: No, that would be a total system overload. This study shows your brain (starting at the very first stop in the cortex) calculates the “average” direction of the swarm and how “messy” the motion is. It treats the swarm as one statistical object rather than a thousand individual ones.
Q: Why do the seemingly “untuned” neurons matter?
A: Scientists used to ignore neurons that didn’t “fire” strongly for a specific direction. However, this study shows those “quiet” neurons are like backup singers in a choir—individually you might not hear them, but collectively they make the “song” (the visual representation) much clearer and more accurate.
Q: Could these findings be useful for artificial intelligence?
A: Absolutely. Current AI often struggles with “noise” in complex environments. By mimicking how the brain uses early statistical compression to simplify data, engineers could build computer vision systems that are much faster and more efficient at navigating the real world.
Editorial Notes:
- This article was edited by a Neuroscience News editor.
- Journal paper reviewed in full.
- Additional context added by our staff.
About this visual neuroscience research news
Author: William Suh
Source: Institute for Basic Science
Contact: William Suh – Institute for Basic Science
Image: The image is credited to Neuroscience News
Original Research: Open access.
“Hierarchical summary statistics encoding across primary visual and posterior parietal cortices” by Young-Beom Lee, Oliver James, Gaeun Jung, Doyun Lee, Yee-Joon Kim. Advanced Science
DOI: 10.1002/advs.202512369
Abstract
Hierarchical summary statistics encoding across primary visual and posterior parietal cortices
Despite growing evidence that the visual system pools sensory data into a summary statistical representation, the underlying neural mechanisms remain unclear.
We characterized the neural coding of summary statistics at the single-cell and population levels using calcium signals imaged in primary visual cortex (V1) and posterior parietal cortex (PPC) while head-fixed mice passively viewed or classified eight mean motion directions of randomly moving dots into two categories.
A small portion of neurons in both areas showed global mean motion direction selectivity beyond what would be expected from the simple summation of responses to individual dot motions.
Although this selectivity was variable across stimulus variability and trials, population activity robustly encoded global mean motion direction, even though most neurons were not significantly tuned.
The V1 population-level mean motion representation was dependent on stimulus variance and systematically biased toward the category center during the motion categorization task.
These, along with the observed population-level neural coding of stimulus variance, suggest that multivariate V1 activity is well suited to processing summary statistics.
The redundant summary statistical encodings in both V1 and PPC suggest that such information accumulates across the visual hierarchy, which may allow PPC to bind multiple levels of summary statistical representations into task-oriented category signals.

