Summary: Artificial intelligence usually goes big—massive models requiring supercomputers. To understand the human brain, however, new research is “thinking small.”
By shrinking state-of-the-art AI models to 1/1,000th of their original size, researchers created a vision model so compact it could fit in an email attachment, yet it predicts neural responses in the macaque visual cortex better than any large-scale system. This breakthrough allowed scientists to identify specific “specialist” neurons—such as a group that specifically detects dots—providing a new blueprint for how primates process visual information.
Key Facts
- Compression Breakthrough: Researchers trained a large AI model on neural responses from macaque monkeys and then used compression technology to shrink it by 1,000x.
- Superior Accuracy: Despite its tiny size, the compact model outperformed existing state-of-the-art vision models by more than 30% in predicting neural activity.
- The “Dot” Neurons: The model revealed that specific V4 neurons are specialized “dot detectors.” This is critical for primates, as recognizing eyes (which are essentially dots) is fundamental to social interaction.
- Inner Workings Revealed: Because the model is small, researchers could finally “look under the hood” to see how neurons break down images into edges, colors, and specific shapes.
- Therapeutic Potential: Understanding the specific images that drive neurons to “talk” could lead to visual therapies for Alzheimer’s, potentially rebuilding synapses through targeted visual stimulation.
Source: CSHL
What does it take to make AI that can pass as human? Try massive clusters of supercomputers. To build human-like intelligence, computer scientists think big.
However, for neuroscientists who want to understand how real brains work, today’s AI only goes so far, as it replaces one deeply complicated system (the brain) with another (AI). How then do we figure out the inner workings of the biological brain?
To answer this question, Cold Spring Harbor Laboratory Assistant Professor Benjamin Cowley is thinking small.
In collaboration with Carnegie Mellon University Professor Matthew Smith and Princeton University Professor Jonathan Pillow, Cowley has helped develop a new AI model much smaller and simpler than today’s “state-of-the-art” systems, yet far better at illustrating how the brain makes sense of visual stimuli. In previous work, Cowley trained AI to anticipate neural responses in fruit flies. This time, he’s set his sights on macaques, monkeys whose brains are much closer to our own.
In a new study published in Nature, Cowley and colleagues present macaques with sets of carefully curated natural images and track which neurons in the animals’ visual cortex fire in response to each picture. From there, they first train a large AI model to predict neural responses to specific images until it outperforms competing models by more than 30%. Then, they use compression technology to shrink the large model to about 1/1,000 its original size. The result is a vision model small enough to fit in an email attachment.
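The study’s two-step recipe—first fit a model that predicts neural responses to images, then compress it—can be illustrated with a toy example. Everything below is a simplified stand-in: the dimensions, the linear “model,” and the magnitude-based pruning rule are assumptions for illustration, not the deep networks or compression method used in the paper.

```python
import random

random.seed(0)

# Toy stand-in for the pipeline: (1) train a model to predict a neuron's
# response to images, (2) compress it by discarding weights that barely
# affect the predictions. A linear model and magnitude pruning are
# illustrative assumptions; the actual study used deep networks.
n_feat, n_img = 50, 200
true_w = [0.0] * n_feat
for i in random.sample(range(n_feat), 5):   # responses depend on 5 features
    true_w[i] = random.uniform(1.0, 2.0)

images = [[random.gauss(0, 1) for _ in range(n_feat)] for _ in range(n_img)]
responses = [sum(tw * x for tw, x in zip(true_w, img)) for img in images]

# Step 1 -- "train": least-squares fit via stochastic gradient descent.
w = [0.0] * n_feat
lr = 0.01
for _ in range(200):
    for img, r in zip(images, responses):
        err = sum(wi * xi for wi, xi in zip(w, img)) - r
        for j in range(n_feat):
            w[j] -= lr * err * img[j]

# Step 2 -- "compress": zero out near-zero weights. The pruned model keeps
# its predictive accuracy with a small fraction of the parameters.
pruned = [wi if abs(wi) > 0.1 else 0.0 for wi in w]
kept = sum(1 for wi in pruned if wi != 0.0)
print(f"kept {kept}/{n_feat} weights")
```

In this toy, compression works because most weights contribute almost nothing to the predictions—the same intuition behind shrinking a large network dramatically while preserving its accuracy.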
Finding that AI models of the brain could be this tiny is huge in itself. But Cowley goes further, pinpointing the inner workings of these models. This analysis reveals something extraordinary. The compact model’s neurons all break down images into low-level features like edges and colors, then form unique preferences by consolidating this information in different ways. What does this mean for primates like us? Cowley offers one example: “In the monkey’s brain—and in our brains, too, most likely—there’s a group of V4 neurons that love dots.”
In other words, there are neurons in your brain that specialize in dot detection. That might seem random, but think about the key features of the face. What are eyes but dots loaded with information? Consider how important eye contact is in daily life.
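That motif—shared low-level filters, consolidated differently by each neuron—can be sketched in miniature. The 3×3 kernels, tiny images, and max-pooling “consolidation” below are simplified assumptions, not the filters recovered from the actual model; they just show how one readout of a shared filter bank can behave as a dot detector while another prefers edges.

```python
def correlate(img, ker):
    """Valid 2-D cross-correlation, returning the response map."""
    n, k = len(img), len(ker)
    return [[sum(ker[i][j] * img[r + i][c + j]
                 for i in range(k) for j in range(k))
             for c in range(n - k + 1)]
            for r in range(n - k + 1)]

def feature(img, ker):
    """Consolidate a response map into one number (strongest response)."""
    return max(max(row) for row in correlate(img, ker))

# Shared low-level filter bank (assumed, simplified 3x3 kernels).
center_surround = [[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]]  # dot-like
vertical_edge   = [[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]       # edge-like

# Two 7x7 test images: a single bright dot, and a vertical edge.
dot_img  = [[1 if (r, c) == (3, 3) else 0 for c in range(7)] for r in range(7)]
edge_img = [[1 if c >= 3 else 0 for c in range(7)] for r in range(7)]

# Each "model neuron" weights the same shared features differently.
def dot_neuron(img):
    return 1.0 * feature(img, center_surround) + 0.0 * feature(img, vertical_edge)

def edge_neuron(img):
    return 0.0 * feature(img, center_surround) + 1.0 * feature(img, vertical_edge)

print(dot_neuron(dot_img), dot_neuron(edge_img))    # prefers the dot (8.0 > 3.0)
print(edge_neuron(edge_img), edge_neuron(dot_img))  # prefers the edge (3.0 > 1.0)
```

Both model neurons see identical filter outputs; only the consolidation weights differ—mirroring, in cartoon form, how the compact models specialize their feature selectivity.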
Looking ahead, the findings have Cowley thinking about building AI models of mental health conditions. “For example, in Alzheimer’s dementia, we know synapses are lost,” he explains. “If we know the images that drive neurons to talk to each other, we can potentially rebuild synapses once thought lost to disease.”
Who knows? Thanks to work like this, one day you might be able to stave off—or even treat—neurodegenerative disease by looking at special pictures. Just wait and see.
Key Questions Answered:
Q: Why shrink an AI model instead of making it bigger?
A: Big AI models are “black boxes”—they are so complex that we don’t actually know how they reach their conclusions. Small models are transparent. By shrinking the model, scientists can see exactly how a neuron makes a decision, turning a mystery into a map.
Q: What do dots have to do with the brain?
A: Everything! In the primate world, eyes are the most important dots. Being able to detect dots instantly allows our brains to lock onto a gaze, recognize a face, and understand social cues. These specialized neurons are the foundation of our social intelligence.
Q: Could this research lead to new therapies?
A: That’s the dream! If we know exactly which visual features “wake up” specific neural pathways, we might be able to use specialized images to stimulate those paths and keep synapses from withering away.
Editorial Notes:
- This article was edited by a Neuroscience News editor.
- Journal paper reviewed in full.
- Additional context added by our staff.
About this AI and visual neuroscience research news
Author: Samuel Diamond
Source: CSHL
Contact: Samuel Diamond – CSHL
Image: The image is credited to Cowley lab/CSHL
Original Research: Closed access.
“Compact deep neural network models of the visual cortex” by Benjamin R. Cowley, Patricia L. Stan, Jonathan W. Pillow & Matthew A. Smith. Nature
DOI: 10.1038/s41586-026-10150-1
Abstract
Compact deep neural network models of the visual cortex
A powerful approach to understand the computations carried out by the visual cortex is to build models that predict neural responses to any arbitrary image. Deep neural networks (DNNs) have emerged as the leading predictive models, yet their underlying computations remain buried beneath millions of parameters.
Here we challenge the need for models at this scale by seeking predictive and parsimonious DNN models of the primate visual cortex. We first built a highly predictive DNN model of neural responses in macaque visual area V4 by alternating data collection and model training in adaptive closed-loop experiments.
We then compressed this large, black-box DNN model, which comprised 60 million parameters, to identify compact models with 5,000 times fewer parameters yet comparable accuracy. This dramatic compression enabled us to investigate the inner workings of the compact models.
We discovered a salient computational motif: compact models share similar filters in early processing, but individual models then specialize their feature selectivity by ‘consolidating’ this shared high-dimensional representation in distinct ways.
We examined this consolidation step in a dot-detecting model neuron, revealing a computational mechanism that leads to a testable circuit hypothesis for dot-selective V4 neurons.
Beyond V4, we found strong model compression for macaque visual areas V1 and IT (inferior temporal cortex), revealing a general computational principle of the visual cortex.
Overall, our work challenges the notion that large DNNs are necessary to predict individual neurons and establishes a modelling framework that balances prediction and parsimony.

