A New Theory for What’s Happening in the Brain When Something Looks Familiar

Summary: A new theory suggests the brain understands the level of activation expected from a sensory input and corrects for it, leaving behind a signal for familiarity.

Source: University of Pennsylvania

When a person views a familiar image, even one seen only once before and for just a few seconds, something unique happens in the human brain.

Until recently, neuroscientists believed that vigorous activity in a visual part of the brain called the inferotemporal (IT) cortex meant the person was looking at something novel, like the face of a stranger or a never-before-seen painting. Less IT cortex activity, on the other hand, indicated familiarity.

But something about that theory, called repetition suppression, didn’t hold up for University of Pennsylvania neuroscientist Nicole Rust. “Different images produce different amounts of activation even when they are all novel,” says Rust, an associate professor in the Department of Psychology. Beyond that, other factors—an image’s brightness, for instance, or its contrast—result in a similar effect.

In a paper published in the Proceedings of the National Academy of Sciences, she and postdoctoral fellow Vahid Mehrpour, along with Penn research associate Travis Meyer and Eero Simoncelli of New York University, propose a new theory, one in which the brain understands the level of activation expected from a sensory input and corrects for it, leaving behind the signal for familiarity. They call it sensory referenced suppression.

The visual system

Rust’s lab focuses on systems and computational neuroscience, which combines measurements of neural activity and mathematical modeling to figure out what’s happening in the brain. One aspect relates to the visual system. “The big central problem of vision is how to get the information from the world into our heads in an interpretable way. We know that our sensory systems have to break it down,” she says.

It’s a complicated process, greatly simplified here for clarity: Information comes into the eye via the rods and cones. It travels neuron by neuron through a stack of brain areas that make up the visual system and finally to a visual brain area called the IT cortex. Its 16 million neurons activate in different patterns depending on what’s being viewed, and the brain must then interpret the patterns to understand what it’s seeing.

“You get one pattern for a specific face. You get a different pattern for ‘coffee cup.’ You get a different pattern for ‘pencil,’” Rust says. “That’s what the visual system does. It builds the world back up to help you decipher what you’re looking at.”

In addition to its role in vision, activation of the IT cortex is also thought to play a role in memory. Repetition suppression, the old theory, relies on the idea that there’s an activation threshold that gets crossed: More neural activity tells the brain the image is novel, less indicates one that’s previously seen.

Because several factors affect the total amount of neural activity, also called spikes, in the IT cortex, the brain can’t discern what’s specifically causing the reaction. It could be memory, image contrast, or something else altogether, Mehrpour says. “We propose a new idea that the brain corrects for the changes caused by these other factors, in our case contrast,” he says. After that calibration, what remains is the isolated brain activation for familiarity. In other words, the brain understands when it is viewing something that it has previously seen.
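To make that correction concrete, here is a minimal Python sketch of the idea (the spike totals, the 20 percent suppression factor, and the total_spikes helper are all invented for illustration, not the study's numbers or model): if the contrast-driven gain on the population's total spike count is known, dividing it out leaves a quantity that drops only when the image is familiar.

```python
# Toy sketch of "correcting" a population spike count for contrast.
# All numbers (4,000 spikes, 20% suppression) are illustrative assumptions,
# not values measured in the study.

NOVEL_RATE = 4000        # assumed total spikes for a novel, full-contrast image
SUPPRESSION = 0.8        # assumed: familiar images evoke ~20% fewer spikes

def total_spikes(contrast, familiar):
    """Hypothetical population response: scaled by contrast, reduced if familiar."""
    return NOVEL_RATE * contrast * (SUPPRESSION if familiar else 1.0)

for contrast in (1.0, 0.5):
    for familiar in (False, True):
        raw = total_spikes(contrast, familiar)
        corrected = raw / contrast   # divide out the expected contrast-driven gain
        print(f"contrast={contrast:.1f}  familiar={familiar!s:5}  "
              f"raw={raw:6.0f}  corrected={corrected:6.0f}")

# Raw counts confound contrast with memory: the dim novel image (2,000 spikes)
# evokes fewer spikes than the bright familiar one (3,200). The corrected counts
# depend on familiarity alone: 4,000 when novel, 3,200 when familiar, at either contrast.
```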

Long-term implications

To draw this conclusion, the researchers presented sequences of grayscale images to two adult male rhesus macaques. Every image appeared exactly twice, the first time as novel, the second time as familiar, in a range of high- and low-contrast combinations. Each viewing lasted precisely half a second. The animals were trained to use eye movements to indicate whether an image was new or familiar, disregarding the contrast levels.

As the macaques performed this memory task, the researchers recorded neural activity in the IT cortex, measuring the spikes of hundreds of individual neurons, an approach that differs from methods that measure proxies of neural activity averaged across 10,000 firing neurons. Because Rust and colleagues wanted to understand the neural code, they needed information for individual neurons.

Using a mathematical approach, they deciphered the patterns of spikes that accounted for how the macaques could distinguish memory from contrast. This ultimately confirmed their hypothesis. “Familiarity and contrast both change the overall firing rate,” Rust says. “What we’re saying is the brain can tease apart and isolate one from the other.”
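Building on the toy numbers above, a hypothetical simulation, sketched below, shows why that distinction matters. The population size, per-neuron gains, Poisson noise, and simple threshold rule are assumptions for illustration rather than the decoders actually fit in the paper: a threshold on raw spike counts, as repetition suppression would predict, mistakes low-contrast novel images for familiar ones, while the same threshold applied to contrast-corrected counts sorts the simulated trials by memory alone.

```python
# Hypothetical simulation comparing a repetition-suppression readout (threshold on
# raw spike counts) with a sensory-referenced readout (threshold after dividing out
# contrast). All quantities are assumptions for illustration, not the study's data.
import numpy as np

rng = np.random.default_rng(0)
N_NEURONS = 200
GAINS = rng.uniform(10, 30, size=N_NEURONS)   # assumed per-neuron spike counts per viewing
SUPPRESSION = 0.8                             # assumed ~20% fewer spikes for familiar images

def trial(contrast, familiar):
    """Total spike count across the simulated population for one image viewing."""
    rates = GAINS * contrast * (SUPPRESSION if familiar else 1.0)
    return rng.poisson(rates).sum()

# 500 simulated viewings per condition: novel/familiar at high and low contrast.
conditions = [(c, f) for c in (1.0, 0.5) for f in (False, True)]
trials = [(c, f, trial(c, f)) for c, f in conditions for _ in range(500)]

# Threshold midway between the expected novel and familiar counts at full contrast.
threshold = 0.9 * GAINS.sum()

def accuracy(correct_for_contrast):
    hits = 0
    for contrast, familiar, spikes in trials:
        count = spikes / contrast if correct_for_contrast else spikes
        hits += (count < threshold) == familiar   # below threshold -> report "familiar"
    return hits / len(trials)

print("raw spike count (RS-style) accuracy:", accuracy(False))
print("contrast-corrected (SRS) accuracy:  ", accuracy(True))
```

The sketch simply divides by a known contrast gain; the brain, as the theory has it, would instead have to estimate the expected sensory modulation from the input itself before correcting for it.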

In addition to its role in vision, activation of the IT cortex is also thought to play a role in memory. Image is in the public domain

In the future, better understanding this process could have applications for artificial intelligence, Mehrpour says. “If we know how the brain represents and rebuilds information in memory in the presence of changes in sensory input like contrast, we can design AI systems that work in the same way,” he says. “We could potentially build machines that work in the same way that our brain does.”

Beyond that, Rust says that down the line the findings could have implications for treating memory-impairing diseases like Alzheimer’s. “By understanding how memory in a healthy brain works, you can lay the foundations to develop preventions and treatments for the memory-related disorders plaguing an aging population.”

But for any of this to come to pass, it will be crucial to keep digging, she says. “To get this right, we have to understand the memory signal that’s driving behavior.” This work brings neuroscientists one step closer.

Funding: Funding for this research came from the Simons Foundation (grants 543033 and 543047), National Eye Institute of the National Institutes of Health (Grant R01EY020851), National Science Foundation (CAREER Award 1265480), and Howard Hughes Medical Institute.

Vahid Mehrpour is a postdoctoral fellow in the Visual Memory Lab at the University of Pennsylvania.

Travis Meyer is a research associate in the Visual Memory Lab at the University of Pennsylvania.

Nicole Rust is an associate professor in the Department of Psychology in the School of Arts & Sciences at the University of Pennsylvania. She is also director of the Visual Memory Lab, co-director of the Computational Neuroscience Initiative, and MindCORE’s executive director for research.

Eero Simoncelli is a professor of neural science, mathematics, data science, and psychology in the College of Arts & Science at New York University. He is also founding director of the Center for Computational Neuroscience at the Simons Foundation’s Flatiron Institute.

About this neuroscience research news

Source: University of Pennsylvania
Contact: Michele Berger – University of Pennsylvania
Image: The image is in the public domain

Original Research: Closed access.
“Pinpointing the neural signatures of single-exposure visual recognition memory” by Nicole Rust et al. PNAS


Abstract

Pinpointing the neural signatures of single-exposure visual recognition memory

Memories of the images that we have seen are thought to be reflected in the reduction of neural responses in high-level visual areas such as inferotemporal (IT) cortex, a phenomenon known as repetition suppression (RS).

We challenged this hypothesis with a task that required rhesus monkeys to report whether images were novel or repeated while ignoring variations in contrast, a stimulus attribute that is also known to modulate the overall IT response.

The monkeys’ behavior was largely contrast invariant, contrary to the predictions of an RS-inspired decoder, which could not distinguish responses to images that are repeated from those that are of lower contrast. However, the monkeys’ behavioral patterns were well predicted by a linearly decodable variant in which the total spike count was corrected for contrast modulation.

These results suggest that the IT neural activity pattern that best aligns with single-exposure visual recognition memory behavior is not RS but rather sensory referenced suppression: reductions in IT population response magnitude, corrected for sensory modulation.
