Memory May Alter How We Perceive the Visual and Auditory Information We Encounter

Summary: New research focuses on how memory can impact the perception of auditory and visual information.

Source: Harvard

Aleena Garner still remembers the moment she decided to pursue neurobiology. She was in an undergraduate chemistry class when realization struck: the human brain—that three-pound organ capable of carrying out an endless array of sophisticated tasks—is made up of the very same elements as yogurt.

“Despite having a composition similar to yogurt’s, our brains are considerably more capable. How can this be? That’s a big question I’ve had for a long time,” Garner said.

Garner, who recently became an assistant professor of neurobiology in the Blavatnik Institute at Harvard Medical School, is tackling this question by exploring how the brain processes sensory information.

In a conversation with Harvard Medicine News, Garner delved into her research, which focuses on how memory affects perception of visual and auditory information.

Understanding these interactions will not only advance general understanding of the brain, she said, but could be useful in situations like post-traumatic stress disorder (PTSD), where they become disrupted.

HMNews: Why are you studying perception in the context of memory?

Garner: Traditionally, we think about memory as occurring in a higher-order cognitive region of the brain that is separate from the sensory regions that process more basic information about the world. Under that view, it makes sense to study sensory regions in terms of the visual information that enters the eye and the auditory information that enters the ear.

However, we don’t understand why sensory regions seem to receive more input from the higher-order cognitive areas than from receptor organs like the eye and the ear.

We also don’t know why the auditory cortex and visual cortex communicate with each other before sending information to higher-order brain regions. It makes sense that we would want to form a pure image of what we’re seeing or hearing in the world, but the anatomy of the brain suggests that we’re actually altering what we perceive in our early sensory regions before this information gets to higher-order cognitive areas.

One of the big goals in my lab is to investigate the communication between the early sensory regions and the higher-order cognitive regions of the brain to understand how they’re interacting. 

More specifically, we want to know how we use memory. One application of memory could be to build a picture of the world so that we can predict what’s going to happen.

If you walk into a new room, you understand that you can’t walk through the walls because you can make predictions in a new context based on what you already know. You know you can pick up a glass of water and drink from it, so you don’t need to spend very much neural power processing that information. Instead, your brain spends more energy processing a conversation with another person and thinking about things you didn’t expect, such as a question you’ve never been asked before. Your brain can then update its model to incorporate this new information.

Memories let us spend less energy processing the things that we expect, enabling us to amplify the signal of the things we don’t expect. We are interested in how this process works.
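
To make this intuition concrete, here is a minimal, purely illustrative sketch of the predictive-coding idea: a learned model predicts the expected input, and only the mismatch (the prediction error) is passed on at full strength. The array values and names below are made up for illustration and are not from the study.

```python
import numpy as np

# Toy illustration of the predictive-coding idea described above:
# a learned internal model predicts the expected sensory input, and
# only the mismatch (prediction error) is passed on with full strength.
# All names and numbers here are illustrative, not from the study.

rng = np.random.default_rng(0)

expected_scene = rng.normal(size=10)   # what memory predicts (e.g., a familiar room)
actual_input = expected_scene.copy()
actual_input[3] += 2.0                 # one surprising feature (an unexpected question)

prediction_error = actual_input - expected_scene

# Expected features produce near-zero error and need little processing;
# the unexpected feature stands out and can be amplified downstream.
print(np.round(prediction_error, 2))   # zeros everywhere except index 3
```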

HMNews: Do you have other examples of how memory can alter our interpretation of sensory information?

Garner: If you are on the sidewalk and you hear a siren as you’re about to step into a crosswalk, you’d stop because experience tells you that an ambulance is driving by. In that moment, your auditory sense triggers the visual image of an ambulance.

However, maybe the siren ends up being a child’s toy instead of an ambulance. If that happens often enough, eventually, when you hear the siren, you will think it’s just somebody’s toy, and you’ll walk into the crosswalk anyway. This is because your experience and memory have actually changed your picture of the world, and thus you interpret the siren differently.

Sometimes, the same stimulus is integrated into two different memories, one positive and one negative. Then the question becomes how your brain knows how to react. That will depend on the other sensory cues around the stimulus.

If you see a picture and hear a bell, that may mean you are going to receive a reward, but if you see the same picture and hear a knocking sound, that may mean you are going to receive a punishment. In this example, your experience with the auditory information changes how you interpret the visual information.

We want to know how the brain makes these adjustments and where these changes are happening. 

HMNews: Are there potential applications of your research that interest you?

Garner: A longer-term goal of the lab is to look at cases related to trauma, such as PTSD. Normally, people can distinguish when a stimulus is safe and when it’s not. However, in PTSD this ability becomes disrupted, and one of the symptoms can be overgeneralizing: becoming fearful of a stimulus even when you don’t need to be. This fear response can cause a physical reaction such as tensing muscles and freezing.

There’s some work in humans looking at how the brainstem is involved in body physiology and how we react to stimuli. I want to look at connections between the brainstem and cortical regions of the brain to see how the communication works and how it becomes disrupted after trauma—first in mice, and eventually in humans.

I’m also interested in motor-related functions of the brain. I was in physical therapy for a while after a rock-climbing accident, and I met patients with Parkinson’s disease who were training in rehabilitation and physical therapy to get better. It’s remarkable—physical therapy and training do help with the symptoms even though Parkinson’s is a neurodegenerative disease.

I want to explore interventions for neurodegenerative diseases that are based on the connectivity between the sensory regions of the cortex and the brainstem. Such interventions may be able to train the brain to have more motor function even as some of the primary motor areas deteriorate.

HMNews: You recently published a paper in Nature Neuroscience that explored memory and audiovisual predictions in mice. What did you find out?

Garner: The motivation for the work was a basic question: If a visual stimulus is integrated into a memory, is it represented differently in the primary visual cortex of the brain? In other words, is a visual stimulus that is presented in a relatively neutral way processed differently by the brain than the same stimulus presented during a specific memory retrieval? In the study, we used an auditory cue to trigger a memory about a visual stimulus.

It turned out that there was a difference. The response to the visual stimulus was suppressed after the mouse learned to associate it with a memory-triggering auditory cue. But we didn’t know what was causing this suppression. Scientists have established that there is a large projection from the auditory cortex to the visual cortex in both primates and mice, but the function of this pathway is not understood, so we decided to investigate it.

We found that the auditory axons had auditory and visual responses, and the number of visually responsive axons increased as the mouse was trained to associate the auditory and visual cues.

We then developed a functional mapping technique that involved synthetically exciting auditory input to the visual cortex while looking at the effects on activity of visual cortex neurons. Intriguingly, when we excited auditory input, we saw selective suppression of visual cortex neurons that were responsive to the visual stimulus associated with the auditory cue—but only after the mouse learned to associate the visual stimulus with the auditory cue.  

Indeed, these visual cortex neurons that were inhibited by synthetic stimulation of auditory input were mostly responsible for the suppression of the visual response after the mouse learned the auditory cue.  

These results give us a mechanism to explain the experience-dependent suppression of visual responses following a learned, predictive, auditory cue.
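
The circuit logic can be summarized in a toy model: before learning, the auditory cue has no effect on the V1 neurons tuned to the visual stimulus; after learning, the auditory pathway delivers targeted inhibition to exactly those neurons. The sketch below is a minimal illustration with made-up weights; the actual study used optogenetics and calcium imaging, not this arithmetic.

```python
# Toy model of the circuit described above: after learning, auditory
# input suppresses the V1 neurons tuned to the associated visual
# stimulus. The weights and numbers are made up for illustration.

def v1_response(visual_drive, auditory_cue_present, learned):
    # Strength of auditory->V1 inhibition: absent before learning,
    # targeted at the associated stimulus after learning.
    inhibition = 0.6 if (learned and auditory_cue_present) else 0.0
    return max(visual_drive - inhibition * visual_drive, 0.0)

print(v1_response(1.0, auditory_cue_present=True, learned=False))  # 1.0: full response
print(v1_response(1.0, auditory_cue_present=True, learned=True))   # 0.4: suppressed
```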

HMNews: In your research you use a virtual reality system designed for mice. How does it work?

Garner: In the system, a mouse is on a spherical treadmill—a ball floating on air with a dome-shaped screen around it. The treadmill allows the mouse to rotate its torso and move its legs in different directions but keeps the mouse in place so we can measure the activity of hundreds of neurons in its brain. Then, we yoke the movement of the mouse to whatever we project onto the walls of the dome, creating a virtual reality where, as the mouse turns, the world does a counter turn, just like in real life.
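
The yoking Garner describes amounts to a simple closed loop between the treadmill’s motion sensors and the projector: read how far the ball has rotated, then counter-rotate the projected scene by the same amount. The sketch below illustrates that loop; read_ball_rotation and render_scene are hypothetical placeholders, not the rig’s actual API.

```python
# Minimal sketch of the closed loop that yokes the projected scene to
# the mouse's movement on the spherical treadmill. read_ball_rotation()
# and render_scene() are hypothetical placeholders for the rig's
# sensor readout and projection pipeline.

heading_deg = 0.0  # virtual heading of the mouse within the scene

def read_ball_rotation():
    """Return the ball's rotation since the last frame, in degrees.
    Placeholder for the treadmill's motion sensors."""
    return 1.5  # e.g., the mouse turned 1.5 degrees rightward

def render_scene(world_rotation_deg):
    """Placeholder for projecting the scene onto the dome."""
    print(f"world rotated to {world_rotation_deg:.1f} deg")

for _ in range(3):
    heading_deg += read_ball_rotation()
    # The world does a counter turn: rotate the scene opposite to the
    # mouse's turn so the view changes just as it would in real life.
    render_scene(-heading_deg)
```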

This setup allows us to have precise temporal and spatial control over the auditory and visual stimuli that a mouse is experiencing. We create virtual walls in the dome, so the mouse can locomote to the wall, but can’t pass through. We also project different types of visual patterns and shapes on the walls and present sounds using a surround sound system. It’s kind of like an interactive, 3-D IMAX theater sized for a mouse.

We then use calcium imaging to record neural activity as the mouse explores this interactive, virtual world.
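
Calcium-imaging recordings are commonly summarized as ΔF/F, the change in fluorescence relative to a baseline. The sketch below shows that standard computation on a made-up trace; it is a generic illustration, not the study’s actual analysis pipeline.

```python
import numpy as np

# Generic deltaF/F computation often used to summarize calcium-imaging
# traces; this is the standard formula, not the study's actual pipeline.

fluorescence = np.array([10.0, 10.2, 9.9, 14.0, 13.1, 10.1])  # made-up trace
baseline = np.percentile(fluorescence, 20)                    # robust baseline F0

df_over_f = (fluorescence - baseline) / baseline
print(np.round(df_over_f, 2))  # the transient around frame 3 stands out
```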

HMNews: Your description makes me think of a mouse playing a video game…

Garner: Yes, that’s exactly what it is [laughs]. With virtual reality we can do all kinds of interesting things without having to physically pick up the mouse and move it from one location to another, which affects behavior and even gene transcription.

We can flip a switch to instantly change the environment, so we can look at contextual learning without changing anything but the context. That would be impossible in a real environment. 

Christopher Harvey, an associate professor of neurobiology at HMS, developed this technology during his postdoctoral work, so I’m very excited to be working in the same department as him. It’s an outstanding opportunity.

HMNews: Beyond your research, what do you hope to be involved in at HMS?

Garner: I love teaching—it’s one of the reasons I went into academic science. Even if you don’t realize that you’re good at something, a teacher can bring that out, which can alter the course of your life. That’s one of the reasons I was attracted to Harvard. There seems to be genuine support for teaching, and a positive attitude about the importance of teaching.

I’m also part of a group called the Leading Edge Symposium that was founded by Kara McKinley, an assistant professor of stem cell and regenerative biology at Harvard. The group aims to support women and nonbinary individuals in the sciences.

Postdocs from anywhere can apply, and the program provides support during job applications: participants can give practice talks, get feedback on application materials—lots of things that not everybody gets at their institution. It’s wonderfully supportive, and I don’t think I would have done so well interviewing without being in the group as a postdoc. There are different levels of approaching the issue of gender equity in science, and I think the small-scale level of teaching and mentoring individual people is very important.

Of course, I want to have a large influence, but if I affect two people, and those two people are successful and start their own labs, then they can each influence two more people. You start to get this exponential growth.

About this memory research news

Author: Catherine Caruso
Source: Harvard
Contact: Catherine Caruso – Harvard
Image: The image is in the public domain

Original Research: Closed access.
“A cortical circuit for audio-visual predictions” by Aleena R. Garner et al. Nature Neuroscience


Abstract

A cortical circuit for audio-visual predictions

Learned associations between stimuli in different sensory modalities can shape the way we perceive these stimuli. However, it is not well understood how these interactions are mediated or at what level of the processing hierarchy they occur.

Here we describe a neural mechanism by which an auditory input can shape visual representations of behaviorally relevant stimuli through direct interactions between auditory and visual cortices in mice.

We show that the association of an auditory stimulus with a visual stimulus in a behaviorally relevant context leads to experience-dependent suppression of visual responses in primary visual cortex (V1). Auditory cortex axons carry a mixture of auditory and retinotopically matched visual input to V1, and optogenetic stimulation of these axons selectively suppresses V1 neurons that are responsive to the associated visual stimulus after, but not before, learning.

Our results suggest that cross-modal associations can be communicated by long-range cortical connections and that, with learning, these cross-modal connections function to suppress responses to predictable input.
