Summary: Researchers use virtual reality to discover how the brain assembles contextual memories.
Source: UC Davis.
Virtual reality is helping neuroscientists at the University of California, Davis, gain new insight into how different brain areas assemble memories in context.
In a study published Jan. 18 in the journal Nature Communications, graduate student Halle Dimsdale-Zucker and colleagues used a virtual reality environment to train subjects, then showed that different areas of the hippocampus are activated for different types of memories.
It’s well known that one memory can trigger related memories. We remember specific events in context — when and where they happened, who was there. Each memory carries context unique to it, as well as information shared with other memories — for example, events that occurred in the same location.
Dimsdale-Zucker and Professor Charan Ranganath at the UC Davis Center for Neuroscience and Department of Psychology are interested in how the brain assembles all the pieces of these memories. They use functional magnetic resonance imaging, or fMRI, to look for brain areas that are activated as memories are recalled, especially in the hippocampus, a small structure in the center of the brain.
For this study, Dimsdale-Zucker used architectural sketching software to build houses in a 3-D virtual environment. The subjects watched a series of videos in which they moved through one house and then another. In each video, different objects were positioned within the houses. The subjects therefore memorized the objects in two contexts: which video they appeared in (episodic memory) and which house they were in (spatial memory).
In the second phase of the study, the subjects were asked to try to remember the objects while they were scanned by fMRI.
Being asked about the objects spontaneously reactivated contextual information, Dimsdale-Zucker said. Different regions of the hippocampus were activated for different kinds of information: One area, CA1, was associated with representing shared information about contexts (e.g., objects that were in the same video); another, distinct area, CA23DG (spanning CA2, CA3 and the dentate gyrus), was linked to representing differences in context.
“What’s exciting is that it is intuitive that you can remember a unique experience, but the hippocampus is also involved in linking similar experiences,” Dimsdale-Zucker said. “You need both to be able to remember.”
Another interesting finding was that in this study, the hippocampus was involved in episodic memories linking both time and space, she said. Conventional thinking has held that the hippocampus codes primarily for spatial memories, such as those involved in navigation.
Virtual reality makes it possible to carry out controlled laboratory experiments with episodic memory, Dimsdale-Zucker said. A better understanding of how memories are formed, stored and recalled could eventually lead to better diagnosis and treatment for memory problems in aging or degenerative disorders such as Alzheimer’s disease.
Additional authors on the paper are Arne Ekstrom and Andrew Yonelinas at the UC Davis Center for Neuroscience, and Maureen Ritchey at Boston College. Dimsdale-Zucker was supported by an NSF graduate research fellowship.
Citation: UC Davis. “Using Virtual Reality to Identify Brain Areas Involved in Memory.” NeuroscienceNews, 26 January 2018. https://neurosciencenews.com/virtual-reality-memory-8377/
CA1 and CA3 differentially support spontaneous retrieval of episodic contexts within human hippocampal subfields
The hippocampus plays a critical role in spatial and episodic memory. Mechanistic models predict that hippocampal subfields have computational specializations that differentially support memory. However, there is little empirical evidence suggesting differences between the subfields, particularly in humans. To clarify how hippocampal subfields support human spatial and episodic memory, we developed a virtual reality paradigm where participants passively navigated through houses (spatial contexts) across a series of videos (episodic contexts). We then used multivariate analyses of high-resolution fMRI data to identify neural representations of contextual information during recollection. Multi-voxel pattern similarity analyses revealed that CA1 represented objects that shared an episodic context as more similar than those from different episodic contexts. CA23DG showed the opposite pattern, differentiating between objects encountered in the same episodic context. The complementary characteristics of these subfields explain how we can parse our experiences into cohesive episodes while retaining the specific details that support vivid recollection.
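The multi-voxel pattern similarity analysis described above can be illustrated with a minimal sketch. The idea is to correlate the fMRI voxel pattern evoked by each remembered object with every other object's pattern, then compare average similarity for pairs that shared an episodic context against pairs that did not. This is a simplified, hypothetical illustration using random data, not the authors' pipeline; the array shapes, context labels, and similarity measure (Pearson correlation) are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: one voxel-pattern vector per remembered object,
# labeled by the episodic context (video) in which it was studied.
n_objects, n_voxels = 12, 50
patterns = rng.standard_normal((n_objects, n_voxels))
contexts = np.repeat([0, 1, 2], 4)  # 3 videos, 4 objects each

# Pearson correlation between every pair of object patterns.
sim = np.corrcoef(patterns)

# Upper-triangle pairs only (each pair once, no self-correlations).
iu = np.triu_indices(n_objects, k=1)
same = contexts[iu[0]] == contexts[iu[1]]
same_ctx_r = sim[iu][same].mean()
diff_ctx_r = sim[iu][~same].mean()

print(f"same-context r: {same_ctx_r:.3f}")
print(f"diff-context r: {diff_ctx_r:.3f}")

# In the study's terms: a CA1-like region shows same-context pairs as
# MORE similar than different-context pairs; a CA23DG-like region shows
# the opposite, differentiating objects from the same episodic context.
```

With real data, the same-versus-different contrast would be computed within each anatomically delineated subfield and tested statistically across participants; here the random patterns simply show the mechanics of the comparison.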