Summary: Vision appears to play a more dominant role than body movement in encoding memories of large-scale spaces. The findings address a long-standing debate over whether body movements aid the learning of physical spaces.
Source: University of Arizona
Virtual reality is becoming increasingly present in our everyday lives, from online tours of homes for sale to high-tech headsets that immerse gamers in hyper-realistic digital worlds. While its entertainment value is well-established, virtual reality also has vast potential for practical uses that are just beginning to be explored.
Arne Ekstrom, director of the Human Spatial Cognition Lab in the University of Arizona Department of Psychology, uses virtual reality to study spatial navigation and memory. Among the lab’s interests are the technology’s potential for socially beneficial uses, such as training first responders, medical professionals and those who must navigate hazardous environments. For those types of applications to be most effective, though, we need to better understand how people learn in virtual environments.
In a new study published in the journal Neuron, Ekstrom and co-author Derek Huffman, a postdoctoral researcher in the Center for Neuroscience at the University of California, Davis, advance that understanding by examining whether being able to physically move through virtual spaces improves how we learn them.
“One of the big concerns or drawbacks with virtual reality is that it fails to capture the experience that we actually have when we navigate in the real world,” said Ekstrom, an associate professor of psychology and the study’s senior author. “That’s what we were trying to address in this study: What information is sufficient for forming spatial representations that are useful in actually knowing where things are?”
The researchers had study participants explore three virtual cities while wearing virtual reality headsets. The participants navigated each city in one of three ways:
- Participants wore the headset while walking on an omnidirectional, or 360-degree, treadmill, which allows users to walk freely in any direction. In this condition, the participants could navigate through the virtual environment by walking and turning their heads.
- Participants navigated through the virtual environments using only a handheld joystick; they were not able to navigate by moving their heads or walking.
- Participants navigated by moving their bodies side to side and moving a joystick back and forth; they were not able to walk around.
Participants spent two to three hours, on average, exploring the virtual cities and locating certain shops they were instructed to find. Once they’d had an opportunity to learn the environments well, they were asked a series of questions to test their spatial memory. For example, they might be asked to imagine they were standing at the coffee shop, facing the bookstore. They would then be asked to point in the direction of the grocery store.
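This kind of pointing test is a judgment-of-relative-direction task, and accuracy in such tasks is typically scored as angular error: the difference between the direction a participant points and the true direction of the target. The sketch below shows that scoring under stated assumptions; the shop coordinates and the sign convention are hypothetical illustrations, not the study's actual stimuli or analysis code.

```python
import math

def pointing_error(standing, facing, target, pointed_deg):
    """Absolute angular error (degrees) for one pointing trial.

    standing, facing, target: (x, y) coordinates of the landmarks.
    pointed_deg: participant's pointed direction, in degrees relative
    to the imagined facing direction (counterclockwise positive).
    """
    # Bearings from the standing point to the other two landmarks.
    face_dir = math.atan2(facing[1] - standing[1], facing[0] - standing[0])
    targ_dir = math.atan2(target[1] - standing[1], target[0] - standing[0])
    # Correct response, expressed relative to the facing direction.
    correct_deg = math.degrees(targ_dir - face_dir)
    # Wrap the signed difference into [-180, 180], then take magnitude.
    return abs((pointed_deg - correct_deg + 180) % 360 - 180)

# Hypothetical layout: coffee shop at the origin, bookstore straight
# ahead, grocery store off to the right; the participant points 50
# degrees to the right (clockwise, hence negative in this convention).
coffee, bookstore, grocery = (0.0, 0.0), (0.0, 10.0), (8.0, 6.0)
print(pointing_error(coffee, bookstore, grocery, pointed_deg=-50.0))
# The correct answer is about -53.1 degrees, so the error is about 3.1.
```

A perfect spatial memory would yield errors near zero; comparing the distribution of these errors across the three navigation conditions is what showed no behavioral difference.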
The accuracy of participants’ responses did not differ across the three conditions.
Participants then underwent an MRI scan while answering a similar set of questions. This allowed the researchers to see what was happening in the brain as participants retrieved spatial memories.
The researchers found that the same areas of the brain were activated for participants in all three situations. In addition, the patterns of interaction between different regions of the brain were similar among the three conditions.
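A standard way to test whether activity patterns carry the same code across conditions is cross-condition decoding: train a classifier on fMRI patterns from one condition and test it on another. If the classifier transfers, the two conditions share a common underlying code. The sketch below illustrates that logic on synthetic data using scikit-learn; it is a minimal illustration of the general technique, not the authors’ analysis pipeline, and all array sizes and variable names are assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for voxel patterns: n_trials x n_voxels arrays,
# labeled by which location in the city is being retrieved.
n_trials, n_voxels, n_locations = 120, 500, 4
labels = rng.integers(0, n_locations, size=n_trials)
code = rng.normal(size=(n_locations, n_voxels))  # shared "neural code"

def simulate_condition(noise=2.0):
    # Each trial's pattern = the shared code for its location, plus
    # noise specific to this (simulated) scanning condition.
    return code[labels] + rng.normal(scale=noise, size=(n_trials, n_voxels))

walking_patterns = simulate_condition()   # e.g., learned by walking
joystick_patterns = simulate_condition()  # e.g., learned with a joystick

# Cross-condition decoding: train on one condition, test on the other.
# Transfer accuracy well above chance implies a shared, modality-
# independent code for the retrieved locations.
clf = LinearSVC().fit(walking_patterns, labels)
print("chance level:", 1 / n_locations)
print("transfer accuracy:", clf.score(joystick_patterns, labels))
```

In this toy setup the classifier transfers because the simulated patterns share one underlying code; finding the analogous result in real fMRI data is what supports the "modality-independent" conclusion.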
“What we found was that the neural codes were identical between the different conditions,” Ekstrom said. “This suggests – as far as the brain is concerned and what we were also able to measure with behavior – that there is sufficient information with just seeing things in a virtual environment. The information you get from moving your body, once you know the environment well enough, doesn’t really add that much.”
The findings address a long-standing scientific debate over whether body movements aid the learning of physical spaces.

“There’s been this idea that how you learn might make a huge difference, and that if you don’t have body-based cues, then you’re lacking a big part of what might be important for forming memories of space,” said Huffman, the study’s first author. “Our research would suggest that once you have a well-formed memory of an environment, it doesn’t matter as much how you learned it.”
“We would say you don’t need body immersion, and you don’t need body cues to form complex spatial representations,” Ekstrom added. “That can happen with sufficient exposure in simple virtual reality applications.”
From a practical standpoint, the research suggests that even basic virtual reality systems may be useful in instructional applications.
“Virtual reality has the potential to allow us to understand situations that we might not otherwise be able to directly experience,” Ekstrom said. “For example, what if we could train first responders to be able to find people after an attack on a building, without them actually ever having been to that building?
“Our findings suggest there’s promise for using virtual reality – even simple applications where you’re just moving a joystick – to teach people fairly complex knowledge about spatial environments.”
Media Contacts:
Alexis Blue – University of Arizona
Original Research: Closed access
“A Modality-Independent Network Underlies the Retrieval of Large-Scale Spatial Environments in the Human Brain” by Derek J. Huffman and Arne D. Ekstrom. Neuron. doi:10.1016/j.neuron.2019.08.012
Abstract
A Modality-Independent Network Underlies the Retrieval of Large-Scale Spatial Environments in the Human Brain
Highlights
• What is the role of body-based cues, such as head turns, in human navigation?
• We tested this question using immersive virtual reality and neuroimaging
• Behavioral and brain data suggest that human spatial memory is modality independent
• Vision might play a dominant role in human memory for large-scale spaces
Summary
In humans, the extent to which body-based cues, such as vestibular, somatosensory, and motoric cues, are necessary for normal expression of spatial representations remains unclear. Recent breakthroughs in immersive virtual reality technology allowed us to test how body-based cues influence spatial representations of large-scale environments in humans. Specifically, we manipulated the availability of body-based cues during navigation using an omnidirectional treadmill and a head-mounted display, investigating brain differences in levels of activation (i.e., univariate analysis), patterns of activity (i.e., multivariate pattern analysis), and putative network interactions between spatial retrieval tasks using fMRI. Our behavioral and neuroimaging results support the idea that there is a core, modality-independent network supporting spatial memory retrieval in the human brain. Thus, for well-learned spatial environments, at least in humans, primarily visual input may be sufficient for expression of complex representations of spatial environments.