How The Brain Builds Panoramic Memories

Summary: Researchers have identified two key brain regions that help us link different views of our surroundings.

Source: MIT.

Neuroscientists identify brain regions key to linking different views of our surroundings.

When asked to visualize your childhood home, you can probably picture not only the house you lived in, but also the buildings next door and across the street. MIT neuroscientists have now identified two brain regions that are involved in creating these panoramic memories.

These brain regions help us to merge fleeting views of our surroundings into a seamless, 360-degree panorama, the researchers say.

“Our understanding of our environment is largely shaped by our memory for what’s currently out of sight,” says Caroline Robertson, a postdoc at MIT’s McGovern Institute for Brain Research and a junior fellow of the Harvard Society of Fellows. “What we were looking for are hubs in the brain where your memories for the panoramic environment are integrated with your current field of view.”

Robertson is the lead author of the study, which appears in the Sept. 8 issue of the journal Current Biology. Nancy Kanwisher, the Walter A. Rosenblith Professor of Brain and Cognitive Sciences and a member of the McGovern Institute, is the paper’s senior author.

Building memories

As we look at a scene, visual information flows from our retinas into the brain, which has regions that are responsible for processing different elements of what we see, such as faces or objects. The MIT team suspected that areas involved in processing scenes — the occipital place area (OPA), the retrosplenial complex (RSC), and parahippocampal place area (PPA) — might also be involved in generating panoramic memories of a place such as a street corner.

If this were true, when you saw two images of houses that you knew were across the street from each other, they would evoke similar patterns of activity in these specialized brain regions. Two houses from different streets would not induce similar patterns.

“Our hypothesis was that as we begin to build memory of the environment around us, there would be certain regions of the brain where the representation of a single image would start to overlap with representations of other views from the same scene,” Robertson says.

The researchers explored this hypothesis using immersive virtual reality headsets, which allowed them to show people many different panoramic scenes. In this study, the researchers showed participants images from 40 street corners in Boston’s Beacon Hill neighborhood.

The images were presented in two ways: half the time, participants saw a 100-degree stretch of a 360-degree scene; the other half of the time, they saw two noncontinuous stretches of the same 360-degree scene.

After showing participants these panoramic environments, the researchers then showed them 40 pairs of images and asked if they came from the same street corner. Participants were much better able to determine if pairs came from the same corner if they had seen the two scenes linked in the 100-degree image than if they had seen them unlinked.

Brain scans revealed that when participants saw two images that they knew were linked, the response patterns in the RSC and OPA regions were similar. However, this was not the case for image pairs that the participants had not seen as linked. This suggests that the RSC and OPA, but not the PPA, are involved in building panoramic memories of our surroundings, the researchers say.
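The logic of the brain-scan comparison can be illustrated with a toy similarity analysis: if two views share a panorama-level signal, their voxel response patterns correlate. This is a hypothetical Python sketch with synthetic data, not the study’s actual analysis pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 100  # size of a hypothetical region-of-interest pattern

shared = rng.normal(size=n_voxels)                 # panorama-level component
view_a = shared + 0.3 * rng.normal(size=n_voxels)  # linked view 1
view_b = shared + 0.3 * rng.normal(size=n_voxels)  # linked view 2
view_c = rng.normal(size=n_voxels)                 # view from another corner

def pattern_similarity(x, y):
    """Pearson correlation between two response patterns."""
    return float(np.corrcoef(x, y)[0, 1])

print(pattern_similarity(view_a, view_b))  # high: linked views overlap
print(pattern_similarity(view_a, view_c))  # near zero: unlinked views
```

In this toy version, the “linked” views correlate strongly because they share a common component, while the view from a different corner does not; the study’s finding is that RSC and OPA patterns behave like the first pair only after participants have learned the views belong together.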

Priming the brain

In another experiment, the researchers tested whether one image could “prime” the brain to recall an image from the same panoramic scene. They showed participants a scene and asked whether it had been on their left or right when they first saw it, immediately preceded by either another image from the same street corner or an unrelated image. Participants performed much better when primed with the related image.

“After you have seen a series of views of a panoramic environment, you have explicitly linked them in memory to a known place,” Robertson says. “They also evoke overlapping visual representations in certain regions of the brain, which is implicitly guiding your upcoming perceptual experience.”

About this memory research article

Funding: The research was funded by the National Science Foundation Science and Technology Center for Brains, Minds, and Machines; and the Harvard Milton Fund.

Source: Anne Trafton – MIT
Image Source: This NeuroscienceNews.com image is adapted from the MIT press release.
Original Research: Abstract for “Neural Representations Integrate the Current Field of View with the Remembered 360° Panorama in Scene-Selective Cortex” by Caroline E. Robertson, Katherine L. Hermann, Anna Mynick, Dwight J. Kravitz, and Nancy Kanwisher in Current Biology. Published online September 8, 2016. doi:10.1016/j.cub.2016.07.002



Abstract

Neural Representations Integrate the Current Field of View with the Remembered 360° Panorama in Scene-Selective Cortex

Highlights
•Visual experience of a 360° panorama forges memory associations between scene views
•Representations of discrete views of a 360° environment overlap in RSC and OPA
•The scene currently in view primes associated views of the 360° environment

Summary
We experience our visual environment as a seamless, immersive panorama. Yet, each view is discrete and fleeting, separated by expansive eye movements and discontinuous views of our spatial surroundings. How are discrete views of a panoramic environment knit together into a broad, unified memory representation? Regions of the brain’s “scene network” are well poised to integrate retinal input and memory: they are visually driven but also densely interconnected with memory structures in the medial temporal lobe. Further, these regions harbor memory signals relevant for navigation and adapt across overlapping shifts in scene viewpoint. However, it is unknown whether regions of the scene network support visual memory for the panoramic environment outside of the current field of view and, further, how memory for the surrounding environment influences ongoing perception. Here, we demonstrate that specific regions of the scene network—the retrosplenial complex (RSC) and occipital place area (OPA)—unite discrete views of a 360° panoramic environment, both current and out of sight, in a common representational space. Further, individual scene views prime associated representations of the panoramic environment in behavior, facilitating subsequent perceptual judgments. We propose that this dynamic interplay between memory and perception plays an important role in weaving the fabric of continuous visual experience.


  1. Cool; “Back to the Future”
    The First Seeds of Cognitive Psychology
    Edward Chace Tolman (1946ish) through Rescorla/Wagner (1970s)

    A number of studies in the Berkeley laboratory of Edward Tolman appeared both to show flaws in the law of effect and in radical behaviorism as promoted by Skinner and his followers, and to require (gasp!!) mental representation in their explanation. For example, rats were allowed to explore a maze in which there were three routes of different lengths between the starting position and the goal. The rats’ behavior when the maze was blocked implied that they must have some sort of mental map of the maze. The rats prefer the routes according to their shortness, so when the maze is blocked at point A, stopping them from using the shortest route, they choose the second shortest route. When, however, the maze is blocked at point B, the rat does not retrace its steps and use route 2, as the law of effect would predict, but rather uses route 3. The rat must be recognising, from some memory of the layout of the maze, that block B also cuts off route 2. Tolman’s group also showed that animals could use knowledge gained by running a maze to navigate it swimming, and that unexpected changes in the quality of reward could weaken learning even though the animal was still rewarded. This result was developed further by Crespi, who in 1942 showed that unexpected decreases in reward quantity caused rats temporarily to run a maze more slowly than normal, while unexpected increases caused a temporary elevation in running speed. (The animals are making statistical calculations and using spatial navigation algorithms, at the very least vector algebra, analytic geometry, and trigonometry, to a degree that would no doubt impress both René Descartes and Pythagoras.)

    At the same time as this work was appearing in the USA, the Polish psychologists Konorski and Miller began the first cognitive analyses of classical conditioning, the forerunners of the work of Rescorla, Wagner, Dickinson, and Mackintosh. In case you had forgotten, here is a very basic review of the Rescorla/Wagner reinterpretation of Pavlovian conditioning as cognitive neuroscience in the information-processing tradition. On this account (building on Kamin’s blocking results), associations are only learned when a surprising event accompanies a CS. In a normal simple conditioning experiment the US is surprising the first few times it is experienced, so it is associated with the salient stimuli that immediately precede it. In a blocking experiment, once the association between the CS presented in the first phase (CS1) and the US has been made, the US is no longer surprising (since it is predicted by CS1). In the second phase, where both CS1 and CS2 are experienced, the US is no longer surprising, so it induces no further learning and no association is made between the US and CS2. This explanation was presented by Rescorla and Wagner (1972) as a formal model of conditioning which expresses the capacity a CS has to become associated with a US at any given time. This associative strength of the US to the CS is referred to by the letter V, and the change in this strength on each trial of conditioning is called dV. The more a CS is associated with a US, the less additional association the US can induce. This informal account of the role of US surprise and of CS (and US) salience in conditioning can be stated as follows:
    dV = ab(L – V)
    where a is the salience (intensity) of the US, b is the salience (intensity) of the CS, and L is the amount of processing given to a completely unpredicted US. In words: when the US is first encountered, the CS has no association to it, so V is zero. On the first trial the CS gains a strength of abL in its association with the US, proportional to the saliences of the CS and the US and to the initial amount of processing given to the US. As we start trial two, the associative strength V is abL, so the change in strength that occurs with the second pairing of the CS and US is ab(L – abL). This is smaller than the amount learned on the first trial, and the reduction reflects the fact that the CS now has some association with the US, so the US is less surprising (cute… very cute. Oops, I’m not supposed to impose my opinions). As more trials ensue, the equation predicts a gradually decreasing rate of learning which reaches an asymptote at L.
    However, as acquisition data show, this is not what is seen when the development of CS–US associations is measured over time. Instead, the learning curve is sigmoidal. Rescorla has argued that the equation is consistent with observed behavior if one assumes that very small changes in associative strength are undetectable and that there is a limit to the effect that very large changes can have on behavior.

    [Figure: CS–US acquisition curve]
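The trial-by-trial update above can be simulated in a few lines. This is a minimal Python sketch using the comment’s notation; the parameter values are illustrative assumptions, not from the text:

```python
# Minimal simulation of the equation dV = a*b*(L - V), where
# a = US salience, b = CS salience, L = processing given to a
# completely unpredicted US.

def rescorla_wagner_acquisition(a, b, L, n_trials):
    """Return associative strength V after each of n_trials CS-US pairings."""
    V = 0.0
    history = []
    for _ in range(n_trials):
        dV = a * b * (L - V)  # learning is driven by US surprise (L - V)
        V += dV
        history.append(V)
    return history

curve = rescorla_wagner_acquisition(a=0.5, b=0.5, L=1.0, n_trials=20)
print(curve[0])   # 0.25, i.e. a*b*L gained on the first trial
print(curve[-1])  # approaches the asymptote L = 1.0
```

Each successive gain is smaller than the last, so the simulated curve is negatively accelerated from the very first trial; as the text notes, that is exactly where the literal equation departs from the sigmoidal curves seen in real acquisition data.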
    There are other respects, however, in which the model performs better in predicting experimental outcomes. It can also be applied to a number of CSs, each of which contributes to the overall associative strength V of the US on the right-hand side of the equation. It is reasonably clear that the presence of the CS salience term b in the equation lets it account for overshadowing. The meaning of the equation is clearest if the specific dVs on the left-hand side are seen as referring to the increments in association for specific CSs, while V on the right-hand side refers to the predictability of the US and so is the sum of all the different CS–US associations. If the conditioning strength accrued to CS1 is denoted by dV1 and that to CS2 by dV2, then our equations are:
    dV1 = ab1(L – V)
    dV2 = ab2(L – V)
    and both dV1 and dV2 accrue to V on each trial. The amount of association directed to each CS is proportional to their salience.
    The equation also models blocking well. During the initial phase of a blocking experiment the associative strength of the US is increased, so later, when a second CS is presented, the amount of associative strength it can gain has been reduced.
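Blocking falls straight out of the shared surprise term (L – V). Here is a two-phase Python sketch of the equations above; the saliences and trial counts are arbitrary illustrative choices:

```python
# Two-phase blocking simulation: dVi = a * bi * (L - V_total),
# where V_total is shared across all CSs present on a trial.

def rw_compound_trial(V_total, cs_saliences, a, L):
    """Return the increment for each CS presented on one trial."""
    surprise = L - V_total
    return [a * b * surprise for b in cs_saliences]

a, L = 0.5, 1.0  # illustrative values

# Phase 1: CS1 alone, trained to asymptote.
V1, V2 = 0.0, 0.0
for _ in range(50):
    (dV1,) = rw_compound_trial(V1, [0.5], a, L)
    V1 += dV1

# Phase 2: CS1 + CS2 compound. The US is already predicted (V1 is near L),
# so the surprise term is near zero and CS2 gains almost no strength.
for _ in range(10):
    dV1, dV2 = rw_compound_trial(V1 + V2, [0.5, 0.5], a, L)
    V1 += dV1
    V2 += dV2

print(round(V1, 3))  # 1.0
print(round(V2, 3))  # 0.0 -- CS2 is blocked
```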
    The critical question, however, is whether the model predicts experimental outcomes it was not explicitly devised for, i.e., whether it can be generalized. In one example, the model predicts the effect of pairing two previously learned CSs on learning about a third, new stimulus. If, on separate occasions (not as compound stimuli), two CSs of equal salience have both been completely associated with a US, then V = L for both stimuli and dV on subsequent trials is zero for both. Now a third CS is presented in conjunction with the original pair, so three CSs appear together, whereas only two of them had been presented singly in the past. The overall associative strength of the US is now 2L, a contribution of L from each of the original CSs. The equation predicts a negative change in associative strength on this trial, proportional to the salience of the CSs:
    dV = ab(L – 2L)
    dV = -abL
    Conducting the experiment confirms this: the third stimulus becomes a conditioned inhibitor of the US; it provokes a CR of the opposite quality to that produced by the other two CSs.
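The conditioned-inhibition prediction can be checked with the same arithmetic; the values below are illustrative:

```python
# With two separately trained CSs each at asymptote, the compound predicts 2L,
# so a novel third CS presented alongside them is driven negative.

a, b, L = 0.5, 0.5, 1.0  # illustrative saliences and asymptote

V_total = 2 * L              # combined prediction from the two trained CSs
dV3 = a * b * (L - V_total)  # increment for the novel third CS

print(dV3)  # -0.25, i.e. -a*b*L: negative (inhibitory) associative strength
```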
    It was obviously only a matter of time before the elegant science of behaviorism began to be co-opted by the “cognitive neuroscience” movement, AI, neural networking, holographic models of neuronal connections… etc.
