Summary: Scientists have made strides in uncovering the mechanisms underlying memory formation and consolidation in the brain during rest or sleep.
A new study focuses on the role of the hippocampus, a brain region important for memory, and its place cells, which “replay” neuronal sequences.
The researchers built an artificial intelligence model to better understand these processes, discovering that sequences of experiences are prioritized during replay based on familiarity and rewards.
The AI agent was found to learn spatial information more effectively when replaying these prioritized sequences, offering valuable insight into the way our brains learn and process information.
- The hippocampus contains place cells that fire in specific locations, and these cells play a crucial role in “replay” during rest or sleep.
- Neuronal sequences during replay are not random, but follow certain prioritization rules, such as prioritizing familiar experiences and those associated with rewards.
- The artificial intelligence model built by the researchers emulates this replay process and was found to learn spatial information more efficiently when replaying prioritized sequences.
The hippocampus is a brain region of great importance for memory formation. This has been illustrated by famous cases such as that of patient H.M., who was unable to form new memories after large parts of his hippocampus had been removed.
Studies on rodents have demonstrated the role of the hippocampus in spatial learning and navigation. An important discovery in this context was cells that fire at specific locations, known as place cells.
“They play a role in a fascinating phenomenon known as replay,” explains Nicolas Diekmann.
“When an animal moves around, certain place cells fire one after the other along the animal’s route. Later, at rest or during sleep, the same place cells can be reactivated either in the same order as they were experienced or in reverse order.”
The sequences observed during replay don’t just reflect earlier behaviour. Sequences can also be reassembled: they can adapt to structural changes in the environment or represent places that have been seen but not yet visited.
“We were interested in how the hippocampus produces such a variety of replay types efficiently and what purpose they serve,” outlines Nicolas Diekmann.
The researchers therefore built a computer model in which an artificial intelligence learns spatial information. They then measured how quickly the AI agent finds its way out of a given spatial environment: the better the agent knows the environment, the faster it escapes.
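The general setup can be illustrated with a minimal sketch. This is a generic tabular Q-learning agent in a small gridworld, not the authors' model; all names and parameters here are assumptions for illustration:

```python
import random

def train_gridworld_agent(size=5, episodes=200, alpha=0.5, gamma=0.9,
                          epsilon=0.1, seed=0):
    """Tabular Q-learning in a size x size gridworld with the exit at the
    bottom-right corner. Returns the number of steps taken per episode."""
    rng = random.Random(seed)
    moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up
    goal = (size - 1, size - 1)
    q = {}  # (state, action index) -> estimated value
    steps_per_episode = []
    for _ in range(episodes):
        state, steps = (0, 0), 0
        while state != goal and steps < 500:
            # Epsilon-greedy action selection.
            if rng.random() < epsilon:
                a = rng.randrange(4)
            else:
                a = max(range(4), key=lambda i: q.get((state, i), 0.0))
            dr, dc = moves[a]
            nxt = (min(max(state[0] + dr, 0), size - 1),
                   min(max(state[1] + dc, 0), size - 1))
            reward = 1.0 if nxt == goal else 0.0
            # Standard Q-learning update.
            best_next = max(q.get((nxt, i), 0.0) for i in range(4))
            old = q.get((state, a), 0.0)
            q[(state, a)] = old + alpha * (reward + gamma * best_next - old)
            state, steps = nxt, steps + 1
        steps_per_episode.append(steps)
    return steps_per_episode
```

As the agent's value estimates improve, the number of steps it needs to reach the exit drops toward the shortest-path length — the sense in which "the better it knows the environment, the faster it is".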
Replay follows certain rules
The AI agent, too, learns by replaying neuronal sequences. However, these are not replayed at random, but prioritized according to certain rules.
“Sequences are played back stochastically according to their prioritization,” points out Diekmann. Familiar sequences are prioritized. Positions associated with a reward are also played back more frequently.
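Stochastic, priority-weighted selection of this kind can be sketched as follows. The weighting formula and field names are illustrative assumptions, not the paper's actual prioritization scheme:

```python
import random

def sample_replay(experiences, n_replays, reward_bonus=5.0, seed=0):
    """Draw replay events stochastically, in proportion to priority.

    `experiences` maps a sequence id to a dict with hypothetical
    'familiarity' (e.g. visit count) and 'reward' fields; the linear
    weighting below is an assumption for illustration.
    """
    rng = random.Random(seed)
    ids = list(experiences)
    weights = [experiences[i]["familiarity"]
               + reward_bonus * experiences[i]["reward"] for i in ids]
    return rng.choices(ids, weights=weights, k=n_replays)

# Familiar, rewarded sequences are replayed far more often than rare,
# unrewarded ones, but every sequence keeps a nonzero chance.
experiences = {
    "well_known_route": {"familiarity": 10, "reward": 1.0},
    "rare_detour": {"familiarity": 1, "reward": 0.0},
}
replays = sample_replay(experiences, n_replays=1000)
```

Because the draw is stochastic rather than greedy, low-priority sequences are still occasionally replayed, which matches the quoted description of playback "according to their prioritization" rather than strictly by rank.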
“Our model is biologically plausible, generates a manageable computational overhead and learns faster than agents where sequences are replayed at random,” sums up Nicolas Diekmann. “This gives us a little more detail on how the brain learns.”
About this AI and learning research news
Author: Meike Driessen
Contact: Meike Driessen – RUB
Image: The image is credited to Neuroscience News
Original Research: Open access.
“A model of hippocampal replay driven by experience and environmental structure facilitates spatial learning” by Nicolas Diekmann et al. eLife
A model of hippocampal replay driven by experience and environmental structure facilitates spatial learning
Replay of neuronal sequences in the hippocampus during resting states and sleep plays an important role in learning and memory consolidation. Consistent with these functions, replay sequences have been shown to obey current spatial constraints. Nevertheless, replay does not necessarily reflect previous behavior and can construct never-experienced sequences.
Here, we propose a stochastic replay mechanism that prioritizes experiences based on three variables: (1) experience strength, (2) experience similarity, and (3) inhibition of return. Using this prioritized replay mechanism to train reinforcement learning agents leads to far better performance than using random replay.
Its performance is close to the state-of-the-art, but computationally intensive, algorithm by Mattar & Daw (2018). Importantly, our model reproduces diverse types of replay because of the stochasticity of the replay mechanism and experience-dependent differences between the three variables.
In conclusion, a unified replay mechanism generates diverse replay statistics and is efficient in driving spatial learning.
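The three variables named in the abstract can be pictured as inputs to a single scoring function. The multiplicative combination below is a sketch for illustration only, not the paper's actual model:

```python
import math

def replay_priority(strength, similarity, steps_since_replay,
                    inhibition_rate=0.1):
    """Illustrative priority score built from the abstract's three variables.

    strength: how strongly the experience was encoded (e.g. visit/reward count)
    similarity: similarity to the most recently replayed experience (0 to 1)
    steps_since_replay: time since this experience itself was last replayed
    The combination rule and the exponential decay are assumptions.
    """
    # Inhibition of return: an experience replayed moments ago is suppressed,
    # and the suppression fades as time passes.
    inhibition_of_return = 1.0 - math.exp(-inhibition_rate * steps_since_replay)
    return strength * similarity * inhibition_of_return
```

Under this toy scoring, strong and similar experiences score highest, while inhibition of return keeps the mechanism from replaying the same experience over and over — one way a single rule could generate the diverse replay statistics the abstract describes.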