Summary: Researchers trained computer models to predict everyday events and found that responding to uncertainty, rather than just prediction errors, improved comprehension. This challenges the idea that surprises alone drive event understanding and suggests the brain may use both mechanisms. Meanwhile, memory research shows that identifying event boundaries predicts better recall, especially in older adults.
Ongoing research aims to improve memory by helping people recognize these boundaries more effectively. The findings may lead to interventions for age-related memory loss and deepen our understanding of cognitive processes. This work highlights the intricate ties between event segmentation and memory storage.
Key Facts:
- Computer models show that responding to uncertainty improves comprehension of everyday events.
- Identifying event boundaries strongly predicts memory retention.
- Older adults often struggle with event processing, a deficit linked to memory decline.
Source: WUSTL
Life is a series of small events: making morning coffee, letting the dog out, opening a laptop, letting the dog back in. Add them all up and you have a full day.
Our brains are committed to observing and processing the events that make up our daily lives, said Jeff Zacks, the Edgar James Swift Professor in Arts & Sciences and chair of the Department of Psychological & Brain Sciences.
“Knowing where events begin and where they end is crucial to understanding the world,” Zacks said.
In a pair of new papers, Zacks and other researchers in Arts & Sciences and the McKelvey School of Engineering explore this key process of human cognition.
Zacks led a study that trained computer models on more than 25 hours of video of people performing simple, everyday tasks, such as cleaning a kitchen or cooking a meal, and then asked the models to predict what happens next.
The study came to a surprising conclusion: The computer models were most accurate when they responded to uncertainty. When the model was especially unsure about what would happen next, it would reset and reassess the scene, an approach that improved its overall comprehension.
Co-authors of the study, which will be published in an upcoming edition of PNAS Nexus, include Tan Nguyen, a graduate student in Zacks’s Dynamic Cognition Laboratory; Matt Bezdek, a senior scientist in the lab; Aaron Bobick, the James M. McKelvey Professor and dean of the McKelvey School of Engineering; Todd Braver, the William R. Stuckenberg Professor in Human Values and Moral Development; and Samuel Gershman, a Harvard neuroscientist.
Zacks had previously theorized that the human brain was especially tuned to the small surprises that fill our lives. He proposed that people would reassess a scene every time they registered something they didn’t expect, a phenomenon known as “prediction error.”
The finding that the successful computer model paid more attention to uncertainty than to prediction errors threw the prior theory into doubt. “We’re doing science here,” Zacks said. “We revise theories when faced with new data.”
Surprises still matter, and there’s no need to completely throw out the concept of prediction error, Nguyen said. “We’re starting to think that the brain uses both mechanisms,” he said. “It’s not a case of either/or. Each model can make unique contributions to our understanding of human cognition.”
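To make the distinction concrete, here is a toy sketch of the two boundary triggers in Python. It is illustrative code under our own assumptions (vector observations, a discrete predictive distribution, arbitrary thresholds), not the researchers' model: a prediction-error rule fires after an observation misses badly, while an uncertainty rule can fire before the outcome even arrives.

```python
import numpy as np

def prediction_error_boundary(pred, obs, threshold=2.0):
    """Surprise-based rule: declare an event boundary when the model's
    prediction misses the actual observation by a large margin."""
    return np.linalg.norm(obs - pred) > threshold

def uncertainty_boundary(pred_probs, threshold=1.5):
    """Uncertainty-based rule: declare a boundary when the model is unsure
    what will happen next (high entropy), before seeing the outcome."""
    entropy = -np.sum(pred_probs * np.log(pred_probs + 1e-12))
    return entropy > threshold
```

The key difference is the signal each rule reads: the first needs the observation in hand, while the second looks only at the shape of the prediction itself.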
Maverick Smith, a postdoctoral researcher in the Dynamic Cognition Lab, is also taking a deeper look at the interplay between event comprehension and memory. Working with Heather Bailey, a former WashU postdoc who is now an associate professor at Kansas State University, Smith co-authored a review article in Nature Reviews Psychology gathering the growing evidence that long-term memory is intricately tied to the ability to logically and accurately discern where one event ends and another begins.
“There are a lot of individual differences in the ability to identify when events start and stop, and those differences can strongly predict how much people remember later on,” Smith said.
“We hope to be able to create an intervention that could improve memory by helping people segment events.”
Like Zacks, Smith relies on video clips to better understand how the brain processes events. Instead of a person cooking and cleaning, his videos show a person shopping in a store, setting up a printer, or doing other mundane tasks.
In various experiments, viewers push buttons whenever they discern the beginning or end of a particular event. Smith then tests the participants' memory of the videos with a series of written questions.
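A common way such button presses are scored in this literature is segmentation agreement: bin each viewer's presses in time and correlate them with the group's binned boundaries. The sketch below assumes one-second bins and a simple Pearson correlation; the specifics of Smith's analysis may differ.

```python
import numpy as np

def bin_presses(press_times_s, duration_s, bin_s=1.0):
    """Mark each time bin in which a viewer pressed the boundary button."""
    bins = np.zeros(int(np.ceil(duration_s / bin_s)))
    for t in press_times_s:
        bins[min(int(t // bin_s), len(bins) - 1)] = 1.0
    return bins

def segmentation_agreement(viewer_presses, group_presses, duration_s):
    """Correlate one viewer's binned boundaries with the group norm."""
    viewer = bin_presses(viewer_presses, duration_s)
    norm = np.mean([bin_presses(p, duration_s) for p in group_presses], axis=0)
    return np.corrcoef(viewer, norm)[0, 1]
```

Individual agreement scores of this kind are what can then be related to later memory performance.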
Smith found that older people tend to have more difficulty processing events, a deficit that could play a role in age-related memory loss. “Maybe there’s a way we can intervene to help them better remember the events in their lives,” he said.
Zacks, Nguyen, Smith, and other members of the Department of Psychological & Brain Sciences have ambitious plans to further their understanding of the brain’s ability to process and remember events.
Zacks’s team is working to use fMRI brain imaging to track how 45 study participants respond to videos of everyday events in real time. “We’re studying the actual neural dynamics of these cognitive processes,” Zacks said.
Another ongoing study tracks eye movements, providing new insight into how we see the world. “When people watch an everyday activity, they spend a lot of time looking at and thinking about people’s hands,” Zacks explained.
Smith is currently using video-based experiments to see if he can improve the memory of study subjects — including older people and those with Alzheimer’s disease — by making the boundaries between events easier to identify. Ultimately, he would like to understand how event observations are stored and maintained in long-term memory.
“Some people are definitely better than others at segmenting events into meaningful chunks,” Smith said. “Can that ability be improved, and can that lead to improvements in memory? Those are the questions we’re still asking.”
About this memory and neuroscience research news
Author: Leah Shaffer
Source: WUSTL
Contact: Leah Shaffer – WUSTL
Original Research: Open access.
“Modeling human activity comprehension at human scale: Prediction, segmentation, and categorization” by Jeff Zacks et al. PNAS Nexus
Abstract
Modeling human activity comprehension at human scale: Prediction, segmentation, and categorization
Humans form sequences of event models—representations of the current situation—to predict how activity will unfold. Multiple mechanisms have been proposed for how the cognitive system determines when to segment the stream of behavior and switch from one active event model to another.
Here, we constructed a computational model that learns knowledge about event classes (event schemas) by combining recurrent neural networks for short-term dynamics with Bayesian inference over event classes for event-to-event transitions.
This architecture represents event schemas and uses them to construct a series of event models. It was trained in a single pass through 18 h of naturalistic human activities; another 3.5 h of activities were used to test each variant for agreement with human segmentation and categorization.
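As a rough sketch of that architecture (with assumed dimensions, a Gaussian observation model, and PyTorch as the framework, none of which are published details), each event schema can be a small recurrent predictor, with a Bayesian update deciding which schema best explains the incoming frames:

```python
import torch
import torch.nn as nn

class SchemaRNN(nn.Module):
    """One event schema: a recurrent model of short-term, within-event dynamics."""
    def __init__(self, obs_dim=32, hidden_dim=64):
        super().__init__()
        self.cell = nn.GRUCell(obs_dim, hidden_dim)
        self.readout = nn.Linear(hidden_dim, obs_dim)

    def forward(self, obs, h):
        # obs: (batch, obs_dim); h: (batch, hidden_dim)
        h = self.cell(obs, h)
        return self.readout(h), h  # prediction of the next observation

def schema_posterior(log_prior, preds, obs, sigma=1.0):
    """Bayesian inference over event classes: combine transition priors with
    each schema's Gaussian predictive likelihood for the observed frame."""
    log_lik = torch.stack([-0.5 * ((obs - p) ** 2).sum() / sigma ** 2 for p in preds])
    log_post = log_prior + log_lik
    return log_post - torch.logsumexp(log_post, dim=0)  # normalized in log space
```

A transition rule, such as the two variants compared in the next paragraph, then decides when control passes from the active event model to a new one.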
The architecture learned to predict human activity and developed segmentation and categorization that approached human performance.
We then compared two variants of this architecture designed to better emulate human event segmentation: one transitioned when the active event model produced high uncertainty in its prediction, and the other transitioned when the active event model produced a large prediction error.
The two variants learned to segment and categorize events, and the prediction uncertainty variant provided a somewhat closer match to human segmentation and categorization—despite being given no feedback about segmentation or categorization.
These results suggest that event model transitioning based on prediction uncertainty or prediction error can reproduce two important features of human event comprehension.