Summary: A new study links a person’s visual long-term memory to the ability to visually track how an object moves.
Source: Johns Hopkins Medicine.
As Superman flies over the city, people on the ground famously suppose they see a bird, then a plane, and then finally realize it’s a superhero. But they haven’t just spotted the Man of Steel – they’ve experienced the ideal conditions to create a very strong memory of him.
Johns Hopkins University cognitive psychologists are the first to link humans’ long-term visual memory with how things move. The key, they found, lies in whether we can visually track an object. When people see Superman, they don’t think they’re seeing a bird, a plane and a superhero. They know it’s just one thing – even though the distance, lighting and angle change how he looks.
People’s memory of an object improves significantly when they take in rich detail about how its appearance changes as it moves through space and time, the researchers concluded. The findings, which shed light on long-term memory and could advance machine learning technology, appear in this month’s Journal of Experimental Psychology: General.
“The way I look is only a small part of how you know who I am,” said co-author Jonathan Flombaum, an assistant professor in the Department of Psychological and Brain Sciences. “If you see me move across a room, you’re getting data about how I look from different distances and in different lighting and from different angles. Will this help you recognize me later? No one has ever asked that question. We find that the answer is yes.”
Humans have a remarkable memory for objects, says co-author Mark Schurgin, a graduate student in Flombaum’s Visual Thinking Lab. We recognize things we haven’t seen in decades — like eight-track tapes and subway tokens. We know the faces of neighbors we’ve never even met. And very small children will often point to a toy in a store after seeing it just once on TV.
Though we almost never encounter an object in exactly the same way twice, we recognize it anyway.
Schurgin and Flombaum wondered whether people’s vast capacity for recall, a skill machines and computers cannot come close to matching, has something to do with our “core knowledge” of the world – the innate understanding of basic physics that all humans, and many animals, are born with. Specifically, everyone knows something can’t be in two places at once. So if we see one thing moving from place to place, our brain has a chance to see it in varying circumstances – and a chance to form a stronger memory of it.

Likewise, if something is behaving erratically and we can’t be sure we’re seeing just one thing, those memories won’t form.
“With visual memory, what matters to our brain is that an object is the same,” said Flombaum. “People are more likely to recognize an object if they see it at least twice, moving in the same path.”
The researchers tested the theory in a series of experiments in which people were shown very short video clips of moving objects and then given memory tests. Sometimes the objects appeared to move across the screen as a single object would. Other times they moved in ways we wouldn’t expect a single object to move, such as popping out from one side of the screen and then the other.
In every experiment, subjects had significantly better memories – nearly 20 percent better – for trackable objects that moved according to our expectations, the researchers found.
“Your brain has certain automatic rules for how it expects things in the world to behave,” said Schurgin. “It turns out, these rules affect your memory for what you see.”
The researchers expect the findings to help computer scientists build smarter machines that can recognize objects. Learning more about how humans recognize objects, Flombaum said, will help engineers build artificial systems that can do the same.
Funding: This research was supported by NSF BCS-1534568 and a seed grant from the Johns Hopkins University Science of Learning Institute.
Source: Jill Rosen – Johns Hopkins Medicine
Image Source: NeuroscienceNews.com image is adapted from the JHU video.
Video Source: The video is credited to Johns Hopkins University.
Original Research: Abstract for “Exploiting core knowledge for visual object recognition” by Schurgin, Mark W. and Flombaum, Jonathan I. in Journal of Experimental Psychology: General. Published online March 2017 doi:10.1037/xge0000270
Abstract
Exploiting core knowledge for visual object recognition
Humans recognize thousands of objects, and with relative tolerance to variable retinal inputs. The acquisition of this ability is not fully understood, and it remains an area in which artificial systems have yet to surpass people. We sought to investigate the memory process that supports object recognition. Specifically, we investigated the association of inputs that co-occur over short periods of time. We tested the hypothesis that human perception exploits expectations about object kinematics to limit the scope of association to inputs that are likely to have the same token as a source. In several experiments we exposed participants to images of objects, and we then tested recognition sensitivity. Using motion, we manipulated whether successive encounters with an image took place through kinematics that implied the same or a different token as the source of those encounters. Images were injected with noise, or shown at varying orientations, and we included 2 manipulations of motion kinematics. Across all experiments, memory performance was better for images that had been previously encountered with kinematics that implied a single token. A model-based analysis similarly showed greater memory strength when images were shown via kinematics that implied a single token. These results suggest that constraints from physics are built into the mechanisms that support memory about objects. Such constraints—often characterized as ‘Core Knowledge’—are known to support perception and cognition broadly, even in young infants. But they have never been considered as a mechanism for memory with respect to recognition.
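The abstract reports that recognition sensitivity was higher for images previously encountered with kinematics implying a single object. As a minimal illustration (not the authors’ analysis code), the sketch below computes d′, a standard signal-detection measure of recognition sensitivity, for two hypothetical motion conditions; the condition names, counts and correction used here are illustrative assumptions, not data from the study.

```python
# Minimal sketch: comparing recognition sensitivity (d') between two
# hypothetical motion conditions. All counts below are made-up.
from statistics import NormalDist

def d_prime(hits: int, misses: int, false_alarms: int, correct_rejections: int) -> float:
    """d' = z(hit rate) - z(false-alarm rate), with a small log-linear
    correction to keep rates away from 0 and 1."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return z(hit_rate) - z(fa_rate)

# Hypothetical "old/new" recognition judgments for each condition.
coherent_motion = d_prime(hits=40, misses=10, false_alarms=12, correct_rejections=38)
erratic_motion  = d_prime(hits=32, misses=18, false_alarms=15, correct_rejections=35)

print(f"d' (single-object kinematics): {coherent_motion:.2f}")
print(f"d' (erratic kinematics):       {erratic_motion:.2f}")
```

In a real analysis, sensitivity would be computed per participant and condition and compared statistically; the values here only show the shape of the comparison the abstract describes.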