Summary: A new artificial neural network based on the human brain sheds light on how we process moving images.
Source: University of Cambridge
A computer network closely modeled on part of the human brain is enabling new insights into the way our brains process moving images – and explains some perplexing optical illusions.
By using decades’ worth of data from human motion perception studies, researchers have trained an artificial neural network to estimate the speed and direction of image sequences.
The new system, called MotionNet, is designed to closely match the motion-processing structures inside a human brain. This has allowed the researchers to explore features of human visual processing that cannot be directly measured in the brain.
Their study, published in the Journal of Vision, uses the artificial system to describe how space and time information is combined in our brain to produce our perceptions, or misperceptions, of moving images.
The brain can be easily fooled. For instance, if there’s a black spot on the left of a screen, which fades while a black spot appears on the right, we will ‘see’ the spot moving from left to right – this is called ‘phi’ motion. But if the spot that appears on the right is white on a dark background, we ‘see’ the spot moving from right to left, in what is known as ‘reverse-phi’ motion.
The researchers reproduced reverse-phi motion in the MotionNet system, and found that it made the same mistakes in perception as a human brain – but unlike with a human brain, they could look closely at the artificial system to see why this was happening. They found that neurons are ‘tuned’ to the direction of movement, and in MotionNet, ‘reverse-phi’ was triggering neurons tuned to the direction opposite to the actual movement.
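This direction flip can be illustrated with a toy correlation-based motion detector, in the spirit of classic motion-energy models. This is a minimal sketch, not the MotionNet architecture; the function name and stimulus values are invented for illustration:

```python
import numpy as np

def direction_signal(frame1, frame2):
    """Crude spatiotemporal correlator: compare frame2 against frame1
    shifted one pixel right vs. one pixel left. Positive -> rightward."""
    right = np.sum(np.roll(frame1, 1) * frame2)   # evidence for rightward motion
    left = np.sum(np.roll(frame1, -1) * frame2)   # evidence for leftward motion
    return right - left

# A single bright dot on a mid-grey (zero) background, stepping one pixel right.
frame1 = np.zeros(16); frame1[5] = 1.0
frame2 = np.zeros(16); frame2[6] = 1.0

phi = direction_signal(frame1, frame2)            # standard 'phi' motion
reverse_phi = direction_signal(frame1, -frame2)   # second dot contrast-inverted

print(phi, reverse_phi)  # phi is positive (rightward), reverse_phi negative
```

Inverting the contrast of the second frame flips the sign of the correlation, so detectors tuned to the opposite direction respond most strongly, matching what the researchers observed in MotionNet's direction-tuned units.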
The artificial system also revealed new information about this common illusion: the speed of reverse-phi motion is affected by how far apart the dots are, but in the opposite way to what would be expected. Dots ‘moving’ at a constant speed appear to move faster if spaced a short distance apart, and more slowly if spaced a longer distance apart.
“We’ve known about reverse-phi motion for a long time, but the new model generated a completely new prediction about how we experience it, which no-one has ever looked at or tested before,” said Dr Reuben Rideaux, a researcher in the University of Cambridge’s Department of Psychology and first author of the study.
Humans are reasonably good at working out the speed and direction of a moving object just by looking at it. It’s how we can catch a ball, estimate depth, or decide if it’s safe to cross the road. We do this by processing the changing patterns of light into a perception of motion – but many aspects of how this happens are still not understood.
“It’s very hard to directly measure what’s going on inside the human brain when we perceive motion – even our best medical technology can’t show us the entire system at work. With MotionNet we have complete access,” said Rideaux.
Thinking things are moving at a different speed than they really are can sometimes have catastrophic consequences. For example, people tend to underestimate how fast they are driving in foggy conditions, because dimmer scenery appears to be moving past more slowly than it really is.
The researchers showed in a previous study that neurons in our brain are biased towards slow speeds, so when visibility is low the brain tends to estimate that objects are moving more slowly than they actually are.
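One standard way to formalize such a slow-speed bias is as a Bayesian prior centred on zero speed: the noisier the sensory measurement (as in fog), the more the estimate is pulled towards slow speeds. The sketch below is an illustrative textbook-style model, not the model used in the study, and all parameter values are made up:

```python
def speed_estimate(measured, likelihood_var, prior_var):
    """Posterior mean for a Gaussian likelihood combined with a
    zero-mean Gaussian 'slow speed' prior."""
    weight = prior_var / (prior_var + likelihood_var)
    return weight * measured

clear = speed_estimate(10.0, 1.0, 4.0)   # reliable measurement in clear weather
foggy = speed_estimate(10.0, 16.0, 4.0)  # noisy measurement in fog

print(clear, foggy)  # the foggy estimate is pulled further towards zero
```

With the same true speed, the noisier (foggy) measurement yields a lower estimate, mirroring why drivers underestimate their speed in fog.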
Revealing more about the reverse-phi illusion is just one example of the way that MotionNet is providing new insights into how we perceive motion. With confidence that the artificial system is solving visual problems in a very similar way to human brains, the researchers hope to fill in many gaps in current understanding of how this part of our brain works.
Predictions from MotionNet will need to be validated in biological experiments, but the researchers say that knowing which part of the brain to focus on will save a lot of time.
Rideaux and his study co-author Dr Andrew Welchman are part of Cambridge’s Adaptive Brain Lab, where a team of researchers is examining the brain mechanisms underlying our ability to perceive the structure of the world around us.
Exploring and explaining properties of motion processing in biological brains using a neural network
Visual motion perception underpins behaviors ranging from navigation to depth perception and grasping. Our limited access to biological systems constrains our understanding of how motion is processed within the brain.
Here we explore properties of motion perception in biological systems by training a neural network to estimate the velocity of image sequences. The network recapitulates key characteristics of motion processing in biological brains, and we use our access to its structure to explore and understand motion (mis)perception. We find that the network captures the biological response to reverse-phi motion in terms of direction.
We further find that it overestimates the speed of slow reverse-phi motion and underestimates the speed of fast reverse-phi motion, because of the correlation between reverse-phi motion and the spatiotemporal receptive fields tuned to motion in the opposite direction. We also find that the distribution of spatiotemporal tuning properties in the V1 and middle temporal (MT) layers of the network is similar to that observed in biological systems. We then show that, in comparison to MT units tuned to fast speeds, those tuned to slow speeds primarily receive input from V1 units tuned to high spatial frequency and low temporal frequency. Next, we find a positive correlation between the pattern-motion and speed selectivity of MT units. Finally, we show that the network captures human underestimation of the speed of low-coherence motion stimuli, and that this is due to the pooling of noise and signal motion.
These findings provide biologically plausible explanations for well-known phenomena and produce concrete predictions for future psychophysical and neurophysiological experiments.
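The link between spatiotemporal tuning and preferred speed described above follows from a standard relation in vision science: a grating of spatial frequency sf (cycles/deg) drifting at speed v (deg/s) produces temporal frequency tf = v × sf, so a filter tuned to (sf, tf) responds best at speed tf / sf. A tiny illustration of why high-spatial-frequency, low-temporal-frequency units feed slow-speed channels (the helper name and tuning values are ours, not the paper's):

```python
def preferred_speed(sf_cpd, tf_hz):
    """Preferred speed (deg/s) of a filter tuned to spatial frequency
    sf_cpd (cycles/deg) and temporal frequency tf_hz (cycles/s)."""
    return tf_hz / sf_cpd

# Illustrative tunings: high-SF/low-TF vs. low-SF/high-TF units.
slow = preferred_speed(8.0, 1.0)   # high SF, low TF -> slow preferred speed
fast = preferred_speed(1.0, 8.0)   # low SF, high TF -> fast preferred speed

print(slow, fast)
```

Under this relation, units combining high spatial frequency with low temporal frequency necessarily prefer slow speeds, consistent with the pattern of V1-to-MT connectivity found in the network.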