Summary: Our eyes are constantly “jumping” (saccades) several times a second, which should make the world look like a shaky, handheld camera video. Yet, the world remains perfectly still. A new study used afterimages—the ghostly shapes left behind after looking at a bright light—to decode how the brain achieves this stability.
By tracking afterimages in total darkness, researchers discovered that the brain uses an internal “efference copy” of its own movement commands to predict where things should be. While this prediction is incredibly accurate, it carries a consistent 6% “undershoot” that reveals the inner workings of our visual hardware.
Key Facts
- The Saccade Paradox: Even though our eyes move abruptly, our perception doesn’t “shift” because the brain predicts the visual consequences of the movement before they happen.
- The 94% Rule: On average, the brain’s internal estimate of an eye movement reaches about 94% of the actual distance. This tiny, systematic error is called hypometria.
- Efference Copy: The brain doesn’t wait for visual feedback to know where the eyes moved; it uses a “carbon copy” of the motor signal sent to the eye muscles to update its internal map.
- Predictive Remapping: Afterimages move with our gaze because they stay fixed on the retina. The brain, expecting the image to shift across the retina during a jump, concludes the “ghostly light” must be moving through space to stay in the same retinal spot.
- Adaptive Mapping: When the eyes get tired and movements shorten (saccadic adaptation), the brain’s internal prediction automatically adjusts to match the new, shorter movements.
Source: TUB
Contrary to what you and I might experience when we explore the world, our eyes do not provide us with a continuous and stable view of it. They jump several times each second in rapid movements called saccades. Because the eye projects the world onto the retina, we should see the world shift abruptly each time the eyes move—the visual scene should feel unstable, yet the brain uses sophisticated mechanisms that ensure it does not.
A recent study, titled “High-fidelity but hypometric spatial localization of afterimages across saccades”, published in Science Advances, found that common afterimages, like the faint shape we all see after looking at a bright light, provide insights into how the brain achieves that stability.
The work was conducted by Richard Schweitzer, Thomas Seel, Jörg Raisch, and Martin Rolfs, researchers from the Cluster of Excellence Science of Intelligence in Berlin. Using afterimages as an experimental tool, the team set out to measure how accurately the brain predicts the visual consequences of its own eye movements. The study reveals that these predictions are very accurate, but are subject to systematic errors.
Using afterimages to isolate the brain’s internal signals
The phenomenon that afterimages follow wherever we direct our gaze was documented as early as Aristotle, and it reveals a striking dissociation: the visual world appears stable even though eye movements constantly shift its projection across the retina, while afterimages seem to drift across the scene despite remaining fixed on the retina. Visual stability and the apparent motion of afterimages may therefore be two sides of the same coin: the brain’s attempt to account for its own eye movements.
To examine these mechanisms, the experiments had to be conducted in complete darkness—the opposite of normal everyday vision, where the richness of the visual scene provides constant feedback that helps the brain estimate each eye movement.
Sitting in the dark, participants first fixated a bright flash that created an afterimage and then looked over to a second, briefly illuminated light source. Then, once the afterimage became clearly visible, brief probe lights appeared at specific positions, and participants reported whether the afterimage seemed to lie to the left of the probe light, to the right, or directly aligned with it.
From these responses the researchers could estimate where the afterimage was perceived. Eye-tracking measurements monitored where participants really looked—allowing the researchers to determine how closely perception tracked the actual movement of the eye.
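The left-or-right probe judgments described above can be converted into a perceived position with a standard psychometric fit. Below is a minimal Python sketch of that idea; the probe positions, response proportions, and the cumulative-Gaussian form are illustrative assumptions, not the authors’ actual data or model:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical data: probe positions (degrees of visual angle) and the fraction
# of trials on which the afterimage was judged to lie LEFT of the probe.
probe_x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0, 3.0])
p_left = np.array([0.05, 0.15, 0.40, 0.70, 0.90, 0.98])

def psychometric(x, mu, sigma):
    """Cumulative-Gaussian psychometric function: mu is the point of
    subjective alignment, i.e., the perceived afterimage position."""
    return norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(psychometric, probe_x, p_left,
                           p0=[0.5, 1.0], bounds=([-5.0, 0.1], [5.0, 5.0]))
print(f"Perceived afterimage position (PSE): {mu:.2f} deg (slope {sigma:.2f} deg)")
```

The fitted `mu` (the 50% crossing point) serves as the estimate of where the afterimage was perceived on that block of trials.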
What the study found: Prediction is highly accurate – but still slightly short
Afterimages closely followed the eyes: The larger the eye movement, the farther the afterimage appeared to move in space. Yet this match was not perfect. “On average, the perceived shift of the afterimage reached about 94 percent of the actual eye movement,” says Richard Schweitzer, lead author of the study. “In practical terms, perception follows eye movements very closely, but not perfectly.”
This small undershoot, called hypometria, held across individuals and remained consistent across different directions and sizes of eye movements. This suggests a systematic inaccuracy in the brain’s prediction rather than a random error. The difference is subtle enough that most people never consciously notice it; understanding it, however, requires looking at how the brain updates its map of space after each eye movement.
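The reported relationship can be sketched as a simple linear gain model. The ~94% figure comes from the article; the linear form and the example amplitudes are an illustration, not the study’s full computational model:

```python
# Illustrative gain model: perceived shift = GAIN * actual saccade amplitude.
GAIN = 0.94  # perceived shift reaches ~94% of the actual eye movement (per the article)

def perceived_afterimage_shift(saccade_amplitude_deg):
    """Predicted perceived shift of the afterimage for a given saccade size."""
    return GAIN * saccade_amplitude_deg

for amp in (5.0, 10.0, 20.0):
    shift = perceived_afterimage_shift(amp)
    print(f"{amp:.0f} deg saccade -> {shift:.1f} deg perceived shift "
          f"({amp - shift:.1f} deg undershoot)")
```

Note that the absolute undershoot grows with saccade size even though the proportional error stays fixed at about 6%.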
The brain predicts before it sees
Now what actually determines where the afterimage appears? One possibility is that its perceived location is determined based on visual feedback that becomes available after each eye movement. The researchers tested this directly. In some trials, the saccade target (i.e., the light that participants were told to follow) remained briefly visible after the eye landed; in others it was shifted slightly to create deliberately misleading feedback.
Neither manipulation changed where participants perceived the afterimage, ruling out post-saccadic visual feedback as the source. Instead, the results support existing evidence that the brain uses an internal copy of the command sent to the eye muscles, called an efference copy, to predict how the visual scene should shift.
That signal effectively tells the brain: “the eyes just moved this far,” allowing perception to anticipate the consequences of the movement instead of waiting for new visual input to correct it afterward. The movement of afterimages now reveals that visual predictions derived from the efference copy fall slightly short of the eye movement’s true consequences.
When eye movements change, perception changes with them
That raises a natural follow-up question: If perception depends on the brain’s efference copy, what happens when those movements themselves change?
Eye movements are not fixed. When the eyes consistently miss their targets—say, due to fatigue of the eye muscles—people gradually adjust how far their eyes move. This process, known as saccadic adaptation, can be induced in the lab by shifting the target of the eye movement during each saccade.
This trick provided another insight into the brain’s prediction of the visual consequences of eye movements: As participants’ saccades became shorter through adaptation, the perceived shift of the afterimage shortened with them. Yet, the small systematic undershoot remained, whether saccades were adapted or not.
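This invariance can be sketched in a few lines. Only the ~0.94 perceptual gain comes from the article; the 20% adaptation-induced shortening and the 10-degree baseline are made-up illustrative numbers:

```python
# Sketch of gain invariance under saccadic adaptation: shorter saccades produce
# proportionally shorter perceived afterimage shifts, with the gain unchanged.
GAIN = 0.94  # hypometric perceptual gain, per the article

def perceived_shift(saccade_deg):
    """Perceived afterimage shift predicted from the saccade size."""
    return GAIN * saccade_deg

baseline_saccade = 10.0                    # degrees, before adaptation (illustrative)
adapted_saccade = 0.8 * baseline_saccade   # shortened by adaptation (illustrative)

for label, sacc in [("baseline", baseline_saccade), ("adapted", adapted_saccade)]:
    shift = perceived_shift(sacc)
    print(f"{label}: saccade {sacc:.1f} deg -> perceived shift {shift:.2f} deg "
          f"(gain {shift / sacc:.2f})")
```

The perceived shift shrinks along with the movement, but the ratio between the two—the gain—stays at about 0.94 in both conditions, which is the study’s key observation here.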
Why a small error may actually be expected
That remaining mismatch may not be a flaw. Natural eye movements often fall slightly short of their targets, so it makes sense that the brain’s internal estimate reflects this tendency. Assuming a stable visual environment, in which objects do not suddenly change their positions during saccades, observers can use visual cues in everyday life to learn how much the visual scene typically changes after a given eye movement.
If saccades tend to fall slightly short, it would only be reasonable to expect a slightly smaller visual shift as well. What may matter more than perfect accuracy of the movement is that perception stays reliably aligned with it.
What afterimages reveal about visual stability
If afterimages remain fixed on the retina, then why do they appear to move with our gaze? One possible explanation is that the brain uses its knowledge about the consequences of an upcoming eye movement to predict where an object should appear on the retina after the saccade, a process known as predictive remapping. If this prediction is accurate and matches the object’s actual position, as confirmed by visual feedback, the object is perceived as stable.
In normal visual environments this works well. But an afterimage inevitably violates this prediction: because it stays fixed on the retina while the eyes move, the brain can only conclude that the afterimage itself moved in the same direction as the eyes. In this case, the size of the prediction error corresponds to the size of the predicted visual change.
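The remapping logic above can be written out numerically. In this hypothetical sketch (all values are made up except the ~0.94 gain reported in the article), retinal shifts are compared against the efference-copy prediction, and whatever fails to shift as predicted is attributed to motion through space:

```python
# Illustrative remapping arithmetic for a 10-degree rightward saccade.
gaze_before, gaze_after = 0.0, 10.0            # eye moves 10 deg to the right
efference_gain = 0.94                          # internal estimate slightly undershoots
predicted_saccade = efference_gain * (gaze_after - gaze_before)

# A real object: its retinal position shifts by the full saccade size (opposite sign).
object_retinal_shift = -(gaze_after - gaze_before)

# An afterimage: glued to the retina, so its retinal position never changes.
afterimage_retinal_shift = 0.0

# The brain expects retinal positions to shift by -predicted_saccade; any
# deviation from that expectation is read as motion through space.
inferred_object_motion = object_retinal_shift + predicted_saccade
inferred_afterimage_motion = afterimage_retinal_shift + predicted_saccade

print(f"Object appears to move:     {inferred_object_motion:+.1f} deg")
print(f"Afterimage appears to move: {inferred_afterimage_motion:+.1f} deg")
```

The real object comes out nearly stable (a residual of about 0.6 deg, the 6% undershoot), while the afterimage inherits almost the entire predicted shift and therefore appears to jump with the eyes.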
“Afterimages become a useful tool for studying how the brain keeps the visual world stable by predicting the sensory consequences of its own movements,” says Schweitzer. Understanding these predictive mechanisms may provide insights beyond basic vision science, for example in robotics, virtual reality, and clinical studies of eye-movement disorders, where linking movement with sensory consequences reliably is essential.
At a glance
• Human eyes move several times per second in rapid jumps called saccades, yet the visual world appears stable.
• Afterimages allow researchers to isolate the brain’s internal signals that track these eye movements.
• The brain predicts the visual consequences of eye movements with striking accuracy.
• However, perceived afterimage movement slightly undershoots the true eye movement, reaching about 94% of the actual shift.
• This consistent undershoot suggests a small but expected bias in the brain’s internal estimate of eye-movement-induced visual change.
• The findings help explain how the brain keeps the visual world stable despite constant motion of the eyes.
Key Questions Answered:
Q: If afterimages are stuck to the retina, why do they appear to move through space?
A: It’s a trick of the brain’s own logic. In the real world, when your eyes move, the world moves across your retina. Your brain “subtracts” its eye movement to keep the world still. But an afterimage is stuck to your retina. When your brain subtracts the eye movement and sees the afterimage hasn’t moved on the retina, it assumes the image must be moving through space at the same speed as your eyes.
Q: Is the 6% undershoot a flaw in the brain’s prediction?
A: Not necessarily! Natural eye movements often fall slightly short of their targets. The brain’s 6% undershoot likely reflects this biological reality. It’s better for the brain to be reliably aligned with how our muscles actually behave than to be mathematically perfect but biologically disconnected.
Q: Could these findings have practical applications, for example in virtual reality?
A: Absolutely. Motion sickness often happens when there is a mismatch between what your eyes see and what your brain’s “efference copy” predicts. Understanding that the brain naturally expects a 94% shift could help developers create virtual reality environments that feel more stable and natural to the human eye.
Editorial Notes:
- This article was edited by a Neuroscience News editor.
- Journal paper reviewed in full.
- Additional context added by our staff.
About this visual neuroscience research news
Author: Maria Ott
Source: TUB
Contact: Maria Ott – TUB
Image: The image is credited to Neuroscience News
Original Research: Open access.
“High-fidelity but hypometric spatial localization of afterimages across saccades” by Richard Schweitzer, Thomas Seel, Jörg Raisch, and Martin Rolfs. Science Advances
DOI: 10.1126/sciadv.aeb0557
Abstract
Humans typically perceive their visual world as stable and continuous, despite frequent shifts of the retinotopic reference frame caused by saccades. This perceptual stability is paralleled by afterimage movement across saccades: Although retinotopically stable, afterimages appear to move in egocentric space wherever the eye moves.
To investigate the mechanisms underlying this phenomenon, we tasked human observers to localize afterimages relative to briefly flashed probes in complete darkness. This psychophysical tracking of afterimages was accompanied by eye tracking, allowing us to fit a dedicated computational model to accurately predict afterimage movement based on the size of eye movements.
The gain of afterimage movement was significantly hypometric, remained unaffected by postsaccadic visual feedback and saccadic adaptation, and was inversely related to saccade gain.
Assuming a parsimonious framework of head-centered localization, afterimage movement is driven by efference-based, feedforward predictions of visual consequences of saccades, demonstrating the phenomenon’s usefulness for studying perceptual stability.

