Summary: Researchers have developed AI-generated “visual anagrams” — images that transform into entirely new objects when rotated — to explore how the brain processes perception. Unlike traditional optical illusions, these rotatable images allow scientists to isolate how people interpret size, emotion, and animacy in visual information.
Early experiments revealed that people’s aesthetic preferences still matched real-world size expectations, even when viewing the exact same image in different orientations. The work provides a breakthrough tool for studying perception and cognition with precision never before possible.
Key Facts:
- AI Visual Anagrams: The team created AI-generated images that appear as one object (like a bear) when upright and another (like a butterfly) when rotated.
- Mind-Perception Breakthrough: These images allow scientists to study how the brain interprets attributes such as size, emotion, and animacy using identical visual inputs.
- New Tool for Psychology: Researchers plan to use this method to test perception of animacy and emotion, offering new insights into how humans recognize and react to objects.
Source: JHU
New artificial intelligence-generated images that appear to be one thing when upright and something else entirely when rotated are helping scientists test the human mind.
The work by Johns Hopkins University perception researchers addresses a longstanding need for uniform stimuli to rigorously study how people mentally process visual information.
“These images are really important because we can use them to study all sorts of effects that scientists previously thought were nearly impossible to study in isolation—everything from size to animacy to emotion,” said first author Tal Boger, a PhD student studying visual perception.
“Not to mention how fun they are to look at,” added senior author Chaz Firestone, who runs the university’s Perception & Mind Lab.
The team adapted a new AI tool to create “visual anagrams.” An anagram is a word that spells something else when its letters are rearranged. Visual anagrams are images that look like something else when rotated.
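The article does not detail how the images are generated, but the paper’s abstract (below) describes the method as a diffusion-based technique whose images change interpretation with orientation. As a rough sketch of that general idea only: at each denoising step, noise is predicted for the image and for a rotated view, the rotated estimate is mapped back, and the two are averaged. Everything in the snippet (the DummyDenoiser stand-in, the anagram_sample helper, and the toy update rule) is hypothetical and illustrative, not the researchers’ actual tool or code.

```python
# Illustrative sketch only: a toy denoising loop steered by two prompts and
# two views (upright and rotated 180 degrees). The model here is a random
# stand-in; a real implementation would use a trained text-conditioned
# diffusion model and a proper sampler schedule.
import torch


def rot180(x: torch.Tensor) -> torch.Tensor:
    """Rotate a (C, H, W) image tensor by 180 degrees (self-inverse)."""
    return torch.flip(x, dims=(1, 2))


class DummyDenoiser(torch.nn.Module):
    """Placeholder for a text-conditioned diffusion noise predictor."""

    def forward(self, x: torch.Tensor, t: int, prompt: str) -> torch.Tensor:
        # A real model would predict the noise present in x given t and prompt.
        return torch.randn_like(x)


def anagram_sample(model, prompts, steps=50, size=(3, 64, 64)):
    """Denoise one image so each prompt is matched in its own orientation."""
    views = [lambda img: img, rot180]      # identity view and 180-degree view
    x = torch.randn(size)                  # start from pure noise
    for t in reversed(range(steps)):
        # Predict noise for each (view, prompt) pair, then map each estimate
        # back to the canonical orientation so they can be averaged.
        eps = torch.stack([
            view(model(view(x), t, prompt))  # rot180 undoes itself
            for view, prompt in zip(views, prompts)
        ]).mean(dim=0)
        x = x - eps / steps                # toy update; real samplers differ
    return x


image = anagram_sample(DummyDenoiser(), ["a bear", "a butterfly"])
print(image.shape)  # torch.Size([3, 64, 64])
```

The key intuition the sketch tries to convey is that a single set of pixels is pushed to satisfy both prompts at once, one per orientation.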
The visual anagrams the team created include a single image that is both a bear and a butterfly, another that is both an elephant and a rabbit, and a third that is both a duck and a horse.
“This is an important new kind of image for our field,” said Firestone. “If something looks like a butterfly in one orientation and a bear in another—but it’s made of the exact same pixels in both cases—then we can study how people perceive aspects of images in a way that hasn’t really been possible before.”
The findings are published today in Current Biology.
The team ran initial experiments exploring how people perceive the real-world size of objects. Real-world size has posed a longstanding puzzle for perception scientists, because one can never be certain whether subjects are reacting to an object’s size or to some other, more subtle visual property, such as its shape, color, or fuzziness.
“Let’s say we want to know how the brain responds to the size of an object. Past research shows that big things get processed in a different brain region than small things. But if we show people two objects that differ in how big they are—say, a butterfly and a bear—those objects are also going to differ in lots of other ways: their shape, their texture, how bright or colorful they are, and so on,” Firestone explained.
“That makes it hard to know what’s really driving the brain’s response. Are people reacting to the fact that bears are big and butterflies are small, or is it that bears are rounder or furrier? The field has really struggled to address this issue.”
With the visual anagrams, the team found evidence for many classic real-world size effects, even when the large and small objects used in their studies were just rotated versions of the same image.
For example, previous work has shown that people find images more aesthetically pleasing when they are depicted at sizes that match their real-world size—preferring, say, pictures of bears to be bigger than pictures of butterflies.
Boger and Firestone found that this was also true for visual anagrams: When subjects adjusted the bear image to be its ideal size, they made it bigger than when they adjusted the butterfly image to be its ideal size—even though the butterfly and the bear are the very same image in different orientations.
The team hopes to use visual anagrams to study how people respond to animate and inanimate objects and expects the technique to have many possible uses for future experiments in psychology and neuroscience.
“We used anagrams to study size, but you could use them for just about anything,” Firestone said.
“Animate and inanimate objects are processed in different areas of the brain too, so you could make anagrams that look like a truck in one orientation but a dog in another. The approach is quite general, and we can foresee researchers using it for many different purposes.”
Key Questions Answered:
Q: What are visual anagrams?
A: They are single images that look like one object in one orientation and a completely different one when rotated, created using artificial intelligence.
Q: Why are they useful for studying perception?
A: They allow scientists to isolate and study how people interpret key visual properties without confounding variables like color or texture.
Q: What did the size experiments show?
A: Participants preferred image sizes that matched real-world expectations, even when the large and small objects were the same image in different orientations.
About this AI and visual neuroscience research news
Author: Jill Rosen
Source: JHU
Contact: Jill Rosen – JHU
Image: The image is credited to Neuroscience News
Original Research: Open access.
“Visual anagrams reveal high-level effects with ‘identical’ stimuli” by Tal Boger et al. Current Biology
Abstract
Visual anagrams reveal high-level effects with ‘identical’ stimuli
A fundamental question in psychology and neuroscience concerns how the mind represents not only lower-level stimulus features such as luminance, contrast, or spatial frequency, but also richer, higher-level properties such as animacy, emotion, or real-world size.
Numerous findings suggest that such high-level properties are encoded automatically, engage visual attention, and organize neural responses.
However, a critical challenge arises when interpreting such findings: High-level categories systematically covary with lower-level features, such that effects attributed to high-level properties may instead be driven by their lower-level covariates.
Can this challenge be overcome? Here, we introduce a novel approach by leveraging ‘visual anagrams’ — a diffusion-based technique for generating images whose interpretations change radically with orientation, such as a cow when upright and a mouse when inverted.
Using real-world size as a case study, we generated anagrams depicting a canonically large object in one orientation and a canonically small object in another, and placed them in classic experimental paradigms.
Five experiments revealed that many (but not all) effects of real-world size persisted under such conditions.
Together, our findings address a longstanding challenge in perception research and establish a broadly applicable tool for psychology and neuroscience.