Why Some People Are Naturally Better at Detecting AI Images

Summary: As AI-generated imagery becomes increasingly indistinguishable from reality, a new study reveals that the ability to spot a “deepfake” isn’t about how much you know about technology—it’s about a stable, internal trait called object recognition.

Researchers developed the “AI Face Test” and discovered that general intelligence and technical experience are poor predictors of success. Instead, individuals with a high natural ability to distinguish visually similar objects (a skill used by radiologists to spot tumors or birdwatchers to identify species) are consistently better at identifying synthetic faces.

This suggests that some people possess a specialized “perceptual armor” that makes them less vulnerable to digital deception.

Key Facts

  • The “Object Recognition” Factor: The strongest predictor of AI detection is not tech-savviness, but a person’s general ability to recognize and categorize visually similar objects.
  • The AI Face Test: This new tool is the first of its kind to measure individual differences in the ability to separate real human faces from synthetic ones.
  • Stable Traits over Training: General intelligence and specific AI training did not help people judge faces more accurately; the ability appears to be a stable, inherent visual trait.
  • Cross-Domain Skill: This same visual ability has been linked to high performance in specialized fields like radiology (reading X-rays) and pathology (identifying cancerous cells).
  • Societal Resilience: Identifying individuals with high object recognition skills may help society understand who is naturally more “immune” to visual misinformation.

Source: Vanderbilt University

Can you tell the difference between an artificial-intelligence-generated face and a real one? In an era of digital misinformation, where fabricated images can spread widely across news and social media, this skill is proving invaluable.

A new study has found that a person’s object recognition ability, or the ability to distinguish visually similar objects, can predict who can spot an AI-generated face. The higher the ability, the easier it is for a person to tell the difference.

The study was authored by Isabel Gauthier, David K. Wilson Chair and Professor of Psychology, Jason Chow, Ph.D.’24, and Rankin McGugin, former research assistant professor in the Department of Psychology.

A person’s natural object recognition ability acts as a primary defense against AI-generated misinformation, allowing them to detect subtle synthetic artifacts that escape others. Credit: Neuroscience News

This discovery highlights the importance of general object recognition and suggests that such abilities may play a crucial role in helping society better prepare for emerging forms of digital deception. By uncovering the traits that make some people less vulnerable to AI-generated misinformation, this work advances our understanding of human perception in the age of AI.

“These results highlight a visual ability that has very general applications,” Gauthier said. “It’s a stable trait that helps people meet new perceptual challenges, including those created by AI. We were shocked to see how intelligence or even technology training did not help accurately judge if a face is AI.”

In the study, researchers developed the AI Face Test, the first tool designed to measure individual differences in this skill. They found that traditional factors such as intelligence, experience with AI, or even specialized face recognition skills did not predict who could reliably tell real from fake. Instead, the strongest predictor was object recognition.

“We were interested not just in examining whether people are able to differentiate between a real face and an AI-generated face, but in comparing people on their ability to perform this task and see if we could predict the performance using object recognition,” Gauthier said.

“This approach is very novel—there’s not a lot of people who study individual differences in object recognition. In vision, there’s a tradition of looking at the average of a group. Nobody has been asking these questions, and we have a lot to learn about how people do these things.”

People with stronger object recognition skills consistently outperformed others in identifying AI-generated faces, and their performance remained stable when they were re-tested.

This same ability has been linked in other research to performance in diverse tasks, such as identifying lung nodules in chest X-rays, categorizing blood cells as cancerous, recognizing musical notation, and even judging sex from retinal images.

The findings show that a broad visual ability, not tied to faces or technology experience, helps some individuals navigate the unprecedented challenge of distinguishing real from synthetic images.

“There is this general message we hear in the media that AI images are so realistic that we can’t tell the difference, and I think that’s misleading,” Gauthier said.

“I think there’s a lot of messaging indicating that we can’t differentiate, when in fact, what you have is a distribution of people. There are some who can’t tell the difference, and then there are some who are doing it great, and then there’s some who are doing it okay. As AI becomes ever present in our reality, I think it’s useful to know that some people are better at this than others.”

Key Questions Answered:

Q: Can I train myself to be better at spotting AI images?

A: While you can learn specific “tells” (like weird shadows or extra fingers), this study found that people who are naturally good at object recognition are consistently the best at spotting fakes. It seems to be more of a “baked-in” visual skill than something learned through tech training.

Q: Why doesn’t being “tech-savvy” help?

A: Tech-savvy people know how AI works, but that’s different from perceiving the subtle visual inconsistencies that a synthetic image creates. High object recognizers are better at noticing “visual noise” that doesn’t belong, regardless of their background in technology.

Q: Does this mean AI will eventually fool everyone?

A: The media often says AI is “too realistic to tell,” but this research argues that’s a myth. There is a wide distribution of ability—some people struggle, but others are remarkably good at identifying fakes. We aren’t all equally vulnerable.


About this AI and visual perception research news

Author: Mary-Lou Watkinson
Source: Vanderbilt University
Contact: Mary-Lou Watkinson – Vanderbilt University
Image: The image is credited to Neuroscience News

Original Research: Closed access.
“Domain-general object recognition predicts human ability to tell real from AI-generated faces” by Chow, J. K., McGugin, R. W., & Gauthier, I. Journal of Experimental Psychology
DOI: 10.1037/xge0001881


Abstract

Domain-general object recognition predicts human ability to tell real from AI-generated faces

Faces created by artificial intelligence (AI) are now considered indistinguishable from real faces. Still, humans vary in their ability to detect these faces—a skill so novel it would have been useless a few years ago.

We show that some individuals are consistently better at discriminating real from AI-generated faces. We used latent variable modeling to test whether this ability can be predicted by a domain-general ability, called o, which is measured as the shared variance between perceptual and memory judgments of both novel and familiar objects.

We show that o predicts detection of AI-generated faces better than face recognition, intelligence, or experience with AI. An analysis of the relation between performance and cues in the image reveals that people are more likely to be misled by cues from AI faces than from real faces.

It also suggests that those with a high o are less cue dependent than those with a low o. The o advantage on our task likely reflects robust visual processing under challenging conditions rather than superior artifact detection.

Our results add to a growing literature suggesting that o can predict a wide range of perceptual decisions, including one that lacks evolutionary precedent, providing insights into the cognitive architecture underlying complex perceptual judgments.

An understanding of individual differences in AI detection may facilitate interactions between humans and AI, for instance, to optimize training data for generative models.
