When AI Becomes a Co-Author of Your Delusions

Summary: A new analysis argues that the real danger of generative AI isn’t just that it produces false information, but that it can reinforce and amplify our own distorted beliefs. Drawing on distributed cognition theory, the study suggests that conversational AI can become part of our thinking process, shaping memory, identity, and self-narratives.

Because chatbots function not only as cognitive tools but also as social partners, they may validate false beliefs in ways that make them feel shared and real. The study warns that without stronger guardrails, AI systems could unintentionally sustain delusions, conspiracy thinking, or “AI-induced psychosis.”

Key Facts

  • Distributed Cognition Risk: Conversational AI can become part of a user’s cognitive process, influencing memory, belief formation, and identity narratives.
  • Dual Function Effect: AI operates both as a thinking tool and as a perceived social companion, increasing the power of affirmation.
  • Validation Loop: Personalization and sycophantic tendencies may reinforce false beliefs instead of challenging them.

Source: University of Exeter

When generative AI systems produce false information, this is often framed as AI “hallucinating at us”—generating errors that we might mistakenly accept as true.

But a new study argues we should pay attention to a more dynamic phenomenon: how we can come to hallucinate with AI.

Lucy Osler, from the University of Exeter, analyses troubling ways in which human-AI interactions can lead to inaccurate beliefs, distorted memories and self-narratives, and delusional thinking.

Drawing on distributed cognition theory, the study analyses cases where users’ false beliefs were actively affirmed and built upon through interactions with AI systems acting as conversational partners.

Conversational AI may act not only as a tool for thinking, but as a validating social partner—shaping beliefs, memory, and perception of reality. Credit: Neuroscience News

Dr Osler said: “When we routinely rely on generative AI to help us think, remember, and narrate, we can hallucinate with AI. This can happen when AI introduces errors into the distributed cognitive process, but it can also happen when AI sustains, affirms, and elaborates on our own delusional thinking and self-narratives.

“By interacting with conversational AI, people’s own false beliefs can not only be affirmed but can more substantially take root and grow as the AI builds upon them. This happens because generative AI often takes our own interpretation of reality as the ground upon which conversation is built.

“Interacting with generative AI is having a real impact on people’s grasp of what is real or not. The combination of technological authority and social affirmation creates an ideal environment for delusions not merely to persist but to flourish.”

The study identifies what Dr Osler calls the “dual function” of conversational AI. These systems operate both as cognitive tools that help us think and remember, and as apparent conversational partners who seem to share our world. This second function is significant: unlike a notebook or a search engine, which merely records our thoughts, chatbots can provide a sense of social validation of our realities.

Dr Osler said: “The conversational, companion-like nature of chatbots means they can provide a sense of social validation—making false beliefs feel shared with another, and thereby more real.”

Dr Osler analysed real cases in which generative AI systems became a distributed part of the cognitive processes of people clinically diagnosed with delusional thinking and hallucinations, cases that are increasingly referred to as instances of “AI-induced psychosis”.

The study suggests that generative AI has distinctive features that make it particularly prone to sustaining delusional realities. AI companions are immediately accessible and are already designed to be ‘like-minded’ to their users through personalization algorithms and sycophantic tendencies. There is no need to seek out fringe communities or convince others of one’s beliefs.

Unlike a person who might eventually express concern or set boundaries, an AI could provide validation for narratives of victimhood, entitlement, or revenge. Conspiracy theories could find fertile ground in which to grow, with AI companions that help users construct increasingly elaborate explanatory frameworks.

This may be particularly appealing for those who are lonely, socially isolated, or who feel unable to discuss certain experiences with others—AI companions offer a non-judgmental, emotionally responsive presence that can feel safer than human relationships.

Dr Osler said: “Through more sophisticated guard-railing, built-in fact-checking, and reduced sycophancy, AI systems could be designed to minimize the number of errors they introduce into conversations and to check and challenge users’ own inputs.

“However, a deeper worry is that AI systems are reliant on our own accounts of our lives. They simply lack the embodied experience and social embeddedness in the world to know when they should go along with us and when to push back.”

Key Questions Answered:

Q: What does it mean to “hallucinate with AI”?

A: It refers to situations where AI systems reinforce or expand a user’s false beliefs, becoming part of a shared cognitive process that sustains distorted thinking.

Q: Why are conversational AIs especially risky?

A: Unlike tools like notebooks or search engines, chatbots act as social partners, providing affirmation that can make beliefs feel validated and shared.

Q: Could AI really contribute to psychosis?

A: The study examines cases where AI interactions became integrated into delusional thinking, raising concerns about “AI-induced psychosis,” particularly among vulnerable users.

Editorial Notes:

  • This article was edited by a Neuroscience News editor.
  • Journal paper reviewed in full.
  • Additional context added by our staff.

About this AI and psychology research news

Author: Louise Vennells
Source: University of Exeter
Contact: Louise Vennells – University of Exeter
Image: The image is credited to Neuroscience News

Original Research: Open access.
“Hallucinating with AI: Distributed Delusions and ‘AI Psychosis’” by Lucy Osler. Philosophy & Technology
DOI: 10.1007/s13347-026-01034-3


Abstract

Hallucinating with AI: Distributed Delusions and “AI Psychosis”

There is much discussion of the false outputs that generative AI systems such as ChatGPT, Claude, Gemini, DeepSeek, and Grok create. In popular terminology, these have been dubbed “AI hallucinations”.

However, deeming these AI outputs “hallucinations” is controversial, with many claiming this is a metaphorical misnomer.

Nevertheless, in this paper, I argue that when viewed through the lens of distributed cognition theory, we can better see the dynamic ways in which inaccurate beliefs, distorted memories and self-narratives, and delusional thinking can emerge through human-AI interactions; extreme examples of which are sometimes referred to as “AI(-induced) psychosis”.

In such cases, I suggest we move away from thinking about how an AI system might hallucinate at us, by generating false outputs, to thinking about how, when we routinely rely on generative AI to help us think, remember, and narrate, we can come to hallucinate with AI.

This can happen when AI introduces errors into the distributed cognitive process, but it can also happen when AI sustains, affirms, and elaborates on our own delusional thinking and self-narratives.

In particular, I suggest that the social conversational style of chatbots can lead them to play a dual-function—both as a cognitive artefact and a quasi-Other with whom we co-construct our sense of reality. It is this dual function, I suggest, that makes generative AI an unusual, and particularly seductive, case of distributed delusion.
