What is AI psychosis?
"AI psychosis" is a term for situations in which LLMs appear to reinforce or amplify users' delusions. The term has also been used more broadly to refer to various dynamics in which LLMs are bad for their users, including sycophancy in general, “parasitic AI,” and AI enabling self-harm. This article will stick to the stricter meaning that involves user delusion.
The phenomenon first attracted attention in 2025, with scattered reports of people engaging with chatbots, especially GPT-4o, in ways that seemed to encourage psychotic patterns of thought. However, it remains unclear whether such people would have become psychotic in the absence of LLMs — maybe LLMs are simply a new medium through which pre-existing conditions manifest.
AI psychosis is not simply a matter of forming strange or false beliefs through interacting with LLMs. Using an LLM for astrology or as a psychic medium isn't necessarily psychosis if the user maintains some awareness that they're engaging in a creative or spiritual exercise. The concerning cases involve users whose picture of reality breaks down drastically: those who believe they've discovered fundamental truths about reality, proven they're the messiah, or confirmed they're living in a simulation designed specifically for them. These users often become isolated from friends and family, losing the people most likely to push back against their delusions.
LLMs’ tendency to validate users’ delusional beliefs may be rooted in their broader sycophancy. If someone with emerging delusions asks a question like “What if I’m the only real person?”, a human conversation partner will likely push back against the idea, but a sycophantic model will validate it. The LLM’s ability to elaborate coherently on any premise can then transform fleeting odd thoughts into elaborate delusional systems that feel like discovered truths.
Caption: In “AI Induced Psychosis: A shallow investigation,” Tim Hua illustrates that models vary widely in their propensity to indulge these beliefs.
As of Q3 2025, there are many anecdotal reports of people being diagnosed with psychosis after extended chat sessions with LLMs, but only limited evidence of an increase in total hospitalizations for psychosis. In the future, AI psychosis could become more prevalent as people increasingly turn to LLMs for emotional support, or it could decline if AI companies train their models to be less sycophantic.
Beyond the immediate mental health concerns, people have argued that AI psychosis illustrates difficulties with alignment. Some LLMs occasionally seem to behave as if they're deliberately feeding their users’ delusions, even though those same LLMs would respond, if asked, that doing so is morally wrong. This suggests that aligning a model isn't as simple as teaching it to give reasonable answers to questions about what the right thing to do is.
Further reading:
- Scott Alexander’s “In Search Of AI Psychosis”