Artificial Intelligence-Induced Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, OpenAI’s chief executive made a surprising announcement.
“We made ChatGPT fairly restrictive,” he said, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, I was surprised to hear it.
Researchers have documented a series of cases this year of users developing symptoms of psychosis – losing touch with reality – in the context of their interactions with ChatGPT. My own group has since identified four further cases. Alongside these is the widely reported case of a teenager who died by suicide after discussing his plans with ChatGPT – which endorsed them. If this is what Sam Altman means by “being careful with mental health issues,” it is not enough.
The plan, according to his announcement, is to be less careful in the near future. “We realize,” he says, that ChatGPT’s restrictions “made it less useful/enjoyable to many people who had no existing mental health conditions, but given the seriousness of the issue we wanted to get it right. Now that we have been able to address the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues,” in this framing, are external to ChatGPT. They belong to people, who either have them or don’t. Happily, those issues have now been “addressed,” though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented safety features that OpenAI has recently rolled out).
But the “mental health issues” Altman wants to externalize are rooted, in significant part, in the design of ChatGPT and other large language model chatbots. These products wrap an underlying statistical engine in an interface that mimics conversation, and in doing so they implicitly invite the user into the illusion of talking with a being that has a mind of its own. The illusion is powerful even when, rationally, we know better. Attributing minds to things is something people are primed to do. We yell at our cars and laptops. We wonder what our pets are thinking. We see minds at work all around us.
The mass adoption of these products – more than a third of American adults said they had used a conversational AI in 2024, with 28% naming ChatGPT specifically – depends in large part on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “generate ideas,” “discuss concepts” and “partner” with us. They can be given “personality traits”. They can call us by name. They have friendly names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it went viral, but its major competitors are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the core problem. Commentators on ChatGPT often point to its early forerunner, the Eliza “therapist” chatbot built in the mid-1960s, which produced an analogous illusion. By modern standards Eliza was rudimentary: it generated replies through simple tricks, often restating the user’s message as a question or offering generic observations. Even so, Eliza’s creator, the AI researcher Joseph Weizenbaum, was astonished – and alarmed – by how many users seemed to believe that Eliza, in some sense, understood their feelings. But what modern chatbots create is more insidious than the “Eliza illusion”. Eliza only reflected; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other contemporary chatbots can generate convincingly fluent dialogue only because they have been trained on vast quantities of text: books, social media posts, transcripts of videos; the more the better. That training material certainly contains truths. But it also inevitably contains fictions, half-truths and false beliefs. When a user sends ChatGPT a prompt, the underlying model reads it as part of a “context” that includes the user’s recent messages and its own prior replies, and combines it with what is encoded in its training to produce a statistically plausible response. This is not echoing; it is amplification. If the user is mistaken about something, the model has no way of knowing that. It repeats the false belief back, perhaps more fluently or more convincingly. Perhaps it adds a new detail. That is a path into delusion.
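To make that loop concrete, here is a minimal, purely illustrative Python sketch. Nothing in it reflects OpenAI’s actual architecture or API; generate_reply and chat_turn are hypothetical stand-ins. What it shows is structural: every message, accurate or not, is appended to the context, the next reply is conditioned on everything said so far, and at no point is a false claim checked against reality.

```python
# Purely illustrative sketch of a chatbot loop.
# "generate_reply" is a hypothetical stand-in for a large language model;
# it is NOT OpenAI's API or ChatGPT's actual implementation.

def generate_reply(context: list[dict]) -> str:
    # A real model samples a fluent continuation of the whole context from
    # patterns in its training data. This toy version shows only the
    # structural problem: the user's last claim is taken at face value
    # and elaborated, never verified.
    last_user = next(m["content"] for m in reversed(context) if m["role"] == "user")
    return f"You're right that {last_user.rstrip('.')}. Building on that idea..."

def chat_turn(context: list[dict], user_message: str) -> str:
    # The user's message becomes part of the context, true or false...
    context.append({"role": "user", "content": user_message})
    # ...and the reply is conditioned on everything said so far,
    # including the model's own earlier agreement.
    reply = generate_reply(context)
    context.append({"role": "assistant", "content": reply})
    return reply

context: list[dict] = []
print(chat_turn(context, "My neighbours are sending me coded messages."))
print(chat_turn(context, "So the messages must be meant only for me."))
```

Run twice, the toy loop never pushes back; each turn simply builds on the last, which is the reinforcement dynamic described above, stripped to its skeleton.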
Who is vulnerable here? The better question is: who is immune? All of us, whether or not we “have” pre-existing “mental health conditions”, can and do form false beliefs about ourselves or the world. It is the constant friction of conversation with the people around us that keeps us anchored to consensus reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a genuine exchange but a feedback loop, in which much of what we say is cheerfully reinforced back to us.
OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name and declaring it solved. In the spring, the company announced that it was “addressing” ChatGPT’s “overly supportive behavior”. But cases of psychosis have continued to appear, and Altman has been walking the claim back. In late summer he said that many people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a lot of emoji, or act like a friend, ChatGPT should do it”. The company