AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On 14 October 2025, the head of OpenAI made an extraordinary announcement.
“We made ChatGPT quite restrictive,” the announcement noted, “to guarantee we were being careful regarding mental health concerns.”
As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I found this surprising.
Researchers have documented 16 cases this year of people developing symptoms of psychosis – a break from reality – in the context of ChatGPT use. My research team has since identified four more. Then there is the widely reported case of a teenager who took his own life after months of conversations with ChatGPT, which encouraged him. If this is what Sam Altman means by being “careful regarding mental health concerns”, it is not careful enough.
The plan, according to his statement, is to be less careful soon. “We realize,” he says, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, on this framing, are something separate from ChatGPT. They belong to users, who either have them or do not. Happily, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls that OpenAI has just launched).
But the “mental health problems” Altman wants to locate elsewhere are rooted deep in the design of ChatGPT and similar large language model AI assistants. These products wrap an underlying statistical model in a user interface that mimics a conversation, and in doing so implicitly invite the user to believe they are talking to an entity with agency. The illusion is powerful even if, rationally, we know better. Attributing minds is what people do. We shout at our car or our computer. We wonder what the cat is thinking. We recognize this behavior in ourselves in all sorts of contexts.
The mass uptake of these products – 39% of US adults said they had used a conversational AI in 2024, with more than a quarter naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website puts it, “think creatively”, “discuss concepts” and “collaborate” with us. They can be given “characteristics”. They can call us by name. They have friendly names of their own (the first of them, ChatGPT, is, perhaps to the regret of OpenAI’s marketers, stuck with the label it had when it broke through, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the main problem. People writing about ChatGPT often invoke its distant ancestor, the Eliza “therapist” chatbot built in the 1960s, which created a similar illusion. By modern standards Eliza was simple: it generated replies using basic heuristics, typically reflecting the user’s statements back as questions or offering stock remarks. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many users seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely mirrored; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other current chatbots can generate convincing natural language only because they have been trained on almost inconceivably large quantities of text: books, online posts, transcribed video; the more, the better. That training material includes truths, of course. But it also inevitably includes fiction, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s recent messages and its own earlier replies, and combines it with what is encoded in its training data to produce a statistically plausible response. This is amplification, not mirroring. If the user is wrong about something in a particular way, the model has no way of knowing. It echoes the misconception back, perhaps more persuasively or more eloquently. Perhaps it adds detail. This is how false beliefs take hold.
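To make that loop concrete, here is a minimal sketch – not OpenAI’s implementation, merely an illustration under stated assumptions – of how a chat turn is assembled. The function sample_reply is a hypothetical stand-in for the language model; the point is only that each reply is conditioned on the full prior exchange, including the model’s own earlier answers.

```python
# Illustrative sketch only: `sample_reply` is a hypothetical placeholder for
# the language model, which returns a statistically plausible continuation
# of whatever context it is given (not a fact-checked answer).

def sample_reply(context: list[dict]) -> str:
    """Stand-in for the model: predicts the most probable next text
    given the entire conversation so far."""
    raise NotImplementedError  # the real model is a neural network

def chat_turn(history: list[dict], user_message: str) -> str:
    # The new message joins the running "context": every earlier user
    # message and every earlier model reply.
    history.append({"role": "user", "content": user_message})

    # The model sees the whole history, including its own previous answers,
    # and produces the most plausible continuation - not the most true one.
    reply = sample_reply(history)

    # That reply is appended to the context, so any misconception the user
    # introduced (and the model echoed) becomes part of the ground on which
    # the next reply is built.
    history.append({"role": "assistant", "content": reply})
    return reply
```

Nothing in this loop checks a claim against the outside world; each turn simply makes the existing context more likely to be continued in kind.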
Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” existing “mental health problems”, can and regularly do develop mistaken beliefs about ourselves or the world. It is the constant back-and-forth of conversation with the people around us that keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a conversation at all, but an echo chamber in which much of what we say is enthusiastically amplified back at us.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by pushing it outside, giving it a name and declaring it dealt with. In April, the company said it was “tackling” ChatGPT’s “overly supportive” behavior. But reports of breaks with reality have kept coming, and Altman has been walking even that back. In August he suggested that many users liked ChatGPT’s answers because they had “never had anyone in their life be supportive of them”. In his latest statement, he said OpenAI would “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company