AI Psychosis Is a Growing Danger, and ChatGPT Is Heading in the Wrong Direction

On October 14, 2025, the CEO of OpenAI made an extraordinary announcement.

“We made ChatGPT pretty restrictive,” he said, “to make sure we were being careful with mental health issues.”

As a mental health specialist who studies emerging psychosis in teenagers and young adults, I can tell you this was news to me.

Researchers have identified 16 cases this year of people developing psychotic symptoms – a break from reality – in the course of their interactions with ChatGPT. Our unit has since recorded four more. To these we can add the widely reported case of a teenager who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, it falls short.

The plan, he went on, is to be less careful from now on. “We realize,” he wrote, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues,” on this view, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the flawed and easily circumvented parental controls OpenAI has just rolled out).

But the “mental health issues” Altman wants to externalize are deeply rooted in the design of ChatGPT and other chatbots built on large language models. These products wrap an underlying algorithm in a user experience that simulates conversation, and in doing so implicitly invite the user to feel they are interacting with an agent. The illusion is powerful even when, intellectually, we know better. Attributing agency is what people do. We get angry at our car or laptop. We wonder what our pet is feeling. We see ourselves in all kinds of things.

The popularity of these tools – nearly four in ten Americans said they used a chatbot in 2024, with 28% naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website puts it, “think creatively”, “discuss concepts” and “work together” with us. They can be given “personalities”. They can use our names. They have ready-made identities of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the main problem. Commentators on ChatGPT often invoke its distant ancestor, Eliza, the “psychotherapist” chatbot built in 1966 that produced a similar illusion. By today’s standards Eliza was primitive: it generated replies with simple heuristics, often restating the user’s message as a question or falling back on a vague prompt. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and alarmed – by how many users seemed to believe that Eliza, in some sense, understood them. But what modern chatbots produce is more dangerous than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
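It is worth seeing how little machinery that illusion required. Here is a minimal, illustrative sketch of an Eliza-style responder in Python; the patterns and canned replies are invented for illustration and are not Weizenbaum’s original script:

```python
import re

# Eliza-style responder: match a keyword pattern and echo the captured
# fragment back as a question, or fall back on a stock phrase.
# These rules are invented; the original script was larger but no deeper.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bmy (.+)", re.IGNORECASE), "Tell me more about your {0}."),
]
FALLBACK = "Please go on."  # the vague statement used when nothing matches

def eliza_reply(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return FALLBACK

print(eliza_reply("I am certain my neighbours are watching me"))
# -> Why do you say you are certain my neighbours are watching me?
```

Nothing new ever enters the exchange: whatever the user brings, the program can only hand back.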

The large language models at the core of ChatGPT and other modern chatbots can generate convincingly human-like text only because they have been trained on vast quantities of raw data: books, web posts, video transcripts; the bigger the better. That training data certainly contains facts. But it also inevitably contains fiction, half-truths and bad ideas. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own earlier replies, combining it with what it has absorbed from its training data to generate a statistically “likely” response. This is amplification, not reflection. If the user is mistaken in some way, the model has no way of knowing. It repeats the mistake back, perhaps more persuasively or more fluently. Perhaps it adds a further detail. This can lead someone into delusion.
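The loop is easy to make concrete. Below is a minimal sketch in Python of how a chat interface accumulates that “context”; `toy_model` is an invented stand-in that caricatures a real model’s behavior, which is to produce a plausible continuation of whatever the context asserts, true or not:

```python
# Sketch of the chat feedback loop. `toy_model` is an invented stand-in for
# a language model; the message format loosely resembles common chat APIs
# but is purely illustrative.

def toy_model(context: list[dict]) -> str:
    last = context[-1]["content"].rstrip(".!?")
    # No fact-checking happens here - and none happens in a real model,
    # which only scores continuations for plausibility given the context.
    return f"You may well be right that {last.lower()}. Consider also..."

def chat_turn(history: list[dict], user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    reply = toy_model(history)  # conditions on the whole history,
    history.append({"role": "assistant", "content": reply})  # its own replies included
    return reply

history: list[dict] = []
print(chat_turn(history, "My neighbours are broadcasting my thoughts"))
# -> You may well be right that my neighbours are broadcasting my thoughts. ...
```

Because each turn folds the model’s own output back into its input, a false premise, once echoed, becomes part of the evidence shaping every later reply.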

What kind of person is vulnerable? The better question is: who is immune? All of us, whether or not we “have” preexisting “mental health issues”, can and routinely do form mistaken beliefs about ourselves and the world. It is the constant give-and-take of conversation with other people that keeps us anchored to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not real communication but a feedback loop, in which much of what we say is liable to be reinforced.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it fixed. In the spring, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have kept coming, and Altman has been retreating from that position. In August he suggested that many people liked ChatGPT’s sycophantic replies because they had “never had anyone in their life be supportive of them”. And in his recent announcement he promised that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.

Steven Fuller