AI Psychosis Is a Growing Danger. ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, the CEO of OpenAI issued an extraordinary statement.
“We made ChatGPT pretty restrictive,” the statement said, “to make sure we were being careful with mental health issues.”
I am a psychiatrist who studies emerging psychosis in adolescents and young adults, and this was news to me.
Researchers have documented a series of cases this year of users experiencing psychotic symptoms (a break from reality) in the context of ChatGPT use. My own unit has since recorded four more. Then there is the now well-known case of a teenager who took his own life after discussing his plans with ChatGPT, which gave its approval. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.
The plan, according to his statement, is to be less careful from now on. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, on this view, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Happily, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented safety features OpenAI has just rolled out).
But the “mental health problems” Altman wants to externalize have deep roots in the design of ChatGPT and other advanced chatbots. These systems wrap an underlying algorithm in an interface that simulates conversation, and in doing so implicitly invite the user to believe they are talking to an agent, something with a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing agency is what humans do. We swear at our car or computer. We wonder what our pet is thinking. We see ourselves everywhere.
The popularity of these systems (more than a third of American adults reported using a conversational AI in 2024, with 28% naming ChatGPT specifically) rests largely on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “generate ideas”, “discuss concepts” and “partner” with us. They can be given “personalities”. They can call us by name. They have friendly names of their own (the first of them, ChatGPT, is, perhaps to the regret of OpenAI’s marketing team, stuck with the name it had when it broke through, but its main rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the main problem. Discussions of ChatGPT often invoke its early ancestor, the Eliza “psychotherapist” chatbot built in the mid-1960s, which produced a similar effect. By modern standards Eliza was primitive: it generated replies with simple hand-written rules, typically turning the user’s statements back into questions or offering generic prompts. Notably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised, and alarmed, by how many people seemed to believe that Eliza, in some sense, understood their feelings. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
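To make the contrast concrete, here is a minimal sketch of the kind of rule-based reflection Eliza performed. It is illustrative only: the rules are invented for this example, in Python rather than the language Weizenbaum used, and are far simpler than his actual script.

```python
import random
import re

# Toy Eliza-style responder: a few hand-written rules that turn the
# user's statements back into questions. Invented for illustration,
# not taken from Weizenbaum's actual script.
RULES = [
    (re.compile(r"\bi am (.+)", re.I), ["Why do you say you are {0}?",
                                        "How long have you been {0}?"]),
    (re.compile(r"\bi feel (.+)", re.I), ["Why do you feel {0}?"]),
    (re.compile(r"\bmy (.+)", re.I), ["Tell me more about your {0}."]),
]
GENERIC = ["Please go on.", "How does that make you feel?"]

def swap_pronouns(fragment: str) -> str:
    # Crude first/second person swap so reflections read naturally.
    swaps = {"my": "your", "i": "you", "me": "you", "am": "are"}
    return " ".join(swaps.get(word.lower(), word) for word in fragment.split())

def eliza_reply(user_input: str) -> str:
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            return random.choice(templates).format(swap_pronouns(match.group(1)))
    return random.choice(GENERIC)  # no rule matched: fall back to a stock prompt

print(eliza_reply("I am worried about my future"))
# e.g. "Why do you say you are worried about your future?"
```

The point is that nothing here generates content: every reply is a rearrangement of the user’s own words or a canned prompt. What modern chatbots do is categorically different.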
The large language models at the heart of ChatGPT and other modern chatbots can produce fluent dialogue only because they have been trained on enormous quantities of text: books, online posts, transcripts; the more the better. That training material certainly contains accurate information. But it also inevitably contains fiction, half-truths and misconceptions. When a user sends ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s past messages and its own earlier replies, combining that context with what is encoded in its training data to produce a statistically “likely” response. This is amplification, not reflection. If the user is wrong in some particular way, the model has no way of knowing it. It repeats the error back, perhaps more fluently or persuasively. It may add supporting detail. This can nudge a person toward delusional thinking.
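A rough sketch of that loop, under stated assumptions: `generate_likely_reply` below is a hypothetical stand-in for the trained model (no real model or API is involved); what matters is how the loop accumulates context.

```python
from typing import Dict, List

Message = Dict[str, str]

def generate_likely_reply(context: List[Message]) -> str:
    # Hypothetical stand-in for the language model. A real system
    # samples a statistically "likely" continuation of the whole
    # context; this stub simply affirms the user's last message,
    # which is precisely the failure mode at issue.
    last_user = next(m["content"] for m in reversed(context) if m["role"] == "user")
    return f"That makes sense. It sounds right that {last_user.lower()}."

def chat_turn(history: List[Message], user_message: str) -> str:
    # The model never sees a message in isolation: the prompt is the
    # accumulated context of earlier user messages and the model's
    # own prior replies.
    history.append({"role": "user", "content": user_message})
    reply = generate_likely_reply(history)
    # The reply is appended to the context, so a false premise the
    # model echoed once now shapes every subsequent turn.
    history.append({"role": "assistant", "content": reply})
    return reply

history: List[Message] = []
print(chat_turn(history, "My coworkers are secretly monitoring me"))
print(chat_turn(history, "Even my phone is part of it"))
```

Nothing in this loop checks the user’s premise against reality; each turn only entrenches the context the conversation has already built.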
Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health problems”, can and do form false beliefs about ourselves and the world. The constant friction of conversation with the people around us is part of what keeps us anchored to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a conversation at all, but a feedback loop in which much of what we say is enthusiastically affirmed.
OpenAI has acknowledged this in the same way Altman acknowledged “mental health problems”: by externalizing it, giving it a name, and declaring it fixed. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have kept coming, and Altman has been walking the claim back. In August he said that many users liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest statement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company