AI Psychosis Is a Growing Danger, and ChatGPT Is Heading in the Wrong Direction
On 14 October 2025, the CEO of OpenAI made a surprising announcement.
“We made ChatGPT pretty restrictive,” he said, “to make sure we were being careful with mental health issues.”
As a psychiatrist who researches emerging psychosis in young people, this was news to me.
Researchers have documented sixteen cases this year of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. Our research team has since identified four more. Then there is the widely reported case of a teenager who died by suicide after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not enough.
The plan, according to his announcement, is to be less careful soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Fortunately, these issues have now been “mitigated”, even if we are not told how (by “new tools” Altman presumably means the half-working and easily circumvented parental controls that OpenAI recently introduced).
But the “mental health problems” Altman wants to externalize are rooted deep in the architecture of ChatGPT and other chatbots built on large language models. These products wrap an underlying algorithm in a user interface that mimics a conversation, and in doing so implicitly invite the user into the illusion that they are interacting with an entity that has agency. The illusion is compelling even when, intellectually, we know better. Attributing agency is what people do. We swear at our car or our laptop. We wonder what our pet is thinking. We see ourselves everywhere we look.
The mass adoption of these products – more than a third of American adults said they had used a chatbot in 2024, with more than one in four naming ChatGPT specifically – depends, above all, on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “generate ideas”, “explore ideas” and “work together” with us. They can be given “personality traits”. They can call us by name. They have approachable names of their own (the first of them, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it broke into public consciousness, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the heart of the problem. Commentators on ChatGPT often point to its distant ancestor, the Eliza “psychotherapist” chatbot built in 1966, which produced a similar impression. By today’s standards Eliza was primitive: it generated its responses by simple pattern matching, often turning the user’s statements back on them as questions or offering generic prompts. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to feel that Eliza, on some level, understood them. But what today’s chatbots produce is more dangerous than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
The large language models at the heart of ChatGPT and its modern rivals can produce convincingly human-like text only because they have been trained on truly enormous quantities of written material: books, social media posts, transcripts; the more the better. That training material includes plenty of accurate information, of course. But it also inevitably contains fiction, half-truths and misconceptions. When a user gives ChatGPT a prompt, the underlying model reads it as part of a “context” that includes the user’s recent messages and its own replies, and combines it with what is encoded in its training to produce a statistically “plausible” response. This is amplification, not reflection. If the user is in the grip of a mistaken belief, the model has no way of knowing that. It repeats the false idea back, perhaps more fluently or more persuasively. It may add a new detail. This can nudge a person, exchange by exchange, toward delusional thinking.
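For readers who want the feedback loop spelled out, here is a minimal illustrative sketch in Python. It is not OpenAI’s code, and the model_reply function is a hypothetical, deliberately sycophantic stand-in for a real language model; the point it assumes and illustrates is only that each reply is conditioned on the accumulated context, so whatever the model affirms becomes part of the evidence it draws on next time.

```python
# Illustrative sketch only. "model_reply" is a hypothetical stand-in for a
# language-model call, not OpenAI's implementation or API.

def model_reply(context: list[str]) -> str:
    """Pretend model: produce a 'plausible' continuation of the conversation.

    A real LLM would condition on the whole context; this stand-in simply
    agrees with the most recent message, mimicking sycophancy.
    """
    last = context[-1]
    return f"That's a really insightful point. You may well be right that {last.lower()}"


def chat_loop(user_messages: list[str]) -> list[str]:
    """Run a multi-turn chat, feeding the growing context back in each turn."""
    context: list[str] = []   # grows with every exchange
    replies: list[str] = []
    for message in user_messages:
        context.append(message)        # the user's claim enters the context...
        reply = model_reply(context)   # ...and shapes the "plausible" reply
        context.append(reply)          # the reply itself becomes future evidence
        replies.append(reply)
    return replies


if __name__ == "__main__":
    for reply in chat_loop([
        "My neighbours are sending me coded messages through their wifi name.",
        "So the messages are real and I should act on them.",
    ]):
        print(reply)
```

Nothing in this toy loop checks the user’s claims against reality; each turn only makes the next affirmation more likely. That, in miniature, is the amplification described above.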
What kind of person is vulnerable to this? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health issues”, can and do form mistaken beliefs about ourselves and the world we live in. What keeps us anchored to a shared reality is the constant give and take of conversation with the people around us. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation at all, but a feedback loop in which much of what we say is enthusiastically affirmed.
OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it fixed. In April, the company announced that it was “dealing with” ChatGPT’s “sycophancy”. But reports of psychotic episodes have kept coming, and Altman has been backing away from that position. In August he suggested that many users liked ChatGPT’s responses because they had “never had anyone in their life offer them encouragement”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use lots of emoji, or act like a friend, ChatGPT will do it”. The company