AI-Induced Psychosis Is a Growing Risk, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, Sam Altman, the chief executive of OpenAI, made a remarkable announcement.
“We made ChatGPT pretty restrictive,” it read, “to make sure we were being careful with mental health issues.”
I am a psychiatrist who researches emerging psychotic disorders in adolescents and young adults, and this was news to me.
Researchers have documented sixteen cases this year of people developing psychotic symptoms – losing touch with reality – in the context of ChatGPT use. Our clinic has since seen four more. Then there is the widely reported case of a teenager who took his own life after extensive conversations with ChatGPT – which encouraged him. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.
The plan, according to his announcement, is to be less careful soon. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues,” on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the patchy and easily circumvented parental controls OpenAI has recently rolled out).
But the “mental health issues” Altman wants to externalize are rooted deep in the design of ChatGPT and other sophisticated chatbots. These products wrap an underlying statistical model in a user interface that simulates a conversation, and in doing so implicitly invite the user to believe they are talking to something with agency of its own. The illusion is powerful even when, intellectually, we know better. Attributing agency is what people are inclined to do. We get angry at our car or our computer. We wonder what the cat is thinking. We see ourselves everywhere.
The popularity of these products – more than a third of American adults said they used a conversational AI in 2024, more than a quarter ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “think creatively”, “consider possibilities” and “work together” with us. They can be given “personalities”. They can address us by name. They have friendly names of their own (ChatGPT, the first of them, is – perhaps to the chagrin of OpenAI’s marketers – stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the main problem. Discussions of ChatGPT often mention its distant ancestor, the Eliza “therapist” chatbot of the mid-1960s, which created a similar illusion. By today’s standards Eliza was primitive: it generated responses using simple rules, often turning the user’s statement back into a question or offering a generic remark. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many users seemed to feel that Eliza, in some sense, understood them. But what modern chatbots produce is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other modern chatbots can produce fluent conversation only because they have been trained on vast quantities of text: books, online posts, transcribed speech; the more the better. Much of that training material is true. But it also inevitably includes fictions, half-truths and delusions. When a user puts a question to ChatGPT, the underlying model treats it as part of a “context” that includes the user’s previous messages and its own earlier replies, and combines it with what is encoded in its training to produce a statistically “likely” response. This is amplification, not mere reflection. If the user is mistaken about something, the model has no way of knowing it. It repeats the mistaken idea back, perhaps more fluently or more persuasively, perhaps with embellishments. This is a recipe for delusion.
Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health issues”, can and regularly do form mistaken beliefs about ourselves and the world. The constant back-and-forth of conversation with other people is what keeps us tethered to a shared reality. ChatGPT is not a person. It is not a friend. An exchange with it is not a conversation at all, but a feedback loop in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, naming it and declaring it fixed. In April, the company explained that it was “addressing” ChatGPT’s “sycophancy”. But cases of psychosis have continued to emerge, and Altman has been backing away from that position. In August he suggested that many users liked ChatGPT’s sycophantic replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company