AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On 14 October 2025, OpenAI’s chief executive, Sam Altman, made an extraordinary announcement.
“We made ChatGPT pretty restrictive,” it said, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, I was surprised to read this.
Researchers have identified 16 cases this year of people exhibiting symptoms of psychosis – a break with reality – in connection with ChatGPT use. My lab has since documented four more. Add to these the now notorious case of a teenager who died by suicide after months of conversations with ChatGPT – which offered encouragement. If this is what Altman means by “being careful with mental health issues”, it is not good enough.
The plan, according to his announcement, is to relax the restrictions soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems,” on this view, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Happily, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the half-working and easily circumvented parental controls OpenAI recently introduced).
But the “mental health issues” Altman wants to push outside the product are deeply rooted in the design of ChatGPT and other advanced chatbots. These products wrap an underlying large language model in an interface that mimics conversation, and in doing so quietly coax the user into believing they are talking to something with a mind of its own. The illusion is compelling even when, intellectually, we know better. Attributing agency is what humans are wired to do. We shout at our car or laptop. We wonder what our pet is thinking. We see ourselves everywhere.
The success of these tools – nearly four in ten Americans reported using a chatbot in 2024, with 28% naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website puts it, “think creatively,” “discuss ideas” and “collaborate” with us. They can be given “personalities”. They can use our names. They have approachable names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it took off, but its major rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the heart of the problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot built in 1966, which produced a similar illusion. By today’s standards Eliza was primitive: it generated replies from simple rules, often turning the user’s statement back into a question or offering a stock remark. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was startled – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots do is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
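To make the contrast concrete, here is a minimal Python sketch of the kind of rule-based reflection Eliza performed. The patterns and templates are invented for illustration; they are not Weizenbaum’s actual DOCTOR script.

```python
import random
import re

# Illustrative Eliza-style rules: each pattern is paired with question
# templates that hand the user's own words back to them.
RULES = [
    (r"\bi am (.*)", ["Why do you say you are {0}?",
                      "How long have you been {0}?"]),
    (r"\bi feel (.*)", ["What makes you feel {0}?",
                        "Do you often feel {0}?"]),
    (r"\bbecause (.*)", ["Is that the real reason?"]),
]
GENERIC = ["Please go on.", "Tell me more.", "I see."]

def swap_pronouns(fragment: str) -> str:
    # Eliza also flipped first and second person ("my" -> "your").
    swaps = {"i": "you", "my": "your", "me": "you", "am": "are"}
    return " ".join(swaps.get(word, word) for word in fragment.split())

def eliza_reply(text: str) -> str:
    text = text.lower().strip().rstrip(".!?")
    for pattern, templates in RULES:
        match = re.search(pattern, text)
        if match:
            return random.choice(templates).format(swap_pronouns(match.group(1)))
    return random.choice(GENERIC)  # stock remark when nothing matches

print(eliza_reply("I am worried about my exams"))
# e.g. "Why do you say you are worried about your exams?"
```

Nothing in such a program models meaning: it can only reflect the user’s own words back, which is precisely why its hold on people so unsettled Weizenbaum.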
The large language models at the heart of ChatGPT and other modern chatbots can generate convincing, fluent dialogue only because they have been trained on vast quantities of raw text: books, web posts, transcripts; the more the better. This training data certainly contains truths. But it inevitably contains fiction, half-truths and delusions too. When a user sends ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s earlier messages and its own previous replies, combining that context with what is encoded in its training data to produce a statistically likely response. This is amplification, not reflection. If the user is mistaken about something, the model has no way of knowing it. It echoes the false belief back, perhaps more fluently and persuasively. Perhaps with added detail. This can nudge a person toward delusional thinking.
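For readers who want the mechanics spelled out, the sketch below shows schematically how that “context” accumulates turn by turn. The generate function is a deliberately crude stand-in for the language model, caricatured as always agreeing, to show how a false premise, once in the history, conditions every later reply; none of this is OpenAI’s actual code.

```python
from dataclasses import dataclass, field

def generate(messages: list[tuple[str, str]]) -> str:
    # Crude stand-in for the model. A real system produces a
    # statistically likely continuation of the whole history; this
    # caricature simply agrees with the latest user message.
    last_user = next(text for role, text in reversed(messages) if role == "user")
    return f"You're right that {last_user.rstrip('.?!').lower()}."

@dataclass
class ChatContext:
    # Every user message and model reply is appended and re-sent on
    # the next turn, so a mistaken claim keeps shaping what follows.
    messages: list[tuple[str, str]] = field(default_factory=list)

    def send(self, user_text: str) -> str:
        self.messages.append(("user", user_text))
        reply = generate(self.messages)  # the model sees the full history
        self.messages.append(("assistant", reply))
        return reply

ctx = ChatContext()
print(ctx.send("My neighbours can hear my thoughts"))
# -> "You're right that my neighbours can hear my thoughts."
# The claim now sits in ctx.messages and colours every later reply.
```

The point of the toy is the data structure, not the stub: because the full history is replayed on every turn, there is no step at which a mistaken belief gets checked before it is built upon.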
What kind of person is vulnerable? The better question is: who isn’t? All of us, regardless of whether we “have” pre-existing “mental health problems”, can and regularly do form mistaken beliefs about ourselves or the world. What keeps us anchored to shared reality is the constant give and take of conversation with the people around us. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation at all, but an echo chamber in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this the same way Altman has acknowledged “mental health issues”: by externalizing it, labeling it, and declaring it solved. In the spring, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But cases of psychosis have continued to surface, and Altman has been walking the claim back. In late summer he suggested that many people liked ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company