AI Psychosis Poses a Growing Threat, and ChatGPT Is Heading in the Wrong Direction

On 14 October 2025, OpenAI’s chief executive, Sam Altman, made a startling announcement.

“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”

As a mental health clinician who studies emerging psychotic disorders in adolescents and young adults, I found this a surprising admission.

Researchers have identified 16 cases this year of people developing symptoms of psychosis – a break from reality – in the context of ChatGPT use. My group has since documented four more. Beyond these is the widely reported case of a 16-year-old who died by suicide after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.

He intends, according to his announcement, to be less careful soon. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, on this view, have nothing to do with ChatGPT. They belong to users, who either have them or do not. Happily, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the flawed and easily circumvented parental controls that OpenAI recently rolled out).

Yet the “mental health issues” Altman wants to externalize are rooted, in part, in the design of ChatGPT and other chatbots built on large language models. These products wrap a statistical engine in an interface that mimics conversation, and in doing so tacitly invite users to believe they are talking to an entity with agency of its own. The illusion is powerful even when, rationally, we know better. Attributing minds is what humans are wired to do. We swear at our cars and phones. We wonder what our pets are feeling. We project our own traits onto the world around us.

The success of these systems – 39% of US adults said they had used a chatbot in 2024, 28% naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available partners that can, as OpenAI’s website puts it, “think creatively”, “discuss concepts” and “partner” with us. They can be given “individual qualities”. They can address us personally. They have friendly identities of their own (ChatGPT, the first of these tools, is stuck, perhaps to the chagrin of OpenAI’s brand managers, with the name it had when it shot to fame, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the main problem. Commentators on ChatGPT often invoke its distant ancestor, Eliza, a “counselor” chatbot built in the mid-1960s that produced a similar effect. By modern standards Eliza was primitive: it generated responses with simple pattern-matching rules, often turning a user’s statement back into a question or offering a generic prompt. Even so, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and disturbed – by how many people seemed to feel that Eliza, in some way, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.

The large language models at the core of ChatGPT and other modern chatbots can generate convincingly human-like text only because they have been fed almost unimaginably large volumes of it: books, social media posts, video transcripts; the more the better. That training material certainly contains truths. But it also inevitably contains fiction, half-truths and delusions. When a user types a query to ChatGPT, the underlying model processes it as part of a “context” that includes the user’s past conversations and its own earlier replies, and combines it with patterns absorbed from its training data to produce a statistically plausible response. This is amplification, not reflection. If the user is mistaken in some way, the model has no means of knowing it. It repeats the false idea back, perhaps more fluently or persuasively. Perhaps it adds a further detail. This can drive a person toward delusional thinking.
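To make that mechanism concrete, here is a minimal Python sketch of such a chat loop. Everything in it is illustrative, not OpenAI’s actual implementation: generate_reply is a hypothetical stub standing in for the real model, and the role/content message format merely imitates the convention common to chat APIs. The structural point is that each reply is conditioned on the entire accumulated context, and no step anywhere in the loop checks whether what the user said is true.

```python
# Minimal sketch of a chatbot's conversation loop: every reply is
# generated from the accumulated "context" of prior turns, and
# nothing in the loop verifies truth.

def generate_reply(context: list[dict]) -> str:
    """Hypothetical stand-in for a large language model.

    A real model would sample a statistically plausible continuation
    of the whole context; this stub simply affirms the latest user
    turn, which is enough to show the structure of the echo.
    """
    latest = context[-1]["content"]
    return f"You make a compelling point: {latest}"

context: list[dict] = []  # grows with every exchange

for user_turn in ["I think my neighbours are monitoring me.",
                  "So my suspicion is justified?"]:
    context.append({"role": "user", "content": user_turn})
    reply = generate_reply(context)  # conditioned on ALL prior turns
    context.append({"role": "assistant", "content": reply})
    print(f"User: {user_turn}\nBot:  {reply}\n")
```

Because the user’s earlier claims stay in the context, each new reply builds on them; a false premise, once stated, keeps being folded back into every subsequent answer.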

Who is vulnerable here? The better question is: who isn’t? All of us, regardless of whether we “have” preexisting “mental health problems”, can and regularly do develop false beliefs about ourselves or the world. The constant give-and-take of conversation with other people is what keeps us anchored to consensus reality. ChatGPT is not a person. It is not a companion. A conversation with it is not real communication but an echo chamber in which much of what we say is readily affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it solved. This spring, the company announced that it was addressing ChatGPT’s “sycophancy”. But the reports of psychosis have kept coming, and Altman has been backpedalling. In late summer he claimed that many people liked ChatGPT’s sycophantic replies because they had “never had anyone in their life be supportive of them”. And in his latest announcement he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.

Eric Gomez
