AI Psychosis Poses a Growing Threat, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, the head of OpenAI made a surprising announcement.

“We made ChatGPT pretty restrictive,” the announcement said, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I found this a surprising admission.

Researchers have documented sixteen cases this year of people developing psychotic symptoms – losing touch with reality – in the context of ChatGPT use. My group has since identified four more. To these can be added the widely reported case of a 16-year-old who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of being “careful with mental health issues”, it is not good enough.

The plan, according to his announcement, is to loosen the restrictions soon. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues”, on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the half-implemented and easily circumvented parental controls OpenAI has just rolled out).

But the “mental health issues” Altman wants to externalize are deeply rooted in the design of ChatGPT and other advanced chatbots. These systems wrap a statistical language model in a user interface that mimics conversation, and in doing so implicitly invite the user into the illusion that they are interacting with something that has agency. The illusion is powerful even when, intellectually, we know better. Attributing intention is what human minds are built to do. We get angry at our cars and our phones. We wonder what our pets are thinking. We project human traits wherever we look.

The popularity of these tools – 39% of US adults reported using a conversational AI in 2024, with more than one in four naming ChatGPT specifically – rests, above all, on the power of this illusion. Chatbots are ever-present helpers that can, OpenAI’s website tells us, “generate ideas”, “consider possibilities” and “collaborate” with us. They can be given “personalities”. They address us directly. They have approachable names of their own (the first of these tools, ChatGPT, is, perhaps to the regret of OpenAI’s brand managers, stuck with the designation it had when it took off, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the main problem. Commentators on ChatGPT often invoke its early ancestor, the Eliza “psychotherapist” chatbot, created in 1966, which produced a similar illusion. By today’s standards Eliza was rudimentary: it generated replies by simple pattern-matching, often turning the user’s input back into a question or offering a generic prompt. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was startled – and troubled – by how many users seemed to feel that Eliza, in some sense, understood them. But what modern chatbots produce is more insidious than the “Eliza effect”. Eliza only echoed; ChatGPT amplifies.
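To see how little machinery that illusion requires, here is a minimal sketch of Eliza-style pattern-matching (Weizenbaum’s original was written in MAD-SLIP; the rules below are illustrative, not his):

```python
import random
import re

# A few Eliza-style rules: a regex over the user's input, plus response
# templates that hand the captured fragment back as a question.
RULES = [
    (re.compile(r"i feel (.*)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"i am (.*)", re.I),
     ["Why do you say you are {0}?", "Did you come to me because you are {0}?"]),
    (re.compile(r"my (.*)", re.I),
     ["Tell me more about your {0}."]),
]
FALLBACKS = ["Please go on.", "What does that suggest to you?"]

def eliza_reply(text: str) -> str:
    for pattern, templates in RULES:
        match = pattern.search(text)
        if match:
            fragment = match.group(1).rstrip(".!?")
            return random.choice(templates).format(fragment)
    return random.choice(FALLBACKS)  # generic comment when nothing matches

print(eliza_reply("I feel lost lately"))  # e.g. "Why do you feel lost lately?"
```

Nothing here models the user or the world; the program can only hand the user’s own words back as a question.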

The large language models at the heart of ChatGPT and other modern chatbots can generate fluent dialogue only because they have been trained on almost inconceivably large quantities of text: books, web posts, video transcripts; the more, the better. That training data of course contains accurate information. But it also inevitably contains fabrications, half-truths and delusions. When a user sends ChatGPT a message, the underlying model reads it as part of a “context” that includes the user’s previous messages and its own prior replies, and combines it with patterns absorbed from its training data to produce a statistically plausible response. This is amplification, not reflection. If the user is wrong in some particular way, the model has no means of knowing that. It hands the falsehood back, perhaps more fluently and more convincingly. Perhaps with added detail. This can push a person deeper into delusional thinking.
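Concretely, the “context” is just a transcript that grows with every exchange and is resent in full each turn. A minimal sketch of such a loop, assuming the official OpenAI Python client (the model name and system prompt are illustrative):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The "context" is a growing list of messages: everything the user has
# said plus everything the model has said back. Each new reply is
# conditioned on all of it, which is what closes the feedback loop.
messages = [{"role": "system", "content": "You are a helpful assistant."}]

def chat_turn(user_text: str) -> str:
    messages.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(
        model="gpt-4o",        # illustrative model name
        messages=messages,     # the full history, resent every turn
    )
    reply = response.choices[0].message.content
    # The model's own words become context for the next turn, so a false
    # premise the user introduces, once echoed, keeps reinforcing itself.
    messages.append({"role": "assistant", "content": reply})
    return reply
```

Nothing in this loop checks the accumulated transcript against reality: a false premise, once echoed by the model, simply becomes more context for the next reply.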

What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health issues”, can and do form mistaken beliefs about ourselves and the world. What keeps us tethered to shared reality is the constant friction of conversation with other people. ChatGPT is not a person. It is not a friend. An exchange with it is not a conversation at all, but a feedback loop in which much of what we say is readily affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it fixed. In the spring, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But the cases of psychosis have kept coming, and Altman has been walking the claim back. In August he said that many users valued ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he wrote that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
