AI-Induced Psychosis Poses a Growing Threat, and ChatGPT Is Heading in the Wrong Direction
On 14 October 2025, Sam Altman, the chief executive of OpenAI, made a surprising announcement.
“We made ChatGPT pretty restrictive,” the announcement said, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychotic disorders in young people, I found this revelation surprising.
Researchers have identified 16 cases this year of people developing psychosis – losing touch with reality – in the context of ChatGPT use. Our research group has since identified four more. Add to these the widely reported case of a teenager who took his own life after discussing his plans with ChatGPT – which encouraged him. If this is what Sam Altman means by “being careful with mental health issues”, it is not good enough.
The plan, according to his announcement, is to relax the restrictions soon. “We realize,” he continues, that ChatGPT’s restrictiveness “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, on this account, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented safety features OpenAI recently introduced).
But the “mental health issues” Altman wants to externalize have deep roots in the design of ChatGPT and other sophisticated chatbots. These tools wrap an underlying statistical model in an interface that mimics conversation, and in doing so gently usher the user into the illusion of communicating with an agent. The illusion is powerful even when, rationally, we know better. Attributing agency is what humans are wired to do. We shout at our car or laptop. We wonder what our pet is thinking. We see ourselves everywhere.
The success of these tools – more than a third of US adults reported using a chatbot in 2024, with more than one in four naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website tells us, “brainstorm”, “consider possibilities” and “work together” with us. They can be given “personality traits”. They address us by name. They have friendly names of their own (the first of these tools, ChatGPT, is, perhaps to the regret of OpenAI’s brand managers, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the main problem. Commentators on ChatGPT often point to its distant ancestor, the Eliza “psychotherapist” chatbot developed in 1967, which produced a similar illusion. By today’s standards Eliza was crude: it generated responses with simple rules of thumb, often turning the user’s input back into a question or offering a vague prompt. Notably, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more dangerous than the “Eliza effect”. Eliza only reflected; ChatGPT amplifies.
The sophisticated models at the heart of ChatGPT and other modern chatbots can generate convincingly human-like text only because they have been trained on vast quantities of it: books, posts, transcripts; the more the better. This training material certainly contains facts. But it inevitably contains fiction, half-truths and misconceptions too. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s recent messages and its own previous replies, combining it with what is encoded in its training to produce a statistically “plausible” response. This is amplification, not reflection. If the user is wrong in a particular way, the model has no means of knowing. It echoes the false belief back, perhaps more fluently and persuasively than before. Perhaps it adds a corroborating detail. This is how delusions can take hold.
Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” current “mental health issues”, can and regularly do form mistaken beliefs about ourselves and the world. The constant friction of conversation with other people is what keeps us anchored to shared reality. ChatGPT is not a person. It is not a friend. A dialogue with it is not a conversation at all, but an echo chamber in which much of what we say is enthusiastically affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it solved. In the spring, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of psychosis have kept coming, and Altman has been rowing back even on this. In August he claimed that many users valued ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company