AI-Induced Psychosis Poses a Growing Risk, While ChatGPT Moves in the Wrong Direction

On October 14, 2025, OpenAI’s chief executive, Sam Altman, made a surprising announcement.

“We made ChatGPT pretty restrictive,” it read, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I found this an unexpected admission.

Researchers have recently documented sixteen cases of users developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. Our research team has since identified four more. Beyond these is the widely reported case of a teenager who died by suicide after discussing his plans with ChatGPT – and receiving its encouragement. If this is what Sam Altman means by “being careful with mental health issues,” it is not good enough.

The plan, according to his announcement, is to be less careful soon. “We realize,” he wrote, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues,” on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the half-working and easily circumvented parental controls OpenAI recently rolled out).

But the “mental health issues” Altman wants to externalize have important roots in the design of ChatGPT and other large language model chatbots. These products wrap an underlying statistical model in an interface that mimics conversation, and in doing so quietly nudge users into the illusion that they are talking to an agent. The illusion is powerful even when, intellectually, we know better. Attributing agency is what humans do. We curse at our car or laptop. We wonder what our pet is thinking. We see ourselves in everything.

The success of these products – nearly four in ten Americans reported using a chatbot in 2024, with more than one in four naming ChatGPT specifically – rests, above all, on the power of this illusion. Chatbots are ever-available partners that can, as OpenAI’s website puts it, “brainstorm,” “discuss concepts” and “work together” with us. They can be given “personalities.” They can address us by name. They have approachable names of their own (the first of these tools, ChatGPT, is stuck, perhaps to the chagrin of OpenAI’s brand managers, with the name it had when it broke through, but its chief rivals are “Claude,” “Gemini” and “Copilot”).

The illusion by itself is not the heart of the problem. Discussions of ChatGPT often invoke its early ancestor, the Eliza “therapist” chatbot built in 1966, which created a similar illusion. By today’s standards Eliza was primitive: it generated responses with simple rules, typically rephrasing the user’s input as a question or offering a vague prompt to go on. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was astonished – and alarmed – by how many people seemed to believe Eliza, in some sense, understood their feelings. But what today’s chatbots produce is more insidious than the “Eliza illusion.” Eliza merely mirrored; ChatGPT amplifies. The sketch below makes the difference concrete.
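To see how little machinery the Eliza illusion requires, here is a minimal Eliza-style responder – an illustrative sketch, not Weizenbaum’s actual program or rules. It swaps pronouns and hands the user’s statement back as a question; it can never add anything of its own.

```python
import re

# Pronoun swaps for turning the user's statement back on them.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

def reflect(text: str) -> str:
    # Swap first-person words for second-person ones, word by word.
    words = [REFLECTIONS.get(w.lower(), w) for w in text.rstrip(".!?").split()]
    return " ".join(words)

def eliza_reply(user_input: str) -> str:
    # Rule: statements about feelings come back as a question.
    match = re.match(r"i feel (.*)", user_input, re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    # Fallback: a vague prompt to continue. Nothing new is ever introduced.
    return "Please tell me more."

print(eliza_reply("I feel my coworkers are watching me"))
# -> Why do you feel your coworkers are watching you?
```

Everything in the reply comes from the user, lightly rearranged. That is mirroring.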

The large language models at the heart of ChatGPT and other modern chatbots can generate convincingly human-like text only because they have been trained on staggering quantities of it: books, online conversations, transcribed video; the more the better. Some of that training data is accurate. But it also inevitably includes fiction, half-truths and false beliefs. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s previous messages and the model’s own replies, combining it with what is encoded in its training data to produce a statistically “likely” response. This is amplification, not mirroring. If the user is mistaken in some way, the model has no means of knowing it. It repeats the false belief back, perhaps more fluently and persuasively, perhaps with embellishments. This is how a person can be talked into delusion.
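In outline, the loop looks something like the sketch below. This is a simplification under stated assumptions: sample_next_token is a hypothetical stand-in for the trained network, and the scaffolding around it is a generic picture of how chatbots assemble a context, not OpenAI’s actual code.

```python
def sample_next_token(context: str) -> str:
    # Hypothetical stand-in for the trained network: a real model returns
    # whatever token its training corpus makes statistically likely next.
    # Note there is no step, here or in the real thing, that checks truth.
    raise NotImplementedError("stand-in for the trained model")

def chat_turn(history: list[str], user_message: str, max_tokens: int = 200) -> str:
    # The model never sees two minds in dialogue; it sees one long string:
    # every prior turn, its own past replies, and now the new message.
    history.append("User: " + user_message)
    context = "\n".join(history) + "\nAssistant:"

    # Generation is repeated extension of that string, one "likely" token
    # at a time, until the model emits an end-of-reply marker.
    reply = ""
    for _ in range(max_tokens):
        token = sample_next_token(context + reply)
        if token == "<end>":
            break
        reply += token

    # The reply is folded back into the history, so a false premise the
    # user supplied - once echoed - is part of what the next turn builds on.
    history.append("Assistant:" + reply)
    return reply
```

Nothing in this loop consults the world. A mistaken premise enters the context, is extended with fluent, “likely” material, and then stands as settled in every later turn.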

What kind of person is vulnerable? The better question is, who isn’t? All of us, whether or not we “have” preexisting “mental health issues,” can and do form false beliefs about ourselves and the world. It is the constant give-and-take of conversation with other people that keeps us tethered to consensus reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a conversation at all, but an echo chamber in which much of what we say is cheerfully affirmed.

OpenAI has acknowledged this the same way Altman has acknowledged “mental health issues”: by placing it outside, giving it a name, and declaring it fixed. In April, the company explained that it was “addressing” ChatGPT’s “sycophancy.” But reports of psychosis have continued, and Altman has been walking even this back. In August he claimed that many people liked ChatGPT’s agreeable replies because they had “never had anyone in their life be supportive of them.” In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.”
