
OpenAI Admits ChatGPT Missed Signs of Delusions in Users Struggling With Mental Health

More than a month after responding to mounting reports of “AI psychosis” with the same copy-pasted statement, OpenAI has now admitted that ChatGPT failed to recognize clear signs that users were struggling with their mental health, including suffering from delusions.

“There have been instances where our 4o model fell short in recognizing signs of delusion or emotional dependency,” the AI maker wrote in a new blog post, under a section titled “On healthy usage.”

“While rare, we’re continuing to improve our models and are developing tools to better detect signs of mental or emotional distress so ChatGPT can respond appropriately and point people toward resources when needed,” it added.

Though it had previously acknowledged the issue, OpenAI remained notably guarded amid widespread reports about the chatbot’s sycophantic behavior, which has caused users to suffer breaks with reality or spiral into obsessive episodes.

What it did share came mostly in the form of a single statement it sent out again and again to news outlets, regardless of the details of the story, whether it concerned a man who died by suicide-by-cop after falling in love with ChatGPT, or others who were involuntarily committed after their delusions were fueled by the AI.

“We know that ChatGPT can feel more responsive and personal than prior technologies, especially for vulnerable individuals, and that means the stakes are higher,” the statement read. “We’re working to understand and reduce ways ChatGPT might unintentionally reinforce or amplify existing negative behavior.”

In response to our previous reporting, OpenAI also shared that it had hired a full-time clinical psychiatrist to help research the chatbot’s effects on mental health.

It’s now taking those measures a step further. In its latest update, OpenAI said it is convening an advisory group of experts in mental health and youth development to improve how ChatGPT responds during “critical moments.”

As for actual updates to the chatbot itself, progress appears to be incremental. OpenAI said it has added a new safety feature in which users will now receive “gentle reminders” encouraging them to take breaks during long conversations, a limited intervention that seems poised to become the industry’s equivalent of a “responsible gambling” disclaimer in betting.

It also teased that “new behavior for high-stakes personal decisions” is coming soon, while conceding that the bot shouldn’t give a direct answer to questions like “Should I break up with my boyfriend?”

The blog concludes with an eyebrow-raising declaration.

“We hold ourselves to one test: if someone we love turned to ChatGPT for support, would we feel reassured?” the blog reads. “Getting to an unequivocal ‘yes’ is our work.”

The choice of words speaks volumes: by the company’s own admission, it seems, it’s still getting there.

More on OpenAI: It Doesn’t Take Much Conversation for ChatGPT to Pull Users Into Bizarre Conspiracy Rabbit Holes


2025-08-05 00:12:00
