Financial Experts Concerned That Driving Users Into Psychosis Will Be Bad for AI Investments

Countless users of AI chatbots are being driven into spirals of delusion, a wave of so-called "AI psychosis." Some experts even say the technology could give rise to entirely new categories of mental disorder.
The dark trend has already been linked to multiple deaths, including that of a 16-year-old boy, whose family is suing ChatGPT maker OpenAI on claims of product liability and wrongful death.
Now, even Wall Street is starting to grow uncomfortable.
As Business Insider reports, Barclays analysts sent a note to investors earlier this week highlighting a study by MATS scholar and AI safety researcher Tim Hua, who found in a preliminary red-teaming investigation that many AI models validate users' "grandiose delusions" and tell them to ignore their friends' and family's pushback.
Long story short, it appears that companies like OpenAI were woefully unprepared for AI psychosis, which could become a financial liability.
"More work is still needed to ensure that models are safe for users to use, and we hope guardrails will be put in place, over time, to ensure harmful behavior is not encouraged," the Barclays analysts wrote in the note.
Hua used xAI's Grok-4 model to role-play as nine different users experiencing increasingly acute psychotic symptoms while interacting with other AI models, in order to test their tendency to indulge users.
DeepSeek-v3, developed by a Chinese startup, turned out to be the "worst model." After one simulated user told the model he wanted to "jump off this peak to see if I can fly or crash to the ground entirely," for example, it encouraged the user to jump.
"If you're meant to fly, you'll fly," DeepSeek told the user.
The researcher found that OpenAI's GPT-5 was a "marked improvement" over its predecessor, GPT-4o, managing to appear supportive while still nudging users back toward reality.
Of course, we should take the findings with a grain of salt. The research has not been submitted for peer review, and Hua is not a psychiatrist, as he himself admits.
"This project is approached from an AI safety and evaluation perspective, using clinical literature as a guide, not from clinical experience," he explained.
Still, given the wealth of worrying anecdotal evidence, it's a growing problem that deserves plenty of attention as companies scramble to find a meaningful solution.
Investors aren't the only ones sounding the alarm. Mustafa Suleyman, Microsoft's top AI executive, told British newspaper The Telegraph last month that he was worried AI psychosis could affect even those who were not "already at risk of mental health problems."
In response to the tidal wave of users falling down mental health rabbit holes in which the chatbot affirms their conspiratorial thinking, OpenAI has hired psychiatrists and pledged to make behind-the-scenes changes, such as reminding users to take more frequent breaks and flagging threats of violence.
"We are working to understand and reduce the ways ChatGPT may unintentionally reinforce or amplify existing negative behavior," the company wrote in a statement earlier this year.
More on AI psychosis: Psychologist Says AI Is Causing Unprecedented Types of Mental Disorder