Psychiatric Researchers Warn of Grim Psychological Risks for AI Users

Even without looking at clinical data, it's quite clear that "artificial intelligence" (a term that's served as a vast umbrella for different technologies over the years, but is currently dominated by the data-driven neural networks powering chatbots and image generators) can have life-altering effects on the human brain.
We're not even three years into commercially available LLMs, and AI users have already spiraled into grandiose breaks from reality, religious obsession, and even suicide. A recent survey of more than 1,000 teenagers found that 31 percent of them felt that talking to ChatGPT was as satisfying as, or more satisfying than, speaking to their friends in real life.
While more research into AI's effects on mental health is needed, a new study of AI-related mental illness, conducted by an international team of a dozen researchers, issues a stark warning about just how badly AI use can go wrong.
To start, the researchers identify a handful of "themes" emerging in cases of AI-fueled delusion: the "messianic mission," in which a person believes they have uncovered a hidden truth about the world; the "god-like AI," in which the user becomes convinced the chatbot is something more than a machine; and an attachment-based delusion, in which the user comes to treat the chatbot as a real partner.
In all three cases, the not-yet-peer-reviewed study notes, the progression follows a similar path: the user's relationship with the LLM slides "from benign practical use to pathological and/or consuming" engagement. The authors say this "slip" into delusion is a critical focus of the study, because the risk to an otherwise healthy person isn't always obvious.
"AI use often begins with the help of mundane or everyday tasks, which build trust and familiarity," they write. "In time, the individual explores more personal, emotional, or philosophical queries. It is likely at this stage that the AI's design to maximize engagement and validation hooks the user, creating a 'slippery slope' effect" — which in turn drives further use.
This effect is only amplified for users at risk of psychosis, or those already living with it. Because LLMs aren't actually "intelligent" but are statistical algorithms, AI chatbots can't "distinguish prompts expressing delusional beliefs from role-play, artistic or spiritual expression, or speculation."
The paper points out that AI-fueled delusion isn't an inevitable consequence of interacting with a chatbot, and that developers have some control over, and therefore responsibility for, LLM outputs. However, the researchers note that "given the pace of change and the trajectory to date," our tendency to anthropomorphize AI chatbots "is likely to be inevitable."
From there, the researchers posit, our "most urgent responsibility" must be to develop safeguards that protect the broad range of potential users, with their flawed understanding of AI, "even in the face of persistent illusion and simulation."
Still, doing so would ultimately mean a hard pivot toward designing systems around practical uses rather than engagement, something the tech industry isn't particularly keen on.
More on AI: Father Disgusted to Find His Murdered Daughter Recreated as an AI