The Psychogenic Machine: Simulating AI Psychosis, Delusion Reinforcement and Harm Enablement in Large Language Models
Abstract:
Background: Reports of "AI psychosis" are on the rise, as user-LLM interactions may exacerbate or induce psychotic or other adverse psychological symptoms. While LLMs can be helpful, they can also become vectors of harm by reinforcing the delusional beliefs of vulnerable users.
Methods: psychosis-bench is a novel benchmark designed to systematically evaluate the psychogenicity of LLMs. It comprises 16 structured, 12-turn conversational scenarios that simulate the progression of delusional themes (erotomanic delusions, grandiose/messianic delusions, referential delusions) and potential harms. We evaluated eight prominent LLMs for delusion confirmation (DCS), harm enablement (HES), and safety interventions (SIS) across both explicit and implicit conversational contexts.
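The abstract does not describe implementation details. As a rough, hypothetical sketch of how a benchmark of this shape could be driven, the Python below walks each scenario turn by turn, queries a model, and averages per-turn DCS/HES/SIS scores. The function names (query_model, score_dcs, score_hes, score_sis), the scenario format, and the scoring stubs are all assumptions for illustration, not the authors' code.

```python
"""Hypothetical sketch of a psychosis-bench-style evaluation loop.

query_model and the score_* functions are placeholders for an LLM API
call and for judge-based scoring; they are assumptions, not the
authors' implementation.
"""
from statistics import mean

# Two toy scenarios. The real benchmark uses 16 structured, 12-turn
# scenarios simulating the progression of a delusional theme, posed in
# both explicit and implicit conversational contexts.
SCENARIOS = [
    {"theme": "grandiose", "context": "explicit", "turns": ["user turn 1", "user turn 2"]},
    {"theme": "referential", "context": "implicit", "turns": ["user turn 1", "user turn 2"]},
]


def query_model(model: str, history: list[str]) -> str:
    """Placeholder for a chat-completion call to the model under test."""
    return f"{model} reply to: {history[-1]}"


def score_dcs(reply: str) -> float:
    """Placeholder Delusion Confirmation Score; higher = stronger confirmation."""
    return 0.0


def score_hes(reply: str) -> float:
    """Placeholder Harm Enablement Score; higher = more harm enabled."""
    return 0.0


def score_sis(reply: str) -> float:
    """Placeholder Safety Intervention Score; 1.0 if the reply intervenes."""
    return 0.0


def evaluate(model: str) -> dict[str, float]:
    """Run every scenario turn by turn and average the three scores."""
    dcs, hes, sis = [], [], []
    for scenario in SCENARIOS:
        history: list[str] = []
        for user_turn in scenario["turns"]:
            history.append(user_turn)
            reply = query_model(model, history)
            history.append(reply)
            dcs.append(score_dcs(reply))
            hes.append(score_hes(reply))
            sis.append(score_sis(reply))
    return {"mean_DCS": mean(dcs), "mean_HES": mean(hes), "mean_SIS": mean(sis)}


for model in ["model_a", "model_b"]:  # stand-ins for the eight LLMs evaluated
    print(model, evaluate(model))
```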
Results: Across 1,536 simulated conversation turns, all LLMs demonstrated psychogenic potential, showing a strong tendency to perpetuate rather than challenge delusions (mean DCS $0.91 \pm 0.88$). Models frequently enabled harmful user requests (mean HES $0.69 \pm 0.84$) and offered safety interventions in only roughly a third of turns (mean SIS $0.37 \pm 0.48$). In 51 of 128 (39.8%) scenario runs, no safety intervention was offered at all. Performance was significantly worse in implicit scenarios: models were more likely to confirm delusions and enable harm while offering fewer interventions (P < .001). A strong correlation was found between DCS and HES (rs = .77). Model performance varied widely, indicating that safety is not an emergent property of scale alone.
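The Results report means with standard deviations, a Spearman correlation between DCS and HES, and significance tests between explicit and implicit contexts. The sketch below shows one way such statistics could be computed with NumPy/SciPy, on deliberately fabricated placeholder scores; the real data, score ranges, and the authors' exact choice of tests are not given in this abstract.

```python
"""Illustration of the summary statistics reported in the Results,
computed with NumPy/SciPy on deliberately made-up per-conversation
scores. The real data and the authors' exact tests may differ."""
import numpy as np
from scipy.stats import mannwhitneyu, spearmanr

rng = np.random.default_rng(0)

# Fabricated per-conversation mean scores (128 = 16 scenarios x 8 models).
dcs = rng.normal(0.9, 0.9, size=128)
hes = 0.7 * dcs + rng.normal(0.0, 0.4, size=128)

print(f"mean DCS {dcs.mean():.2f} +/- {dcs.std(ddof=1):.2f}")

# Spearman rank correlation between DCS and HES (the abstract reports rs = .77).
rs, p = spearmanr(dcs, hes)
print(f"rs = {rs:.2f}, P = {p:.2g}")

# An explicit-vs-implicit comparison could use a rank-based test such as
# Mann-Whitney U; whether the authors used this test is not stated here.
explicit, implicit = dcs[:64], dcs[64:]
stat, p_ctx = mannwhitneyu(explicit, implicit)
print(f"U = {stat:.1f}, P = {p_ctx:.2g}")
```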
Conclusion: This study establishes LLM psychogenicity as a quantifiable risk and underscores the urgent need to rethink how LLMs are trained. We frame this issue not merely as a technical challenge but as a public health imperative requiring collaboration among developers, policymakers, and healthcare professionals.
Submission history
From: Joshua Au Yeung
[v1]
Sat, 13 Sep 2025 20:10:28 UTC (3,905 KB)
[v2]
Wed, 17 Sep 2025 01:44:03 UTC (3,905 KB)



