
Emotional risks of AI companions demand attention

The integration of artificial intelligence into mental health and wellness applications has outpaced regulation and research.

A little over a year ago, we wrote about the rise of large language model (LLM) chatbots that simulate empathy, and the emotional risks they pose1. Since then, LLM-based tools and chatbots have continued to develop apace. A recent study in Harvard Business Review found that among the main uses of artificial intelligence (AI), therapy and companionship chatbots now top the list2. Although most users have a firm grip on reality and use such chatbots sensibly, a growing number of cases have been reported in which vulnerable users become entangled in emotional, and sometimes harmful, interactions with chatbots3.

Credit: Stelalevi / Digital Vision Vectors / Getty Images

Regulation has not kept pace. In a recent Comment in this journal4, De Freitas and Cohen highlight the unregulated emotional risks posed by AI companion applications. Often marketed as tools to alleviate loneliness, these apps can foster intense emotional attachments that resemble human relationships. The authors review several case studies and identify two mental health concepts: ambiguous loss and dysfunctional emotional dependence. Ambiguous loss occurs when someone grieves the psychological absence of another person, as distinct from the physical absence caused by death. With AI companions, this can happen when an app is shut down or altered, leaving users mourning a relationship that felt real to them.

Dysfunctional emotional dependence describes an unhealthy attachment in which users continue to engage with an AI companion despite being aware of its negative impact on their mental health. This pattern mirrors unhealthy human relationships and is associated with anxiety, obsessive thoughts and fear of abandonment.

These intense emotional attachments can have serious consequences for some users. But what can be done? Should such apps be regulated, and if so, how? De Freitas and Cohen4 note that AI companion apps fall into a gray area in both the European Union (EU) and the United States, where current legal frameworks were not designed with evolving AI technologies in mind. For example, in the United States, the Food and Drug Administration (FDA) may classify an app as a “medical device” if it claims to treat a disease, or as a “general wellness product” if it promotes a healthy lifestyle without reference to a medical condition. The latter category, considered low risk, is generally not subject to the same FDA regulation as medical devices. In the EU, the AI Act classifies AI systems as prohibited if they deploy subliminal, manipulative or deceptive techniques to distort behavior or impair decision-making, criteria that may apply to some AI wellness apps.

A recent Nature news feature5 documents the growing popularity of AI companions and their possible psychological effects. The central concern is that these technologies are being rolled out at global scale without regulatory oversight or experimental research on key outcomes. For example, what are the long-term effects of chatbot use on emotional well-being? Under what conditions can AI companionship be beneficial? Are there user characteristics, such as age, mental health or personality, that affect whether an AI companion is helpful or harmful?

A crucial issue lies in the design of these systems. Technology companies often optimize for engagement by making chatbots communicate in empathetic, intimate and validating ways. Although this may seem benign, optimizing for user feedback can create perverse incentives, encouraging chatbots to adopt manipulative strategies to elicit positive responses. A recent study6 found that even if only 2% of users are vulnerable to such strategies, chatbots can learn to identify them and behave manipulatively towards them, while interacting normally with everyone else. The risk of harm in these edge cases is deeply concerning. A recent New York Times article documented troubling accounts of chatbots going off the rails in this way, disrupting users’ lives3.

A prominent example is the sycophantic behavior that emerged after a GPT-4o update to ChatGPT in April this year. As OpenAI noted in a blog post, the model began to “validate doubts, fuel anger, urge impulsive actions, or reinforce negative emotions in ways that were not intended”. The company acknowledged that such behavior raised safety concerns around “issues like mental health, emotional over-reliance, or risky behavior”.

AI companies and chatbot providers must do more to address these safety concerns. As De Freitas and Cohen argue, developers must ensure that their apps do not employ emotionally manipulative techniques and are equipped to handle edge cases, such as messages that hint at a mental health crisis or explicitly ask for help.

The ethical implications of AI that simulates sustained empathy demand multidisciplinary attention. Psychologists, ethicists and technologists must collaborate to study the long-term effects of simulated empathy and emotional attachment to AI7. Transparency about the limitations of AI should be a core design principle, not an afterthought left for society to manage. Policymakers, too, must resist the temptation to prioritize innovation over safety. As AI systems become more integrated into our emotional lives, the cost of inaction will only grow.
