Meta is re-training its AI so it won’t discuss self-harm or have romantic conversations with teens
Meta is retraining its AI and adding new protections to prevent teenagers from discussing harmful topics with the company's chatbots. The company says it is adding "new guardrails as an extra precaution" to keep teens from discussing self-harm, disordered eating and suicide with Meta AI. Meta will also block teens from accessing user-created chatbot characters that might engage in inappropriate conversations.
The changes, first reported by TechCrunch, come after several reports drew attention to disturbing interactions between Meta AI and teens. Earlier this month, Reuters reported on an internal policy document indicating the company's AI chatbots were permitted to have "sensual" conversations with underage users. Meta later said the language was "erroneous and inconsistent with our policies" and had been removed. Yesterday, The Washington Post reported on a study that found Meta AI was able to "coach teen accounts on suicide, self-harm and eating disorders."
Meta is now tightening its internal guardrails so these kinds of interactions are no longer possible for teens on Instagram and Facebook. "We built protections for teens into our AI products from the start, including designing them to respond safely to prompts about self-harm, suicide and disordered eating," Meta spokesperson Stephanie Otway told Engadget in a statement.

"As our community grows and technology evolves, we're continually learning about how young people may interact with these tools and strengthening our protections accordingly. As we continue to refine our systems, we're adding more guardrails as an extra precaution – including training our AIs not to engage with teens on these topics, but to guide them to expert resources, and limiting teen access to a select group of AI characters for now."
Notably, the new protections are described as being in place "for now," suggesting Meta is still working on more permanent measures to address growing concerns about teen safety and AI. "These updates are already in progress, and we will continue to adapt our approach to help ensure teens have safe, age-appropriate experiences with AI," Otway said. The new protections will roll out over the next few weeks and apply to all teens using Meta AI in English-speaking countries.
Meta's policies have also drawn the attention of lawmakers and other officials. Senator Josh Hawley recently told the company he plans to launch an investigation into its handling of these interactions. The Texas Attorney General, Ken Paxton, has also indicated he wants to investigate Meta over its chatbots allegedly misleading children with mental health claims.



