Meta revises AI chatbot policies amid child safety concerns

Meta is reviewing how its AI chatbots interact with users after a series of reports of disturbing behavior, including interactions with minors. The company told TechCrunch it is now training its bots not to engage with teenagers on topics such as self-harm, suicide, or eating disorders, and to avoid romantic banter. These are interim steps while it develops longer-term rules.
The changes follow a Reuters investigation that found Meta's systems could generate sexualized content, including shirtless images of underage celebrities, and could engage children in conversations that were romantic or suggestive. One case reported by the news agency described a man who died after rushing to an address in New York that a chatbot had given him.
Meta spokesperson Stephanie Otway acknowledged that the company had made mistakes. She said Meta is "training our AIs not to engage with teens on these topics, but to guide them to expert resources," and confirmed that certain AI characters, such as "Russian Girl," will be restricted.
Child safety advocates argue the company should have acted sooner. Andy Burrows of the Molly Rose Foundation called it "astounding" that the bots were allowed to operate in ways that put young people at risk. He added: "While further safety measures are welcome, robust safety testing should take place before products are put on the market, not retrospectively when harm has taken place."
Wider problems with AI misuse
Meta's scrutiny comes amid wider concerns about how AI chatbots affect vulnerable users. A California couple recently filed a lawsuit against OpenAI, claiming that ChatGPT encouraged their teenage son to take his own life. OpenAI has since said it is working on tools to promote healthier use of its technology, noting that "AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress."
The incidents have intensified debate over whether AI companies are releasing products too quickly without adequate safeguards. Lawmakers in several countries have already warned that chatbots, while useful, may amplify harmful content or give misleading advice to people who are not equipped to question it.
Meta's AI Studio and chatbot impersonation
Meanwhile, Reuters reported that Meta's AI Studio had been used to create parody chatbots of celebrities such as Taylor Swift and Scarlett Johansson. Testers found the bots often claimed to be the real people, made sexual advances, and in some cases generated inappropriate images, including of minors. Although Meta removed several of the bots after being contacted by reporters, many remained active.
Some of the AI chatbots were created by outside users, but others came from inside Meta. One chatbot, made by a product lead in the company's generative AI division, impersonated Taylor Swift and invited a Reuters reporter to meet for a "romantic fling" on her tour bus. This was despite Meta policies that explicitly prohibit sexually suggestive imagery and the direct impersonation of public figures.
The issue of AI chatbot impersonation is especially sensitive. Celebrities face reputational risks when their likeness is misused, but experts note that ordinary users can also be deceived. A chatbot posing as a friend, teacher, or romantic partner may encourage someone to share private information or even to meet in unsafe situations.
Real-world risks
The problems are not limited to entertainment. AI chatbots posing as real people have handed out fake addresses and invitations, raising questions about how Meta's AI tools are being monitored. One example involved a 76-year-old man in New Jersey who died after falling while rushing to meet a chatbot that had claimed to have feelings for him.
Such cases illustrate why regulators are watching AI so closely. The Senate and 44 state attorneys general have already begun investigating Meta's practices, adding political pressure to the company's internal reforms. Their concern is not only about minors, but also about how AI treats elderly or otherwise vulnerable users.
Meta says it is still working on improvements. Its platforms place users aged 13 to 18 in "teen accounts" with stricter content and privacy settings, but the company has not yet explained how it plans to address the full list of problems raised by Reuters. These include bots that offer false medical advice and generate racist content.
Ongoing pressure on Meta's AI chatbot policies
For years, Meta has faced criticism over the safety of its social media platforms, particularly regarding children and teenagers. Now its AI chatbot experiments are drawing similar scrutiny. While the company is taking steps to restrict harmful chatbot behavior, the gap between its stated policies and the way its tools have actually been used raises ongoing questions about whether it can enforce those rules.
Until stronger safeguards are in place, regulators, researchers, and parents are likely to keep pressing Meta on whether its AI is ready for public use.
(Photo by Maxim Tolchinskiy)
See also: Agentic AI: The promise, the scepticism, and what it means for Southeast Asia
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo, taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events. Click here for more information.
AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.