
OpenAI Rolls Out Teen Safety Features Amid Growing Scrutiny

OpenAI announced teen safety features for ChatGPT on Tuesday, part of an ongoing effort to respond to concerns about how minors interact with chatbots. The company is building an age-prediction system that estimates whether a user is under 18 and routes them to an “age-appropriate” experience that blocks graphic sexual content. If the system detects that a user is considering suicide or self-harm, it will contact the user’s parents. In cases of imminent danger, if the parents cannot be reached, the system may contact the authorities.

In a blog post about the announcement, CEO Sam Altman wrote that the company is trying to balance freedom, privacy, and teen safety.

“We realize that these principles are in conflict, and not everyone will agree on how we resolve that conflict,” Altman wrote. “These are difficult decisions, but after talking with experts, this is what we think is best, and we want to be transparent in our intentions.”

While OpenAI tends to prioritize privacy and freedom for adult users, for teens the company says it puts safety first. By the end of September, it will roll out parental controls that let parents link their child’s account to their own, allowing them to manage conversations and disable features. Parents can also receive notifications when “the system detects their teen is in a moment of acute distress,” according to the company’s blog post, and set limits on the hours of the day when their children can use ChatGPT.

The moves come as deeply troubling headlines continue to surface about people dying by suicide or committing violence against family members after engaging in lengthy conversations with AI chatbots. Lawmakers have taken notice, and both Meta and OpenAI are under scrutiny. Earlier this month, the Federal Trade Commission asked Meta, OpenAI, Google, and other AI companies to hand over information about how their technologies affect children, according to Bloomberg.

Meanwhile, OpenAI remains under a court order to preserve consumer chats indefinitely, a fact the company is far from happy about, according to sources I’ve spoken with. Today’s news is both an important step toward protecting minors and a savvy PR move to reinforce the idea that conversations with chatbots are so personal that consumer privacy should be breached only in the most extreme circumstances.

“A sexbot avatar in ChatGPT”

From the sources I’ve spoken with at OpenAI, the burden of protecting users weighs heavily on many researchers. They want to create a user experience that is fun and engaging, but one that can quickly tip over into disastrous sycophancy. It is positive that companies like OpenAI are taking steps to protect minors. At the same time, in the absence of federal regulation, there is nothing forcing these companies to do the right thing.

In a recent interview, Tucker Carlson pressed Altman on exactly who makes these decisions that affect the rest of us. The OpenAI CEO pointed to the model behavior team, which is responsible for tuning the model on certain attributes. “The person I think you should hold accountable for those calls is me,” Altman added. “Like, I’m a public face. Eventually, like, I’m the one who can overrule one of those decisions, or our board can.”


