OpenAI may ‘adjust’ its safeguards if rivals release ‘high-risk’ AI

OpenAI has updated its Preparedness Framework – the internal system it uses to assess the safety of AI models and determine the necessary safeguards during development and release. In the update, OpenAI stated that it may “adjust” its safety requirements if a rival lab releases a “high-risk” system without comparable protections.
The change reflects the growing competitive pressure on commercial AI developers to deploy models quickly. OpenAI has been accused of lowering safety standards in favor of faster releases, and of failing to deliver timely reports detailing its safety testing. Last week, 12 former OpenAI employees filed a brief in Elon Musk’s case against OpenAI, arguing that the company would be encouraged to cut even more corners on safety if it completes its planned corporate restructuring.
Perhaps anticipating criticism, OpenAI says it would not make these policy adjustments lightly, and that it would keep its safeguards at “a level more protective.”
“If another frontier AI developer releases a high-risk system without comparable safeguards, we may adjust our requirements,” OpenAI wrote in a blog post published Tuesday afternoon. “However, we would first rigorously confirm that the risk landscape has actually changed, publicly acknowledge that we are making an adjustment, assess that the adjustment does not meaningfully increase the overall risk of severe harm, and still keep safeguards at a level more protective.”
The refreshed Preparedness Framework also makes clear that OpenAI is relying more heavily on automated evaluations to speed up product development. The company says that while it has not abandoned human-led testing entirely, it has built “a growing suite of automated evaluations” that can supposedly “keep up with [a] faster [release] cadence.”
Some reports contradict this. According to the Financial Times, OpenAI gave testers less than a week for safety checks on an upcoming major model – a compressed timeline compared with previous releases. The publication’s sources also alleged that many of OpenAI’s safety tests are now conducted on earlier versions of models rather than on the versions released to the public.
In statements, OpenAI has disputed the notion that it is compromising on safety.
OpenAI is quietly reducing its safety commitments.

Omitted from OpenAI’s list of Preparedness Framework changes:

No longer requiring safety tests of fine-tuned models

— Steven Adler (@sjgadler) April 15, 2025
Other changes concern how OpenAI categorizes models according to risk, including models that can conceal their capabilities, evade their safeguards, prevent their own shutdown, and even self-replicate. OpenAI says it will now focus on whether models meet one of two thresholds: “high” capability or “critical” capability.
OpenAI’s definition of the former is a model that could “amplify existing pathways to severe harm.” The latter are models that “introduce unprecedented new pathways to severe harm,” according to the company.
“Covered systems that reach high capability must have safeguards that sufficiently minimize the associated risk of severe harm before they are deployed,” OpenAI wrote in the blog post. “Systems that reach critical capability also require safeguards that sufficiently minimize associated risks during development.”
The updates are the first OpenAI has made to the Preparedness Framework since 2023.