California Governor Gavin Newsom signs landmark AI safety law SB 53
California has taken a major step toward regulating artificial intelligence: Governor Gavin Newsom has signed a new state law requiring major AI companies headquartered in the state to publicly disclose how they plan to mitigate the potentially catastrophic risks posed by advanced AI models.
The law also creates mechanisms for reporting critical safety incidents, extends whistleblower protections to AI company employees, and begins development of CalCompute, a state consortium charged with building a public computing cluster for safe, ethical, and sustainable AI research. By compelling companies including OpenAI, Meta, Google DeepMind, and Anthropic to follow these new rules at home, California may set the standard for AI regulation.
Newsom framed the law as a balance between protecting the public and encouraging innovation. In a statement, he wrote: “California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive. This legislation strikes that balance.”
The legislation, authored by Senator Scott Wiener, follows a failed attempt to pass an AI safety law last year. Wiener said the new law, known as SB 53 (for Senate Bill 53), focuses on transparency rather than liability, a departure from the earlier SB 1047 bill, which Newsom vetoed last year.
SB 53 is “a notable victory for both California and the AI industry as a whole,” said Sunny Gandhi, Vice President of Political Affairs at Encode AI, an advocacy organization. “By creating transparency and accountability, it paves the way for a competitive, secure, and world-class AI ecosystem.”
Industry reaction to the new legislation is divided. Jack Clark, a co-founder of Anthropic, which supported SB 53, wrote on X: “We are proud to see [the California Governor] sign [Scott Wiener’s] SB 53, creating transparency requirements for frontier AI companies that will help us all get better data about these systems and the companies building them. Anthropic is proud to have supported this bill.” He stressed that although federal standards remain important to prevent a patchwork of state rules, California has established a framework that balances public safety with continued innovation.
OpenAI, which did not support the bill, told the media it was “pleased to see that California has created a critical path toward harmonization with the federal government,” calling that the most effective approach to AI safety, and adding that if implemented correctly, the law would enable cooperation between the federal and state governments on AI deployment. Meta spokesperson Christopher Sgro likewise told the media that the company “supports balanced AI regulation,” describing SB 53 as “a positive step in that direction,” and said Meta would work with lawmakers to protect consumers while fostering innovation.
Although it is a state-level law, the California legislation will have global reach, since 32 of the top 50 AI companies are based in the state. The bill requires AI companies to report incidents to the California Office of Emergency Services and protects whistleblowers, allowing engineers and other employees to raise safety concerns without risking their careers. SB 53 also includes civil penalties for noncompliance, enforceable by the state attorney general, although AI experts such as Miles Brundage note that these penalties are relatively weak, even compared with those imposed by the European Union’s AI Act.
Brundage, formerly head of policy research at OpenAI, said in a post on X that although SB 53 represents a “step forward,” there is still a need for “actual transparency” in reporting, minimum risk thresholds, and stronger technical assessments.
Collin McCune, head of government affairs at Andreessen Horowitz, warned that the law risks “squeezing startups, slowing innovation, and entrenching the biggest players,” and said it sets a dangerous precedent for state-by-state regulation of AI. Several AI companies that lobbied against the bill made similar arguments.
With its public disclosure and incident-reporting requirements, California aims to strengthen transparency and accountability in the AI sector; critics like McCune, however, argue that the law could make compliance a burden for smaller companies and consolidate Big Tech’s dominance of AI.
Thomas Woodside, co-founder of the Secure AI Project and a co-sponsor of the law, described concerns about startups as “overblown.”
“This bill only applies to companies that are training AI models with a huge amount of compute, costing hundreds of millions of dollars, something no small startup can do,” Woodside told Fortune. “The requirements are to report very serious incidents when they happen, protect those who report violations, and provide a very basic level of transparency. Some obligations don’t even apply to companies with less than $500 million in annual revenue.”
2025-09-30 18:42:00