California lawmaker behind SB 1047 reignites push for mandated AI safety reports

On Wednesday, California State Senator Scott Wiener introduced new amendments to his latest bill, SB 53, which would require the world’s largest artificial intelligence companies to publish safety and security protocols and issue reports when safety incidents occur.
If signed into law, California would become the first state to impose meaningful transparency requirements on leading AI developers, likely including OpenAI, Google, Anthropic, and xAI.
Senator Wiener’s previous AI bill, SB 1047, similarly required AI model developers to publish safety reports. Silicon Valley fought fiercely against that bill, however, and Governor Gavin Newsom ultimately vetoed it. Afterward, the governor called on a group of AI leaders, including the leading Stanford researcher and World Labs co-founder Fei-Fei Li, to form a policy group and set goals for the state’s AI safety efforts.
California’s AI policy group recently published its final recommendations, citing a need for “requirements on industry to publish information about their systems” in order to establish a “robust and transparent evidence environment.” Senator Wiener’s office said in a press release that SB 53’s amendments were heavily influenced by that report.
“The bill continues to be a work in progress, and I look forward to working with all stakeholders in the coming weeks to refine this proposal into the most scientific and fair law it can be,” Senator Wiener said in the statement.
SB 53 aims to strike the balance that Governor Newsom claimed SB 1047 failed to achieve: ideally, creating meaningful transparency requirements for the largest AI developers without thwarting the rapid growth of California’s AI industry.
“These are concerns that my organization and others have been talking about for a while,” said Nathan Calvin, vice president of state affairs for the nonprofit AI safety group Encode. “Having companies explain to the public and government what measures they’re taking to address these risks feels like a bare minimum, reasonable step to take.”
The bill also creates protections for whistleblowers: AI lab employees who believe their company’s technology poses a “critical risk” to society, defined in the bill as contributing to the death or injury of more than 100 people, or more than $1 billion in damage.
Separately, the bill aims to create CalCompute, a public cloud computing cluster to support startups and researchers developing large-scale AI.
Unlike SB 1047, Senator Wiener’s new bill does not make AI model developers liable for the harms caused by their models. SB 53 is also designed not to burden startups and researchers that fine-tune AI models from leading AI developers, or that use open source models.
With the new amendments, SB 53 now heads to the California State Assembly Committee on Privacy and Consumer Protection for approval. Should it pass there, the bill will need to clear several other legislative bodies before reaching Governor Newsom’s desk.
On the other side of the country, New York Governor Kathy Hochul is now weighing a similar AI safety bill, the RAISE Act, which would also require large AI developers to publish safety and security reports.
The fate of state AI laws like the RAISE Act and SB 53 was briefly in jeopardy as federal lawmakers considered a 10-year moratorium on state AI regulation, an attempt to limit the “patchwork” of AI laws that companies would have to navigate. However, that proposal failed in a 99-1 Senate vote earlier in July.
“Ensuring AI is developed safely should not be controversial; it should be foundational,” Geoff Ralston, the former president of Y Combinator, said in a statement to TechCrunch. “Congress should be leading, demanding transparency and accountability from the companies building frontier models. But with no serious federal action in sight, states must step up. California’s SB 53 is a thoughtful, well-structured example of state leadership.”
Up to this point, lawmakers have failed to get AI companies on board with state-mandated transparency requirements. Anthropic has broadly endorsed the need for increased transparency into AI companies, and has even expressed modest optimism about the recommendations of California’s AI policy group. But companies such as OpenAI, Google, and Meta have been more resistant to these efforts.
Leading AI model developers typically publish safety reports for their models, but they have been less consistent in recent months. Google, for example, decided not to publish a safety report for its most advanced model to date, Gemini 2.5 Pro, until months after the model became available. OpenAI likewise declined to publish a safety report for its GPT-4.1 model; a third-party study later suggested the model may be less aligned than previous OpenAI models.
SB 53 is a toned-down version of earlier AI safety bills, but it could still force AI companies to publish more information than they do today. For now, those companies will be watching closely as Senator Wiener once again tests those boundaries.