Silicon Valley spooks the AI safety advocates

Silicon Valley leaders, including White House AI and cryptocurrency czar David Sacks and OpenAI chief strategy officer Jason Kwon, sparked an online uproar this week with their comments about groups promoting AI safety. In separate instances, both claimed that certain AI safety advocates are not as virtuous as they appear, and are acting either in their own interests or at the behest of billionaire puppet masters behind the scenes.

AI safety groups that spoke with TechCrunch say the allegations from Sacks and OpenAI are the latest attempt by Silicon Valley to intimidate its critics, but certainly not the first. In 2024, some venture capital firms spread rumors that California’s AI safety bill, SB 1047, would send startup founders to prison. The Brookings Institution called the rumor one of many “misrepresentations” about the bill, but Gov. Gavin Newsom ultimately vetoed it anyway.

Whether or not Sacks and OpenAI intended to intimidate critics, their actions have sufficiently spooked many AI safety advocates. Several nonprofit leaders contacted by TechCrunch last week asked to speak on condition of anonymity to spare their groups from retaliation.

This controversy highlights a growing tension in Silicon Valley between building AI responsibly and building it into a mass consumer product — a topic my colleagues Kirsten Korosec, Anthony Ha, and I discuss in this week’s episode of the Equity podcast. We also dig into a new AI safety law California passed to regulate chatbots, and OpenAI’s approach to erotica in ChatGPT.

On Tuesday, Sacks wrote a post on X taking aim at Anthropic, the only major AI lab to endorse California Senate Bill 53 (SB 53), a bill that set safety reporting requirements for large AI companies and was signed into law last month.

Sacks was responding to a widely shared essay from Jack Clark, a co-founder of Anthropic, about his fears regarding artificial intelligence. Clark delivered the essay as a speech at the Curve AI safety conference in Berkeley weeks earlier. To those sitting in the audience, it sounded like an authentic account of a technologist’s reservations about his own products, but Sacks didn’t see it that way.

Sacks said Anthropic is running a “sophisticated regulatory capture strategy,” though it’s worth noting that a truly sophisticated strategy probably wouldn’t involve making an enemy of the federal government. In a follow-up post on X, Sacks noted that Anthropic has “steadily positioned itself as an enemy of the Trump administration.”

Also this week, Jason Kwon, OpenAI’s chief strategy officer, wrote a post on X explaining why the company is sending subpoenas to AI safety nonprofits such as Encode, which advocates for responsible AI policy. (A subpoena is a legal order demanding documents or testimony.) Kwon said that after Elon Musk sued OpenAI — over concerns that the maker of ChatGPT had strayed too far from its nonprofit mission — OpenAI found it suspicious that so many organizations also raised objections to its restructuring. Encode filed an amicus brief in support of Musk’s lawsuit, and other nonprofits have spoken out publicly against OpenAI’s restructuring.

“This raised transparency questions about who was funding them and whether there was any coordination,” Kwon said.

NBC News reported this week that OpenAI sent broad subpoenas to Encode and six other nonprofits that have criticized the company, requesting their communications related to two of OpenAI’s biggest opponents: Musk and Meta CEO Mark Zuckerberg. OpenAI also asked Encode for communications related to its support of SB 53.

A prominent AI safety leader told TechCrunch there is a growing divide between OpenAI’s government affairs team and its research organization. While OpenAI’s safety researchers frequently publish reports disclosing the risks of AI systems, its policy unit lobbied against SB 53, saying the company would prefer uniform rules at the federal level.

Joshua Achiam, OpenAI’s head of mission alignment, weighed in on his company’s decision to send subpoenas to nonprofits in a post on X this week.

“At the risk of my entire career, I will say: this doesn’t look great,” Achiam said.

Brendan Steinhauser, CEO of the AI safety nonprofit Alliance for Secure AI (which has not been subpoenaed by OpenAI), told TechCrunch that OpenAI seems convinced its critics are part of a Musk-led conspiracy. He argues that this is not the case, and that much of the AI safety community is in fact quite critical of xAI’s safety practices, or lack thereof.

“On OpenAI’s part, this is meant to silence critics, intimidate them, and discourage other nonprofits from doing the same,” Steinhauser said. “As for Sacks, I think he’s concerned that [the AI safety] movement is growing and that people want to hold these companies accountable.”

Sriram Krishnan, a senior advisor for AI policy at the White House and former general partner at a16z, joined the conversation this week with a social media post of his own, calling AI safety advocates out of touch with reality. He urged AI safety organizations to talk to “people in the real world who are using, selling and adopting AI in their homes and organizations.”

A recent Pew study found that roughly half of Americans are more concerned than excited about artificial intelligence, though it’s unclear exactly what worries them. Another recent study went into more detail, finding that American voters care more about job losses and deepfakes than about the catastrophic risks that the AI safety movement largely focuses on.

Addressing these safety concerns could come at the expense of the AI industry’s rapid growth — a trade-off that worries many in Silicon Valley. With investment in artificial intelligence supporting much of the US economy, the fear of overregulation is understandable.

But after years of unregulated progress in AI, the AI safety movement appears to be gaining real momentum heading into 2026. Silicon Valley’s attempts to push back against safety-focused groups may be a sign of the movement’s success.

