Silicon Valley leaders, including White House AI and Crypto Czar David Sacks and OpenAI Chief Strategy Officer Jason Kwon, caused a stir online this week with their comments about groups promoting AI safety. In separate cases, they alleged that some AI safety advocates are not as virtuous as they appear and are either acting in self-interest or serving as puppets for billionaire masterminds behind the scenes.
AI safety groups who spoke to TechCrunch say the allegations by Sacks and OpenAI are the latest attempt to intimidate Silicon Valley critics, but certainly not the first. In 2024, some venture capital firms spread rumors that California's AI safety bill, SB 1047, would send startup founders to prison. The Brookings Institution described this rumor as one of many "false statements" about the bill, but Gov. Gavin Newsom ultimately vetoed it anyway.
Whether or not Sacks and OpenAI intended to intimidate critics, their actions have frightened several AI safety advocates. Many nonprofit leaders contacted by TechCrunch last week asked to speak on the condition of anonymity to spare their groups from retaliation.
The controversy highlights the growing tension in Silicon Valley between building AI responsibly and building it to become a mass consumer product – a topic my colleagues Kirsten Korosec, Anthony Ha and I explore on this week’s podcast. We also delve into the novel AI safety law passed in California that regulates chatbots, and OpenAI’s approach to erotica in ChatGPT.
On Tuesday, Sacks wrote a post on X claiming that Anthropic — which has raised concerns about AI's potential to contribute to unemployment, cyberattacks, and catastrophic harm to society — is simply fear-mongering in order to pass regulations that will benefit itself while drowning smaller startups in paperwork. Anthropic was the only major AI lab to support California Senate Bill 53 (SB 53), a bill establishing safety reporting requirements for large AI companies that was signed into law last month.
Sacks was responding to a viral essay by Anthropic co-founder Jack Clark about his concerns over artificial intelligence. Clark had delivered the essay as a speech at the Curve AI safety conference in Berkeley a few weeks earlier. To those sitting in the audience, it certainly seemed like a genuine account of a technologist's reservations about his own products, but Sacks didn't see it that way.
Sacks said Anthropic is pursuing a "sophisticated regulatory capture strategy," though it's worth noting that a truly sophisticated strategy probably wouldn't involve making an enemy of the federal government. In a follow-up post on X, Sacks noted that Anthropic has "consistently positioned itself as an enemy of the Trump administration."
Also this week, OpenAI Chief Strategy Officer Jason Kwon wrote a post on X explaining why the company was sending subpoenas to AI safety nonprofits like Encode, which advocates for responsible AI policy. (A subpoena is a legal order requiring documents or testimony.) Kwon said that after Elon Musk sued OpenAI — over concerns that the ChatGPT maker had strayed from its nonprofit mission — OpenAI found it suspicious that several organizations also opposed its restructuring. Encode filed an amicus brief in support of Musk's lawsuit, and other nonprofits have spoken publicly against OpenAI's restructuring.
“This raised transparency questions about who was funding them and whether there was any coordination,” Kwon said.
NBC News reported this week that OpenAI issued broad subpoenas to Encode and six other nonprofits that criticized the company, asking for communications regarding two of OpenAI's biggest opponents, Musk and Meta CEO Mark Zuckerberg. OpenAI also asked Encode for communications related to its support of SB 53.
A top AI safety leader told TechCrunch that there is a growing divide between OpenAI's government affairs team and its research organization. While OpenAI's safety researchers often publish reports revealing the risks of AI systems, its policy unit lobbied against SB 53, saying it would prefer uniform rules at the federal level.
OpenAI's head of mission alignment, Joshua Achiam, spoke out about his company sending subpoenas to nonprofits in a post on X this week.
“Even though it may be a risk to my entire career, I will say: It doesn’t seem great,” Achiam said.
Brendan Steinhauser, CEO of the AI safety nonprofit Alliance for Secure AI (which has not been subpoenaed by OpenAI), told TechCrunch that OpenAI appears to believe its critics are part of a conspiracy led by Musk. However, he argues that this is not the case, and that much of the AI safety community is quite critical of xAI's safety practices, or lack thereof.
"On OpenAI's part, this is intended to silence critics, intimidate them, and dissuade other nonprofits from doing the same," Steinhauser said. "As for Sacks, I think he's concerned that [the AI safety] movement is growing and people want to hold these companies accountable."
Sriram Krishnan, White House senior adviser on artificial intelligence and a former general partner at a16z, joined the fray this week with a social media post of his own, calling AI safety advocates out of touch. He urged AI safety organizations to talk to "people in the real world who are using, selling and adopting AI in their homes and organizations."
A recent Pew study found that about half of Americans are more concerned than excited about artificial intelligence, though it's not clear what exactly worries them. Another recent study was more detailed, finding that American voters care more about job loss and deepfakes than about the catastrophic risks posed by AI, which are largely the focus of the AI safety movement.
Addressing these safety concerns may come at the expense of the AI industry's rapid growth — a trade-off that worries many in Silicon Valley. With AI investment propping up much of the U.S. economy, concern about overregulation is understandable.
But after years of largely unregulated AI progress, the AI safety movement appears to be gaining real momentum heading into 2026. Silicon Valley's attempts to crack down on safety-focused groups may be a sign that they're working.
