Silicon Valley leaders, including White House AI & Crypto Czar David Sacks and OpenAI Chief Strategy Officer Jason Kwon, caused a stir online this week with their comments about groups promoting AI safety. In separate instances, they alleged that certain AI safety advocates are not as virtuous as they appear, and are acting either in their own interest or on behalf of billionaire puppet masters behind the scenes.
AI safety groups that spoke with TechCrunch say the allegations from Sacks and OpenAI are Silicon Valley's latest attempt to intimidate its critics, but certainly not the first. In 2024, some venture capital firms spread rumors that a California AI safety bill, SB 1047, would send startup founders to jail. The Brookings Institution labeled the rumor one of many "misrepresentations" about the bill, but Governor Gavin Newsom ultimately vetoed it anyway.
Whether or not Sacks and OpenAI intended to intimidate critics, their actions have sufficiently scared several AI safety advocates. Many nonprofit leaders TechCrunch reached out to in the last week asked to speak on the condition of anonymity to spare their groups from retaliation.
The controversy underscores Silicon Valley's growing tension between building AI responsibly and building it to be a massive consumer product, a theme my colleagues Kirsten Korosec, Anthony Ha, and I unpack on this week's Equity podcast. We also dive into a new AI safety law passed in California to regulate chatbots, and OpenAI's approach to erotica in ChatGPT.
On Tuesday, Sacks wrote a post on X alleging that Anthropic, which has raised concerns about AI's potential to contribute to unemployment, cyberattacks, and catastrophic harms to society, is simply fearmongering to get laws passed that would benefit itself and drown smaller startups in paperwork. Anthropic was the only major AI lab to endorse California's Senate Bill 53 (SB 53), a bill that sets safety reporting requirements for large AI companies, which was signed into law last month.
Sacks was responding to a viral essay from Anthropic co-founder Jack Clark about his fears regarding AI. Clark delivered the essay as a speech at the Curve AI safety conference in Berkeley weeks earlier. Sitting in the audience, it certainly felt like a genuine account of a technologist's reservations about his products, but Sacks didn't see it that way.
Sacks said Anthropic is running a "sophisticated regulatory capture strategy," though it's worth noting that a truly sophisticated strategy probably wouldn't involve making an enemy of the federal government. In a follow-up post on X, Sacks noted that Anthropic has positioned "itself consistently as a foe of the Trump administration."
Also this week, OpenAI's chief strategy officer, Jason Kwon, wrote a post on X explaining why the company was sending subpoenas to AI safety nonprofits, such as Encode, a nonprofit that advocates for responsible AI policy. (A subpoena is a legal order demanding documents or testimony.) Kwon said that after Elon Musk sued OpenAI over concerns that the ChatGPT maker has veered away from its nonprofit mission, OpenAI found it suspicious that several organizations also raised opposition to its restructuring. Encode filed an amicus brief in support of Musk's lawsuit, and other nonprofits spoke out publicly against OpenAI's restructuring.
"This raised transparency questions about who was funding them and whether there was any coordination," said Kwon.
NBC News reported this week that OpenAI sent broad subpoenas to Encode and six other nonprofits that criticized the company, asking for their communications related to two of OpenAI's biggest opponents, Musk and Meta CEO Mark Zuckerberg. OpenAI also asked Encode for communications related to its support of SB 53.
One prominent AI safety leader told TechCrunch that there's a growing split between OpenAI's government affairs team and its research organization. While OpenAI's safety researchers frequently publish reports disclosing the risks of AI systems, OpenAI's policy unit lobbied against SB 53, saying it would rather have uniform rules at the federal level.
OpenAI's head of mission alignment, Joshua Achiam, spoke out about his company sending subpoenas to nonprofits in a post on X this week.
"At what is possibly a risk to my whole career I will say: this doesn't seem great," said Achiam.
Brendan Steinhauser, CEO of the AI safety nonprofit Alliance for Secure AI (which has not been subpoenaed by OpenAI), told TechCrunch that OpenAI seems convinced its critics are part of a Musk-led conspiracy. However, he argues this isn't the case, and that much of the AI safety community is quite critical of xAI's safety practices, or lack thereof.
"On OpenAI's part, this is meant to silence critics, to intimidate them, and to dissuade other nonprofits from doing the same," said Steinhauser. "For Sacks, I think he's concerned that [the AI safety] movement is growing and people want to hold these companies accountable."
Sriram Krishnan, the White House's senior policy advisor for AI and a former a16z general partner, chimed in on the conversation this week with a social media post of his own, calling AI safety advocates out of touch. He urged AI safety organizations to talk to "people in the real world using, selling, adopting AI in their homes and organizations."
A recent Pew study found that roughly half of Americans are more concerned than excited about AI, but it's unclear what exactly worries them. Another recent study went into more detail and found that American voters care more about job losses and deepfakes than the catastrophic risks the AI safety movement is largely focused on.
Addressing these safety concerns could come at the expense of the AI industry's rapid growth, a trade-off that worries many in Silicon Valley. With AI investment propping up much of America's economy, the fear of over-regulation is understandable.
But after years of unregulated AI growth, the AI safety movement appears to be gaining real momentum heading into 2026. Silicon Valley's attempts to fight back against safety-focused groups may be a sign that they're working.
