
Anthropic CEO claps back after Trump officials accuse firm of AI fear-mongering


Anthropic CEO Dario Amodei published a statement Tuesday to “set the record straight” on the company’s alignment with the Trump administration’s AI policy, responding to what he called “a recent uptick in inaccurate claims about Anthropic’s policy stances.”

“Anthropic is built on a simple principle: AI should be a force for human progress, not peril,” Amodei wrote. “That means making products that are genuinely useful, speaking honestly about risks and benefits, and working with anyone serious about getting this right.”

Amodei’s response comes after last week’s dogpiling on Anthropic from AI leaders and top members of the Trump administration, including AI czar David Sacks and White House senior policy advisor for AI Sriram Krishnan, all of whom accused the AI giant of stoking fears to damage the industry.

The first hit came from Sacks after Anthropic co-founder Jack Clark shared his hopes and “appropriate fears” about AI, including that AI is a powerful, mysterious, “somewhat unpredictable” creature, not a dependable machine that is easily mastered and put to work.

Sacks’s response: “Anthropic is running a sophisticated regulatory capture strategy based on fear-mongering. It’s principally responsible for the state regulatory frenzy that is damaging the startup ecosystem.”

California senator Scott Wiener, author of AI safety bill SB 53, defended Anthropic, calling out President Trump’s “effort to ban states from acting on AI w/o advancing federal protections.” Sacks then doubled down, claiming Anthropic was working with Wiener to “impose the Left’s vision of AI regulation.”

Further commentary ensued, with anti-regulation advocates like Groq COO Sunny Madra saying that Anthropic was “causing chaos for the entire industry” by advocating for a modicum of AI safety measures instead of unfettered innovation.

In his statement, Amodei said managing the societal impacts of AI should be a matter of “policy over politics,” and that he believes everyone wants to ensure America secures its lead in AI development while also building tech that benefits the American people. He defended Anthropic’s alignment with the Trump administration in key areas of AI policy and called out examples of times he personally played ball with the president.

For example, Amodei pointed to Anthropic’s work with the federal government, including the firm’s offering of Claude to the federal government and Anthropic’s $200 million agreement with the Department of Defense (which Amodei called “the Department of War,” echoing Trump’s preferred terminology, though the name change requires congressional approval). He also noted that Anthropic publicly praised Trump’s AI Action Plan and has been supportive of Trump’s efforts to expand energy provision to “win the AI race.”

Despite these shows of cooperation, Anthropic has caught heat from industry peers for stepping outside the Silicon Valley consensus on certain policy issues.

The company first drew ire from Silicon Valley-linked officials when it opposed a proposed 10-year ban on state-level AI regulation, a provision that faced widespread bipartisan pushback.

Many in Silicon Valley, including leaders at OpenAI, have claimed that state AI regulation would slow down the industry and hand China the lead. Amodei countered that the real risk is that the U.S. continues to fill China’s data centers with powerful AI chips from Nvidia, adding that Anthropic restricts the sale of its AI services to China-controlled companies despite the hit to revenue.

“There are products we won’t build and risks we won’t take, even if they would make money,” Amodei said.

Anthropic also fell out of favor with certain power players when it supported California’s SB 53, a light-touch safety bill that requires the largest AI developers to make their frontier model safety protocols public. Amodei noted that the bill has a carve-out for companies with annual gross revenue below $500 million, which would exempt most startups from any undue burdens.

“Some have suggested that we are somehow interested in harming the startup ecosystem,” Amodei wrote, referring to Sacks’ post. “Startups are among our most important customers. We work with tens of thousands of startups and partner with hundreds of accelerators and VCs. Claude is powering an entirely new generation of AI-native companies. Damaging that ecosystem makes no sense for us.”

In his statement, Amodei said Anthropic has grown from a $1 billion to a $7 billion run-rate over the last nine months while managing to deploy “AI thoughtfully and responsibly.”

“Anthropic is committed to constructive engagement on matters of public policy. When we agree, we say so. When we don’t, we propose an alternative for consideration,” Amodei wrote. “We’re going to keep being honest and straightforward, and will stand up for the policies we believe are right. The stakes of this technology are too great for us to do otherwise.”
