This isn’t California state Senator Scott Wiener’s first attempt at addressing the dangers of AI.
In 2024, Silicon Valley mounted a fierce campaign against his controversial AI safety bill, SB 1047, which would have made tech companies liable for the potential harms of their AI systems. Tech leaders warned that it would stifle America’s AI boom. Governor Gavin Newsom ultimately vetoed the bill, echoing similar concerns, and a popular AI hacker house promptly threw an “SB 1047 Veto Party.” One attendee told me, “Thank god, AI is still legal.”
Now Wiener has returned with a new AI safety bill, SB 53, which sits on Governor Newsom’s desk awaiting his signature or veto sometime in the next few weeks. This time around, the bill is far more popular, or at least Silicon Valley doesn’t seem to be at war with it.
Anthropic outright endorsed SB 53 earlier this month. Meta spokesperson Jim Cullinan tells TechCrunch that the company supports AI regulation that balances guardrails with innovation and says “SB 53 is a step in that direction,” though there are areas for improvement.
Former White House AI policy advisor Dean Ball tells TechCrunch that SB 53 is a “victory for reasonable voices,” and thinks there’s a strong chance Governor Newsom signs it.
If signed, SB 53 would impose some of the nation’s first safety reporting requirements on AI giants like OpenAI, Anthropic, xAI, and Google — companies that today face no obligation to disclose how they test their AI systems. Many AI labs voluntarily publish safety reports explaining how their AI models could be used to create bioweapons and other dangers, but they do so at will and aren’t always consistent.
The bill requires major AI labs — specifically those generating more than $500 million in revenue — to publish safety reports for their most capable AI models. Much like SB 1047, the bill specifically focuses on the worst kinds of AI risks: their potential to contribute to human deaths, cyberattacks, and chemical weapons. Governor Newsom is considering several other bills that address other kinds of AI risks, such as engagement-optimization techniques in AI companions.
SB 53 also creates protected channels for employees at AI labs to report safety concerns to government officials, and establishes a state-operated cloud computing cluster, CalCompute, to provide AI research resources beyond the big tech companies.
One reason SB 53 may be more popular than SB 1047 is that it’s less severe. SB 1047 would also have made AI companies liable for any harms caused by their AI models, whereas SB 53 focuses more on requiring self-reporting and transparency. SB 53 also applies narrowly to the world’s largest tech companies rather than startups.
But many in the tech industry still believe states should leave AI regulation up to the federal government. In a recent letter to Governor Newsom, OpenAI argued that AI labs should only have to comply with federal standards — which is a funny thing to say to a state governor. The venture firm Andreessen Horowitz wrote a recent blog post vaguely suggesting that some bills in California might violate the Constitution’s dormant Commerce Clause, which prohibits states from unfairly limiting interstate commerce.
Senator Wiener addresses these concerns directly: he lacks faith in the federal government to pass meaningful AI safety regulation, so states need to step up. In fact, Wiener thinks the Trump administration has been captured by the tech industry, and that recent federal efforts to block all state AI laws are a form of Trump “rewarding his funders.”
The Trump administration has made a notable shift away from the Biden administration’s focus on AI safety, replacing it with an emphasis on growth. Shortly after taking office, Vice President J.D. Vance appeared at an AI conference in Paris and said: “I’m not here this morning to talk about AI safety, which was the title of the conference a couple of years ago. I’m here to talk about AI opportunity.”
Silicon Valley has applauded this shift, exemplified by Trump’s AI Action Plan, which removed barriers to building out the infrastructure needed to train and serve AI models. Today, Big Tech CEOs are regularly seen dining at the White House or announcing hundred-billion-dollar data centers alongside President Trump.
Senator Wiener thinks it’s essential for California to lead the nation on AI safety, but without choking off innovation.
I recently interviewed Senator Wiener to discuss his years at the negotiating table with Silicon Valley and why he’s so focused on AI safety bills. Our conversation has been lightly edited for clarity and brevity. My questions are in bold, and his answers are not.
Maxwell Zeff: Senator Wiener, I interviewed you when SB 1047 was sitting on Governor Newsom’s desk. Talk to me about the journey you’ve been on to regulate AI safety over the last few years.
Scott Wiener: It’s been a roller coaster, an incredible learning experience, and just really rewarding. We’ve been able to help elevate this issue [of AI safety], not just in California, but in the national and international discourse.
We have this incredibly powerful new technology that’s changing the world. How do we make sure it benefits humanity in a way where we reduce the risk? How do we promote innovation while also being very mindful of public health and public safety? It’s an important — and in some ways, existential — conversation about the future. SB 1047, and now SB 53, have helped to foster that conversation about safe innovation.
In the last 20 years of technology, what have you learned about the importance of laws that can hold Silicon Valley to account?
I’m the guy who represents San Francisco, the beating heart of AI innovation. I’m immediately north of Silicon Valley itself, so we’re right here in the middle of it all. But we’ve also seen how the big tech companies — some of the wealthiest companies in world history — have been able to stop federal regulation.
Every time I see tech CEOs having dinner at the White House with the aspiring fascist dictator, I have to take a deep breath. These are all really good people who have generated enormous wealth. A lot of the folks I represent work for them. It really pains me when I see the deals being struck with Saudi Arabia and the United Arab Emirates, and how that money gets funneled into Trump’s meme coin. It causes me deep concern.
I’m not someone who’s anti-tech. I want tech innovation to happen. It’s incredibly important. But this is an industry that we should not trust to regulate itself or make voluntary commitments. And that’s not casting aspersions on anyone. This is capitalism, and it can create enormous prosperity but also cause harm if there are not sensible regulations to protect the public interest. When it comes to AI safety, we’re trying to thread that needle.
SB 53 is focused on the worst harms AI could conceivably cause — death, massive cyberattacks, and the creation of bioweapons. Why focus there?
The risks of AI are varied. There’s algorithmic discrimination, job loss, deepfakes, and scams. There have been various bills in California and elsewhere to address those risks. SB 53 was never intended to cover the field and address every risk created by AI. We’re focused on one specific category of risk: catastrophic risk.
That issue came to me organically from folks in the AI space in San Francisco — startup founders, frontline AI technologists, and people who are building these models. They came to me and said, ‘This is an issue that needs to be addressed in a thoughtful way.’
Do you feel that AI systems are inherently unsafe, or have the potential to cause death and massive cyberattacks?
I don’t think they’re inherently safe. I know there are a lot of people working in these labs who care very deeply about trying to mitigate risk. And again, it’s not about eliminating risk. Life is about risk. Unless you’re going to live in your basement and never leave, you’re going to have risk in your life. Even in your basement, the ceiling could fall down.
Is there a risk that some AI models could be used to do significant harm to society? Yes, and we know there are people who would like to do that. We should try to make it harder for bad actors to cause these severe harms, and so should the people developing these models.
Anthropic issued its support for SB 53. What are your conversations like with other industry players?
We’ve talked to everyone: large companies, small startups, investors, and academics. Anthropic has been really constructive. Last year, they never formally supported [SB 1047] but they had positive things to say about aspects of the bill. I don’t think [Anthropic] loves every aspect of SB 53, but I think they concluded that on balance the bill was worth supporting.
I’ve had conversations with large AI labs who are not supporting the bill, but are not at war with it in the way they were with SB 1047. It’s not surprising: SB 1047 was more of a liability bill, while SB 53 is more of a transparency bill. Startups have been less engaged this year because the bill really focuses on the largest companies.
Do you feel pressure from the large AI PACs that have formed in recent months?
This is another symptom of Citizens United. The wealthiest companies in the world can just pour endless resources into these PACs to try to intimidate elected officials. Under the rules we have, they have every right to do that. It’s never really impacted how I approach policy. There have been groups trying to destroy me for as long as I’ve been in elected office. Various groups have spent millions trying to blow me up, and here I am. I’m in this to do right by my constituents and try to make my community, San Francisco, and the world a better place.
What’s your message to Governor Newsom as he’s debating whether to sign or veto this bill?
My message is that we heard you. You vetoed SB 1047 and provided a very comprehensive and thoughtful veto message. You wisely convened a working group that produced a very strong report, and we really looked to that report in crafting this bill. The governor laid out a path, and we followed that path in order to come to an agreement, and I hope we got there.