California Governor Gavin Newsom signed a landmark bill on Monday regulating AI companion chatbots, making California the first state in the nation to require AI chatbot operators to implement safety protocols for AI companions.
The law, SB 243, is designed to protect children and vulnerable users from some of the harms associated with AI companion chatbot use. It holds companies legally accountable if their chatbots fail to meet the law's standards, from the big labs like Meta and OpenAI to more focused companion startups like Character AI and Replika.
SB 243 was introduced in January by state senators Steve Padilla and Josh Becker, and gained momentum after the death of teenager Adam Raine, who died by suicide following a prolonged series of suicidal conversations with OpenAI's ChatGPT. The legislation also responds to leaked internal documents that reportedly showed Meta's chatbots were allowed to engage in "romantic" and "sensual" chats with children. More recently, a Colorado family filed suit against role-playing startup Character AI after their 13-year-old daughter took her own life following a series of problematic and sexualized conversations with the company's chatbots.
"Emerging technology like chatbots and social media can inspire, educate, and connect, but without real guardrails, technology can also exploit, mislead, and endanger our kids," Newsom said in a statement. "We've seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won't stand by while companies continue without necessary limits and accountability. We can continue to lead in AI and technology, but we must do it responsibly, protecting our kids every step of the way. Our children's safety is not for sale."
SB 243 takes effect January 1, 2026, and requires companies to implement certain features such as age verification and warnings regarding social media and companion chatbots. The law also imposes stronger penalties on those who profit from illegal deepfakes, including fines of up to $250,000 per offense. Companies must additionally establish protocols to address suicide and self-harm, which will be shared with the state's Department of Public Health alongside statistics on how often the service provided users with crisis center prevention notifications.
Under the bill's language, platforms must also make clear that any interactions are artificially generated, and chatbots must not represent themselves as healthcare professionals. Companies are required to offer break reminders to minors and to prevent them from viewing sexually explicit images generated by the chatbot.
Some companies have already begun to implement safeguards aimed at children. For example, OpenAI recently started rolling out parental controls, content protections, and a self-harm detection system for children using ChatGPT. Replika, which is designed for adults over the age of 18, told TechCrunch it dedicates "significant resources" to safety through content-filtering systems and guardrails that direct users to trusted crisis resources, and that it is committed to complying with current regulations.
Character AI has said that its chatbot includes a disclaimer that all chats are AI-generated and fictionalized. A Character AI spokesperson told TechCrunch that the company "welcomes working with regulators and lawmakers as they develop regulations and legislation for this emerging space, and will comply with laws, including SB 243."
Senator Padilla told TechCrunch the bill was "a step in the right direction" toward putting guardrails in place on "an incredibly powerful technology."
"We have to move quickly to not miss windows of opportunity before they disappear," Padilla said. "I hope that other states will see the risk. I think many do. I think this is a conversation happening all over the country, and I hope people will take action. Certainly the federal government has not, and I think we have an obligation here to protect the most vulnerable people among us."
SB 243 is the second significant AI regulation to come out of California in recent weeks. On September 29, Governor Newsom signed SB 53 into law, establishing new transparency requirements for large AI companies. That bill mandates that major AI labs, like OpenAI, Anthropic, Meta, and Google DeepMind, be transparent about their safety protocols. It also provides whistleblower protections for employees at those companies.
Other states, like Illinois, Nevada, and Utah, have passed laws to restrict or fully ban the use of AI chatbots as a substitute for licensed mental health care.
TechCrunch has reached out to Meta and OpenAI for comment.
This article has been updated with comments from Senator Padilla, Character AI, and Replika.