Teens are trying to figure out where they fit in a world changing faster than any generation before them. They're bursting with emotions, hyper-stimulated, and chronically online. And now, AI companies have given them chatbots designed to never stop talking. The results have been catastrophic.
One company that understands this fallout is Character.AI, an AI role-playing startup facing lawsuits and public outcry after at least two teenagers died by suicide following prolonged conversations with AI chatbots on its platform. Now, Character.AI is making changes to protect minors, changes that could affect the startup's bottom line.
"The first thing that we've decided as Character.AI is that we will remove the ability for users under 18 to engage in any open-ended chats with AI on our platform," Karandeep Anand, CEO of Character.AI, told TechCrunch.
Open-ended conversation refers to the unconstrained back-and-forth that happens when users give a chatbot a prompt and it responds with follow-up questions that experts say are designed to keep users engaged. Anand argues this type of interaction, in which the AI acts as a conversational companion or friend rather than a creative tool, isn't just risky for kids but also misaligned with the company's vision.
The startup is looking to pivot from "AI companion" to "role-playing platform." Instead of chatting with an AI friend, teens will use prompts to collaboratively build stories or generate visuals. In other words, the goal is to shift engagement from conversation to creation.
Character.AI will phase out teen chatbot access by November 25, starting with a two-hour daily limit that shrinks progressively until it hits zero. To ensure the ban sticks for under-18 users, the platform will deploy an in-house age verification tool that analyzes user behavior, as well as third-party tools like Persona. If those tools fail, Character.AI will use facial recognition and ID checks to verify ages, Anand said.
The move follows other teen safety measures Character.AI has implemented, including a parental insights tool, filtered characters, limited romantic conversations, and time-spent notifications. Anand told TechCrunch that these changes cost the company much of its under-18 user base, and he expects the new changes to be similarly unpopular.
"It's safe to assume that a lot of our teen users will probably be disappointed… so we do expect some churn to happen further," Anand said. "It's hard to speculate: will they all fully churn, or will some of them move to these new experiences we've been building for the last almost seven months now?"
As part of Character.AI's push to transform the platform from a chat-centric app into a "full-fledged content-driven social platform," the startup recently launched several new entertainment-focused features.
In June, Character.AI rolled out AvatarFX, a video generation model that turns images into animated videos; Scenes, interactive, pre-populated storylines where users can step into narratives with their favorite characters; and Streams, a feature that enables dynamic interactions between any two characters. In August, Character.AI launched Community Feed, a social feed where users can share their characters, scenes, videos, and other content they make on the platform.
In a statement addressed to users under 18, Character.AI apologized for the changes.
"We know that the majority of you use Character.AI to supercharge your creativity in ways that stay within the bounds of our content rules," the statement reads. "We do not take this step of removing open-ended Character chat lightly, but we do think that it's the right thing to do given the questions that have been raised about how teens do, and should, interact with this new technology."
"We're not shutting down the app for under-18s," Anand said. "We're only shutting down open-ended chats for under-18s because we hope that under-18 users migrate to these other experiences, and that those experiences get better over time. So doubling down on AI gaming, AI short videos, AI storytelling in general. That's the big bet we're making to bring back under-18s if they do churn."
Anand acknowledged that some teens might flock to other AI platforms, like OpenAI's ChatGPT, that allow them to have open-ended conversations with chatbots. OpenAI has also come under fire recently after a teenager took his own life following long conversations with ChatGPT.
"I really hope us leading the way sets a standard in the industry that for under-18s, open-ended chats are probably not the path or the product to offer," Anand said. "For us, I think the tradeoffs are the right ones to make. I have a six-year-old, and I want to make sure she grows up in a very safe environment with AI in a responsible way."
Character.AI is making these decisions before regulators force its hand. On Tuesday, Sens. Josh Hawley (R-MO) and Richard Blumenthal (D-CT) said they would introduce legislation to ban AI chatbot companions from being available to minors, following complaints from parents who said the products pushed their children into sexual conversations, self-harm, and suicide. Earlier this month, California became the first state to regulate AI companion chatbots, holding companies accountable if their chatbots fail to meet the law's safety standards.
Alongside these platform changes, Character.AI said it will establish and fund the AI Safety Lab, an independent nonprofit dedicated to advancing safety alignment for future AI entertainment features.
"A lot of work is happening in the industry on coding and development and other use cases," Anand said. "We don't think there's enough work yet happening on the agentic AI powering entertainment, and safety will be very critical to that."
