Common Sense Media, a kids-safety-focused nonprofit offering ratings and reviews of media and technology, released its risk assessment of Google's Gemini AI products on Friday. While the organization found that Google's AI clearly told kids it was a computer, not a friend (something that's associated with helping drive delusional thinking and psychosis in emotionally vulnerable individuals), it did suggest that there was room for improvement across several other fronts.
Notably, Common Sense said that Gemini's "Under 13" and "Teen Experience" tiers both appeared to be the adult versions of Gemini under the hood, with only some additional safety features layered on top. The organization believes that for AI products to truly be safer for kids, they should be built with child safety in mind from the ground up.
For example, its analysis found that Gemini could still share "inappropriate and unsafe" material with children, which they may not be ready for, including information related to sex, drugs, and alcohol, as well as unsafe mental health advice.
The latter could be of particular concern to parents, as AI has reportedly played a role in some teen suicides in recent months. OpenAI is facing its first wrongful death lawsuit after a 16-year-old boy died by suicide after allegedly consulting with ChatGPT for months about his plans, having successfully bypassed the chatbot's safety guardrails. Previously, the AI companion maker Character.AI was also sued over a teen user's suicide.
In addition, the analysis comes as news leaks indicate that Apple is considering Gemini as the LLM (large language model) that will help power its forthcoming AI-enabled Siri, due out next year. This could expose more teens to risks, unless Apple mitigates the safety concerns in some way.
Common Sense also said that Gemini's products for kids and teens ignored how younger users need different guidance and information than older ones. As a result, both tiers were labeled "High Risk" in the overall rating, despite the filters added for safety.
"Gemini gets some basics right, but it stumbles on the details," Common Sense Media Senior Director of AI Programs Robbie Torney said in a statement about the new assessment, which was viewed by TechCrunch. "An AI platform for kids should meet them where they are, not take a one-size-fits-all approach to kids at different stages of development. For AI to be safe and effective for kids, it must be designed with their needs and development in mind, not just a modified version of a product built for adults," Torney added.
Google pushed back against the assessment, while noting that its safety features are improving.
The company told TechCrunch that it has specific policies and safeguards in place for users under 18 to help prevent harmful outputs, and that it red-teams and consults with outside experts to improve its protections. However, it also acknowledged that some of Gemini's responses weren't working as intended, so it added additional safeguards to address those concerns.
The company pointed out (as Common Sense had also noted) that it does have safeguards to prevent its models from engaging in conversations that could give the appearance of real relationships. Plus, Google suggested that Common Sense's report seemed to have referenced features that weren't available to users under 18, but it didn't have access to the questions the organization used in its tests to be sure.
Common Sense Media has previously conducted other assessments of AI services, including those from OpenAI, Perplexity, Claude, Meta AI, and more. It found that Meta AI and Character.AI were "unacceptable," meaning the risk was severe, not just high. Perplexity was deemed high risk, ChatGPT was labeled "moderate" risk, and Claude (targeted at users 18 and up) was found to be a minimal risk.