OpenAI released new data on Monday illustrating how many of ChatGPT’s users are struggling with mental health issues, and talking to the AI chatbot about them. The company says that 0.15% of ChatGPT’s active users in a given week have “conversations that include explicit indicators of potential suicidal planning or intent.” Given that ChatGPT has more than 800 million weekly active users, that translates to more than a million people every week.
The company says a similar percentage of users show “heightened levels of emotional attachment to ChatGPT,” and that hundreds of thousands of people show signs of psychosis or mania in their weekly conversations with the AI chatbot.
OpenAI says these kinds of conversations in ChatGPT are “extremely rare,” and thus difficult to measure. That said, OpenAI estimates these issues affect hundreds of thousands of people every week.
OpenAI shared the information as part of a broader announcement about its recent efforts to improve how models respond to users with mental health issues. The company claims its latest work on ChatGPT involved consulting with more than 170 mental health experts. OpenAI says these clinicians observed that the latest version of ChatGPT “responds more appropriately and consistently than earlier versions.”
In recent months, several stories have shed light on how AI chatbots can adversely affect users struggling with mental health challenges. Researchers have previously found that AI chatbots can lead some users down delusional rabbit holes, largely by reinforcing dangerous beliefs through sycophantic behavior.
Addressing mental health concerns in ChatGPT is quickly becoming an existential issue for OpenAI. The company is currently being sued by the parents of a 16-year-old boy who confided his suicidal thoughts to ChatGPT in the weeks leading up to his suicide. State attorneys general from California and Delaware, who could block the company’s planned restructuring, have also warned OpenAI that it needs to protect young people who use its products.
Earlier this month, OpenAI CEO Sam Altman claimed in a post on X that the company has “been able to mitigate the serious mental health issues” in ChatGPT, though he didn’t provide specifics. The data shared on Monday appears to be evidence for that claim, though it raises broader questions about how widespread the problem is. Nevertheless, Altman said OpenAI would be relaxing some restrictions, even allowing adult users to start having erotic conversations with the AI chatbot.
In the Monday announcement, OpenAI claims the recently updated version of GPT-5 responds with “desirable responses” to mental health issues roughly 65% more than the previous version. On an evaluation testing AI responses around suicidal conversations, OpenAI says its new GPT-5 model is 91% compliant with the company’s desired behaviors, compared to 77% for the previous GPT‑5 model.
The company also says its latest version of GPT-5 holds up better to OpenAI’s safeguards in long conversations. OpenAI has previously flagged that its safeguards were less effective in long conversations.
On top of these efforts, OpenAI says it’s adding new evaluations to measure some of the most serious mental health challenges facing ChatGPT users. The company says its baseline safety testing for AI models will now include benchmarks for emotional reliance and non-suicidal mental health emergencies.
OpenAI has also recently rolled out more controls for parents of children who use ChatGPT. The company says it’s building an age prediction system to automatically detect children using ChatGPT and impose a stricter set of safeguards.
Still, it’s unclear how persistent the mental health challenges around ChatGPT will be. While GPT-5 seems to be an improvement over earlier AI models in terms of safety, there still appears to be a slice of ChatGPT’s responses that OpenAI deems “undesirable.” OpenAI also still makes its older and less-safe AI models, including GPT-4o, available to millions of its paying subscribers.