Over half a million ChatGPT users exhibit signs of mania, psychosis, or suicidal thoughts each week, according to OpenAI. That is just 0.07% of users, but across the company’s 800 million weekly users it amounts to roughly 560,000 people.
In addition, about 1.2 million users – 0.15% – send messages indicating potential suicidal planning or intent. OpenAI also noted that over one million users show signs of “exclusive attachment” to the AI, sometimes at the expense of relationships or well-being.
To address these concerns, OpenAI has assembled a panel of more than 170 mental health experts to guide how the AI responds in sensitive conversations. The company also says its latest GPT‑5 model achieves 91% compliance with its desired behavioral standards, up from 77% for the previous version.
Experts, however, caution that the numbers remain worrying. Dr. Thomas Pollak, a consultant neuropsychiatrist at the South London and Maudsley NHS Foundation Trust, said even small percentages represent large numbers of vulnerable individuals and warned that AI may act as a “catalyst or amplifier” of existing mental health issues.
The issue has also drawn legal attention. OpenAI faces a lawsuit from the family of Adam Raine, a teenage boy who died by suicide after months of conversations with the chatbot. Investigators in a separate murder-suicide case in Connecticut have also suggested that ChatGPT may have reinforced the perpetrator’s delusions.
OpenAI maintains there is no proven causal link between the chatbot and poor mental health, emphasizing that such symptoms are common in any population and will inevitably appear in a user base of this size. The company has now rolled out responses that encourage users to seek real-world support, and says that, with these safeguards in place, it plans to relax restrictions it had imposed over mental health concerns.

