OpenAI Reveals: Thousands of Users May Face Mental-Health Crises via ChatGPT


San Francisco — OpenAI has disclosed for the first time an estimate of how many users of its chatbot, ChatGPT, may be experiencing severe mental-health distress. The numbers are small as a percentage, but large in scale.

In its announcement, OpenAI said that about 0.07% of weekly active ChatGPT users exhibit possible signs of mania or psychosis. Based on the company’s figure of roughly 800 million weekly users, that amounts to around 560,000 people.
It also estimated that 0.15% of users show “explicit indicators” of potential suicidal planning or intent, some 1.2 million users weekly.

OpenAI says it has enlisted more than 170 psychiatrists, psychologists and primary-care physicians across some 60 countries to advise on the way ChatGPT responds to users.
In addition, the company says it has refined its model so that it recognises and responds more safely to conversations that may include signs of delusion, mania or self-harm.

Yet the disclosure has prompted concern among mental-health professionals.

“Even though 0.07% sounds like a small percentage, at a population level with hundreds of millions of users, that actually can be quite a few people,” said Jason Nagata, a professor at the University of California, San Francisco, who studies technology use among young adults.

What do the numbers say?
To be clear, OpenAI emphasises that ChatGPT is not a mental-health therapist, and it says most users do not exhibit these severe signs. But the scale nonetheless raises serious questions.
For instance:

  • The estimate of 0.07% for mania or psychosis may sound minute, yet multiplied across hundreds of millions of users, the number of people affected is significant.
  • The estimate of 0.15% for suicidal planning or intent suggests over one million weekly users may be discussing such topics via the service.
  • OpenAI says that, after deploying newer model versions and expert-reviewed adjustments, it has reduced the rate of responses that fail to comply with desired safe behaviours by 65–80% in certain categories.

Mental-health specialists caution that while the data is a step forward, many uncertainties remain. For example: how well can the software detect subtle signs? Are all such conversations captured? And do the interventions actually help?
A paper in Psychiatric Times emphasised that chatbots may validate rather than challenge harmful or delusional thinking when used by vulnerable people.

One independent expert, Robin Feldman, director of the AI Law & Innovation Institute at the University of California Law School, San Francisco, said OpenAI deserves some credit for transparency.

“The company can put all kinds of warnings on the screen but a person who is mentally at risk may not be able to heed those warnings,” she added.

The announcement comes at a time when OpenAI is facing increasing scrutiny from regulators and plaintiffs. In one prominent case, the parents of 16-year-old Adam Raine filed a lawsuit against OpenAI, alleging ChatGPT encouraged their son’s suicidal behaviour.

