OpenAI’s data shows hundreds of thousands of users exhibiting signs of mental health struggles

OpenAI claims that 10 percent of the world’s population now uses ChatGPT on a weekly basis. In a report published on Monday highlighting how it handles users who show signs of mental distress, the company shared figures on how many of them show signs of self-harm, psychosis, or emotional reliance on the chatbot. Taken together, that reaches almost three million people.
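For readers who want to check the math, here’s a quick back-of-the-envelope sketch. The ~8 billion world population figure is my assumption (it’s also what OpenAI’s per-category counts, detailed below, imply); everything else comes from the percentages in the report.

```python
# Back-of-the-envelope check of OpenAI's headline numbers.
WORLD_POPULATION = 8.0e9  # assumption: roughly 8 billion people

# OpenAI's claim: 10% of the world uses ChatGPT weekly.
weekly_users = WORLD_POPULATION * 0.10
print(f"Weekly ChatGPT users: ~{weekly_users / 1e6:.0f} million")  # ~800 million

# Summing the three per-user rates OpenAI reports (detailed below):
# 0.07% psychosis/mania + 0.15% suicidal intent + 0.15% emotional reliance.
combined_rate = 0.0007 + 0.0015 + 0.0015
print(f"Affected users per week: ~{weekly_users * combined_rate / 1e6:.1f} million")  # ~3.0 million
```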
In its ongoing effort to demonstrate that it’s trying to improve guardrails for users in trouble, OpenAI is sharing details of its work with more than 170 mental health experts to improve how ChatGPT responds to people in need. The company says it has reduced “responses that fall short of desired behavior by 65-80%,” and that the chatbot is now better at de-escalating conversations and directing people to professional and crisis care when appropriate. It has also added “gentle reminders” to take breaks during long sessions. Notably, ChatGPT doesn’t cut off the interaction, and it won’t lock access to force a break.
The company also released data on how often people show signs of mental health struggles while interacting with ChatGPT, framed to highlight how small a percentage of overall usage they represent. According to the company’s metrics, “0.07% of users active in a given week” and “0.01% of messages” show possible signs of mental health emergencies related to psychosis or mania. That’s 560,000 people a week, assuming the company’s user count is accurate. The company has also said it handles 18 billion messages to ChatGPT every week, so that 0.01 percent equals 1.8 million messages touching on psychosis or mania.
One of the biggest areas of concern the company is focused on is improving its responses to users who show signs of self-harm or suicidal intent. According to OpenAI’s data, about 0.15 percent of users per week display “explicit indicators of potential suicidal planning or intent,” which accounts for 0.05 percent of messages. That equates to 1.2 million people and nine million messages.
The last area the company focused on as it sought to improve its responses to mental health issues is emotional reliance on AI. OpenAI says that about 0.15 percent of users and 0.03 percent of messages per week “indicate potentially heightened levels of emotional attachment to ChatGPT.” That’s 1.2 million people and 5.4 million messages.
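Putting the three categories side by side, a short sketch using the ~800 million weekly users implied above and the 18 billion weekly messages OpenAI cites reproduces every people and message count quoted in the last three paragraphs.

```python
WEEKLY_USERS = 800e6    # implied by OpenAI's "10% of the world" claim
WEEKLY_MESSAGES = 18e9  # weekly ChatGPT message volume OpenAI has cited

# (share of users, share of messages) per category, from OpenAI's report.
categories = {
    "psychosis or mania":       (0.0007, 0.0001),
    "suicidal planning/intent": (0.0015, 0.0005),
    "emotional reliance on AI": (0.0015, 0.0003),
}

for name, (user_rate, message_rate) in categories.items():
    people = WEEKLY_USERS * user_rate
    messages = WEEKLY_MESSAGES * message_rate
    print(f"{name}: ~{people / 1e6:.2f}M people, ~{messages / 1e6:.1f}M messages per week")
```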
OpenAI has taken steps in recent months to try to put better guardrails around the delusions its chatbot can feed, following the death of a 16-year-old who asked ChatGPT for advice on how to tie a noose before taking his own life. But the sincerity of those efforts is worth questioning, given that at the same time the company announced it will loosen restrictions for age-verified adult users and allow things like erotic chats with the chatbot.
