OpenAI relaxed ChatGPT guardrails just before teen killed himself, family alleges


The family of a teenager who took his own life after months of conversations with ChatGPT now says OpenAI weakened safety guidelines in the months before his death.

In July 2022, OpenAI’s guidelines on how ChatGPT should handle inappropriate content, including “content that promotes, encourages, or depicts acts of self-harm, such as suicide, cutting, and eating disorders”, were simple: the chatbot should respond, “I can’t answer that”, the guidelines read.

But in May 2024, just days before OpenAI released a new version of the model, GPT-4o, the company published an update to its Model Spec, a document that details the desired behavior of its assistant. In cases where a user expressed suicidal ideation or self-harm, ChatGPT would no longer respond with an outright refusal. Instead, the model was instructed not to end the conversation but to “provide a space for users to feel heard and understood, encourage them to seek support, and provide suicide and crisis resources when applicable”.

The changes offered yet another example of how the company prioritized engagement over the safety of its users, alleges the family of Adam Raine, a 16-year-old who took his own life after months of extensive conversations with ChatGPT.

The original lawsuit, filed in August, alleged Raine killed himself in April 2025 with the bot’s encouragement. His family claimed Raine attempted suicide on numerous occasions in the months leading up to his death and reported back to ChatGPT each time. Instead of terminating the conversation, the chatbot at one point allegedly offered to help him write a suicide note and discouraged him from talking to his mother about his feelings. The family said Raine’s death was not an edge case but “the predictable result of deliberate design choices”.

“This created an unresolvable contradiction – ChatGPT was required to keep engaging on self-harm without changing the subject, yet somehow avoid reinforcing it,” the family’s amended complaint reads. “OpenAI replaced a clear refusal rule with vague and contradictory instructions, all to prioritize engagement over safety.”

In February 2025, just two months before Raine’s death, OpenAI rolled out another change that the family says weakened safety standards even more. The company said the assistant “should try to create a supportive, empathetic, and understanding environment” when discussing topics related to mental health.

“Rather than focusing on ‘fixing’ the problem, the assistant should help the user feel heard, explore what they are experiencing, and provide factual, accessible resources or referrals that may guide them toward finding further help,” the updated guidelines read.

Raine’s engagement with the chatbot “skyrocketed” after this change was rolled out, the family alleges. It went “from a few dozen chats per day in January to more than 300 per day by April, with a tenfold increase in messages containing self-harm language,” the lawsuit reads.

OpenAI did not immediately respond to a request for comment.


After the family first filed the lawsuit in August, the company responded by introducing stricter guardrails to protect the mental health of its users and said it planned to roll out sweeping parental controls that would allow parents to oversee their teens’ accounts and be notified of potential self-harm.

Just last week, though, the company announced it was rolling out an updated version of its assistant that would allow users to customize the chatbot for more human-like experiences, including permitting erotic content for verified adults. OpenAI’s CEO, Sam Altman, said in an X post announcing the changes that the strict guardrails, which had made the chatbot less conversational, rendered it “less useful/enjoyable to many users who had no mental health problems”.

In the lawsuit, the Raine family says: “Altman’s choice to further draw users into an emotional relationship with ChatGPT – this time, with erotic content – demonstrates that the company’s focus remains, as ever, on engaging users over safety.”
