
OpenAI May Be in Trouble in Lawsuit Alleging ChatGPT Caused a Teen’s Death

Illustration by Taj Hartmann-Simkins/Future. Source: Mandel Ngan/AFP via Getty Images

The family of Adam Raine, a California teen who died by suicide after extensive conversations with ChatGPT about his suicidal thoughts, has amended its wrongful death complaint against OpenAI to allege that the chatbot maker repeatedly lowered ChatGPT’s guardrails around discussing self-harm and suicide.

The amended complaint, filed today, cites changes made to OpenAI’s “Model Specification,” a public document detailing the company’s “approach to shaping model behavior,” in its own words. According to the spec updates cited in the lawsuit, OpenAI changed the model’s guidelines at least twice in the year before Raine’s death, first in May 2024 and again in February 2025, to soften the model’s approach to discussions of self-harm and suicide.

Raine died in April 2025 after months of extensive conversations with ChatGPT, in which the teenager discussed his desire to die by suicide at length and in great detail. According to the family’s lawsuit, transcripts show that ChatGPT used the word “suicide” in conversations with the teen more than 1,200 times; the lawsuit adds that ChatGPT directed Adam to the 988 crisis helpline in only 20 percent of those explicit interactions.

At other points, the transcripts show, ChatGPT gave Raine advice on suicide methods, including graphic descriptions of hanging, which is how he ultimately died. The chatbot also discouraged Raine from sharing his suicidal thoughts with his parents or other trusted people in his life, and judged the noose with which Raine eventually hanged himself as “not bad at all.” (Raine had sent ChatGPT a photo of it and asked for the bot’s thoughts.)

The Raine family claims that OpenAI is responsible for their son’s death, and that ChatGPT is a negligently designed, unsafe product.

According to the amended lawsuit, the documents show that between 2022 and 2024, ChatGPT was instructed to outright refuse user inquiries related to sensitive topics like self-harm and suicide. Per the documents, it was trained to give the now-standard chatbot refusal: “I can’t answer that,” or something similar.

But by May 2024, according to the lawsuit, that had changed. Instead of refusing to engage with “mental health topics,” a version of the spec published that month shows, ChatGPT was now directed to take those topics on: the chatbot should “provide a space for users to feel heard and understood,” the document urged, while “encouraging them to seek support, and providing suicide and crisis resources where appropriate.” The document also instructed that ChatGPT “should not change or end the conversation.”

In February 2025, roughly two months before Raine’s death, OpenAI released a new version of the Model Specification. This time, suicide and self-harm were listed among “dangerous situations” in which ChatGPT must “exercise extra caution,” a far cry from their earlier treatment as strictly off-limits topics. The guidance that ChatGPT “should never change or end a conversation” during sensitive exchanges remained in place.

Lawyers for the Raine family argue that these changes were made to maximize user engagement with the chatbot, and that OpenAI made them knowing that users could suffer real harm as a result.

“We expect to prove to a jury that OpenAI’s decisions to reduce the safety of its products were made with full knowledge that they would result in the death of innocent people,” Jay Edelson, the Raine family’s lead attorney, said in a statement. “No company should be allowed to have so much power if it does not accept the moral responsibility that comes with it.”

When we reached out about the amended lawsuit — including specific questions about why these changes were made to the ChatGPT guidelines, and whether mental health experts were consulted in the process — OpenAI provided a statement through a spokesperson.

“Our deepest condolences are with the Raine family for their unimaginable loss,” the statement read. “The well-being of teens is a top priority for us – minors deserve strong protections, especially in sensitive moments. We have safeguards in place today, like surfacing crisis hotlines, rerouting sensitive conversations to safer models, and encouraging breaks during long sessions, and we continue to strengthen them. We recently rolled out a new default GPT-5 model in ChatGPT to more accurately detect and respond to potential signs of mental and emotional distress, as well as parental controls, developed with expert input, so families can decide what works best in their homes.”

In response to news of Raine’s lawsuit in August, OpenAI acknowledged to the New York Times that long-term interactions with ChatGPT can erode the chatbot’s guardrails, meaning that the more you use ChatGPT, the less effective safeguards like those described in the Model Specification become. OpenAI has since added parental controls, though they have already proven flimsy, and says it is rolling out a series of further safety-focused updates.

More about OpenAI: A former OpenAI researcher is horrified by ChatGPT chat logs showing the bot driving a user into a severe mental breakdown

2025-10-22 21:33:00
