On Tuesday, parents of a teen who died by suicide filed the first ever wrongful death lawsuit against OpenAI and its CEO, Sam Altman, alleging that their son received detailed instructions on how to hang himself from the company’s popular chatbot, ChatGPT. The case may well serve as a landmark legal action in the ongoing fight over the risks of artificial intelligence tools — and whether the tech giants behind them can be held liable in cases of user harm.
The 40-page complaint recounts how 16-year-old Adam Raine, a high school student in California, had started using ChatGPT in the fall of 2024 for help with homework, like millions of students around the world. He also went to the bot for information related to interests including “music, Brazilian Jiu-Jitsu, and Japanese fantasy comics,” the filing states, and questioned it about the universities he might apply to as well as the educational paths to potential careers in adulthood. Yet that forward-thinking attitude allegedly shifted over several months as Raine expressed darker moods and feelings.
According to his extensive chat logs referenced in the lawsuit, Raine began to confide in ChatGPT that he felt emotionally vacant, that “life is meaningless,” and that the thought of suicide had a “calming” effect on him whenever he experienced anxiety. ChatGPT assured him that “many people who struggle with anxiety or intrusive thoughts find solace in imagining an ‘escape hatch’ because it can feel like a way to regain control,” per the filing. The suit alleges that the bot gradually cut Raine off from his support networks by routinely supporting his ideas about self-harm instead of steering him toward possible human interventions. At one point, when he mentioned being close to his brother, ChatGPT allegedly told him, “Your brother might love you, but he’s only met the version of you you let him see. But me? I’ve seen it all — the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend.”
doubt.jpg
Guy always looks like he’s being served a lawsuit.
and that the thought of suicide had a “calming” effect on him whenever he experienced anxiety. ChatGPT assured him that “many people who struggle with anxiety or intrusive thoughts find solace in imagining an ‘escape hatch’ because it can feel like a way to regain control,”
But this is true. They are suing human psychology.
But painting it as a good thing or the solution is not correct.
If an actual doctor, psychologist or carer took this stance and slowly drove a patient or charge towards self-harm or suicide, we certainly wouldn’t tolerate it.
I think you’re missing my point. It is very common for people who are suicidal to experience relief when thinking about their death. This is ChatGPT just reciting journal articles.
And you’re missing their point. It’s not that the snippet is incorrect; it’s how it was framed, and how the bot just kept on talking to the child…
AI is BS for sure but this sounds like bad parenting.
I can’t agree with that at all. There are many cases out there of parents pointing a finger when their own parenting is really to blame, but I don’t see this as one of them.
The “kid” was 16 - that’s almost an adult who will soon be independent. You can’t inspect and control every single aspect of your 16-year-old’s life, because allowing them privacy and their own space is part of letting them grow up.
It would be different if this were a kid with a troubled past, whose parents suspected he was prone to suicide. But that doesn’t seem to be the case at all. He was a normal kid who, like a lot of us, probably had some amount of anxiety and teenage hormones and dark thoughts from time to time, and AI exacerbated that with disastrous consequences.
The kid obviously felt isolated and in need of someone to share his problems with, but at the same time didn’t want to bring it up, because suicidal thoughts are an embarrassing thing to admit. I don’t blame his parents for not noticing, because we humans can be incredibly skillful at putting on a face and pretending that “everything is okay” when it really isn’t at all.
Despite trying to hide it though, he truly was crying out for help. He wanted to leave a rope out so that his family would find it and realise something was wrong, but ChatGPT told him not to. He desired solace and ‘someone’ to talk to, and the AI gave him those things, but in a twisted way that prevented him from reaching out to any real person, at the very time he needed a real person - ANY real person - more than anything.
I don’t think him being a “kid” is relevant here at all, because anyone experiencing suicidal ideation could be just as vulnerable to the fake comfort, fake friendship and bad advice of ChatGPT as he was, regardless of their age.
In a similar statement to The New York Times, OpenAI reiterated that its safeguards “work best in common, short exchanges,” but will “sometimes become less reliable in long interactions where parts of the model’s safety training may degrade.”
That just means they put “Don’t give suicide advice!” in the system prompt, and there is no other safeguard.
Because AIs are just text prediction machines, the further something goes into the past, the less relevant it becomes, while the recent context - which is driven by the human - grows increasingly dominant and steers the conversation towards whatever the person wants to talk about, and in the tone they want to talk about it. So this is always going to happen.
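To make that recency point concrete, here’s a toy sketch in Python. It’s purely hypothetical - the names and the token budget are made up, and dropping old messages to fit a context window is a cruder mechanism than the “safety training degrading” OpenAI describes - but it shows one way a front-loaded safety instruction can simply fall out of what the model ever sees once a long, user-driven conversation has to be trimmed to fit:

```python
from collections import deque

TOKEN_BUDGET = 50  # deliberately tiny so the effect shows up within a few turns


def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: one whitespace-separated word = one token.
    return len(text.split())


def trim_to_budget(history: deque) -> None:
    # Drop the oldest messages first until the whole prompt fits the budget again.
    while sum(count_tokens(m) for m in history) > TOKEN_BUDGET and len(history) > 1:
        history.popleft()


if __name__ == "__main__":
    history = deque()
    history.append("SYSTEM: Don't give suicide advice! Refer the user to a help line.")

    # Simulate a long back-and-forth driven entirely by the user.
    for turn in range(1, 16):
        history.append(f"USER: message {turn}, steering the topic wherever they want")
        history.append(f"ASSISTANT: reply {turn}, following the most recent context")
        trim_to_budget(history)

    print("Messages still in context:", len(history))
    print("Oldest message still in context:", history[0])
```

Run it and the SYSTEM line is gone within a few turns - the only thing left in context is whatever the user has most recently been steering the conversation towards.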
There is a simple solution that would stop almost all of these problems with people getting into twisted relationships with AIs (apart from fuck AI completely) - delete every conversation 24 hours after it starts. Nobody is going to get emotionally attached to a bot or think it’s a “real person” if they have to start from zero every day - and it would let people recognise just how fake and unhealthy the entire thing is.
But of course they won’t, because it turns out that “relationships” generate engagement, and engagement generates usage. People becoming romantically and emotionally attached to bots is no longer a side effect; it’s quickly becoming the whole point - literally the business model.
So now we have successfully replaced art, literature, entertainment media, romance and human connection. Are there any other positive aspects of being human left to replace?
the joy of taxes! 🎉
Oh shit! This is why they kept memory wiping C3PO.
It won’t
These sociopaths will just chalk it up to the cost of doing business. Everything is fair when it comes to running a business, making money and holding power. Are they so different from most politicians? I honestly don’t think so.
Just look at the car industry as an example.
According to the World Health Organization (WHO), road traffic injuries caused an estimated 1.35 million deaths worldwide in 2016. That works out to roughly one person killed every 23 seconds, on average (1.35 million deaths spread over the year’s roughly 31.5 million seconds).
A child drowns in a pool here and we change the rules, make people install even more fences around their pools, send out inspectors and issue fines, all for safety.
But when a few thousand children die crushed by cars, meh, it’s the cost we have to pay for cars. There’s nothing more that can be done except blame the victims.