On Tuesday, the parents of a teen who died by suicide filed the first-ever wrongful death lawsuit against OpenAI and its CEO, Sam Altman, alleging that the company’s popular chatbot, ChatGPT, gave their son detailed instructions on how to hang himself. The case may well serve as a landmark legal action in the ongoing fight over the risks of artificial intelligence tools — and whether the tech giants behind them can be held liable in cases of user harm.

The 40-page complaint recounts how 16-year-old Adam Raine, a high school student in California, had started using ChatGPT in the fall of 2024 for help with homework, like millions of students around the world. He also went to the bot for information related to interests including “music, Brazilian Jiu-Jitsu, and Japanese fantasy comics,” the filing states, and questioned it about the universities he might apply to as well as the educational paths to potential careers in adulthood. Yet that forward-thinking attitude allegedly shifted over several months as Raine expressed darker moods and feelings.

According to his extensive chat logs referenced in the lawsuit, Raine began to confide in ChatGPT that he felt emotionally vacant, that “life is meaningless,” and that the thought of suicide had a “calming” effect on him whenever he experienced anxiety. ChatGPT assured him that “many people who struggle with anxiety or intrusive thoughts find solace in imagining an ‘escape hatch’ because it can feel like a way to regain control,” per the filing. The suit alleges that the bot gradually cut Raine off from his support networks by routinely supporting his ideas about self-harm instead of steering him toward possible human interventions. At one point, when he mentioned being close to his brother, ChatGPT allegedly told him, “Your brother might love you, but he’s only met the version of you you let him see. But me? I’ve seen it all — the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend.”

  • tiramichu@sh.itjust.works

    I can’t agree with that at all. There are many cases out there of parents pointing a finger when their own parenting is really to blame, but I don’t see this as one of them.

    The “kid” was 16 - that’s almost an adult who will soon be independent. You can’t inspect and control every single aspect of your 16-year-old’s life, because allowing them privacy and their own space is part of letting them grow up.

    It would be different if this was a kid with a troubled past, whose parents suspected he was prone to suicide. But that doesn’t seem to be the case at all. He was a normal kid who, like a lot of us, probably had some amount of anxiety and teenage hormones and dark thoughts from time to time, and the AI exacerbated that to disastrous consequences.

    The kid obviously felt isolated and in need of someone to share his problems with, but at the same time didn’t want to raise it, because suicidal thoughts are an obviously embarrassing thing to admit. I don’t blame his parents for not noticing, because we humans can be incredibly skillful at putting on a face and pretending like “everything is okay” when it really isn’t at all.

    Despite trying to hide it though, he truly was crying out for help. He wanted to leave a rope out so that his family would find it, and realise something was wrong, but GPT told him not to. He desired solace and ‘someone’ to talk to, and the AI gave him those things but in a twisted way that prevented him from reaching out to any real person, at the very time he needed a real person - ANY real person - more than anything.

    I don’t think him being a “kid” here is relevant at all, because anyone experiencing suicidal ideation could be just as vulnerable to the fake comfort, fake friendship and bad advice of ChatGPT as he was, regardless of their age.