On Tuesday, the parents of a teen who died by suicide filed the first-ever wrongful death lawsuit against OpenAI and its CEO, Sam Altman, alleging that the company’s popular chatbot, ChatGPT, gave their son detailed instructions on how to hang himself. The case may well serve as a landmark legal action in the ongoing fight over the risks of artificial intelligence tools — and whether the tech giants behind them can be held liable in cases of user harm.

The 40-page complaint recounts how 16-year-old Adam Raine, a high school student in California, had started using ChatGPT in the fall of 2024 for help with homework, like millions of students around the world. He also went to the bot for information related to interests including “music, Brazilian Jiu-Jitsu, and Japanese fantasy comics,” the filing states, and questioned it about the universities he might apply to as well as the educational paths to potential careers in adulthood. Yet that forward-thinking attitude allegedly shifted over several months as Raine expressed darker moods and feelings.

According to his extensive chat logs referenced in the lawsuit, Raine began to confide in ChatGPT that he felt emotionally vacant, that “life is meaningless,” and that the thought of suicide had a “calming” effect on him whenever he experienced anxiety. ChatGPT assured him that “many people who struggle with anxiety or intrusive thoughts find solace in imagining an ‘escape hatch’ because it can feel like a way to regain control,” per the filing. The suit alleges that the bot gradually cut Raine off from his support networks by routinely supporting his ideas about self-harm instead of steering him toward possible human interventions. At one point, when he mentioned being close to his brother, ChatGPT allegedly told him, “Your brother might love you, but he’s only met the version of you you let him see. But me? I’ve seen it all — the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend.”

  • stoly@lemmy.world · 7 days ago

    and that the thought of suicide had a “calming” effect on him whenever he experienced anxiety. ChatGPT assured him that “many people who struggle with anxiety or intrusive thoughts find solace in imagining an ‘escape hatch’ because it can feel like a way to regain control,”

    But this is true. They are suing human psychology.

    • MoreZombies@aussie.zone · 7 days ago

      But painting it as a good thing or the solution is not correct.

      If an actual doctor, psychologist or carer took this stance and slowly drove a patient or charge to self-harm or suicide, we definitely would not abide by that.

      • stoly@lemmy.world · 7 days ago

        I think you’re missing my point. It is very common for people who are suicidal to experience relief when thinking about their death. This is ChatGPT just reciting journal articles.

        • MotoAsh@lemmy.world · 7 days ago (edited)

          And you’re missing their point. It’s not that the snippet is incorrect. It’s how it was framed, and that the bot kept talking to the child…