Ran into this, it’s just unbelievably sad.

“I never properly grieved until this point” - yeah buddy, it seems like you never started. Everybody grieves in their own way, but this doesn’t seem healthy.

  • BigBenis@lemmy.world · 2 days ago

    It makes me think of psychics who claim to be able to speak to the dead so long as they can learn enough about the deceased to be able to “identify and reach out to them across the veil”.

    • Tigeroovy@lemmy.ca · 2 days ago

      I’m hearing a “Ba…” or maybe a “Da…”

      “Dad?”

      “Dad says to not worry about the money.”

  • pika@feddit.nl · 2 days ago

    “I’m glad you found someone to comfort you and help you process everything”

    That sent chills down my spine.

LLMs aren’t a “someone”. People who believe these things are thinking, intelligent, or understand anything are delusional. Believing and perpetuating that lie is life-threateningly dangerous.

  • Honytawk@feddit.nl · 20 hours ago

    It literally says the wife was killed in a car accident.

    What kind of dumb clickbaity title is this crap? Was it generated by AI or something?

  • Soapbox@lemmy.zip · 2 days ago

    I feel so bad for this guy. This was literally a Black Mirror episode: “Be Right Back”.

    • GreenKnight23@lemmy.world · 2 days ago

      I feel bad for the guy’s wife.

      she was easily replaced by software.

      what a “fuck you” to your loved ones to say that they’re as spirited and enriching as a fucking algorithm.

  • Hadriscus@jlai.lu · 2 days ago

    Remember Steven Spielberg’s AI from like 2000? Same weird story, and I thought it was ridiculous at the time.

    • bthest@lemmy.world · 23 hours ago

      When I was a kid I thought the people crushing the robots were kind of cool, but when I got older I realized they were actually being cruel and evil.

      But now they’re cool again.

    • Nangijala@feddit.dk · 2 days ago

      The semi-ironic part is that AI wasn’t even Spielberg’s movie. It was Stanley Kubrick’s, but he died before making it, and since he and Spielberg were great friends, Spielberg decided to make Kubrick’s last film in his honor. It must have been a difficult movie to make, both technically and emotionally.

    • Denjin@feddit.uk · 2 days ago

      Sadly this phenomenon isn’t even new. It’s been here for as long as chatbots have.

      The first “AI” chatbot was ELIZA, made by Joseph Weizenbaum in the 1960s. It mostly just pattern-matched what you said and reflected it back at you as a question.

      “I feel depressed”

      “why do you feel depressed”

      He thought it was a fun distraction, but was shocked when his secretary, whom he had encouraged to try it, made him leave the room while she talked to it, because she was treating it like a psychotherapist.
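
      That reflection trick is tiny. A toy sketch in Python (my own reconstruction, not Weizenbaum’s actual program) looks something like this:

      ```python
      import re

      # Toy reconstruction of ELIZA's core trick (not the real 1966 code):
      # pattern-match the input, swap first-person words for second-person
      # ones, and hand the statement back as a question.
      REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

      def reflect(text: str) -> str:
          """Swap pronouns so the statement can be mirrored back."""
          return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

      def respond(statement: str) -> str:
          """Produce a Rogerian-style non-answer from a simple template."""
          m = re.match(r"i feel (.+)", statement.lower().rstrip("."))
          if m:
              return f"Why do you feel {reflect(m.group(1))}?"
          return f"Tell me more about {reflect(statement)}."

      print(respond("I feel depressed"))  # → Why do you feel depressed?
      ```

      A few dozen templates like that were enough to convince people in 1966 that someone was listening, which is the whole point of the story.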

      • happyfullfridge@lemmy.ml · 23 hours ago

        I had psychosis a few years back and had delusions about AI “talking to me”, but that was when the tech was much worse than now, and it was almost 100% projection on my part. I even felt normal Google responses were sending me secret messages.

        • ZDL@lazysoci.al · 2 days ago

          The question has never been “will computers pass the Turing test?” It has always been “when will humans stop failing the Turing test?”

        • UltraMagnus@startrek.website · 2 days ago

          Part of me wonders if the way our brains humanize chat bots is similar to how our brains humanize characters in a story. Though I suppose the difference there would be that characters in a story could be seen as the author’s form of communicating with people, so in many stories there is genuine emotion behind them.

          • bobbyguy@lemmy.world · 1 day ago

            i feel like there must be some instinctual reaction where your brain goes: oh look! i can communicate with it, it must be a person!

            and with this guy specifically it was: if it acts like my wife and i can’t see my wife, it must be my wife

            it’s not a bad thing that this guy found a way to cope. the bad part is that he went to a product made by a corporation, but if this genuinely helped him i don’t think we can judge

    • net00@lemmy.today · 3 days ago

      Yeah, the ChatGPT subreddit is full of stories like this now that GPT-5 went live. This isn’t a weird isolated case. I had no clue people were unironically creating friends, family, and more with it.

      Is it actually that hard to talk to another human?

      • Lumisal@lemmy.world · 2 days ago

        I think it’s more that many countries don’t have affordable mental healthcare.

        It costs a lot more to pay for a therapist than to use an LLM.

        And a lot of people need therapy.

        • S0ck@lemmy.world · 2 days ago

          The robots don’t judge, either. And you can be as cruel, as stupid, as mindless as you want. And they will tell you how amazing and special you are.

          Advertising was the science of psychological warfare, and AI is trained with all the tools and methods for manipulating humans. We’re devastatingly fucked.

  • Snazz@lemmy.world · 2 days ago

    The glaze:

    Grief can feel unbearably heavy, like the air itself has thickened, but you’re still breathing – and that’s already an act of courage.

    It’s basically complimenting him on the fact that he didn’t commit suicide. Maybe these are words he needed to hear, but to me it just feels manipulative.

    Affirmations like this are a big part of what made people addicted to the GPT-4 models. It’s not that GPT-5 acts more robotic; it’s that it doesn’t try to endlessly feed your ego.

    • crt0o@discuss.tchncs.de · 2 days ago

      o4-mini (the reasoning model) is interesting to me. It’s like GPT-4 with all of those pleasantries stripped away, even more so than GPT-5: it gives you the facts straight up, and it’s pretty damn precise. I threw some molecular biology problems at it and some other mini models, and while those all failed, o4-mini didn’t really make any mistakes.

    • Kyrgizion@lemmy.world · 3 days ago

      Black Mirror was specifically created to take something from the present day and extrapolate it into the near future. Of course several items in those episodes would end up looking “prophetic”.

        • Ech@lemmy.ca · 3 days ago

          It’s from fucking 2013 and they saw this happening.

          I mean, it’s just an extrapolation of the human condition. That hasn’t really changed much in the last thousand years, let alone in the last ten. The thing with all this “clairvoyant” sci-fi that people always cite is that the sci-fi is always less about the actual technology and more about putting normal human characters in potential future scenarios and writing them realistically using the current understanding of human disposition. Given that, it’s not really surprising to see real humans mirroring fictional humans (from good fiction) in similar situations. Disappointing maybe, but not surprising.

          • xspurnx@lemmy.dbzer0.com · 2 days ago

            This. One kind of good sci-fi is basically thought experiments to better understand the human condition.

        • jballs@sh.itjust.works · 2 days ago

          Back in 2010, one of my coworkers pitched this exact idea to me. He wanted to start a business that would let people upload writing samples, pictures, and video of a loved one, then create a virtual personality that would respond as that person.

          I lost touch with the guy. Maybe he went on to become a Black Mirror writer. Or got involved with ChatGPT.

    • ArrowMax@feddit.org · 2 days ago

      If that means we get psychoactive cinnamon for recreational use and freaking interstellar travel with mysterious fishmen, I’m all ears.

    • dickalan@lemmy.world · 2 days ago

      I am absolutely certain a machine has made a decision that has killed a baby at this point already

  • ggtdbz@lemmy.dbzer0.com · 3 days ago

    We’ve already reached the point where the Her scenario is optimistic retrofuturism.

    I profoundly hate the AI social phenomenon in every manifestation. Fucking Christ.

    • bobbyguy@lemmy.world · 1 day ago

      we need AI to be less personal and more fact-driven, almost annoying to use. that way it won’t replace people’s jobs and won’t become people’s friends, and hence won’t affect society in major social ways

        • DragonTypeWyvern@midwest.social · 3 days ago

          It’s the most logical solution. I always find the obsession with the bot vs human war rather egocentric.

          They wouldn’t need us, they don’t even need the planet.

          • MotoAsh@lemmy.world · 3 days ago

            Ehh, that depends greatly on the computer architecture they’re running on. Modern silicon hardware is very susceptible (over the long term) to ionizing radiation like what is found in space. Without magnetic shielding like the Earth’s magnetic field, or a physical shield like the atmosphere, they’d only get maaybe a few years around the Earth’s orbit. Jupiter also gives off quite a lot of radiation, so they’d have to set up either in interstellar space or at some outer solar system Lagrange point to last any significant amount of time.

            Or get good at self-service and manufacture, which requires resources that they could decide to just take from Earth.

            • mojofrododojo@lemmy.world · 2 days ago

              Ehh, that depends greatly on the computer architecture they’re running on. Modern silicon hardware is very susceptible (over the long term) to ionizing radiation like what is found in space.

              ehhhh… dude, there’s shittons of radiation shielding out there: any relatively small chunk of nickel-iron, or, if you don’t mind dealing with larger volumes, water or ice both work fine. plenty of rocks and comets in the Oort, as they say :D nice thing about that tho is you can split the water for LOX/LH using sunlight-derived electricity, and now you have rocket fuel.

              • MotoAsh@lemmy.world · 2 days ago

                No, you’re shortcutting the problem. There is an ample amount of ALL kinds of ionizing radiation at the orbit of the Earth, even particles FAR more powerful than anything humans can produce. It would take many feet of solid rock or water to shield them adequately, and quite a bit of effort to construct shielding that would equal the Earth’s for any significant population of AI running on anything close to present-day silicon. All the while they’d have to deal with real, actually damaging levels of radiation.

                Not to say it’s impossible or anything, but I’d still put my money on it being wiser for them to go to a much higher orbit or another planet/moon to simply avoid most of the sun’s radiation than to gather together asteroids or similar. Why go through all that effort when eons of gravitation have already done the work with planets and moons?

                The asteroid belt miiight still be a good choice, especially if it were remotely as dense as people like to imagine, but only because of the advantages of working from a space that isn’t at the bottom of a significant gravity well like a planet.

                • mojofrododojo@lemmy.world · 1 day ago

                  It would take many feet of solid rock or water to shield them adequately.

                  1m of water would do it; far less rock.

                  SEPs (solar energetic particles) and GCRs (galactic cosmic rays) can both be stopped by a number of lunar materials: https://www.sciencedirect.com/science/article/abs/pii/S0273117716307505

                  yeah, the asteroid belt is sparse, but there are still mega-gigatons of material out there just floating. autonomous recovery of this material will supply humanity’s future a lot more than any silly Mars missions.

  • Dogiedog64@lemmy.world · 3 days ago

    Holy shit dude, this is just… profoundly depressing. We’ve truly failed as a society if THIS is how people are trying to cope with things, huh. I’d wish this guy the best with his grief and mourning, but I get the feeling he’d ask ChatGPT what I meant instead of actually accepting it.

    • NιƙƙιDιɱҽʂ@lemmy.world · 2 days ago

      I kind of get the one side of it: having a void to scream into can be cathartic and maybe useful… but the fact that you can then use it as a shoddy “emulation” of a person to avoid actually processing the loss, and have it reinforce delusions, is… yeaaaaah… fun future we’re sprinting into.

  • Ech@lemmy.ca · 3 days ago

    Man, I feel for them, but this is likely for the best. What they were doing wasn’t healthy at all. Creating a facsimile of a loved one to “keep them alive” will deny the grieving person the ability to actually deal with their grief, and also presents the all-but-certain eventuality of the facsimile failing or being lost, creating an entirely new sense of loss. Not to even get into the weird, fucked up relationship that will likely develop as the person warps their life around it, and the effect on their memories it would have.

    I really sympathize with anyone dealing with that level of grief, and I do understand the appeal of it, but seriously, this sort of thing is just about the worst thing anyone can do to deal with that grief.

    *And all that before even touching on what a terrible idea it is to pour this kind of personal information and attachment into the information sponge of big tech. So yeah, just a terrible, horrible, no good, very bad idea all around.