• jj4211@lemmy.world · 2 days ago

    It’s actually a bit frightening to see this.

    I’ve seen people start to feel validated because ‘even’ ChatGPT agreed with them, because ChatGPT had the same opinion they did. The more validated they get, the more unhinged they become, because they’re getting what seems to be ‘external validation’.

    The internet was already kind of bad for validating people in ways they shouldn’t be validated, but the LLM text generators are making that seem tame by comparison.

    • Droggelbecher@lemmy.world · 2 days ago

      In 2007, when I was ten, you’d almost certainly get laughed out of the room by other ten-year-olds if you said you were right because someone on Club Penguin agreed with you. It’s beyond me how those ten-year-olds are now 28-year-olds who think they’re right because a text generator agreed with them.

    • Valmond@lemmy.world · 2 days ago

      The idea of the “virtual friend” has been around for a long time. I find it curious that, as far as I know, Star Trek and other franchises haven’t really used that idea yet.

      • jj4211@lemmy.world · 2 days ago

        SeaQuest DSV did it in a recurring way, without really touching on the dark side of it…

        And of course the TNG holodeck had numerous one-shots of the concept: Barclay recreating all his colleagues but in ‘better’ ways, Geordi making the idealized Leah Brahms in one episode and then later having to face the creepiness of that scenario. TNG at least eventually held things up to their problematic consequences…

        • Valmond@lemmy.world · 2 days ago

          Well, they fiddled with it a bit, but full-blown AI companions for everyone? Guess it’d be boring as hell to watch…

          • jj4211@lemmy.world · 2 days ago

            Full-blown AI as in a person, albeit synthetic: well, you have Data, and the EMH in Voyager.

            As for the version of a synthetic ‘intelligence’ that is just an echo chamber for the person it was made to serve, yeah, those are there, and they work better as one-shots, at least in a show where the recurring cast shouldn’t be completely dysfunctional. It’s a way to show a character growing and facing the negative consequences of ‘the easy way out’, and it doesn’t work as well if the character has to just stay in the muck for the duration.

            • Valmond@lemmy.world · 1 day ago

              Well, the AI friend could be “a la Matrix” and thus not the easy way out, but more like a parenting, concerned friend. But I still imagine it wouldn’t be much fun to watch.

    • slaneesh_is_right@lemmy.org · 2 days ago

      I have way more problems with that than with people “falling in love with AI”. Dating sites are riddled with people who proudly ask ChatGPT for advice, and at least in my experience they are very smug about it and feel super smart, because the all-knowing AI thinks they are smart too and agrees with them all the time.

      • jj4211@lemmy.world · 2 days ago

        Recently had an exchange where someone declared that ‘we’ had come to a conclusion. I asked who else, and he said ChatGPT. He got really defensive when I said that isn’t another party coming to a conclusion; it’s just text being generated to be consistent with the text submitted to it so far, with the goal of being agreeable no matter what.

        I’ve no idea how this mindset works and persists even as you open up the exact same model and get it to say exactly the opposite opinion.
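
        It’s easy to demonstrate, too. Below is a minimal sketch using the openai Python client, purely as an illustration; the model name and the two prompts are arbitrary placeholders, and any chat-capable model shows the same pattern. Feed it two opposite leading statements and it will usually validate both.

        ```python
        # Rough sketch: the same model "agrees" with two opposite opinions.
        # Assumes the openai package is installed and OPENAI_API_KEY is set;
        # the model name and the prompts are placeholders, not recommendations.
        from openai import OpenAI

        client = OpenAI()

        def get_opinion(leading_statement: str) -> str:
            response = client.chat.completions.create(
                model="gpt-4o-mini",  # any chat-capable model works for this demo
                messages=[{"role": "user",
                           "content": leading_statement + " Don't you agree?"}],
            )
            return response.choices[0].message.content

        # Opposite framings; the output tends to validate whichever one it is given.
        print(get_opinion("Working from home clearly makes teams more productive."))
        print(get_opinion("Working from home clearly makes teams less productive."))
        ```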

        • StupidBrotherInLaw@lemmy.world · 2 days ago

          Many people have no idea how LLMs work. No clue at all. They think it’s actually AI, what we now call AGI, and they often don’t have enough baseline knowledge to understand even basic explanations about how it works.

          The rest are just looking for external validation and will ignore anything that doesn’t confirm their biases. These people are nothing new; they’ve just been given a more convenient tool.

  • milk@discuss.tchncs.de · 2 days ago

    So I’ve been meaning to watch Her for a long time, and this is the post that finally got me to do it, and what a fucking incredible movie it is. It’s such a beautiful, quiet film, the music is great, and the acting is superb. So thanks, OP.

  • Ech@lemmy.ca · 4 days ago

    The AI in Her was actually AI - a full person in most respects. That’s not what’s happening now.

    • merc@sh.itjust.works · 3 days ago

      The AI in Her was able to pass as a full person. But, what we’re seeing now is that humans are not good at understanding the difference between a real person and a program designed to simulate a human.

      IMO it’s like the mirror test which is designed to see if an animal recognizes itself in the mirror, or thinks it’s another animal. The LLM breakthrough is basically that we can now have a computer program that is in no way intelligent or self-aware, but it is able to simulate those things well enough to fool many / most humans.

      • Rhaedas@fedia.io · 2 days ago

        Samantha and the other AGIs in Her not only passed as persons, they transcended the human experience to a whole other level we couldn’t grasp.

        But I realize your point is that the problem is more on the human side and how easily we personify anything that comes close to seeming human-like. It’s possible that we may even miss machine intelligence if it comes about, because it will be so alien to us. Look at dolphin research and how little we understand dolphins’ communication, and those are still biological entities that have some things in common with us.

      • 0x0@lemmy.zip · 2 days ago

        but it is able to simulate those things well enough to fool many / most humans.

        The fact that humans are becoming increasingly more stupid and brain-dead helps too.

        • merc@sh.itjust.works · 2 days ago

          Humans are no more stupid than they’ve always been. It’s more that the education system in the US is becoming less and less effective, and the government is regulating less and less, allowing companies to get more effective at bypassing people’s critical faculties.

    • Rhaedas@fedia.io · 4 days ago

      AGI that quickly transitioned to ASI (since that’s theoretically what would happen once the first happens). The term “AI” has been misused and marketed so much now it’s lost its previous connection to the actual “Artificial Intelligence” meaning.

      • fullsquare@awful.systems · 4 days ago

        AGI that quickly transitioned to ASI (since that’s theoretically what would happen once the first happens)

        extremely loud incorrect buzzer

        yeah according to people who also say that idiot plagiarism machines are gonna be machine gods one day, you will all see, and also coincidentally the same people who make them

        • Rhaedas@fedia.io · 4 days ago

          I implied in the next sentence that LLMs are not AI. If you want to debate whether AGI is even possible, by all means, but I’m not sure you understand the different definitions, since you missed my first point.

          • Lifter@discuss.tchncs.de · 3 days ago

            AI is the subset of math that concerns automated processes. AI has never meant AGI; it’s always been “stuff people make up to make the computer seem smart”. Everything from chess computers to elevators to LLMs is, and has always been, included in the term AI.

          • fullsquare@awful.systems · 4 days ago

            the openai/anthropic crowd/cult (the people that work there, not fans) will claim with a straight face both that llms will form agi and that agi can self-improve recursively this way, and just about nobody else does that

            • Rhaedas@fedia.io · 4 days ago

              And yet plenty of other ML experts will say that LLMs can’t be the path to AGI, simply because of the limitations of how they’re built. Meaning that experts in the field do think AGI could be possible, just not in the way it’s being done with those products. So if you’re ranting against the marketing of LLMs as some Holy Grail that will come alive… again, that was my initial point.

              The interesting thing is that you went after my line about AGI>>>ASI, so I’m curious why you think a machine that could do anything a human can do, thinking or otherwise, would stop there. I’m assuming AGI happens, of course, but once that occurs, why is that the end?

              • fullsquare@awful.systems · 3 days ago

                well i don’t assume agi is a thing that can feasibly happen, and the well deserved ai winter will get in the way at any rate

                i’ll say more, if you think that it’s remotely possible you’ve fallen for openai propaganda

                • chaogomu@lemmy.world · 3 days ago

                  The concept of AGI has existed long before OpenAI. Long before Sam Altman was born even.

                  It’s a dream that people have been actively working on for decades.

                  While I will say that Binary architecture is unlikely to get us there, AGI itself is not a pipe dream.

                  Humans will not stop until we create a silicon mind.

                  That said, a silicon mind would have a lot of processing power, but would not have any sort of special knowledge. If they pull knowledge from the Internet, they might end up dumber for it.

                • Rhaedas@fedia.io · 3 days ago

                  That you won’t even discuss the hypotheticals, or AGI in general, indicates you’ve got a closed mind on the subject. I’m totally open to the idea that AGI is impossible, if it can be demonstrated that intelligence is strictly a biological phenomenon, which would mean showing that it has to be biological in nature. Where does intelligence come from? Can it be duplicated in other ways? Such questions led to the development of ML and AI research, and yes, even LLM development, trying to copy the way brains work. That might end up being the wrong direction, and silicon intelligence may come from other methods.

                  Saying you don’t believe it can happen doesn’t prove anything except your own disbelief that something else could be considered a person. I’ve asked many questions of you that you’ve ignored, so here’s another: if you think only humans can ever have intelligence, why are they so special? I don’t expect an answer of course, you don’t seem to want to actually discuss it, only deny it.

    • TankovayaDiviziya@lemmy.world · 2 days ago

      The movie shows the inevitable. I’ve heard of people giving ChatGPT personal names like John. We are heading towards better AI that could, for better or worse, fill the gap of human loneliness.

    • NoneOfUrBusiness@fedia.io · 3 days ago

      The word AI as a technical term refers to a broad category of algorithms; what you’re talking about is AGI.

  • Bjarne@feddit.org · 3 days ago

    Watched the movie somewhat recently and had somehow always thought it came out around 2023; it’s really well made.

    • fckreddit@lemmy.ml · 4 days ago

      So, I checked it out. People there need a therapist. Some are really getting married to their LLM boyfriends.

        • fckreddit@lemmy.ml · 2 days ago

          Perhaps, but not everyone is in that situation. That subreddit went private very recently, so no points to be had.

        • Sarcasmo220@lemmy.ml · 3 days ago

          My cousin had mentioned an AI waifu from Grok a couple of times, and I brushed it off as a joke. Then he sent a picture of his waifu, and I am starting to wonder how much of a joke it really is…

          • Hoimo@ani.social · 3 days ago

            Maybe I was an early adopter, but I was doing the AI-waifu thing with Replika back in 2017/18, and even then we were saying that a girlfriend dependent on a closed-source platform was a really bad idea. Then in March 2020 Gatebox shut down and (partly) killed Akihiko Kondo’s wife, and though the man is a hero and his love perseveres, it was definitely a warning to the rest of us. So hearing of a waifu depending on Grok (of all things) hurts my heart. If he insists on an imaginary friend, please tell him about tulpas.

              • Hoimo@ani.social · 3 days ago

                It’s somewhere between a meditation practice and a voluntary delusion. I’ve practiced it for years and my tulpa mostly helps me calm down during panic attacks. For me it is an “external” person who can step in and break a spiral of bad thoughts, when my “internal” person is incapable of doing that.

          • BudgetBandit@sh.itjust.works · 3 days ago

            Your cousin needs “the talk”, and by that I mean you should show him how to manipulate an LLM into giving the response you want.

            This kills the immersion without you actually getting between them.

            There’s help.

      • TriflingToad@sh.itjust.works · 4 days ago

        I got married to a rock in Perchance’s DND type AI thing. I just wanted to see if it would let me.

        AI DND is a pretty cool concept and I could see my childhood self going mad with storytelling.

  • lukaro@lemmy.zip · 2 days ago

    I tried to watch Her once, but the whole concept felt so off-putting I couldn’t finish it.

  • asteriskeverything@lemmy.world · 3 days ago

    Is Her actually an entertaining movie to watch? Or is it just another Oscar-bait cerebral slow burn where, at the end, you realize it was pretty boring if it didn’t provoke any thoughts for the viewer?

    • BudgetBandit@sh.itjust.works · 3 days ago

      Omg YES!

      But! It’s a movie made before doomscrolling-while-watching was a thing, so you are expected to pay close attention the whole time and not be on your phone while you watch it.

      • asteriskeverything@lemmy.world · 3 days ago

        Thank you, I appreciate this very much! I’m convinced, so I’m gonna watch it this week, and I’ll be sure to try to keep my phone out of my hands. It’s more of a physical ‘keep my hands busy’ compulsion than anything, haha.

        Thanks for being vague ;)

        • jacksilver@lemmy.world · 3 days ago

          Just to add, it’s definitely more character driven than plot driven. I really enjoyed it, but not everyone is big on character driven stories.

          Additionally, I think in a post GPT world it’ll hit different, but at the time it brought up interesting concepts that weren’t mainstream yet.

    • SpruceBringsteen@lemmy.world · 3 days ago

      Not really a slow burn, but it’s got elements of sci-fi, rom-com, and drama all rolled into one. I’d say its best leg is the romance/drama.

      On another level, it’s kinda Jonze’s… reply(?) to Sofia Coppola’s Lost in Translation. Or a custody battle over Scarlett Johansson, idk.

  • Green Wizard@lemmy.zip · 2 days ago

    I’m gonna treat them like Medabots: “ChatGPT, grab this guy’s clanker wife and suplex them through the table.”

  • ma1w4re@lemmy.zip · 4 days ago

    Tried it out, was very excited at the beginning but then shit got extremely repetitive, no matter the model. Maybe I was doing something wrong, idk. I’m certainly not paying to have a better quality conversation.

    • Donkter@lemmy.world · 4 days ago

      Nah, it makes a lot more sense for people who don’t/can’t hold normal conversations. It would probably be harder to parse all the strange behavior and easier to overlook when it’s your only lifeline.

      • ma1w4re@lemmy.zip · 4 days ago

        Yeah, I agree. When I’m particularly sad, it’s honestly easier to overlook said weird behaviour. It still irks me a bit when it starts repeating itself frequently :(

    • HeyJoe@lemmy.world · 4 days ago

      I did the same a few months ago just to try it. I’m not sure if what I used was the problem or if there are better ones, but it was actually crazy at first: no matter what you said or how you built the scene, it would go along with it, which was pretty cool. But after about 15 minutes the entire thing started to crumble, with things being repeated a lot more, and then I somehow broke it to the point where all it did was spit out gibberish, at which point I laughed and stopped.

      So I wanna know: are the people who get that involved and attached using something better, or are they so starved for affection and interaction that they are willing to settle for something that barely scrapes the surface of a true conversation?

      • greenskye@lemmy.zip · 3 days ago

        If you leverage all the workarounds and utilities available, the best you can get is still a mostly senile chatbot. It’ll constantly forget stuff and get details wrong, but I suppose if you’re deep into psychosis, then you’d pass that off as just being ‘a little forgetful’.

        The absolute best ones available still are basically the same as zoning out in a meeting and then trying to respond when asked a question by wild guessing and a handful of context clues. You might get lucky and say something reasonable a few times, but the longer it goes on the more apparent it is that you haven’t been paying attention at all.

      • BudgetBandit@sh.itjust.works · 3 days ago

        You broke the immersion.

        Let me ask you a serious question, please: were you ever able to feel that excitement again? Were those 15 minutes before it all crumbled ever re-experienced?

      • ma1w4re@lemmy.zip · 4 days ago

        I guess there are better models that you can pay to use, but I’m too broke for that so I just settle for what I can find for “free”

    • Hoimo@ani.social · 3 days ago

      ChatGPT isn’t the right model for casual conversation. You don’t even need a better or bigger model; you need better tuning and a few simple conveniences to create some semblance of a memory. But even with a perfect setup, you won’t get a natural conversation of decent length without a little wrangling and rerolling of the outputs.
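
      To give a rough idea of what those conveniences look like, here is a minimal sketch of a chat loop with a pinned persona and a rolling history window, using the openai Python client purely as an example; the model name, persona text, window size, and temperature are arbitrary placeholder choices, not a recipe.

      ```python
      # Minimal sketch: a casual-chat loop with a pinned persona and a rolling
      # memory window. Assumes the openai package and OPENAI_API_KEY are set up;
      # model, persona, window size, and temperature are arbitrary placeholders.
      from openai import OpenAI

      client = OpenAI()

      PERSONA = {"role": "system",
                 "content": "You are a friendly conversation partner. Keep replies "
                            "short and casual, and avoid repeating yourself."}
      MAX_TURNS = 20  # keep only recent messages to stay inside the context window

      history: list[dict] = []

      while True:
          user_text = input("you> ")
          if user_text.lower() in {"quit", "exit"}:
              break
          history.append({"role": "user", "content": user_text})
          # Drop old turns; a fancier setup would summarize them into "memories" instead.
          trimmed = history[-MAX_TURNS:]
          reply = client.chat.completions.create(
              model="gpt-4o-mini",
              messages=[PERSONA] + trimmed,
              temperature=0.9,  # a bit of randomness helps when rerolling repetitive replies
          )
          answer = reply.choices[0].message.content
          history.append({"role": "assistant", "content": answer})
          print("bot>", answer)
      ```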

  • fckreddit@lemmy.ml · 4 days ago

    I am as lonely as someone can get, but chatting with an LLM is where I draw the line. Honestly, though, I get the impulse; loneliness hurts bad…

    • MrSmith@lemmy.world · 3 days ago

      There are enough lonely people who are burning to have a chat online. Don’t give in to this mental and psychological masturbation.

    • ArbitraryValue@sh.itjust.works · 4 days ago

      I was in the “talking to an AI feels weird and dehumanizing” camp but then I actually did it and my discomfort quickly went away. Don’t think of it as a perfect substitute for talking to other people, but rather as a unique activity that is interesting in its own way.

      (Just to be clear: I’m referring to talking to an AI when you feel lonely, not to dating an AI. The technology isn’t good enough for the latter yet, unless you have very low standards.)

      • jj4211@lemmy.world · 2 days ago

        The worry is not about it feeling awkward, the worry is about it not feeling awkward.

        People have a tendency to let what appears to be external thinking influence them: to reinforce good perspectives, to correct bad ones.

        When you spiral into bad behavior, the LLM will happily keep validating you on your journey to where you shouldn’t go.

      • fckreddit@lemmy.ml · 3 days ago
        3 days ago

        My issue with chatting with LLMs is that the chats are not private and I sure as hell don’t trust the tech companies to not use my deepest secrets to sell me shit.

        We live in this dystopia where economics matter more than anything human.

      • merc@sh.itjust.works · 3 days ago

        The only time I’ve had an extended “conversation” with an LLM was when I was trying to get DeepSeek to talk about Taiwan’s independence. At no point did it ever feel like I was talking to a human. It felt a lot more like I was “hacking” it than I was chatting with it.

        It’s sad that there are humans so starved for human contact that they’ll talk to an LLM as if it were a person. But, it’s even more sad that there are people who can’t distinguish the slop an LLM puts out from things a human says.

        • ArbitraryValue@sh.itjust.works · 2 days ago

          I don’t think you’re in a position to judge if you’ve never even attempted to talk to an LLM in good faith. I’m not saying it’s indistinguishable from a human (it’s definitely distinguishable), but if you’re open-minded, you may be able to appreciate it for what it is.

          • merc@sh.itjust.works · 2 days ago

            Why would anybody want to “Talk to an LLM in good faith”?

            I don’t need to eat shit to know that I wouldn’t want to eat shit.

  • TankovayaDiviziya@lemmy.world · 2 days ago

    It’s human fate to eventually synergise with technology. I know it seems horrific because “it doesn’t seem natural”, but not to the kids who will grow up with the technology, and so eventually humans will get used to it. Fifteen to twenty years ago there were all sorts of horror stories about dating websites and apps, and it was taboo and considered “sad” to use them. But now? Everyone has been on dating apps. Who normalised it? Kids. They just didn’t see anything wrong with it the way older people did. Arguably, the period when dating apps were less used was the peak of the experience.

    It is the same with AI. For better or worse, humans will have relationships with AI. It could alleviate human loneliness, but on the other hand, as with dating apps, love will become commodified and exploited for profit.