Recently had an exchange where someone declared that ‘we’ had come to a conclusion. I asked who else, and he said ChatGPT. He got very defensive when I pointed out that isn’t really another party reaching a conclusion; it’s just text generated to be consistent with whatever text has been submitted to it so far, with a goal of being agreeable no matter what.
I have no idea how this mindset persists even when you can open up the exact same model and get it to state exactly the opposite opinion.
Many people have no idea how LLMs work. No clue at all. They think it’s actual AI, what we now call AGI, and they often lack the baseline knowledge to understand even basic explanations of how these models work.
The rest are just looking for external validation and will ignore anything that doesn’t confirm their biases. Those people are nothing new; they’ve just been given a more convenient tool.