A lot of people at my work, especially management, are very pro-AI. I haven’t openly shared my opinion of AI/the fact that I don’t use it, because the hype around AI seems almost cult-like at my work. It was months before anyone brought up hallucinations.

Part of me wants to share my reasons against AI at work. Some possible reasons I’m thinking of sharing: it’s cooking the planet, you don’t know when it’s hallucinating so how do you trust it, and critical-thinking rot.

Any advice on discussing the negatives of AI at work? Or should I just keep my head down and let sloppers slop?

  • Ech@lemmy.ca
    8 days ago

    That’s because “hallucinating” isn’t a bug; it’s the core feature of LLMs. That tech bros have figured out how to kludge on a way to get them to sometimes recite accessible data doesn’t change the fact that the central purpose of these algorithms is to manufacture text from nothing (well, technically from random noise). The “hallucination” is the tech bros’ failure to hide that function.

    • wewbull@feddit.uk
      9 days ago

      It’s not an add-on feature. The LLM produces something with the best score it can. Things that increase the score:

      • Things appropriate to the tokens in the request
      • Things which look like what it’s been trained on.

      So that includes:

      • Relevant facts
      • Grammatically correct language
      • A friendly style of writing
      • Etc.

      If it has no relevant facts, it will maximise the others to get a good score. Hence you get confidently wrong statements, because sounding like it knows what it’s talking about scores higher than actually giving correct information.

      This process is inherent to machine learning at its current level though. It’s like a “fake it until you make it” person, who will never admit they’re wrong.
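
      The scoring idea above can be sketched as a toy (every weight and number here is invented for illustration; a real LLM optimises next-token likelihood, not an explicit checklist like this):

      ```python
      # Toy model of the point above: a candidate reply is scored on several
      # qualities at once, and nothing forces factual accuracy to dominate.
      def score(candidate):
          # Hypothetical per-quality weights; each quality counts equally.
          weights = {"relevance": 1.0, "fluency": 1.0,
                     "confident_tone": 1.0, "factual_accuracy": 1.0}
          return sum(weights[k] * candidate[k] for k in weights)

      # A confidently wrong answer vs. a hedged correct one (made-up scores).
      confident_bluff = {"relevance": 0.9, "fluency": 0.9,
                         "confident_tone": 0.9, "factual_accuracy": 0.1}
      hedged_truth = {"relevance": 0.9, "fluency": 0.6,
                      "confident_tone": 0.3, "factual_accuracy": 0.9}

      # Sounding sure scores higher than being right.
      print(score(confident_bluff) > score(hedged_truth))
      ```

      The bluff wins here only because fluency and confident tone count toward the total just like accuracy does, which is the “fake it until you make it” dynamic in a nutshell.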