• Signtist@bookwyr.me · ↑10 · 2 days ago

    That’s not haywire. We already know AI makes stuff up and gets stuff wrong all the time. Putting it in an important position doesn’t make it any less likely to make mistakes - this was inevitable.

    • FriendOfDeSoto@startrek.website · ↑8 ↓10 · 3 days ago

      I think that’s throwing out the baby with the bathwater. But regulation and hitting these companies with false-advertising penalties is something we need ASAP. And liability. If the creator of a model can be made liable for damages, that would pump the brakes on this bullshit very hard. Funny how all the so-called AI companies are averse to any of that …

      • Fredselfish@lemmy.world · ↑8 ↓1 · 3 days ago

        Too bad our president just signed an EO basically preventing all of that. But I agree; we needed regulations on this five years ago.

        • forrgott@lemmy.sdf.org · ↑8 · 3 days ago

          Well, not so much preventing as actively inhibiting. Technically, that order only applies to models used by the federal government, but it does create a perverse business incentive that I suspect is highly likely to have a chilling effect on the industry as a whole.

          But I agree things are accelerating in the wrong direction, and regulations were needed years ago to prevent the upcoming shitshow.

    • SuiXi3D@fedia.io · ↑12 · 3 days ago

      It’s prohibitively expensive to get proper therapy, and that’s if your therapist has an opening in the next six months.

        • ahornsirup@feddit.org · ↑8 · 2 days ago

          If phrased like that, obviously not, but that’s not how those things are marketed. The average person might just stumble upon AI “therapy” while googling normal therapy options, and with the way the media just repeats AI bro talking points, that wouldn’t necessarily raise red flags for most “normal” people. Blame the scammer, not the scammed.