• snooggums@lemmy.world · 23 hours ago

    So, this is where the researchers are acknowledging that they are choking on the smell of their own farts.

    However, there are still a lot of questions about how these advanced models are actually working. Some research has suggested that reasoning models may even be misleading users through their chain-of-thought processes.

    Bull fucking shit. Making something complex enough to be unpredictable doesn’t mean it is intentional. Reasoning models are not reasoning; they are outputting something that looks like reasoning. There is no intent behind it to mislead or to do anything on purpose. It only seems that way because the output mimics the format of a person writing down their thoughts.
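
    To make that concrete, here is a minimal sketch (Python, using the Hugging Face transformers library; the model name and prompt are illustrative stand-ins, not anything from the article) of how a chain-of-thought actually gets produced:

        # Minimal sketch: a chain-of-thought is just next-token sampling.
        from transformers import pipeline

        # Any causal LM works here; "gpt2" is a stand-in, not a reasoning model.
        generator = pipeline("text-generation", model="gpt2")

        prompt = "Question: What is 17 * 24? Let's think step by step."
        result = generator(prompt, max_new_tokens=80, do_sample=True)

        # Whatever "steps" come back are ordinary sampled tokens shaped like
        # reasoning prose. There is no separate deliberation, intent, or
        # honesty channel anywhere in this call.
        print(result[0]["generated_text"])

    The model emits the statistically likely continuation of a prompt that asks for steps; nothing in that pipeline could “decide” to mislead anyone.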

    • tarknassus@lemmy.world · 21 hours ago

      Some research has suggested that reasoning models may even be misleading users through their chain-of-thought processes.

      Sure, let’s turn this loose on the public. It’ll be fine