In a new paper, the authors urge developers to prioritize research into “chain-of-thought” (CoT) processes, which provide a rare window into how AI systems make decisions.
So, this is where the researchers are acknowledging that they are choking on the smell of their own farts.
However, there are still a lot of questions about how these advanced models actually work. Some research has suggested that reasoning models may even be misleading users through their chain-of-thought processes.
Bull fucking shit. Making something complex enough to be unpredictable doesn’t make its behavior intentional. Reasoning models aren’t reasoning; they’re outputting something that looks like reasoning. There is no intent behind it to mislead or to do anything on purpose. It only looks that way because the output is formatted like a person writing down their thoughts.
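To make that point concrete, here’s a toy sketch (everything in it, including the “model”, is invented for illustration; a real model conditions on the whole context through a neural network, not a lookup table). The “<think> … </think>” span and the final answer both fall out of the same next-token sampling loop, and nothing anywhere in the mechanism represents intent or belief:

```python
import random

# Toy stand-in for a "reasoning" model: the last token determines a
# distribution over the next token. Purely illustrative, not a real model.
TOY_MODEL = {
    "<question>": {"<think>": 1.0},
    "<think>":    {"step_1": 1.0},
    "step_1":     {"step_2": 0.7, "</think>": 0.3},
    "step_2":     {"</think>": 1.0},
    "</think>":   {"answer": 1.0},
    "answer":     {"<eos>": 1.0},
}

def sample_next(token: str) -> str:
    """Pick the next token from the toy conditional distribution -- that's all."""
    choices, weights = zip(*TOY_MODEL[token].items())
    return random.choices(choices, weights=weights, k=1)[0]

def generate(prompt: str = "<question>", max_len: int = 20) -> list[str]:
    """One undifferentiated loop: the 'thoughts' and the 'answer'
    are emitted by exactly the same mechanism."""
    out = [prompt]
    while out[-1] != "<eos>" and len(out) < max_len:
        out.append(sample_next(out[-1]))
    return out

if __name__ == "__main__":
    print(" ".join(generate()))
    # e.g. <question> <think> step_1 step_2 </think> answer <eos>
```

The “chain of thought” here is just more sampled tokens wearing a thinking-out-loud costume; whether it matches how the answer was actually produced is a separate question entirely.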
Sure, let’s put this loose into the public’s hands. It’ll be fine…
Mage: The Ascension did permanent damage to the programming community.