

You all are missing the forest for the trees! LLMs are just like us in how we think! We’re all just probability generators! No, they can’t think or reason beyond known data sets. Yes, they fail at extrapolating information, which is the basic component of reasoning. But you guys don’t get it! They’re just like us and smart!
I didn’t make a strawman.
They’re just like us!
Except…
You wanted to attack LLMs’ underlying principle of being probabilistic word-sequence generators. But that’s it. That’s what they do. They have no understanding outside of word order; they only know that if a sentence starts “The quick brown…”, the word “fox” typically follows that phrase. Therefore a fox is probably quick and brown, and if something is quick and brown, it might be a fox. What makes LLMs unintelligent isn’t that they rely on probability.
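To make that concrete, here’s a toy Python sketch of my own (a made-up three-sentence corpus and counted word-order statistics, nothing resembling how a real LLM is built):

```python
# Toy illustration only: a "next word predictor" that is nothing but
# counted word-order statistics from a tiny, made-up corpus.
from collections import Counter, defaultdict

corpus = [
    "the quick brown fox jumps over the lazy dog",
    "the quick brown fox is quick and brown",
    "a quick brown dog chased the fox",
]

# Count which word follows each two-word context.
counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for a, b, nxt in zip(words, words[1:], words[2:]):
        counts[(a, b)][nxt] += 1

def next_word_probs(a, b):
    """Relative frequency of each word observed after the context (a, b)."""
    c = counts[(a, b)]
    total = sum(c.values())
    return {word: n / total for word, n in c.items()}

print(next_word_probs("quick", "brown"))
# {'fox': 0.66..., 'dog': 0.33...} -- "fox" wins only because it followed
# "quick brown" more often in the data, not because anything understands foxes.
```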
LLMs are not intelligent because they do not know anything. They repeat patterns in observed data. They do this in an intentionally leaky way, generating new sentences they haven’t seen before from contexts they have seen before. Any reference to “thinking” or “learning” is just anthropomorphism, or an inaccurate and misleading (though useful) approximation. They have no concept of “correct.” It’s why you can bully them into agreeing with you. They’re dumb.
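The “leaky” part, in the same toy terms, is just sampling from those counts instead of always taking the most frequent word. Again, my own sketch under made-up data, not any real system:

```python
# Toy illustration only: sample the next word from observed counts, so the
# output can be a word order never seen in the data -- yet nothing here has
# any notion of whether the result is correct.
import random
from collections import Counter, defaultdict

corpus = [
    "the quick brown fox jumps over the lazy dog",
    "the lazy dog jumps over the quick brown fox",
]

# Count which single word follows each word.
counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1

def generate(start, length=10):
    """Roll dice over observed next-word counts to build a word chain."""
    out = [start]
    for _ in range(length):
        options = counts[out[-1]]
        if not options:
            break
        words, weights = zip(*options.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))
# One possible run: "the lazy dog jumps over the quick brown fox jumps over"
# -- a "new" sentence stitched purely from observed adjacencies.
```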
Look, I’m not going to get into this any further, because you used a lot of big, jargony words without any context: “normalize to the tribal opinions,” “RLHF,” “intermodal dissonance,” and, the biggest offender, “confabulations.” Those would only be used by a person more knowledgeable in the field or by a self-fashioned intellectual trying to flex.
If you’re an expert, here’s some advice I got in grad school: speak to your target audience. Unfortunately, I can’t engage with most of what you said because I frankly have no fucking clue what you’re saying.