  • I didn’t make a strawman.

    The criticism of “just probability” falls flat as soon as you recognize that the current expert consensus is that human minds are… predictive processors…

    They’re just like us!

    Except…

    Where LLMs struggle is in adapting to things outside of distribution (not in the training data): they have no way to actively update their weights and biases as the novel context grows.

    You wanted to attack LLMs’ underlying principle of being probabilistic word sequence generators. But that’s it. That’s what they do. Their only “understanding” is word order: they know that if a sentence starts “The quick brown…”, the word “fox” typically follows that phrase. Therefore, a fox is probably quick and brown. And if something is quick and brown, it might be a fox. LLMs are not intelligent, but not because they rely on probability.
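    To make “probabilistic word sequence generator” concrete, here’s a toy sketch. The hand-made bigram table below is nothing like a real LLM’s scale (real models learn billions of weights over long contexts), but the principle is the same: the next word is sampled by how often it followed the previous one in the “training” data, with no notion of what any word means. Note the table stays frozen during generation, which is the no-weight-updates problem above.

    ```python
    import random

    # Toy "training data" statistics: how often each word followed another.
    # Hand-made numbers purely for illustration, not from any real corpus.
    bigram_counts = {
        "the":   {"quick": 5, "lazy": 5},
        "quick": {"brown": 8, "red": 2},
        "brown": {"fox": 9, "dog": 1},
    }

    def next_word(context: str) -> str:
        """Sample the next word in proportion to how often it followed `context`."""
        counts = bigram_counts[context]
        words = list(counts)
        return random.choices(words, weights=[counts[w] for w in words], k=1)[0]

    sentence = ["the"]
    for _ in range(3):
        sentence.append(next_word(sentence[-1]))
    print(" ".join(sentence))  # e.g. "the quick brown fox"
    ```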

    LLMs are not intelligent because they do not know anything. They repeat patterns in observed data. They do this in an intentionally leaky way so they can generate new sentences they haven’t seen before, based on contexts they have seen before. Any reference to “thinking” or “learning” is just anthropomorphism, or an inaccurate and misleading (though useful) approximation. They have no concept of “correct.” It’s why you can bully them into agreeing with you. They’re dumb.
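    That “intentionally leaky” part is, in my framing (not anything you said), temperature sampling: instead of always taking the most likely next word, generation flattens the distribution and lets less-likely words through, which is how novel word orders come out of memorized statistics. A sketch, with made-up next-word scores:

    ```python
    import math
    import random

    def sample(logits: dict[str, float], temperature: float) -> str:
        """Softmax sampling: higher temperature flattens the distribution,
        leaking probability onto words the model saw less often."""
        scaled = [score / temperature for score in logits.values()]
        m = max(scaled)  # subtract the max for numerical stability
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        words = list(logits)
        return random.choices(words, weights=[e / total for e in exps], k=1)[0]

    # Hypothetical scores for the word after "The quick brown ..."
    logits = {"fox": 5.0, "dog": 2.0, "bear": 1.0}
    print(sample(logits, 0.1))  # nearly always "fox"
    print(sample(logits, 2.0))  # "dog" and "bear" leak through
    ```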

    Look, I’m not going to get any deeper into this, because you used a lot of big, jargony words without any context. Words like “normalize to the tribal opinions,” “RLHF,” “intermodal dissonance,” or the biggest offender, “confabulations.” Those would only be used by a person more knowledgeable in the field than I am, or by a self-styled intellectual trying to flex.

    If you’re an expert, I offer advice I got in grad school: speak to your target audience. Unfortunately, I can’t engage with most of what you said because I frankly have no fucking clue what you’re saying.