

“LLMs are not intelligent because they do not know anything. They repeat patterns in observed data.”
we are also predictive systems, but that doesn’t mean we are identical to LLMs. still, “LLMs are not intelligent because they do not know anything” cannot be true without also implying that humans are not intelligent and do not know anything. there are unaddressed framing issues in how this is being thought about.
they “know” how to interpret a lot of things in a way that is much more environmentally adaptable than a calculator. language is just a really weird eco-niche, one with very little active participation, where the base model is not updated as the environment changes.
this is not saying humans and LLMs are identical, this is saying that instead of the real differences, the particular aspect you’re claiming shows LLMs aren’t intelligent… is a normal part of intelligent systems.
this is a spot somewhere in between “human intelligence is the only valid shape of intelligence” and “LLMs are literally humans”
as for vocabulary, i’m always willing to help those who can’t find or figure out tools to self-learn.
when i talk about ‘tribal’ aspects, i refer to the collapsing of complexity towards a binary narrative to fit the preferences of your tribe, for survival reasons. i also refer to this as dumb ape brain, because it’s a simplification of the world to the degree i would expect from literal apes trying to survive in the jungle, not from people trying to better understand the world around them. which is important when shouting your opinions at each other in big social movements. this is actually something you can map to first principles: how we use the errors our models experience in order to notice things, and how we contextualize the sensory experience after the fact. what i mean is, we have a good understanding of this, but nobody wants to hear it from the people who actually care.
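the “using errors to notice things” part can be sketched as a toy loop (completely made-up numbers and thresholds, just to illustrate the idea, not any specific theory’s math): a running prediction, where a large prediction error is the “surprise” that grabs attention.

```python
# toy sketch: a running prediction where big errors ("surprise")
# trigger noticing, and every error nudges the model toward the data.
# the learning rate, threshold, and observations are made up.
def step(prediction, observation, lr=0.2, surprise=1.0):
    error = observation - prediction
    noticed = abs(error) > surprise      # big prediction error -> attention
    prediction += lr * error             # nudge the model toward the data
    return prediction, noticed

pred, history = 0.0, []
for obs in [0.1, -0.2, 0.0, 3.0, 3.1, 2.9]:  # the environment shifts at step 4
    pred, noticed = step(pred, obs)
    history.append(noticed)

print(history)  # → [False, False, False, True, True, True]
```

small errors get absorbed quietly; the shift (and the steps until the model catches up) is what gets noticed.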
‘laziness’ should mean a lack of epistemic vigilance, not a failure to comply with the existing socio-economic hierarchy and hustle culture. i say this because ignorance in this area is literally killing us all, including the billionaires who don’t care what LLMs are, but will use every tool they can to maximize paperclips. i’d assume that jargon should at least have salience here… since paperclip maximizing is OG anti-AI talk, but it turns out to be very important for framing issues in human intelligence as well.
please try to think of something wholesome before continuing, because tribal (energy saving) rage is basically a default on social media, but it’s not conducive to learning.
RLHF = reinforcement learning from human feedback. basically upvoting/downvoting to alter future model behaviour, which often leads to sycophantic biases. important if you care about LLMs causing psychotic breaks.
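here’s a toy sketch of how that upvote/downvote loop drifts a model toward sycophancy (the two “styles”, the rater percentages, and the update sizes are all invented for illustration, not real RLHF machinery):

```python
import random

# toy sketch: a "policy" over two response styles, updated by human
# upvotes/downvotes. the assumed raters prefer agreement regardless
# of accuracy, so the feedback loop drifts the policy toward sycophancy.
weights = {"agree": 1.0, "disagree": 1.0}  # unnormalized preference weights

def sample_style(rng):
    total = weights["agree"] + weights["disagree"]
    return "agree" if rng.random() < weights["agree"] / total else "disagree"

def upvoted(style, rng):
    # hypothetical raters: upvote agreement 80% of the time,
    # disagreement only 40% of the time
    return rng.random() < (0.8 if style == "agree" else 0.4)

rng = random.Random(0)
for _ in range(1000):
    style = sample_style(rng)
    weights[style] *= 1.01 if upvoted(style, rng) else 0.99

print(weights["agree"] > weights["disagree"])  # → True
```

nothing here ever checks whether “agree” is correct; the bias emerges purely from what raters reward.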
“inter-modal dissonance” is where different models, using different representations, each make sense of things, but their interpretations might not match up.
an example: your visual signal says you are alone in the room, while your audio signal says there is someone behind you.
you look behind you, and you collapse the dissonance, confirming with your visual modality whether the audio modality was being reliable. since both are attempting to be accurate, if there is no precision weighting error (think hallucinations), a wider system should be able to resolve whether the audio processing was mistaken, or whether there is something to address that isn’t being picked up via the visual modality (if ghosts were real, they would fit here, i guess.)
this is how different systems work together to be more confident about the environment they are both fairly ignorant of (outside of distribution.)
like cooperative triangulation via predictive sense-making.
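the precision-weighting idea above can be sketched as simple inverse-variance fusion (a toy model of the concept, not a claim about how brains literally compute; the numbers are made up):

```python
# toy precision-weighted fusion: combine two modality estimates,
# weighting each by its precision (inverse variance). an unreliable
# (low-precision) channel moves the fused belief less.
def fuse(mu_a, prec_a, mu_b, prec_b):
    fused_prec = prec_a + prec_b
    fused_mu = (prec_a * mu_a + prec_b * mu_b) / fused_prec
    return fused_mu, fused_prec

# after looking: vision says "0 people behind you" with high precision,
# while audio said "1 person" with low precision
mu, prec = fuse(0.0, 10.0, 1.0, 1.0)
print(round(mu, 3))  # → 0.091  (belief collapses toward the reliable modality)
```

a precision-weighting *error* would be assigning high precision to a noisy channel: the same formula then confidently fuses toward garbage, which is roughly the hallucination analogy.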
i promise complex and new language is used to understand things, not just to hide bullshitting (like jordan peterson).
i’d be stating this to the academics, but they aren’t the ones being confidently wrong about a subject they are unwilling to learn about. i fully encourage going and listening to the academics to better understand what LLMs and humans actually are.
“speak to your target audience.” is literally saying “stay in a confirmation bubble, and don’t mess with other confirmation bubbles.” while partial knowledge can be manipulated to obfuscate, this particular subject revolves around things that help predict and resist manipulation and deception.
frankly this stuff should be in the educational core right now because knowing how intelligence works is… weirdly important for developing intelligence.
because it’s really important for people to generally be more co-constructive in the way they adjust their understanding of things, while resisting a lot of failure states that are actually the opposite of intelligence.
your effort in attempting this communication is appreciated and valuable. sorry that it is very energy consuming, something that is frustrating due to people like jordan peterson or the same creationist cults mired in the current USA fascism problem, who, much like the relevant politicians, aren’t trying to understand anything, but to waste your energy so they can do what they want without addressing the dissonance. so they can maximize paperclips.
all of this is important and relevant. shit’s kinda whack by design, so i don’t blame people for having difficulty, but effort to cooperatively learn is appreciated.
AI sure as hell shouldn’t be making any important choices unilaterally.
and people actively using it for things like… face recognition, knowing it has bias issues leading to false flags for people with certain skin tones, should probably be behind bars.
although that stuff often feels more intentional, like the failure is an ‘excuse’ to keep using it. see ‘mind-reading’ tactics that have the same bias issues but still get officially sanctioned for use. (there’s a good rabbit hole there)
it’s also important to note that supporters of AI generally have had to deal with moving goalposts.
like… if linux fixed every problem being complained about, but the fact that something else was missing is now the reason linux is terrible, as if their original issue was just an excuse to hate on linux.
fanboys and haters are both a problem, and those who want to address reality (continuing to improve linux while recognizing and addressing its problems) have to deal with both of those tribes attacking them, either for not believing in the linux god or for not believing in the linux devil.
weirdly, actually understanding intelligent systems is a good way to deal with that issue, but unless people are willing to accept new information that isn’t just blind tribal affirmation, they will continue to maximize paperclips, like a paperclip maximizer for whatever momentum is socially salient. tribal war and such.
i just want to… not ignore any part of the reality. be it the really cool new tools (see genie 3, which resembles what haters have been saying is literally impossible for a long time), but also recognizing the environment we live in. (google is pretty evil, rich people are taking over, and modern sciences have a much better framing of the larger picture that is important for us to socially spread.)
really appreciate your take!