Tests Show That Top AI Models Are Making Disastrous Errors When Used
I fixed your title, Futurism.
Maybe if your job is to, broadly, create culture it’s a bad idea to use the previous-culture blended slop dispenser.
*LLM models
We don’t have AGI yet. Obviously.
And if LLMs are failing in language areas, something they’re designed for, why the hell are we trying to squeeze them into everything else?
(It’s about money.)
That’s right, AI is a naked blue woman who lives in your head. I played Halo so I’m kind of an expert
LLMs are a type of AI. What do you mean?
Is it? Is it AGI or ANI? I don’t think it fits in either. As used in media or discussion AI is referring to LLMs, but that’s just because marketing seized the word due to its familiarity from fiction and ease of use. It’s incomplete and just slang without the middle letter.
LLMs are a subset of AI, but media/marketing kinda would have you believe both that it’s synonymous and it’s not just statistics thrown at words but rather person-like. Both are untrue, of course.
The whole AGI, ASI etc thing made for a neat RPG, Eclipse Phase, but ultimately it’s just fiction that a souped-up ANOVA is intelligent in an animal kinda way rather than intelligent in a worse-Roomba kinda way.
It doesn’t exist, so it’s fiction, until it does. Discounting that something can’t happen because it hasn’t yet has never gone well with technology.
My point was that without the middle letter you’re just throwing around jargon to sell to the public. The middle letter classifies the intelligence and what it can and can’t do. Would you call LLMs a type of ANI? They don’t really fit because one, they aren’t usually that narrow in handling things unless they’re fine-tuned to do so, and more importantly two, there is no intelligence there. So it’s not AI of any sort.
LLMs are a type of AI. Without a middle letter, it’s just regular machine learning. Statistics. Not the same category as stuff that implies personhood at all, and muddying the waters there is not desirable.
Wait, did you mean the headline using AI instead of LLM? Because if so, that would have been what my question tried to ask.
Like how Coca-Cola is fruit.
“when it’s obviously a vegetable.” -GROK
Tests Show That AI Models Are Making Disastrous Errors
Fixed the headline
Geez. Just stop. The money could be spent ending hunger.
Excuse me! LLMs aren’t AI! I played Halo so I’m kind of an expert, and I know AI is a naked blue woman who lives in your head
AI is unreliable and makes errors in every field it’s utilized in? Shocked. I am SHOCKED I say!!!
Ok, not that shocked.
ohhh whaaaat who could’ve seeeeeen it coooooming.