Tech bros love to argue that LLMs are the future, just like the internet or electricity. Of course many of them love Ayn Rand and live in a fictional, stupid bubble world, but anyway: are they right? I guarantee there were studies claiming we were getting dumber for using the internet, or GPS, or that cars were much worse than horses, and so on. You can see that pattern all through history.
When it comes down to it, everything is a tool. But I feel AI will mostly be used for evil, whereas the internet is probably 50/50 evil and good.
Current LLMs make me think of Theranos. Elizabeth Holmes, as CEO of Theranos, built a company valued at billions of dollars and publicly claimed that their mini-lab could successfully perform 60 tests off a single drop of blood. The whole thing was a fraud: the machine did work on a handful of tests, but it was basically impossible for it to do what she claimed it could with a single drop of blood.
LLMs, at this point, are very, very good at simple tasks. Because of that, they are going to eliminate some of the bullshit jobs that exist in the world. But this is an invention so buggy that they had to create a whole new marketing term just so they wouldn't have to use the word 'bug': hallucinations.
It's not hallucinating. It's not thinking. It's just not capable of doing what the CEOs say it can, and given that these models have been trained on the whole of human knowledge and still can't live up to the promises, I don't think the future they're promising is necessarily going to materialize.
LLMs are just reasonably well-trained digital parrots.
I wouldn't even call it a bug. It's doing exactly what it was trained to do: guess the next word based on its training data. If it has no concept of truth or falsehood, how can falsehoods be bugs?
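To make the parrot point concrete, here's a toy sketch in plain Python. It's just a bigram counter, nothing remotely like a real LLM's scale or architecture, but the principle it illustrates is the same one above: count which word followed which in the "training data," then guess the most likely next word, over and over. There is no concept of true or false anywhere in it.

```python
from collections import Counter, defaultdict

# Toy "training data" -- stands in for the corpus a model is trained on.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word follows which (a bigram table).
followers = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word][nxt] += 1

def parrot(start, length=6):
    """Repeatedly guess the most likely next word. Truth never enters into it."""
    out = [start]
    for _ in range(length):
        options = followers.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(parrot("the"))  # a grammatical-looking string with zero understanding behind it
```

If the toy parrot strings together something false, that isn't a malfunction; it's the mechanism working as designed. Scale that mechanism up by a few billion parameters and you get the same situation with better grammar.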