I thought of this recently (anti-LLM content within)
The reason a lot of companies/people are obsessed with LLMs and the like is that they think LLMs can solve some of their problems. The thing I've noticed is that a LOT of the things they try to force the LLM to fix could be solved with relatively simple programming.
Things like better search (SEO destroyed this by design, and Kagi is about the only usable search engine with easy access), organization (use a database; see the sketch below), document management, etc.
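To make the "use a database" point concrete, here's a minimal sketch of document search using SQLite's built-in FTS5 full-text index (assuming your SQLite build includes FTS5, which most Python builds do). The table name and sample documents are made up for illustration:

```python
# A minimal sketch of "just use a database" for document search.
# No LLM involved: SQLite's FTS5 gives ranked keyword search.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE docs USING fts5(title, body)")
conn.executemany(
    "INSERT INTO docs (title, body) VALUES (?, ?)",
    [
        ("meeting notes", "Q3 budget review and hiring plan"),
        ("recipe", "slow cooker chili with beans"),
    ],
)

# Ranked keyword search in one query: fast, deterministic,
# and you can verify the results by inspection.
for title, body in conn.execute(
    "SELECT title, body FROM docs WHERE docs MATCH ? ORDER BY rank",
    ("budget",),
):
    print(title, "->", body)
```

One query, deterministic results, and you can see exactly why each hit matched.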
People don't fully understand how it all works, so they try to shoehorn the LLM into doing the work for them (poorly), while learning nothing of value.
First, read my text fully before replying.
But additionally, I have a brain and can use it to double-check:
In example 1, I just built it blindly because it's a game and it doesn't matter if it's wrong. But it ended up being correct.
In example 2, the math result was not far off from my guesstimate, and I could confirm later that it was correct.
In example 3, it gave me a source, and I read the source. Google did not lead me to that source.
When I let an LLM write code, I read the code, then I test the code.
It's weird how there is such knee-jerk hatred for a turbo-charged word predictor. You'd think there would have been similar mouth-frothing at on-screen keyboards predicting words.
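For what it's worth, the "word predictor" half of that comparison is easy to make concrete. Here's a toy bigram predictor in the spirit of on-screen keyboard suggestions; the training sentence is made up:

```python
# Toy next-word predictor: suggest continuations from bigram counts,
# roughly the idea behind keyboard word suggestions.
from collections import Counter, defaultdict

text = "the cat sat on the mat and the cat ran"
words = text.split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    bigrams[prev][nxt] += 1

def predict(prev_word: str, k: int = 3) -> list[str]:
    """Suggest the k most frequent continuations of prev_word."""
    return [w for w, _ in bigrams[prev_word].most_common(k)]

print(predict("the"))  # ['cat', 'mat'] -- 'cat' follows 'the' twice
```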
I see it as a tool that helps sometimes. It's as if electric drills had just come out and craftsmen were screaming, "BUT YOU COULD DRILL OFF-CENTER!!!"
The commenter more or less admitted that they have no way of knowing that the algorithm is actually correct.
In your first analogy, it would be as if text predictors pulled words from a thesaurus instead of a list of common words.