Despite the rush to integrate powerful new models, only about 5% of AI pilot programs achieve rapid revenue acceleration; the vast majority stall, delivering little to no measurable impact on P&L.
The research—based on 150 interviews with leaders, a survey of 350 employees, and an analysis of 300 public AI deployments—paints a clear divide between success stories and stalled projects.
What do you think an autopilot is?
Mild altitude and heading corrections.
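Something like this toy proportional controller, which I'm making up here just to illustrate the kind of deterministic correction loop being described (real autopilots are far more involved, and the gain and units below are invented for the example):

```python
# Toy sketch of an altitude-hold correction, purely illustrative;
# the gain value and units are made up for the example.

def altitude_correction(target_ft: float, current_ft: float, gain: float = 0.02) -> float:
    """Return a pitch command proportional to the altitude error."""
    error = target_ft - current_ft
    return gain * error  # small, deterministic nudge toward the target

# 200 ft below the target produces a gentle +4.0 pitch-up command
print(altitude_correction(10_000, 9_800))
```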
Someone will be around to say “not real AI”, and I think that’s the wrong way to look at it.
It’s more “real AI” than the LLM slop that companies are desperately trying to make the future.
A finely refined model based on an actual understanding of physics and not a glorified Markov chain.
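For reference, an actual “glorified Markov chain” is trivially small. Here’s a toy bigram generator, with a made-up corpus, purely to make the comparison concrete:

```python
import random
from collections import defaultdict

# Toy bigram Markov chain text generator; the corpus is invented
# for the example.
corpus = "the plane holds the heading and the plane holds the altitude".split()

transitions = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    transitions[a].append(b)  # record which word follows which

word = "the"
output = [word]
for _ in range(8):
    if word not in transitions:  # dead end: last word of the corpus
        break
    word = random.choice(transitions[word])  # sample the next word
    output.append(word)

print(" ".join(output))  # e.g. "the plane holds the heading and the plane holds"
```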
To be fair, that also falls under the blanket of AI. It’s just not an LLM.
No, it does not.
A deterministic, narrow algorithm that solves exactly one problem is not an AI. Otherwise the Pythagorean theorem would count as AI, or any other mathematical formula for that matter.
Intelligence, even in terms of AI, means being able to solve new problems. An autopilot can’t do anything other than pilot a specific aircraft, and that’s a good thing.
Not sure why you’re getting downvoted. Well, I guess I do. AI marketing has ruined the meaning of the word to the extent that an if statement is “AI”.
Because they are wrong. An airplane autopilot is not “one model”; it’s a complex set of systems that take actions based on a trained model. The training of that model used standard ML practices. Sure, it’s a basic algorithm, but it follows the same principles. That’s textbook AI.
No one would have debated this pre-LLM. That being said, if I was in the industry, I’d be calling it an algorithm instead of AI, because those out of the know, well, won’t get it.
I’d argue that an artificial intelligence is a (usually computational) system that can mimic a specific behavior we consider intelligent, deterministic or not: playing chess, writing text, piloting an aircraft, etc.
And you’d be wrong here: that is simply not the definition of intelligence.
Extend your logic a bit. Playing an instrument requires intelligence. Is a drum machine intelligent? A mechanical music box?
Yes, the definition of intelligence is vague, but that doesn’t mean you can extend it indefinitely.
I wanna point out three things:
1) That’s a weak argument without substance. “No, you!” is not exactly a good counter.
2) Yes, that’s exactly what I’m talking about, which refutes your argument in 1).
3) That’s a whole different discussion. That intelligence is required to build something has nothing to do with whether the product is intelligent. The fact that you manage to mangle that so badly is almost worrying.
I don’t know where you’re getting your definitions but you are wrong.
For example, the humble A* Pathfinding Algorithm falls under the domain of AI, despite it being a relatively simple and common process. Even fixing small problems is still considered problem solving.
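For anyone unfamiliar, here’s a minimal A* sketch on a 4-connected grid. The grid layout and names are invented for illustration, but this is the textbook algorithm:

```python
import heapq

def a_star(grid, start, goal):
    """Minimal A* on a 4-connected grid; cells with 0 are walkable."""
    def h(node):  # Manhattan-distance heuristic (admissible on a grid)
        return abs(node[0] - goal[0]) + abs(node[1] - goal[1])

    open_set = [(h(start), 0, start, [start])]  # (f, g, node, path)
    seen = set()
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                heapq.heappush(open_set,
                               (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]))
    return None  # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))  # routes around the wall in row 1
```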
I’m sorry, but that’s the worst possible conclusion you can get from that paragraph.
Again, think your argument through to the end. What would not fall under AI in your world? If A* counts, then literally everything with a simple ‘if’ statement would also count. That’s delusional.
Do actually read the article and the articles linked. Are you really, really implying that a simple math equation, one that could be solved by a handful of transistors and capacitors if need be, is doing something “typically associated with human intelligence”? Really?
Like, are you seriously saying that everyone on Wikipedia is wrong but you? You’re the only one delusional here.
Believe it or not, a bunch of if statements can mimic intelligent behavior. Again, it’s not that it is intelligent; it looks intelligent, which is the whole point (that you obviously missed).
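Case in point, an ELIZA-style toy, entirely made up for this example: a handful of if statements that looks conversational without understanding anything:

```python
# A handful of if statements that "mimics" conversation; names and
# rules are invented for the example.

def reply(message: str) -> str:
    text = message.lower()
    if "hello" in text:
        return "Hello! How are you feeling today?"
    if "sad" in text or "unhappy" in text:
        return "I'm sorry to hear that. Why do you feel that way?"
    if text.endswith("?"):
        return "What do you think the answer is?"
    return "Tell me more."

print(reply("Hello there"))        # looks attentive, is just string matching
print(reply("I feel sad lately"))  # looks empathetic, is just an if branch
```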
Can text generators solve new problems though?
To a certain extent, yes.
ChatGPT was never explicitly trained to produce code or translate text, but it can do both. Not super well, but it manages reasonable output most of the time.