Despite the rush to integrate powerful new models, only about 5% of AI pilot programs achieve rapid revenue acceleration; the vast majority stall, delivering little to no measurable impact on P&L.
The research—based on 150 interviews with leaders, a survey of 350 employees, and an analysis of 300 public AI deployments—paints a clear divide between success stories and stalled projects.
Wait, we have AI flying planes now?
It took me a while to realize it is an Otto pilot…
I know you’re joking, but for anyone who isn’t in on it: the headline means “startups”, and they just wanted to avoid the overused term.
Also, yes, it actually is far easier to have an AI fly a plane than a car. No obstacles, no sudden changes, no little kids running out from behind a cloud bank, no traffic except during takeoff and landing, and even those phases can be automated more and more.
In fact, we don’t need “AI” for this; we’ve had autopilots that handle almost all aspects of flight for decades now. The F/A-18 Hornet famously has hand-grips that the pilot is supposed to hold onto during takeoff so they don’t accidentally touch a control.
Conversely, AI running ATC would be a very good thing. To a point.
It’s been technically feasible for a while to automatically handle 99% of what an ATC does. The problem is that you really want a human to step in for the 1% of situations where things get complicated and genuinely dangerous. Except the human won’t keep their skills sharp through constant use unless they’re handling at least some of the regular traffic.
The trick has been to have the AI do, say, 70% of the job and have a human step in for the rest. Deciding when to hand off to the human is the hard problem.
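In sketch form it’s something like this (all the names and numbers are made up; picking the actual threshold is exactly the hard part):

    import random

    CONFIDENCE_THRESHOLD = 0.95   # below this, escalate to a human (made-up number)
    SKILL_RETENTION_SHARE = 0.30  # slice of routine traffic humans still work

    def handle_by_human(situation):
        # Placeholder for paging the human controller.
        return f"human handles {situation}"

    def route_traffic(situation, propose):
        """propose(situation) must return (action, confidence in [0, 1])."""
        action, confidence = propose(situation)

        # The hard 1%: the automation isn't confident, so a human takes over.
        if confidence < CONFIDENCE_THRESHOLD:
            return handle_by_human(situation)

        # Routine traffic: hand a random slice to humans anyway, purely so
        # their skills stay sharp for the cases the automation can't handle.
        if random.random() < SKILL_RETENTION_SHARE:
            return handle_by_human(situation)

        return action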
What do you think an autopilot is?
Mild altitude and heading corrections.
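In toy form each axis is just a little feedback loop. A sketch with made-up gains (a real autopilot is a certified control system, not fifteen lines of Python):

    class AltitudeHold:
        """Toy PID loop that nudges pitch to hold a target altitude."""

        def __init__(self, kp=0.004, ki=0.0002, kd=0.01):  # made-up gains
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, target_ft, current_ft, dt):
            error = target_ft - current_ft
            self.integral += error * dt
            derivative = (error - self.prev_error) / dt
            self.prev_error = error
            pitch = self.kp * error + self.ki * self.integral + self.kd * derivative
            # Clamp the command so the corrections stay mild.
            return max(-5.0, min(5.0, pitch))  # pitch command, degrees

    # e.g. hold 35,000 ft, sampled ten times a second:
    hold = AltitudeHold()
    pitch_cmd = hold.update(target_ft=35000.0, current_ft=34950.0, dt=0.1)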
Someone will be around to say “not real AI”, and I think that’s the wrong way to look at it.
It’s more “real AI” than the LLM slop companies are desperately trying to make the future.
A finely tuned model based on an actual understanding of physics, not a glorified Markov chain.
To be fair, that also falls under the blanket of AI. It’s just not an LLM.
No, it does not.
A deterministic, narrow algorithm that solves exactly one problem is not an AI. Otherwise the Pythagorean theorem would count as AI, or any other mathematical formula for that matter.
Intelligence, even in terms of AI, means being able to solve new problems. An autopilot can’t do anything other than pilot a specific aircraft, and that’s a good thing.
Not sure why you’re getting downvoted. Well, I guess I do. AI marketing has ruined the meaning of the word to the extent that an if statement is “AI”.
Because they are wrong. An airplane autopilot is not “one model”; it’s a complex set of systems that take actions based on a trained model. The training of that model used standard ML practices. Sure, it’s a basic algorithm, but it follows the same principles. That’s textbook AI.
No one would have debated this pre-LLM. That being said, if I were in the industry, I’d be calling it an algorithm instead of AI, because those out of the know, well, won’t get it.
I’d argue that an artificial intelligence is a (usually computational) system that can mimic a specific behavior we consider intelligent, deterministic or not: playing chess, writing text, piloting an aircraft, etc.
And you’d argue wrong here, that is simply not the definition of intelligence.
Extend your logic a bit. Playing an instrument requires intelligence. Is a drum machine intelligent? A mechanical music box?
Yes, the definition of intelligence is vague, but that doesn’t mean you can extend it indefinitely.
I want to point out three things:
1) That’s a weak argument without substance. “No, you!” is not exactly a good counter.
2) Yes, that’s exactly what I’m talking about, which refutes your argument in 1).
3) That’s a whole different discussion. That intelligence is required to build something has nothing to do with whether the product itself is intelligent. The fact that you manage to mangle that up so badly is almost worrying.
I don’t know where you’re getting your definitions, but you are wrong.
For example, the humble A* pathfinding algorithm falls under the domain of AI, despite being a relatively simple and common technique. Even solving small problems is still problem solving.
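For reference, a bare-bones grid version fits in a screenful (Manhattan-distance heuristic; the grid, names, and 4-neighbour moves are just the textbook setup):

    import heapq

    def a_star(grid, start, goal):
        """Shortest path on a grid of 0 (free) / 1 (wall); cells are (row, col)."""
        def h(cell):  # Manhattan-distance heuristic
            return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

        frontier = [(h(start), 0, start, [start])]  # (f, g, cell, path)
        best_g = {start: 0}
        while frontier:
            _, g, cell, path = heapq.heappop(frontier)
            if cell == goal:
                return path
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt = (cell[0] + dr, cell[1] + dc)
                r, c = nxt
                if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] == 0:
                    ng = g + 1
                    if ng < best_g.get(nxt, float("inf")):
                        best_g[nxt] = ng
                        heapq.heappush(frontier, (ng + h(nxt), ng, nxt, path + [nxt]))
        return None  # no path exists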
I’m sorry, but that’s the worst possible conclusion you can draw from that paragraph.
Again, follow your argument to its conclusion. What would not fall under AI in your world? If A* counts, then literally everything with a simple ‘if’ statement would also count. That’s delusional.
Do actually read the article and the articles it links. Are you really, really implying that a simple math equation, one that could be solved by a handful of transistors and capacitors if need be, is doing something “typically associated with human intelligence”? Really?
Can text generators solve new problems though?
To a certain extent, yes.
ChatGPT was never explicitly trained to produce code or translate text, but it can do both. Not especially well, but it manages reasonable output most of the time.
That’s terrifying, but I don’t see why my regional train can’t be driven by AI in the middle of the night.