
Will AI Be the Death of Software?

Remember the old song “Anticipation”? We need a version for 2026, so let’s call it “Perpetuation”: “Perpetuation is making us rich, by keeping the bubble go-go-going”. It’s starting to look like the Street, meaning of course hedge funds, has invented a whole new kind of bubble, one we could call an “anti-bubble”. Instead of hyping something to the stars, they diss something to the sewers, and then make money by selling short. Then they make more by buying it back and selling again when it gets back to its natural level. If you can do that with a fable that actually creates a positive, traditional bubble somewhere else, you get the best of all possible worlds.

So what’s the fable? “AI will be the death of software”. I don’t have a lot of enterprise feedback on this because it’s so recent, but what I have shows a level of disdain and disbelief rare even in these troubled times. “This is the stupidest thing I’ve ever heard,” one enterprise CIO said. Another said “Only somebody who knows nothing about software, nothing about AI, and nothing about tech project justification could possibly come up with anything that dumb.”

The reason enterprises seem so totally negative on this is simple: AI is enormously expensive compared to traditional software for the stuff we do with traditional software. Transaction processing, generation of reports, even a lot of financial analysis can be done with a very low drain on tech resources. You can do a lot on a personal computer, a tablet, even a smartphone. Could you ask AI what two plus two is? Sure, and how much would it cost to host an entity to answer that? How about a more complex question, like what the diagonal measurement of a square would be? Well, I’m sure there are people who didn’t take geometry or don’t remember it, but even then you’d have to ask what sort of application was driving the question and whether it was worth hosting (or paying for) AI to answer it (the answer, from a guy who used to tutor geometry in high school, is the side of the square times 1.414).
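To underline the cost point: the diagonal question is a one-line computation in any language (the 1.414 in the text is just an approximation of the square root of 2). A minimal sketch:

```python
import math

def square_diagonal(side: float) -> float:
    """Diagonal of a square: the side times the square root of 2 (about 1.414)."""
    return side * math.sqrt(2)

# A square with sides of 10 has a diagonal of about 14.14
print(round(square_diagonal(10), 2))
```

That runs in microseconds on a smartphone, which is the whole argument: hosting an AI model to answer it buys you nothing.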

Come on, people! We’re still running COBOL applications written decades ago because the business case for rewriting/converting to a modern language can’t be justified. And we’re supposed to be converting all this to AI? And doing it at the same time that the Street is questioning whether AI is itself a bubble? So expecting AI to eat software sure sounds as stupid as CIOs and others in the enterprise IT world seem to think it is.

But….

There’s usually a grain of truth, of reality, in any good fable. There surely was never an old woman who lived in a shoe, but would having too many children compromise the ability to feed them or discipline them? What’s the kernel in this AI/software fable, if there is one?

IT spending has always been a mixture of “sustain the existing” and “support new”. Back in the 1980s, the two were almost coequal in their contribution. In the 1950s, the majority of the spending was for new stuff, for the obvious reason that not much had been done with IT up to then. Today, a third of enterprises say that their spending in 2026 will be so tipped toward the sustaining mission that “new” things are statistically insignificant. If we continue on that track we’ll see all of IT becoming a commodity, something that generates benefits only by providing a path to cheaper refresh/modernization.

What drives new spending? New benefits. What drives new benefits? It’s not servers or networks, it’s intelligence. Some new way to empower workers and improve productivity, to create sales and boost profit, that sort of thing. All of that has traditionally been the goal of application software. So, the kernel-of-truth question is whether further development of those traditional software benefits will be captured instead by the onrush of AI.

The best answer to this question, I think, comes from the way enterprises see AI agents being applied. As I’ve noted many times, they tend to see agents as software components, things that fit into a workflow that could otherwise be charted entirely from software components. There are a few things, the enterprises think, that AI could do better than a component written in a modern language (or even one done in COBOL). Some enterprises also think that you could build AI into an application in a tighter way, perhaps to the extent of building the application around a core of AI agent technology. But, big but, not one single AI-agent-exploring enterprise has ever told me they intended to replace an application with AI. None have even considered it.

What some enterprises doing their diligence on AI agents have told me is that, in general, the response time of an AI component is considerably longer than the latency of a typical software component. One enterprise looked at sticking a governance-and-security AI agent into a transaction flow, and determined that the impact on throughput would be intolerable. “AI sometimes took seconds, where software took milliseconds.”

What’s true here, and important as well as true, is that artificial intelligence is suitable as a replacement for human intelligence, not as a replacement for an automated process. That doesn’t mean that AI can’t be faster than humans, more thorough, more accurate, but all these comparisons are to humans, not to software that’s either already performing the task, or where the task is really more about throughput than thinking.

The biggest of all questions here is just how much AI could do without essentially empowering software along with it. Certainly the chatbot model of AI isn’t much good for transaction processing. Do we assume that the way AI would kill software is to kill human software development? Not the same thing, of course, but none of the enterprises I chat with think that AI can eliminate even coding, much less the complete process of software development.

There are people who still think that humanity’s very existence is threatened by AI. That AI will destroy software is another of those Wall Street fables; in the near term, certainly, AI is far more likely to magnify software than kill it. If you want to believe that AI will threaten the human race, then surely software threats are also possible. I’m not building a bunker, though, and I don’t think you should either.