Enterprises Don’t Think AI Poses a Risk to Software

Last week should bear out a point I’ve made many times, which is that hedge fund behavior manipulates the stock market and investors in general. It should also prove a second point, that the hype that comes out of a click-here world contributes to this, and a third, that the combination of the two distorts our view of tech reality.

Over the weekend, I heard from 114 enterprises who offered their own views on the underlying question that should be raised by last week’s AI/software brouhaha, which is “What impact would AI really have on software?” The impact might be direct, meaning that AI replaces some software, or indirect in that AI makes developing software so easy and cheap that nobody buys it. Or, of course, there might be no impact.

When AI first came along, enterprises’ view was that it was best seen as a tool for improving business analytics. They reasoned from two truths: AI had to add business value in a significant way, and AI was good at detecting patterns, trends, and relationships. While even for those enterprises this strategic view has tended to get submerged in specific AI tactics, meaning their “AI agent” vision, the enterprises who commented on last week still held to those two truths.

I think the fact that enterprises have tended to see AI agents very differently from the way we hear about them is related to all of this. For them, it’s hard to see how AI creates significant business value other than by analyzing critical, and highly sensitive, company information. It’s the hidden patterns and relationships in this data that create the real AI opportunity. But because the data is so sensitive, it’s the very stuff that enterprises have resisted putting out into the cloud, where its protection is something they can’t assure on their own, or even fully assess.

It’s also true that cloud computing is almost universally seen as more expensive than in-house computing for applications with fairly steady loads. This, combined with the data governance point, has tended to limit cloud value to places where loads vary so widely that sizing in-house resource pools forces an uncomfortable tradeoff between economic efficiency and QoE during peak periods of use. That typically means the front-end elements of applications, particularly those facing customers but also those that are worker-facing.
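The tradeoff can be sketched with a toy model. All of the unit costs and load figures below are hypothetical illustration values, not survey data; the point is only the shape of the argument: in-house pools must be sized for peak load, while cloud capacity is billed roughly on what’s consumed, so cloud wins only when peaks run far above the average.

```python
# Toy model of the in-house vs. cloud cost tradeoff under variable load.
# All rates and loads are hypothetical, chosen only to illustrate the shape.

def in_house_cost(peak_load, unit_cost_owned):
    """In-house pools must be provisioned for peak load to protect QoE."""
    return peak_load * unit_cost_owned

def cloud_cost(avg_load, unit_cost_cloud):
    """Cloud capacity is billed (roughly) on what is actually consumed."""
    return avg_load * unit_cost_cloud

# Assume cloud capacity costs 2x owned capacity per unit (hypothetical).
OWNED, CLOUD = 1.0, 2.0

# Steady back-end workload: average is close to peak, so in-house wins.
print(in_house_cost(100, OWNED), cloud_cost(95, CLOUD))   # 100.0 190.0

# Spiky customer-facing front end: peak is 5x average, so cloud wins.
print(in_house_cost(500, OWNED), cloud_cost(100, CLOUD))  # 500.0 200.0
```

The model is deliberately crude, but it captures why enterprises confine cloud value to variable-load front ends while keeping steady transactional work in-house.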

OK, then, you can see that enterprises really see valuable AI business missions as better suited to internal hosting than to the cloud. That, of course, is not what cloud giants want. The number of enterprises, the total addressable market or TAM, is limited; as many have said, there are only 500 companies in the Fortune 500. If cloud providers want more revenue, they need more average revenue per user (ARPU), so having AI self-hosted messes with their revenue goals.

Nvidia’s CEO says that a capital investment of over $600 billion in AI hosting is clearly justified; the enterprises who commented over the weekend say that’s not true. They’re not sure that level of investment can be justified in the next three years, even including spending on self-hosted AI. This, I think, is important, because if AI is to displace software it not only has to build an enormous application mission and business case for itself, it has to justify displacing the investment already made. They note that SaaS, because it’s a cloud service, faces a major data governance barrier to its spread, and the same would be true for any cloud AI that replaced it.

Some who want to perpetuate the AI-eats-software myth say that, OK, the real point isn’t that AI would replace software but that it would write it, and by doing so make software free; with barriers to market entry gone, there would be so many competitors that the price of software would go to zero.

OK. What’s the price of open-source software? How much software do we run today in thin air, versus on data center equipment? How much network spending is needed to connect these facilities, and to connect them to users? What is the cost enterprises are trying to control with cloud computing, software or hosting? Enterprises don’t envision that even if AI coding were to get much better, it would destroy the software business. Those who postulate an impact at the broad level think it might make open-source better; could companies contribute AI coding instead of people?

Then there’s the specific question of the value of AI coding. Will, at some point, the error rate of AI-generated code fall below the rate of junior or intermediate-level programmers? Most enterprises say “Yes” to that. But enterprises point out that coding accounts for only about 30% of total project cost. Will AI code go into production without testing? No, say enterprises. Can AI do software design, including gathering requirements from users? No, say enterprises. And while most would qualify these negatives with a presumed five-year timeline, that’s no different from their normal planning horizon. It may be fun to postulate what AI might be in ten years or so, and it may generate clicks, but it won’t generate enterprises’ business plans.
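That 30% figure puts a hard ceiling on what AI coding can save, an Amdahl’s-law-style limit: even free code leaves the other 70% of project cost untouched. A quick sketch makes the arithmetic plain (the cost-reduction factors fed in are hypothetical, not enterprise data):

```python
# If coding is only ~30% of total project cost, even a dramatic cut in
# coding cost caps overall project savings at 30% (an Amdahl-style limit).
# The AI cost-reduction factors below are hypothetical illustration values.

CODING_SHARE = 0.30  # portion of project cost that is coding (per the text)

def project_cost_after_ai(coding_cost_reduction):
    """Total project cost, as a fraction of the original, if AI cuts
    coding cost by the given fraction and everything else is unchanged."""
    return (1 - CODING_SHARE) + CODING_SHARE * (1 - coding_cost_reduction)

# Even if AI wrote all the code for free, 70% of the cost remains.
print(round(project_cost_after_ai(1.0), 2))   # 0.7

# A 50% cut in coding cost saves only 15% of the total project.
print(round(project_cost_after_ai(0.5), 2))   # 0.85
```

Design, requirements gathering, testing, and deployment make up the untouched 70%, which is exactly where enterprises say AI can’t yet operate unsupervised.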

I’ve been a programmer and software architect for decades, and I’ve run major development projects. I’m also a believer in, and user of, AI, in a way that reflects both enterprise views and my own experience. I think AI is capable of improving programmer productivity. I think it’s also capable of doing the kind of programming that “citizen developers” using low/no-code tools have been doing. My own experience with AI has shown me that while it can be used with minimal risk by someone who knows what they’re doing, it’s not trustworthy. Even on some personal research tasks I’ve given it, AI throws fliers, wild errors that would be a major problem if they weren’t caught. That may improve over time, but it’s a factor now, and it cements what I hear from enterprises. AI does not pose any threat to software development today, and probably won’t in the next three years or so.

The view that AI won’t eat software doesn’t necessarily mean that software doesn’t have issues, and of course it doesn’t absolve cloud services from those issues either. Enterprises say that the problem with SaaS, for example, is largely data governance; there’s little willingness to host critical data in the cloud, and similarly little willingness to migrate transaction processing and databases to the cloud, for cost reasons. But for software overall, they don’t see AI as a threat. In fact, say some, AI could be a benefit.

AI, recall, is something enterprises think is most valuable in creating business intelligence. Up to now, that’s been applied retrospectively to stored data, but could it be applied online? Could interactive applications either utilize it or contribute to the base analysis it does? Could AI be part of transactional workflows, or even part of the applications that support a prospect’s browsing for product/service information? That would almost certainly mean a revamp of the software framework of those applications, which could be positive for software providers.

AI is software, and AI’s value today really lies in its ability to augment software. It’s a bit of a parasite, yes, but a parasite that kills its host is not successful. Enterprises think the AI-eats-software story is all hype, and I agree.