
How Project Justification Policies May Impact AI

How do companies justify tech projects? That’s a question that often comes up in stories, social media, and other forums. There are a lot of views, because the topic isn’t exactly cut and dried, and because different constituencies have different answers: vendors and users (meaning sellers and buyers), investors and employees, and so forth. I’ve dug back into past comments I’ve gotten on the topic, and also into the model of buyer behavior I developed over my years as an analyst. Here’s what I’ve found.

Let’s bound the question to my opening one, meaning that we’ll leave out what vendors/sellers might like to see and look instead at the pure buyer perspectives. Let’s create a further boundary and look at the way that financial management (CFOs) and other senior management see it, because it’s their view of a project’s cost/benefit that will determine approval. OK? Then let’s proceed.

We can create a kind of family tree of justifications. At the top, they divide into quantified and intangible, meaning that both costs and benefits may fall into either category. The quantified category can be further divided into direct monetary impact and indirect, perhaps somewhat suppositional, impact. The intangible category includes outside mandates (regulatory, tax, etc.) and “soft” impacts such as social policy, outreach, sales/marketing, and competitive reaction.
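To make the tree concrete, here’s a minimal sketch of that classification in Python. The leaf descriptions come straight from the paragraph above; the structure, key names, and lookup helper are mine, purely for illustration.

```python
# A sketch of the justification "family tree" as a nested dict.
# Leaf text mirrors the categories named in the article; the key
# names themselves are illustrative choices.
JUSTIFICATION_TREE = {
    "quantified": {
        "direct": "direct monetary impact (hard cost or revenue numbers)",
        "indirect": "indirect, somewhat suppositional, monetary impact",
    },
    "intangible": {
        "mandates": "outside mandates (regulatory, tax, and similar)",
        "soft_impacts": "social policy, outreach, sales/marketing, competitive reaction",
    },
}

def classify(category: str, subcategory: str) -> str:
    """Look up where a given cost/benefit element sits in the tree."""
    return JUSTIFICATION_TREE[category][subcategory]

if __name__ == "__main__":
    print(classify("intangible", "mandates"))
```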

These cost/benefit elements are applied in their own variable framework. A company can be public, private, or non-profit; small or large; B2B or B2C. The projects can also be small or large, focused specifically on cost management or revenue augmentation, or perhaps on both. The costs can be expensed, depreciated over a short period, or depreciated over a long one. Funds may be drawn from cash flow, borrowed, or raised in an offering of formal corporate debt or stock.

Tech projects, like others, are subject to more justification scrutiny when they’re large, particularly when they’re large enough to impact the bottom line. In those cases, well over 90% of enterprises say that they will have to present hard proof of costs and benefits, and the projects will have to pay off by generating an adequate ROI. The most rigorous tests apply discounted cash flow analysis, but most companies assess the project’s ROI against their internal rate of return (IRR) target. Projects without a significant bottom-line impact may instead be evaluated on a “payback period”, meaning that the return/benefits have to offset the cost within a specified period of time.
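As a rough illustration of the two tests, here’s a short Python sketch of a discounted-cash-flow (NPV) check against an assumed hurdle rate, plus a simple payback-period calculation. All figures are hypothetical, not drawn from any enterprise.

```python
# Illustrative sketch: NPV against a corporate hurdle rate, and a
# simple payback-period check. Every number here is made up.

def npv(rate: float, cash_flows: list[float]) -> float:
    """Net present value; cash_flows[0] is the year-0 outlay (negative)."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

def payback_period(cash_flows: list[float]) -> float | None:
    """Years until cumulative cash flow turns non-negative, or None if never."""
    cumulative = 0.0
    for year, cf in enumerate(cash_flows):
        cumulative += cf
        if cumulative >= 0:
            return float(year)
    return None

# Hypothetical project: $1M cost, $350K/year of benefit for 4 years.
flows = [-1_000_000, 350_000, 350_000, 350_000, 350_000]
hurdle = 0.12  # assumed internal rate-of-return target

print(f"NPV at {hurdle:.0%} hurdle: {npv(hurdle, flows):,.0f}")
print(f"Payback period: {payback_period(flows)} years")
```

Under these assumed numbers the project clears a 12% hurdle with a small positive NPV, and pays back within three years; a tighter payback window or a higher hurdle would kill it.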

The biggest problem tech planners cite in project approval is that costs tend to be “hard” numbers, known rather precisely, while benefits are usually more subjective or “soft”. Financial benefits are either reductions in cost or increases in sales/revenues, and enterprises say that roughly half the tech projects aimed at cost reduction fail to deliver fully, and almost three-quarters of those aimed at sales/revenue gains fall short. That’s why the bigger the project, the greater the risk of a shortfall in its justification.
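One way to visualize that hard-cost/soft-benefit asymmetry is to haircut a claimed benefit by an assumed probability of full delivery. The sketch below does that; the delivery probabilities loosely echo the figures enterprises reported above, but the risk-adjustment method and the shortfall assumption are my illustration, not a practice the planners described.

```python
# Sketch of why "soft" benefits raise justification risk: discount the
# claimed benefit by an assumed probability of full delivery. The
# probabilities loosely echo the survey figures quoted in the text;
# the 50% shortfall assumption is purely illustrative.

def risk_adjusted_benefit(claimed: float, p_full: float,
                          shortfall_fraction: float = 0.5) -> float:
    """Expected benefit if a project that falls short delivers only
    shortfall_fraction of its claimed benefit (an assumption)."""
    return claimed * (p_full + (1 - p_full) * shortfall_fraction)

cost_reduction = risk_adjusted_benefit(500_000, p_full=0.5)   # ~half deliver fully
revenue_gain   = risk_adjusted_benefit(500_000, p_full=0.25)  # ~3/4 fall short

print(f"Risk-adjusted cost-reduction benefit: {cost_reduction:,.0f}")
print(f"Risk-adjusted revenue benefit:        {revenue_gain:,.0f}")
```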

Tech planners tell me that one of their primary goals with new technology is to contain the investment in the initial project, even to the point of doing a proof-of-concept or field trial first. This hardens the soft numbers to the point where senior management is comfortable with them. However, it’s critical to ensure that these limited trials of a tech project match the actual deployment closely enough to allow the results to be projected forward confidently.

AI has posed a particular challenge for project justification, enterprises say, for a number of reasons. Foremost is the fact that enterprises don’t understand AI well at all, and know they don’t. One senior enterprise IT planner/architect said, “We thought we knew cloud, and it turned out we didn’t know enough. We don’t even think we know AI, so you can see where that’s going!” Many, because generative AI chatbot-like services are available as expensed items, have “learned AI” from those, only to find that it’s not generative, hosted AI that’s likely to get approved for the really helpful projects. One reason the AI agent concept has created real interest and even excitement is that you can relate it to other software projects more readily. I’ve talked in the past about the fact that enterprises divide agent projects into three categories based on how they interact with the business and workers; AI planners point out that this could have been done with AI in general from the first, but the divisions eluded them until AI got “compartmentalized”.

The nice thing about agent-centric AI projects, enterprises say, is that the relatively specialized agents can be hosted using far fewer resources, at lower costs, than full-bore generative AI. It’s possible, they say, to create what you could call an “agent Raspberry Pi”, a testbed that can run many different agent models, and to use it to trial multiple agents within any or all of the three categories of agent technology. One enterprise said that one rack of GPU-centric servers and a good network connection between them and the data center switches was all it took.
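Here’s a toy sketch of what that shared testbed might look like in code: a fixed GPU pool that admits trials of specialized agent models as capacity allows. The trial names, the category labels, and the resource figures are all hypothetical; a real shop would lean on whatever orchestration its data center already runs.

```python
# Toy sketch of the "agent Raspberry Pi" idea: one shared GPU pool
# hosting trials of many specialized agent models. Names, categories,
# and GPU counts are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class AgentTrial:
    name: str
    category: str      # placeholder for one of the enterprise's three agent categories
    gpus_needed: int

class Testbed:
    def __init__(self, total_gpus: int):
        self.free_gpus = total_gpus
        self.running: list[AgentTrial] = []

    def start(self, trial: AgentTrial) -> bool:
        """Admit a trial only if the shared rack has GPUs to spare."""
        if trial.gpus_needed <= self.free_gpus:
            self.free_gpus -= trial.gpus_needed
            self.running.append(trial)
            return True
        return False

rack = Testbed(total_gpus=8)  # "one rack of GPU-centric servers"
for t in [AgentTrial("invoice-triage", "back-office", 2),
          AgentTrial("field-dispatch", "worker-facing", 4),
          AgentTrial("doc-summarizer", "back-office", 4)]:
    print(t.name, "started" if rack.start(t) else "queued")
```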

There seems to be an increase in the number of enterprises that are organizing their approach to AI by organizing how they do trials of AI agent applications. Right now, one in nine have at least some sense of a trial pipeline, and one in fifteen have an established program for managing these trials and converting them into production. One key rule that’s emerged is “don’t let a trial-to-production transition carry the trial testbed resources into production”. The risk of the trial pipeline is that the trial’s resources can be viewed, in a sense, as zero-cost. That assumption is false if you presume that the justification has to stand on its own, but some projects get approved because the actual cost of new agent hosting is omitted, since it’s not in the trial. Companies address this (when they’ve recognized the problem) by mandating that the justification include the current cost of the resources the trial identifies as necessary, not the cost of the testbed (usually lower, if it was purchased earlier), and not zero or the current depreciation/amortization charges.
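Here’s a minimal sketch of that costing rule, with made-up numbers: the business case carries today’s price for the resources the trial identified as necessary, not the testbed’s sunk cost, not its depreciation charge, and not zero.

```python
# Sketch of the trial-to-production costing rule described above.
# All figures are invented for illustration.

def justification_cost(required_gpus: int,
                       current_price_per_gpu: float) -> float:
    """Cost to use in the business case: the *current* price of what
    the trial identified as necessary for production."""
    return required_gpus * current_price_per_gpu

testbed_sunk_cost = 120_000    # what the testbed cost when it was bought
annual_depreciation = 24_000   # its current book charge
wrong_answers = (0, testbed_sunk_cost, annual_depreciation)

right_answer = justification_cost(required_gpus=16,
                                  current_price_per_gpu=12_500)
print(f"Cost to include in justification: {right_answer:,.0f}")
print(f"Figures the rule says NOT to use: {wrong_answers}")
```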

While some companies give AI projects a bit of slack when applying ROI targets, some also hold them to a higher standard. The reason is a combination of the pace of AI evolution and their own lack of AI credentials. Obviously, not understanding the technical basis of a technical project means your risk is higher. Less obviously, but equally true, if you are buying into something that may become obsolete quickly, you have to at least consider writing down your investment more quickly. The testbed notion of AI may help alleviate both risks. It focuses companies on thinking of AI infrastructure the same way they think of data center infrastructure: as a platform. It may also separate the model aspect of AI from the platform aspect, and demonstrate that this independence can reduce the risk of obsolescence as AI matures.
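A quick worked example of the write-down point, under assumed straight-line depreciation and an assumed ROI target: halving the write-down period sharply raises the annual benefit the project must produce, which is exactly the higher standard some companies are applying.

```python
# Sketch of the obsolescence math: the faster you must write down an
# AI investment, the larger the annual benefit needed to clear the
# same ROI target. Straight-line write-down and all figures are assumed.

def required_annual_benefit(investment: float, years: int,
                            roi_target: float) -> float:
    """Annual benefit needed to cover the write-down plus the ROI target."""
    return investment / years + investment * roi_target

capex = 2_000_000
target = 0.12  # assumed corporate ROI target

for life in (5, 2):  # data-center-style life vs. fast-obsolescence life
    print(f"{life}-year write-down: "
          f"{required_annual_benefit(capex, life, target):,.0f}/year")
```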