Not considering the cost of 'thinking' when paying for AI
Why the trivialization of AI work produces worse projects rather than cheaper ones, and how to buy AI help when everyone selling it is also using it.
A few weeks ago I sent a buyer a scoped proposal for an implementation project. He came back and told me a second firm had quoted him roughly a tenth of my number, and a third had landed somewhere in between, closer to the low end. He wanted to know why mine was what it was.
It was a fair question. It was also the only question he was asking, and that told me most of what I needed to know about how the project would go if he picked the cheap quote.
The spread he was looking at is real, and it is not just the normal noise of a young market. AI consulting in 2026 runs from about $50 an hour to $600 an hour for what is nominally the same category of work. One pricing analysis this year put the problem plainly: a $40,000 quote and a $400,000 quote are often answering different questions entirely. “AI transformation” means a four-week assessment to one firm and an eighteen-month enterprise build to another, and the buyer usually cannot tell which one he is reading.
But the width of that range is not only a vocabulary problem. A belief has settled into the buying conversation, and it is doing real work on the buyer’s side of the table before scope is ever discussed. The belief is that AI makes this easy now. You can hear it in the way the negotiation opens. It compresses what the buyer thinks the work is worth, and it does so on the basis of a demo he saw, or a weekend project he built, or a number a chatbot gave him.
The failure rate nobody is pricing in
Here is what the trivialization conveniently steps around. The same eighteen months that produced the “AI makes it easy” consensus also produced a remarkably consistent record of failure.
MIT’s State of AI in Business 2025 report found that 95 percent of corporate generative AI pilots delivered no measurable impact on the P&L. S&P Global’s 2025 enterprise survey found that 42 percent of companies had abandoned the majority of their AI initiatives before they reached production, up from 17 percent the year before. Gartner expects more than 40 percent of agentic AI projects to be cancelled by the end of 2027, and attributes the cancellations to escalating costs, unclear business value, and inadequate risk controls.
I want to be fair about the MIT number, because it has been repeated more confidently than it deserves. Its definition of success was narrow: measurable P&L impact inside roughly six months. And it rested on a modest interview base of around fifty executives. Plenty of work that study would score as a failure is producing genuine efficiency gains that simply do not surface in a two-quarter P&L window. The S&P figure is harder to wave away. “Abandoned before production” is not a definitional gray area, and a jump from 17 to 42 percent in a single year is not statistical noise.
The connection worth drawing is that the trivialization and the failure rate are not two separate stories. The belief that the work is easy is what leads a buyer to underfund the parts of it that are not easy, and the parts that are not easy are precisely the parts that decide whether the thing survives contact with production. A buyer who is certain a project is trivial will not pay for the data cleanup, the evaluation layer, the security review, or the maintenance budget. Then the project becomes one of the 42 percent, and everyone agrees, after the fact, that AI did not deliver.
What the discount is actually cutting
When a buyer negotiates down on the logic that AI makes this easy, it is worth being concrete about what he is declining to pay for.
He is declining to pay for expertise, which is the thing that knows which fifth of his problem is genuinely hard before the project starts rather than after. He is declining to pay for infrastructure. He is declining to pay for the cost of tokens, and that one deserves a sentence of its own, because buyers consistently model it wrong. Token cost is not a software license you buy once and then run at zero marginal cost. It behaves like a utility bill. It scales with usage, it recurs every month the system runs, and it never goes to zero. He is declining to pay for an evaluation layer, the thing that tells him when the system is quietly wrong. He is declining to pay for security and compliance controls sized to his actual data. And he is declining to pay for maintenance, even though a deployed AI system drifts, degrades, and sits on a model that will eventually be deprecated underneath it.
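If the utility-bill framing sounds abstract, here is a minimal sketch of how that line item behaves. Every figure in it is an illustrative assumption, not any provider's actual pricing; the shape is the point, because the bill scales with usage and recurs for as long as the system runs.

```python
# Illustrative sketch of recurring token cost. All figures are assumptions
# chosen for round numbers, not real vendor pricing.

def monthly_token_cost(requests_per_day: int,
                       tokens_per_request: int,
                       price_per_million_tokens: float) -> float:
    """Token spend for one month of steady usage."""
    tokens_per_month = requests_per_day * tokens_per_request * 30
    return tokens_per_month / 1_000_000 * price_per_million_tokens

# A modest internal tool: 2,000 requests a day, ~3,000 tokens each
# (prompt + retrieved context + response), at $5 per million tokens.
month_one = monthly_token_cost(2_000, 3_000, 5.00)    # ~$900 / month

# The same tool a year later, after adoption doubles and prompts grow richer.
month_twelve = monthly_token_cost(4_000, 5_000, 5.00) # ~$3,000 / month

print(f"month 1:  ${month_one:,.0f}")
print(f"month 12: ${month_twelve:,.0f}  -- the bill grows with usage and never stops")
```

Nothing in that sketch is exotic. The point is that the number is not zero after go-live, and it is not fixed either.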
One pricing analysis this year estimated that infrastructure, compute, third-party API and token fees, and post-launch maintenance routinely add 20 to 40 percent to a stated project cost. That range is roughly the gap between an honest proposal and a cheap one. The cheap one has not removed those costs. It has just declined to tell you about them yet.
Why asking a chatbot what to pay is a trap
The most common version of the trivialization now arrives pre-loaded. The buyer has already asked a model what he should pay, and he walks into the conversation anchored to its answer.
Think about what the model actually did. It returned a confident, specific-sounding number. It did so with no knowledge of his data quality, his integration surface, his regulatory exposure, or the token economics of his particular workload, which are the four things that move the real number most. It produced a figure anchored to a fiction, and the figure reads as neutral because it came from a machine rather than from a salesperson.
Then the expert who quotes the real number, the one that accounts for the messy data and the compliance load and the recurring cost of running the thing, looks like a profiteer standing next to the chatbot’s clean answer.
Two things make this worse than an ordinary bad anchor. The model has no stake in the outcome. It will not be in the room when the project is abandoned, and it will never be asked to explain why. And its training data is saturated with the same trivialization we are describing, layered on top of a great deal of vendor content marketing. The SEO-optimized “$22-an-hour AI agency” blog post and a disinterested cost estimate look identical to a language model. So the buyer ends up anchored to laundered marketing and calls it research.
If you are a consultant, this is the mechanism that pushes you hardest. You are not negotiating against a competitor. You are negotiating against a number the buyer believes is objective, produced by a system that absorbed the marketing of every firm willing to underprice the work, and that will face no consequence when the underpriced version fails.
Everyone is an AI expert, because everyone is using AI
The buyer’s skepticism, to be clear, is earned. The market is genuinely full of people who are also just using AI and calling it expertise.
Gartner’s term for part of this is “agent washing.” Of the thousands of vendors claiming agentic AI capability, the firm reckoned only around 130 had anything that genuinely qualified. The rest are rebranded chatbots and existing automation tools. The blended-rate trick is everywhere too: the proposal quotes $200 an hour, the fine print defines that as a blend, the partner bills at $500 and two junior developers bill at $100, and the buyer is paying a premium rate for a largely junior team. So is the discovery phase that was sold as three weeks and somehow runs three months with no deliverable, while the firm bills the client to learn about the client’s own business.
So the skepticism is correct. It is simply being spent in the wrong place. It gets spent on price, which is the easiest thing in a proposal to compare and the least informative. It should be spent on scope, on ownership, and on the cost stack.
What winning the price negotiation actually buys
Here is the part buyers do not want to hear. When you win the price negotiation, you do not get the same project for less money. You get a different and smaller project wearing the same name.
You get a leaner team with more junior people on it. You get a thinner testing layer, or none. You get no maintenance line. And you get a vendor whose margin is now thin enough that every change request becomes a fight, because it has to be. That project is materially more likely to be one of the 42 percent that never reaches production.
An abandoned project is not a saved budget. It is the original budget, spent, plus the cost of the months lost, plus the cost of doing the whole thing again with someone else. The discount did not lower the cost of the work. It moved the cost into the future and added interest to it.
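To put the interest in numbers, here is a back-of-the-envelope sketch of the two paths. Every figure in it is an illustrative assumption: the quotes are invented, the 30 percent adder is the midpoint of the 20 to 40 percent cost stack above, and it only models the case where the cheap build does end up abandoned.

```python
# The arithmetic of an abandoned project, with round illustrative numbers.
# The quotes, the delay figures, and the redo price are assumptions for
# this example, not data from any real engagement.

cheap_quote   = 40_000
honest_quote  = 120_000
hidden_costs  = 0.30       # midpoint of the 20-40% unstated cost stack
delay_months  = 6          # time lost before the cheap build is abandoned
cost_of_delay = 5_000      # per month: the value the system was meant to create

# Path A: pay the honest quote once and run the system.
path_a = honest_quote * (1 + hidden_costs)

# Path B: take the discount, watch the project get abandoned, then redo it.
path_b = (cheap_quote * (1 + hidden_costs)    # the original budget, spent
          + delay_months * cost_of_delay      # the months lost
          + path_a)                           # doing it again properly

print(f"honest quote, paid once:    ${path_a:,.0f}")  # $156,000
print(f"cheap quote, then the redo: ${path_b:,.0f}")  # $238,000
```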
The quote that looks like the bargain is, often enough, just the one that has not finished telling you what it costs.
A guide to buying AI help
You cannot fix the variance in this market. You can change how you read it. Here is how to think about what you are paying for, and how to tell a real vendor from someone who is also just holding the same tools you could hold yourself.
What your money actually buys
The fee is the visible part. Underneath it sits senior expertise, which is mostly the ability to identify the hard 20 percent of your problem before the project starts. Underneath it sits infrastructure. Underneath it sits the recurring, usage-scaled cost of tokens, which you should model as an operating expense that continues for as long as the system runs, not as a one-time line item. Underneath it sits an evaluation layer that tells you when the output is wrong. Underneath it sits security and compliance work sized to your data and your industry. And underneath it sits a maintenance budget, because the system will drift and the model beneath it will age.
If a proposal does not name these things, it has not removed them from your life. It has left you to find them on your own, later, at a worse time.
How to read a proposal
Start with scope. Make the vendor define the project in a single sentence you could hand to a stranger in your company and have them understand. If “AI transformation” cannot survive that test, you do not yet know what you are buying.
Ask who actually does the work, by seniority, and whether the rate you were quoted is a blend. Ask explicitly what is not included, naming infrastructure, tokens, integration, and maintenance, and get the answer in writing. Ask about ownership and handoff: when the engagement ends, who holds the code, the infrastructure, the documentation, and the prompts? If the answer is the vendor, you have not bought a system. You have rented one, and the rent does not stop. And ask how you will both know it worked, because a proposal that cannot state its own success metric is asking you to fund a hope.
How to evaluate a vendor when everyone claims to be one
Weigh domain depth more heavily than tool fluency. Everyone has the tools now. Ask a prospective vendor about your business, your regulatory environment, and your failure modes, and listen for whether they tell you things you did not have to tell them first.
Watch for the willingness to say no. A vendor who agrees that every use case on your wishlist is a great fit is selling, not advising. The good ones will tell you which third of the list is not worth doing, and that conversation is worth more than the discount you were chasing.
Treat transparency about the cost stack as a signal. The honest vendor volunteers the 20 to 40 percent that the cheap one leaves out. Ask for references with depth, not a wall of logos: a customer who went live twelve or eighteen months ago, because the failure mode you actually care about shows up well after the demo. And look at how they price. Project-based and value-based pricing align the vendor with your outcome. Pure hourly billing rewards the slow and quietly penalizes the expert who is fast.
Before you sign
A short list of questions, worth asking out loud and writing the answers down:
Can you define this project’s scope in one sentence I could hand to someone outside my company?
Who, by seniority, will actually do the work, and is the quoted rate a blend?
What is explicitly not included in this number?
What will tokens and infrastructure cost to run, per month, after go-live?
How will we both know, six months in, whether this worked?
When the engagement ends, what exactly do I own and operate without you?
Can I speak to a client who went live with you at least a year ago?
And one note on the chatbot. Use a model to prepare for the conversation, by all means. Use it to generate the questions above, to pressure-test a proposal you have been handed, to learn the vocabulary so you are not negotiating blind. Do not use it to set your price anchor. It does not know your data, it does not carry your risk, and it has read far more vendor marketing than you have.
None of this makes AI work expensive for its own sake. Plenty of it is genuinely cheaper than it was three years ago, and plenty of consultants are genuinely overpriced. But a buyer who treats the cheapest quote as a target rather than as a piece of information is not driving a hard bargain. He is volunteering for the 42 percent.


