The first time I left a salaried job to start a business, in June 2011, I hired a consultant for two months. I negotiated one unusual condition: he’d do the actual work in month one, and spend all of month two documenting every step and answering my questions — until I could do it myself.
I had no choice. My bank account made sure of that.
When I joined my current company a while back, I encountered the opposite logic: roughly 30 full-time employees and a sprawling ecosystem of third-party contractors behaving like permanent staff. At one point I personally had 27 contractors allocated to my account. I flagged that I didn’t need them. They stayed anyway.
Today, our department has around 50 FTEs and 80 contractors. Having worked with these kinds of arrangements before, I can say with confidence: very little of what they do requires deep judgment.
I’ve thought hard about why this persists. Here’s what I keep coming back to:
Most corporate leaders have never run a business. Not a real one. Not one where the budget bleeds into your personal bank account and a bad quarter means a hard conversation with your family. When you’ve only ever spent other people’s money, financial discipline feels optional — because for you, it always has been. “How it was is how it will always be” isn’t laziness. It’s a rational response to never having faced consequences.
Budgets reward spending, not efficiency. Next year’s allocation is based on this year’s burn. In a political environment, the size of the budget you manage is a proxy for status and leverage. Nobody gets promoted for returning money to the business. The incentive is to spend it, defend it, and grow it — whether or not any of it makes sense.
Real AI adoption requires exactly what this kind of culture has been systematically destroying. Doing something genuinely new with AI demands current knowledge, honest assessment, and the willingness to make your own team uncomfortable. Most large organizations don’t just fail to reward that — they actively punish it. And the people paying the price for that are exactly the ones you need right now.
The people most capable of leading AI transformation are usually the ones who’ve been pushed to the margins for not playing the political game — the ones who spent time actually learning instead of managing perceptions. Corporate culture has a word for what the climbers do: visibility. What it actually is, most of the time, is a calculated trade. You give up the hours you could spend building real knowledge, and you spend them making sure the right people know your name. For a long time, that was a reasonable bet, but that trade is coming due. Very soon, a lot of people are going to rudely discover that what they’ve been accumulating isn’t experience… it’s exposure (and not the good kind).
Comfort protects itself. Leaders who’ve achieved security inside a broken system are the last people who want to fix it. Change is a threat to the conditions that made them successful. So they manage it, slow it, and wait for someone else to go first.
Here’s what I find almost funny: most organizations like mine are having loud, expensive conversations about introducing AI into new scopes — new tools, new use cases, new transformation roadmaps — while completely ignoring the most obvious AI opportunity sitting right in front of them: replacing work that never required human judgment in the first place.
Yes, we still have FTEs doing automatable work in 2026. That conversation will be uncomfortable, and HR is definitely not ready for it. But third-party contractors we pay a premium for, year after year, with no exit criteria? That’s a different category — ethically and practically. Nobody owes Accenture a permanent revenue stream.
The move is straightforward: have each manager responsible for a contractor create a detailed workflow map of what that person actually does. Or — and this is the one I actually prefer — ask the contractors to document it themselves. Write down exactly what you do, how you do it, and how long it takes. Yes, some of them will see the writing on the wall. That’s fine. We’re not running a charity.
From there, the paths are obvious. Your internal team identifies which tools automate the highest-volume, lowest-judgment tasks. Or you go further and require the agencies — who bill themselves as AI-forward and charge accordingly — to build the agents themselves. It’s an uncomfortable ask. It’s also exactly what any organization with real budget pressure and actual technology leadership is already doing.
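Once the workflow maps exist, the triage itself is almost mechanical. A minimal sketch of the sorting logic — all task names, fields, and scores below are invented for illustration; the real inputs would come from the contractors’ own documentation and a manager’s judgment rating:

```python
# Hypothetical triage: rank documented contractor tasks so that
# high-volume, low-judgment work floats to the top of the automation list.
from dataclasses import dataclass


@dataclass
class Task:
    name: str
    hours_per_week: float  # taken from the contractor's own workflow map
    judgment: int          # 1 (rote) to 5 (deep judgment), scored by the manager


def automation_priority(task: Task) -> float:
    # More weekly hours and less required judgment -> higher priority.
    return task.hours_per_week / task.judgment


# Invented example entries, standing in for a real workflow map.
tasks = [
    Task("Copy figures from vendor reports into slides", 10, 1),
    Task("Reconcile invoice line items against POs", 6, 2),
    Task("Negotiate scope changes with the vendor", 3, 5),
]

for t in sorted(tasks, key=automation_priority, reverse=True):
    print(f"{t.name}: priority {automation_priority(t):.1f}")
```

The scoring formula is deliberately crude; the point is that once the work is written down, deciding what to automate first stops being a debate and becomes arithmetic.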
The goal isn’t to blow it up overnight. It’s to reduce the dependency deliberately, quarter by quarter, until you’ve reclaimed the budget, internalized the knowledge, and stopped paying a permanent premium for work that a well-prompted model could do before lunch.
None of this requires a transformation budget or a consulting engagement. The blocker isn’t capability. It isn’t even technology.
It’s that the people who would need to make this decision are the same people who benefit most from nothing changing.
That’s the real AI readiness problem. Not the tools. Not the models. The incentives baked into the org chart — and the leaders who’ve never had to care about them.