Your AI Transformation Is Failing.
Your Org Chart Is Why.
The first time I left a salary to start a business, in June 2011, I hired a consultant for two months. I negotiated one unusual condition: he’d do the actual work in month one, and spend all of month two documenting every step and answering my questions — until I could do it myself.
I had no choice. My bank account made sure of that.
When I joined my current company a while back, I encountered the opposite logic: roughly 30 full-time employees and a sprawling ecosystem of third-party contractors behaving like permanent staff. At one point I personally had 27 contractors allocated to my account. I flagged that I didn’t need them. They stayed anyway.
Today, our department has around 50 FTEs and 80 contractors. Having worked with these kinds of arrangements before, I can say with confidence: very little of what they do requires deep judgment.
I’ve thought hard about why this persists. Here’s what I keep coming back to:
Most corporate leaders have never run a business. Not a real one. Not one where the budget bleeds into your personal bank account and a bad quarter means a hard conversation with your family. When you’ve only ever spent other people’s money, financial discipline feels optional — because for you, it always has been. “How it was is how it will always be” isn’t laziness. It’s a rational response to never having faced consequences.
Budgets reward spending, not efficiency. Next year’s allocation is based on this year’s burn. In a political environment, the size of the budget you manage is a proxy for status and leverage. Nobody gets promoted for returning money to the business. The incentive is to spend it, defend it, and grow it — whether or not any of it makes sense.
Real AI adoption requires exactly what this kind of culture has been systematically destroying. Doing something genuinely new with AI demands current knowledge, honest assessment, and the willingness to make your own team uncomfortable. Most large organizations don’t just fail to reward that — they actively punish it. And the people paying the price for that are exactly the ones you need right now.
The people most capable of leading AI transformation are usually the ones who’ve been pushed to the margins for not playing the political game — the ones who spent time actually learning instead of managing perceptions. Corporate culture has a word for what the climbers do: visibility. What it actually is, most of the time, is a calculated trade. You give up the hours you could spend building real knowledge, and you spend them making sure the right people know your name. For a long time, that was a reasonable bet, but that trade is coming due. Very soon, a lot of people are going to rudely discover that what they’ve been accumulating isn’t experience… it’s exposure (and not the good kind).
Comfort protects itself. Leaders who’ve achieved security inside a broken system are the last people who want to fix it. Change is a threat to the conditions that made them successful. So they manage it, slow it, and wait for someone else to go first.
Here’s what I find almost funny: most organizations like mine are having loud, expensive conversations about introducing AI into new scopes — new tools, new use cases, new transformation roadmaps — while completely ignoring the most obvious AI opportunity sitting right in front of them: replacing work that never required human judgment in the first place.
We still have FTEs doing automatable work in 2026. That conversation will be uncomfortable, and HR is definitely not ready for it. But third-party contractors we pay an indefinite premium for, year after year, with no exit criteria? That’s a different category — ethically and practically. Nobody owes Accenture a permanent revenue stream.
The move is straightforward: have each manager responsible for a contractor create a detailed workflow map of what that person actually does. Or — and this is the one I actually prefer — ask the contractors to document it themselves. Write down exactly what you do, how you do it, and how long it takes. Yes, some of them will see the writing on the wall. That’s fine. We’re not running a charity.
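To make “document it” concrete: the template doesn’t need to be fancy. Here’s a minimal sketch of the fields I’d ask for, written as a Python record. The names and structure are mine, purely illustrative, not any kind of standard:

```python
from dataclasses import dataclass

@dataclass
class WorkflowEntry:
    """One row in a contractor workflow map. All field names are illustrative."""
    task: str               # what the task is called internally
    steps: list[str]        # exactly what gets done, in order
    inputs: list[str]       # what it consumes: tickets, files, emails
    outputs: list[str]      # what it produces
    hours_per_month: float  # how long it actually takes
    judgment: str           # "none", "some", or "high" -- be honest here
```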
From there, the paths are obvious. Your internal team identifies which tools automate the highest-volume, lowest-judgment tasks. Or you go further and require the agencies — who bill themselves as AI-forward and charge accordingly — to build the agents themselves. It’s an uncomfortable ask. It’s also exactly what any organization with real budget pressure and actual technology leadership is already doing.
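If you want a feel for how mechanical that triage is, here’s a sketch that continues from the WorkflowEntry above, using the same three buckets I propose in the email further down. The labels are placeholders, not doctrine:

```python
def triage(entries: list[WorkflowEntry]) -> dict[str, list[WorkflowEntry]]:
    """Bucket workflows by AI suitability, biggest time sinks first."""
    buckets: dict[str, list[WorkflowEntry]] = {
        "full_automation": [],     # no judgment required: automate outright
        "partial_automation": [],  # some judgment: a human reviews AI output
        "human_required": [],      # high judgment: leave alone for now
    }
    # Sort by volume so each bucket reads as a priority-ordered backlog.
    for entry in sorted(entries, key=lambda e: e.hours_per_month, reverse=True):
        if entry.judgment == "none":
            buckets["full_automation"].append(entry)
        elif entry.judgment == "some":
            buckets["partial_automation"].append(entry)
        else:
            buckets["human_required"].append(entry)
    return buckets
```

The point isn’t the code. It’s that once the workflows are written down, deciding what to automate stops being a transformation project and becomes a sorting exercise.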
The goal isn’t to blow it up overnight. It’s to reduce the dependency deliberately, quarter by quarter, until you’ve reclaimed the budget, internalized the knowledge, and stopped paying a permanent premium for work that a well-prompted model could do before lunch.
None of this requires a transformation budget or a consulting engagement. The blocker isn’t capability. It isn’t even technology.
It’s that the people who would need to make this decision are the same people who benefit most from nothing changing.
That’s the real AI readiness problem. Not the tools. Not the models. The incentives baked into the org chart — and the leaders who’ve never had to care about them.
***
For what it’s worth, I didn’t just write about this. I said it out loud, internally, in writing. A note on context: the email below was written during a period when I’d been quietly labeled a ‘performance problem’ — which, in my experience, is what organizations call people who ask uncomfortable questions out loud. I sent it to my MD. It went nowhere — which, if you’ve read this far, will surprise exactly nobody. Make of that what you will. Here’s the email:

Hi ______,
I want to write this down plainly because the pattern here is obvious: “performance problem” is a convenient label for an independent thinker in an organization that models the values it proclaims only when it’s convenient.
We have 80 contractors (by org chart) supporting a department of about 50 people. I’ve been here long enough to know what most of that contractor work is: process-driven, templated, repeatable work designed by us, executed by them, largely unchanged for years. This is exactly the category of work that AI can now absorb faster, cheaper, and without a recurring external invoice.
Instead, we’re talking about AI workshops, content tools, productivity frameworks, etc. That work is fine. But it’s not the first move. We’re trying to fly before checking whether the runway is clear.
The runway is not clear. There are 80 contractors standing on it, billing us monthly, doing work that a well-configured AI agent could absorb by next quarter.
Here’s how I’d approach it:
- Every manager responsible for contractor spend documents what those contractors actually do — workflow by workflow, task by task.
- If they don’t have time, require the contractors to document it against a standard template.
- Review each workflow for AI suitability (full automation, partial automation, human-required).
- Build a phased plan to reduce external dependency and reallocate spend toward work that requires human judgment.
- Power move: leverage our contractors’ own AI capabilities by having them build the agents themselves.
This does not require layoffs or a re-org. It requires forward-thinking leadership and basic management discipline.
The reason it hasn’t happened is not that the opportunity is unclear. It’s that our system rewards budget preservation more than outcomes. Next year’s allocation is anchored to this year’s burn, which creates a predictable incentive: spend the budget, protect the spend, and call it operational necessity. The people managing the largest contractor pools are not being pushed to improve ROI. They are being rewarded for administering larger pools. And it so happens that bigger budget = greater sense of self-importance and political status, which creates a legitimate (and self-serving) counter-incentive to the ‘values’ we claim to model.
That is not strategy. That is inertia with a budget.
There’s no technology problem here. We have a thinking problem.
We have a stated value of being “[redacted].” What is actually rewarded is agreeing with your boss and protecting headcount. If creative thinking were genuinely valued here, this review would already be underway.
That is a leadership gap, not an AI gap. We are structurally better at preserving systems than questioning whether they should still exist — and AI is exposing that gap fast.
You’ve been great to work with, and I appreciate that we can be honest with each other — which is why I am sharing this with you (to champion, if it suits you) rather than letting it die quietly because nobody said it out loud. I’m not going to spend my own energy pushing this through an organization that has consistently demonstrated it will penalize people who challenge comfortable assumptions. But the business case is real.
Take it, use it, put your name on it. I genuinely don’t care about the credit.
— Steven