AI Autonomy? I Don’t Buy It
I was working on my second article about downsizing when a former colleague reached out, clearly anxious: “What do you think about AI Autonomy?”
My answer: I don’t buy it. At least not yet.
Not because AI isn’t smart enough. The technology is impressive. But because of a more fundamental question that nobody seems to be asking: When AI makes decisions across functional boundaries, who is accountable for the outcome?
AI Autonomy is not a technology problem. It’s an organization design problem.
And until we solve it, AI Autonomy is just a buzzword.
Two Concepts, Often Confused
Before we go further, let’s clarify two terms that people use interchangeably but shouldn’t:
Responsibility: Who executes the task
Accountability: Who owns the outcome
AI can be responsible for many tasks. It can process data, generate forecasts, flag anomalies, and draft reports faster than any human. No question about that.
But can AI be accountable?
In 1979, an IBM training manual stated: “A computer can never be held accountable. Therefore a computer must never make a management decision.”
46 years later, this still holds. AI cannot be fired. AI cannot be sued. AI cannot feel the weight of a decision gone wrong. When things fall apart, a human must answer for it.
The question is: which human?
The Accountability Vacuum
After 17 years in org transformation, I’ve made a career out of using worst-case scenarios to force stakeholders to agree on accountability. Who answers when things go wrong? Trust me, this is the hardest part of OD. The conversations get uncomfortable. People dodge. Fingers point in circles.
Anyone who says “just build a RACI matrix (Responsible, Accountable, Consulted, Informed) and you’ll be fine” has never actually done this work.
Here’s what I’ve learned:
When AI serves one person, accountability is clear.
If AI helps you write a report, analyze data, or manage your calendar, you are still the owner. AI is your tool. If the report is wrong, it’s on you. Simple.
When AI spans two or more employees or functions, accountability disappears.
The moment AI takes over work that used to involve multiple people or multiple departments, we have a problem. Accountability doesn’t just get “shared.” It evaporates into the gaps between org chart boxes.
Why? Because cross-functional accountability was already negotiated, ambiguous, and political before AI entered the picture. Getting Sales and Operations to agree on who owns a forecast number has always been a battle. AI doesn’t solve this. AI exposes it.
A Real Example: Demand Planning
The handoff between sales forecasting and demand planning is a textbook case.
Company X uses an AI platform for demand planning. The vendor’s marketing claims the platform enables “autonomous scenario planning” and allows planners to “set parameters and let it run.” They promise a 50% improvement in planner efficiency and 15% improvement in forecast accuracy.
Sounds autonomous. Sounds impressive.
But let’s play out the worst case scenario. The AI forecast is wrong. Inventory piles up. The company is sitting on $30 million in unsold product. The CEO wants answers.
What happens?
Sales: “The AI made the forecast, not me.”
Operations: “I only reviewed the exceptions it flagged.”
Supply Chain: “I followed the AI’s recommendation.”
Everyone has a defense. Everyone was just following the process.
Everyone is responsible. Nobody is accountable.
Nobody gets fired. The root cause doesn’t get fixed. Six months later, it happens again.
This is not a hypothetical. I’ve seen versions of this play out in every organization where AI crosses functional lines.
The Cross-Functional Threshold
I’ve reviewed every reorganization case I’ve worked on over 17 years.
To this day, I cannot think of a single cross-functional process where current AI can be the owner. I can’t even think of a small cross-functional task where AI can be the owner.
Note: I said owner. Not that AI can’t complete these tasks. It can. But completing a task is not the same as owning the outcome. Ownership means that when things go wrong, someone answers for it. Someone’s job is on the line. Someone fixes it.
AI can’t do that. And when AI operates across functional boundaries, humans won’t do it either. Because the accountability was never clearly assigned in the first place.
The Absurd Future?
Here’s a thought experiment.
Maybe the future “solution” is to build two AIs. One representing Sales, one representing Supply Chain. Let them negotiate with each other. They can argue about the forecast, escalate disagreements, and reach a “consensus.”
Sounds smart?
Think about it: we’ve spent decades trying to fix cross-functional collaboration between humans. Now we want to build AI agents that replicate the same dysfunction. Just faster.
We would be automating the argument, not the accountability.
The Bottom Line
I’m not saying AI Autonomy will never work. On the contrary, I’m optimistic about its future. The technology will keep improving. The use cases will expand.
But technology is not the bottleneck. Organization design is.
Before we rush to embrace “autonomous AI,” organizations need to answer a fundamental question: Who owns the outcome when AI operates across functions?
Until that question is answered, clearly, explicitly, with real consequences attached, so-called AI Autonomy is just automation with a fancier name.
Or to put it more bluntly: an expensive suggestion box.