AI-Era Layoffs Are Turning Your Company Into a Pre-Made Meal
The restaurant I frequent recently hired a new chef, and their signature vinegar cabbage has become inedible. The cabbage is still crisp, but the vinegar has lost all its layered acidity—just a single flat, sharp note. I know this isn’t simply about one chef’s skill. In an industry swept by pre-made meals, even a place that still insists on cooking everything from scratch can no longer find new hires with solid fundamentals.
Pre-made meals are almost perfect from a business standpoint: fast, low labor cost, consistent quality. The output isn’t stunning, but it’s edible. The cost is equally obvious: an entire generation of chefs has been stripped of the environment they need to grow.
In the past, a chef built integrated skill by cycling through dozens of dishes every day—vinegar cabbage, mapo tofu, red-braised pork, sweet-and-sour ribs. No one was deliberately training him. He was training himself through the breadth of dishes he had to cook. Culinary progress is, at its core, built through that breadth, one tiny iteration at a time: pull the heat back two seconds sooner here, let the vinegar infuse a little deeper there. A chef’s skill isn’t something you “learn” and then hold steady. It’s something you sharpen every single meal.
Pre-made meals skip this entire process. They also dismantle the training ground where young cooks were supposed to level up. When an entire industry accepts “good enough” and stops pursuing excellence, the ceiling of what the industry can produce stalls—or collapses. Because a ceiling is not a static target. It is the dynamic result of hundreds of tiny “let me improve this one more time” acts every day.
And the restaurant industry is not an isolated case. The AI-driven layoff decisions your company is making are structurally the same story: capabilities and skills that were once carried along by the system as a natural byproduct are being stripped away, one efficiency optimization at a time.
The Ceiling Blind Spot
When companies run any efficiency optimization—whether adopting AI or switching to pre-made meals—the benchmark is always in the present tense: How much does this save today? How much faster does it make us today?
The core driver behind this benchmark is a single word: speed.
Because customers demand speed and bosses demand speed, anything that clears the minimum bar of “usable” gets accepted. One or two such compromises don’t hurt. But a years-long obsession with speed means nobody ever benchmarks against “potential and future ceiling.”
What will top-tier capability look like in ten years? Where will the real moat be in the second half of the game? Since these questions can’t be quantified in this quarter’s report—and certainly can’t be converted into a year-end bonus—everyone naturally uses the present as their reference system and silently assumes it will remain valid forever. This is what I call the Ceiling Blind Spot.
Where Does the Ceiling Come From?
A sharp pushback here: even if we keep doing everything by hand, we don’t know where the ceiling is either. If nobody can see it, what’s the point of saying AI is blinding us to it?
To answer that, we need to ask a more basic question: where does a ceiling actually come from?
A ceiling is never a static endpoint. It’s a dynamic process, pushed forward by two forces working in tandem.
The first is leaping: a breakthrough brought by new technology. One-shot, discontinuous, impossible to predict with precision.
The second is grinding: the more common, more everyday kind. Through long-term competition and polishing, pushing just slightly beyond what was possible yesterday. Today’s marketing profession looks nothing like the one fifty years ago—not because of some sudden miracle, but because over those fifty years, countless marketers spent every day thinking about how the last campaign could have been better. That’s what accumulated into today.
These two forces don’t operate independently. They are interlocked.
A leaping breakthrough can only take root if it lands on soil that has been cultivated by years of grinding. The first iPhone in 2007 didn’t descend from an alien civilization. It was decades of incremental progress in screens, batteries, and wireless technology finally hitting a critical threshold. Without those decades of grunt work, no one could have conjured that device.
Daily grinding is not the opposite of technological leaping. It is the only precondition that makes leaping possible.
We’ve Turned AI Into a Pre-Made Meal
AI is very fast, and its ability to synthesize is extraordinary. What it synthesizes is not one or two people’s experience but the distilled output of countless human works absorbed through training data. So some will argue that AI output should exceed what any individual human can produce—especially when a small number of exceptionally talented people push AI to its limits, feeding in high-quality input that should raise AI’s ceiling for everyone.
But the training mechanism of current AI models systematically suppresses minority preferences. In academic circles, this is called preference collapse. When the vast majority of users accept “good enough,” even if a tiny minority pushes for excellence, the model optimizes in the direction of “satisfy the vast majority.” The input from the few who pursue the highest standards gets structurally erased in training.
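The mechanism can be sketched with a toy Bradley-Terry reward model, the kind of pairwise-preference aggregation used in RLHF-style training. Every number here is an illustrative assumption (two candidate answers, a 95/5 annotator split), not a description of any real pipeline:

```python
import math
import random

random.seed(0)

# Toy setup: two candidate answers.
#   A = "fast, good enough"; B = "slower, excellent".
# 95% of simulated annotators prefer A, 5% prefer B.
comparisons = ["A"] * 95 + ["B"] * 5  # winner of each pairwise comparison

# One scalar reward per answer, fit by stochastic gradient ascent on the
# Bradley-Terry log-likelihood: P(A beats B) = sigmoid(r_A - r_B).
r = {"A": 0.0, "B": 0.0}
lr = 0.05

for _ in range(2000):
    winner = random.choice(comparisons)
    loser = "B" if winner == "A" else "A"
    p_winner = 1.0 / (1.0 + math.exp(r[loser] - r[winner]))
    # Gradient of log P(winner beats loser) with respect to each reward.
    r[winner] += lr * (1.0 - p_winner)
    r[loser] -= lr * (1.0 - p_winner)

# The fitted rewards encode only the majority vote.
print(r["A"] > r["B"])
```

The point of the sketch: the 5% who preferred the excellent answer leave no trace in the fitted rewards. Their signal is averaged into the majority's, which is preference collapse at the smallest possible scale.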
So, beyond speed, AI has not delivered anything that exceeded my imagination. It typically gets me to about 90% of what I had in mind. In rare cases it hits 100% or 105%. But it has never handed me something that made me gasp and say “I had no idea it could be done this way.”
AI is not incapable of breaking through human imagination. AlphaGo’s “Move 37” in its match against Lee Sedol stunned the world’s top Go players. AlphaFold’s protein structures exceeded what biologists had built up over decades of intuition. These are genuine moments where AI broke past human imagination. But those moments came after humans spent years designing extremely sophisticated training environments with clear rules and measurable outcomes, forcing the AI to grow something new on its own. In everyday work, our use of AI is far more straightforward: open the dialog box, type a prompt, glance at the output, think “eh, not perfect, but usable,” and ship it.
AI’s built-in mechanism for suppressing minority preferences, combined with our refusal to provide the depth and friction of real collaboration, has turned it into a pre-made meal.
We could have used AI as a sous-chef—in rounds of scrutiny and tearing up the draft, pushing out that last millimeter beyond the ceiling. But we couldn’t be bothered. We wanted speed. We accepted “good enough.” And so the moment that would have made you pound the table—along with that one millimeter of human progress—quietly evaporated without a trace.
AI-Era Layoffs
Today, there’s no need to debate whether to embrace AI. Almost every business leader has already answered “yes.” AI has handed them two especially tempting bullets.
The first bullet is direct task replacement. Especially for jobs that once sounded prestigious—“senior analysis, synthesis, integration”—writing research reports, doing competitive analysis, pulling data. This was the core work of knowledge workers. Now AI does it faster, more consistently, and without complaining or calling in sick.
The second bullet is far more subtle: the evaporation of human coordination. Research from Asana shows knowledge workers spend about 60% of their time on coordination—group chats, alignment, status-chasing, conflict resolution. A large chunk of that coordination was organized around the very analysis tasks AI is now taking over. When the task is gone, the coordination around it has no reason to exist.
So executives see a perfect picture on their dashboard: fewer people, lower coordination costs, output still holding. A supremely satisfying Double Kill.
But hidden inside the second bullet is a lethal cognitive trap.
What AI actually reduces is coordination around codifiable tasks. The uncodifiable coordination—reading nuance in ambiguous situations, managing cross-functional trust, aligning judgment across parties with conflicting interests—has not decreased. In fact, it may have increased, because someone now has to verify AI output at critical checkpoints, resolve logical conflicts between multiple AI-generated artifacts, and clean up failures AI cannot be held accountable for.
When business leaders see “expected total coordination time is down” and use that as the basis for another round of cuts, the people most likely to be collaterally damaged are precisely the ones doing the uncodifiable work.
The Trap of Cross-Level Layoffs
The most extreme version of AI-triggered layoffs is wholesale delayering: eliminating entire junior or middle tiers, and having a handful of senior employees plus AI deliver the output directly.
On current financial statements, this decision looks spectacular. One senior employee plus AI can replace an entire team. Costs drop, speed rises, quality is arguably acceptable.
But the cost is this: once a tier is erased from the organization wholesale, the capabilities it carried vanish with it. And that disappearance is irreversible.
Five years later, when you discover a critical piece is missing and try to hire it back from the outside market, you’ll find you can’t. The candidate pool at that tier has shrunk across the entire market. When every company makes the same “clever” decision, when no company is paying to cultivate that tier, the industry’s talent reservoir dries up.
That’s the moment you realize: the junior roles you replaced with AI weren’t just “cheap labor.” They were the training ground for your organization’s future senior employees.
Without junior roles doing the grunt work, without newcomers climbing the ladder step by step, without young employees developing business instinct through countless small mistakes—when you eventually need a seasoned expert who can operate independently, the market simply cannot grow one.
This is the same mechanism by which pre-made meals have broken the integrated training environment that produces skilled chefs.
The Downward Spiral
By this point, two lines of degradation in the AI era have already begun to interlock.
One is on the output side: AI’s built-in preference collapse, combined with our unwillingness to push back, keeps it locked at “90%.” The other is on the input side: cross-level layoffs eliminate junior and mid-level roles, ensuring the next generation of people who could push AI to 95% or higher will never emerge.
The two lines accelerate each other. The more AI behaves like a pre-made meal, the more reason companies have to cut people. The deeper they cut, the fewer people will exist in the future who could push beyond pre-made-meal-level output. This is not two pieces of bad news running in parallel. This is a self-reinforcing downward spiral.
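The shape of that spiral can be made concrete with a toy difference-equation model. All coefficients below are invented for illustration; the only claim is the curve's shape: when each cut is justified by an edge the previous cut already shrank, the decline accelerates rather than leveling off.

```python
# Toy feedback loop (all coefficients are illustrative assumptions):
#   edge     = visible human edge over raw "pre-made" AI output
#   pipeline = relative size of the junior/mid tier that grows future experts
edge = 1.0
pipeline = 1.0

history = []
for year in range(1, 11):
    # The smaller the visible edge, the easier another cut is to justify.
    cut = 0.05 + 0.25 * (1.0 - edge)
    pipeline *= (1.0 - cut)
    # Next year's edge is produced by whoever is left in the pipeline.
    edge = min(1.0, 0.5 * edge + 0.6 * pipeline)
    history.append((year, round(pipeline, 3), round(edge, 3)))

for year, p, e in history:
    print(f"year {year:2d}: pipeline={p:.3f} edge={e:.3f}")
```

In this toy run the first few cuts look harmless because the edge holds steady; the collapse only shows up after the pipeline drops below the level needed to replenish it, which is exactly why a dashboard watching this year's numbers never sees it coming.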
Keep at Least One Person on Every Layer
After seventeen years in organization design, my operating principle is simple: You can lay people off. But keep at least one person on every team, at every layer.
The first pushback I always hear: “What if that layer was already bloat? I’d identified the middle tier as overgrown. AI arrives, I take the chance to trim it—why keep one?”
It’s a sharp objection and deserves a real answer.
The middle tier in most companies is actually two different things:
Type A: pure headcount bloat. Tiny spans of control, absurdly long reporting lines. Cutting the whole tier is correct. It should have been cut before AI even arrived.
Type B: a load-bearing wall in the chain that passes capabilities down through the organization. The value they carry is simply hard to quantify. They may be mentoring newcomers, doing uncodifiable cross-functional coordination, or serving as the translation layer between execution and strategy.
The fatal problem: Type A and Type B look identical on a financial statement. You can’t tell which is fat and which is bone.
What I oppose is not cutting bloat. Cut it! What I oppose is this: business leaders who can’t distinguish Type A from Type B, seduced by AI’s promise of efficiency, default to treating everything as Type A and uproot Type B along with the rest.
“Keep at least one person per layer” is not about protecting bloat. It is a defensive mechanism—a very cheap insurance policy on your own survival. Real unknown risk—the kind that doesn’t yet have a name or a job description—cannot be addressed with a dedicated project. It can only be absorbed by structural redundancy within the organization. Think of a competitive bodybuilder’s body fat: the lines look perfect at 5%, but a single cold can collapse the system. Moderate redundancy is not inefficiency. It is reserve.
But the one you keep can’t be just anyone. Not the most obedient, the cheapest, or the best at polishing status reports. It has to be the node—the person who can mentor newcomers, make judgment calls in ambiguous situations, and hold a conversation with both the executive and the intern. They are the ember of that layer, not its leftover.
If a team can’t even identify such a person, then its problem isn’t “can AI replace us.” It’s something much deeper: organizational tissue death. And AI cannot cure that.
There’s an even more radical objection: AI will eventually deliver full cross-functional automation. Organizational layers will disappear entirely. Why keep anyone? In a previous article, I used Sun’s Decision Authority Matrix to demonstrate that the “Civilian × Autonomous Process” domain—fully autonomous AI execution across departments—is structurally unimplementable in commercial settings for the foreseeable future, because halting authority cannot be exercised, the cost of halting is unbearable, and accountability cannot be assigned. (The full argument is in “AI Runs Entire Kill Chains in War. In Business, It Can’t Even Run a Supply Chain.”) Here, I’ll state only the conclusion: the chains that actually run in the real world are still held up by organizational structure and people. “Keep one per layer” is not an obsolete recommendation. It is the only executable defense against error.
This Isn’t a Benefit for Employees. It’s an Insurance Premium for Shareholders.
Perhaps you’ll say I’m arguing on behalf of employees. Why should shareholders pay for unknown risk? Isn’t keeping these people just dead weight?
Exactly the opposite.
What shareholders buy is not just this year’s financial performance. It is the long-term viability of the company. Organizational redundancy is not a gift to employees. It is a premium shareholders are paying on their own future. A company trimmed too “clean” may see short-term share price appreciation, but its resilience and evolutionary capacity have been severely overdrawn.
There is a classic agency problem buried here. Management has a bounded tenure and a powerful incentive to convert long-term corporate risk into a beautiful short-term financial result during their time in the seat. In this cross-level-layoff frenzy, shareholders are ultimately the victims. Shareholders who actually care whether the company will be alive in five or ten years should not permit management to drain the life-saving cushion as if it were just fat.
The Organization Is More Reliable Than Any Individual
My bottom line: the organization is always more reliable than any individual.
People quit, get sick, have bad days. Organizational structures don’t. A well-designed organization keeps its capabilities even when a specific seat changes hands—because those capabilities are embedded in the workflow, the layer-by-layer mentorship, the cross-function collaboration. They don’t live inside any single person’s head.
But cross-level layoffs in the AI era are doing the exact opposite.
On the surface, the organization is “slimming down.” What’s actually happening: capabilities that should be held and preserved by organizational structure are being concentrated into the hands of a few surviving senior employees. The personal value and indispensability of those few individuals get inflated, at the cost of long-term organizational stability, to a level that brings the organization itself no benefit.
That isn’t slimming down. That is privatizing the organization’s capabilities into the heads of a handful of people.
When a top hotel’s star concierge leaves, the hotel doesn’t just lose an employee. It loses his high-end network, his sharp judgment, the trust he built with clients over years. Because the hotel never designed a mechanism to retain these things, assets that should have belonged to the company walked out the door with him.
Cross-level AI layoffs are replicating this process across the entire white-collar world. The organization thinks it’s using AI to streamline. It’s actually mortgaging its core capabilities to the handful of people still standing.
Years later, when business stalls, key people defect, or a major incident hits, no one will trace it back to that “AI-driven cost optimization” decision. Just as no one today traces bad vinegar cabbage back to that first bag of pre-made meals. Pre-made meals aren’t perfect, but they’re edible and fast, so the boss went with them. Once the habit of aiming for “just passing” sets in, the capabilities that can only grow through slow repetition—like a chef’s muscle memory for heat—quietly evaporate, one at a time.
I Don’t Care If AI Reduces Jobs
My motivation in giving these warnings to companies is not an attempt to save any particular employee’s job.
Companies are, collectively, the machine that keeps the supply of capability flowing through society. If that machine starts cannibalizing itself in the name of efficiency, then I—as the machine’s ultimate end user, as someone who has to eat every day—will eventually pay the price.
So here’s where I actually stand:
I don’t particularly care whether AI reduces job opportunities. Technology has always moved forward. Jobs have always been created and destroyed. That is not a new problem.
What I care about is this: walking into a restaurant and being unable to get a plate of vinegar cabbage that actually has flavor. Shopping for shower gel online and being unable to find a single bottle with decent packaging. Calling any company’s customer service and reaching someone who has lost the ability to solve any problem that falls outside the SOP.
Will AI replace jobs? That’s a problem for business leaders and macroeconomists.
But will the baseline of commercial civilization turn to sand? Will the quality of everyday life for all of us slide? Those are questions for every single one of us.
This is the final installment of a three-part series on AI-era layoffs. The first two are “GenAI Is Cutting People It Can’t Replace” and “AI Didn’t Kill the Best Concierges. It Killed the Path to Becoming One.”

