I Thought the Metaverse Was Zuckerberg’s Worst Call. Then He Made AI Usage a KPI.
According to reporting by the Wall Street Journal, Meta now partly evaluates employee performance based on AI usage. Internal targets obtained by media outlets paint an even more specific picture: some engineering teams are aiming for 80% adoption of general AI tools among mid-to-senior engineers, 55% of code changes assisted by AI agents, and targets for 65% of engineers to have over 75% of their committed code AI-assisted.
Meta’s public messaging has been more measured. A spokesperson emphasized that the focus is on the impact AI creates, not just how often it’s used. But when your performance review is tied to quantified AI adoption targets, the distinction between “encouraged” and “required” gets very thin.
I’ve spent 17 years designing performance and accountability systems for organizations. When I read this, my first reaction wasn’t outrage. It was recognition. I’ve seen this pattern before. Not the AI part. The organizational behavior underneath it.
I understand the desperation
Let’s be fair about where Meta is coming from.
The company has accumulated roughly $80 billion in operating losses through its Reality Labs division since 2020, chasing a Metaverse that attracted almost no users and became a global punchline. Layoffs have hit the company in waves: approximately 21,000 jobs cut across 2022 and 2023, 600 from the AI unit in October 2025, around 1,000 from Reality Labs in January 2026, and another 700 in March 2026. And despite aggressive moves in the AI space, Meta has yet to produce a consumer-facing AI product that competes head-to-head with ChatGPT or Claude. The open-source Llama models have earned respect in the developer community, but in the race for mass-market adoption, Meta is not yet in the leading pack.
Capital markets are watching. After the Metaverse debacle, investors need to see that Meta has a credible next act. Embedding AI into performance reviews sends a clear signal: this company is all in on AI. Every employee, every workflow, every output.
I get it. The signal needed to be sent. But desperate people make desperate moves, and desperate moves are rarely attractive.
The KPI itself is a textbook mistake
Evaluating employees on whether they use a specific tool is process-oriented management. It measures compliance, not performance. It tells you that someone opened an application. It tells you nothing about whether the work got better.
Every manager has faced a version of this choice. Your team is behind schedule. You can say “everybody stay late tonight.” Or you can say “I need this on my desk by 9am tomorrow.” The first controls the process. The second sets the outcome. They might lead to the same result, but they are fundamentally different management approaches. The first tells people how to work. The second tells people what to deliver, and trusts them to figure out the how.
“Use AI” is the equivalent of “stay late.” It prescribes a method instead of raising a standard.
If you want a presentation built in one hour instead of four, set that as the expectation. If someone hits it with AI, great. If someone hits it without AI, equally great. You got what you asked for. But under Meta’s system, the person who used AI and took four hours scores higher on their review than the person who delivered in one hour without it. You’re not rewarding performance. You’re rewarding obedience.
And it gets worse. When you incentivize tool usage, employees start forcing AI into workflows where it adds no value, or actively degrades quality, just to check a box. The organization doesn’t get more productive. It gets more performative. People optimize for the metric, not for the outcome. This is Goodhart’s Law playing out in real time: when a measure becomes a target, it ceases to be a good measure.
Even if all-in is the right strategy, this is the wrong execution
Let’s set aside whether the KPI is smart or stupid. Even if Meta’s leadership genuinely believes the entire company needs to adopt AI, the way they’re going about it is a change management failure.
You don’t open a company-wide transformation with the most drastic move available. A sweeping KPI overhaul that affects nearly 79,000 employees, rolled out before you’ve demonstrated a flagship product that justifies the shift, carries enormous organizational risk. You’re asking the entire company to reorganize how it works around a technology whose internal value proposition hasn’t been fully proven yet.
Effective change management sequences the pressure. You start with pilots. You let early adopters demonstrate value. You build internal case studies. You create pull, not push. What Meta did is all push, no pull. “Use this tool or your review suffers” is coercion, not transformation.
The irony is hard to miss. If you wanted to signal all-in commitment to a new technology direction, there are gentler ways to do it. You could rebrand the entire company to show your conviction. Wait. They already tried that with the Metaverse.
The broader pattern is more alarming than the KPI
This decision doesn’t exist in isolation. Over the past year, Meta has made a series of moves that, viewed together, should concern anyone who thinks about organizational health for a living.
In June 2025, Meta appointed 27-year-old Alexandr Wang as its first-ever Chief AI Officer, leading the newly formed Superintelligence Labs. Wang is a genuinely accomplished founder who built Scale AI into a $29 billion company. But the appointment was not without friction. Yann LeCun, widely known as one of the “godfathers of AI” and a long-time leader of Meta’s AI research, left the company in November 2025. In a Financial Times interview, he called Wang “young” and “inexperienced,” noting a lack of background in how research is actually practiced. Within months of Superintelligence Labs launching, at least eight employees departed, including a twelve-year veteran who joined Anthropic and researchers recruited from OpenAI who returned after less than a month.
The organization has been flattened aggressively, with reports of manager-to-engineer ratios reaching 1:50 in some AI teams. Internal tensions have emerged as newly hired AI talent reportedly commands compensation packages that dwarf those of existing staff, with some long-tenured employees threatening to leave.
None of these moves is inherently wrong in isolation. Promoting younger talent injects energy. Flattening hierarchies can accelerate decisions. Paying market rates for scarce AI talent is rational. But doing all of them simultaneously, while also overhauling performance evaluation criteria and continuing to execute wave after wave of layoffs, dramatically shrinks the margin for error.
When you restructure your leadership, flatten your organization, revamp your evaluation system, and cut headcount all at the same time, you are making one very specific bet: that the new configuration will deliver transformative results fast enough to justify the upheaval. In Meta’s case, that means producing a flagship AI product that wins the mass market. Not an internal tool. Not a research paper. A product that ordinary people choose to use every day.
I’ve written before that every serious player in the international AI arena needs a competitive LLM. That logic applies at the enterprise level. If you’re going to restructure your entire organization around AI, you need something to show for it. As of today, Meta doesn’t have that. And without it, all this organizational disruption is just disruption.
The question that concerns me most
Here’s where my OD instincts kick in.
I don’t believe that nobody inside a 79,000-person organization realized that process-oriented KPIs are a bad idea. This is not an obscure insight. It’s covered in any introductory management course. Any competent HR professional would flag it. Any experienced people manager would feel the wrongness intuitively.
So why did it happen?
There are only a few possible explanations. People raised objections and were overruled. Or nobody dared to speak up. Or the decision was made unilaterally, without meaningful consultation.
Any of these is far more concerning than the KPI itself.
If this were a client engagement, a single bad KPI wouldn’t keep me up at night. Bad metrics get fixed. But the fact that a decision this obviously flawed passed through an organization of that size without being stopped? That’s the signal that makes me want to start examining the power structure. Who holds decision rights? Who has veto power? Is there a functioning feedback loop between frontline managers and executive leadership? Or has the organization reached a point where the CEO’s conviction overrides every institutional check?
A bad KPI can be fixed in a week. A broken power structure takes years to repair, if it gets repaired at all.
The cow is alive, but the conditions matter
Meta is a cash cow. That’s not in dispute. The advertising business generated over $200 billion in revenue in 2025. The company has resources that most organizations can only dream of.
But cash reserves don’t make an organization immune to structural decay. They just extend the timeline. You can make bad decisions for longer before the consequences become visible. The Metaverse proved this: it took years and $80 billion before the market forced a course correction.
There’s an ancient Chinese military proverb from the Zuo Zhuan, written over 2,500 years ago: “The first drumbeat rouses full courage; at the second, it wanes; at the third, it is spent.” The Metaverse was Meta’s first drumbeat. It was bold, it was loud, and it failed. The AI pivot is the second drumbeat. The organization is already fatigued, the talent pool is churning, and the market is skeptical. If this one doesn’t land, there won’t be enough morale left for a third.
Each cycle erodes something that money can’t easily rebuild: the trust of talented people who have to decide whether this is a company worth committing their careers to.
A cash cow is still a living organism. If the conditions aren’t right, the cow dies.
What I’d tell any CEO considering the same move
Raise your output standards. Shorten your deadlines. Demand higher quality. Then let your people figure out how to get there.
Some will use AI. Some won’t. Some will use it for certain tasks and skip it for others. That’s not a problem. That’s professional judgment. And professional judgment is exactly what you should be measuring, rewarding, and protecting.
The moment you start evaluating people on which tools they used instead of what they delivered, you’ve stopped managing performance and started managing compliance. AI doesn’t change the fundamental principles of good management. It just gives people one more way to get the job done.
Measure the outcome. That’s it.