Who's Playing Chess, and Who's a Piece? The Power Game of AI Governance
How the New Delhi AI Impact Summit Exposed the Global Fight for AI Influence
Opening: The Broken Chain
February 19, 2026. New Delhi AI Impact Summit.
Modi lined up the political and business leaders onstage, pulling them into a row, hands raised high, linked together. Sundar Pichai played along. The others played along. The entire stage played along. Except between Sam Altman and Dario Amodei, where the chain broke. The two men each raised a fist, with an awkward gap between them.
The scene reminded me of the oath ceremony before the Quarter Quell in the second Hunger Games film. A row of elite tributes, hand in hand, performing unity for the audience. But everyone knows that once the arena gates open, those hands will let go, and they will become each other’s opponents.
The logic of the entire summit was almost identical.
First Cut: The Power Structure in a Photo Op
Start with who’s standing onstage: Altman, Amodei, Pichai — three American tech company CEOs. Then Modi, the host.
This is today’s global AI governance in miniature: the people with real technological capability stand center stage, while those seeking a voice in the conversation set up the venue.
India currently has no globally influential frontier model, no homegrown large language model, no advanced chip manufacturing capability. But Modi organized the most lavish AI summit in history, drawing representatives from over 100 countries, 20 heads of state, and 45 ministerial delegations. This was not a technology conference. It was an auction for a seat at the table.
You might ask: why have India, the EU, and so many Global South countries suddenly developed such a fervent interest in AI governance and AI ethics?
At its core, this is a profound case of FOMO — Fear of Missing Out.
For the past several decades, the global geopolitical landscape has looked like a corporate org chart frozen in place for half a century. Apart from a very small number of countries like China, which clawed their way into a different reporting line and title through brutally competitive performance, most countries’ positions have remained fixed. What tier of supplier you are, which regional execution unit you belong to — all of it was written in stone long ago.
But the arrival of AI means the international order is about to undergo a total reorganization.
For most mid-sized countries, the barriers of foundational compute and frontier models have already locked them out of the main table. But AI governance, ethical standards, and compliance frameworks remain in a “power vacuum.” So they are making a desperate bet: using rules and discourse power to hedge against their technological disadvantage.
Anyone who has worked in organizational transformation will recognize this scene immediately. It’s like a company about to establish a “New Business Transformation Steering Committee,” and every previously marginalized mid-sized business unit head is terrified: if they don’t muscle their way onto this committee during this reorg, they will spend the next twenty years as “compute vassals” in the new power structure.
The most common strategy for gaining a bigger voice on such a committee? Host a cross-functional summit. Get the CEO and the heads of the major business units to come sit in your conference room. You may not have the strongest performance numbers, but you now have a photo of every key stakeholder in your meeting room. That photo is your political capital.
India is doing exactly the same thing. The only difference is the platform has shifted from a conference room to the world stage.
Even more telling was how China showed up. India formally invited China to the summit for the first time. China came, but sent a Vice Minister from the Ministry of Science and Technology. The US sent CEO-level figures: Altman, Amodei, Pichai. China sent a vice minister. The gap in seniority is itself a diplomatic statement: I'm giving you face, but I don't consider this a big deal.
Chinese domestic commentary was even more blunt. Analysts from Guancha noted that India was hoping to elevate its own status and diplomatic influence by inviting both AI superpowers simultaneously. The consensus on Weibo was clear: this was India’s bid for “AI visibility,” not a breakthrough.
Second Cut: Are You a Player or a Referee?
At its core, the global AI competition between nations can be reduced to a single question: are you a player or a referee?
This question matters because most countries haven’t figured out the answer. Or more precisely, they want to be both.
The EU wants to be the referee: the AI Act is the world’s first comprehensive AI regulation, effective since August 2024, and its four-tier risk classification has become a de facto global standard. But it also wants to be a player: heavily backing Mistral AI, pushing “Sovereign AI,” and emphasizing at every opportunity that “we have technological capability too.” The problem is that referee and player are inherently conflicting roles. Your credibility as a rule-maker comes precisely from the fact that you don’t play in the game. The moment you step onto the pitch, your whistle stops working.
India has the same problem. On one hand, it’s proclaiming “Welfare for All, Happiness for All,” rushing to seize the moral high ground. On the other, it’s desperately courting Altman and Pichai for investment and factories. Are you trying to regulate them or please them? When your summit accepts hundreds of billions of dollars in AI investment commitments, do you still have standing to tell those investors “your AI product is non-compliant”?
I’ve seen this kind of role confusion countless times in corporate life. The most classic example is an HR department that wants to be “the voice of employees” while simultaneously executing management’s layoffs. Both roles matter, but you cannot play them at the same time. The moment you sit across from employees at the termination table, your credibility as their advocate drops to zero. The EU and India’s predicament is fundamentally the same: role ambiguity is not flexibility. It is the destruction of credibility.
The US and China have no such problem.
The US knows it is the most important player. According to the Stanford AI Index (Ecosystem Graphs), 109 of the 149 foundation models released in 2023 were American (roughly 73%). By installed IT power capacity, the US and China together account for approximately 70% of global data center capacity (2024). The Trump administration outright revoked Biden’s AI executive order. The signal could not be clearer: I am not the referee. I am the player — the biggest player on the planet.
China doesn’t pretend either. It’s rapidly iterating on models (DeepSeek, Qwen, and others), going all-in on standards (algorithm registration systems, the Global AI Governance Initiative), and building discourse power across the Global South. The roadmap is unambiguous: I’m catching up, I know I’m catching up, and there is no second path.
What these two countries share is this: neither of them is anxious. Because they know who they are.
The EU and India are anxious precisely because their size has created an illusion: “My market is big enough; my population is large enough — maybe I can try to be both.” This illusion is the most dangerous strategic trap, because in AI governance, the most dangerous thing is not lacking capability. It is not knowing who you are.
Third Cut: Brzezinski’s Chessboard
The reason the EU and India fall into this structural illusion of “wanting both roles” is that they have not seen through, or refuse to acknowledge, the true nature of the American-led power chessboard.
Before discussing who is a player and who is a referee, there is a more fundamental question: how does America actually view its “allies”?
Though somewhat dated, Brzezinski’s 1997 book The Grand Chessboard remains remarkably relevant today. The book’s most devastating element is not its geopolitical analysis — it is its vocabulary. These words aren’t insults. They are an explanation: alliances can also be hierarchies.
Brzezinski classified countries within the American system into three tiers: vassals, tributaries, and barbarians.
Note his choice of taxonomy: he could easily have used “allies,” “partners,” and “competitors” — the standard diplomatic lexicon. But in his most candid passages, he reached for imperial vocabulary, not diplomatic vocabulary.
Here is the original text: “Past empires based their power on a hierarchy of vassals, tributaries, protectorates, and colonies, with those on the outside generally viewed as barbarians. To some degree, that anachronistic terminology is not altogether inappropriate for some of the states currently within the American orbit.”
A former US National Security Advisor, describing America’s alliance system in the language of feudal empires. This is not mockery. This is the ultimate honesty.
He even wrote explicitly that Britain and Japan can no longer be considered “geostrategic players,” because their policy space operates within the framework preset by the United States.
Now let’s translate the 1997 chessboard to the 2026 AI battlefield:
In 1997, vassals were defined by security dependence — you needed America’s military umbrella, so you were a vassal.
In 2026, vassals are defined by technology dependence — you need America’s LLMs, chips, and cloud infrastructure, so you are a vassal.
The medium of dependence has shifted from military alliances and deployments to compute, chips, cloud, and model ecosystems. But the power structure is identical.
Think about NATO. Nominally an alliance. In reality, America plus a group of followers. Does the EU have an independent strategy within NATO? No. But they’ll tell you “we are equal partners.” AI alliances follow the same logic: join an American-led AI alliance, and you’ve acknowledged the hierarchy. Don’t join, and you’re excluded. Neither path leads to becoming a major player.
In the corporate world, I call this the “illusion of integration.” A regional headquarters thinks it’s running localization strategy, but the real decisions come from the global HQ. You think you’re a partner. You’re actually an execution arm. The biggest decision a regional HQ gets to make is how to execute HQ’s decision.
The EU and India’s role in AI alliances is identical to that regional HQ.
Here’s the irony: the more alliances you join, the less you matter. In any alliance where the US is present, everyone else is a supporting character. You’re not there because you have unique value. You’re there because the alliance needs to pad the headcount to look “multilateral.” It’s like a cross-functional project team with twenty names on the roster, but only two or three making actual decisions. Everyone else exists so the meeting minutes can claim “broad participation.”
The EU and India are not content being supporting characters. But therein lies the contradiction: the more diverse the alliances you join or initiate, the less clear your leading role in any of them becomes.
So are the EU and India permanently condemned to be chess pieces, or forever stuck as “regional headquarters”?
No. They absolutely have a chance to become a true pole of power. But the prerequisite is brutal. They must accept this:
Rules are not the source of power. At best, they amplify it.
A regional HQ that wants to overtake global HQ was never going to get there by writing the company’s compliance manual, or by hosting an annual offsite where all the senior leaders fly in. The only path is to break free from technological dependence on HQ, build an unassailable core product, and physically drag the center of gravity toward yourself.
In AI, the cost is even higher. The scaling laws of large models and the tens-of-billions-of-dollars compute threshold mean this is an intensely centralized game. “Small and beautiful” may still have survival space in vertical applications, sovereign contexts, and low-cost deployment — but at the frontier of general capability, it is not enough to get you a seat at the main table. To sit at that table, you must enter a brutal, cash-burning, no-exit contest.
Do the EU and India want to be major players? Absolutely. But they are unwilling to pay the price. What they want is not to become real players. What they want is a front-row seat without getting any blood on their clothes.
Unfortunately, in the arena of great power competition, you cannot win the Hunger Games in a freshly dry-cleaned suit by writing rules. You either fight in the mud on the field, or you sit quietly in the stands and accept being reorganized by fate.
Fourth Cut: Who Gets to Be the Referee?
Since most countries are either unwilling or unable to get into the mud, the question becomes: if player and referee are mutually exclusive, who qualifies to be the referee?
But before answering that, we need to ask a more fundamental question: do AI superpowers actually need a referee?
The fact is, the US and China are already talking directly. In May 2024, the two countries held their first intergovernmental AI dialogue in Geneva. In August, US National Security Advisor Sullivan visited Beijing and met with Wang Yi; both sides agreed to continue AI cooperation talks. In November, President Xi and President Biden reached a substantive consensus at the APEC summit: both agreed to maintain human control over the decision to use nuclear weapons.
Sullivan himself put it bluntly in a January 2026 essay: “As the world’s only two AI superpowers, the United States and China need to engage one another directly to address these dangers.”
In other words, great powers don’t necessarily need a middleman. Just as two business unit heads in a corporation can pick up the phone and sort things out directly, without HR relaying messages.
So where does the referee’s value actually lie? Not in passing messages between superpowers. The referee exists to protect countries that are neither China nor the US — to give them a framework for not being crushed between the two. Just as HR’s real purpose is not to relay messages between executives, but to protect the small departments and ordinary employees caught in the crossfire.
Once we understand who the referee actually serves, the qualifying conditions turn out to be quite demanding. You need good relationships with both the US and China, no significant conflicts or confrontations with either in current international politics, and — most critically — you must voluntarily give up the ambition of becoming a global superpower.
That is the real cost. The prerequisite for being a referee is that you don’t play.
So who can be this referee?
Your first thought might be Singapore. Fair enough — Singapore has AI Verify, the world’s first AI governance testing tool, and SEA-LION, a Southeast Asian language model. It has walked the tightrope between the US and China for decades. But Singapore’s relationship with China is not as smooth as it appears. On the South China Sea, Singapore leans American. Fundamentally, it remains part of the US security architecture. China knows this.
The candidate that truly fits the criteria is the Middle East. The Middle East can serve as referee precisely because it navigates the US-China dual-track system with ease.
The Gulf states occupy an extraordinarily unique position. The US provides the security umbrella; American military bases are stationed there. Yet at the same time, Saudi Arabia and the UAE have seen their relationships with China warm rapidly in recent years: Huawei is deeply embedded in their 5G infrastructure, and both sit as key nodes along the Belt and Road Initiative. The 2023 Saudi-Iran handshake brokered in Beijing was a landmark moment in Chinese diplomacy.
The UAE in particular is a master of playing both sides. The Falcon model was built with its own money and its own strategy, beholden to neither side.
You might ask: if the UAE built Falcon with its own money, and Singapore built SEA-LION, doesn’t that make them players? Their official line is “differentiation through Arabic or Southeast Asian language focus.” But anyone who knows the industry understands this is PR spin. Today’s top American models (think ChatGPT or Gemini) handle Chinese and Arabic with ease. The so-called “minority language moat” crumbles in the face of absolute compute superiority.
But this actually proves how clear-eyed they are about wanting to be referees.
In a corporate setting, a support function like IT also builds its own internal tools. That doesn’t mean it’s trying to replace the core business units and go to war. The Middle East and Singapore maintain their own open-source models not to compete with the US and China for global market share, but for two reasons: first, to keep sovereign data from leaving their borders; second, to hold a defensive bargaining chip (BATNA: Best Alternative to a Negotiated Agreement) when negotiating technology imports from the US-China giants. A referee needs to know how to kick a ball, just enough to avoid being fooled by the superstars on the pitch. This is defensive infrastructure, not an offensive weapon.
Because of this clarity, the Middle East has retained an advantage that neither the EU nor India possesses: it does not aspire to be a global superpower. The Gulf states know their weight class, so they don't fall into the "both roles" trap. They positioned themselves from day one as connectors, brokers, and hubs. Dubai's entire city identity is built on this: "I don't produce. I connect."
In organizational change, when two major business units need to negotiate a restructuring, the ideal project lead is often not an external consultant (too expensive and needs too much context), but someone from a supporting function with no direct stake in either BU — say, someone from Strategy or Finance. Their authority doesn’t come from “I understand the business better than you.” It comes from “I have no competitive relationship with you.” The Middle East’s role in global AI governance is the same. You don’t need to be the most technically sophisticated. You need all parties to trust that you won’t play favorites.
I saw this "referee role" made tangible in Abu Dhabi. There, I got into a WeRide autonomous vehicle (Chinese technology), hailed through Uber (an American platform), driving on the roads of a Gulf state. US and Chinese technology coexisting seamlessly on Middle Eastern soil. No conflict. No picking sides. This is what a referee should look like.
Closing: Major Players Always Decide Alone
Back to that broken chain on the New Delhi stage.
Sam Altman said afterward: “I was sort of confused.” Classic Altman — smooth, leaving himself an exit, chalking up the awkwardness to “not understanding the choreography.”
Dario Amodei said nothing.
That is the difference between two kinds of people. One explains why he didn’t cooperate. The other doesn’t feel an explanation is necessary.
Running Super Bowl ads mocking OpenAI. Telling global political and business leaders at Davos that exporting chips to China is equivalent to selling nuclear weapons to North Korea. Refusing to hold a competitor’s hand in front of the Indian Prime Minister. Every one of Dario’s actions points to the same logic: I do not need to participate in your ceremony to prove my standing.
This is the fundamental difference between a major player and every other role. Major players don’t join alliances. They don’t need group photos to confirm their position. They don’t need to shake everyone’s hand to earn recognition.
In the corporate world, when have you ever seen a truly powerful CEO who needs to attend every industry summit and join every alliance organization to maintain influence? Never. Real power is precisely this: you can choose not to show up.
Someone joked on social media: “When AGI? The day Dario and Sam hold hands.”
In other words: never. Because the relationship between major players was never hand in hand.
In my next article, I will bring the AI Decision Rights discussion from the world stage back inside the enterprise. In your company, who has the authority to decide what AI can and cannot do? The answer to that seemingly technical question is, in fact, an organizational politics question.
OD Behind the Curtain