<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[OD Behind the Curtain]]></title><description><![CDATA[What really happens in org transformation. No theory, no sugarcoating.]]></description><link>https://www.odbehindthecurtain.com</link><image><url>https://substackcdn.com/image/fetch/$s_!quPI!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2bbfc632-0d6b-4f51-95bd-aef0b5c33684_997x997.png</url><title>OD Behind the Curtain</title><link>https://www.odbehindthecurtain.com</link></image><generator>Substack</generator><lastBuildDate>Thu, 30 Apr 2026 14:17:13 GMT</lastBuildDate><atom:link href="https://www.odbehindthecurtain.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Hector Sun]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[odbehindthecurtain@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[odbehindthecurtain@substack.com]]></itunes:email><itunes:name><![CDATA[Hector Sun]]></itunes:name></itunes:owner><itunes:author><![CDATA[Hector Sun]]></itunes:author><googleplay:owner><![CDATA[odbehindthecurtain@substack.com]]></googleplay:owner><googleplay:email><![CDATA[odbehindthecurtain@substack.com]]></googleplay:email><googleplay:author><![CDATA[Hector Sun]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[AI Translated a Curse Into a Blessing. And You Want It to Run Your Company?]]></title><description><![CDATA[A social media translate button turned a curse into a blessing. No one noticed. 
The same structure is already running inside your enterprise, unsupervised.]]></description><link>https://www.odbehindthecurtain.com/p/ai-translated-a-curse-into-a-blessing</link><guid isPermaLink="false">https://www.odbehindthecurtain.com/p/ai-translated-a-curse-into-a-blessing</guid><dc:creator><![CDATA[Hector Sun]]></dc:creator><pubDate>Sun, 26 Apr 2026 15:44:02 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/b8eb1ef8-c1aa-4e1e-bdad-4106f2aaf7f9_966x184.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>On April 24th, a highly controversial public figure in the Middle East announced a cancer diagnosis. While scrolling through a social media platform, I noticed the Chinese-language comment section beneath the news. Out of curiosity, I tapped the platform&#8217;s built-in AI translation button to see what English-language readers would be getting.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!9DKI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa102d1d1-01da-4d8c-8aaf-11d7eb42d4e3_966x184.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!9DKI!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa102d1d1-01da-4d8c-8aaf-11d7eb42d4e3_966x184.png 424w, https://substackcdn.com/image/fetch/$s_!9DKI!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa102d1d1-01da-4d8c-8aaf-11d7eb42d4e3_966x184.png 848w, https://substackcdn.com/image/fetch/$s_!9DKI!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa102d1d1-01da-4d8c-8aaf-11d7eb42d4e3_966x184.png 
1272w, https://substackcdn.com/image/fetch/$s_!9DKI!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa102d1d1-01da-4d8c-8aaf-11d7eb42d4e3_966x184.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!9DKI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa102d1d1-01da-4d8c-8aaf-11d7eb42d4e3_966x184.png" width="966" height="184" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a102d1d1-01da-4d8c-8aaf-11d7eb42d4e3_966x184.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:184,&quot;width&quot;:966,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:50678,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.odbehindthecurtain.com/i/195535567?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa102d1d1-01da-4d8c-8aaf-11d7eb42d4e3_966x184.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!9DKI!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa102d1d1-01da-4d8c-8aaf-11d7eb42d4e3_966x184.png 424w, https://substackcdn.com/image/fetch/$s_!9DKI!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa102d1d1-01da-4d8c-8aaf-11d7eb42d4e3_966x184.png 848w, 
https://substackcdn.com/image/fetch/$s_!9DKI!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa102d1d1-01da-4d8c-8aaf-11d7eb42d4e3_966x184.png 1272w, https://substackcdn.com/image/fetch/$s_!9DKI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa102d1d1-01da-4d8c-8aaf-11d7eb42d4e3_966x184.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><p>The result was beyond absurd. This was not a case of &#8220;lost in translation.&#8221; It was a complete inversion. A blunt curse had been transformed into a warm blessing. Staring at the same comment on the same screen, Chinese and English readers were being split into two diametrically opposed information realities. And there was not a single indication anywhere on the page to warn the English reader: what you are seeing is not what the original says.</p><div><hr></div><p><strong>There Is a Mole in Your Information Chain</strong></p><p>I introduced a concept in a previous article that I call the &#8220;AI Mole.&#8221; The most covert way AI embeds itself is not by conspicuously replacing your job. It is by slipping undetected into the middle of your information chain, making a critical judgment call on your behalf without you ever knowing it happened.</p><p>In this case, that innocuous &#8220;Translate&#8221; button is a perfectly disguised AI Mole.</p><p>You assumed it was a neutral tool, a faithful dictionary that gives you what you ask for. But it has long ceased to be a dictionary. It has become a decision node. It inserted itself between you and the original information, made an unauthorized conversion, and got it spectacularly wrong. The most dangerous part is that you have no way of realizing it is wrong, because the English output reads flawlessly: grammatically correct, logically coherent, perfectly natural. 
It just happens to mean the exact opposite of the original.</p><p>This is what should send a chill down your spine: AI errors are no longer obvious gibberish on a screen. They are lies wrapped in flawless packaging.</p><div><hr></div><p><strong>This Is Not Censorship. This Is Far Worse.</strong></p><p>Some might shrug: platforms processing content is standard practice, is it not?</p><p>But censorship and manipulation are two fundamentally different things.</p><p>Censorship says: &#8220;I find this inappropriate, so I am removing it.&#8221; When you see a notice saying &#8220;this content has been removed&#8221; or your search returns nothing, you at least know something was taken away. You retain a basic defense mechanism: you are aware that information is missing. In other words, you know that you do not know.</p><p>Manipulation says: &#8220;I have altered this content, and I will make you believe you are reading the original.&#8221; You have no idea anything has been changed. You do not know that what you received is not the author&#8217;s intent. The thought &#8220;I might be misled&#8221; never even crosses your mind. You are trapped in the blind spot of not knowing that you do not know.</p><p><strong>Censorship strips you of information. Manipulation strips you of judgment.</strong></p><p>Which is more dangerous needs no elaboration.</p><div><hr></div><p><strong>The Inability to Determine the Cause Is Precisely the Problem</strong></p><p>How was this bizarre translation produced? A fundamental technical defect? A systematic bias in how the AI model handles certain contexts? Or did someone define a strategy behind the scenes, with AI faithfully executing it?</p><p>We do not know. And there is almost no way to verify.</p><p>But that is precisely the core of the problem.</p><p>When AI becomes the &#8220;black-box executor&#8221; in an information chain, whoever is behind it gains a perfect cloak of invisibility. 
If this was a deliberate strategy, discovery can always be met with a casual deflection to the algorithm: &#8220;This was auto-generated by AI. We will continue optimizing the model.&#8221; And if it truly was a system glitch, the platform may not have sufficient motivation to fix it.</p><p>I call this structure the &#8220;Reverse Moral Crumple Zone.&#8221;</p><p>I discussed the concept of the Moral Crumple Zone in a previous article: when a system fails, the human takes the blame. The classic scenario is an autonomous vehicle crash where accountability lands on the safety operator. In that scenario, AI is the actual decision-maker. The human is the scapegoat.</p><p><strong>The Reverse Moral Crumple Zone works in exactly the opposite direction: humans set the rules, and the system takes the blame.</strong> Decision-makers define the strategy behind the curtain. AI executes it on the front stage. When caught, the excuses are always ready: &#8220;systematic bias,&#8221; &#8220;algorithmic limitations,&#8221; or &#8220;we are working on optimization.&#8221; Here, the human is the one pulling the strings. AI is the perfect scapegoat.</p><p>Two seemingly opposite directions, but the underlying structure is identical: accountability bounces endlessly between humans and systems, never landing on the party that actually made the decision.</p><div><hr></div><p><strong>Human-in-the-Loop? 
First You Need to Know There Is a Loop.</strong></p><p>Every mainstream AI governance framework today, whether it flies the banner of &#8220;Responsible AI&#8221; or takes the form of internal corporate AI compliance policies, is built squarely on one core assumption: humans can oversee AI output.</p><p>The Human-in-the-loop model requires that humans approve AI output at every step before it is released.</p><p>The Human-on-the-loop model takes a step back: humans do not need to approve every step, but they monitor from the side and intervene when anomalies appear.</p><p>The &#8220;Translate button&#8221; case shatters both models simultaneously.</p><p>Human-in-the-loop fails because no approval step exists. You tap &#8220;Translate,&#8221; and AI feeds you the result directly. No human verifies whether the translation is accurate.</p><p>Human-on-the-loop fails equally because you cannot even detect the anomaly. You do not read Chinese. The English in front of you is grammatically flawless, logically coherent, and reads naturally. It just happens to mean the opposite of the original. You receive no signal whatsoever that human intervention is needed.</p><p><strong>The real blind spot is not whether humans are in the loop. It is whether humans even know there is a loop that requires their presence.</strong></p><p>When AI&#8217;s acts of manipulation are themselves invisible, when AI&#8217;s erroneous output looks identical to correct output, every governance framework built on the assumption that &#8220;humans can oversee AI&#8221; is reduced to an exercise on paper.</p><div><hr></div><p><strong>How Many &#8220;Translate Buttons&#8221; Are Hiding in Your Organization?</strong></p><p>You think a Translate button flipping a social media comment is harmless? Transfer that same underlying mechanism into the &#8220;AI-powered automated processes&#8221; your enterprise is actively pursuing, and think again.</p><p>Suppose your company deploys an AI meeting system. 
It listens in automatically, generates minutes, extracts action items, and emails them to all relevant parties. During a critical review meeting to decide whether a major project should go live, the technical lead issues an explicit warning: &#8220;This underlying architecture has a fatal flaw. Launching it will most likely cause a collapse.&#8221;</p><p>A brutally blunt dissenting opinion. Just like that &#8220;curse&#8221; in the comment section. But the AI, while capturing and distilling this statement, silently &#8220;polishes&#8221; it into a mild action item: &#8220;The technical team recommends continued optimization of architecture performance prior to launch.&#8221;</p><p>No human review. The AI follows its preset workflow and distributes the sanitized minutes directly to every executive. Leadership sees nothing but green lights, approves the launch, and disaster follows.</p><p>A project worth hundreds of millions fails. Strategy derails. None of it reversible. Yet every party can deflect accountability perfectly. This is precisely the core argument of what I call &#8220;Sun&#8217;s Decision Authority Matrix,&#8221; introduced in a previous article: once an AI-driven autonomous chain is running, accountability dilutes along the chain until every party can say, &#8220;I am not the one accountable.&#8221;</p><p>A single, unremarkable Translate button turned a curse into a blessing in broad daylight, and you had no idea it happened. 
If that same structure is now embedded in your enterprise, are you truly ready to let it run unsupervised?</p><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.odbehindthecurtain.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.odbehindthecurtain.com/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[Skip-Level Reporting: Management Taboo, Organizational Necessity]]></title><description><![CDATA[The Pentagon just fired the Navy Secretary partly for going over his boss's head. But in every large organization, matrix dotted lines, PMOs, and talent programs do exactly the same thing by design.]]></description><link>https://www.odbehindthecurtain.com/p/skip-level-reporting-management-taboo</link><guid isPermaLink="false">https://www.odbehindthecurtain.com/p/skip-level-reporting-management-taboo</guid><dc:creator><![CDATA[Hector Sun]]></dc:creator><pubDate>Fri, 24 Apr 2026 07:09:34 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/7baee732-fdcb-4884-8672-d5e439048eb9_2424x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The Pentagon recently announced the immediate departure of US Navy Secretary John Phelan. Multiple outlets linked the move to a prolonged tension between Phelan and Defense Secretary Pete Hegseth. One thread that kept surfacing: Phelan had bypassed his superior and communicated directly with the President on critical shipbuilding matters.</p><p>Whether or not this was the sole trigger for his removal, the episode raises a question that every large organization eventually confronts: is skip-level communication a breach of order, or a repair mechanism for when order fails?</p><p>In conventional corporate thinking, unauthorized skip-level contact is treated as a cardinal sin. 
But as an organization design consultant, I have found the opposite to be equally true: large organizations cannot function without some form of institutionalized skip-level mechanism to correct the information distortion and talent burial that hierarchical reporting inevitably produces.</p><div><hr></div><p><strong>Clearing the Concept: From Emotional Overreach to Institutional Bypass</strong></p><p>Before we go further, a boundary needs to be drawn. This article is not an endorsement of employees going over their manager&#8217;s head to grab resources, nor of emotionally charged escalations.</p><p>In practice, &#8220;skip-level&#8221; covers several very different behaviors:</p><ul><li><p><strong>Unauthorized overreach:</strong> bypassing your boss to get a senior leader to approve routine operational decisions.</p></li><li><p><strong>Institutionalized skip-level communication:</strong> scheduled, bounded conversations between senior leaders and frontline staff.</p></li><li><p><strong>Matrix governance:</strong> dotted-line reporting within matrix structures.</p></li><li><p><strong>Talent sponsorship:</strong> formal links between high-potential employees and senior executives.</p></li></ul><p>This article is about the latter three. When formal hierarchy cannot transmit information accurately, cannot identify talent fairly, and cannot counterbalance distortions in middle-management power, how does an organization build the necessary bypass through deliberate design?</p><div><hr></div><p><strong>Piercing the Information Funnel: Why Large Organizations Need a Bypass</strong></p><p>Every time information climbs one level up the reporting ladder, it goes through a round of filtering and polishing. This is not because middle managers are dishonest. It is because performance pressure, risk aversion, and limited line of sight make it inevitable. You could simply call it human nature. 
If senior leaders rely solely on a single vertical reporting line, what they end up hearing is very likely a version of reality that bears little resemblance to the facts.</p><p>An institutionalized skip-level mechanism is, at its core, a cross-verification channel designed to counter distortion, allowing top decision-makers to bypass the filters and sense what is actually happening on the ground.</p><div><hr></div><p><strong>The Hidden Logic of the Matrix: What Dotted Lines Really Do</strong></p><p>Zoom out to multinational corporations, and skip-level communication evolves into a system-level check-and-balance design.</p><p>To guard against systemic risk, large multinationals typically adopt matrix structures: regional functional teams report with a solid line to local business leaders, while maintaining a dotted-line report to their corresponding function at global headquarters. Even within a single regional department, dedicated teams may maintain direct links with their counterparts at HQ. In HR, for example, regional Centers of Excellence (CoE) frequently coordinate with the global HR CoE.</p><p>Most people interpret the dotted line as a &#8220;collaboration relationship.&#8221; From an organization design perspective, it is actually headquarters&#8217; bypass sensor line. The value of the matrix lies not only in aligning resources, but in countering information monopoly, preventing regional fiefdoms, and checking single-line distortion. Introducing a dotted-line report adds a parallel verification channel outside the local manager&#8217;s evaluation system, trading micro-level structural tension for macro-level architectural stability.</p><p>Beyond the matrix dotted line, senior leadership frequently deploys another, more covert information bypass: the PMO (Project Management Office). On the surface, the PMO&#8217;s job is to drive one cross-functional project after another to completion. 
But from an information governance perspective, every project is a legitimate, bounded exercise in cross-level information collection. Projects require cross-departmental collaboration, regular progress reports to senior leadership, and the surfacing of resource bottlenecks and execution roadblocks. In this process, what senior leaders receive is far more than project status. It is an organizational situation map that bypasses the normal reporting line. Many PMO practitioners may not even realize that one of the most critical functions behind their never-ending stream of projects is ensuring that senior leadership&#8217;s information pipeline is not monopolized by middle management. This also means that even when a project fails to deliver its stated objectives, it may still be considered a success in the eyes of senior leadership.</p><div><hr></div><p><strong>The Hidden Talent Shield: Counterbalancing Evaluation Bias</strong></p><p>At the micro level of manager-subordinate dynamics, institutionalized skip-level mechanisms serve yet another critical function: talent retention.</p><p>Under performance pressure and scarce promotion opportunities, the interests of a manager and their direct reports are not always naturally aligned. When a line manager holds outsized control over span, evaluation authority, and resource allocation, talent identification becomes highly susceptible to distortion by personal insecurity or preference. Within a single reporting line, a high-potential employee who runs afoul of their direct manager&#8217;s defensiveness often has no option but to resign.</p><p>It is precisely to preempt this that organization design consultants typically go one step beyond finalizing the org structure, proactively proposing talent development programs that build in structured one-on-one conversations between high-potential employees and senior executives as a core component. 
Translated into the language of organizational politics, this is not merely a development mechanism. It is a legitimately embedded Trojan horse: a talent protection mechanism.</p><p>It gives talent visibility. More importantly, it creates an invisible verification deterrent: when line managers know that senior leaders have a legitimate channel to engage directly with their subordinates, they are compelled to be fairer and more transparent in day-to-day management and performance evaluations.</p><div><hr></div><p><strong>Guardrails: Where Skip-Level Design Must Stop</strong></p><p>Undesigned skip-level activity destroys order. Designed skip-level activity can restore it. To prevent a &#8220;bypass&#8221; from becoming a &#8220;short circuit,&#8221; effective skip-level mechanisms require strict boundary guardrails:</p><ul><li><p><strong>Purpose boundary: </strong>skip-level mechanisms may only be used for macro-level information verification, major risk early warning, and talent calibration. They must never become a shortcut for subordinates to bypass their manager for routine resource negotiations or approvals.</p></li><li><p><strong>Process boundary:</strong> there must be clearly defined contexts and frequency constraints. Quarterly closed-door reviews, formal skip-level interviews, standing committee briefings. Not ad-hoc boundary-crossing at will.</p></li><li><p><strong>Feedback boundary (the most critical of all):</strong> after obtaining information through a skip-level channel, senior leaders must never issue operational directives directly to subordinates over the head of the line manager. If senior leaders start giving orders through the bypass, the organization descends into chaotic dual command, and frontline managers&#8217; authority is destroyed.</p></li></ul><div><hr></div><p><strong>Closing: Embracing Structured &#8220;Controlled Disorder&#8221;</strong></p><p>Corporate management has never been a black-and-white blueprint exercise. 
The value of good organization design is not in trying to eliminate the complexity of human nature, but in using precise structural arrangements to constrain it.</p><p>Skip-level reporting is not a cure-all. But when &#8220;going over someone&#8217;s head&#8221; is stripped of its emotional charge and institutionalized through matrix dotted lines, talent programs, and PMO structures, it is no longer a disruption to managerial order. It is how large organizations find the real balance between discipline and agility.</p><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.odbehindthecurtain.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.odbehindthecurtain.com/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[Unmanned Instant Noodle Shops: One Has the Nerve to Open It, the Other Has the Nerve to Eat There]]></title><description><![CDATA[A new breed of 24-hour, low-staff, self-service instant noodle shops has been popping up across Shanghai.]]></description><link>https://www.odbehindthecurtain.com/p/unmanned-instant-noodle-shops-one</link><guid isPermaLink="false">https://www.odbehindthecurtain.com/p/unmanned-instant-noodle-shops-one</guid><dc:creator><![CDATA[Hector Sun]]></dc:creator><pubDate>Wed, 22 Apr 2026 09:03:28 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!m-u8!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31f8b763-2cae-4135-9fdc-d9cbb8e52bb7_2390x1792.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A new breed of 24-hour, low-staff, self-service instant noodle shops has been popping up across Shanghai. 
The concept is simple: a wall of instant noodles from around the world, a row of open refrigerators stocked with loose toppings (cheese, eggs, bean sprouts, kimchi), and a few cooking machines. Customers pick their noodles and toppings, scan to pay, cook everything themselves, and clean up when they&#8217;re done. Some of these shops operate with virtually no staff. Others keep a skeleton crew for basic guidance. But they all share one thing in common: the consumer has been pushed forward into a food preparation process that should be controlled by the business.</p><p>The concept reportedly came from South Korea. Koreans consume nearly 80 servings of instant noodles per person per year, and 24-hour unmanned noodle shops are a familiar sight on the streets of Seoul. The format landed in Shanghai in late 2024, sparking a city-wide wave of curiosity. Social media was flooded with tags like &#8220;K-drama late-night canteen&#8221; and &#8220;Seoul street food experience.&#8221;</p><p>I visited one of these shops in Shanghai recently, specifically to inspect the food preparation area.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!m-u8!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31f8b763-2cae-4135-9fdc-d9cbb8e52bb7_2390x1792.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!m-u8!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31f8b763-2cae-4135-9fdc-d9cbb8e52bb7_2390x1792.png 424w, https://substackcdn.com/image/fetch/$s_!m-u8!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31f8b763-2cae-4135-9fdc-d9cbb8e52bb7_2390x1792.png 848w, 
https://substackcdn.com/image/fetch/$s_!m-u8!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31f8b763-2cae-4135-9fdc-d9cbb8e52bb7_2390x1792.png 1272w, https://substackcdn.com/image/fetch/$s_!m-u8!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31f8b763-2cae-4135-9fdc-d9cbb8e52bb7_2390x1792.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!m-u8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31f8b763-2cae-4135-9fdc-d9cbb8e52bb7_2390x1792.png" width="1456" height="1092" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/31f8b763-2cae-4135-9fdc-d9cbb8e52bb7_2390x1792.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1092,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:6644457,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.odbehindthecurtain.com/i/195004436?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31f8b763-2cae-4135-9fdc-d9cbb8e52bb7_2390x1792.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!m-u8!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31f8b763-2cae-4135-9fdc-d9cbb8e52bb7_2390x1792.png 424w, 
https://substackcdn.com/image/fetch/$s_!m-u8!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31f8b763-2cae-4135-9fdc-d9cbb8e52bb7_2390x1792.png 848w, https://substackcdn.com/image/fetch/$s_!m-u8!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31f8b763-2cae-4135-9fdc-d9cbb8e52bb7_2390x1792.png 1272w, https://substackcdn.com/image/fetch/$s_!m-u8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31f8b763-2cae-4135-9fdc-d9cbb8e52bb7_2390x1792.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><p>The toppings sat in the refrigerator, dish by dish, each with a lid that any customer could lift at any time. Shared tongs were stuck in beside them. Anyone could grab them. The shop had three security cameras, but only one was angled toward the food area. I estimated that if two or three people stood in front of the refrigerator at the same time, they would create a perfect physical obstruction. The camera would capture essentially nothing of what was happening at the food station.</p><p>This is not a management failure at one particular shop. This is a structural defect baked into the business model itself.</p><div><hr></div><p><strong>One Bowl of Noodles, Four Suspects</strong></p><p>Suppose you eat a bowl of noodles loaded with toppings at one of these shops, and you get food poisoning afterward.</p><p>Was the cheese not fresh? Did you not cook it long enough? Did the previous customer contaminate the shared tongs without cleaning them? Or were the unsealed disposable bowls and chopsticks already compromised?</p><p>Four possible causes, each pointing to a different party in the process: the supplier, you, another customer, or the equipment operator. Once something goes wrong, there is virtually no way to determine which link in the chain failed.</p><p>In a traditional staffed restaurant, this ambiguity does not exist. The entire process, from raw ingredients to kitchen to your plate, falls within the restaurant&#8217;s control. If something goes wrong, the restaurant is the accountable party, full stop. Staffed restaurants certainly have food safety problems too. Filthy kitchens are hardly rare. 
But the critical difference is this: when something goes wrong in a staffed system, it can be traced, accountability can be assigned, and the system can improve.</p><p>Whether an accountability chain exists and is traceable is a completely different question from whether the people on that chain are doing their jobs properly. The first is a system architecture problem. The second is a management problem. The fatal flaw of the unmanned noodle shop lies in the first: its accountability chain is physically severed at the point of service.</p><p>Food delivery at least has tamper-evident seals. An intact seal means the problem originated at the restaurant. A broken seal means something happened during delivery. It is imperfect, but it provides a physical demarcation point that can partition accountability. The unmanned noodle shop has no such demarcation. None at all. From refrigerator to tongs to cooking machine to your mouth, the entire process is a continuous, unmonitored, multi-user open flow. Because no one is watching the stage where real-time judgment and correction matter most, even after-the-fact investigation is nearly impossible.</p><p>I have written previously about what I call the &#8220;Sun&#8217;s Decision Authority Matrix,&#8221; one of whose core arguments is this: when an AI-driven autonomous chain is running, accountability dilutes along the chain until every party can say &#8220;it&#8217;s not my problem.&#8221; The accountability structure of the unmanned noodle shop is perfectly isomorphic. When something goes wrong, every party can point the finger at someone else. On-site, no single party&#8217;s accountability covers the complete chain from ingredient to ingestion.</p><p>And this is before we even consider the low-probability but catastrophic risk of deliberate tampering. 
What should truly alarm us is not how often such incidents actually occur, but that the system was designed from the outset without any front-end intervention mechanism.</p><div><hr></div><p><strong>The Bar, the Delivery Rider, and the Vending Machine</strong></p><p>Consider this scenario: you order a drink at a bar, step away to use the restroom, and come back. Do you drink it?</p><p>Anyone with basic safety awareness would not. At a bar, &#8220;never leave your drink unattended&#8221; is practically common sense. You have no way of knowing what might have been added during the minutes you were gone.</p><p>Now think about the topping station at an unmanned noodle shop. The loose cheese, vegetables, and eggs sit exposed from opening to closing, and many of these shops never close at all. Any customer, or even a non-paying passerby, could sneeze over the toppings, handle the shared tongs with unwashed hands, or do anything else while no one is watching.</p><p>Why are we so vigilant at bars but so trusting at noodle shops? On what basis do we assume that strangers in an unmanned noodle shop are inherently more trustworthy than strangers at a bar?</p><p>Then recall the recent controversy over delivery riders bringing food orders into restrooms. A rider, desperate to use the bathroom, carried the sealed food package inside with him, and customers found it revolting. But if we compare the two situations calmly, the food safety risk in that scenario is actually far lower than in an unmanned noodle shop. The food was sealed, stayed within the rider&#8217;s line of sight, and accountability could be clearly traced to a person. Even if the rider left the package outside the restroom door to spare the customer&#8217;s feelings, creating a brief gap in the accountability chain, the exposure was short, the packaging intact, and the customer could at least take some comfort in that.
At the unmanned noodle shop, the uncovered food is fully and continuously exposed to anyone&#8217;s reach, for hours on end.</p><p>Finally, the vending machine. I know someone will argue: vending machines are unmanned too. Why are they fine?</p><p>Because a vending machine&#8217;s products are factory-sealed and physically isolated from the point of manufacture to the point of purchase. Before the product drops into the pickup slot, no external party can physically interfere with its quality. When something goes wrong, the accountability chain is crystal clear.</p><p>Line up these scenarios on a spectrum: vending machine (fully sealed, no human judgment required) &#8594; food delivery (tamper-evident seals, a rider watching, but with brief blind spots) &#8594; bar (the customer knows to stay alert) &#8594; unmanned noodle shop.</p><p>The noodle shop sits at the worst end of this spectrum, worse even than the bar. Because at a bar, at least you know to stay vigilant. The noodle shop wraps itself in the warm aesthetic of &#8220;K-drama vibes&#8221; and &#8220;cozy late-night canteen,&#8221; lulling you into dropping your guard entirely.</p><p>Someone might push back: if a staff member were standing in the shop, would they really be watching the food station? Maybe they would just be scrolling their phone. But that is beside the point. A person on-site, even one who is not paying close attention, provides three things that no unmanned system can: visual deterrence against customer misconduct, real-time detection of anomalies, and the ability to intervene within seconds of spotting a problem.</p><p>They do not need to watch like a hawk. They just need to be there. A security camera cannot deliver any of these three.</p><div><hr></div><p><strong>The Price of Convenience</strong></p><p>I will not deny it: unmanned noodle shops tap into a genuinely distinctive urban need. 
A full wall of novelty choices, the absolute &#8220;introvert-friendly&#8221; absence of staff watching you, and the light participatory thrill of adding your own toppings and cooking your own bowl. This specific combination of emotional payoffs is something a FamilyMart, a Lawson, or a late-night street-side fried rice stall simply cannot offer.</p><p>It addresses a real niche. It is a real business.</p><p>But as an organization design observer, I need to look through the surface of this transaction. You think you are spending twenty-something yuan for late-night convenience and a taste of K-drama atmosphere. But what the system has quietly priced in is this: you are simultaneously waiving the establishment&#8217;s obligation to maintain on-site quality control.</p><p>That is the most hidden trade-off in this entire business model. &#8220;On-site accountability&#8221; has been swapped for &#8220;Absolute introvert-friendliness.&#8221;</p><p>In exchange for the convenience of not being bothered by staff, you have transferred food safety accountability from a system where someone is physically present and watching, to a blind zone with no physical safeguards where the only recourse is after-the-fact accountability. And as we have already seen, in an open environment where evidence is virtually impossible to preserve, after-the-fact accountability is a dead end in practice.</p><p>This is not a fair trade. And this hidden cost: does the consumer truly understand it at the moment they push open the door?</p><div><hr></div><p><strong>The Bigger Picture</strong></p><p>The unmanned noodle shop is not an isolated phenomenon. Just a few years ago, the dominant narrative in global business innovation was the &#8220;sharing economy.&#8221; Assets are scarce, labor is abundant, so put idle assets to work serving more people.</p><p>Now the wind has shifted abruptly. Unmanned noodle shops, unmanned game rooms, unmanned dessert bars are springing up across city streets. 
The underlying logic has flipped to: &#8220;People are too expensive. Remove them.&#8221; And we have already seen what happens to the accountability chain when you rip people out of it.</p><p>I am not against automation itself. Autonomous driving has genuine safety and strategic value. Unmanned ports address high-risk, high-intensity, round-the-clock operational demands. Smart agriculture tackles labor shortages and food security. These forms of automation replace work that humans are unwilling to do, physically unable to do, or unable to do well enough. But the unmanned noodle shop replaces a shop attendant who earns a few thousand yuan a month, a role plenty of people are willing to fill. This kind of consumer-sector &#8220;save on labor&#8221; play should not conflate itself with the hard-core innovations that genuinely matter for national infrastructure and public welfare.</p><p>(I will write a separate piece on the capital narrative patterns behind the unmanned economy.)</p><div><hr></div><p><strong>Back to That Bowl of Noodles</strong></p><p>I am not against technological progress, nor against business model innovation. But when the core logic of a so-called &#8220;innovation&#8221; is simply removing a human from a process, and after removing them, the on-site judgment and real-time correction they provided is not replaced by any equivalent systemic safeguard, that is not innovation. That is operating without a safety net.</p><p>A vending machine can run unmanned because its process never required human judgment in the first place. Products stay physically sealed from start to finish. That is clean automation.</p><p>The process inside an unmanned noodle shop is full of moments that demand judgment: Are the ingredients fresh? Is there cross-contamination? Has the food been cooked long enough? Is anything abnormal happening on-site? But these judgment calls have not been replaced by better mechanisms. They have simply been eliminated.
This is pseudo-automation that offloads safety costs onto consumers and statistical probability.</p><p>A healthy business system, staffed or unstaffed, should be able to answer one question clearly at every critical node: &#8220;If something goes wrong here, who is on-site to catch it?&#8221;</p><p>The unmanned noodle shop cannot answer that question. And this unanswerable gap in the system is being gift-wrapped in the cozy aesthetics of &#8220;late-night canteen&#8221; and &#8220;K-drama vibes,&#8221; and sold at a premium to every customer who walks through the door.</p><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.odbehindthecurtain.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.odbehindthecurtain.com/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[Behind Temu’s Parent Company’s $210 Million Fine: The Real Danger in Corporate Culture Isn’t Hustle — It’s Disguising One-Way Risk Transfer as Shared Mission]]></title><description><![CDATA[In any private enterprise, vision is fine.]]></description><link>https://www.odbehindthecurtain.com/p/behind-temus-parent-companys-210</link><guid isPermaLink="false">https://www.odbehindthecurtain.com/p/behind-temus-parent-companys-210</guid><dc:creator><![CDATA[Hector Sun]]></dc:creator><pubDate>Mon, 20 Apr 2026 16:12:36 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!quPI!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2bbfc632-0d6b-4f51-95bd-aef0b5c33684_997x997.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>In any private enterprise, vision is fine. Culture is fine. Even emotional bonds are fine. But every ounce of commitment beyond the contract must come with consideration. 
Loyalty without consideration is the cheapest way an organization can consume an individual.</em></p><div><hr></div><p><strong>A Record-Breaking Fine and a Disturbing Detail</strong></p><p>On April 17, 2026, China&#8217;s State Administration for Market Regulation (SAMR) imposed administrative penalties on seven major e-commerce platforms in the &#8220;ghost takeout&#8221; case, ordering rectification, suspending new cake-shop listings for three to nine months, and levying a combined RMB 3.597 billion (approximately $497 million) in fines and confiscations &#8212; the largest penalty since the Food Safety Law took effect. Among the seven, Pinduoduo &#8212; the Chinese e-commerce giant whose international arm, Temu, has rapidly expanded across the US and Europe &#8212; received the largest penalty at RMB 1.52193 billion ($210 million) through its domestic operating entity, Shanghai Xunmeng Information Technology.</p><p>Regulators said they uncovered more than 3.6 million transferred cake orders and 67,604 &#8220;ghost shops.&#8221; In Pinduoduo&#8217;s case, regulators found 9,463 decorated-cake merchants whose qualifications had not been properly vetted: 4,522 had not uploaded food business permits, while 4,941 had permits whose scope did not cover decorated cakes. Authorities also said that, during the investigation, platform employees used &#8220;violent&#8221; and &#8220;soft confrontation&#8221; tactics to obstruct enforcement; a later People&#8217;s Daily report said one enforcement officer suffered a fractured left index finger and a right-ankle soft-tissue injury after an employee deliberately slammed a door.</p><p>As someone who has spent years working in organizational development &#8212; designing structures, advising on layoffs, and sitting across the table from employees at the worst moments of their careers &#8212; I didn&#8217;t see this as just another corporate scandal. 
I saw a textbook case of organizational design failure.</p><p>The question isn&#8217;t &#8220;what kind of employees would do this?&#8221; The question is: <strong>what kind of organization produces this behavior systematically?</strong></p><div><hr></div><p><strong>Three Mechanisms That Turn Employees Into Liability Shields</strong></p><p>This wasn&#8217;t a rogue employee having a bad day. When frontline workers physically assault government regulators to protect their employer&#8217;s data, it almost always reflects three overlapping organizational mechanics:</p><p><strong>First, the organization elevates targets above boundaries.</strong> When a company&#8217;s operating logic is &#8220;results at all costs,&#8221; compliance stops being a floor and starts becoming a process. Processes can be delayed. And once they become delays, they become obstacles to override. This is how legal and ethical lines gradually fade from red to gray to invisible.</p><p><strong>Second, the organization codes obedience as loyalty and dissent as disloyalty.</strong> Many companies claim to have open-door policies. But the actual signal transmitted through promotions, assignments, and who gets protected is: whoever doesn&#8217;t charge forward in a critical moment isn&#8217;t a team player. Whoever pushes back is a liability. Over time, employees stop learning to judge right from wrong. They learn to judge which way the wind is blowing.</p><p><strong>Third, the organization leverages individuals before the crisis and discards them after.</strong> The system stays invisible; the individual absorbs the blast. 
During normal operations, it&#8217;s &#8220;we&#8217;re a family.&#8221; After an incident, it becomes &#8220;this was the action of specific individuals.&#8221; According to Chinese media reports, PDD dismissed several employees from its government relations team after the confrontation became public &#8212; the very people who had been on the front line.</p><p>If you&#8217;ve studied the Theranos case, you&#8217;ll recognize the pattern. Elizabeth Holmes built a culture where questioning the technology was treated as insufficient belief in the mission. Employees who raised safety concerns were isolated, sidelined, or terminated. The organization rewarded blind commitment and punished professional judgment &#8212; until the whole thing collapsed, and the frontline employees who had carried out questionable directives were left holding the bag.</p><div><hr></div><p><strong>But Individuals Still Have a Choice</strong></p><p>Having said all that, I have no sympathy for the employees who chose violence.</p><p>There&#8217;s a line in the James Bond franchise that I think about often: <em>&#8220;A license to kill is also a license not to kill.&#8221;</em></p><p>The same logic applies in any workplace. An organization can pressure you. A boss can imply what they want. Peers can create momentum that feels impossible to resist. But you are still an adult. You still have a choice.</p><p>You might fear losing your job. You might fear being labeled as disloyal. You might make a poor judgment call in the heat of the moment. But the moment you cross a legal line &#8212; physically blocking regulators, destroying evidence, assaulting an officer &#8212; that stops being &#8220;just following orders&#8221; and becomes your decision.</p><p>Mature organizational design doesn&#8217;t mean training employees to obey more efficiently. 
It means building a protected path for refusing illegal directives &#8212; an escalation mechanism where saying &#8220;no&#8221; is safe, not career suicide. If the only way to prove loyalty is to charge blindly forward, and holding the line means bearing consequences, then the company&#8217;s most dangerous feature isn&#8217;t its hustle culture. It&#8217;s that it has turned legal risk into a loyalty test for its most vulnerable employees.</p><div><hr></div><p><strong>The Mission Illusion: Demanding Religious Devotion on a Commercial Contract</strong></p><p>Many private companies&#8217; deepest problem isn&#8217;t that they pursue profit. Profit is the nature of commercial enterprise. There&#8217;s nothing to apologize for.</p><p>The problem is that many companies insist on wrapping profit-seeking in a quasi-sacred narrative: talk of mission, talk of family, talk of &#8220;changing the world together&#8221; &#8212; and then, under the cover of that rhetoric, demand from employees a level of commitment that exceeds their contract, exceeds their compensation, and sometimes exceeds the law.</p><p>This is a very specific organizational pathology: <strong>using a commercial compensation structure to extract devotion that belongs in a mission-driven institution.</strong></p><p>Mission-driven organizations &#8212; research labs, public health agencies, conservation groups &#8212; can legitimately ask for extraordinary commitment. But that works because their selection mechanisms, reward structures, professional identity systems, and risk-sharing frameworks are all calibrated to support it. People accept lower pay at a research institution because they receive professional recognition, intellectual freedom, career identity, and societal respect in return.</p><p>A private company can absolutely articulate a vision. But the prerequisite is acknowledging that you are, first and foremost, a commercial entity. 
You cannot offer baseline compensation, fragile job security, and opaque promotion rules while simultaneously demanding that employees adopt a &#8220;co-founder mindset&#8221; toward overtime, sacrifice, blame-absorption, and boundary-crossing.</p><p>That&#8217;s not a grand narrative. That&#8217;s a <strong>psychological contract violation.</strong></p><p>The most common version of this in Chinese tech is the phrase: &#8220;The company trained you.&#8221; But training was never charity &#8212; it was investment. Development programs exist to increase output per employee. A company can certainly invest in you, and you can certainly appreciate a good boss, a good platform, and good opportunities. But none of that changes a fundamental fact: you signed a labor contract, not an indenture.</p><div><hr></div><p><strong>Culture Isn&#8217;t Free: Stop Performing Gratitude, Start Distributing Value</strong></p><p>From an OD perspective, culture matters enormously. But it matters because it drives collaboration, trust, decision speed, and organizational resilience &#8212; not because it makes the founder feel good about themselves.</p><p>The problem is that when many companies talk about &#8220;building culture,&#8221; what they actually want is <strong>low-cost compliance control.</strong></p><p>The rewards an organization provides to employees roughly fall into three tiers:</p><p><strong>Symbolic rewards</strong> &#8212; praise, vision statements, identity, rituals, the feeling of being seen and valued.</p><p><strong>Economic rewards</strong> &#8212; salary, bonuses, equity, profit-sharing, and compensation genuinely tied to performance.</p><p><strong>Structural rewards</strong> &#8212; clear promotion pathways, real decision-making authority, information transparency, and fair, predictable rules.</p><p>A healthy organization layers all three. Symbolic rewards create cohesion. Economic rewards provide consideration. 
Structural rewards give people a reason to believe in the future and trust the system.</p><p>Too many companies only invest in the first tier. They use the cheapest possible symbolic gestures to purchase the most expensive forms of emotional labor and organizational loyalty: vision without distribution, inspiration without equity, gratitude without rules.</p><p>This produces absurd management theater. Some Chinese companies have developed a peculiar obsession with Thanksgiving &#8212; an American holiday &#8212; turning it into an annual ritual where employees write thank-you cards, film testimonial videos, and publicly express gratitude to the company &#8220;for providing a platform.&#8221; The founder weeps on stage. The audience sits stone-faced. Not because employees are incapable of gratitude, but because adults can tell the difference between genuine appreciation and performance.</p><p>If you want employees to treat the company like their home, don&#8217;t ask them to write gratitude cards or memorize corporate values or repost the CEO&#8217;s speeches on social media. Give them a real stake &#8212; in financial distribution, in governance, in upward mobility.</p><p>Culture is not free. Any culture that demands extra commitment, extra obedience, or extra sacrifice must have a corresponding value-distribution mechanism as its foundation. If a company offers only symbolic rewards while being radically stingy on economic and structural ones, it isn&#8217;t building culture. It&#8217;s purchasing loyalty at the lowest possible cost.</p><p>That&#8217;s not sophisticated management. That&#8217;s a scam.</p><div><hr></div><p><strong>The Ultimate Clarity: Work to the Highest Standard, but Only for Yourself</strong></p><p>Seeing through the transactional nature of private employment doesn&#8217;t mean becoming cynical, and it certainly doesn&#8217;t mean coasting. 
Quite the opposite &#8212; the most clear-eyed professionals tend to hold themselves to higher standards than anyone around them.</p><p>Because they&#8217;ve finally understood something: <em>I&#8217;m not working hard to repay the company. I&#8217;m working hard to honor my own craft.</em></p><p>Why still deliver excellent work? Not to demonstrate loyalty, but for three reasons:</p><p><strong>First, your market value.</strong> Your deliverables, your judgment, your reliability &#8212; these become your pricing basis in the talent market. Everything you produce today becomes a bargaining chip for tomorrow.</p><p><strong>Second, your professional self-respect.</strong> Some people produce excellent work even in mediocre organizations. Not because the company deserves it, but because they have standards. High standards aren&#8217;t inherently noble, but they do protect you from being dragged down by your environment.</p><p><strong>Third, your transferable capabilities.</strong> Companies pivot. Business lines collapse. Bosses get replaced. Platforms turn on you. But the skills you&#8217;ve built through repetition and refinement &#8212; judgment, project management, conflict resolution &#8212; are assets that nobody can take away.</p><p>So here are the two statements I despise most in the corporate world.</p><p>The first is what companies say when they lay people off: &#8220;We invested in you.&#8221; The second is what employees say when they&#8217;re let go: &#8220;I gave you my best years.&#8221;</p><p>Stop narrating an employment relationship as if it were a blood oath. What exists between employer and employee is a contract, a set of responsibilities, compensation, deliverables, and a partnership that either side can reassess at any time. You can be deeply professional, deeply committed, and deeply reliable within that relationship. 
But you don&#8217;t need to stake your moral identity, your life&#8217;s meaning, and your sense of self on a single private company.</p><p>You can take responsibility for your craft. You can uphold industry standards. You can invest in long-term capability. But you should never charge into legal jeopardy for a private company&#8217;s bottom line.</p><div><hr></div><p>Hustle culture itself isn&#8217;t dangerous. What&#8217;s dangerous is when an organization systematically pushes its own legal, reputational, and ethical risks down onto its most vulnerable frontline employees &#8212; and then rebrands that downward transfer as &#8220;shared vision,&#8221; &#8220;all hands on deck,&#8221; and &#8220;showing up when it counts.&#8221;</p><p>That is the real danger in private enterprise. Not that it pursues profit, but that it disguises one-way risk transfer as shared values &#8212; offering commercial-contract compensation while demanding beyond-contract sacrifice and obedience.</p><p>Hold the line of your contract. Hold onto your leverage. Work to the highest standard, but only on your own behalf. That isn&#8217;t cold-blooded. It&#8217;s the scarcest form of clarity an adult can maintain inside the organizational world.</p><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.odbehindthecurtain.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.odbehindthecurtain.com/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[AI-Era Layoffs Are Turning Your Company Into a Pre-Made Meal]]></title><description><![CDATA[AI-era layoffs aren't just cutting headcount. 
They're dismantling the same training systems that pre-made meals destroyed in kitchens&#8212;quietly, irreversibly.]]></description><link>https://www.odbehindthecurtain.com/p/ai-era-layoffs-are-turning-your-company</link><guid isPermaLink="false">https://www.odbehindthecurtain.com/p/ai-era-layoffs-are-turning-your-company</guid><dc:creator><![CDATA[Hector Sun]]></dc:creator><pubDate>Fri, 17 Apr 2026 16:18:37 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/23dff513-7a57-4071-bcd0-21efaeb5fc35_1248x832.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The restaurant I frequent recently hired a new chef, and their signature vinegar cabbage has become inedible. The cabbage is still crisp, but the vinegar has lost all its layered acidity&#8212;just a single flat, sharp note. I know this isn&#8217;t simply about one chef&#8217;s skill. In an industry swept by pre-made meals, even a place that still insists on cooking everything from scratch can no longer find new hires with solid fundamentals.</p><p>Pre-made meals are almost perfect from a business standpoint: fast, low labor cost, consistent quality. The output isn&#8217;t stunning, but it&#8217;s edible. The cost is equally obvious: an entire generation of chefs has been stripped of the environment they need to grow.</p><p>In the past, a chef built integrated skill by cycling through dozens of dishes every day&#8212;vinegar cabbage, mapo tofu, red-braised pork, sweet-and-sour ribs. No one was deliberately training him. He was training himself through the breadth of dishes he had to cook. Culinary progress is, at its core, built through that breadth, one tiny iteration at a time: pull the heat back two seconds sooner here, let the vinegar infuse a little deeper there. A chef&#8217;s skill isn&#8217;t something you &#8220;learn&#8221; and then hold steady. It&#8217;s something you sharpen every single meal.</p><p>Pre-made meals skip this entire process. 
They also dismantle the training ground where young cooks were supposed to level up. When an entire industry accepts &#8220;good enough&#8221; and stops pursuing excellence, the ceiling of what the industry can produce stalls&#8212;or collapses. Because a ceiling is not a static target. It is the dynamic result of hundreds of tiny &#8220;let me improve this one more time&#8221; acts every day.</p><p>The restaurant industry is no exception. The AI-driven layoff decisions your company is making are structurally the same story: capabilities and skills that were once carried along by the system as a natural byproduct are being stripped away, one efficiency optimization at a time.</p><div><hr></div><p><strong>The Ceiling Blind Spot</strong></p><p>When companies run any efficiency optimization&#8212;whether adopting AI or switching to pre-made meals&#8212;the benchmark is always in the <strong>present tense</strong>: How much does this save today? How much faster does it make us today?</p><p>The core driver behind this benchmark is: <strong>speed</strong>.</p><p>Because customers demand speed and bosses demand speed, anything that clears the minimum bar of &#8220;usable&#8221; gets accepted. One or two such compromises don&#8217;t hurt. But a years-long obsession with speed means nobody ever benchmarks against &#8220;potential and future ceiling.&#8221;</p><p>What will top-tier capability look like in ten years? Where will the real moat be in the second half of the game? Since these questions can&#8217;t be quantified in this quarter&#8217;s report&#8212;and certainly can&#8217;t be converted into a year-end bonus&#8212;everyone naturally uses the present as their reference system and silently assumes it will remain valid forever. 
This is what I call the <strong>Ceiling Blind Spot</strong>.</p><div><hr></div><p><strong>Where Does the Ceiling Come From?</strong></p><p>A sharp pushback here: even if we keep doing everything by hand, we don&#8217;t know where the ceiling is either. If nobody can see it, what&#8217;s the point of saying AI is blinding us to it?</p><p>To answer that, we need to ask a more basic question: where does a ceiling actually come from?</p><p>A ceiling is never a static endpoint. It&#8217;s a dynamic process, pushed forward by two forces working in tandem.</p><p>The first is <strong>leaping</strong>: a breakthrough brought by new technology. One-shot, discontinuous, impossible to predict with precision.</p><p>The second is <strong>grinding</strong>: the more common, more everyday kind. Through long-term competition and polishing, pushing just slightly beyond what was possible yesterday. Today&#8217;s marketing profession looks nothing like the one fifty years ago&#8212;not because of some sudden miracle, but because over those fifty years, countless marketers spent every day thinking about how the last campaign could have been better. That&#8217;s what accumulated into today.</p><p>These two forces don&#8217;t operate independently. They are <strong>interlocked</strong>.</p><p>A leaping breakthrough can only take root if it lands on soil that has been cultivated by years of grinding. The first iPhone in 2007 didn&#8217;t descend from an alien civilization. It was decades of incremental progress in screens, batteries, and wireless technology finally hitting a critical threshold. Without those decades of grunt work, no one could have conjured that device.</p><p>Daily grinding is not the opposite of technological leaping. <strong>It is the only precondition that makes leaping possible.</strong></p><div><hr></div><p><strong>We&#8217;ve Turned AI Into a Pre-Made Meal</strong></p><p>AI is very fast, and its ability to synthesize is extraordinary. 
What it synthesizes is not one or two people&#8217;s experience but the distilled output of countless human works absorbed through training data. So some will argue that AI output should exceed what any individual human can produce&#8212;especially when a small number of exceptionally talented people push AI to its limits, feeding in high-quality input that should raise AI&#8217;s ceiling for everyone.</p><p>But the training mechanism of current AI models systematically suppresses minority preferences. In academic circles, this is called <strong>preference collapse</strong>. When the vast majority of users accept &#8220;good enough,&#8221; even if a tiny minority pushes for excellence, the model optimizes in the direction of &#8220;satisfy the vast majority.&#8221; The input from the few who pursue the highest standards gets structurally erased in training.</p><p>So, beyond speed, AI has not delivered anything that exceeded my imagination. It typically gets me to about 90% of what I had in mind. In rare cases it hits 100% or 105%. But it has never handed me something that made me gasp and say &#8220;I had no idea it could be done this way.&#8221;</p><p>AI is not incapable of breaking through human imagination. AlphaGo&#8217;s &#8220;Move 37&#8221; in its match against Lee Sedol stunned the world&#8217;s top Go players. AlphaFold&#8217;s protein structures exceeded what biologists had built up over decades of intuition. These are genuine moments where AI broke past human imagination. But those moments came after humans spent years designing extremely sophisticated training environments with clear rules and measurable outcomes, forcing the AI to grow something new on its own. 
In everyday work, our use of AI is far more straightforward: open the dialog box, type a prompt, glance at the output, think &#8220;eh, not perfect, but usable,&#8221; and ship it.</p><p>AI&#8217;s built-in mechanism for suppressing minority preferences, combined with our refusal to provide the depth and friction of real collaboration, has together turned AI into a pre-made meal.</p><p>We could have used AI as a sous-chef&#8212;in rounds of scrutiny and tearing up the draft, pushing out that last millimeter beyond the ceiling. But we couldn&#8217;t be bothered. We wanted speed. We accepted &#8220;good enough.&#8221; And so the moment that would have made you pound the table&#8212;along with that one millimeter of human progress&#8212;quietly evaporated without a trace.</p><div><hr></div><p><strong>AI-Era Layoffs</strong></p><p>Today, there&#8217;s no need to debate whether to embrace AI. Almost every business leader has already answered &#8220;yes.&#8221; AI has handed them two especially tempting bullets.</p><p><strong>The first bullet</strong> is direct task replacement. Especially for jobs that once sounded prestigious&#8212;&#8220;senior analysis, synthesis, integration&#8221;&#8212;writing research reports, doing competitive analysis, pulling data. These were the core work of knowledge workers. Now AI does it faster, more consistently, and without complaining or calling in sick.</p><p><strong>The second bullet</strong> is far more subtle: the evaporation of human coordination. Research from Asana shows knowledge workers spend about 60% of their time on coordination&#8212;group chats, alignment, status-chasing, conflict resolution. A large chunk of that coordination was organized around the very analysis tasks AI is now taking over. When the task is gone, the coordination around it has no reason to exist.</p><p>So executives see a perfect picture on their dashboard: fewer people, lower coordination costs, output still holding. 
A supremely satisfying <strong>Double Kill</strong>.</p><p>But hidden inside the second bullet is a lethal cognitive trap.</p><p>What AI actually reduces is coordination around <strong>codifiable tasks</strong>. The <strong>uncodifiable coordination</strong>&#8212;reading nuance in ambiguous situations, managing cross-functional trust, aligning judgment across parties with conflicting interests&#8212;has not decreased. In fact, it may have <strong>increased</strong>, because someone now has to verify AI output at critical checkpoints, resolve logical conflicts between multiple AI-generated artifacts, and clean up failures AI cannot be held accountable for.</p><p>When business leaders see &#8220;expected total coordination time is down&#8221; and use that as the basis for another round of cuts, the people most likely to be collaterally damaged are precisely the ones doing the uncodifiable work.</p><div><hr></div><p><strong>The Trap of Cross-Level Layoffs</strong></p><p>The most extreme version of AI-triggered layoffs is <strong>wholesale delayering</strong>: eliminating entire junior or middle tiers, and having a handful of senior employees plus AI deliver the output directly.</p><p>On current financial statements, this decision looks spectacular. One senior employee plus AI can replace an entire team. Costs drop, speed rises, quality is arguably acceptable.</p><p>But the cost is this: once a tier is erased from the organization wholesale, the <strong>capabilities</strong> it carried vanish with it. And that disappearance is irreversible.</p><p>Five years later, when you discover a critical piece is missing and try to hire it back from the outside market, you&#8217;ll find you can&#8217;t. The candidate pool at that tier has shrunk across the entire market. 
When every company makes the same &#8220;clever&#8221; decision, when no company is paying to cultivate that tier, the industry&#8217;s talent reservoir dries up.</p><p>That&#8217;s the moment you realize: the junior roles you replaced with AI weren&#8217;t just &#8220;cheap labor.&#8221; They were the <strong>training ground</strong> for your organization&#8217;s future senior employees.</p><p>Without junior roles doing the grunt work, without newcomers climbing the ladder step by step, without young employees developing business instinct through countless small mistakes&#8212;when you eventually need a seasoned expert who can operate independently, the market simply cannot grow one.</p><p>This is the same mechanism by which pre-made meals have broken the integrated training environment that produces skilled chefs.</p><div><hr></div><p><strong>The Downward Spiral</strong></p><p>By this point, two lines of degradation in the AI era have already begun to interlock.</p><p>One is on the <strong>output side</strong>: AI&#8217;s built-in preference collapse, combined with our unwillingness to push back, keeps it locked at &#8220;90%.&#8221; The other is on the <strong>input side</strong>: cross-level layoffs eliminate junior and mid-level roles, ensuring the next generation of people who could push AI to 95% or higher will never emerge.</p><p>The two lines accelerate each other. The more AI behaves like a pre-made meal, the more reason companies have to cut people. The deeper they cut, the fewer people will exist in the future who could push beyond pre-made-meal-level output. <strong>This is not two pieces of bad news running in parallel. This is a self-reinforcing downward spiral.</strong></p><div><hr></div><p><strong>Keep at Least One Person on Every Layer</strong></p><p>After seventeen years in organization design, my operating principle is simple: <strong>You can lay people off. 
But keep at least one person on every team, at every layer.</strong></p><p>The first pushback I always hear: &#8220;What if that layer was already bloat? I&#8217;d identified the middle tier as overgrown. AI arrives, I take the chance to trim it&#8212;why keep one?&#8221;</p><p>It&#8217;s a sharp objection and deserves a real answer.</p><p>The middle tier in most companies is actually two different things:</p><p><strong>Type A</strong>: pure headcount bloat. Tiny spans of control, absurdly long reporting lines. Cutting the whole tier is correct. It should have been cut before AI even arrived.</p><p><strong>Type B</strong>: a <strong>load-bearing wall</strong> in the chain that passes capabilities down through the organization. The value they carry is simply hard to quantify. They may be mentoring newcomers, doing uncodifiable cross-functional coordination, or serving as the translation layer between execution and strategy.</p><p>The fatal problem: <strong>Type A and Type B look identical on a financial statement.</strong> You can&#8217;t tell which is fat and which is bone.</p><p>What I oppose is not cutting bloat. Cut it! What I oppose is this: business leaders who can&#8217;t distinguish Type A from Type B, seduced by AI&#8217;s promise of efficiency, default to treating everything as Type A and uproot Type B along with the rest.</p><p>&#8220;Keep at least one person per layer&#8221; is not about protecting bloat. It is a <strong>defensive mechanism</strong>&#8212;a very cheap insurance policy on your own survival. Real unknown risk&#8212;the kind that doesn&#8217;t yet have a name or a job description&#8212;cannot be addressed with a dedicated project. It can only be absorbed by structural redundancy within the organization. Think of a competitive bodybuilder&#8217;s body fat: the lines look perfect at 5%, but a single cold can collapse the system. <strong>Moderate redundancy is not inefficiency. 
It is reserve.</strong></p><p>But the one you keep can&#8217;t be just anyone. Not the most obedient, the cheapest, or the best at polishing status reports. It has to be the <strong>node</strong>&#8212;the person who can mentor newcomers, make judgment calls in ambiguous situations, and hold a conversation with both the executive and the intern. <strong>They are the ember of that layer, not its leftover.</strong></p><p>If a team can&#8217;t even identify such a person, then its problem isn&#8217;t &#8220;can AI replace us.&#8221; It&#8217;s something much deeper: organizational tissue death. And AI cannot cure that.</p><p>There&#8217;s an even more radical objection: AI will eventually deliver full cross-functional automation. Organizational layers will disappear entirely. Why keep anyone? In a previous article, I used <strong>Sun&#8217;s Decision Authority Matrix</strong> to demonstrate that the &#8220;Civilian &#215; Autonomous Process&#8221; domain&#8212;fully autonomous AI execution across departments&#8212;is structurally unimplementable in commercial settings for the foreseeable future, because halting authority cannot be exercised, the cost of halting is unbearable, and accountability cannot be assigned. (The full argument is in <em>&#8220;<a href="https://www.odbehindthecurtain.com/p/autonomous-ai-chains-are-easiest?lli=1">AI Runs Entire Kill Chains in War. In Business, It Can&#8217;t Even Run a Supply Chain.</a>&#8221;</em>) Here, I&#8217;ll state only the conclusion: <strong>the chains that actually run in the real world are still held up by organizational structure and people. &#8220;Keep one per layer&#8221; is not an obsolete recommendation. It is the only executable defense against error.</strong></p><div><hr></div><p><strong>This Isn&#8217;t a Benefit for Employees. It&#8217;s an Insurance Premium for Shareholders.</strong></p><p>Perhaps you&#8217;ll say I&#8217;m arguing on behalf of employees. Why should shareholders pay for unknown risk? 
Isn&#8217;t keeping these people just dead weight?</p><p>Exactly the opposite.</p><p>What shareholders buy is not just this year&#8217;s financial performance. It is the long-term viability of the company. Organizational redundancy is not a gift to employees. It is <strong>a premium shareholders are paying on their own future</strong>. A company trimmed too &#8220;clean&#8221; may see short-term share price appreciation, but its resilience and evolutionary capacity have been severely overdrawn.</p><p>There is a classic <strong>agency problem</strong> buried here. Management has a bounded tenure and a powerful incentive to convert long-term corporate risk into a beautiful short-term financial result during their time in the seat. In this cross-level-layoff frenzy, shareholders are ultimately the victims. Shareholders who actually care whether the company will be alive in five or ten years should not permit management to drain the life-saving cushion as if it were just fat.</p><div><hr></div><p><strong>The Organization Is More Reliable Than Any Individual</strong></p><p>My bottom line: <strong>the organization is always more reliable than any individual.</strong></p><p>People quit, get sick, have bad days. Organizational structures don&#8217;t. A well-designed organization keeps its capabilities even when a specific seat changes hands&#8212;because those capabilities are embedded in the workflow, the layer-by-layer mentorship, the cross-function collaboration. They don&#8217;t live inside any single person&#8217;s head.</p><p>But cross-level layoffs in the AI era are doing the exact opposite.</p><p>On the surface, the organization is &#8220;slimming down.&#8221; What&#8217;s actually happening: capabilities that should be held and preserved by organizational structure are being concentrated into the hands of a few surviving senior employees. 
At the cost of long-term organizational stability, the personal value and indispensability of a few individuals are being pushed to a level that brings the organization itself no benefit.</p><p>That isn&#8217;t slimming down. <strong>That is privatizing the organization&#8217;s capabilities into the heads of a handful of people.</strong></p><p>When a top hotel&#8217;s star concierge leaves, the hotel doesn&#8217;t just lose an employee. It loses his high-end network, his sharp judgment, the trust he built with clients over years. Because the hotel never designed a mechanism to retain these things, assets that should have belonged to the company walked out the door with him.</p><p>Cross-level AI layoffs are replicating this process across the entire white-collar world. The organization thinks it&#8217;s using AI to streamline. It&#8217;s actually mortgaging its core capabilities to the handful of people still standing.</p><p>Years later, when business stalls, key people defect, or a major incident hits, no one will trace it back to that &#8220;AI-driven cost optimization&#8221; decision. Just as no one today traces bad vinegar cabbage back to that first bag of pre-made meals. Pre-made meals aren&#8217;t perfect, but they&#8217;re edible and fast, so the boss went with them. Once the habit of &#8220;aiming at passing&#8221; sets in, the capabilities that can only grow through slow repetition&#8212;like a chef&#8217;s muscle memory for heat&#8212;quietly evaporate, one at a time.</p><div><hr></div><p><strong>I Don&#8217;t Care If AI Reduces Jobs</strong></p><p>My motivation in giving these warnings to companies is not an attempt to save any particular employee&#8217;s job.</p><p>Companies are, at their core, the machine that keeps the supply of capability flowing through society. 
If that machine starts cannibalizing itself in the name of efficiency, then I&#8212;as the machine&#8217;s ultimate end user, as someone who has to eat every day&#8212;will eventually pay the price.</p><p>So here&#8217;s where I actually stand:</p><p>I don&#8217;t particularly care whether AI reduces job opportunities. Technology has always moved forward. Jobs have always been created and destroyed. That is not a new problem.</p><p>What I care about is this: walking into a restaurant and being unable to get a plate of vinegar cabbage that actually has flavor. Buying a bottle of shower gel online and not being able to find a single bottle with decent packaging. Calling any company&#8217;s customer service and reaching someone who has lost the ability to solve any problem that falls outside the SOP.</p><p>Will AI replace jobs? That&#8217;s a problem for business leaders and macroeconomists.</p><p>But will the baseline of commercial civilization turn to sand? Will the quality of everyday life for all of us slide? <strong>Those are questions for every single one of us.</strong></p><div><hr></div><p><em>This is the final installment of a three-part series on AI-era layoffs. The first two are &#8220;<a href="https://www.odbehindthecurtain.com/p/genai-is-cutting-the-people-it-cant?lli=1">GenAI Is Cutting People It Can&#8217;t Replace</a>&#8221; and &#8220;<a href="https://www.odbehindthecurtain.com/p/ai-didnt-kill-the-best-concierges?lli=1">AI Didn&#8217;t Kill the Best Concierges. It Killed the Path to Becoming One.</a>&#8221;</em></p><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.odbehindthecurtain.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.odbehindthecurtain.com/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[She’s Not the Lead. 
She’s Irreplaceable.]]></title><description><![CDATA[What a Mozart Opera Reveals About Critical Positions.]]></description><link>https://www.odbehindthecurtain.com/p/shes-not-the-lead-shes-irreplaceable</link><guid isPermaLink="false">https://www.odbehindthecurtain.com/p/shes-not-the-lead-shes-irreplaceable</guid><dc:creator><![CDATA[Hector Sun]]></dc:creator><pubDate>Thu, 09 Apr 2026 16:48:08 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!CC54!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ba56e51-4847-4575-a04a-49f74a785a40_3926x2711.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I was watching Mozart&#8217;s <em>The Magic Flute</em> in Shanghai on April 9. I&#8217;ve seen this opera many times over the years. The story and the music are thoroughly familiar. But every time, the moment that stops the room is the same: the Queen of the Night steps forward and opens her mouth.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!CC54!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ba56e51-4847-4575-a04a-49f74a785a40_3926x2711.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!CC54!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ba56e51-4847-4575-a04a-49f74a785a40_3926x2711.jpeg 424w, https://substackcdn.com/image/fetch/$s_!CC54!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ba56e51-4847-4575-a04a-49f74a785a40_3926x2711.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!CC54!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ba56e51-4847-4575-a04a-49f74a785a40_3926x2711.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!CC54!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ba56e51-4847-4575-a04a-49f74a785a40_3926x2711.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!CC54!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ba56e51-4847-4575-a04a-49f74a785a40_3926x2711.jpeg" width="1456" height="1005" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5ba56e51-4847-4575-a04a-49f74a785a40_3926x2711.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1005,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2491756,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.odbehindthecurtain.com/i/193707379?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ba56e51-4847-4575-a04a-49f74a785a40_3926x2711.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!CC54!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ba56e51-4847-4575-a04a-49f74a785a40_3926x2711.jpeg 424w, 
https://substackcdn.com/image/fetch/$s_!CC54!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ba56e51-4847-4575-a04a-49f74a785a40_3926x2711.jpeg 848w, https://substackcdn.com/image/fetch/$s_!CC54!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ba56e51-4847-4575-a04a-49f74a785a40_3926x2711.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!CC54!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ba56e51-4847-4575-a04a-49f74a785a40_3926x2711.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>&#8220;Der H&#246;lle Rache&#8221; is widely regarded as one of the most technically demanding arias ever written. Two octaves of coloratura fury, rapid-fire staccato passages at the extreme upper limit of the human voice. The number of sopranos alive who can perform it at the level it demands is vanishingly small.</p><p>The Queen was sung by Tetiana Zhuravel, a Ukrainian coloratura soprano. This role is essentially her calling card. She has sung it at major opera houses across Europe. She keeps getting hired because she can do what very few sopranos alive can do.</p><p>Here&#8217;s the thing: she is not the lead. The leads are Prince Tamino and Princess Pamina. They carry the narrative and they&#8217;re on stage for most of the evening. The Queen has roughly ten minutes of stage time in a three-hour opera. But if you cast the wrong singer in this role, the opera fails.</p><p>She is not the most important role. She is the most irreplaceable one.</p><p>This distinction matters far beyond the opera house. In organizational design, we call it a <em>critical position</em>.</p><p></p><p><strong>What &#8220;critical position&#8221; actually means</strong></p><p>&#8220;Critical position&#8221; is not a rigorous academic concept with a single universally accepted definition, which is precisely why it gets misused so often. But in my practice as an organizational design consultant, the most useful definition is this: a critical position is a role that, if left vacant or filled by the wrong person, would cause disproportionate damage to the organization.</p><p>The key word is <em>disproportionate</em>. Every position matters. If it didn&#8217;t, the position shouldn&#8217;t exist. But not every vacancy creates the same level of disruption. Some gaps can be covered by redistributing work, bringing in a contractor, or promoting from within on a reasonable timeline. 
A critical position is one where none of these remedies are adequate.</p><p>The defining characteristics are straightforward.</p><ul><li><p>The role has a high and direct impact on outcomes because of what the position connects to structurally: a regulatory requirement, a process bottleneck, a technical dependency.</p></li><li><p>The talent pool is scarce, meaning you cannot post the job and expect qualified candidates within a normal hiring cycle.</p></li><li><p>And the ramp-up time is long, because the role requires depth that takes months or years to build.</p></li></ul><p>One thing that is <em>not</em> a reliable indicator: rank. In fact, the primary purpose of identifying critical positions is to surface roles that are easily overlooked precisely because they don&#8217;t sit at the top of the org chart. Everyone already pays attention to the C-suite. The exercise exists to find the positions that few people are watching but that would cause a crisis if they went vacant tomorrow.</p><p></p><p><strong>The CEO is Tamino</strong></p><p>A great Tamino elevates the entire opera. A mediocre one is noticeable but survivable. But the talent pool for Tamino is large. There are many tenors who can sing this role competently. A casting director does not lose sleep over finding one.</p><p>The Queen of the Night is a different matter. The talent pool is tiny. The requirements are extreme. A casting director who cannot fill this role has a genuine crisis.</p><p>Can a CEO be a critical position? Sometimes. A founder-CEO whose personal vision is the company&#8217;s entire strategy may genuinely be irreplaceable in the short term. But whether or not the CEO qualifies is beside the point. 
The CEO is already the most watched, most succession-planned role in any organization.</p><p>Now compare that to the compliance officer in a heavily regulated industry whose specific certification takes three years to obtain and whose departure could put the company&#8217;s operating license at risk. Or the systems architect whose role sits at the intersection of three business processes that no one else fully understands, not because of individual brilliance, but because the organizational structure concentrated that dependency in a single node. These are the positions this exercise is designed to find.</p><p></p><p><strong>This is organizational design, not talent management</strong></p><p>Most companies get the sequence backwards. They treat critical position identification as a talent management exercise: HR runs a workshop, gathers input from business leaders, produces a list, hands it to the succession planning team.</p><p>But critical positions are a function of organizational architecture. The correct sequence is: design the structure first, then identify which positions in that structure are critical, then build the talent strategy around them. The identification step requires understanding the strategic logic of the architecture, the flow of decisions and dependencies, the places where the structure creates concentration risk.</p><p>When you redesign an organization, you simultaneously redraw the map of where those concentrations live. A restructuring can eliminate a critical position that existed before, create one that didn&#8217;t, or consolidate three non-critical roles into one that becomes critical. This is why, after every organizational redesign I deliver, one of the first questions I ask the leadership team is: in this new structure, which positions are critical, and has that changed from before? If the answer is &#8220;we haven&#8217;t thought about it,&#8221; the redesign is incomplete. 
You&#8217;ve drawn the boxes and connected the lines, but you haven&#8217;t stress-tested the architecture for single points of failure.</p><p>For HR professionals who have always treated this as part of their domain: identifying <em>which</em> positions are critical is an architectural question. What you do with the people in those positions (succession, retention, development) is absolutely talent management and absolutely your domain. Starting with people instead of structure is how high performers get confused with critical positions.</p><p>Mozart didn&#8217;t just write beautiful music. He designed a structure and knew exactly which position carried the highest risk if the casting went wrong. That&#8217;s organizational design.</p><p></p><p><strong>What identifying a critical position is supposed to trigger</strong></p><p>Once a position is identified as critical, it demands sustained, concrete attention: targeted succession planning for that specific role, active retention efforts for the current occupant, and contingency planning that answers the question &#8220;if this person is gone tomorrow, what do we do on day one, day thirty, and day ninety?&#8221; Every one of these actions costs real time from line managers and HR. This is not a box-checking exercise. It is ongoing, resource-intensive work, which is exactly why the list must be short.</p><p></p><p><strong>Two ways companies destroy the exercise</strong></p><p><strong>Mistake one: equating critical positions with the top two layers of the org chart.</strong></p><p>This is the most common error. When asked to identify critical positions, most leadership teams default to listing their most senior roles. But those roles end up on the list because of their rank, not because of a rigorous assessment of replaceability. Your C-suite is, by definition, the most watched, most succession-planned part of your organization. 
Putting them on the list wastes attention that should be directed elsewhere.</p><p><strong>Mistake two: letting the list grow until it loses meaning.</strong></p><p>In some organizations, the critical positions list covers twenty percent or more of all roles. At that point, the concept has been destroyed. If your list has fifty names on it, your line managers and HR team cannot realistically give each one the sustained attention the exercise demands. The whole point is differentiation. Spread that attention across a fifth of your workforce, and you&#8217;ve accomplished nothing.</p><p>Why does the list grow? Almost always because of organizational politics. When department heads see that other departments have positions on the list and theirs don&#8217;t, they feel their team&#8217;s significance is being questioned. This is a fundamental misunderstanding. Criticality and importance are different dimensions. But the emotional reaction is real, and HR, caught between analytical rigor and organizational harmony, often compromises. A few more positions get added. Then a few more. The list expands, and the exercise becomes theater.</p><p></p><p><strong>The position belongs to the organization, not the person</strong></p><p>Here is where most people&#8217;s intuition leads them astray, and where Mozart provides the sharpest counterargument.</p><p>Mozart wrote the Queen of the Night for his sister-in-law, Josepha Hofer, tailoring the role to her extraordinary upper register and agile coloratura. You might argue this proves that critical positions follow the person. After all, Mozart literally built the role around Josepha.</p><p>But <em>The Magic Flute</em> has been performed continuously for 235 years. Josepha sang her last Queen of the Night in 1801. Every production since has cast this role by evaluating candidates against the requirements of the position itself: hit F6, sustain the staccato passages, project the dramatic weight the music demands. 
Those requirements have never changed. The woman who inspired the role has been dead for two centuries. The role endures.</p><p>This is exactly how critical positions work. A long-tenured specialist may have become so identified with a function that people can&#8217;t imagine anyone else doing it. But criticality must be assessed based on the role&#8217;s structural characteristics: what it connects to, what qualifications it requires, what happens when it&#8217;s vacant. When that person leaves, the position remains. And if the organization defined it clearly enough, it knows exactly what to look for next.</p><p></p><p><strong>A note on permanence</strong></p><p>A position&#8217;s criticality generally persists as long as the organizational structure that created it remains in place. The Queen of the Night has been a critical position for 235 years because the structure of the opera hasn&#8217;t changed.</p><p>But organizations do change. During a digital transformation, a legacy systems architect may temporarily become the most critical role in the company. Once the migration is complete, that position may no longer exist at all. The test remains the same: would a vacancy cause disproportionate damage <em>right now</em>? If yes, it&#8217;s critical, even if temporarily.</p><p></p><p><strong>Back to the opera house</strong></p><p>Tetiana Zhuravel took her bow in Shanghai to a standing ovation. She had been on stage for a fraction of the evening, sang two arias, and had been, for the duration of those arias, the single most important person in the building.</p><p>The leads carried the story. The orchestra carried the music. But when the Queen of the Night opened her mouth, every person in the audience understood instinctively what organizational designers spend careers trying to articulate: not all roles are created equal. Some are important. Some are irreplaceable. 
Knowing the difference is the most fundamental act of organizational clarity there is.</p><p>The next time you look at your org chart, ask yourself: where is your Queen of the Night? Not your CEO, not your highest-paid executive, but the role that appears modest on paper, that no one is actively watching, yet if the wrong person fills it, or if no one fills it at all, brings down the house.</p><p>Better yet, find out if your organization has a critical positions list. If it has fifty names on it, or if it mirrors the top two layers of your org chart, it hasn&#8217;t been stress-tested.</p><p>If you can&#8217;t find the list at all, your organizational design isn&#8217;t finished.</p><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.odbehindthecurtain.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.odbehindthecurtain.com/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[Nobody's Driving. Nobody's Accountable. 
And Now We Have Proof.]]></title><description><![CDATA[On the evening of March 31, 2026, roughly one hundred Baidu Apollo Go robotaxis were immobilized across Wuhan after what police preliminarily described as a system malfunction.]]></description><link>https://www.odbehindthecurtain.com/p/nobodys-driving-nobodys-accountable-a19</link><guid isPermaLink="false">https://www.odbehindthecurtain.com/p/nobodys-driving-nobodys-accountable-a19</guid><dc:creator><![CDATA[Hector Sun]]></dc:creator><pubDate>Sun, 05 Apr 2026 05:47:48 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!9brc!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3f96608-1966-4c53-bd42-75337fbfea69_1248x832.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>On the evening of March 31, 2026, roughly one hundred Baidu Apollo Go robotaxis were immobilized across Wuhan after what police preliminarily described as a system malfunction. On elevated highways, bridges, and arterial roads. Not pulled over. Stopped in the travel lane. According to passenger accounts reported by multiple Chinese media outlets, some passengers were trapped inside for up to two hours. The in-car speaker repeatedly broadcast a single instruction: &#8220;Vehicle has a problem. Do not open the door.&#8221; Passengers reported that the SOS button produced no response and that calls through the vehicle&#8217;s system auto-disconnected. Customer service, when passengers finally reached it on their personal phones, told them to &#8220;wait patiently.&#8221;</p><p>The vehicles did not stop because of an accident. Police preliminarily determined the cause to be a system failure. But the vehicles&#8217; stopping in live traffic lanes caused secondary collisions and severe congestion. 
Other drivers, not expecting stationary vehicles in the middle of a highway, collided with them.</p><p>On April 1, Baidu&#8217;s customer service told multiple media outlets that all robotaxi operations across Wuhan had been suspended, with no timeline for resumption. The timing was not an April Fools&#8217; joke. The passengers stranded on elevated highways the night before can confirm.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!9brc!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3f96608-1966-4c53-bd42-75337fbfea69_1248x832.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!9brc!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3f96608-1966-4c53-bd42-75337fbfea69_1248x832.jpeg 424w, https://substackcdn.com/image/fetch/$s_!9brc!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3f96608-1966-4c53-bd42-75337fbfea69_1248x832.jpeg 848w, https://substackcdn.com/image/fetch/$s_!9brc!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3f96608-1966-4c53-bd42-75337fbfea69_1248x832.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!9brc!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3f96608-1966-4c53-bd42-75337fbfea69_1248x832.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!9brc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3f96608-1966-4c53-bd42-75337fbfea69_1248x832.jpeg" width="1248" 
height="832" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d3f96608-1966-4c53-bd42-75337fbfea69_1248x832.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:832,&quot;width&quot;:1248,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:335821,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.odbehindthecurtain.com/i/193231019?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3f96608-1966-4c53-bd42-75337fbfea69_1248x832.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!9brc!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3f96608-1966-4c53-bd42-75337fbfea69_1248x832.jpeg 424w, https://substackcdn.com/image/fetch/$s_!9brc!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3f96608-1966-4c53-bd42-75337fbfea69_1248x832.jpeg 848w, https://substackcdn.com/image/fetch/$s_!9brc!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3f96608-1966-4c53-bd42-75337fbfea69_1248x832.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!9brc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3f96608-1966-4c53-bd42-75337fbfea69_1248x832.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>I read this news and felt something I don&#8217;t often feel: vindication.</p><p></p><p><strong>Two months ago, I wrote about this exact scenario</strong></p><p>In February, I published <a href="https://www.odbehindthecurtain.com/p/nobodys-driving-nobodys-accountable?lli=1">an article</a> about my experience riding a WeRide autonomous vehicle in Abu Dhabi through Uber. That article introduced several concepts I developed from what I observed in the back seat.</p><p>The first was the &#8220;<strong>accountability fuse</strong>.&#8221; The car had a human in the driver&#8217;s seat. Uber called him a &#8220;Vehicle Specialist,&#8221; not a driver. He wasn&#8217;t driving. But he was there. I argued that his real function was not technical backup but accountability absorption. 
If something went wrong, he was the person who could be pointed at and asked: &#8220;Why didn&#8217;t you intervene?&#8221; Like a fuse in an electrical circuit, he existed to burn out and protect the system behind him.</p><p>The second concept was the &#8220;<strong>accountability vacuum</strong>.&#8221; When AI holds decision-making authority but no one in the organization has been clearly designated as accountable for the AI&#8217;s decisions, accountability doesn&#8217;t become unclear. It evaporates. Every party in the chain has a reasonable explanation for why it&#8217;s not their fault.</p><p>That article ended with an image. In Abu Dhabi, the Uber app showed an &#8220;unlock door&#8221; button that was greyed out. The human specialist opened the door from inside. I wrote: &#8220;Next time, there won&#8217;t be a Saif. Only that button. You press it, and it turns grey. That greyed-out button doesn&#8217;t just lock you inside the car. It locks accountability outside.&#8221;</p><p>Wuhan proved this wasn&#8217;t a theoretical exercise. The button didn&#8217;t just turn grey. It stopped working entirely.</p><p></p><p><strong>With a human driver, accountability is clear</strong></p><p>In Abu Dhabi, with Saif sitting in the driver&#8217;s seat, the accountability for my safety as a passenger was unambiguous. It sat with Saif, and through Saif, with the company that employed him. How Saif and his employer divided that accountability between themselves was their business, not mine. As a passenger, I didn&#8217;t need to think about my own safety. That was someone else&#8217;s job.</p><p>This is no different from a traditional taxi. You get in, you say where you&#8217;re going, you arrive. If something goes wrong during the ride, the accountability falls on the driver and the company behind the driver. The passenger is never asked to make decisions about their own safety during the trip. That&#8217;s the deal. 
That&#8217;s what you&#8217;re paying for.</p><p>The key point is not just who is accountable. It is that the accountability is exercised in real time. The driver is there throughout the journey, continuously making judgments, continuously ready to act. If the car breaks down on a highway, the driver calls for help. The driver decides whether it&#8217;s safe to exit. The driver opens the door. The passenger doesn&#8217;t have to make any of these decisions. Accountability isn&#8217;t a document filed somewhere. It is a living function, performed by a human, every second of the trip.</p><p></p><p><strong>Without a human driver, that accountability cannot be replicated</strong></p><p>Remove the human from the vehicle, and this real-time accountability chain breaks the moment the system fails. This is not a criticism of any particular company&#8217;s execution. It is a structural observation. A machine can be shut down, recalled, or updated. It cannot be held accountable. Accountability requires a human who can answer the question: why did you let this happen?</p><p>This structural gap cannot be closed by remote support, by corporate promises, or by better technology. Here is why.</p><p><strong>Remote support is not a human in the vehicle.</strong> Every major robotaxi operator maintains remote human support. Waymo has its Fleet Response team. Baidu has remote operators who can take direct control of vehicles. But remote support is conditional. It depends on the network being up, the system being online, enough staff being available, the connection being fast enough, and the operator being able to accurately assess a situation through a camera feed. In Wuhan, when the system failed, passenger accounts indicate that every one of these support channels failed with it. Saif&#8217;s presence in the driver&#8217;s seat is unconditional. He is there regardless of network status, server capacity, or staffing levels. 
You can add redundancy to remote support, add backup channels, add more staff. But you can never turn conditional presence into unconditional presence.</p><p><strong>A corporate promise of &#8220;full accountability&#8221; is not real-time protection.</strong> A company can declare that it accepts full accountability for anything that goes wrong. But the purpose of clear accountability is not to determine who pays compensation after someone is hurt. The purpose is to ensure that someone is actively protecting the passenger throughout the journey. &#8220;We accept full accountability&#8221; is a promise that can only be redeemed after something has already gone wrong. During the two hours a passenger is trapped on an elevated highway, that promise cannot open the door, cannot assess whether exiting is safe, cannot call for help through a functioning channel. In any context involving human life, accountability that cannot be exercised in real time is not accountability. It is a liability settlement waiting to happen.</p><p><strong>Even foreseeable risks were not addressed.</strong> What makes the Wuhan incident particularly hard to defend is that none of it was unforeseeable. System failures happen. Fleet-wide failures are possible when vehicles share a single network architecture. Passengers in a stopped vehicle on an elevated highway will need to communicate with someone. Emergency communication should function independently of the system it backstops. Every one of these risks could have been written on a whiteboard. And for every one of them, the question is the same: who is accountable for the passenger&#8217;s safety in this scenario, and how will that accountability be exercised in real time? Wuhan showed these questions had not been answered. 
If the foreseeable scenarios don&#8217;t have clear, executable accountability solutions, how can anyone be confident that the unforeseeable ones will?</p><p>Developing truly autonomous ride-hailing means accepting this accountability vacuum as a structural cost, not a transitional inconvenience. The SOS button failing along with the driving system is a design flaw, and it&#8217;s fixable. The lack of a physical emergency stop button is an engineering gap that Zoox has already solved voluntarily. The customer service collapse is an operational failure. All fixable. But the absence of a human in the vehicle is not a flaw. It is the product. And with it comes an accountability gap that no amount of engineering can close.</p><p></p><p><strong>Private vehicles don&#8217;t have this problem. Robotaxis do. Here&#8217;s why.</strong></p><p>If you own an L4 or L5 private vehicle, the accountability structure is comparatively straightforward. You purchased the car. You chose to activate the autonomous driving function. If something goes wrong, the question is between you and the manufacturer, a product liability framework with established legal precedent. You accepted the technology&#8217;s risk when you bought the vehicle.</p><p>There is also no structural dependency on remote support. The SAE J3016 standard, which defines the L0 through L5 levels of driving automation adopted worldwide, does not include remote support as a component of any level. The standard defines who performs the driving task, who monitors the environment, and who serves as the fallback. It is entirely about the relationship between the human in the vehicle and the automated system. In fact, L4 and L5 vehicles are required by the standard to be capable of reaching a minimal risk condition, a safe stop, independently, without any remote assistance. A private autonomous vehicle that loses network connectivity must still be able to protect its occupants on its own. 
The accountability framework is self-contained: you, the car, and the manufacturer.</p><p>Robotaxis cannot operate within this clean framework. There is no human driver in the vehicle, so when the system fails, someone must be reachable from outside the vehicle to take accountability for the passenger&#8217;s safety. Remote support is not optional for robotaxis. It is the only human link in the chain.</p><p>But here is the structural problem: the moment remote support becomes essential, it introduces an accountability layer that the SAE standard never defined and that no regulatory framework has fully addressed. Who operates the remote support? If the remote operator makes a wrong judgment call, who is accountable? If network failure makes the remote operator unreachable, who is accountable then? If remote support is overwhelmed by simultaneous incidents across a fleet, who is accountable for the passengers left waiting?</p><p>Private vehicles live inside a clean, self-contained accountability framework. Robotaxis are forced to depend on a layer that sits outside any standardized framework, that is conditional on infrastructure beyond the vehicle&#8217;s control, and that has already been shown to fail at scale.</p><p>In a robotaxi, the passenger owns nothing, chose nothing about the underlying system, cannot take over driving, and has no direct relationship with the technology provider. They hailed a ride. Their reasonable expectation is identical to hailing any other taxi: I get in, I arrive safely, the driving is someone else&#8217;s job. The accountability vacuum lives precisely in the use case that is being commercialized most aggressively.</p><p></p><p><strong>The stakes are uniquely high</strong></p><p>Because autonomous ride-hailing directly involves human lives, there is no such thing as a minor incident. Cruise had one pedestrian incident in San Francisco in October 2023. Its California permit was revoked. Operations were suspended nationwide. 
The CEO resigned. GM ultimately shut down the entire robotaxi business after investing over $10 billion. That incident triggered a regulatory, governance, and credibility crisis from which Cruise never recovered.</p><p>The risk profile is also categorically different from individual vehicles. When a private car breaks down, one car stops. When a robotaxi fleet suffers a system failure, roughly a hundred vehicles can stop simultaneously across an entire city. The economic cost radiates outward from every stopped vehicle: delayed freight, missed flights, diverted ambulances, gridlocked commerce. <strong>And if a system failure can happen accidentally, it can happen deliberately.</strong> A single network intrusion, timed to rush hour, could reproduce the same scenario across any city where a robotaxi fleet operates.</p><p>The benefits of autonomous driving in military and emergency applications are clear, and the accountability challenges are minimal. Military vehicles carry trained personnel who accepted operational risk. Emergency and medical transport often carries no passengers at all. Civilian ride-hailing is the application that places untrained, uninformed, ordinary civilians into an accountability vacuum they never agreed to enter.</p><p></p><p><strong>A valuable lesson, at the right time</strong></p><p>In fairness, this incident may ultimately prove valuable for the entire industry. One hundred passenger vehicles stopping is better than one hundred vehicles losing control. Small robotaxis stopping is better than a fleet of autonomous freight trucks stopping on a highway. And a system-wide failure occurring now, during the early stages of deployment with no fatalities, is far better than the same failure occurring after thousands of vehicles are operating across dozens of cities.</p><p>The Wuhan shutdown is an opportunity to confront the question that matters most. 
Not just the technical questions about network redundancy and emergency system design, which are important but solvable. The deeper question: how should accountability be structured between the service provider and the passenger when there is no human driver in the vehicle? This is a question that engineering cannot answer. It requires a deliberate decision by regulators, operators, and society.</p><p></p><p><strong>So what does society gain?</strong></p><p>After everything discussed above, it is fair to ask: what does society gain from autonomous ride-hailing that justifies these costs?</p><p>The safety potential is real. Waymo&#8217;s published data, collected in real-world mixed traffic, shows about a 92% reduction in serious-injury crashes across over 170 million driverless miles. But the industry&#8217;s larger safety promise, virtually eliminating the 1.19 million annual global road deaths (WHO), rests on a condition that does not currently exist: a road environment with high autonomous vehicle penetration. Some simulation-based research suggests that optimal safety may not fully materialize until penetration rates are much higher than today, with one study finding an optimum around 70% in its modeled environment. In the current reality, autonomous vehicles and human drivers share the same road operating on fundamentally different logics, and many collisions that still involve robotaxis are initiated by human drivers, especially in rear-end scenarios. The full safety benefit is real but conditional, and the conditions may take decades to arrive.</p><p>Robotaxi fleets also generate real-world driving data that feeds back into algorithm improvement and adjacent applications.</p><p>But the most immediate, tangible economic benefit of removing the human driver is straightforward: it eliminates labor costs. Driver compensation accounts for about 70% of traditional ride-hailing operating costs. This is the primary commercial incentive driving the industry forward. 
A secondary argument is that autonomous ride-hailing releases drivers into other industries where labor is more urgently needed.</p><p>The accountability vacuum is the cost. It is borne by the passenger. The labor savings are the benefit. They are captured by the company. The question is whether the social justification for this transfer holds up.</p><p>The world currently has 186 million unemployed people (ILO, Employment and Social Trends 2026). The global &#8220;jobs gap,&#8221; people who want paid work but cannot access it, stands at 408 million (ILO, Employment and Social Trends 2026).</p><p>Are other industries short of workers?</p><p>Do we lack drivers?</p><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.odbehindthecurtain.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.odbehindthecurtain.com/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[I Thought the Metaverse Was Zuckerberg’s Worst Call. 
Then He Made AI Usage a KPI.]]></title><description><![CDATA[According to reporting by the Wall Street Journal, Meta now partly evaluates employee performance based on AI usage.]]></description><link>https://www.odbehindthecurtain.com/p/i-thought-the-metaverse-was-zuckerbergs</link><guid isPermaLink="false">https://www.odbehindthecurtain.com/p/i-thought-the-metaverse-was-zuckerbergs</guid><dc:creator><![CDATA[Hector Sun]]></dc:creator><pubDate>Wed, 01 Apr 2026 14:34:57 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!66OW!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F935c325c-3eaf-4413-9a54-2c91583431d9_873x337.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>According to reporting by the Wall Street Journal, Meta now partly evaluates employee performance based on AI usage. Internal targets obtained by media outlets paint an even more specific picture: some engineering teams are aiming for 80% adoption of general AI tools among mid-to-senior engineers, 55% of code changes assisted by AI agents, and targets for 65% of engineers to have over 75% of their committed code AI-assisted.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!66OW!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F935c325c-3eaf-4413-9a54-2c91583431d9_873x337.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!66OW!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F935c325c-3eaf-4413-9a54-2c91583431d9_873x337.jpeg 424w, 
https://substackcdn.com/image/fetch/$s_!66OW!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F935c325c-3eaf-4413-9a54-2c91583431d9_873x337.jpeg 848w, https://substackcdn.com/image/fetch/$s_!66OW!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F935c325c-3eaf-4413-9a54-2c91583431d9_873x337.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!66OW!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F935c325c-3eaf-4413-9a54-2c91583431d9_873x337.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!66OW!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F935c325c-3eaf-4413-9a54-2c91583431d9_873x337.jpeg" width="873" height="337" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/935c325c-3eaf-4413-9a54-2c91583431d9_873x337.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:337,&quot;width&quot;:873,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:67030,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.odbehindthecurtain.com/i/192854011?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F935c325c-3eaf-4413-9a54-2c91583431d9_873x337.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!66OW!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F935c325c-3eaf-4413-9a54-2c91583431d9_873x337.jpeg 424w, https://substackcdn.com/image/fetch/$s_!66OW!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F935c325c-3eaf-4413-9a54-2c91583431d9_873x337.jpeg 848w, https://substackcdn.com/image/fetch/$s_!66OW!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F935c325c-3eaf-4413-9a54-2c91583431d9_873x337.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!66OW!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F935c325c-3eaf-4413-9a54-2c91583431d9_873x337.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Meta&#8217;s public messaging has been more measured. A spokesperson emphasized that the focus is on the impact AI creates, not just how often it&#8217;s used. But when your performance review is tied to quantified AI adoption targets, the distinction between &#8220;encouraged&#8221; and &#8220;required&#8221; gets very thin in practice.</p><p>I&#8217;ve spent 17 years designing performance and accountability systems for organizations. When I read this, my first reaction wasn&#8217;t outrage. It was recognition. I&#8217;ve seen this pattern before. Not the AI part. The organizational behavior underneath it.</p><p></p><p><strong>I understand the desperation</strong></p><p>Let&#8217;s be fair about where Meta is coming from.</p><p>The company has accumulated roughly $80 billion in operating losses through its Reality Labs division since 2020, chasing a Metaverse that attracted almost no users and became a global punchline. Layoffs have hit the company in waves: approximately 21,000 jobs cut across 2022 and 2023, 600 from the AI unit in October 2025, around 1,000 from Reality Labs in January 2026, and another 700 in March 2026. And despite aggressive moves in the AI space, Meta has yet to produce a consumer-facing AI product that competes head-to-head with ChatGPT or Claude. The open-source Llama models have earned respect in the developer community, but in the race for mass-market adoption, Meta is not yet in the leading pack.</p><p>Capital markets are watching. After the Metaverse debacle, investors need to see that Meta has a credible next act. 
<strong>Embedding AI into performance reviews sends a clear signal: this company is all in on AI. Every employee, every workflow, every output.</strong></p><p>I get it. The signal needed to be sent. But desperate people make desperate moves, and desperate moves are rarely attractive.</p><p></p><p><strong>The KPI itself is a textbook mistake</strong></p><p>Evaluating employees on whether they use a specific tool is process-oriented management. It measures compliance, not performance. It tells you that someone opened an application. It tells you nothing about whether the work got better.</p><p>Every manager has faced a version of this choice. Your team is behind schedule. You can say &#8220;everybody stay late tonight.&#8221; Or you can say &#8220;I need this on my desk by 9am tomorrow.&#8221; The first controls the process. The second sets the outcome. They might lead to the same result, but they are fundamentally different management approaches. The first tells people how to work. The second tells people what to deliver, and trusts them to figure out the how.</p><p>&#8220;Use AI&#8221; is the equivalent of &#8220;stay late.&#8221; <strong>It prescribes a method instead of raising a standard.</strong></p><p>If you want a presentation built in one hour instead of four, set that as the expectation. If someone hits it with AI, great. If someone hits it without AI, equally great. You got what you asked for. But under Meta&#8217;s system, the person who used AI and took four hours scores higher on their review than the person who delivered in one hour without it. You&#8217;re not rewarding performance. You&#8217;re rewarding obedience.</p><p>And it gets worse. When you incentivize tool usage, employees start forcing AI into workflows where it adds no value, or actively degrades quality, just to check a box. The organization doesn&#8217;t get more productive. It gets more performative. People optimize for the metric, not for the outcome. 
This is Goodhart&#8217;s Law playing out in real time: when a measure becomes a target, it ceases to be a good measure.</p><p></p><p><strong>Even if all-in is the right strategy, this is the wrong execution</strong></p><p>Let&#8217;s set aside whether the KPI is smart or stupid. Even if Meta&#8217;s leadership genuinely believes the entire company needs to adopt AI, the way they&#8217;re going about it is a change management failure.</p><p>You don&#8217;t open a company-wide transformation with the most drastic move available. A sweeping KPI overhaul that affects nearly 79,000 employees, rolled out before you&#8217;ve demonstrated a flagship product that justifies the shift, carries enormous organizational risk. You&#8217;re asking the entire company to reorganize how they work around a technology whose internal value proposition hasn&#8217;t been fully proven yet.</p><p><strong>Effective change management sequences the pressure. You start with pilots. You let early adopters demonstrate value. You build internal case studies. You create pull, not push.</strong> What Meta did is all push, no pull. &#8220;Use this tool or your review suffers&#8221; is coercion, not transformation.</p><p>The irony is hard to miss. If you wanted to signal all-in commitment to a new technology direction, there are gentler ways to do it. You could rebrand the entire company to show your conviction. Wait. They already tried that with the Metaverse.</p><p></p><p><strong>The broader pattern is more alarming than the KPI</strong></p><p>This decision doesn&#8217;t exist in isolation. Over the past year, Meta has made a series of moves that, viewed together, should concern anyone who thinks about organizational health for a living.</p><p>In June 2025, Meta appointed 28-year-old Alexandr Wang as its first-ever Chief AI Officer, leading the newly formed Superintelligence Labs. Wang is a genuinely accomplished founder who built Scale AI into a $29 billion company. 
But the appointment was not without friction. Yann LeCun, widely known as one of the &#8220;godfathers of AI&#8221; and a long-time leader of Meta&#8217;s AI research, left the company in November 2025. In a Financial Times interview, he called Wang &#8220;young&#8221; and &#8220;inexperienced,&#8221; noting a lack of background in how research is actually practiced. Within months of Superintelligence Labs launching, at least eight employees departed, including a twelve-year veteran who joined Anthropic and researchers recruited from OpenAI who returned after less than a month.</p><p>The organization has been flattened aggressively, with reports of manager-to-engineer ratios reaching 1:50 in some AI teams. Internal tensions have emerged as newly hired AI talent reportedly commands compensation packages that dwarf those of existing staff, with some long-tenured employees threatening to leave.</p><p>None of these moves is inherently wrong in isolation. Promoting younger talent injects energy. Flattening hierarchies can accelerate decisions. Paying market rates for scarce AI talent is rational. But doing all of them simultaneously, while also overhauling performance evaluation criteria and continuing to execute wave after wave of layoffs, dramatically shrinks the margin for error.</p><p>When you restructure your leadership, flatten your organization, revamp your evaluation system, and cut headcount all at the same time, you are making one very specific bet: that the new configuration will deliver transformative results fast enough to justify the upheaval. In Meta&#8217;s case, that means producing a flagship AI product that wins the mass market. Not an internal tool. Not a research paper. A product that ordinary people choose to use every day.</p><p>I&#8217;ve written before that every serious player in the international AI arena needs a competitive LLM. That logic applies at the enterprise level. 
If you&#8217;re going to restructure your entire organization around AI, you need something to show for it. As of today, Meta doesn&#8217;t have that. And without it, all this organizational disruption is just disruption.</p><p></p><p><strong>The question that concerns me most</strong></p><p>Here&#8217;s where my OD instincts kick in.</p><p>I don&#8217;t believe that nobody inside a 79,000-person organization realized that process-oriented KPIs are a bad idea. This is not an obscure insight. It&#8217;s covered in any introductory management course. Any competent HR professional would flag it. Any experienced people manager would feel the wrongness intuitively.</p><p>So why did it happen?</p><p>There are only a few possible explanations. People raised objections and were overruled. Or nobody dared to speak up. Or the decision was made unilaterally, without meaningful consultation.</p><p>Any of these is far more concerning than the KPI itself.</p><p>If this were a client engagement, a single bad KPI wouldn&#8217;t keep me up at night. Bad metrics get fixed. But the fact that a decision this obviously flawed passed through an organization of that size without being stopped? That&#8217;s the signal that makes me want to start examining the power structure. Who holds decision rights? Who has veto power? Is there a functioning feedback loop between frontline managers and executive leadership? Or has the organization reached a point where the CEO&#8217;s conviction overrides every institutional check?</p><p>A bad KPI can be fixed in a week. A broken power structure takes years to repair, if it gets repaired at all.</p><p></p><p><strong>The cow is alive. But the conditions matter.</strong></p><p>Meta is a cash cow. That&#8217;s not in dispute. The advertising business generated over $200 billion in revenue in 2025. The company has resources that most organizations can only dream of.</p><p>But cash reserves don&#8217;t make an organization immune to structural decay. 
They just extend the timeline. You can make bad decisions for longer before the consequences become visible. The Metaverse proved this: it took years and $80 billion before the market forced a course correction.</p><p>There&#8217;s an ancient Chinese military proverb from the Zuo Zhuan, written over 2,500 years ago: &#8220;The first drumbeat brings full morale. The second, it fades. The third, it&#8217;s gone.&#8221; The Metaverse was Meta&#8217;s first drumbeat. It was bold, it was loud, and it failed. The AI pivot is the second drumbeat. The organization is already fatigued, the talent pool is churning, and the market is skeptical. If this one doesn&#8217;t land, there won&#8217;t be enough morale left for a third.</p><p>Each cycle erodes something that money can&#8217;t easily rebuild: the trust of talented people who have to decide whether this is a company worth committing their careers to.</p><p>A cash cow is still a living organism. If the conditions aren&#8217;t right, the cow dies.</p><p></p><p><strong>What I&#8217;d tell any CEO considering the same move</strong></p><p>Raise your output standards. Shorten your deadlines. Demand higher quality. Then let your people figure out how to get there.</p><p>Some will use AI. Some won&#8217;t. Some will use it for certain tasks and skip it for others. That&#8217;s not a problem. That&#8217;s professional judgment. And professional judgment is exactly what you should be measuring, rewarding, and protecting.</p><p>The moment you start evaluating people on which tools they used instead of what they delivered, you&#8217;ve stopped managing performance and started managing compliance. AI doesn&#8217;t change the fundamental principles of good management. It just gives people one more way to get the job done.</p><p>Measure the outcome. 
That&#8217;s it.</p><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.odbehindthecurtain.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.odbehindthecurtain.com/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[Why This OD Consultant Never Prays]]></title><description><![CDATA[After seventeen years as an organizational design consultant, I have a professional affliction: I see accountability structures everywhere.]]></description><link>https://www.odbehindthecurtain.com/p/why-this-od-consultant-never-prays</link><guid isPermaLink="false">https://www.odbehindthecurtain.com/p/why-this-od-consultant-never-prays</guid><dc:creator><![CDATA[Hector Sun]]></dc:creator><pubDate>Tue, 31 Mar 2026 10:26:29 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/e3bdb5d9-9693-4b99-9d60-e99d935c5262_3024x3764.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>After seventeen years as an organizational design consultant, I have a professional affliction: I see accountability structures everywhere.</p><p>In meetings, I untangle accountability. During layoffs, I untangle accountability. Eventually, it started bleeding into my personal life. At a restaurant, I&#8217;d look at the menu and wonder whether the pricing authority sits with the head chef or the owner.</p><p>So when a friend asked me, &#8220;How come you never go to a temple to pray?&#8221;, my answer was simple:</p><p>&#8220;The accountability is unclear.&#8221;</p><p>Let me unpack that. Those words carry at least a few layers of meaning.</p><p></p><p><strong>When it works, who gets the credit? When it doesn&#8217;t, who takes the blame?</strong></p><p>You go to a temple and make a wish. There are only two possible outcomes.</p><p>The wish comes true. 
Congratulations, divine blessing. But did you really do nothing during that time? Doesn&#8217;t your own effort count? If you got that year-end bonus, was it because you worked overtime every day, or because you visited Lingyin Temple back in January?</p><p>The wish didn&#8217;t come true. Then clearly, your heart wasn&#8217;t sincere enough.</p><p>The phrase &#8220;sincerity brings results&#8221; is the underlying architecture of this entire system. Its brilliance lies in one elegant design: all success is attributed to the system; all failure is attributed to the user.</p><p>An organizational design professional encountering this structure will feel physically uncomfortable. My daily job is helping companies sort out exactly this question: who gets credit for success, and who bears accountability for failure. If you told me your company runs a performance system where good results are credited to leadership&#8217;s wisdom and bad results are blamed on employee execution, I would make you tear the whole thing up and start over.</p><p>And yet this same logic has been running in temples for thousands of years, with remarkably high customer satisfaction.</p><p></p><p><strong>Some prayers are perfectly measurable. Most are not.</strong></p><p>To be fair, not all prayers are hopeless. Some are actually highly compliant with the SMART framework from management theory.</p><p>In China, before heading out to sea, fishermen pray to Mazu: protect us and bring us back safely with a full catch. 
Run this prayer through the SMART criteria one by one, and it&#8217;s nearly perfect.</p><ul><li><p><strong>S</strong>pecific: come back alive, boat full of fish.</p></li><li><p><strong>M</strong>easurable: did the person return, how much fish, crystal clear.</p></li><li><p><strong>A</strong>chievable: a reasonable expectation, nobody&#8217;s asking to haul in a boatload of bluefin tuna.</p></li><li><p><strong>R</strong>elevant: fishing is their livelihood, so the catch is directly relevant.</p></li><li><p><strong>T</strong>ime-bound: this one voyage. And they pray to Mazu alone, so attribution is clean.</p></li></ul><p>Praying for pregnancy works too. Either you&#8217;re pregnant or you&#8217;re not. Binary outcome, relatively clear time window.</p><p>These prayers have high &#8220;success rates&#8221; precisely because they are assessable, especially on the Achievable dimension.</p><p>But the vast majority of prayers in everyday life are nothing like this. You solemnly pray: &#8220;Let my stock portfolio double this year.&#8221; The question immediately arises: is doubling achievable? What about a 50% gain? 10%? If you prayed for a 1% return, you wouldn&#8217;t need divine help. The market will probably get you there on its own. But nobody makes a special trip to a temple to pray for 1% returns. The things truly worth praying for are the hard things. And where exactly is the line between &#8220;hard&#8221; and &#8220;unrealistic&#8221;? Who defines it? The divine won&#8217;t tell you, and you can&#8217;t tell yourself.</p><p>Pray too conservatively, and you don&#8217;t need the blessing. Pray too ambitiously, and when it doesn&#8217;t happen, you can&#8217;t say the prayer failed. The A in SMART (Achievable) is simply undefinable in the context of prayer.</p><p></p><p><strong>Even the divine would need to conduct an impact assessment</strong></p><p>Religion, as a sacred institution, is fundamentally oriented toward goodness. 
So it naturally needs to screen every prayer for moral standing.</p><p>Some prayers obviously should not be granted. If someone walks into a temple and prays &#8220;please let my Ponzi scheme survive another year,&#8221; I believe any deity would reject that filing outright. A request this clearly in violation of basic moral standards cannot pass the initial screening.</p><p>But a huge number of prayers are not black and white.</p><p>&#8220;Let me crush my competitor this year.&#8221; Good or bad? You might be winning the market with a better product, entirely legitimate. But your competitor might have hundreds of families depending on those paychecks. You win, they lose their jobs.</p><p>&#8220;Let the construction project I&#8217;m leading move forward smoothly.&#8221; But this project might involve building a factory in a residential neighborhood, with fierce opposition from local residents. Your &#8220;smooth progress&#8221; is their &#8220;disaster.&#8221;</p><p>None of these prayers are as obviously wrong as the Ponzi scheme. The person praying genuinely believes they are doing the right thing. But the consequences of each prayer involve invisible chains of interests, and whether the outcome is good or bad simply cannot be determined at the moment of praying. Even an omniscient, omnipotent deity would need to conduct a complex, systemic evaluation before deciding whether to grant the wish. This is no longer a moral judgment. This is an impact assessment.</p><p></p><p><strong>Two temples, one wish. Who gets the credit?</strong></p><p>Some people say: after praying at one temple, you shouldn&#8217;t visit another temple for a while.</p><p>Why? Because if you visit two temples in quick succession and make the same wish, and the wish comes true, which temple gets the credit? 
Don&#8217;t forget, after a wish is fulfilled, there&#8217;s still the crucial step of &#8220;returning to give thanks.&#8221; Which temple do you go back to?</p><p>Attribution cannot be split.</p><p>So people say: just stick to one temple for a period of time. Fair enough. But how long is &#8220;a period of time&#8221;? Three months? Six months? A year? The people offering these definitions obviously can&#8217;t cite any religious policy document or standard operating procedure as evidence.</p><p>If you wanted to compare which temple is more effective, you couldn&#8217;t run a controlled experiment either. Visit two temples at the same time for the same wish, and if it works, you can&#8217;t split the credit. Visit at different times for the same wish, or different times for different wishes, and you have no basis for comparison at all.</p><p>From any scientific perspective, the question &#8220;which temple is more effective&#8221; is fundamentally unanswerable.</p><p></p><p><strong>So I don&#8217;t go</strong></p><p>It&#8217;s not just about burning incense and praying at temples. Christianity, Islam, Judaism, Hinduism, and other religions all follow the same pattern. What believers pray for comes down to the same few things: peace, health, wealth, guidance. We are all human. Regardless of skin color, language, or faith, our needs are remarkably similar. It&#8217;s just that the vast majority of everyday prayers are inherently vague and unassessable. And precisely because they are unassessable, belief systems can never be disproven.</p><p>My position remains the same. 
If your company&#8217;s performance system looked like this: success attributed to leadership&#8217;s vision, failure blamed on employees&#8217; poor execution, evaluation criteria vague and shifting, achievability of targets left entirely to subjective judgment, reporting lines tangled with no clear ownership of results, I would make you redesign the whole thing.</p><p>The temple&#8217;s system, I can&#8217;t change, and I don&#8217;t want to. But at least I know why I don&#8217;t use it.</p><p>The accountability is unclear.</p><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.odbehindthecurtain.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.odbehindthecurtain.com/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[AI Didn’t Kill the Best Concierges. It Killed the Path to Becoming One.]]></title><description><![CDATA[You check into a luxury hotel in New York City and walk up to the concierge desk.]]></description><link>https://www.odbehindthecurtain.com/p/ai-didnt-kill-the-best-concierges</link><guid isPermaLink="false">https://www.odbehindthecurtain.com/p/ai-didnt-kill-the-best-concierges</guid><dc:creator><![CDATA[Hector Sun]]></dc:creator><pubDate>Thu, 26 Mar 2026 16:22:17 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!SB10!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5753b2e-aaf6-4ea5-ac28-426d48131704_3764x1447.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>You check into a luxury hotel in New York City and walk up to the concierge desk. &#8220;Can you get me a table for two at Masa tomorrow night?&#8221;</p><p>You know this is a nearly impossible ask. Masa is the kind of place you can&#8217;t get into even with months&#8217; notice. 
The concierge looks at you and says, &#8220;Let me see what I can do,&#8221; then picks up the phone.</p><p>Fifteen minutes later, he&#8217;s back. &#8220;Done. Tomorrow, 8:30pm, two guests.&#8221;</p><p>This isn&#8217;t magic. This is a relationship network built over a decade. The restaurant&#8217;s executive chef knows him because he&#8217;s been sending guests there every week for ten years. The ma&#238;tre d&#8217; owes him a favor because last year, when the ma&#238;tre d&#8217;s family visited the city, he arranged a private tour. He can get you that &#8220;impossible reservation&#8221; not because he&#8217;s better at using a phone than you are, but because he holds something you don&#8217;t: a web woven through ten years of daily work, with every thread carrying trust.</p><p>But here&#8217;s the question: does that daily work still exist?</p><p></p><p><strong>A pyramid being hollowed from the bottom</strong></p><p>Think of a concierge&#8217;s capabilities as a four-layer pyramid.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!SB10!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5753b2e-aaf6-4ea5-ac28-426d48131704_3764x1447.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!SB10!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5753b2e-aaf6-4ea5-ac28-426d48131704_3764x1447.png 424w, https://substackcdn.com/image/fetch/$s_!SB10!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5753b2e-aaf6-4ea5-ac28-426d48131704_3764x1447.png 848w, 
https://substackcdn.com/image/fetch/$s_!SB10!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5753b2e-aaf6-4ea5-ac28-426d48131704_3764x1447.png 1272w, https://substackcdn.com/image/fetch/$s_!SB10!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5753b2e-aaf6-4ea5-ac28-426d48131704_3764x1447.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!SB10!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5753b2e-aaf6-4ea5-ac28-426d48131704_3764x1447.png" width="1456" height="560" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c5753b2e-aaf6-4ea5-ac28-426d48131704_3764x1447.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:560,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:118985,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.odbehindthecurtain.com/i/192222862?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5753b2e-aaf6-4ea5-ac28-426d48131704_3764x1447.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!SB10!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5753b2e-aaf6-4ea5-ac28-426d48131704_3764x1447.png 424w, 
https://substackcdn.com/image/fetch/$s_!SB10!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5753b2e-aaf6-4ea5-ac28-426d48131704_3764x1447.png 848w, https://substackcdn.com/image/fetch/$s_!SB10!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5753b2e-aaf6-4ea5-ac28-426d48131704_3764x1447.png 1272w, https://substackcdn.com/image/fetch/$s_!SB10!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5753b2e-aaf6-4ea5-ac28-426d48131704_3764x1447.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p></p><p><strong>The base: information.</strong></p><p>Knowing what&#8217;s in the city, how to get there, when things open, which neighborhoods are worth exploring. When guests arrive in an unfamiliar city, this is the first thing they need. This was the concierge&#8217;s most fundamental daily function.</p><p><strong>Layer two: basic execution.</strong></p><p>The guest tells you what they want, you make it happen. &#8220;Book me Le Cinq tonight.&#8221; &#8220;Get me two tickets to Rigoletto at the Royal Opera House tomorrow.&#8221; &#8220;Arrange a car to the airport.&#8221; The guest has already made the decision. You carry it out.</p><p><strong>Layer three: personalized judgment and holistic planning.</strong></p><p>The guest hasn&#8217;t figured out what they want. You figure it out for them. &#8220;I have two free days. Put together an itinerary for me.&#8221; Now you&#8217;re weighing the guest&#8217;s fitness level, interests, and pace preferences. You&#8217;re deciding between a national park outside the city or an art museum downtown. You&#8217;re choosing which Michelin restaurant fits the evening and what style of bar works afterward. You&#8217;re factoring in weather, distances, crowd levels, and timing. You produce a plan the guest couldn&#8217;t have come up with on their own, but that feels &#8220;just right&#8221; when they experience it. This used to be the concierge&#8217;s core professional moat.</p><p><strong>The top: relationship capital and &#8220;making the impossible happen.&#8221;</strong></p><p>Getting a guest into a restaurant no one can get into. Securing tickets to a show that sold out months ago. Arranging a private experience that doesn&#8217;t exist on any website. This isn&#8217;t about information, execution, or judgment. 
It&#8217;s about a decade of accumulated favors, trust, and reputation.</p><p>This is a classic capability development path. Without doing large volumes of basic execution, there&#8217;s no opportunity to develop judgment. Without years of sending guests to restaurants and buying tickets, building up credibility and goodwill along the way, there&#8217;s no way to weave that golden network. The pyramid held together because the base supported everything above it.</p><p>This used to be a matter of course.</p><p>Then technology started pulling out the layers from below.</p><p></p><p><strong>Three waves, each removing one layer</strong></p><p><strong>The first wave was the internet (roughly 1995-2007).</strong></p><p>Before this, guests arriving in an unfamiliar city had almost no other source of information. The concierge was their only efficient interface with the city. After the internet, guests could research attractions, restaurant rankings, and transportation routes before they even packed. By the time they reached the hotel, most basic questions no longer needed asking. The base layer was gone: the concierge&#8217;s information monopoly was broken.</p><p><strong>The second wave was the smartphone and apps (2007-2015).</strong></p><p>Google Maps replaced hand-drawn directions on paper maps. OpenTable replaced phone calls to book restaurants. StubHub replaced ticket brokers. Uber replaced arranging cars for guests. Hotels themselves launched digital platforms, bundling dining, transportation, and ticketing into mobile apps. Guests could not only find information on their own but act on it themselves. As one industry insider put it bluntly: &#8220;I&#8217;ve got a portable concierge in my pocket. And I don&#8217;t have to tip.&#8221; The second layer was gone: basic execution no longer required the concierge.</p><p><strong>The third wave is AI (2015-present).</strong></p><p>This wave attacks from both sides. 
From the outside, guests open ChatGPT and type: &#8220;I have two free days in New York, I like nature and good food, plan something for me.&#8221; The output is already quite good. From the inside, hotels have been deploying AI chatbots to standardize service workflows, turning personalized recommendations and itinerary planning into replicable products. AI chatbots now handle 80% of routine guest inquiries at many properties. One hotel industry report showed a 78% drop in concierge call volume during peak periods at a resort. The third layer is being stripped away: personalized judgment and holistic planning are being squeezed from both the organizational inside and the guest-facing outside simultaneously.</p><p><strong>After three waves (the third still underway), only the top of the pyramid remains: relationship capital, the ability to get things done that no one else can.</strong> This layer, AI cannot yet reach.</p><p></p><p><strong>A castle in the air</strong></p><p>This isn&#8217;t a phenomenon unique to the hotel industry. Across nearly every service industry, both employers and individuals are racing to adopt AI. Customer service chatbots are standard. Standardized workflows are being automated at every level. But the work that sits at the very top of any profession, the work that depends on human relationships, contextual judgment, and the ability to navigate ambiguity through trust, remains extremely difficult to replicate at scale. A chatbot can answer a guest&#8217;s question, but it can&#8217;t make a phone call that trades on a personal favor. An LLM can draft a consulting framework, but it can&#8217;t originate one that reshapes how a client thinks about their business. <strong>The more thoroughly the lower layers get automated, the more visibly irreplaceable the top becomes.</strong></p><p>But top concierges are more valuable not simply because they&#8217;re scarcer. It&#8217;s because the pipeline that produced them is closing. 
Rising prices may not fully reflect improving capability. More likely, they reflect the market&#8217;s expectation that future supply is about to collapse.</p><p>The relationship networks those top concierges hold were built over ten to fifteen years of working through the three layers below. Every restaurant reservation was a deposit into a relationship. Every ticket purchased was another connection formed. Every itinerary planned was another layer of judgment deposited. The &#8220;impossible tasks&#8221; they accomplish today are the compound interest on 3,650 &#8220;ordinary days&#8221; before them.</p><p>Now a new hire joins the concierge desk, and the situation is stark. The information work at the base has been absorbed by the internet. The execution work at layer two has been absorbed by smartphone apps. The judgment and planning work at layer three is being squeezed by AI from both ends. There is almost nothing left that would allow them to organically accumulate the relationships and judgment needed to reach the top.</p><p>And it&#8217;s not just the work itself that disappeared. The leverage disappeared with it. A concierge used to be valuable to a restaurant because they represented a steady stream of guests from the hotel. That stream now flows through OpenTable and Google. A new hire sitting quietly at the concierge desk has far less to offer partners than their predecessor did fifteen years ago. Not because they&#8217;re less capable or less motivated, but because the structural currency they could trade on has been steadily absorbed by technology.</p><p>This is what a castle in the air looks like: the top still stands, but the structure beneath it has been hollowed out, one layer at a time.</p><p></p><p><strong>The path is broken. But who benefits?</strong></p><p>There&#8217;s a layer of incentive structure here that most people haven&#8217;t noticed.</p><p>AI is leverage for senior concierges. 
They already have judgment, a relationship network, and trust deposits at every high-end restaurant and venue in the city. Plug in AI, and they can respond to guests faster, serve more requests simultaneously, cover a wider range. Their output per unit of time is being amplified.</p><p>AI is a crutch for junior concierges. It can help them generate a decent-looking itinerary faster, but it can&#8217;t earn them the trust of a restaurant&#8217;s executive chef, can&#8217;t build them a decade of personal connections, can&#8217;t teach them what it means in a real situation when &#8220;this guest won&#8217;t like that kind of arrangement.&#8221;</p><p><strong>So the result isn&#8217;t everyone getting stronger together. It&#8217;s this: the veterans use AI to thicken their moat, while the newcomers lose the very work that used to build their capabilities.</strong></p><p>From an individual perspective, senior employees embracing AI is an entirely rational decision. It means widening their advantage over successors at the same time that the path for successors is narrowing. But what does this mean for the organization?</p><p><strong>Hotels think they own this capability. They&#8217;re just renting it.</strong></p><p>For many luxury hotels, the concierge has never been just an add-on service. It&#8217;s a core part of the hotel&#8217;s competitive advantage. Guests are willing to pay more for a hotel not just because the rooms are bigger, the location is better, or the decor is nicer, but because they believe: at this hotel, things that can&#8217;t be done elsewhere can be done here.</p><p>The problem is that this capability often doesn&#8217;t truly belong to the hotel. It belongs to a specific senior concierge as an individual. Who he knows, who takes his calls, who holds a table for him, who treats him as someone worth doing favors for. None of this lives in the brand manual, in standard operating procedures, or in any app. 
The hotel sells this capability as its own differentiator, but if it hasn&#8217;t been replicated, passed down, or institutionalized, the hotel doesn&#8217;t actually own it. It&#8217;s merely renting one person&#8217;s relationship network, judgment, and reputation.</p><p>The moment that person retires or leaves, what disappears isn&#8217;t just an employee. It&#8217;s a piece of the hotel&#8217;s competitive advantage that genuinely existed.</p><p>This is the problem AI creates that runs deeper than &#8220;the career path is broken.&#8221; <strong>AI lets senior employees amplify their personal leverage through technology while simultaneously dismantling the task structures that new hires need to develop capability.</strong> The result isn&#8217;t a stronger organization. It&#8217;s an organization whose core capabilities are increasingly bound to a handful of individuals. Skills and capabilities should be preserved within the organization, not tied to specific people. But if the way AI gets used only makes veterans stronger and newcomers hollower, the organization is effectively re-privatizing capabilities that should be organizational assets, handing them to a few incumbents.</p><p>For the senior employees, this is a smart deal. For the organization, it&#8217;s dangerous.</p><p></p><p><strong>This isn&#8217;t just a hotel story</strong></p><p>If you swap &#8220;concierge&#8221; for &#8220;consultant, lawyer, analyst, engineer,&#8221; and swap &#8220;booking restaurants for guests&#8221; for &#8220;writing reports, doing research, building slide decks,&#8221; you get an almost identical structure.</p><p>These professions share the same capability formation mechanism: newcomers start with high-frequency, low-risk, repeatable tasks, develop judgment through repetition, then gradually take on high-risk, relationship-intensive work, eventually forming the scarce capabilities that only a few possess. AI is cutting exactly the first two stages of that pipeline. 
The result isn&#8217;t &#8220;we don&#8217;t need experts anymore.&#8221; It&#8217;s &#8220;the mechanism that produces experts is broken.&#8221;</p><p><strong>The concierge profession is worth paying attention to not because it matters in itself, but because it has already completed this cycle in full. </strong>The base has been stripped away layer by layer, the top has become more valuable and more unreachable at the same time, and capability has flowed from the organization to the individual. What consulting, law, finance, and engineering are experiencing with GenAI right now is the same road. They&#8217;re just in the first half.</p><p>A few days ago, I wrote about <a href="https://www.odbehindthecurtain.com/p/genai-is-cutting-the-people-it-cant?lli=1">how GenAI is cutting the people it can&#8217;t replace</a>: organizations are eliminating the roles that carry the talent development function while retaining the roles whose work is actually being compressed by AI. The concierge profession is what that road looks like at the finish line.</p><p>Top players still have a seat at the table. They may even eat better than before. But &#8220;how to become a top player&#8221; is a question this profession can no longer answer.</p><p><strong>And an industry that can only sustain its peak but cannot produce the next generation to reach it isn&#8217;t becoming more elite. It&#8217;s becoming more fragile.</strong> Because today&#8217;s top players all walked up from yesterday&#8217;s bottom, one step at a time. Cut off the path from the bottom, and the top stops renewing itself.</p><p>This is the deepest impact that the technological revolution, led by GenAI, has on a profession. 
Not replacing it, but stripping it of the ability to reproduce and elevate itself.</p><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.odbehindthecurtain.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.odbehindthecurtain.com/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[GenAI Is Cutting the People It Can’t Replace]]></title><description><![CDATA[As an org design consultant, I increasingly find myself not hiring junior consultants for projects.]]></description><link>https://www.odbehindthecurtain.com/p/genai-is-cutting-the-people-it-cant</link><guid isPermaLink="false">https://www.odbehindthecurtain.com/p/genai-is-cutting-the-people-it-cant</guid><dc:creator><![CDATA[Hector Sun]]></dc:creator><pubDate>Fri, 20 Mar 2026 07:14:16 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!mgKu!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf62f1bf-58d8-47f2-8e14-73c31920b3c4_2542x2161.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As an org design consultant, I increasingly find myself not hiring junior consultants for projects. I used to bring people on naturally: desk research, initial analysis, framework building, first drafts. Now, a few LLMs working in tandem are more than enough. Faster, cheaper, and I don&#8217;t have to spend time mentoring anyone.</p><p>I know what this means, because I walked that exact path myself. I&#8217;m bypassing the very road that made me who I am.</p><p>And behind this lies a problem far more serious than &#8220;fewer entry-level jobs.&#8221; <strong>GenAI isn&#8217;t compressing low-end labor itself. It&#8217;s compressing the work that used to carry the talent development function. 
And organizations are clearing headcount from the bottom, where it&#8217;s cheapest. Efficiency gains and capability gaps are happening simultaneously.</strong></p><p>I previously wrote a piece analyzing <a href="https://www.odbehindthecurtain.com/p/why-are-junior-employees-always-first?lli=1">why layoffs always start with junior employees</a>. Today&#8217;s article goes one layer deeper, into the structural problem beneath that pattern.</p><p><strong>&#8220;Who gets cut&#8221; and &#8220;whose work gets replaced&#8221; are not the same thing</strong></p><p>Statistically, GenAI adoption is hitting junior employees harder: more attrition, more early-career disruption. This easily creates a misleading impression: that junior work is more replaceable by GenAI and that mid-level employees are safer.</p><p>I believe the opposite is true.</p><p>What GenAI is best at absorbing isn&#8217;t ground-level legwork. It&#8217;s the work that mid-level employees do in volume: research synthesis, document consolidation, preliminary analysis, standardized summaries, drafting, review. These &#8220;presentable work products,&#8221; the kind that used to require years of solid professional training to produce reliably, are precisely why mid-level roles exist. And they&#8217;re precisely what LLMs compress best.</p><p>Yet when organizations make cuts, juniors go first. <strong>Not because their tasks are more replaceable, but because cutting them is cheaper, easier, and draws the least resistance.</strong></p><p>There&#8217;s a mismatch here that almost no one has named explicitly. 
I use two dimensions to illustrate it:</p><ul><li><p><strong>Task Replaceability</strong>: To what extent can GenAI handle the day-to-day work of this role?</p></li><li><p><strong>Headcount Reducibility</strong>: How low is the cost and friction of eliminating someone at this level?</p></li></ul><p>Overlay these two dimensions, and a counterintuitive picture emerges:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!mgKu!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf62f1bf-58d8-47f2-8e14-73c31920b3c4_2542x2161.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!mgKu!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf62f1bf-58d8-47f2-8e14-73c31920b3c4_2542x2161.png 424w, https://substackcdn.com/image/fetch/$s_!mgKu!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf62f1bf-58d8-47f2-8e14-73c31920b3c4_2542x2161.png 848w, https://substackcdn.com/image/fetch/$s_!mgKu!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf62f1bf-58d8-47f2-8e14-73c31920b3c4_2542x2161.png 1272w, https://substackcdn.com/image/fetch/$s_!mgKu!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf62f1bf-58d8-47f2-8e14-73c31920b3c4_2542x2161.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!mgKu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf62f1bf-58d8-47f2-8e14-73c31920b3c4_2542x2161.png" width="1456" height="1238" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cf62f1bf-58d8-47f2-8e14-73c31920b3c4_2542x2161.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1238,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:246222,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.odbehindthecurtain.com/i/191556477?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf62f1bf-58d8-47f2-8e14-73c31920b3c4_2542x2161.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!mgKu!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf62f1bf-58d8-47f2-8e14-73c31920b3c4_2542x2161.png 424w, https://substackcdn.com/image/fetch/$s_!mgKu!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf62f1bf-58d8-47f2-8e14-73c31920b3c4_2542x2161.png 848w, https://substackcdn.com/image/fetch/$s_!mgKu!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf62f1bf-58d8-47f2-8e14-73c31920b3c4_2542x2161.png 1272w, https://substackcdn.com/image/fetch/$s_!mgKu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf62f1bf-58d8-47f2-8e14-73c31920b3c4_2542x2161.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>Mid-level employees sit in the &#8220;high task replaceability, low headcount reducibility&#8221; quadrant. GenAI can handle most of their daily output, but the economic cost of cutting them directly is steep: larger severance packages, longer notice periods, more complex legal processes. Letting go of a mid-level employee with 12 years of tenure and a $7,000 monthly salary might cost $105,000 in severance (N+3: fifteen months&#8217; pay). That same budget can eliminate ten junior employees with 2 years of tenure and $2,000 monthly salaries (five months&#8217; pay each, $100,000 in total). When the CFO is staring at a fixed severance budget and a headcount reduction target, the math makes the decision.</p><p>Junior employees sit in the &#8220;low-to-medium task replaceability, high headcount reducibility&#8221; quadrant. 
Much of what they do (on-site coordination, unstructured communication, handling ambiguous information) isn&#8217;t that easy for GenAI to replace. But cutting them is the cheapest and least painful option. They&#8217;re not cut because they&#8217;re underperforming. They&#8217;re cut because they&#8217;re affordable.</p><p><strong>Organizations follow the vertical axis (who&#8217;s cheapest to cut), not the horizontal axis (whose work GenAI can actually do).</strong> The work GenAI actually compresses sits at the mid-level, but the cost clearing happens at the bottom of the pyramid.</p><p>This mismatch is dangerous because it creates a false narrative: it looks like organizations are using GenAI to replace &#8220;low-end work,&#8221; when in reality they&#8217;re reducing headcount in the cheapest way possible. After the juniors leave, their work floats up to mid-level. Mid-level employees are now integrating AI output while picking up basic tasks they haven&#8217;t touched in years. Those who can&#8217;t sustain it leave on their own. Voluntary resignation means zero severance. The organization cuts from the bottom and indirectly forces out the middle. One round of severance payments, two layers of people gone. In the short term, it looks like a smart deal. But in the process, something far more critical is silently disappearing.</p><p></p><p><strong>The output remains. The development doesn&#8217;t.</strong></p><p>Many entry-level roles used to carry a dual function. On one hand, they were part of the delivery process: immediate output. On the other, they were mechanisms for training, observation, and selection: talent development.</p><p>When a junior employee first joins an organization, most of what they do isn&#8217;t glamorous. But whether someone has structural thinking, problem awareness, resilience under pressure, the ability to turn ambiguous problems into clear outputs, none of that shows up in a one-hour interview. 
It&#8217;s built layer by layer through foundational work, and it&#8217;s observed the same way.</p><p>When GenAI takes over the &#8220;immediate output&#8221; function of these tasks, it simultaneously strips away their &#8220;development&#8221; function. But the organization&#8217;s dashboards only register the former: efficiency up. The latter, the quiet erosion of the talent pipeline, never appears on any report.</p><p>This is what I call false prosperity. <strong>Faster delivery, lower labor costs, better margins. But the prosperity is borrowed.</strong> Organizations aren&#8217;t just saving on low-value labor costs. They&#8217;re also eliminating the cost that used to generate the next generation of professional talent. You won&#8217;t immediately see &#8220;top talent disappearing.&#8221; You&#8217;ll see reports coming out on time, deliverables arriving even faster. But three to five years later, you&#8217;ll notice that the people capable of high-level judgment haven&#8217;t emerged at the rate they used to.</p><p>This is, in essence, a hidden liability. And like all hidden liabilities, its most dangerous feature is that everything looks fine right up until it doesn&#8217;t.</p><p></p><p><strong>&#8220;But can&#8217;t AI also help juniors learn faster?&#8221;</strong></p><p>This is the most common rebuttal, and the most comforting optimistic narrative. But it conflates two things.</p><p>AI can accelerate learning. It cannot create learning environments.</p><p>A junior employee who uses AI to produce a polished report hasn&#8217;t necessarily understood why it&#8217;s written that way. What they&#8217;ve skipped is precisely the most valuable part of the training: groping for structure in ambiguity, making trade-offs under pressure, getting sent back six times by a senior before finally grasping what &#8220;good enough&#8221; actually means. Developing judgment isn&#8217;t an information acquisition problem. 
It&#8217;s a process of repeated trial, error, and correction. That process requires real delivery environments, with real pressure, real consequences, and real feedback.</p><p>Courses can be accelerated with AI. But judgment isn&#8217;t taught in courses. It&#8217;s forged in real work.</p><p>I&#8217;m not saying all entry-level work must be preserved, nor am I denying that AI can improve learning efficiency. The real problem is that organizations have no incentive to deliberately rebuild the judgment-formation mechanisms that were once embedded in real delivery work. <strong>AI can increase the speed of learning. But what&#8217;s actually missing isn&#8217;t speed. It&#8217;s the environment.</strong></p><p></p><p><strong>The market won&#8217;t self-correct</strong></p><p>Many people enjoy the easy narrative: AI frees humans from repetitive labor so we can all do higher-order, more creative work.</p><p>Freed to do what, exactly? Who bears the cost of training? Who absorbs the inefficiency of people still learning? Who pays the tuition for the next generation&#8217;s talent pipeline?</p><p>Companies won&#8217;t do this voluntarily. <strong>Companies naturally optimize for lower cost, higher efficiency, and more immediate returns.</strong> You can&#8217;t acknowledge that GenAI delivers significant efficiency gains while also expecting most companies to voluntarily maintain a low-efficiency, high-development talent pathway.</p><p>There&#8217;s also a classic collective action problem at work: every company has an incentive to free-ride on others&#8217; development investment. If Company A spends resources developing junior employees, Company B can simply poach the mid-level talent that Company A produced. As GenAI makes the &#8220;skip development, hire ready-made&#8221; strategy increasingly viable, fewer and fewer companies will choose to bear the cost of developing talent.</p><p>This isn&#8217;t a moral failing of any single company. 
It&#8217;s a structural market failure.</p><p></p><p><strong>The global policy toolkit isn&#8217;t addressing the right problem</strong></p><p>Governments haven&#8217;t been idle on AI&#8217;s workforce impact. Training levies, tax credits, mandatory AI literacy programs. There&#8217;s no shortage of initiatives. But they all share a fundamental blind spot: <strong>they all assume &#8220;development&#8221; is an activity that can be separated from work, independently subsidized, and independently assessed.</strong></p><p>That assumption is disconnected from reality. The reason a junior employee can become a reliable mid-level professional in three to five years isn&#8217;t because someone put them through a training program. It&#8217;s because they did massive amounts of real work on real projects. Development doesn&#8217;t happen alongside work. <strong>Development is the work itself.</strong></p><p>Once AI takes over the tasks that carried the development function, you can&#8217;t compensate by building a separate &#8220;training program&#8221; on the side. No matter how much you subsidize, if junior employees don&#8217;t get the chance to do real things in real delivery environments, development simply won&#8217;t happen.</p><p><strong>The question isn&#8217;t &#8220;who pays for training.&#8221; It&#8217;s &#8220;who preserves the environment where development happens.&#8221;</strong></p><p></p><p><strong>The real solution is straightforward and right in front of us</strong></p><p>If development is embedded in work, then protecting the development function means preserving the space within work where development can occur.</p><p>To put it bluntly: layoffs are fine, but organizations must deliberately retain opportunities for a portion of their people to do real work on real projects, even when AI is perfectly capable of replacing them. This isn&#8217;t about efficiency. 
It&#8217;s about ensuring that when models fail, when tasks exceed model boundaries, or when the regulatory environment shifts, the organization still has people with the judgment to take over.</p><p>This logic isn&#8217;t new. The defense industry has been doing it for decades.</p><p>Fighter jet manufacturers don&#8217;t disband their design teams during years without new contracts. Shipyards take on civilian orders between naval commissions. Not because civilian products are more profitable, but because once you scatter the team and shut down the line, the cost of rehiring, retraining, and rebuilding cohesion for the next contract far exceeds the cost of keeping them on through the gap. <strong>What they practice is capacity preservation: maintaining the capability base so it&#8217;s ready when needed.</strong></p><p>Defense and knowledge work aren&#8217;t the same industry, of course. But they face the same organizational choice: whether to sacrifice, in the name of short-term efficiency, a capability that looks redundant in peacetime but becomes decisive under stress.</p><p>Knowledge-intensive organizations now face that exact choice. You can let AI handle most delivery. But you need to retain enough people doing real work on real projects. On-the-job training, not classroom courses, not simulations, but sustained practice under real delivery pressure. These people and the skills they develop aren&#8217;t &#8220;redundancy.&#8221; They&#8217;re your capability reserve.</p><p>This is a risk management decision, not a charity decision.</p><p></p><p><strong>A company&#8217;s Plan B is also a nation&#8217;s bottom line</strong></p><p>From a company&#8217;s perspective, capacity preservation is Plan B. AI isn&#8217;t infallible. Models get updated and interrupted, networks go down, policies change. 
<strong>An organization that outsources all cognitive labor to AI without retaining sufficient internal human capability is as fragile as a manufacturer that depends on a single supplier for every component.</strong> Maintaining a capability reserve isn&#8217;t a waste of resources. It&#8217;s an insurance policy.</p><p>But if this choice is left entirely to individual companies, the result will mirror the collective action problem described above: some companies will always choose not to retain, not to develop, not to invest, and then scramble to poach talent from the market when things go wrong. When everyone thinks this way, there&#8217;s no one left to poach.</p><p>So ultimately, this is a national-level problem.</p><p>Just as the defense sector can&#8217;t hand all manufacturing capacity to peacetime&#8217;s most efficient bidder, the knowledge economy can&#8217;t hand all cognitive capacity to the AI era&#8217;s most efficient solution. By the time you realize the capacity isn&#8217;t in your hands, it&#8217;s already too late. The ability to develop talent isn&#8217;t just a corporate pipeline. It&#8217;s an economy&#8217;s industrial base.</p><p>The specific policy tools can be designed over time. Perhaps some form of mandatory training retention ratios. Perhaps changes to how on-the-job training is treated for tax purposes. Perhaps industry-level requirements modeled on aviation&#8217;s mandatory manual flying hours, a minimum threshold for human-performed work. 
These need deeper discussion.</p><p>But the direction should be clear: <strong>not building a separate training system outside of work, but preserving the space for human participation in real work, within work itself.</strong></p><p>The moment an organization stops treating real work as a development mechanism by design, it begins, behind the appearance of rising efficiency, to overdraw on its future supply of judgment.</p><p>An organization unwilling to develop people may look more efficient in the short term. A society unable to sustain the development of people will inevitably pay a far greater price.</p><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.odbehindthecurtain.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.odbehindthecurtain.com/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[AI Won’t Reward Your Major. 
It Rewards Irreplaceability.]]></title><description><![CDATA[What a 50-year-old ballet dancer reveals about talent in the age of AI.]]></description><link>https://www.odbehindthecurtain.com/p/ai-wont-reward-your-major-it-rewards</link><guid isPermaLink="false">https://www.odbehindthecurtain.com/p/ai-wont-reward-your-major-it-rewards</guid><dc:creator><![CDATA[Hector Sun]]></dc:creator><pubDate>Mon, 16 Mar 2026 14:48:32 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!CHRs!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa5db8bcd-2db0-44af-a2e1-eb8399616877_4032x3024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>On March 7, I watched Roberto Bolle perform <em>Caravaggio</em> at the Hong Kong Cultural Center &#8212; the Asian premiere.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!CHRs!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa5db8bcd-2db0-44af-a2e1-eb8399616877_4032x3024.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!CHRs!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa5db8bcd-2db0-44af-a2e1-eb8399616877_4032x3024.jpeg 424w, https://substackcdn.com/image/fetch/$s_!CHRs!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa5db8bcd-2db0-44af-a2e1-eb8399616877_4032x3024.jpeg 848w, https://substackcdn.com/image/fetch/$s_!CHRs!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa5db8bcd-2db0-44af-a2e1-eb8399616877_4032x3024.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!CHRs!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa5db8bcd-2db0-44af-a2e1-eb8399616877_4032x3024.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!CHRs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa5db8bcd-2db0-44af-a2e1-eb8399616877_4032x3024.jpeg" width="1456" height="1092" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a5db8bcd-2db0-44af-a2e1-eb8399616877_4032x3024.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1092,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1902823,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.odbehindthecurtain.com/i/191134570?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa5db8bcd-2db0-44af-a2e1-eb8399616877_4032x3024.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!CHRs!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa5db8bcd-2db0-44af-a2e1-eb8399616877_4032x3024.jpeg 424w, https://substackcdn.com/image/fetch/$s_!CHRs!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa5db8bcd-2db0-44af-a2e1-eb8399616877_4032x3024.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!CHRs!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa5db8bcd-2db0-44af-a2e1-eb8399616877_4032x3024.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!CHRs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa5db8bcd-2db0-44af-a2e1-eb8399616877_4032x3024.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p></p><p>He is 50 years old. Most ballet dancers&#8217; careers wind down by around forty. Bolle is still center stage. 
His physique remains immaculate, and his control so precise that dancers half his age can seem tentative beside him.</p><p>Walking out of the theater, I kept circling a single question: how many people once had Bolle&#8217;s talent &#8212; even seriously considered ballet as a career &#8212; but ultimately chose a different path?</p><p>The answer: the vast majority.</p><p>Not because they weren&#8217;t good enough. Because they did the math.</p><p>Ballet is an industry that almost exclusively keeps room for the very top. The number of dancers worldwide who can live comfortably on ballet alone is surprisingly small. Most who dedicate a decade of brutal training still earn unremarkable incomes. And these same people &#8212; with exceptional physiques, fierce discipline, and strong learning ability &#8212; could quite possibly do far better in the corporate world.</p><p>And that leads to a survival strategy that is now starting to break down.</p><div><hr></div><p><strong>The Misplaced Spike: A Smart Career Strategy in Noisy Systems</strong></p><p>In many competitive environments<strong>, a single A+ trait beats being B+ at everything &#8212; as long as the evaluation system is noisy enough.</strong></p><p>This is why someone who would only rank B+ in the modeling world can do extremely well in a corporate setting.</p><p>The modeling track has a narrow evaluation frame: face, physique, camera presence. Everyone around you is A+. Your A+ appearance offers no edge. But the corporate track is entirely different &#8212; the evaluation dimensions are many and the noise is high: analytical ability, communication, appearance, political instinct, stakeholder management. You might be a B in every dimension except one: appearance, where you are A+. Among colleagues who are uniformly B+ with no standout trait, that single spike is enough to set you apart. 
Your A+ appearance becomes the strongest signal in the room.</p><p>Your boss may not be able to articulate why you seem better than your peers. They just feel that you are &#8220;different.&#8221;</p><p>This is not deception. It is not gaming the system. It is placing your spike in the arena where it will be most recognized. Anyone who has spent time in the corporate world has seen this in action &#8212; you may even be this person yourself. It is, at its core, a supremely rational act of resource allocation: if my absolute advantage isn&#8217;t competitive in certain arenas, why not carry it into a different one and push for a breakthrough? After all, it might be the trump card there.</p><p>This strategy used to work extremely well. Because knowledge work was evaluated through systems that were complex, noisy, and often opaque &#8212; fluent delivery, polished slides, smooth stakeholder management &#8212; few bosses could clearly tell whose analysis was actually better. The person who looked the most professional often won.</p><p><strong>Until AI arrived.</strong></p><div><hr></div><p><strong>AI Didn&#8217;t Change the Talent. It Changed the Signal.</strong></p><p>Picture a market intelligence team of three. Many managers would staff it like this: one analyst with outstanding analytical depth, one efficient executor handling the groundwork, and a third who is &#8220;decent&#8221; across the board but more &#8220;presentable&#8221; &#8212; well-rounded, good with clients and senior stakeholders.</p><p>AI is making this three-person structure less stable. The workflow that once required three people is increasingly compressible to one or two &#8212; and the person more likely to remain must be capable of working independently. That means genuinely understanding how to make judgment calls.</p><p>Not because the boss suddenly became more discerning. Because the structure of the role itself has changed. 
When one person must collaborate with AI across the full chain &#8212; prompting, reviewing output, making judgment calls &#8212; the person who is &#8220;decent at everything but not outstanding in core judgment&#8221; increasingly struggles to justify their place.</p><p>And here&#8217;s the compounding effect: AI is making a growing share of output easier to quantify and compare. Previously, your report and your colleague&#8217;s report sat side by side, and the boss judged them largely by impression. Noise was high. Now AI can generate a logically coherent, well-formatted industry analysis in minutes. Once your report is placed directly next to it, you may not come out ahead. Those who can outperform AI will likely do so not through prettier formatting, but through stronger judgment, better frameworks, and sharper problem definition. Meanwhile, those who have been coasting find it harder and harder to hide.</p><p><strong>Once the noise in the arena is compressed, the &#8220;misplaced spike&#8221; strategy starts to break down.</strong></p><p>Your A+ appearance, in an environment where AI has already set a clean, legible baseline, is no longer the strongest signal. The real signal is now: can you produce judgment that AI cannot?</p><div><hr></div><p><strong>Leaving STEM, Rushing into the Creative Fields &#8212; Then What?</strong></p><p>Two claims have gained significant traction recently.</p><p>Daniela Amodei, co-founder and President of Anthropic, has said that studying the humanities will be &#8220;more important than ever.&#8221; Her core point: as AI models grow increasingly capable at technical tasks, the ability to understand humans themselves &#8212; history, motivation, what makes us human &#8212; becomes scarcer.</p><p>Investor Peter Thiel has argued that AI will hit &#8220;math people&#8221; harder than &#8220;word people.&#8221; His point is not that the humanities will win. 
It is that math&#8217;s monopoly as a single screening mechanism is being eroded by AI &#8212; much as chess stopped being the ultimate proxy for intelligence after Deep Blue defeated Kasparov in 1997.</p><p>An oversimplified narrative has taken hold: STEM is finished. The creative and interpretive fields are the future.</p><p>Early signals are appearing: in 2025, a growing number of US computing programs reported weakening undergraduate enrollment, particularly in traditional computer science, software engineering, and information systems tracks. Many are starting to believe: if AI can already write code, what is the point of learning to program? Better to study screenwriting, art, philosophy.</p><p><strong>I believe there is a dangerous misreading buried in this logic.</strong></p><p>Amodei and Thiel are both right &#8212; but their words are being over-interpreted. Amodei is emphasizing that critical thinking and deep understanding of people will be scarcer in the AI era, not that a literature degree is a ticket to safety. Thiel is saying that math as a gatekeeping mechanism is losing its authority, not that the creative fields will outperform STEM.</p><p>What will actually happen is far more complex than a &#8220;creative renaissance.&#8221;</p><div><hr></div><p><strong>Talent Flows Back &#8212; Not Out of Passion, but Out of Pressure</strong></p><p>The first wave of people squeezed out of STEM and analytical roles will try to flood into AI-adjacent tracks &#8212; AI products, machine learning, automation. But these tracks are themselves likely to become crowded quickly. Models can already handle a growing share of foundational coding work. The capacity of this path is far smaller than people imagine.</p><p>Some will pivot to sales, operations, entrepreneurship, hardware. 
But there is another group &#8212; people who once had significant original creative talent but abandoned high-originality fields because the risk-reward ratio was too unfavorable &#8212; who will begin to reconsider.</p><p>In the past, a person with writing talent who went into consulting could earn five times what a freelance writer makes. A person with artistic talent who went into finance had far greater income stability than an independent artist. The math was clear. They rationally chose the safer career path.</p><p>But as AI compresses the mid-tier returns of these safer paths &#8212; as entry-level analyst hiring slows, as much of the output once produced by junior and mid-level consultants can now be rapidly generated by AI as a competent first draft &#8212; <strong>the certainty premium of staying on these paths is no longer high enough.</strong> Some begin to ask themselves where their real comparative advantage actually lies.</p><p>This is not &#8220;returning to one&#8217;s passion.&#8221; It is being pushed back out of the lane they had chosen.</p><div><hr></div><p><strong>High-Originality Fields Were Never a Safe Haven &#8212; They Were Always Closer to Ballet</strong></p><p>Think back to Bolle.</p><p>The brutality of ballet is that it almost never sustains a stable middle tier. No &#8220;decent&#8221; ballet dancer makes a comfortable living from ballet. You are either at the top, or you leave.</p><p>Knowledge work used to be different. Corporations had a vast middle tier &#8212; not the strongest, not the weakest, sustained by well-rounded competence and misplaced spikes. 
This is precisely why people with ballet talent, artistic talent, or writing talent rationally chose corporate careers: corporations reserved space for the middle.</p><p><strong>But AI is turning more and more knowledge fields into something that looks like ballet.</strong></p><p>As high-cognition talent gets pushed out of safer career paths and flows back into high-originality fields, the competitive intensity of those fields will rise sharply. In the past, these fields could sustain a significant volume of mediocre production for extended periods &#8212; not necessarily because the barriers were low, but more because the strongest players had long been siphoned off by finance, consulting, law, and technology.</p><p>This helps explain what we see today: films with loose narrative structures and character motivations that don&#8217;t hold up to scrutiny &#8212; storytelling whose dramatic coherence may not match the craft of a traditional opera. Segments of contemporary art where neither formal rigor nor genuinely persuasive conceptual breakthroughs are on display, yet the work still circulates within relatively closed evaluation systems.</p><p>This doesn&#8217;t necessarily mean these fields are inherently more &#8220;diverse&#8221; or &#8220;inclusive.&#8221; It more likely means one thing: when top-tier competitors are absent for long enough, the baseline level of the field will drop.</p><p>Now this dynamic is beginning to reverse. Those who once traded their original talent for income and certainty are finding the safe path narrowing. 
Once they flow back, they will raise not only the ceiling of these fields but also the floor.</p><p>People who survived on &#8220;good enough&#8221; will discover, for the first time, that they had been operating in an arena where competition was never as fierce as they assumed.</p><p>The middle tier of high-originality fields is likely to follow the same trajectory now visible across many STEM careers: loosening first, then contracting.</p><div><hr></div><p><strong>AI Won&#8217;t Reward Your Major. It Rewards Irreplaceability.</strong></p><p>So the conclusion is not &#8220;study the creative fields and you&#8217;ll be safe.&#8221; The conclusion is:</p><p><strong>AI is turning every field into ballet.</strong></p><p>Across every track, the middle is loosening. Whether you chose STEM or the arts, &#8220;good enough&#8221; is failing as a survival strategy. What is actually being rewarded is neither a disciplinary label nor a misplaced spike, but irreplaceability &#8212; the ability to ask better questions, render harder-to-replace judgments, and build a position that others cannot easily replicate.</p><p>Roberto Bolle is still center stage at 50 &#8212; not because ballet is a &#8220;good track,&#8221; but because he still constitutes irreplaceability within it.</p><p>Real security in the age of AI does not come from picking the right major or profession. It comes from becoming hard to route around in the field where your real talent lies.</p><p>This is not a comfortable conclusion. 
But it is what Roberto Bolle began proving early in life &#8212; and has spent half a lifetime demonstrating since.</p><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.odbehindthecurtain.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.odbehindthecurtain.com/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[What OpenClaw Actually Can and Can’t Replace]]></title><description><![CDATA[Sun&#8217;s Decision Authority Matrix &#8212; OD Behind the Curtain]]></description><link>https://www.odbehindthecurtain.com/p/what-openclaw-actually-can-and-cant</link><guid isPermaLink="false">https://www.odbehindthecurtain.com/p/what-openclaw-actually-can-and-cant</guid><dc:creator><![CDATA[Hector Sun]]></dc:creator><pubDate>Mon, 09 Mar 2026 16:30:14 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/f1261fb1-4414-42be-a3a1-63be8639da40_1430x953.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>OpenClaw (the agentic AI tool that operates your computer on your behalf) has sparked a wave of enthusiasm among knowledge workers, first in Western markets last month and now surging across China. The pitch is seductive: hand your routine tasks to an AI agent, free yourself for higher-value work.</p><p>But this expectation is built on a flawed assumption: that your routine tasks are simple.</p><p>They are not.</p><p><strong>Your &#8220;simple&#8221; work was never simple</strong></p><p>Consider the most mundane task in any corporate calendar: scheduling a cross-departmental meeting.</p><p>In OpenClaw&#8217;s logic, scheduling means scanning calendars, finding overlapping availability, and sending a link. Anyone who has actually coordinated across business units knows that scheduling is not time management. 
It is political negotiation.</p><p>You need 12 stakeholders in a room. If the head of the largest business unit can&#8217;t attend, the meeting is pointless. Cancel it. If he attends but the head of the second-largest unit is unavailable, you need her deputy C, because C is the only person with enough seniority and delegated authority to commit on the spot. If both are unavailable, the meeting collapses again, because you cannot allow the first executive to feel that he showed up personally while the other unit didn&#8217;t even send someone who could speak with authority.</p><p>This entire judgment chain fires in your head instantaneously. If you tried to encode every stakeholder&#8217;s temperament, history, informal authority lines, and political sensitivities into rules that OpenClaw could follow, you would spend more time writing the rules than scheduling ten meetings yourself.</p><p>And when OpenClaw gets one branch wrong and offends the wrong person, who takes the blame?</p><p>You do.</p><p>If you scheduled the meeting yourself and it went sideways, you own it. But once OpenClaw is in the loop, it produces a mechanically &#8220;optimal&#8221; solution that satisfies no one, and you are left standing in front of the room absorbing the impact. Academia has a term for this: the Moral Crumple Zone &#8212; the system fails, the algorithm disappears, and the human operator absorbs the full force of accountability.</p><p></p><p><strong>First, understand what OpenClaw actually is</strong></p><p>There is a widespread confusion among OpenClaw enthusiasts: they conflate the capabilities of AI models with the capabilities of OpenClaw.</p><p>Writing reports, analyzing data, generating summaries &#8212; AI models (GPT, Claude, Gemini) can genuinely accelerate these tasks. But they do this on their own. You do not need OpenClaw to use AI for writing or analysis.</p><p>OpenClaw&#8217;s actual value proposition is not AI intelligence. 
It is computer operation: automated clicking, form-filling, moving data between applications, orchestrating tasks across devices. It is a connectivity and operation layer tool.</p><p>To be fair, OpenClaw handles ambiguous goals better than traditional RPA. It does not rely entirely on predefined rules and stable interfaces. But its core capability remains operational orchestration, not contextual judgment. In most real enterprise scenarios, its marginal improvement over mature RPA solutions is limited.</p><p>The work that looks &#8220;low-value&#8221; &#8212; scheduling, cross-departmental coordination, stakeholder management &#8212; is built entirely on complex decision trees accumulated through years of human experience. A task appearing simple does not mean the judgment behind it is shallow. The more routine the coordination work, the more it depends on tacit knowledge that cannot be codified.</p><p>Back to the flawed expectation: you thought OpenClaw would handle your &#8220;low-value&#8221; routine tasks so you could focus on &#8220;high-value&#8221; analysis. The reality is that AI models (not OpenClaw) can already help with your analysis. And the routine tasks you wanted to offload are precisely where OpenClaw cannot reach.</p><p></p><p><strong>A calculation for business leaders</strong></p><p>Many executives get excited the moment they hear &#8220;AI automation.&#8221; That is understandable. But two layers of expectation need to be examined separately.</p><p><strong>Layer one: fully autonomous end-to-end operations &#8212; AI running the entire chain from sales forecasting to procurement to logistics to finance, with no human intervention.</strong></p><p>This is not achievable in civilian commercial environments. Not because the technology is insufficient, but because the accountability structure does not permit it.</p><p>Sun&#8217;s Decision Authority Matrix defines this as a structural deadlock. 
An autonomous AI chain that does not place a human gate before irreversible commitment nodes (contracts signed, warehouse slots released, penalties paid) cannot assign accountability when things go wrong. But the moment you add a gate, the chain is no longer autonomous &#8212; it collapses back into segmented human approval. <strong>No gate means no accountability. A gate means no autonomy.</strong> This deadlock cannot be solved by spending more money.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!4sE-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36634c63-82f1-4844-8459-ddaee95be4a3_1125x952.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!4sE-!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36634c63-82f1-4844-8459-ddaee95be4a3_1125x952.jpeg 424w, https://substackcdn.com/image/fetch/$s_!4sE-!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36634c63-82f1-4844-8459-ddaee95be4a3_1125x952.jpeg 848w, https://substackcdn.com/image/fetch/$s_!4sE-!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36634c63-82f1-4844-8459-ddaee95be4a3_1125x952.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!4sE-!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36634c63-82f1-4844-8459-ddaee95be4a3_1125x952.jpeg 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!4sE-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36634c63-82f1-4844-8459-ddaee95be4a3_1125x952.jpeg" width="1125" height="952" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/36634c63-82f1-4844-8459-ddaee95be4a3_1125x952.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:952,&quot;width&quot;:1125,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:146787,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.odbehindthecurtain.com/i/190408105?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36634c63-82f1-4844-8459-ddaee95be4a3_1125x952.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!4sE-!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36634c63-82f1-4844-8459-ddaee95be4a3_1125x952.jpeg 424w, https://substackcdn.com/image/fetch/$s_!4sE-!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36634c63-82f1-4844-8459-ddaee95be4a3_1125x952.jpeg 848w, https://substackcdn.com/image/fetch/$s_!4sE-!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36634c63-82f1-4844-8459-ddaee95be4a3_1125x952.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!4sE-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36634c63-82f1-4844-8459-ddaee95be4a3_1125x952.jpeg 
1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>Truly autonomous chains currently operate only in military contexts, for three specific reasons:</p><ul><li><p>Chain of command eliminates accountability diffusion;</p></li><li><p>Defense budgets absorb extreme error costs;</p></li><li><p>Wartime environments inherently accept irreversible consequences.</p></li></ul><p>Commercial enterprises possess none of these three conditions.</p><p><strong>Layer two: fine, full autonomy is off the table. But surely OpenClaw can replace enough workers on individual tasks to cut headcount? 
That alone would justify the investment.</strong></p><p>The analysis above already answers this. The operational orchestration work that OpenClaw can handle is already covered by mature RPA and AI tool combinations, with limited marginal improvement. And the employees you want to cut are doing exactly the kind of tacit-judgment coordination work analyzed above: who must attend, who can be absent, what can be said, what cannot wait. None of these judgments can be delegated to OpenClaw.</p><p>The people you cut may be the people you most need to keep.</p><p></p><p><strong>Conclusion</strong></p><p>Our seemingly mundane daily work is stitched together from countless small, deeply human judgments. AI models can write your reports, run your analyses, and search for information faster than you can &#8212; but none of that requires OpenClaw. And the routine operations OpenClaw promises to handle for you? Its marginal improvement is far smaller than the demo videos suggest.</p><p>It is precisely because this work cannot be automated that it carries real value. <strong>Your tacit judgment is your true irreplaceability. </strong>Do not let the hype around a tool devalue the most valuable thing you bring to work. You thought you were installing a tireless digital worker called OpenClaw. 
But stripped of the tacit knowledge that lives in your head and cannot be turned into code, it quickly becomes something else entirely: OpenFlaw.</p><p><em>For the full framework on where AI decision authority can and cannot be exercised: <a href="https://www.odbehindthecurtain.com/p/autonomous-ai-chains-are-easiest?lli=1">Sun&#8217;s Decision Authority Matrix</a></em></p><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.odbehindthecurtain.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.odbehindthecurtain.com/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[Killing the Top Node Doesn’t Kill the System]]></title><description><![CDATA[What a wartime succession crisis reveals about organizational resilience and decision rights]]></description><link>https://www.odbehindthecurtain.com/p/killing-the-top-node-doesnt-kill</link><guid isPermaLink="false">https://www.odbehindthecurtain.com/p/killing-the-top-node-doesnt-kill</guid><dc:creator><![CDATA[Hector Sun]]></dc:creator><pubDate>Sun, 08 Mar 2026 04:39:33 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/5d74599d-249c-49ae-a746-9a2328b3d419_1345x745.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I&#8217;m not a military analyst. I work on organization design and decision rights.</p><p>So when I look at the last several days in the Middle East, I don&#8217;t primarily see battlefield tactics. 
I see a familiar organizational pattern: an external actor tries to remove the top node of a high-control system, assumes the rest of the structure will stall, and then discovers that the system does not collapse on schedule.</p><p>That is the wrong object of analysis.</p><p>The real question is not whether the top individual survives.</p><p>The real question is whether the system below that individual contains redundancy, delegated execution capacity, backup chains of authority, and enough cohesion to keep functioning under shock.</p><p>That is why &#8220;decapitation&#8221; is so often misunderstood.</p><p></p><p><strong>1. Removing the top leader does not necessarily remove the risk</strong></p><p>Outside observers often treat the apex leader of a tightly controlled system as the sole source of danger. The reasoning is straightforward: remove the person, weaken the system.</p><p>But in mature, high-control organizations, the person at the top is not always just the engine. Sometimes he is also the brake.</p><p>The longer a leader stays in power, the more his role tends to shift. At some point, his primary job is no longer simply expansion. It becomes equilibrium management: balancing factions, suppressing overreach, rationing escalation, and preserving control over rivals inside the system as much as enemies outside it.</p><p>From the outside, the logic of decapitation looks clean.</p><p>In organizational terms, it often is not.</p><p>Remove a long-serving apex leader, and you may not be destroying the system&#8217;s will. You may be removing one of its strongest internal constraints.</p><p>That is why decapitation can produce the opposite of the intended result: not paralysis, but release; not confusion, but hardening; not immediate fragmentation, but a more aggressive and less restrained operating mode.</p><p>Public reporting on Iran&#8217;s post-Khamenei situation is useful precisely because it does not support the fantasy of instant collapse. 
The formal succession process remained unresolved, and clerical, security, and Revolutionary Guard networks moved quickly to preserve continuity rather than allow the system to fail all at once.</p><p></p><p><strong>2. Never use a deputy&#8217;s behavior under constrained authority to predict his behavior under full authority</strong></p><p>This is one of the most common succession errors in any hierarchy.</p><p>Observers look at number twos and number threes in centralized systems and see caution, compliance, and narrow execution. Then they assume that is the person&#8217;s essence.</p><p>Usually it is not.</p><p>It is the context.</p><p>In tightly controlled organizations, visible ambition is often maladaptive. Survival depends on discipline, patience, signaling loyalty, and knowing exactly when not to stand out.</p><p>So when the ceiling disappears, the observed personality can appear to change overnight. But what changed may not be the person&#8217;s underlying preferences. What changed was the decision-rights environment.</p><p>And Iran&#8217;s case suggests something even more important than individual succession psychology. Public reporting indicates that the Revolutionary Guards had built replacement chains several ranks deep and that some units were operating under advance instructions rather than waiting for real-time political direction. In other words, resilience here may be less about the charisma of a replacement and more about prior organizational design.</p><p></p><p><strong>3. The strike meant to fracture the system became its rallying cry</strong></p><p>Any internally divided organization under extreme outside pressure tends to display a classic survival response: internal arguments are temporarily frozen, factional competition is deferred, and legitimacy questions are subordinated to continuity.</p><p>This does not mean internal conflict disappears. 
It means the organization reprioritizes.</p><p>For the attacker, this is the trap.</p><p>Pressure is supposed to widen cracks. Instead, it can flatten them, at least for a time.</p><p>The strike meant to weaken the target can become the event that justifies emergency concentration of power, faster coordination, and a renewed definition of loyalty. <strong>In organizational language, the attacker thinks it is disabling the system. In practice, it may be helping the system switch operating modes.</strong></p><p>That is one reason the war remains analytically important even beyond the immediate battlefield. As of March 8, public reporting still described the conflict as ongoing, with continued strikes, Iranian retaliation, and wider disruption to Gulf shipping and energy markets rather than a settled postwar phase.</p><p></p><p><strong>4. The real unit of analysis is not the person. It is the decision network.</strong></p><p>This is where many geopolitical arguments break down.</p><p>They focus on the individual.</p><p>They should be mapping the network.</p><p>If all legitimacy, intelligence, and execution truly sit in one person, decapitation can work.<br>If they do not, removing the apex may not eliminate danger at all. It may simply force a reconfiguration.</p><p><strong>You think you removed the ceiling. In fact, you may have taken off the pressure valve.</strong></p><p>A state is not a corporation, and war is not a boardroom fight.</p><p>But power transfer, succession design, decision rights, and organizational response to external threat obey more similar structural logics than most people are comfortable admitting.</p><p>That is why this matters beyond the Middle East.</p><p>My real interest is not war as spectacle. It is what extreme cases reveal about authority under stress. 
Once you start looking through that lens, the next frontier is obvious: what happens when core decision rights are no longer transferred only between humans, factions, and institutions, but increasingly redesigned around AI systems themselves?</p><p>Next month I&#8217;ll be in Riyadh for DeepFest 2026. I&#8217;m less interested in the conference as an event than as a live field site for a bigger question: when AI begins to mediate, structure, or absorb core decision rights, what becomes of the human &#8220;brake&#8221; inside the organization? That is where I think the next major power transition is already underway.</p><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.odbehindthecurtain.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.odbehindthecurtain.com/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[AI Runs Entire Kill Chains in War. In Business, It Can't Even Run A Supply Chain.]]></title><description><![CDATA[Sun&#8217;s Decision Authority Matrix]]></description><link>https://www.odbehindthecurtain.com/p/autonomous-ai-chains-are-easiest</link><guid isPermaLink="false">https://www.odbehindthecurtain.com/p/autonomous-ai-chains-are-easiest</guid><dc:creator><![CDATA[Hector Sun]]></dc:creator><pubDate>Tue, 03 Mar 2026 13:22:17 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/451cae50-c7fc-4cc4-be5a-07eb2ea0572b_1111x969.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A hotel&#8217;s AI room-assignment system upgrades the incoming Mr. Sun to an Executive Suite on the 12th floor. The manager takes one look and overrides the decision: make it the Presidential Suite. Cost: virtually zero. Time: three seconds. 
The system ran a stack of variables internally&#8212;Ambassador-tier membership, 28,000 USD in annual spend, first stay at this property&#8212;and produced a recommendation. The manager doesn&#8217;t need to inspect the model&#8217;s feature weights. He only needs to judge the output: is Mr. Sun worth a better room? If yes, upgrade manually.</p><p>That is the entire world that current AI governance was built for.</p><p>Override. Audit. Transparency. The vocabulary sounds robust. It rests on one premise: AI is an individual employee. Every decision it makes can be reviewed and corrected in time. The conclusion is comprehensible, reversible, and low-cost. Human judgment applied to the final output&#8212;that&#8217;s enough.</p><p>Consider the numbers that sound so intimidating today: IDeaS makes 100 million pricing decisions daily across 30,000 hotels; Amazon auto-adjusts prices 2.5 million times a day; Upstart processes 92% of its loans with zero human involvement; Google Performance Max manages budgets for over a million advertisers. Every one of these is fundamentally the same thing: humans set boundaries, AI executes within them, and a manager can pull the plug at any time. A hundred million decisions&#8212;they are still point decisions. Each one operates within a single domain, with clear boundaries, and Override is always available.</p><p>Now watch the premise collapse.</p><p>An AI system predicts that sales in the Eastern China region will decline 15% next quarter. The procurement system automatically cuts raw material orders by 20%; the reduced order volume triggers a minimum-purchase clause in supplier contracts. The system calculates that the penalty for breach is lower than the risk of excess inventory, and accepts the breach; logistics automatically releases two pre-booked warehouse slots; the finance system recalculates cash flow and adjusts the payment schedule for accounts payable.</p><p>Three months later, sales didn&#8217;t decline. They rose. 
But raw materials are short, warehouse slots are gone, the supplier has collected penalties and is demanding a price increase, and the financial model has triggered a cascade of downstream adjustments. The result is not just a stockout: the contract breach has triggered a floor clause in a long-term agreement with a major shipping line, automatically raising all freight rates by 15% for the second half of the year.</p><p>Now who performs the Override? Override which step?</p><p>Reverse the sales forecast? Every downstream link has to be rebuilt: re-sign procurement contracts, re-lease warehouse space, claw back supplier penalties, reconstruct the financial model. In a point decision, Override means &#8220;change one room.&#8221; In an autonomous chain, Override means &#8220;dismantle a cross-functional chain that is already running, then rebuild from scratch.&#8221; The difference is not one of degree. It is one of kind. The cost is so high that no rational manager would press that button.</p><p>Audit, then? Audit what? On this chain, every intermediate node&#8217;s output is simultaneously the next node&#8217;s input. The sales forecast feeds procurement; procurement volume feeds logistics; logistics arrangements feed finance. The problem is not in any single node&#8212;it is in the transmission logic between nodes. Existing audit frameworks are designed to check whether one decision is correct. They are structurally incapable of checking whether the internal logic of an entire decision chain is coherent. The technology is not a complete void: process mining and digital twins are active fields. But at the organizational level, no company has established an executable chain-level audit loop.</p><p>Accountability, then? In the old world, someone signed off, and accountability automatically attached to the signatory. 
A meeting, an approval action&#8212;these forced together &#8220;where did this number come from,&#8221; &#8220;what was done based on it,&#8221; and &#8220;what irreversible consequences were committed.&#8221; AI eliminated the act of signing, but no one designed a replacement. The sales forecast on this autonomous chain is not &#8220;a decision that was approved&#8221;&#8212;it is &#8220;a state that was computed and propagated.&#8221;</p><p>No one signed off, so accountability has nowhere to land.</p><p>The data scientist says: &#8220;I just set the parameters.&#8221; The CTO says: &#8220;I just approved the system going live.&#8221; The business VP says: &#8220;I never even saw that number.&#8221; Accountability dilutes along the chain until everyone can say &#8220;nothing to do with me.&#8221; The human forced into the &#8220;in the loop&#8221; position becomes a sponge that absorbs blame, not an agent who exercises decision authority. Academia calls this the &#8220;Moral Crumple Zone.&#8221;</p><p>When AI evolves from &#8220;individual employee&#8221; to &#8220;autonomous chain,&#8221; the existing governance vocabulary collapses entirely.</p><p></p><p><strong>Sun&#8217;s Decision Authority Matrix</strong></p><p>The allocation of any decision authority ultimately depends on three conditions:</p><ul><li><p>Can the right to halt be exercised?</p></li><li><p>Is the cost of halting bearable?</p></li><li><p>Can accountability be attributed?</p></li></ul><p>Existing AI governance frameworks can satisfy these three conditions, but only when AI is making point decisions. The moment AI evolves into an autonomous chain, existing frameworks are powerless against any of the three.</p><p>But distinguishing point decisions from autonomous chains is not enough. The same autonomous chain, placed in a commercial environment versus a military environment, yields completely different answers for halt rights, error tolerance, and accountability. 
The application domain must also be part of the analysis.</p><p>On this basis, I propose Sun&#8217;s Decision Authority Matrix. The horizontal axis is decision architecture (Point Decisions vs. Autonomous Chains). The vertical axis is application domain (Civilian vs. Military).</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!rUM5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F84f7059c-2e38-4ca7-b108-baf80b03a0f2_787x1584.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!rUM5!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F84f7059c-2e38-4ca7-b108-baf80b03a0f2_787x1584.jpeg 424w, https://substackcdn.com/image/fetch/$s_!rUM5!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F84f7059c-2e38-4ca7-b108-baf80b03a0f2_787x1584.jpeg 848w, https://substackcdn.com/image/fetch/$s_!rUM5!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F84f7059c-2e38-4ca7-b108-baf80b03a0f2_787x1584.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!rUM5!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F84f7059c-2e38-4ca7-b108-baf80b03a0f2_787x1584.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!rUM5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F84f7059c-2e38-4ca7-b108-baf80b03a0f2_787x1584.jpeg" width="787" height="1584" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/84f7059c-2e38-4ca7-b108-baf80b03a0f2_787x1584.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1584,&quot;width&quot;:787,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:275522,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.odbehindthecurtain.com/i/189761079?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F84f7059c-2e38-4ca7-b108-baf80b03a0f2_787x1584.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!rUM5!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F84f7059c-2e38-4ca7-b108-baf80b03a0f2_787x1584.jpeg 424w, https://substackcdn.com/image/fetch/$s_!rUM5!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F84f7059c-2e38-4ca7-b108-baf80b03a0f2_787x1584.jpeg 848w, https://substackcdn.com/image/fetch/$s_!rUM5!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F84f7059c-2e38-4ca7-b108-baf80b03a0f2_787x1584.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!rUM5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F84f7059c-2e38-4ca7-b108-baf80b03a0f2_787x1584.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" xmlns="http://www.w3.org/2000/svg"></svg></button></div></div></div></a></figure></div><p><strong>The Commercial Sandbox &#8212; Civilian &#215; Point Decisions</strong></p><p>Pricing, approvals, customer service, r&#233;sum&#233; screening. This is the only quadrant that is mature, heavily commercialized, and crowded. It is also the quadrant that virtually every AI company uses for marketing. They demonstrate AI&#8217;s value with the most harmless scenarios, making AI seem perfectly tame. Existing regulatory frameworks are perfectly adequate here. This quadrant needs neither panic nor over-regulation.</p><p>But don&#8217;t overestimate AI&#8217;s capability boundary within this quadrant either. The successfully commercialized AI point decisions share a common trait: finite rulesets. 
No matter how many variables hotel pricing and room assignment involve, the end result is an optimization function.</p><p>But many seemingly simple tasks are not optimization functions. An executive assistant sends a meeting invitation. Someone replies they can&#8217;t attend. Does the meeting still happen? If A can&#8217;t make it, the meeting goes ahead. But if both A and B can&#8217;t make it, C must be present, because C is the only person who can represent A&#8217;s position. C has another meeting this afternoon, so this one needs to be rescheduled. When it gets rescheduled depends on the boss&#8217;s calendar tomorrow. Every step in this judgment chain lives outside any SOP. It exists in the assistant&#8217;s tacit understanding of organizational power structures, interpersonal dynamics, and the relative weight of agenda items. Current AI is more than capable of handling formulaic point decisions, but remains helpless with point decisions that require contextual judgment. Companies should be clear about which type of decision they are targeting before deploying AI.</p><p></p><p><strong>The Accountability Black Hole &#8212; Civilian &#215; Autonomous Chains</strong></p><p>To get a cross-departmental AI autonomous chain running in a commercial environment, you need to answer three questions: Who can halt it? Can the cost of halting be absorbed? If something goes wrong, who is on the hook?</p><p>This quadrant is empty.</p><p>How do you determine whether an AI system is a point decision or an autonomous chain? Two criteria, both of which must be met:</p><ol><li><p>System A&#8217;s output automatically becomes System B&#8217;s input with no human confirmation in between (cross-domain continuous triggering);</p></li><li><p>The chain contains irreversible cost lock-ins. 
Once triggered, the cost of rollback escalates steeply (contracts signed, warehouse slots released, penalties paid).</p></li></ol><p>Cross-departmental, fully autonomous end-to-end processing does not exist in the civilian commercial sector today. Not &#8220;in development,&#8221; not &#8220;about to launch&#8221;&#8212;zero.</p><p>Pactum runs autonomous negotiations (Walmart uses it; 2,000 simultaneous supplier conversations, 68% close rate). RELEX runs autonomous replenishment (96% of replenishment decisions are untouched by humans). UPS ORION runs autonomous route optimization (55,000 trucks optimized daily, saving 100 million miles a year). Impressive as they sound, test them against the two criteria: Pactum&#8217;s negotiation outputs do not automatically trigger downstream procurement and logistics cascades. RELEX&#8217;s replenishment orders do not automatically sign shipping contracts and lock warehouse slots. ORION&#8217;s route optimization does not automatically rewrite financial budgets. Their outputs do not automatically cross nodes of irreversible commitment (contracts, slots, penalties), and rollback costs are low. In plain terms, they are <strong>Single-Function Autonomy</strong>&#8212;the most aggressive plays inside the Commercial Sandbox, not the Accountability Black Hole.</p><p>It is not a technology problem. It is a trust problem.</p><p>When a chain can automatically cross irreversible commitment nodes (signing contracts, releasing warehouse slots, paying penalties), someone must be accountable for &#8220;allowing it to cross.&#8221; But no one fills that role today. Without that person, there is no one to hold accountable when things go wrong.</p><p>Some might say: just add a control gate. Put a human approval step before every irreversible commitment node, and you get both automation and accountability. But this is precisely the problem. The moment you add a gate, the chain is no longer autonomous. 
It gets sliced into segments, each running point AI, with gates welding authority in place between them. That is not the Accountability Black Hole. That is a reassembled version of the Commercial Sandbox. So this quadrant faces a structural deadlock in the commercial world: without a gate, it cannot be held accountable; with a gate, it is no longer an autonomous chain.</p><p>Finance looks like the closest thing to this quadrant. In high-frequency trading, cascading reactions between algorithms genuinely exist: one algorithm&#8217;s sell signal triggers another&#8217;s hedge, which triggers a third&#8217;s stop-loss. But financial trading runs end-to-end inside electronic systems, and exchanges and clearing systems are natural gates: circuit breakers can freeze an entire chain in milliseconds, and every transaction can be monitored, traced, and even intercepted before settlement. In other words, the financial system uses infrastructure-level mandatory gates to slice a seemingly autonomous chain back into controllable segments. It is not an exception to the Accountability Black Hole. It is the best illustration of the rule that adding a gate collapses the chain back into the Commercial Sandbox.</p><p>Physical commerce is different. Contracts are signed with real people. Warehouse slots are locked with shipping lines. Penalties are paid in real money. Once these commitments are made, no system can reverse them with one click. That is why this quadrant will remain empty in civilian physical commerce.</p><p></p><p><strong>The Tactical Copilot &#8212; Military &#215; Point Decisions</strong></p><p>AI-assisted target identification, intelligence analysis, battlefield situational assessment. The system recommends; the commander decides. Every major military power is deploying at scale. Humans remain in the loop.</p><p>The governance logic here is nearly identical to the Commercial Sandbox: the commander is the hotel manager. 
AI produces a conclusion; the human glances at it and overrides if needed. But the stakes are in a completely different league. The hotel manager gets it wrong, a guest complains. The commander gets it wrong, civilians die. The same Override mechanism, under different stakes, gives &#8220;human-in-the-loop&#8221; an entirely different meaning.</p><p>A reconnaissance drone over Syria identifies a suspected militant staging area. The operator sees the AI-generated target box on the screen. He has thirty seconds to authorize or deny the strike. This is the Tactical Copilot&#8217;s standard scenario: the machine sees, the human decides. But move the same drone to the electronic-warfare-saturated skies of eastern Ukraine, where the signal can cut out at any moment. Once the link is lost, that thirty-second decision window ceases to exist. The machine will not hover in place waiting for reconnection. It either crashes or switches to autonomous mode and continues the mission. Same drone, same algorithm&#8212;from the Tactical Copilot to the Unleashed Chain, separated by a single lost signal.</p><p>This is not an isolated case. &#8220;Human-in-the-loop&#8221; is degrading from &#8220;humans decide&#8221; to &#8220;humans can&#8217;t object in time.&#8221; Time compression is only the first dimension of the slide. The second is the automatic interlocking of decision chains: when AI&#8217;s situational assessment automatically triggers weapon systems into ready state, automatically adjusts force deployment plans, and automatically updates rules of engagement parameters, the commander is no longer facing a recommendation he can veto. He is facing a chain that is already running. 
Time compression means the human &#8220;can&#8217;t intervene in time.&#8221; Decision chain interlocking means the human &#8220;doesn&#8217;t know where to intervene.&#8221; Stack the two dimensions, and the Tactical Copilot slides into the Unleashed Chain.</p><p></p><p><strong>The Unleashed Chain &#8212; Military &#215; Autonomous Chains</strong></p><p>This is the most uncomfortable quadrant in the entire matrix. The irony: it is the easiest to implement.</p><p>The military naturally resolves the three pain points that paralyze the commercial world:</p><ol><li><p>Chain of Command perfectly eliminates accountability diffusion. Commanders bear accountability for all actions under their command, whether executed by humans or machines. This is not a system that needs to be redesigned. It has been running for centuries;</p></li><li><p>Defense budgets in the hundreds of billions can absorb extreme error costs. U.S. military AI spending was $9.2 billion in 2023 and is projected to reach $38.8 billion by 2028;</p></li><li><p>Wartime environments naturally accept irreversible consequences, as long as the mission is accomplished.</p></li></ol><p>This quadrant is not only home to superpower competition. It is crowded with players seeking asymmetric advantage.</p><p>Israel&#8217;s AI systems generated and authorized, in weeks, strike targets that previously required human analysts years to confirm. On the Ukrainian battlefield, the drone you just saw in the Tactical Copilot&#8212;the one that lost its signal&#8212;is not a hypothetical. It happens every day, and it is evolving from single-unit autonomy to swarm autonomy. Turkey&#8217;s autonomous drone fleets have already changed the rules of regional warfare as off-the-shelf products.</p><p>Here, no one cares about explainability. 
When the choice is between activating autonomous strike and being killed, humans hand over the final firing authority to machines without hesitation.</p><p>But the real danger is not at the tactical level. Every case above involves autonomous decisions on a local battlefield: one target, one drone. Yet the moment tactical autonomy succeeds, it naturally climbs toward the strategic tier: from &#8220;autonomously identifying and striking one target&#8221; to &#8220;autonomously planning and executing a strike sequence,&#8221; then to &#8220;autonomously assessing the battlespace and adjusting the campaign plan.&#8221; Each level up means a longer chain, more irreversible consequences, and a narrower window for human intervention. This slide from tactical to strategic reveals the darkest reality of this quadrant: <strong>the reverse Moral Crumple Zone.</strong></p><p>In civilian AI, humans are forced to take the blame for machine errors. In military autonomous chains, AI becomes the perfect shield for human decisions: Plausible Deniability. If a cross-border strike is triggered, the attacking party can attribute it to &#8220;a logic fault in the autonomous system.&#8221; This is the fundamental reason behind last year&#8217;s explicit consensus between the U.S. and China that &#8220;humans must retain ultimate control over nuclear weapons.&#8221; The great powers see the game for what it is: the door must be sealed shut. No party can be allowed to launch a nuclear strike under the cover of &#8220;system malfunction&#8221; and walk away from ultimate accountability.</p><p></p><p><strong>Conclusion</strong></p><p>Looking back across the four quadrants, what truly determines how far AI can go has never been technology. 
It is three organizational questions:</p><ul><li><p>Who has the authority to halt?</p></li><li><p>Can the cost of halting be absorbed?</p></li><li><p>If something goes wrong, who is on the hook?</p></li></ul><p>This matrix reveals a deeply unsettling conclusion: autonomous AI chains are easiest to deploy in the domain where they should least be used (military), and hardest to trust in the domain where they are most desired (civilian).</p><p>This map does not provide answers. But it does something more fundamental: it helps you locate where the problems are.</p><p>If you are a corporate decision-maker, it tells you which quadrant the AI you are deploying actually sits in, and whether your governance tools are adequate.</p><p>If you are a policymaker, it shows you which quarter of the map your regulatory framework covers, and which three-quarters it misses.</p><p>If you are an AI practitioner, it explains why clients pay without hesitation in the Commercial Sandbox and won&#8217;t touch the Accountability Black Hole.</p><p>Existing national-level AI governance frameworks, from the EU AI Act to G20 declarations, are almost entirely crammed into the Commercial Sandbox, debating vigorously. This is like drafting an elaborate set of rules for managing hotel room upgrades&#8212;who qualifies for a suite, under what conditions, how to compensate if the upgrade goes wrong&#8212;and then declaring that the building&#8217;s fire safety problem is also solved. Covering the entire map with the rules of the safest quadrant is not governance. It is self-deception.</p><p>Let&#8217;s stop pretending we have drawn the boundaries for AI. 
Three-quarters of the map is still blank.</p><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.odbehindthecurtain.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.odbehindthecurtain.com/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[Who's Playing Chess, and Who's a Piece? The Power Game of AI Governance]]></title><description><![CDATA[How the New Delhi AI Impact Summit Exposed the Global Fight for AI Influence]]></description><link>https://www.odbehindthecurtain.com/p/whos-playing-chess-and-whos-a-piece</link><guid isPermaLink="false">https://www.odbehindthecurtain.com/p/whos-playing-chess-and-whos-a-piece</guid><dc:creator><![CDATA[Hector Sun]]></dc:creator><pubDate>Sat, 21 Feb 2026 04:18:04 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!GlGn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffd14e77-d8b2-4f94-8a90-331d15e058ea_1366x604.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!GlGn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffd14e77-d8b2-4f94-8a90-331d15e058ea_1366x604.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!GlGn!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffd14e77-d8b2-4f94-8a90-331d15e058ea_1366x604.jpeg 424w, 
https://substackcdn.com/image/fetch/$s_!GlGn!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffd14e77-d8b2-4f94-8a90-331d15e058ea_1366x604.jpeg 848w, https://substackcdn.com/image/fetch/$s_!GlGn!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffd14e77-d8b2-4f94-8a90-331d15e058ea_1366x604.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!GlGn!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffd14e77-d8b2-4f94-8a90-331d15e058ea_1366x604.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!GlGn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffd14e77-d8b2-4f94-8a90-331d15e058ea_1366x604.jpeg" width="1366" height="604" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ffd14e77-d8b2-4f94-8a90-331d15e058ea_1366x604.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:604,&quot;width&quot;:1366,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:314681,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.odbehindthecurtain.com/i/188684718?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffd14e77-d8b2-4f94-8a90-331d15e058ea_1366x604.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!GlGn!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffd14e77-d8b2-4f94-8a90-331d15e058ea_1366x604.jpeg 424w, https://substackcdn.com/image/fetch/$s_!GlGn!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffd14e77-d8b2-4f94-8a90-331d15e058ea_1366x604.jpeg 848w, https://substackcdn.com/image/fetch/$s_!GlGn!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffd14e77-d8b2-4f94-8a90-331d15e058ea_1366x604.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!GlGn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffd14e77-d8b2-4f94-8a90-331d15e058ea_1366x604.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p><strong>Opening: The Broken Chain</strong></p><p>February 19, 2026. New Delhi AI Impact Summit.</p><p>Modi lined up the political and business leaders onstage, pulling them into a row, hands raised high, linked together. Sundar Pichai played along. The others played along. The entire stage played along. Except between Sam Altman and Dario Amodei, where the chain broke. The two men each raised a fist, with an awkward gap between them.</p><p>The scene reminded me of the oath ceremony before the Quarter Quell in the second Hunger Games film. A row of elite tributes, hand in hand, performing unity for the audience. But everyone knows that once the arena gates open, those hands will let go, and they will become each other&#8217;s opponents.</p><p>The logic of the entire summit was almost identical.</p><p></p><p><strong>First Cut: The Power Structure in a Photo Op</strong></p><p>Start with who&#8217;s standing onstage: Altman, Amodei, Pichai &#8212; three American tech company CEOs. Then Modi, the host.</p><p>This is today&#8217;s global AI governance in miniature: the people with real technological capability stand center stage, while those seeking a voice in the conversation set up the venue.</p><p>India currently has no globally influential frontier model, no homegrown large language model, no advanced chip manufacturing capability. But Modi organized the most lavish AI summit in history, drawing representatives from over 100 countries, 20 heads of state, and 45 ministerial delegations. This was not a technology conference.
It was an auction for a seat at the table.</p><p>You might ask: why have India, the EU, and so many Global South countries suddenly developed such a fervent interest in AI governance and AI ethics?</p><p>At its core, this is a profound case of FOMO &#8212; Fear of Missing Out.</p><p>For the past several decades, the global geopolitical landscape has looked like a corporate org chart frozen in place for half a century. Apart from a very small number of countries like China, which clawed their way into a different reporting line and title through brutally competitive performance, most countries&#8217; positions have remained fixed. What tier of supplier you are, which regional execution unit you belong to &#8212; all of it was written in stone long ago.</p><p>But the arrival of AI means the international order is about to undergo a total reorganization.</p><p>For most mid-sized countries, the barriers of foundational compute and frontier models have already locked them out of the main table. But AI governance, ethical standards, and compliance frameworks remain in a &#8220;power vacuum.&#8221; So they are making a desperate bet: using rules and discourse power to hedge against their technological disadvantage.</p><p>Anyone who has worked in organizational transformation will recognize this scene immediately. It&#8217;s like a company about to establish a &#8220;New Business Transformation Steering Committee,&#8221; and every previously marginalized mid-sized business unit head is terrified: if they don&#8217;t muscle their way onto this committee during this reorg, they will spend the next twenty years as &#8220;compute vassals&#8221; in the new power structure.</p><p>The most common strategy for gaining a bigger voice on such a committee? Host a cross-functional summit. Get the CEO and the heads of the major business units to come sit in your conference room. 
You may not have the strongest performance numbers, but you now have a photo of every key stakeholder in your meeting room. That photo is your political capital.</p><p>India is doing exactly the same thing. The only difference is the platform has shifted from a conference room to the world stage.</p><p>Even more telling was how China showed up. India formally invited China to the summit for the first time. China came &#8212; but sent a Vice Minister from the Ministry of Science and Technology. The US sent CEO-level figures like Altman, Amodei, and Pichai. China sent a vice minister. The gap in seniority is itself a diplomatic statement: I&#8217;m giving you face, but I don&#8217;t consider this a big deal.</p><p>Chinese domestic commentary was even more blunt. Analysts from Guancha noted that India was hoping to elevate its own status and diplomatic influence by inviting both AI superpowers simultaneously. The consensus on Weibo was clear: this was India&#8217;s bid for &#8220;AI visibility,&#8221; not a breakthrough.</p><p></p><p><strong>Second Cut: Are You a Player or a Referee?</strong></p><p>At its core, the global AI competition between nations can be reduced to a single question: are you a player or a referee?</p><p>This question matters because most countries haven&#8217;t figured out the answer. Or more precisely, they want to be both.</p><p>The EU wants to be the referee: the AI Act is the world&#8217;s first comprehensive AI regulation, effective since August 2024, and its four-tier risk classification has become a de facto global standard. But it also wants to be a player: heavily backing Mistral AI, pushing &#8220;Sovereign AI,&#8221; and emphasizing at every opportunity that &#8220;we have technological capability too.&#8221; The problem is that referee and player are inherently conflicting roles. Your credibility as a rule-maker comes precisely from the fact that you don&#8217;t play in the game.
The moment you step onto the pitch, your whistle stops working.</p><p>India has the same problem. On one hand, it&#8217;s proclaiming &#8220;Welfare for All, Happiness for All,&#8221; rushing to seize the moral high ground. On the other, it&#8217;s desperately courting Altman and Pichai for investment and factories. Are you trying to regulate them or please them? When your summit accepts hundreds of billions of dollars in AI investment commitments, do you still have standing to tell those investors &#8220;your AI product is non-compliant&#8221;?</p><p>I&#8217;ve seen this kind of role confusion countless times in corporate life. The most classic example is an HR department that wants to be &#8220;the voice of employees&#8221; while simultaneously executing management&#8217;s layoffs. Both roles matter, but you cannot play them at the same time. The moment you sit across from employees at the termination table, your credibility as their advocate drops to zero. The EU and India&#8217;s predicament is fundamentally the same: role ambiguity is not flexibility. It is the destruction of credibility.</p><p><strong>The US and China have no such problem.</strong></p><p>The US knows it is the most important player. According to the Stanford AI Index (Ecosystem Graphs), 109 of the 149 foundation models released in 2023 were American (roughly 73%). By installed IT power capacity, the US and China together account for approximately 70% of global data center capacity (2024). The Trump administration outright revoked Biden&#8217;s AI executive order. The signal could not be clearer: I am not the referee. I am the player &#8212; the biggest player on the planet.</p><p>China doesn&#8217;t pretend either. It&#8217;s rapidly iterating on models (DeepSeek, Qwen, and others), going all-in on standards (algorithm registration systems, the Global AI Governance Initiative), and building discourse power across the Global South. 
The roadmap is unambiguous: I&#8217;m catching up, I know I&#8217;m catching up, and there is no second path.</p><p>What these two countries share is this: neither of them is anxious. Because they know who they are.</p><p>The EU and India are anxious precisely because their size has created an illusion: &#8220;My market is big enough; my population is large enough &#8212; maybe I can try to be both.&#8221; This illusion is the most dangerous strategic trap, because in AI governance, the most dangerous thing is not lacking capability. It is not knowing who you are.</p><p></p><p><strong>Third Cut: Brzezinski&#8217;s Chessboard</strong></p><p>The reason the EU and India fall into this structural illusion of &#8220;wanting both roles&#8221; is that they have not seen through, or refuse to acknowledge, the true nature of the American-led power chessboard.</p><p>Before discussing who is a player and who is a referee, there is a more fundamental question: how does America actually view its &#8220;allies&#8221;?</p><p>Though somewhat dated, Brzezinski&#8217;s 1997 book <em>The Grand Chessboard</em> remains remarkably relevant today. The book&#8217;s most devastating element is not its geopolitical analysis &#8212; it is its vocabulary. These words aren&#8217;t insults. They are an explanation: alliances can also be hierarchies.</p><p>Brzezinski classified countries within the American system into three tiers: vassals, tributaries, and barbarians.</p><p>Note his choice of taxonomy: he could easily have used &#8220;allies,&#8221; &#8220;partners,&#8221; and &#8220;competitors&#8221; &#8212; the standard diplomatic lexicon. But in his most candid passages, he reached for imperial vocabulary, not diplomatic vocabulary.</p><p>Here is the original text: &#8220;Past empires based their power on a hierarchy of vassals, tributaries, protectorates, and colonies, with those on the outside generally viewed as barbarians. 
To some degree, that anachronistic terminology is not altogether inappropriate for some of the states currently within the American orbit.&#8221;</p><p>A former US National Security Advisor, describing America&#8217;s alliance system in the language of feudal empires. This is not mockery. This is the ultimate honesty.</p><p>He even wrote explicitly that Britain and Japan can no longer be considered &#8220;geostrategic players,&#8221; because their policy space operates within the framework preset by the United States.</p><p>Now let&#8217;s translate the 1997 chessboard to the 2026 AI battlefield:</p><p>In 1997, vassals were defined by security dependence &#8212; you needed America&#8217;s military umbrella, so you were a vassal.</p><p>In 2026, vassals are defined by technology dependence &#8212; you need America&#8217;s LLMs, chips, and cloud infrastructure, so you are a vassal.</p><p>The medium of dependence has shifted from military alliances and deployments to compute, chips, cloud, and model ecosystems. But the power structure is identical.</p><p>Think about NATO. Nominally an alliance. In reality, America plus a group of followers. Does the EU have an independent strategy within NATO? No. But they&#8217;ll tell you &#8220;we are equal partners.&#8221; AI alliances follow the same logic: join an American-led AI alliance, and you&#8217;ve acknowledged the hierarchy. Don&#8217;t join, and you&#8217;re excluded. Neither path leads to becoming a major player.</p><p>In the corporate world, I call this the &#8220;illusion of integration.&#8221; A regional headquarters thinks it&#8217;s running localization strategy, but the real decisions come from the global HQ. You think you&#8217;re a partner. You&#8217;re actually an execution arm. 
The biggest decision a regional HQ gets to make is how to execute HQ&#8217;s decision.</p><p>The EU and India&#8217;s role in AI alliances is identical to that regional HQ.</p><p>Here&#8217;s the irony: the more alliances you join, the less you matter. In any alliance where the US is present, everyone else is a supporting character. You&#8217;re not there because you have unique value. You&#8217;re there because the alliance needs to pad the headcount to look &#8220;multilateral.&#8221; It&#8217;s like a cross-functional project team with twenty names on the roster, but only two or three making actual decisions. Everyone else exists so the meeting minutes can claim &#8220;broad participation.&#8221;</p><p>The EU and India are not content being supporting characters. But therein lies the contradiction: the more diverse the alliances you join or initiate, the less clear your leading role in any of them becomes.</p><p>So are the EU and India permanently condemned to be chess pieces, or forever stuck as &#8220;regional headquarters&#8221;?</p><p>No. They absolutely have a chance to become a true pole of power. But the prerequisite is brutal. They must accept this:</p><p>Rules are not the source of power. At best, they amplify it.</p><p>A regional HQ that wants to overtake global HQ was never going to get there by writing the company&#8217;s compliance manual, or by hosting an annual offsite where all the senior leaders fly in. The only path is to break free from technological dependence on HQ, build an unassailable core product, and physically drag the center of gravity toward yourself.</p><p>In AI, the cost is even higher. The scaling laws of large models and the tens-of-billions-of-dollars compute threshold mean this is an intensely centralized game. 
&#8220;Small and beautiful&#8221; may still have survival space in vertical applications, sovereign contexts, and low-cost deployment &#8212; but at the frontier of general capability, it is not enough to get you a seat at the main table. To sit at that table, you must enter a brutal, cash-burning, no-exit contest.</p><p>Do the EU and India want to be major players? Absolutely. But they are unwilling to pay the price. What they want is not to become real players. What they want is a front-row seat without getting any blood on their clothes.</p><p>Unfortunately, in the arena of great power competition, you cannot win the Hunger Games in a freshly dry-cleaned suit by writing rules. You either fight in the mud on the field, or you sit quietly in the stands and accept being reorganized by fate.</p><p></p><p><strong>Fourth Cut: Who Gets to Be the Referee?</strong></p><p>Since most countries are either unwilling or unable to get into the mud, the question becomes: if player and referee are mutually exclusive, who qualifies to be the referee?</p><p>But before answering that, we need to ask a more fundamental question: do AI superpowers actually need a referee?</p><p>The fact is, the US and China are already talking directly. In May 2024, the two countries held their first intergovernmental AI dialogue in Geneva. In August, US National Security Advisor Sullivan visited Beijing and met with Wang Yi; both sides agreed to continue AI cooperation talks. In November, President Xi and President Biden reached a substantive consensus at the APEC summit: both agreed to maintain human control over the decision to use nuclear weapons.</p><p>Sullivan himself put it bluntly in a January 2026 essay: &#8220;As the world&#8217;s only two AI superpowers, the United States and China need to engage one another directly to address these dangers.&#8221;</p><p>In other words, great powers don&#8217;t necessarily need a middleman. 
Just as two business unit heads in a corporation can pick up the phone and sort things out directly, without HR relaying messages.</p><p>So where does the referee&#8217;s value actually lie? Not in passing messages between superpowers. The referee exists to protect countries that are neither China nor the US &#8212; to give them a framework for not being crushed between the two. Just as HR&#8217;s real purpose is not to relay messages between executives, but to protect the small departments and ordinary employees caught in the crossfire.</p><p>Once we understand who the referee actually serves, the qualifying conditions turn out to be quite demanding. You need good relationships with both the US and China, no significant conflicts or confrontations with either in current international politics, and &#8212; most critically &#8212; you must voluntarily give up the ambition of becoming a global superpower.</p><p>That is the real cost. The prerequisite for being a referee is that you don&#8217;t play.</p><p>So who can be this referee?</p><p>Your first thought might be Singapore. Fair enough &#8212; Singapore has AI Verify, the world&#8217;s first AI governance testing tool, and SEA-LION, a Southeast Asian language model. It has walked the tightrope between the US and China for decades. But Singapore&#8217;s relationship with China is not as smooth as it appears. On the South China Sea, Singapore leans American. Fundamentally, it remains part of the US security architecture. China knows this.</p><p>The candidate that truly fits the criteria is the Middle East. The Middle East can serve as referee precisely because it navigates the US-China dual-track system with ease.</p><p>The Gulf states occupy an extraordinarily unique position. The US provides the security umbrella; military bases are stationed there. 
But simultaneously, Saudi Arabia and the UAE have seen their relationships with China warm rapidly in recent years: Huawei&#8217;s extensive involvement in 5G infrastructure, key nodes along the Belt and Road Initiative. The 2023 Saudi-Iran handshake brokered in Beijing was a landmark moment in Chinese diplomacy.</p><p>The UAE in particular is a master of playing both sides. The Falcon model was built with its own money and its own strategy, beholden to neither side.</p><p>You might ask: if the UAE built Falcon with its own money, and Singapore built SEA-LION, doesn&#8217;t that make them players? Their official line is &#8220;differentiation through Arabic or Southeast Asian language focus.&#8221; But anyone who knows the industry understands this is PR spin. Today&#8217;s top American models (think ChatGPT or Gemini) handle Chinese and Arabic with ease. The so-called &#8220;minority language moat&#8221; crumbles in the face of absolute compute superiority.</p><p>But this actually proves how clear-eyed they are about wanting to be referees.</p><p>In a corporate setting, a support function like IT also builds its own internal tools. That doesn&#8217;t mean it&#8217;s trying to replace the core business units and go to war. The Middle East and Singapore maintain their own open-source models not to compete with the US and China for global market share, but for two reasons: first, to keep sovereign data from leaving their borders; second, to hold a defensive bargaining chip (BATNA: Best Alternative to a Negotiated Agreement) when negotiating technology imports from the US-China giants. A referee needs to know how to kick a ball, just enough to avoid being fooled by the superstars on the pitch. This is defensive infrastructure, not an offensive weapon.</p><p>Because of this clarity, the Middle East has retained an advantage that neither the EU nor India possesses: it does not aspire to be a global superpower.
The Gulf states know their weight class, so they don&#8217;t fall into the &#8220;both roles&#8221; trap. They positioned themselves from day one as connectors, brokers, and hubs. Dubai&#8217;s entire city identity is built on this: &#8220;I don&#8217;t produce. I connect.&#8221;</p><p>In organizational change, when two major business units need to negotiate a restructuring, the ideal project lead is often not an external consultant (too expensive and needs too much context), but someone from a supporting function with no direct stake in either BU &#8212; say, someone from Strategy or Finance. Their authority doesn&#8217;t come from &#8220;I understand the business better than you.&#8221; It comes from &#8220;I have no competitive relationship with you.&#8221; The Middle East&#8217;s role in global AI governance is the same. You don&#8217;t need to be the most technically sophisticated. You need all parties to trust that you won&#8217;t play favorites.</p><p>I saw this &#8220;referee role&#8221; made tangible in Abu Dhabi. There, I got into a WeRide autonomous vehicle (Chinese technology), hailed through Uber (an American platform), driving on the roads of a Gulf state. US and Chinese technology coexisting seamlessly on Middle Eastern soil. No conflict. No picking sides. This is what a referee should look like.</p><p></p><p><strong>Closing: Major Players Always Decide Alone</strong></p><p>Back to that broken chain on the New Delhi stage.</p><p>Sam Altman said afterward: &#8220;I was sort of confused.&#8221; Classic Altman &#8212; smooth, leaving himself an exit, chalking up the awkwardness to &#8220;not understanding the choreography.&#8221;</p><p>Dario Amodei said nothing.</p><p>That is the difference between two kinds of people. One explains why he didn&#8217;t cooperate. The other doesn&#8217;t feel an explanation is necessary.</p><p>Running Super Bowl ads mocking OpenAI.
Telling global political and business leaders at Davos that exporting chips to China is equivalent to selling nuclear weapons to North Korea. Refusing to hold a competitor&#8217;s hand in front of the Indian Prime Minister. Every one of Dario&#8217;s actions points to the same logic: I do not need to participate in your ceremony to prove my standing.</p><p>This is the fundamental difference between a major player and every other role. Major players don&#8217;t join alliances. They don&#8217;t need group photos to confirm their position. They don&#8217;t need to shake everyone&#8217;s hand to earn recognition.</p><p>In the corporate world, when have you ever seen a truly powerful CEO who needs to attend every industry summit and join every alliance organization to maintain influence? Never. Real power is precisely this: you can choose not to show up.</p><p>Someone joked on social media: &#8220;When AGI? The day Dario and Sam hold hands.&#8221;</p><p>In other words: never. Because the relationship between major players was never hand in hand.</p><p><em>In my next article, I will bring the AI Decision Rights discussion from the world stage back inside the enterprise. In your company, who has the authority to decide what AI can and cannot do? The answer to that seemingly technical question is, in fact, an organizational politics question.</em></p><p><em>OD Behind the Curtain</em></p><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.odbehindthecurtain.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.odbehindthecurtain.com/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[Why Is Crypto the First to Embrace AI? 
Because It's Not Their Money]]></title><description><![CDATA[Notes from Consensus Hong Kong 2026, by an Organization Design Expert]]></description><link>https://www.odbehindthecurtain.com/p/why-is-crypto-the-first-to-embrace</link><guid isPermaLink="false">https://www.odbehindthecurtain.com/p/why-is-crypto-the-first-to-embrace</guid><dc:creator><![CDATA[Hector Sun]]></dc:creator><pubDate>Tue, 17 Feb 2026 15:31:28 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!2fZT!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3f079551-174f-44d5-8611-429c4a74a7d7_1024x538.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!2fZT!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3f079551-174f-44d5-8611-429c4a74a7d7_1024x538.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!2fZT!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3f079551-174f-44d5-8611-429c4a74a7d7_1024x538.jpeg 424w, https://substackcdn.com/image/fetch/$s_!2fZT!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3f079551-174f-44d5-8611-429c4a74a7d7_1024x538.jpeg 848w, https://substackcdn.com/image/fetch/$s_!2fZT!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3f079551-174f-44d5-8611-429c4a74a7d7_1024x538.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!2fZT!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3f079551-174f-44d5-8611-429c4a74a7d7_1024x538.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!2fZT!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3f079551-174f-44d5-8611-429c4a74a7d7_1024x538.jpeg" width="1024" height="538" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3f079551-174f-44d5-8611-429c4a74a7d7_1024x538.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:538,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:435919,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.odbehindthecurtain.com/i/188271839?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3f079551-174f-44d5-8611-429c4a74a7d7_1024x538.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!2fZT!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3f079551-174f-44d5-8611-429c4a74a7d7_1024x538.jpeg 424w, https://substackcdn.com/image/fetch/$s_!2fZT!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3f079551-174f-44d5-8611-429c4a74a7d7_1024x538.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!2fZT!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3f079551-174f-44d5-8611-429c4a74a7d7_1024x538.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!2fZT!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3f079551-174f-44d5-8611-429c4a74a7d7_1024x538.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Last week I went to Consensus Hong Kong.</p><p>Not because I have any particular affinity for crypto.
It was because this year&#8217;s agenda carved out a dedicated lane for AI. For someone who focuses on <strong>AI Decision Rights</strong>, this was a signal: the financial industry is welcoming Agentic AI with open arms and almost no guardrails.</p><p>I&#8217;ve spent 17 years in organization design. I&#8217;ve watched every industry react to AI differently. Manufacturing treads carefully. Healthcare walks on eggshells. Even autonomous driving puts a person called a &#8220;Specialist&#8221; in the driver&#8217;s seat to pretend someone is in charge.</p><p>But finance is different. Crypto is different.</p><p>They&#8217;re not testing the waters. They&#8217;re throwing a party.</p><p>This made me ask a question: Why? How can the financial industry skip past the one question every other industry is agonizing over? <strong>Who is accountable when AI makes a decision?</strong></p><p>I spent a few days in Hong Kong. I found the answer.</p><p></p><p><strong>Other People&#8217;s Money</strong></p><p>Let&#8217;s start with a fact that every financial professional knows but nobody says out loud.</p><p>The financial industry runs on other people&#8217;s money. <strong>OPM.</strong></p><p>This isn&#8217;t a throwaway line. It&#8217;s the first key to understanding why finance has the lowest defenses against AI.</p><p>In manufacturing, if AI gets the forecast wrong and you build the wrong factory, that&#8217;s your production line, your inventory, your $300 million. The CEO loses sleep.</p><p>In healthcare, if AI gets the diagnosis wrong, there&#8217;s a living person on the operating table. The cost of error is irreversible.</p><p>But in finance? If AI&#8217;s trading strategy loses money, it loses the client&#8217;s money. The fund manager&#8217;s bonus might shrink, but he won&#8217;t lose his house. Worst case, clients redeem, the fund liquidates. 
He launches a new one under a different name.</p><p><strong>OPM creates a natural accountability buffer.</strong> Between the decision-maker and the consequences sits a layer of someone else&#8217;s assets. This buffer makes finance professionals inherently more risk-tolerant than those in any other industry.</p><p>So when speakers at Consensus painted a future of AI Agents trading autonomously, the crowd cheered louder than at any industry conference I&#8217;ve attended.</p><p>Because the cost of getting it wrong isn&#8217;t theirs.</p><p></p><p><strong>Win and You&#8217;re a Hero</strong></p><p>The second reason is more subtle and more dangerous. I call it <strong>Outcome Bias.</strong></p><p>In autonomous driving, every AI decision must be traceable. One wrong turn is a human life. Process matters as much as outcome.</p><p>Finance is different.</p><p>A terrible strategy can deliver 200% returns if it gets lucky. A perfect risk model can blow up if it hits a black swan.</p><p>In an industry where outcomes are highly random, process accountability is nearly impossible to enforce because no one can distinguish whether a profitable result came from a sound decision or pure luck.</p><p>For AI, this is paradise.</p><p>In other industries, when AI makes a cross-functional decision, the first question is always: if it goes wrong, who&#8217;s accountable? This is what I&#8217;ve written about repeatedly. The <strong>Accountability Vacuum</strong>. When decisions cross departmental lines, accountability evaporates.</p><p>In finance, this question is elegantly sidestepped. As long as the result is profitable, no one asks about accountability allocation.</p><p>A human makes money. That&#8217;s skill.</p><p>AI makes money. That&#8217;s also skill.</p><p>AI loses money? That&#8217;s market volatility.</p><p>See the pattern? The accountability chain is invisible when there are profits and non-existent when there are losses. <strong>This isn&#8217;t an accountability vacuum. 
It&#8217;s an accountability black hole. Not even light escapes.</strong></p><p>So the hottest discussions at Consensus all revolved around &#8220;how AI Agents can trade autonomously.&#8221; No one was discussing: when your Agent and my Agent bet against each other, who is accountable for the losing Agent&#8217;s decisions?</p><p>Because in this industry, that question only gets asked after the crash.</p><p></p><p><strong>Out of Sight, Out of Mind</strong></p><p>The third reason is the simplest and the most overlooked.</p><p>Finance is an industry with <strong>no physical consequences</strong>.</p><p>When autonomous driving fails, there are bodies. When medical AI misdiagnoses, there are patients. When factory AI miscalibrates, there are product recalls.</p><p>When financial AI loses money? A string of numbers gets smaller.</p><p>No explosions. No blood. No visible disaster. Losses are packaged inside balance sheets, candlestick charts, and footnotes in quarterly reports. You can even comfort yourself with the mantra: &#8220;<strong>It&#8217;s not a loss until you sell.</strong>&#8221;</p><p>This <strong>intangibility</strong> dramatically lowers the perceived pain of AI failure. When AI makes a mistake, an autonomous car hitting someone makes the front page. A quant fund blowing up barely makes the financial section.</p><p>And this mirrors crypto&#8217;s defining characteristic over the past decade&#8212;and its biggest pain point: <strong>Invisibility.</strong></p><p>Blockchain is backend technology. The ledger is encrypted. Nodes are distributed. Ordinary people never know how many validations their transaction passed through or how many nodes it traversed.</p><p>This invisibility was once crypto&#8217;s greatest obstacle. 
For a decade, the industry has been migrating its narrative: from &#8220;untraceable&#8221; to &#8220;unconfiscatable,&#8221; to the words flying around Consensus this year, &#8220;verification&#8221; and &#8220;trust.&#8221; Each rebrand is an attempt to become a little more visible.</p><p>AI is the answer crypto has been waiting ten years for.</p><p><strong>Crypto is backend. An invisible ledger. AI is frontend. A visible interaction.</strong></p><p>When speakers at Consensus excitedly showcased projects like <strong>Moltbook</strong>, claiming AI Agents had built communities of millions of &#8220;inhabitants,&#8221; complete with their own languages and religions, the crypto crowd erupted. Not because they understood AI, but because AI finally made their crypto narrative visible.</p><p>Crypto is using AI&#8217;s tangibility to compensate for its own intangibility.</p><p>But as someone who studies decision rights, I want to point out a paradox: the crypto industry tells you not to trust centralized Google because you can&#8217;t see how they handle your data, while asking you to trust a startup whose code you also can&#8217;t see.</p><p><strong>This is essentially replacing one kind of invisible with another kind of invisible. And telling you that their invisible is the safer one.</strong></p><p></p><p><strong>When the Luck Runs Out</strong></p><p>Stack these three reasons together. <strong>Other people&#8217;s money. Outcome bias. Intangibility.</strong></p><p>You get a perfect breeding ground. Finance and crypto became the first fertile soil for Agentic AI not because they&#8217;re the best fit, but because they offer the <strong>least resistance</strong>.</p><ul><li><p>Other people&#8217;s money dulls the decision-maker&#8217;s pain.</p></li><li><p>Outcome bias eliminates process accountability.</p></li><li><p>Intangibility hides the cost of failure.</p></li></ul><p>In this environment, AI can grow unchecked. 
Nobody asks &#8220;who&#8217;s driving.&#8221;</p><p>But luck always runs out.</p><p>The 2008 subprime crisis was, at its core, a group of financial institutions operating under the cover of &#8220;other people&#8217;s money + outcome bias + intangibility,&#8221; stacking leverage to a degree no one could comprehend. When the music stopped, everyone discovered the same thing: no one knew where the risk was, no one knew who was accountable, no one knew how to stop the bleeding.</p><p>Now replace &#8220;financial institutions&#8221; with &#8220;AI Agents&#8221; and &#8220;leverage&#8221; with &#8220;autonomous decision-making authority.&#8221; You get the same story, version 2.0.</p><p>At Consensus Hong Kong, I heard countless beautiful visions of AI Agents transacting with each other. One Agent represents the buyer, another the seller, and crypto is their common language.</p><p>But no one was asking: when two Agents bet against each other and one loses, who turns off the lights?</p><p>This is not a technology problem. It&#8217;s a decision rights problem.</p><ul><li><p>Who has the authority to set an Agent&#8217;s risk boundaries?</p></li><li><p>When an Agent&#8217;s decisions exceed its mandate, who backstops the loss?</p></li><li><p>If your Agent and my Agent execute a trade that turns out to be based on bad data, who pays for that &#8220;error&#8221;?</p></li></ul><p>These questions went unmentioned on the Consensus stage. Because in an industry that gambles with other people&#8217;s money, judges only by results, and can&#8217;t even see its own losses, accountability has never been a priority.</p><p>Until the crash.</p><p></p><p><strong>Closing</strong></p><p>Consensus Hong Kong 2026 showed me an industry desperately searching for a sense of reality, and a technology infiltrating it with zero resistance.</p><p>Crypto needs AI to become visible. AI needs crypto to solve payment and identity. 
This is a marriage born of survival instinct.</p><p>But every marriage without accountability will expose its full fragility at the first crisis.</p><p>Finance can keep celebrating. Agents can keep trading autonomously. Crypto can keep telling its new story.</p><p>But as someone who has watched organizations collapse for 17 years, I only have one question:</p><p><strong>When the music stops, who&#8217;s holding the bag?</strong></p><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.odbehindthecurtain.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.odbehindthecurtain.com/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[The Shortest Distance Between Two Points Is a Straight Line]]></title><description><![CDATA[Singapore, Hong Kong, and the Slow Death of Regional Headquarters]]></description><link>https://www.odbehindthecurtain.com/p/the-shortest-distance-between-two</link><guid isPermaLink="false">https://www.odbehindthecurtain.com/p/the-shortest-distance-between-two</guid><dc:creator><![CDATA[Hector Sun]]></dc:creator><pubDate>Mon, 09 Feb 2026 13:50:20 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!zlLd!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa05c6ef-9c52-4909-a10e-4a3a91a25430_1168x784.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Last month, I was on a business trip in the UAE. Coming out of the Louvre Abu Dhabi, I called an Uber to the mall. The app popped up a new option: &#8220;Autonomous with Specialist Saif.&#8221; This was my first time riding a WeRide autonomous vehicle.</p><p>The door opened. Saif was sitting in the driver&#8217;s seat. 
The car started moving, the steering wheel turning on its own, Saif&#8217;s hands resting neatly on his knees.</p><p>As someone who has spent 17 years in Organization Design (OD), I looked at Saif&#8217;s back and a question immediately popped into my head. What is this person&#8217;s job title? And more importantly, what is his accountability?</p><p>If the car crashes, who is responsible?</p><p>The Car Company would say: &#8220;The system was in autonomous mode. The data proves it.&#8221;</p><p>Saif would say: &#8220;I&#8217;m just a Specialist, not a Driver.&#8221;</p><p>Uber would say: &#8220;Please refer to the user agreement.&#8221;</p><p>A person sitting in the driver&#8217;s seat, looking important, wearing a uniform, but holding zero decision rights.</p><p>At that moment, I realized this isn&#8217;t just an autonomous driving problem. It reminded me of another group of people I know all too well: The Regional VPs sitting in Regional Headquarters.</p><div><hr></div><p><strong>&#8220;Specialist Saif&#8221; and &#8220;Regional VP&#8221;: The Same Organizational Illusion</strong></p><p>Uber calls Saif&#8217;s position &#8220;Vehicle Specialist.&#8221; Not &#8220;Driver&#8221; (that kills the autonomous valuation). Not &#8220;Safety Monitor&#8221; (that admits the system is unreliable). &#8220;Specialist&#8221; is a masterful word: it sounds professional, but commits to nothing. This is the Accountability Vacuum.</p><p>During my trip, watching Singapore and Hong Kong frantically host AI summits and build compute centers, I saw an even bigger Accountability Vacuum.</p><p>As an OD consultant, I have to ask: Do you actually have decision rights? This looks exactly like the corporate Regional HQ. Glamorous offices, impressive titles. Regional VP, Head of APAC. But the real decisions are made at Global HQ. 
AI is simply accelerating the exposure of a harsh truth: Regional HQs have always been expensive band-aids designed to cover up insufficient management reach.</p><div><hr></div><p><strong>The Three &#8220;Reasons to Exist&#8221; (And How AI Breaks Them)</strong></p><p>I&#8217;ve seen too many organizational structures that exist just to fill blanks on an org chart. Regional HQ is the most expensive one. Its existence is typically justified by three reasons. In the age of AI, all three are collapsing.</p><p><strong>Reason 1: The Time Zone Myth</strong></p><p><em>The Old Logic:</em> When European HQ is asleep, the Asia-Pacific market can&#8217;t stop. You need a Singapore office with a VP to make decisions.</p><p><em>The Dirty Secret:</em> These VPs rarely make strategic decisions. Product lines? M&amp;A? Restructuring? Those decisions wait for Global HQ to wake up. The Regional VP just approves the color of a marketing poster.</p><p><em>The AI Disruption:</em> A 24/7 AI decision engine doesn&#8217;t sleep. Issues requiring escalation go directly to Global HQ via Agents, complete with context and scenario analysis. Time zones are no longer a barrier to control.</p><p><strong>Reason 2: The Translation Layer</strong></p><p><em>The Old Logic:</em> &#8220;We need people who understand local markets.&#8221;</p><p><em>The Dirty Secret:</em> What does this layer actually do? It translates Global&#8217;s English directives into &#8220;APAC context&#8221; and packages local feedback into PPTs that Global can understand. It&#8217;s a courier service, not a decision factory.</p><p><em>The AI Disruption:</em> Real-time translation with cultural context embedding already does this better. And unlike a human middle manager, AI doesn&#8217;t inject office politics into the information flow.</p><p><strong>Reason 3: The &#8220;Covering China&#8221; Illusion</strong></p><p><em>The Old Logic:</em> &#8220;China is too complex. 
We need a Regional HQ close by to handle it.&#8221;</p><p><em>The Dirty Secret:</em> Chinese companies never liked being &#8220;covered&#8221; by Regional HQs. One more layer means slower decisions, more misunderstandings, and a Regional VP who doesn&#8217;t understand the market but still needs to sign off.</p><p><em>The AI Disruption:</em> The logic is now Bifurcation.</p><p>Want global influence? Go directly to the source (Europe/US).</p><p>Want to survive? Go deep into the domestic market.</p><p>That middle layer? It&#8217;s an obstacle. One more layer of Regional HQ means slower decisions and more noise.</p><div><hr></div><p><strong>Ballet and Business: The End of the Transfer Station</strong></p><p>This reminds me of an interesting phenomenon in the arts: the flow of ballet and opera talent. China, Japan, and Korea produce world-class artists. But notice where they go:</p><p><strong>Directly to the Source:</strong> Paris Opera Ballet, Royal Ballet and Opera, the Met, Bolshoi. Because that is where the art form is created and defined.</p><p><strong>Stay Local:</strong> Deep in the massive markets of Beijing, Shanghai, or Tokyo.</p><p>They don&#8217;t go to Singapore or Hong Kong to &#8220;polish their credentials.&#8221; The middle step has been bypassed.</p><p>The exact same thing is happening in business. <strong>The shortest distance between two points is a straight line.</strong></p><p>If you are neither the Source (Creation) nor the Market (Consumption), what are you? You might claim to be the &#8220;Standard Setter,&#8221; much like Europe tries to regulate technology it didn&#8217;t build. But let&#8217;s be honest: without the technology, your standards are just paper. Who is going to listen?</p><p>The essence of a Regional HQ is a Transfer Station.</p><p>Talent transits here but doesn&#8217;t root here. 
Decisions &#8220;stop by&#8221; here but aren&#8217;t born here.</p><p>When AI turns &#8220;layovers&#8221; into &#8220;direct flights,&#8221; the value of the transfer station collapses.</p><p>Singapore and Hong Kong desperately emphasize &#8220;East meets West.&#8221; But they need to answer an uncomfortable question: In an AI-enabled organization, do East and West still need to &#8220;meet&#8221; in physical space?</p><div><hr></div><p><strong>From Decision Center to Compliance Node: The Great Downgrade</strong></p><p>Will Regional HQ disappear? No. But it will suffer a dramatic functional downgrade. It will devolve from a <strong>Decision Center</strong> into a <strong>Compliance Node</strong>.</p><p><em>Yesterday&#8217;s Regional VP:</em> Decided Go-to-Market strategy, allocated headcount, approved budgets.</p><p><em>Tomorrow&#8217;s Regional VP:</em> Maintains legal entities, handles tax filings, ensures data localization compliance.</p><p>You are no longer part of the &#8220;brain.&#8221; You are a lymph node in the &#8220;immune system.&#8221; Your job is no longer Strategy. It&#8217;s Compliance. You are no longer a Decision Maker. You are a Risk Mitigator.</p><div><hr></div><p><strong>The Nuclear Question: Without an LLM, Who Are You?</strong></p><p>Now, back to those Hubs frantically hosting AI summits. You can publish white papers, build regulatory sandboxes, and talk about AI ethics. But here is the brutal question: Who is listening?</p><p>Will OpenAI in San Francisco change GPT&#8217;s training data because of a Singaporean regulation?</p><p>Will DeepSeek in Hangzhou adjust its decision logic because of Hong Kong&#8217;s sandbox?</p><p>This is like a country without nuclear weapons trying to write the nuclear non-proliferation treaty.</p><p>And in the world of AI, perception is reality. You can have the best regulatory framework in the world. But if you don&#8217;t have a model that people have heard of, you&#8217;re not at the table. 
You&#8217;re invisible.</p><p>In the age of AI, the power of Accountability and Governance rests in the hands of whoever controls the Large Language Model.</p><p>Who defines how the model thinks? The company that trained it.</p><p>Who defines how the workflow runs? The company that built the Agent.</p><p>If you don&#8217;t have your own Foundation Model, you are just a Power User. Your regulations can only govern the users within your borders. But the models they use are defined, trained, and aligned in someone else&#8217;s data center.</p><p>Those Hubs without LLMs don&#8217;t even have the standing to fill the accountability vacuum. Because you are not making the rules. You are being defined by them.</p><div><hr></div><p><strong>Conclusion</strong></p><p>Next time you get into an autonomous taxi, look closely at the &#8220;Specialist&#8221; sitting in the driver&#8217;s seat. He is still there. He appears to be in control. But his hands are hovering in mid-air.</p><p>Your company&#8217;s Regional HQ is the same:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!zlLd!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa05c6ef-9c52-4909-a10e-4a3a91a25430_1168x784.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!zlLd!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa05c6ef-9c52-4909-a10e-4a3a91a25430_1168x784.jpeg 424w, https://substackcdn.com/image/fetch/$s_!zlLd!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa05c6ef-9c52-4909-a10e-4a3a91a25430_1168x784.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!zlLd!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa05c6ef-9c52-4909-a10e-4a3a91a25430_1168x784.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!zlLd!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa05c6ef-9c52-4909-a10e-4a3a91a25430_1168x784.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!zlLd!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa05c6ef-9c52-4909-a10e-4a3a91a25430_1168x784.jpeg" width="614" height="412.13698630136986" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/aa05c6ef-9c52-4909-a10e-4a3a91a25430_1168x784.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:784,&quot;width&quot;:1168,&quot;resizeWidth&quot;:614,&quot;bytes&quot;:300442,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.odbehindthecurtain.com/i/187391565?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa05c6ef-9c52-4909-a10e-4a3a91a25430_1168x784.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!zlLd!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa05c6ef-9c52-4909-a10e-4a3a91a25430_1168x784.jpeg 424w, 
https://substackcdn.com/image/fetch/$s_!zlLd!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa05c6ef-9c52-4909-a10e-4a3a91a25430_1168x784.jpeg 848w, https://substackcdn.com/image/fetch/$s_!zlLd!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa05c6ef-9c52-4909-a10e-4a3a91a25430_1168x784.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!zlLd!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa05c6ef-9c52-4909-a10e-4a3a91a25430_1168x784.jpeg 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>The lights are on.</p><p>The PowerPoints are polished.</p><p>The coffee is warm.</p><p><strong>But it is already a hollow shell.</strong></p><div><hr></div><p><em>This is the third piece in the AI Decision Rights series.</em></p><p><em>Part 1: <a href="https://www.odbehindthecurtain.com/p/the-ai-mole-a-business-threat-no?lli=1">The AI Mole: A Business Threat Nobody Is Talking About</a></em></p><p><em>Part 2: <a href="https://www.odbehindthecurtain.com/p/nobodys-driving-nobodys-accountable?lli=1">Nobody&#8217;s Driving. Nobody&#8217;s Accountable.</a></em></p><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.odbehindthecurtain.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.odbehindthecurtain.com/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[Nobody’s Driving. 
Nobody’s Accountable.]]></title><description><![CDATA[What an OD person sees from the backseat of a robotaxi]]></description><link>https://www.odbehindthecurtain.com/p/nobodys-driving-nobodys-accountable</link><guid isPermaLink="false">https://www.odbehindthecurtain.com/p/nobodys-driving-nobodys-accountable</guid><dc:creator><![CDATA[Hector Sun]]></dc:creator><pubDate>Sun, 01 Feb 2026 12:31:12 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!u9zu!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb1aa97a-b4cd-4b1b-8cef-1ec14729d6cd_1290x909.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I had just finished visiting the Louvre Abu Dhabi and called an Uber to The Galleria.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!u9zu!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb1aa97a-b4cd-4b1b-8cef-1ec14729d6cd_1290x909.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!u9zu!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb1aa97a-b4cd-4b1b-8cef-1ec14729d6cd_1290x909.png 424w, https://substackcdn.com/image/fetch/$s_!u9zu!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb1aa97a-b4cd-4b1b-8cef-1ec14729d6cd_1290x909.png 848w, https://substackcdn.com/image/fetch/$s_!u9zu!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb1aa97a-b4cd-4b1b-8cef-1ec14729d6cd_1290x909.png 1272w, 
https://substackcdn.com/image/fetch/$s_!u9zu!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb1aa97a-b4cd-4b1b-8cef-1ec14729d6cd_1290x909.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!u9zu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb1aa97a-b4cd-4b1b-8cef-1ec14729d6cd_1290x909.png" width="1290" height="909" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cb1aa97a-b4cd-4b1b-8cef-1ec14729d6cd_1290x909.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:909,&quot;width&quot;:1290,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:620944,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.odbehindthecurtain.com/i/186494175?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb1aa97a-b4cd-4b1b-8cef-1ec14729d6cd_1290x909.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!u9zu!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb1aa97a-b4cd-4b1b-8cef-1ec14729d6cd_1290x909.png 424w, https://substackcdn.com/image/fetch/$s_!u9zu!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb1aa97a-b4cd-4b1b-8cef-1ec14729d6cd_1290x909.png 848w, 
https://substackcdn.com/image/fetch/$s_!u9zu!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb1aa97a-b4cd-4b1b-8cef-1ec14729d6cd_1290x909.png 1272w, https://substackcdn.com/image/fetch/$s_!u9zu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb1aa97a-b4cd-4b1b-8cef-1ec14729d6cd_1290x909.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>The app gave me an option I had never seen before: <strong>Autonomous with Specialist Saif.</strong></p><p>First time riding a 
WeRide. Exciting. But people who do Organization Development for a living have a chronic condition: the moment I booked that ride, my brain started running scenarios. Who is making decisions in this service chain? If something goes wrong, who owns it? How is accountability allocated across the different parties?</p><p>Then came the second surprise.</p><p>I walked up to the car and noticed the &#8220;unlock door&#8221; button on the app was greyed out. I couldn&#8217;t tap it. Before I could figure out why, the door opened from the inside. A man was sitting in the driver&#8217;s seat.</p><p>This was &#8220;Specialist&#8221; Saif.</p><p>The car started moving. The steering wheel turned on its own. Saif&#8217;s hands rested on his knees. He was not driving. But he was sitting in the driver&#8217;s seat.</p><p>As an OD person, my questions had begun.</p><div><hr></div><p><strong>What exactly are you?</strong></p><p>Who is Saif?</p><p>Uber calls this role &#8220;Vehicle Specialist.&#8221;</p><p>Not &#8220;driver.&#8221; Call him a driver and the whole &#8220;autonomous&#8221; narrative collapses.</p><p>Not &#8220;safety monitor.&#8221; That would be admitting the system is not reliable enough.</p><p>Not &#8220;override operator.&#8221; Even worse. That would tell every passenger the car could malfunction at any moment.</p><p>So they called him a &#8220;specialist.&#8221; A word that sounds important but, if you think about it, commits to absolutely nothing.</p><p>Anyone who has worked in OD knows that the name an organization gives a role is never accidental. <strong>Names define authority boundaries. Names define accountability. Names define what a person actually is within the organization.</strong></p><p>&#8220;Vehicle Specialist&#8221; accomplishes one thing with surgical precision: it makes Saif simultaneously present and absent.</p><p>He is in the car, so you feel someone has your back. He is not a driver, so if something goes wrong, he was not driving. 
He is a &#8220;specialist.&#8221; But a specialist in what? With what authority? Under what obligations? Nobody says.</p><p>This might be the finest piece of corporate wordsmithing of 2026.</p><div><hr></div><p><strong>The police car in the traffic jam</strong></p><p>I always stumble into interesting situations when I travel. This trip was no exception.</p><p>We were stuck in traffic. A police car came up from behind, sirens on, trying to push through the queue. One by one, the cars in front of it began pulling aside, as if someone were directing them. But no one was.</p><p>A thought hit me: if Saif were not in this car, would it have moved?</p><p>Think about how a human decides to yield. The siren. The markings on the car. The uniformed officer behind the wheel. The flashing lights in the rearview mirror. The cars around you already pulling over. You synthesize all of this in a second or two, make a judgment call, and move.</p><p>And here is the thing: even with all that information, you cannot be one hundred percent certain it is a real police car. You simply judge the probability to be high enough and act. This is human decision-making under uncertainty.</p><p>Can autonomous vehicles detect police lights? Yes. Waymo says its cars can recognize police uniforms and interpret hand signals. These are technical problems, and technical problems can be solved.</p><p>But twenty cars yielding in unison during a traffic jam? That is not a technical problem. That is social intelligence. Every driver is reading the intentions of the cars around them and coordinating without a single explicit instruction. No traffic signal. No dispatch center. No algorithm directing the choreography.</p><p>Industry experts keep telling us that the pace of AI advancement far exceeds our imagination. Well, this is the perfect test. 
I would genuinely love to see a fully autonomous vehicle navigate a congested road and yield to a police car weaving through from behind.</p><p>Our car did yield, eventually. But here is what I cannot tell you: whether Saif took over the wheel or the system figured it out on its own. I was watching the police car, not Saif&#8217;s hands. From the backseat, there was no way to tell who was in control at that moment.</p><p>And that, if you think about it, is the point. If a passenger sitting right there cannot tell who is making the decisions, what chance does anyone have of assigning accountability after the fact?</p><p>If that delay had compromised a police operation, if someone had died because of it, who would be accountable?</p><div><hr></div><p><strong>Not even a tip</strong></p><p>The ride ended. The rating screen appeared.</p><p>No tipping option.</p><p>In a regular Uber, you can tip the driver. The underlying basis of tipping is recognizing that a human being participated as a meaningful link in the service chain.</p><p>Saif was physically present for the entire ride. He opened my door. The app told me: no tip necessary.</p><p>What does this tell us?</p><p>One reading: Uber does not consider what Saif does a service. He is not a service provider. He is a component of the vehicle, same category as the seatbelt or the airbag. A human being, reclassified as a part.</p><p>A deeper reading: Uber cannot allow you to tip him. The moment you tip, you have confirmed with real money that a human was involved in your ride. If a human was involved, it is not fully autonomous. Removing the tip option was not an oversight. It was a design choice. It does not protect your wallet. It protects the classification of the trip.</p><p>OD people see this immediately. The organization is using the design of its payment system to erase the existence of a role. This follows the same logic as the naming. The name makes him ambiguous. 
The payment makes him invisible.</p><div><hr></div><p><strong>The accountability vacuum</strong></p><p>Now let us connect the dots.</p><p>Suppose that day in traffic, I had been in a fully driverless WeRide. The police car could not get through. The delay cost someone their life.</p><p>Who is accountable?</p><p>I will ask around on your behalf:</p><p><strong>WeRide says:</strong> Our vehicles have been validated in over 30 cities worldwide. The technology meets all standards.</p><p><strong>Uber says:</strong> We are a platform, not a manufacturer. The passenger selected autonomous mode.</p><p><strong>Abu Dhabi&#8217;s Integrated Transport Center says:</strong> We issued the operating permit through proper procedures.</p><p><strong>The insurer says:</strong> Please first determine whether the cause was a technical fault or an external factor.</p><p>Every party has a reasonable answer. None of them are lying. But accountability has been sliced into so many pieces that each slice is thin enough to ignore.</p><p>Accountability did not disappear. It evaporated.</p><p>This is the concept I introduced in my previous article, <em>The AI Mole</em>: the accountability vacuum. When AI gains decision-making autonomy but the organization has not redefined who is accountable for AI&#8217;s decisions, a vacuum forms. And everyone has a valid reason to say &#8220;not me.&#8221;</p><div><hr></div><p><strong>The real problem with 99%</strong></p><p>Here is where most people get the analysis wrong.</p><p>The common assumption is that autonomous driving, and AI more broadly, is a technology problem waiting for a technology solution. Get the accuracy high enough, and the objections will go away. Reach 99%, and you are almost there. Reach 99.99%, and surely no one can complain.</p><p>But that is not how decision makers actually think.</p><p>I have spent years watching executives evaluate process automation, AI-driven workflows, and algorithmic decision-making tools.
What I have observed is this: accuracy is never the real objection. Accountability is.</p><p>Consider a manual process run by a human team. Say it achieves 90% accuracy. Not great, by any measure. But organizations accept it every day. Why? Because the accountability chain is intact. There is someone accountable. That person has a name, a title, a manager, and a disciplinary process. If something goes wrong, you know exactly where to go. You may not be happy with 90%, but you can live with it, because someone owns the outcome.</p><p>Now introduce an AI system that achieves 99% accuracy. On paper, this is a massive improvement. In practice, decision makers hesitate. Not because 99% is not good enough. Because nobody can clearly answer what happens with the remaining 1%. Who owns that 1%? The developer? The vendor? The team that approved the deployment? The person who was supposed to be monitoring the output?</p><p>When the accountability chain is clear, organizations tolerate surprisingly high error rates. When the accountability chain is broken, even spectacular accuracy is not enough.</p><p>This is why decision makers, despite what they say publicly, benchmark AI at 100%. Not because they genuinely believe machines should be perfect. But because only 100% eliminates the need to answer the question &#8220;who is accountable when it fails?&#8221; Anything less than 100% requires an accountability framework that nobody has built.</p><p>And the AI industry knows this. That is why so many vendors sell the dream of 100% accuracy, or carefully avoid mentioning error rates altogether. Because the moment you admit to 1% failure, the next question is &#8220;who pays for that 1%?&#8221; And nobody wants to answer it.</p><p>This is not limited to autonomous driving. It applies to every AI system that makes or influences decisions: automated approvals, AI-driven hiring screens, algorithmic credit scoring, robotic process automation. The pattern is identical. 
The technology works. The accountability does not.</p><p>So what do organizations do when they cannot achieve 100% and cannot define who owns the gap?</p><p>They build a fuse.</p><div><hr></div><p><strong>The accountability fuse</strong></p><p>Now look at Saif again.</p><p>Uber&#8217;s public narrative is that the specialist is a transitional role. Once the technology matures, the specialist goes away. In fact, the fully driverless version is already running on Yas Island. The Saifs of the world are being &#8220;phased out.&#8221;</p><p>I disagree. This role will not disappear. Because it does not exist due to technological immaturity. It exists because technology can never absorb accountability. And the 0.01% that falls outside the algorithm&#8217;s reach (the police car in a traffic jam, a funeral procession, a child darting out between parked cars, a construction worker waving you through a red light) requires human judgment that no accuracy metric can replace.</p><p>More critically, it requires someone who, when things go wrong, can be pointed at and asked &#8220;why didn&#8217;t you take over?&#8221;</p><p>Think carefully about Saif&#8217;s position:</p><p>If the car drives perfectly, he is redundant. His presence proves the technology works, which means his job should be eliminated.</p><p>If the car fails and he intervenes in time, he saves the day. But he also proves the technology was not ready. This directly contradicts his employer&#8217;s entire valuation story.</p><p>If the car fails and he does not intervene in time, he absorbs all liability. He becomes the next Rafaela Vasquez. In 2018, an Uber autonomous test vehicle struck and killed a pedestrian in Arizona. Uber was cleared of all criminal liability. The safety operator sitting in the driver&#8217;s seat was charged with negligent homicide. Scholars call this the &#8220;moral crumple zone.&#8221; When the system fails, the technology walks away intact. 
The human stays behind and takes the fall.</p><p>Three paths. None of them lead anywhere good.</p><p>After years of doing OD, I have seen this pattern in traditional organizations more times than I can count. I have a name for it: the <strong>accountability fuse.</strong></p><p>A fuse exists to blow. It is engineered into the circuit specifically to absorb the overload and protect the expensive system behind it. When it blows, you do not repair it. You throw it away and put in a new one.</p><p>When strategy succeeds, credit flows upward. When strategy fails, the fuse blows. Middle managers are the classic accountability fuse. Saif is the autonomous driving version.</p><p>The difference: traditional accountability fuses at least get a title and a year-end bonus.</p><p>Saif does not even get a tip.</p><div><hr></div><p><strong>This is not a technology problem</strong></p><p>I am not here to judge whether autonomous driving is good or bad. Honestly, the ride was smooth. Smoother than some of the ride-hailing services I have taken in Shanghai.</p><p>What I care about is something else.</p><p>When an organization decides to hand decision-making authority to AI but refuses to clearly state who has final authority at the critical moment, it does not leave a blank. It creates a role to fill that blank.</p><p>It gives the role an ambiguous title. It uses the payment system to erase the role&#8217;s existence. It uses the phrase &#8220;transitional period&#8221; to imply that all of this is temporary. It writes &#8220;Vehicle Specialist&#8221; in every press release but never defines what authority or obligation that entails.</p><p>This is not a failure to think things through. This is having thought things through all too well.</p><p>This is organizational design.</p><div><hr></div><p><strong>Back to the beginning</strong></p><p>That day, Saif opened the door from the inside. I never had to press &#8220;unlock.&#8221; But Yas Island has already gone fully driverless. 
Next time, there will be no Saif. Just the button.</p><p>You press it. It greys out.</p><p>That greyed-out button did not just lock you in. It locked accountability out.</p><div><hr></div><p><em>This is the second piece in the AI Decision Rights series. First piece: </em></p><p><a href="https://www.odbehindthecurtain.com/p/the-ai-mole-a-business-threat-no">The AI Mole: A Business Threat No One Is Talking About</a></p><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.odbehindthecurtain.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.odbehindthecurtain.com/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[The AI Mole: A Business Threat No One Is Talking About]]></title><description><![CDATA[What if the threat isn't a hacker breaking in, but the software you just bought?]]></description><link>https://www.odbehindthecurtain.com/p/the-ai-mole-a-business-threat-no</link><guid isPermaLink="false">https://www.odbehindthecurtain.com/p/the-ai-mole-a-business-threat-no</guid><dc:creator><![CDATA[Hector Sun]]></dc:creator><pubDate>Fri, 30 Jan 2026 19:54:32 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!TU_I!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2373a197-14da-4c59-9379-0a8f6f864048_1182x925.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Cybersecurity experts worry about hackers hijacking your AI systems. They deploy firewalls, anomaly detection, zero-trust architecture.</p><p>They are defending against burglars who break down the door.</p><p>But what if the threat is a guest you invited in yourself?</p><p></p><p><strong>The Wind Power Nightmare</strong></p><p>Imagine this: A wind turbine manufacturer whose clients are global energy giants. 
Your AI forecasting system tells you that offshore wind tender volume in a certain region will grow 50% next year. You expand capacity, build a new factory, sign long-term supplier contracts, lock in blade and gearbox production.</p><p>Six months later, the tender is delayed.</p><p>The government says: Grid absorption capacity is insufficient.</p><p>The industry says: Policy cycle fluctuation.</p><p>Analysts say: Overheated expectations.</p><p>All reasonable explanations.</p><p>Meanwhile, your competitor wins a major contract in a neighboring country&#8212;the same market you deprioritized to go all-in on this one. Their capacity is just right. Your new factory sits idle.</p><p>Coincidence?</p><p></p><p><strong>Three Questions You Can&#8217;t Answer</strong></p><p>You might say: This is impossible. We have a technical team. They test and audit the system before it goes live. They would find any problems.</p><p>I am not a technical person. I cannot tell you whether this kind of AI mole can be detected during system configuration. But I want to ask you three questions:</p><p>First, can your technical team guarantee 100% detection? Not 99%. One hundred percent.</p><p>Second, what if they planted more than one?</p><p>Third, even if you found it, can you prove it was malicious, and not just an ordinary bug or bias?</p><p></p><p><strong>The Perfect Camouflage</strong></p><p>This is the problem.</p><p>Traditional corporate espionage: when you catch someone, you catch them. Witnesses. Evidence. Clear-cut.</p><p>AI is different. An AI&#8217;s &#8220;error&#8221; and &#8220;malice&#8221; look exactly the same. Demand forecast overestimated by 50%. Is that model inaccuracy, or sabotage? You can never know for certain.</p><p>And AI agents need to communicate externally by design: checking electricity prices, pulling weather data, interfacing with grid dispatch systems, and retrieving policy updates. All legitimate traffic. It does not need to build a secret backdoor.
The front door is already open.</p><p>This reminds me of a word: <strong>Agent</strong>.</p><p>We call these AI systems &#8220;agents&#8221;.</p><p><strong>Literally.</strong></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!TU_I!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2373a197-14da-4c59-9379-0a8f6f864048_1182x925.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!TU_I!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2373a197-14da-4c59-9379-0a8f6f864048_1182x925.png 424w, https://substackcdn.com/image/fetch/$s_!TU_I!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2373a197-14da-4c59-9379-0a8f6f864048_1182x925.png 848w, https://substackcdn.com/image/fetch/$s_!TU_I!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2373a197-14da-4c59-9379-0a8f6f864048_1182x925.png 1272w, https://substackcdn.com/image/fetch/$s_!TU_I!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2373a197-14da-4c59-9379-0a8f6f864048_1182x925.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!TU_I!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2373a197-14da-4c59-9379-0a8f6f864048_1182x925.png" width="1182" height="925" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2373a197-14da-4c59-9379-0a8f6f864048_1182x925.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:925,&quot;width&quot;:1182,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:144102,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.odbehindthecurtain.com/i/186345668?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2373a197-14da-4c59-9379-0a8f6f864048_1182x925.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!TU_I!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2373a197-14da-4c59-9379-0a8f6f864048_1182x925.png 424w, https://substackcdn.com/image/fetch/$s_!TU_I!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2373a197-14da-4c59-9379-0a8f6f864048_1182x925.png 848w, https://substackcdn.com/image/fetch/$s_!TU_I!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2373a197-14da-4c59-9379-0a8f6f864048_1182x925.png 1272w, https://substackcdn.com/image/fetch/$s_!TU_I!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2373a197-14da-4c59-9379-0a8f6f864048_1182x925.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><p><strong>Isolation Is Not an Option</strong></p><p>You might think: Then I will have AI handle only internal processes with no external communication. Problem solved.</p><p>The question is: How much of your total workflow can that cover?</p><p>Wind power forecasting requires policy data, electricity prices, grid planning, competitor intelligence, raw material prices, and shipping cycles. Which of these does not require external data? An AI that communicates with nothing outside can do very little.</p><p>And you bought AI precisely so it could do more.</p><p></p><p><strong>The Accountability Vacuum</strong></p><p>The most critical question: When things go wrong, you do not know who to blame.</p><p>The vendor says: Model delivered to spec. You signed off on acceptance.</p><p>The technical team says: System running normally. 
No errors.</p><p>The business team says: We made decisions based on AI recommendations.</p><p>This is what I called the <strong>accountability vacuum</strong> in my previous article. When AI gains autonomy, accountability evaporates. And when accountability evaporates, attacks have the perfect hiding place.</p><p></p><p><strong>The New Frontier of Due Diligence</strong></p><p>I am not creating panic. I am pointing out a blind spot.</p><p>Cybersecurity people defend against intrusion. But an AI mole is not an intrusion. It comes through standard procurement. It has a contract, an SLA, a customer success manager who checks in regularly. It triggers no alarms, because it is not doing anything &#8220;wrong.&#8221; It is just doing things &#8220;not well enough.&#8221;</p><p>And &#8220;not well enough&#8221; is so common in business that no one suspects it is an attack.</p><p>I do not have a solution. This article is not selling a security product.</p><p>I just want you to ask one more question the next time you procure an AI solution:</p><p><strong>Who are this vendor&#8217;s investors?</strong></p><p>This is not paranoia. This is the new frontier of due diligence.</p><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.odbehindthecurtain.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.odbehindthecurtain.com/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item></channel></rss>