Why Is Crypto the First to Embrace AI? Because It's Not Their Money
Notes from Consensus Hong Kong 2026, by an Organization Design Expert
Last week I went to Consensus Hong Kong.
Not because I have any particular affinity for crypto, but because this year’s agenda carved out a dedicated lane for AI. For someone who focuses on AI Decision Rights, this was a signal: the financial industry is welcoming Agentic AI with open arms and almost no guardrails.
I’ve spent 17 years in organization design. I’ve watched every industry react to AI differently. Manufacturing treads carefully. Healthcare walks on eggshells. Even autonomous driving puts a person called a “Specialist” in the driver’s seat to pretend someone is in charge.
But finance is different. Crypto is different.
They’re not testing the waters. They’re throwing a party.
This made me ask a question: Why? How can the financial industry skip past the one question every other industry is agonizing over? Who is accountable when AI makes a decision?
I spent a few days in Hong Kong. I found the answer.
Other People’s Money
Let’s start with a fact that every financial professional knows but nobody says out loud.
The financial industry runs on other people’s money. OPM.
This isn’t a throwaway line. It’s the first key to understanding why finance has the lowest defenses against AI.
In manufacturing, if AI gets the forecast wrong and you build the wrong factory, that’s your production line, your inventory, your $300 million. The CEO loses sleep.
In healthcare, if AI gets the diagnosis wrong, there’s a living person on the operating table. The cost of error is irreversible.
But in finance? If AI’s trading strategy loses money, it loses the client’s money. The fund manager’s bonus might shrink, but he won’t lose his house. Worst case, clients redeem and the fund liquidates; he launches a new one under a different name.
OPM creates a natural accountability buffer. Between the decision-maker and the consequences sits a layer of someone else’s assets. This buffer makes finance professionals inherently more risk-tolerant than those in any other industry.
So when speakers at Consensus painted a future of AI Agents trading autonomously, the crowd cheered louder than at any industry conference I’ve attended.
Because the cost of getting it wrong isn’t theirs.
Win and You’re a Hero
The second reason is more subtle and more dangerous. I call it Outcome Bias.
In autonomous driving, every AI decision must be traceable. One wrong turn is a human life. Process matters as much as outcome.
Finance is different.
A terrible strategy can deliver 200% returns if it gets lucky. A perfect risk model can blow up if it hits a black swan.
In an industry where outcomes are highly random, process accountability is nearly impossible to enforce because no one can distinguish whether a profitable result came from a sound decision or pure luck.
For AI, this is paradise.
In other industries, when AI makes a cross-functional decision, the first question is always: if it goes wrong, who’s accountable? This is what I’ve written about repeatedly. The Accountability Vacuum. When decisions cross departmental lines, accountability evaporates.
In finance, this question is elegantly sidestepped. As long as the result is profitable, no one asks about accountability allocation.
A human makes money. That’s skill.
AI makes money. That’s also skill.
AI loses money? That’s market volatility.
See the pattern? The accountability chain is invisible when there are profits and non-existent when there are losses. This isn’t an accountability vacuum. It’s an accountability black hole. Not even light escapes.
So the hottest discussions at Consensus all revolved around “how AI Agents can trade autonomously.” No one was discussing: when your Agent and my Agent bet against each other, who is accountable for the losing Agent’s decisions?
Because in this industry, that question only gets asked after the crash.
Out of Sight, Out of Mind
The third reason is the simplest and the most overlooked.
Finance is an industry with no physical consequences.
When autonomous driving fails, there are bodies. When medical AI misdiagnoses, there are patients. When factory AI miscalibrates, there are product recalls.
When financial AI loses money? A string of numbers gets smaller.
No explosions. No blood. No visible disaster. Losses are packaged inside balance sheets, candlestick charts, and footnotes in quarterly reports. You can even comfort yourself with the mantra: “It’s not a loss until you sell.”
This intangibility dramatically lowers the perceived pain of AI failure. An autonomous car hitting someone makes the front page. A quant fund blowing up barely makes the financial section.
And this mirrors crypto’s defining characteristic of the past decade, which is also its biggest pain point: Invisibility.
Blockchain is backend technology. The ledger is encrypted. Nodes are distributed. Ordinary people never know how many validations their transaction passed through or how many nodes it traversed.
This invisibility was once crypto’s greatest obstacle. For a decade, the industry has been shifting its narrative: from “untraceable” to “unconfiscatable,” to the words flying around Consensus this year, “verification” and “trust.” Each rebrand is an attempt to become a little more visible.
AI is the answer crypto has been waiting ten years for.
Crypto is backend. An invisible ledger. AI is frontend. A visible interaction.
When speakers at Consensus excitedly showcased projects like Moltbook, claiming AI Agents had built communities of millions of “inhabitants,” complete with their own languages and religions, the crypto crowd erupted. Not because they understood AI, but because AI finally made their crypto narrative visible.
Crypto is using AI’s tangibility to compensate for its own intangibility.
But as someone who studies decision rights, I want to point out a paradox: the crypto industry tells you not to trust centralized Google because you can’t see how they handle your data, while asking you to trust a startup whose code you also can’t see.
This is essentially replacing one kind of invisible with another kind of invisible. And telling you that their invisible is the safer one.
When the Luck Runs Out
Stack these three reasons together. Other people’s money. Outcome bias. Intangibility.
You get a perfect breeding ground. Finance and crypto became the first fertile soil for Agentic AI not because they’re the best fit, but because they offer the least resistance.
Other people’s money dulls the decision-maker’s pain.
Outcome bias eliminates process accountability.
Intangibility hides the cost of failure.
In this environment, AI can grow unchecked. Nobody asks “who’s driving.”
But luck always runs out.
The 2008 subprime crisis was, at its core, a group of financial institutions operating under the cover of “other people’s money + outcome bias + intangibility,” stacking leverage to a degree no one could comprehend. When the music stopped, everyone discovered the same thing: no one knew where the risk was, no one knew who was accountable, no one knew how to stop the bleeding.
Now replace “financial institutions” with “AI Agents” and “leverage” with “autonomous decision-making authority.” You get the same story, version 2.0.
At Consensus Hong Kong, I heard countless beautiful visions of AI Agents transacting with each other. One Agent represents the buyer, another the seller, and crypto is their common language.
But no one was asking: when two Agents bet against each other and lose, who turns off the lights?
This is not a technology problem. It’s a decision rights problem.
Who has the authority to set an Agent’s risk boundaries?
When an Agent’s decisions exceed its mandate, who backstops the loss?
If your Agent and my Agent execute a trade that turns out to be based on bad data, who pays for that “error”?
These questions went unmentioned on the Consensus stage. Because in an industry that gambles with other people’s money, judges only by results, and can’t even see its own losses, accountability has never been a priority.
Until the crash.
Closing
Consensus Hong Kong 2026 showed me an industry desperately searching for a sense of reality, and a technology infiltrating it with zero resistance.
Crypto needs AI to become visible. AI needs crypto to solve payment and identity. This is a marriage born of survival instinct.
But every marriage without accountability will expose its full fragility at the first crisis.
Finance can keep celebrating. Agents can keep trading autonomously. Crypto can keep telling its new story.
But as someone who has watched organizations collapse for 17 years, I only have one question:
When the music stops, who’s holding the bag?


