Nobody's Driving. Nobody's Accountable. And Now We Have Proof.
On the evening of March 31, 2026, roughly one hundred Baidu Apollo Go robotaxis were immobilized across Wuhan after what police preliminarily described as a system malfunction. On elevated highways, bridges, and arterial roads. Not pulled over. Stopped in the travel lane. According to passenger accounts reported by multiple Chinese media outlets, some passengers were trapped inside for up to two hours. The in-car speaker repeatedly broadcast a single instruction: “Vehicle has a problem. Do not open the door.” Passengers reported that the SOS button produced no response and that calls through the vehicle’s system auto-disconnected. Customer service, when passengers finally reached it on their personal phones, told them to “wait patiently.”
The vehicles did not stop because of an accident; police preliminarily attributed the stoppage to a system failure. But stopping in live traffic lanes caused accidents. Other drivers, not expecting stationary vehicles in the middle of a highway, collided with them, producing secondary crashes and severe congestion.
On April 1, Baidu’s customer service told multiple media outlets that all robotaxi operations across Wuhan had been suspended, with no timeline for resumption. The timing was not an April Fools’ joke. The passengers stranded on elevated highways the night before can confirm.
I read this news and felt something I don’t often feel: vindication.
Two months ago, I wrote about this exact scenario
In February, I published an article about my experience riding a WeRide autonomous vehicle in Abu Dhabi through Uber. That article introduced several concepts I developed from what I observed in the back seat.
The first was the “accountability fuse.” The car had a human in the driver’s seat. Uber called him a “Vehicle Specialist,” not a driver. He wasn’t driving. But he was there. I argued that his real function was not technical backup but accountability absorption. If something went wrong, he was the person who could be pointed at and asked: “Why didn’t you intervene?” Like a fuse in an electrical circuit, he existed to burn out and protect the system behind him.
The second concept was the “accountability vacuum.” When AI holds decision-making authority but no one in the organization has been clearly designated as accountable for the AI’s decisions, accountability doesn’t become unclear. It evaporates. Every party in the chain has a reasonable explanation for why it’s not their fault.
That article ended with an image. In Abu Dhabi, the Uber app showed an “unlock door” button that was greyed out. The human specialist opened the door from inside. I wrote: “Next time, there won’t be a Saif. Only that button. You press it, and it turns grey. That greyed-out button doesn’t just lock you inside the car. It locks accountability outside.”
Wuhan proved this wasn’t a theoretical exercise. The button didn’t just turn grey. It stopped working entirely.
With a human driver, accountability is clear
In Abu Dhabi, with Saif sitting in the driver’s seat, the accountability for my safety as a passenger was unambiguous. It sat with Saif, and through Saif, with the company that employed him. How Saif and his employer divided that accountability between themselves was their business, not mine. As a passenger, I didn’t need to think about my own safety. That was someone else’s job.
This is no different from a traditional taxi. You get in, you say where you’re going, you arrive. If something goes wrong during the ride, the accountability falls on the driver and the company behind the driver. The passenger is never asked to make decisions about their own safety during the trip. That’s the deal. That’s what you’re paying for.
The key point is not just who is accountable. It is that the accountability is exercised in real time. The driver is there throughout the journey, continuously making judgments, continuously ready to act. If the car breaks down on a highway, the driver calls for help. The driver decides whether it’s safe to exit. The driver opens the door. The passenger doesn’t have to make any of these decisions. Accountability isn’t a document filed somewhere. It is a living function, performed by a human, every second of the trip.
Without a human driver, that accountability cannot be replicated
Remove the human from the vehicle, and this real-time accountability chain breaks the moment the system fails. This is not a criticism of any particular company’s execution. It is a structural observation. A machine can be shut down, recalled, or updated. It cannot be held accountable. Accountability requires a human who can answer the question: why did you let this happen?
This structural gap cannot be closed by remote support, by corporate promises, or by better technology. Here is why.
Remote support is not a human in the vehicle. Every major robotaxi operator maintains remote human support. Waymo has its Fleet Response team. Baidu has remote operators who can take direct control of vehicles. But remote support is conditional. It depends on the network being up, the system being online, enough staff being available, the connection being fast enough, and the operator being able to accurately assess a situation through a camera feed. In Wuhan, when the system failed, passenger accounts indicate that every one of these support channels failed with it. Saif’s presence in the driver’s seat is unconditional. He is there regardless of network status, server capacity, or staffing levels. You can add redundancy to remote support, add backup channels, add more staff. But you can never turn conditional presence into unconditional presence.
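The distinction between conditional and unconditional presence can be put in almost mechanical terms. A minimal sketch, with a made-up list of dependencies standing in for whatever a real operator's architecture actually requires: remote presence is the logical AND of every link in a chain, and any single broken link removes it entirely.

```python
# Illustrative sketch, not any operator's real architecture: remote
# support is "conditional presence," available only when every link in
# a dependency chain holds, while an in-vehicle human is a constant.

def remote_support_available(network_up: bool,
                             backend_online: bool,
                             staff_free: bool,
                             link_fast_enough: bool,
                             feed_readable: bool) -> bool:
    """Remote presence is the AND of every dependency in the chain."""
    return all([network_up, backend_online, staff_free,
                link_fast_enough, feed_readable])

def in_vehicle_human_available() -> bool:
    """Unconditional presence: there is no dependency chain to fail."""
    return True

# A single failed link removes remote presence entirely.
print(remote_support_available(True, False, True, True, True))  # False
print(in_vehicle_human_available())                             # True
```

Adding redundancy lengthens the argument list; it never changes the shape of the function. That is the structural sense in which conditional presence cannot become unconditional.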
A corporate promise of “full accountability” is not real-time protection. A company can declare that it accepts full accountability for anything that goes wrong. But the purpose of clear accountability is not to determine who pays compensation after someone is hurt. The purpose is to ensure that someone is actively protecting the passenger throughout the journey. “We accept full accountability” is a promise that can only be redeemed after something has already gone wrong. During the two hours a passenger is trapped on an elevated highway, that promise cannot open the door, cannot assess whether exiting is safe, cannot call for help through a functioning channel. In any context involving human life, accountability that cannot be exercised in real time is not accountability. It is a liability settlement waiting to happen.
Even foreseeable risks were not addressed. What makes the Wuhan incident particularly hard to defend is that none of it was unforeseeable. System failures happen. Fleet-wide failures are possible when vehicles share a single network architecture. Passengers in a stopped vehicle on an elevated highway will need to communicate with someone. Emergency communication should function independently of the system it backstops. Every one of these risks could have been written on a whiteboard. And for every one of them, the question is the same: who is accountable for the passenger’s safety in this scenario, and how will that accountability be exercised in real time? Wuhan showed these questions had not been answered. If the foreseeable scenarios don’t have clear, executable accountability solutions, how can anyone be confident that the unforeseeable ones will?
Developing truly autonomous ride-hailing means accepting this accountability vacuum as a structural cost, not a transitional inconvenience. The SOS button failing along with the driving system is a design flaw, and it’s fixable. The lack of a physical emergency stop button is an engineering gap that Zoox has already solved voluntarily. The customer service collapse is an operational failure. All fixable. But the absence of a human in the vehicle is not a flaw. It is the product. And with it comes an accountability gap that no amount of engineering can close.
Private vehicles don’t have this problem. Robotaxis do. Here’s why.
If you own an L4 or L5 private vehicle, the accountability structure is comparatively straightforward. You purchased the car. You chose to activate the autonomous driving function. If something goes wrong, the question is between you and the manufacturer, a product liability framework with established legal precedent. You accepted the technology’s risk when you bought the vehicle.
There is also no structural dependency on remote support. The SAE J3016 standard, which defines the L0 through L5 levels of driving automation adopted worldwide, does not include remote support as a component of any level. The standard defines who performs the driving task, who monitors the environment, and who serves as the fallback. It is entirely about the relationship between the human in the vehicle and the automated system. In fact, L4 and L5 vehicles are required by the standard to be capable of reaching a minimal risk condition, a safe stop, independently, without any remote assistance. A private autonomous vehicle that loses network connectivity must still be able to protect its occupants on its own. The accountability framework is self-contained: you, the car, and the manufacturer.
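The division of roles in J3016 is compact enough to encode directly. The sketch below is a condensed paraphrase of the standard's level definitions, not its normative text; the point it makes is the one above, that no level assigns any role to a remote operator, and that at L4 and L5 the fallback is the system itself reaching a minimal risk condition.

```python
# Condensed paraphrase of SAE J3016's division of roles by automation
# level. Field values are informal summaries, not the standard's text.
J3016 = {
    0: {"driving": "human", "fallback": "human"},
    1: {"driving": "human, with system assistance", "fallback": "human"},
    2: {"driving": "system, human supervises", "fallback": "human"},
    3: {"driving": "system", "fallback": "fallback-ready human user"},
    4: {"driving": "system", "fallback": "system (minimal risk condition)"},
    5: {"driving": "system", "fallback": "system (minimal risk condition)"},
}

# Note what is absent at every level: a remote-operator role.
for level, roles in J3016.items():
    assert "remote" not in roles["fallback"]

# At L4 and L5, the vehicle itself must reach a safe stop unaided.
print(J3016[4]["fallback"])  # system (minimal risk condition)
```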
Robotaxis cannot operate within this clean framework. There is no human driver in the vehicle, so when the system fails, someone must be reachable from outside the vehicle to take accountability for the passenger’s safety. Remote support is not optional for robotaxis. It is the only human link in the chain.
But here is the structural problem: the moment remote support becomes essential, it introduces an accountability layer that the SAE standard never defined and that no regulatory framework has fully addressed. Who operates the remote support? If the remote operator makes a wrong judgment call, who is accountable? If network failure makes the remote operator unreachable, who is accountable then? If remote support is overwhelmed by simultaneous incidents across a fleet, who is accountable for the passengers left waiting?
Private vehicles live inside a clean, self-contained accountability framework. Robotaxis are forced to depend on a layer that sits outside any standardized framework, that is conditional on infrastructure beyond the vehicle’s control, and that has already been shown to fail at scale.
In a robotaxi, the passenger owns nothing, chose nothing about the underlying system, cannot take over driving, and has no direct relationship with the technology provider. They hailed a ride. Their reasonable expectation is identical to hailing any other taxi: I get in, I arrive safely, the driving is someone else’s job. The accountability vacuum lives precisely in the use case that is being commercialized most aggressively.
The stakes are uniquely high
Because autonomous ride-hailing directly involves human lives, there is no such thing as a minor incident. Cruise had one pedestrian incident in San Francisco in October 2023. Its California permit was revoked. Operations were suspended nationwide. The CEO resigned. GM ultimately shut down the entire robotaxi business after investing over $10 billion. That incident triggered a regulatory, governance, and credibility crisis from which Cruise never recovered.
The risk profile is also categorically different from individual vehicles. When a private car breaks down, one car stops. When a robotaxi fleet suffers a system failure, roughly a hundred vehicles can stop simultaneously across an entire city. The economic cost radiates outward from every stopped vehicle: delayed freight, missed flights, diverted ambulances, gridlocked commerce. And if a system failure can happen accidentally, it can happen deliberately. A single network intrusion, timed to rush hour, could reproduce the same scenario across any city where a robotaxi fleet operates.
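Rough arithmetic makes the difference between independent breakdowns and a shared-architecture failure vivid. The failure rates below are made-up numbers chosen only for illustration, not estimates of any real fleet.

```python
import math

# Illustrative arithmetic with made-up failure rates: independent
# per-vehicle breakdowns vs. a correlated failure of shared architecture.
p_vehicle = 1e-4   # hypothetical chance one car fails in a given hour
p_shared = 1e-5    # hypothetical chance the shared system fails that hour
fleet = 100

# Independent failures: the probability that all 100 cars stop in the
# same hour is p_vehicle ** fleet -- so small it underflows a float, so
# work in log10 space instead.
log10_all_independent = fleet * math.log10(p_vehicle)   # about -400

# Correlated failure: one shared-system event stops the entire fleet.
log10_all_correlated = math.log10(p_shared)             # -5

print(round(log10_all_independent))   # -400
print(round(log10_all_correlated))    # -5
```

With independent vehicles, a citywide simultaneous stoppage is effectively impossible. With a shared dependency, its probability is simply the probability of the shared system failing, which is exactly what Wuhan demonstrated.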
The benefits of autonomous driving in military and emergency applications are clear, and the accountability challenges are minimal. Military vehicles carry trained personnel who accepted operational risk. Emergency and medical transport often carries no passengers at all. Civilian ride-hailing is the application that places untrained, uninformed, ordinary civilians into an accountability vacuum they never agreed to enter.
A valuable lesson, at the right time
In fairness, this incident may ultimately prove valuable for the entire industry. One hundred passenger vehicles stopping is better than one hundred vehicles losing control. Small robotaxis stopping is better than a fleet of autonomous freight trucks stopping on a highway. And a system-wide failure occurring now, during the early stages of deployment with no fatalities, is far better than the same failure occurring after thousands of vehicles are operating across dozens of cities.
The Wuhan shutdown is an opportunity to confront the question that matters most. Not just the technical questions about network redundancy and emergency system design, which are important but solvable. The deeper question: how should accountability be structured between the service provider and the passenger when there is no human driver in the vehicle? This is a question that engineering cannot answer. It requires a deliberate decision by regulators, operators, and society.
So what does society gain?
After everything discussed above, it is fair to ask: what does society gain from autonomous ride-hailing that justifies these costs?
The safety potential is real. Waymo’s published data, collected in real-world mixed traffic, shows about a 92% reduction in serious-injury crashes across over 170 million driverless miles. But the industry’s larger safety promise, virtually eliminating the 1.19 million annual global road deaths (WHO), rests on a condition that does not currently exist: a road environment with high autonomous vehicle penetration. Some simulation-based research suggests that optimal safety may not fully materialize until penetration rates are much higher than today, with one study finding an optimum around 70% in its modeled environment. In the current reality, autonomous vehicles and human drivers share the same road operating on fundamentally different logics, and many collisions that still involve robotaxis are initiated by human drivers, especially in rear-end scenarios. The full safety benefit is real but conditional, and the conditions may take decades to arrive.
Robotaxi fleets also generate real-world driving data that feeds back into algorithm improvement and adjacent applications.
But the most immediate, tangible economic benefit of removing the human driver is straightforward: it eliminates labor costs. Driver compensation accounts for about 70% of traditional ride-hailing operating costs. This is the primary commercial incentive driving the industry forward. A secondary argument is that autonomous ride-hailing releases drivers into other industries where labor is more urgently needed.
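The arithmetic behind that incentive is blunt. Using the article's roughly 70% labor-share figure and a made-up $20 per-ride cost baseline (the dollar amount is purely illustrative):

```python
# Illustrative cost arithmetic using the article's ~70% labor-share
# figure. The $20.00 per-ride operating cost is a made-up example,
# not industry data. Amounts are in cents to keep the math exact.
operating_cost_cents = 2000   # hypothetical $20.00 per-ride cost
labor_share_pct = 70          # driver compensation share (from the text)

labor_cents = operating_cost_cents * labor_share_pct // 100
remaining_cents = operating_cost_cents - labor_cents

print(labor_cents)      # 1400 -> the cost removed with the driver
print(remaining_cents)  # 600  -> roughly 30% of the original cost base
```

Removing the driver, on these assumptions, cuts the per-ride cost base to under a third of what it was. That is the commercial gravity pulling the industry toward the accountability vacuum.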
The accountability vacuum is the cost. It is borne by the passenger. The labor savings are the benefit. They are captured by the company. The question is whether the social justification for this transfer holds up.
The world currently has 186 million unemployed people, and the global “jobs gap,” people who want paid work but cannot access it, stands at 408 million (ILO, World Employment and Social Outlook: Trends 2026).
Are other industries short of workers?
Do we lack drivers?


