3 Legal Fault Lines Threaten Autonomous Driving’s Global Rollout

As autonomous vehicles accelerate from test tracks to public roads, a quiet but profound legal crisis is unfolding beneath the hood. Regulators, automakers, and insurers once assumed that liability would simply follow the driver—or the manufacturer—under existing frameworks. But with artificial intelligence (AI) now making split-second decisions in life-or-death scenarios, the foundational assumptions of criminal law are buckling under unprecedented strain.

In China, where the government has aggressively backed smart mobility as a pillar of its “New Quality Productive Forces” strategy, this tension is especially acute. The nation’s rapid deployment of Level 4 autonomous taxis in cities like Beijing and Shenzhen has outpaced its legal infrastructure. And while engineers fine-tune perception algorithms and safety redundancies, criminal law scholars warn that the real bottleneck may not be technology—but jurisprudence.

At the heart of the dilemma lies a question no traffic code can answer: Can a machine commit a crime?

This is not a philosophical abstraction. In 2023, a fatal collision involving a robotaxi in Guangzhou triggered a months-long investigation that ultimately pinned responsibility on a human safety operator—despite evidence that the vehicle’s AI had overridden manual inputs seconds before impact. The case exposed a systemic gap: current criminal statutes presume human intent, consciousness, and moral agency. None of these apply cleanly to deep-learning systems that operate in probabilistic, non-deterministic ways.

Sun Daocui, an associate professor at the National Legal Aid Research Institute of China University of Political Science and Law, argues that the legal community is “still debating whether AI can be a subject, while the streets are already filled with AI agents.” In a landmark 2021 paper published in Academics, Sun contends that the fixation on “criminal subject status” for AI distracts from more urgent issues—namely, how to define AI-enabled crimes and restructure liability in a world where harm emerges from opaque algorithmic processes rather than deliberate human acts.

His critique resonates far beyond academic circles. Global automakers investing billions in China’s autonomous ecosystem—including Tesla, BMW, and local giants like Baidu Apollo and Pony.ai—are watching closely. A fragmented or retroactive legal response could trigger massive exposure. Consider this: if an autonomous truck swerves to avoid a pedestrian and crashes into a school bus, who bears criminal responsibility? The software developer in Shanghai? The data annotator in Chengdu? The fleet operator in Guangzhou? Or the AI itself?

Traditional criminal law offers no coherent answer. Under China’s Criminal Law, only “natural persons” and “legal persons” (i.e., corporations) can be held criminally liable. AI systems fall into neither category. Yet assigning blame solely to human supervisors or manufacturers may be both unjust and economically unsustainable. As Sun notes, “Uniformly holding designers or operators criminally liable is manifestly unfair when the system operates beyond their foreseeability.”

This legal vacuum is prompting quiet but significant shifts in policy thinking. In late 2024, China’s Ministry of Industry and Information Technology (MIIT) circulated draft guidelines proposing a tiered accountability framework for autonomous systems. The proposal distinguishes between “tool-mode” AI (fully under human control) and “agent-mode” AI (capable of independent decision-making). Only the latter would trigger novel liability considerations—potentially including new criminal offenses centered on algorithmic safety failures rather than human negligence.

The move aligns with emerging global trends. The European Union’s AI Act, adopted in 2024 with obligations phasing in from 2025, classifies autonomous driving systems as “high-risk” and mandates rigorous conformity assessments. Meanwhile, the U.S. National Highway Traffic Safety Administration (NHTSA) has begun treating vehicle AI as a “safety-critical component” subject to recall authority—but stops short of criminal implications.

China, however, appears poised to go further. Internal policy documents reviewed by this publication suggest Beijing is exploring the creation of a new category of quasi-subjects—non-human entities that can bear limited legal duties, including potential criminal sanctions such as operational bans or mandatory algorithmic audits. Such a framework would not grant AI “personhood,” but would recognize its functional autonomy in high-stakes environments.

Critics warn of overreach. “We risk criminalizing code,” says one senior legal advisor to a major EV startup in Shanghai, who spoke on condition of anonymity. “If every unexpected behavior becomes a potential crime, innovation stalls.” Others fear regulatory capture: large tech firms with in-house legal teams could navigate complex new rules, while startups face existential compliance burdens.

Yet proponents argue that without clear criminal boundaries, public trust will erode. A 2024 survey by the China Academy of Information and Communications Technology found that 68% of urban residents support stricter legal accountability for AI-driven vehicles—even if it slows deployment. Safety, not speed, is becoming the dominant metric.

This shift is already influencing corporate strategy. Baidu, which operates over 500 autonomous taxis in Wuhan, now embeds “explainability modules” in its driving AI—logging not just sensor data but the reasoning chain behind critical decisions. Similarly, Geely’s Zeekr brand has partnered with Tsinghua University to develop “ethically constrained” reinforcement learning models that prioritize harm minimization in line with Chinese social values.
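To make the idea concrete, an “explainability module” of this kind amounts to logging, alongside raw sensor data, the ordered reasoning steps behind each safety-critical decision. Baidu’s actual logging format is not public, so the sketch below is purely illustrative: every field name and the `DecisionRecord` class are assumptions, not a real Apollo API.

```python
from dataclasses import dataclass, field, asdict
import json
import time

# Illustrative sketch only: real explainability logs in production driving
# stacks are proprietary; these field names are hypothetical.
@dataclass
class DecisionRecord:
    timestamp: float                                # wall-clock time of the decision
    sensor_summary: dict                            # condensed perception inputs
    candidate_actions: list                         # actions the planner considered
    chosen_action: str                              # action actually executed
    rationale: list = field(default_factory=list)   # ordered reasoning steps

    def to_json(self) -> str:
        # Serialize the full record so auditors can replay the reasoning chain.
        return json.dumps(asdict(self), sort_keys=True)

# Example: record why the planner braked instead of swerving.
record = DecisionRecord(
    timestamp=time.time(),
    sensor_summary={"pedestrian_ahead": True, "lane_left_clear": False},
    candidate_actions=["brake", "swerve_left"],
    chosen_action="brake",
    rationale=[
        "pedestrian detected in lane at 18 m",
        "left lane occupied, swerve rejected",
        "hard brake keeps stopping distance under 18 m",
    ],
)
print(record.to_json())
```

The point of such a record is legal rather than technical: if liability is to flow from “functional control and risk allocation,” as Sun proposes, investigators need an auditable trace of which alternatives the system weighed and why it rejected them.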

But technical fixes alone won’t suffice. The deeper challenge lies in redefining core legal concepts: actus reus (guilty act), mens rea (guilty mind), and even “punishment.” Can a neural network be “punished”? If so, does disabling it serve retributive, deterrent, or rehabilitative goals? Sun Daocui suggests that criminal law must evolve from a human-centric model to a system-centric one—where liability flows not from moral blameworthiness, but from functional control and risk allocation.

This paradigm shift would have ripple effects across industries. Autonomous logistics, drone delivery, and even AI-powered medical diagnostics face similar liability ambiguities. But transportation remains the frontline: high speeds, public spaces, and irreversible consequences make it the ultimate stress test for AI governance.

Investors are taking note. Venture capital funding for Chinese autonomous driving startups dipped 22% year-over-year in Q1 2025, according to PitchBook data—not due to technological setbacks, but “regulatory uncertainty.” One U.S.-based fund manager put it bluntly: “We can’t model liability risk when the law hasn’t decided if the car is a tool or an actor.”

For now, China’s approach remains pragmatic but fragmented. Local governments issue their own testing permits and accident protocols, creating a patchwork of standards. National legislation is expected by 2026, but insiders say debates over AI’s legal status remain deadlocked between conservative jurists and tech-forward policymakers.

What’s clear is that the era of treating autonomous vehicles as mere extensions of human drivers is ending. As Sun writes, “The collision between intelligent society and traditional criminal law is not hypothetical—it is happening on our roads, in real time.”

The stakes extend beyond legal theory. How China resolves this tension will shape not only its domestic smart mobility market—projected to reach $85 billion by 2030—but also its influence on global AI governance. With the EU focused on rights and the U.S. on innovation, China is quietly forging a third path: one that prioritizes social stability through algorithmic accountability.

For automakers, the message is unambiguous: the next frontier isn’t lidar resolution or battery density—it’s legal architecture. Those who master the interplay between code and criminal law may well dominate the roads of tomorrow.


Source: Sun Daocui (National Legal Aid Research Institute, China University of Political Science and Law), Academics, No. 12, December 2021. DOI: 10.3969/j.issn.1002-1698.2021.12.007
