From: jimruttshow8596

The evolution of autonomous vehicle technology brings with it complex legal and regulatory questions, particularly concerning liability and safety standards [00:42:07]. The prevailing framework for self-driving automation defines different levels, which, according to George Hotz, CEO of comma.ai, say more about liability than actual capability [00:07:20].

Levels of Driving Automation and Liability

The six levels of self-driving automation (Level 0 through Level 5) primarily allocate responsibility between the human driver and the automated system [00:07:11], [00:07:20]:

  • Level Two: The human driver remains fully liable for decisions made by the car [00:07:25] and must supervise the system at all times [00:07:30].
  • Level Three: The human is liable only in specific scenarios [00:07:37].
  • Level Four: The human is not liable within defined operational domains, such as specific cities or areas [00:07:41].
  • Level Five: The human is never liable, implying full automation where a driver could, hypothetically, sleep in the backseat [00:07:47], [00:08:12]. Google’s early prototypes, built without a steering wheel, were aiming for this level of automation [00:08:20].
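The taxonomy above reduces to a simple lookup from level to liability. This is a paraphrase of the summaries above; the Level 0 and Level 1 entries are filled in from the standard SAE J3016 definitions rather than from the transcript:

```python
# Liability framing per automation level, paraphrasing the text above.
# Levels 0-1 are supplied from the standard SAE definitions (assumption:
# not discussed in the transcript itself).
LIABILITY_BY_LEVEL = {
    0: "human drives; no automation",
    1: "human fully liable; single-axis assistance (steering or speed)",
    2: "human fully liable; must supervise at all times",
    3: "human liable only in specific scenarios",
    4: "human not liable within a defined operational domain",
    5: "human never liable; full automation anywhere",
}

def who_is_liable(level: int) -> str:
    """Return the liability summary for an SAE automation level (0-5)."""
    return LIABILITY_BY_LEVEL[level]
```

As the framing suggests, the table is indexed by who answers for a crash, not by what the software can do.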

The Challenge of Predicting Human Behavior

Despite the perception that humans are poor drivers, statistical data suggests otherwise. Most civilized countries report approximately one fatality per 100 million miles driven [00:08:56]. That interval far exceeds the total autonomous miles logged by companies like Waymo and Cruise, which is why Hotz calls humans “absurdly good drivers” [00:09:07], [00:09:51]. He regards early claims from 2018-2019 that AI could “certainly exceed human capacity” as “total hubris” [00:10:22], [00:10:29].
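The arithmetic behind that comparison is simple: divide a fleet's total mileage by the human baseline of 100 million miles per fatality. The fleet mileage below is a purely illustrative assumption, not a figure reported by any company:

```python
# Back-of-the-envelope comparison against the human-driver baseline
# cited in the text (~1 fatality per 100 million miles).
HUMAN_MILES_PER_FATALITY = 100_000_000

def expected_human_fatalities(fleet_miles: float) -> float:
    """Fatalities human drivers would statistically incur over the same mileage."""
    return fleet_miles / HUMAN_MILES_PER_FATALITY

# Hypothetical fleet mileage, for illustration only (assumed figure):
fleet_miles = 20_000_000
print(expected_human_fatalities(fleet_miles))  # 0.2
```

The point of the calculation: until a fleet has logged several hundred million miles, it cannot even statistically distinguish itself from the human baseline.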

Regulatory Environment

In the United States, automakers self-certify compliance with regulatory standards [00:42:41], [00:42:43]; comma.ai, for instance, self-certifies its products against standards such as ISO 26262 [00:42:53]. Regulations, particularly in the EU, cap parameters such as maximum torque on the steering wheel, braking force, and acceleration, and comma.ai ensures its system stays within those limits [00:42:57], [00:43:00], [00:43:11].
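A system complying with such regulations typically clamps its actuator commands before they reach the car. The following is a minimal sketch of that pattern; the limit values are illustrative assumptions, not the actual EU or comma.ai figures:

```python
# Illustrative regulatory envelope (assumed values, not real limits):
MAX_STEER_TORQUE_NM = 3.0   # cap on commanded steering torque
MAX_DECEL_MS2 = 3.5         # cap on commanded braking deceleration
MAX_ACCEL_MS2 = 2.0         # cap on commanded acceleration

def clamp(value: float, lo: float, hi: float) -> float:
    return max(lo, min(hi, value))

def limit_actuation(steer_torque: float, accel: float) -> tuple[float, float]:
    """Clamp raw controller outputs to the regulatory envelope
    before they are sent to the vehicle's actuators."""
    safe_torque = clamp(steer_torque, -MAX_STEER_TORQUE_NM, MAX_STEER_TORQUE_NM)
    safe_accel = clamp(accel, -MAX_DECEL_MS2, MAX_ACCEL_MS2)
    return safe_torque, safe_accel

# An aggressive controller command gets clamped to the envelope:
print(limit_actuation(5.0, -6.0))  # (3.0, -3.5)
```

Keeping the clamp as the last stage before actuation also supports the controllability guarantee discussed below: a bounded torque is one a human can always overpower.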

Driver Responsibility and System Limitations

For Level 2 systems like comma.ai’s Openpilot, the explicit understanding is that the human remains in control of the vehicle at all times [00:43:17]. The system’s only guarantee is that the car never becomes uncontrollable: the driver can always hit the brake pedal, which will always work, or overpower any torque the system applies to the steering wheel [00:43:21], [00:43:26], [00:43:30].

“The only thing the comma can guarantee you, the only thing we promise you, is that the car will never become uncontrollable. You can always reach out, hit the brake pedal, and the brakes will work. You can always massively overpower any torque we’re putting on the steering wheel.” [00:43:21]

While users might choose to drive hands-off, the company emphasizes that eyes must remain on the road at all times [00:44:01]. comma.ai employs a camera to monitor driver attention [00:44:03]. The system is designed to provide alerts only when genuinely necessary to avoid “alert fatigue” and ensure drivers respect the system [00:44:56]. This driver monitoring is local to the device unless the user opts in to share data [00:45:26].
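One way to alert “only when genuinely necessary” is hysteresis: require sustained inattention before raising an alert, and sustained attention before clearing it, so brief glances away do not trigger nuisance warnings. This is a hypothetical sketch, not comma.ai’s actual driver-monitoring logic; the thresholds are assumed:

```python
# Assumed thresholds, for illustration only:
ALERT_AFTER_S = 6.0   # seconds of inattention before alerting
CLEAR_AFTER_S = 1.0   # seconds of attention before clearing an alert

class AttentionMonitor:
    """Alert policy with hysteresis to avoid alert fatigue."""

    def __init__(self) -> None:
        self.inattentive_s = 0.0
        self.attentive_s = 0.0
        self.alert_active = False

    def update(self, eyes_on_road: bool, dt: float) -> bool:
        """Feed one camera-frame observation; return whether to alert."""
        if eyes_on_road:
            self.attentive_s += dt
            self.inattentive_s = 0.0
            if self.alert_active and self.attentive_s >= CLEAR_AFTER_S:
                self.alert_active = False
        else:
            self.inattentive_s += dt
            self.attentive_s = 0.0
            if self.inattentive_s >= ALERT_AFTER_S:
                self.alert_active = True
        return self.alert_active
```

Because the counters reset on each state change, only sustained inattention triggers the alert, matching the stated goal that drivers come to respect the warnings they do receive.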

Liability in Accidents

If a car crashes while using a Level 2 system, liability rests with the human driver [00:43:36], [00:43:38]. The philosophy is that “the human is in control of the car at all times” [00:50:21]. This aligns with the understanding that a computer cannot be held accountable for decisions [00:50:16]. However, if a mechanical failure or product malfunction is directly caused by the system (e.g., brakes stop working due to system interference), the manufacturer might be liable [00:53:08], [00:53:24].

Business Models and Future Trajectories

Companies like Waymo and Cruise, operating with Level 4 systems in defined regions and relying on extensive high-resolution mapping, face significant economic challenges. Their “trackless monorail” approach, where cars may cost upwards of $500,000, leads to “hilariously negative unit economics” [00:27:16], [00:27:20], [00:27:51], [00:32:09]. These systems are often characterized by remote human intervention and reliance on cell phone networks, making them fragile and centralized [00:29:32], [00:30:11], [00:30:18].

In contrast, companies like Tesla and comma.ai pursue positive unit economics by selling products to consumers profitably [00:33:27], [00:33:43]. Their systems process data locally on the device, making them less dependent on external infrastructure and more robust [00:39:08], [00:40:59]. This approach is seen as a stepping stone to general-purpose robotics, rather than an endpoint [00:47:28].

“The level five cars will come too quickly after the level four cars for you to ever recapture the amount of value that you burned creating that thing.” [00:49:30]

The belief is that Level 4 (geo-fenced) systems are not a viable business model because Level 5 (anywhere) systems will arrive too quickly to recoup the investment in limited Level 4 deployments [00:49:18], [00:49:30]. Therefore, comma.ai has no interest in pursuing liability beyond Level 2, preferring to build software that is a “better driver than a human” and allow others to provide higher-level liability services on top of their open-source platform [00:48:39], [00:48:42], [00:48:50].