From: jimruttshow8596

The evolution of self-driving technology brings significant legal and ethical challenges, particularly concerning liability and regulatory frameworks. The six levels of driving automation, from Level Zero to Level Five, are primarily defined by liability, not capability [07:20:20]; the list and sketch below lay out that mapping.

Levels of Automation and Liability

  • Level Two: The human driver remains fully liable for the decisions the car makes [07:27:28]; the driver supervises the system at all times [07:30:32].
  • Level Three: The human is liable in certain scenarios [07:39:39].
  • Level Four: The human is not liable in specific cities or defined areas [07:43:43].
  • Level Five: The human is never liable [07:47:47]. Early predictions of full automation, such as Google’s initial prototypes without steering wheels, envisioned this level of capability [08:10:10].
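
To make the liability-first framing concrete, here is a minimal sketch (not from the episode) that encodes Levels Two through Five purely by who is liable, paraphrasing the list above.

```python
# Liability-first view of the automation levels described above.
# The mapping paraphrases the list; it is an illustration, not a spec.
LIABILITY_BY_LEVEL = {
    2: "human driver is fully liable and supervises the system",
    3: "human is liable only in certain scenarios",
    4: "human is not liable within defined cities or areas",
    5: "human is never liable",
}

def who_is_liable(level: int) -> str:
    """Return the liability description for an automation level (2-5)."""
    return LIABILITY_BY_LEVEL[level]

print(who_is_liable(2))  # -> "human driver is fully liable and supervises the system"
```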

Regulatory Environment

In the United States, automotive regulation operates on a self-certification model, in which manufacturers certify their own compliance with safety standards [42:41:43]. Comma.ai, for example, self-certifies its compliance with standards such as ISO 26262 [42:50:53]. The European Union has taken the lead in regulating specific parameters such as maximum steering wheel torque, braking force, and acceleration [42:57:00].
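
As a rough illustration of what regulating maximum steering torque, braking force, and acceleration can look like in software, the sketch below clamps actuator commands to configurable limits. The numeric limits, names, and structure are placeholder assumptions, not values from EU regulation or from any shipping system.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ActuatorLimits:
    """Hypothetical regulatory-style limits; the numbers are placeholders."""
    max_steer_torque_nm: float
    max_brake_decel_ms2: float
    max_accel_ms2: float

# Placeholder values, NOT the actual EU limits.
EXAMPLE_LIMITS = ActuatorLimits(max_steer_torque_nm=3.0,
                                max_brake_decel_ms2=3.5,
                                max_accel_ms2=2.0)

def clamp(value: float, limit: float) -> float:
    """Clamp a command's magnitude so it never exceeds the limit."""
    return max(-limit, min(limit, value))

def limit_commands(steer_nm: float, brake_ms2: float, accel_ms2: float,
                   limits: ActuatorLimits) -> tuple:
    """Return the actuator commands clamped to the configured limits."""
    return (clamp(steer_nm, limits.max_steer_torque_nm),
            clamp(brake_ms2, limits.max_brake_decel_ms2),
            clamp(accel_ms2, limits.max_accel_ms2))

# An over-aggressive steering request is reduced to the allowed maximum.
print(limit_commands(10.0, 1.0, 0.5, EXAMPLE_LIMITS))  # -> (3.0, 1.0, 0.5)
```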

There is skepticism about the feasibility of extensive government infrastructure investment (e.g., smart telemetry in roads) for self-driving cars, given that even basic road maintenance like fixing stop signs is often neglected [29:13:10].

Liability for Level 2 Systems (Comma.ai’s Stance)

Comma.ai maintains that its product is a Level 2 system, meaning the human driver remains in control and is ultimately liable for any incidents [43:14:00]. The system is designed to always allow human override (see the sketch after this list):

  • The car can never become uncontrollable [43:21:22].
  • The brake pedal will always function [43:26:28].
  • The steering wheel can be overpowered with minimal effort [43:30:32].
  • Emergency braking is not disabled by default [44:19:21].
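
A minimal sketch of what "always allow human override" can look like in software: if the driver presses the brake or applies more than a small amount of torque to the wheel, the assist yields. The thresholds, names, and structure are illustrative assumptions, not Comma.ai's actual implementation.

```python
# Illustrative override logic: the human input always wins.
# The threshold is a made-up value for the sketch, not a real parameter.
DRIVER_STEER_OVERRIDE_NM = 1.5   # assumed: modest torque is enough to overpower

def assisted_steer_command(system_steer_nm: float,
                           driver_steer_nm: float,
                           brake_pedal_pressed: bool) -> float:
    """Return the steering torque actually sent to the car.

    The system's command is dropped the moment the driver brakes or
    applies noticeable torque to the wheel, so the car never becomes
    uncontrollable by the human.
    """
    if brake_pedal_pressed:
        return 0.0                      # brake pedal always works; assist stops
    if abs(driver_steer_nm) > DRIVER_STEER_OVERRIDE_NM:
        return 0.0                      # driver overpowers the wheel; assist yields
    return system_steer_nm              # otherwise apply the system's command

# Example: the driver lightly turning the wheel cancels the system's torque.
print(assisted_steer_command(system_steer_nm=2.0, driver_steer_nm=2.0,
                             brake_pedal_pressed=False))  # -> 0.0
```

The key property is that the human input path never depends on the assist logic behaving correctly.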

The company’s philosophy is that a computer cannot be held accountable, so the human is always responsible [50:16:20]. They view their system as an advanced form of driver assistance, akin to power steering or cruise control, where the driver still bears responsibility [50:32:32].

Users are explicitly required to keep their eyes on the road at all times, and a camera monitors this [44:01:03]. The driver monitoring system is designed to provide timely alerts without inducing alert fatigue, helping to keep drivers attentive [45:00:01].
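
The trade-off described here, timely warnings without alert fatigue, is commonly handled by escalating over time. The sketch below is a generic illustration with invented timing thresholds, not the actual driver-monitoring policy.

```python
# Generic escalating-alert sketch for driver monitoring.
# Time thresholds are invented for illustration.
def monitoring_alert(seconds_eyes_off_road: float) -> str:
    """Map how long the driver has looked away to an alert level."""
    if seconds_eyes_off_road < 2.0:
        return "none"            # brief glances are tolerated to avoid alert fatigue
    if seconds_eyes_off_road < 4.0:
        return "visual warning"  # gentle prompt to look back at the road
    if seconds_eyes_off_road < 6.0:
        return "audible alert"   # stronger, harder-to-ignore warning
    return "disengage assist"    # sustained inattention: hand control back fully

for t in (1.0, 3.0, 5.0, 8.0):
    print(t, "->", monitoring_alert(t))
```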

Comma.ai’s terms of service clearly state that users indemnify the company for liability when using the system [52:31:33]. While the company has successfully defended against patent trolls [50:57:58], it anticipates future lawsuits and intends to maintain the stance that the human is always in control [51:49:50].

There is a distinction in product liability:

  • Functional Safety: If the product malfunctions and directly causes a loss of control (e.g., brakes stop working), the manufacturer could be liable [53:22:24].
  • Judgment Calls: If the human makes a decision or fails to intervene, that responsibility rests with the driver [53:33:33].

Comma.ai develops its system to operate within the manufacturer’s intended specifications for driver-assistance messages, working from reverse-engineered message definitions. Through this full-loop system analysis, the company has even discovered bugs in manufacturer software [54:26:26].
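
As a rough illustration of "operating within the manufacturer's intended specifications", the sketch below checks a steering-assist value against a reverse-engineered signal definition before it would be packed into a CAN message. The signal name, range, and layout are hypothetical, not taken from any real vehicle's spec or from Comma.ai's tooling.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SignalSpec:
    """Hypothetical, reverse-engineered description of one CAN signal."""
    name: str
    min_value: int
    max_value: int

# Made-up example signal; real definitions would come from reverse engineering.
STEER_REQUEST = SignalSpec(name="STEER_TORQUE_REQUEST", min_value=-1024, max_value=1023)

def encode_within_spec(value: int, spec: SignalSpec) -> int:
    """Refuse to emit values outside what the manufacturer's system expects."""
    if not (spec.min_value <= value <= spec.max_value):
        raise ValueError(f"{spec.name}={value} is outside the intended range "
                         f"[{spec.min_value}, {spec.max_value}]")
    return value

print(encode_within_spec(500, STEER_REQUEST))   # OK: within the reverse-engineered range
# encode_within_spec(5000, STEER_REQUEST)       # would raise: out of spec
```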

Remote Operation and Fragility

Some self-driving companies, such as Cruise and Waymo, have been criticized for their reliance on remote operators and high-definition maps. These systems are described as “fancy remote control cars” [06:26:29] or “trackless monorails” [28:48:00]. Cruise has admitted to using multiple remote operators for each car, with decisions fundamentally still being made by a human [29:36:00]. These systems often stop if the cell phone network goes down, highlighting their fragility and dependence on external infrastructure [30:11:11].

This contrasts with systems like Comma.ai’s, which operate locally on the device without needing an internet connection [30:27:29]. The reliance on expensive, meticulously mapped environments (Level 4 approaches) is viewed as an unsustainable business model, as Level 5 (full autonomy everywhere) is expected to follow too quickly to recoup the investment [49:21:23].