
The rules of the road today are centered on one key element: drivers. Licensing, insurance, traffic laws – everything assumes vehicles are operated under the control of a human.

For driverless vehicles, this presents a dilemma: How can you tell which car is at fault in an accident? Should we license and insure owners or manufacturers or the cars themselves? More importantly: How can self-driving and human-driven cars co-exist safely?

Before society will welcome autonomous cars en masse, we must answer those questions – and others – with certainty. People have expressed apprehension about self-driving vehicles and are unlikely to accept them unless it is clear that they are substantially safer than human-driven vehicles. We’ve already seen incidents involving current driver-assistance technology where fault remained unclear during months-long investigations, leading to consumer wariness.

This issue will become more acute as vehicles take on more of the driving tasks. Although crashes caused by human error kill more than one million people annually, it may only take a few fatal crashes of a fully autonomous vehicle, where fault is uncertain, to meaningfully delay or forever foreclose on the tremendous life-saving potential of this technology.

Governments around the world are recognizing the need to tackle these issues, and the US has been proactive with pending self-driving vehicle legislation and new USDOT Automated Vehicle Guidelines. Industry can be an important partner.

An important next step is to collaboratively construct industry standards that definitively assign accident fault and thereby prove the safety of driverless vehicles when collisions with human-driven vehicles inevitably occur. Clear standards of blame are critical, because the AV’s decision-making software (i.e., its driving policy) can then be programmed to follow these agreed-upon standards. In this scenario, the AV could never cause an accident that would be attributable to a fault of the AV system. Our proposed Responsibility Sensitive Safety (RSS) model, made public through a white paper titled “On a formal model of safe and scalable self-driving cars,” is one approach to consider.

RSS is a formal, mathematical model for ensuring that a self-driving vehicle operates in a responsible manner. It provides specific and measurable parameters for the human concepts of responsibility and caution and defines a “Safe State,” where the autonomous vehicle cannot cause an accident, no matter what action is taken by other vehicles.

The ability to assign fault is the key. Just like the best human drivers in the world, self-driving cars cannot avoid accidents caused by actions beyond their control. But the most responsible, aware and cautious driver is very unlikely to cause an accident through his or her own fault, particularly with the 360-degree vision and lightning-fast reaction times that autonomous vehicles will have. The RSS model formalizes this in a way that avoids ever putting self-driving vehicles in danger of violating those same rules.

We’ll use the common rear-end collision to illustrate how this works. When two cars are traveling in the same lane, one behind the other, and the rear car crashes into the front car, the driver of the rear car is deemed to be at fault. Often this is because the rear car did not maintain a safe following distance and was unable to stop in time when the lead car braked suddenly.

If the rear vehicle were a self-driving car employing the RSS model, this accident would never happen. Using software that evaluates every action against a comprehensive set of driving scenarios and rules of responsibility, the driverless car continuously calculates a safe following distance that keeps it in a Safe State.
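To make this concrete, below is a minimal sketch in Python of the kind of safe-following-distance calculation the RSS white paper describes for the longitudinal case: the rear vehicle must be able to stop without a collision even if the car ahead brakes as hard as physically possible. The structure of the formula follows the white paper; the function name and the specific parameter values in the example are our own illustrative assumptions, not values prescribed by the model.

```python
def rss_safe_longitudinal_distance(
    v_rear: float,          # rear (ego) vehicle speed, m/s
    v_front: float,         # front vehicle speed, m/s
    response_time: float,   # ego response time, seconds
    a_max_accel: float,     # worst-case ego acceleration during the response time, m/s^2
    a_min_brake: float,     # braking the ego vehicle is guaranteed to apply, m/s^2
    a_max_brake: float,     # hardest braking the front vehicle might apply, m/s^2
) -> float:
    """Minimum following distance (meters) that keeps the rear vehicle in a
    Safe State: even if the front car brakes at full force, the rear car can
    respond and then brake without hitting it."""
    # Distance the ego vehicle may cover while it responds (possibly still accelerating).
    d_response = v_rear * response_time + 0.5 * a_max_accel * response_time ** 2
    # Distance needed to brake from the worst-case speed reached after responding.
    v_after_response = v_rear + a_max_accel * response_time
    d_ego_braking = v_after_response ** 2 / (2 * a_min_brake)
    # Distance the front vehicle covers while braking at its hardest.
    d_front_braking = v_front ** 2 / (2 * a_max_brake)
    return max(0.0, d_response + d_ego_braking - d_front_braking)


# Illustrative check at highway speeds (assumed values): both cars at 30 m/s
# (~108 km/h), 0.5 s response time.
d_min = rss_safe_longitudinal_distance(
    v_rear=30.0, v_front=30.0, response_time=0.5,
    a_max_accel=2.0, a_min_brake=4.0, a_max_brake=8.0,
)
print(f"Maintain at least {d_min:.1f} m to stay in a Safe State")
```

As long as the measured gap to the lead car never drops below this bound, the rear vehicle can never be the cause of a rear-end collision, no matter how abruptly the lead car brakes.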

With a model like RSS, an AV’s system of sensors will collect and maintain definitive data on all activity involving the AV at all times; think of it as the “black box” in an airplane cockpit. This vital data can be used to rapidly and conclusively determine responsibility for incidents that involve an autonomous vehicle, but only if there are clear definitions of fault against which to compare the data. Such a model for safety could be formalized by industry standards organizations, and ultimately regulatory bodies, to establish clear definitions of fault that can in turn be translated into insurance policy and driving laws.
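To illustrate how such logged data might be checked against an agreed definition of fault, here is a hypothetical sketch that reuses the safe-distance function above. The record fields, the check itself, and the parameter values are our own illustrative assumptions, not part of the RSS specification.

```python
from dataclasses import dataclass


@dataclass
class LoggedSnapshot:
    """One hypothetical 'black box' record of what the AV observed about the
    vehicle ahead at a given moment (field names are illustrative)."""
    timestamp: float        # seconds since the start of the log
    ego_speed: float        # m/s
    lead_speed: float       # m/s
    gap_to_lead: float      # measured following distance, meters


def ego_respected_safe_distance(snapshot: LoggedSnapshot) -> bool:
    """Compare a logged snapshot against the agreed fault definition:
    was the AV keeping at least the required safe following distance?"""
    required = rss_safe_longitudinal_distance(
        v_rear=snapshot.ego_speed, v_front=snapshot.lead_speed,
        response_time=0.5, a_max_accel=2.0, a_min_brake=4.0, a_max_brake=8.0,
    )
    return snapshot.gap_to_lead >= required
```

Given a stream of such records from the moments before a collision, an investigator could establish quickly and objectively whether the AV was operating within the agreed standard.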

There is little argument that machines will be better drivers than humans. Yet there is very real risk that self-driving vehicles will never realize their life-saving potential if we can’t agree on standards for safety. We believe self-driving vehicles can and should be held to a standard of operational safety that is vastly better than what we humans exhibit today. And the time to develop those standards is now.

Written by Professors Amnon Shashua and Shai Shalev-Shwartz