Experts Weigh in on Mobileye's AV Safety Model

MADISON, Wis. — A technical paper recently published by Mobileye, “On a Formal Model of Safe and Scalable Self-Driving Cars,” has struck more than one nerve with observers of the autonomous driving industry.

A central issue in the controversy stirred up by the paper is Mobileye’s assertion that the industry needs a mathematical model that absolves an autonomous vehicle (AV) from blame for an accident, as long as it follows a pre-determined set of “clear rules for fault in advance.”

Questions erupted, ranging from “How dare an industry that makes a product define for itself what safety means?” to “Wait, are ‘safety’ and ‘assigning fault’ the same thing?”

EE Times has since reached out to experts in academia whose research interests range from robotics and embedded computer systems to autonomous vehicle safety and human-robot interaction. We asked them to break down Mobileye’s proposal, discuss what they agree with and what they find problematic, and recommend next steps for the industry.

The academics’ initial response to Mobileye has been overwhelmingly positive. They applaud the company for sticking its neck out and tackling head-on the hardest issue in the robocar debate. The paper was written by Amnon Shashua, Mobileye CEO and Intel senior vice president, and Shai Shalev-Shwartz, Mobileye’s vice president of technology.

Phil Koopman

Asked about Mobileye’s technical paper, Phil Koopman, professor at Carnegie Mellon University, told us, “Overall, I think it's great to see an initial rigorous approach that talks about autonomous vehicle safety. Every vehicle must have some approach to deciding what it's allowed to do and what it's not. So, I applaud the authors for starting down that path.”

Missy Cummings, a Duke professor who also serves as director of the school’s Humans and Autonomy Lab, agreed. “I appreciate that Mobileye is thinking so deeply about these issues.”

But both Koopman and Cummings regard Mobileye’s proposal as only a “first step.” Whether the proposal holds up in the real world — especially when autonomous vehicles must co-exist and interact with human-driven vehicles — is a far bigger leap. Mobileye’s definition of what counts as safe for autonomous cars still needs to be subjected to the rigors of the real world.

Emphasizing the value of Mobileye’s effort to pose a concrete proposal on robocar safety, Koopman said, “Nobody will get a proposal like this perfect the first time, but that's OK.  We're going to have to try a lot of approaches to representing and formalizing AV safety before we find one that works in practice.”

The two Mobileye authors discuss “safety” in their paper by explaining that their policy is “provably safe, in the sense that it won’t lead to accidents of the AV’s blame.” 
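To make that idea concrete, here is a minimal Python sketch of the kind of rule the paper formalizes: the smallest following distance a rear car must keep so that, if the car ahead brakes as hard as physically possible, the rear car is not blamed for a collision. The function name and the numerical parameters are our own illustrative assumptions, not Mobileye’s code or calibrated values.

    # Illustrative sketch only, not Mobileye's implementation. It computes the
    # minimum following distance under an RSS-style longitudinal rule: a rear
    # car that keeps at least this gap is not blamed for a rear-end collision.
    # Parameter names and default values are assumptions chosen for the example.
    def rss_safe_following_distance(v_rear, v_front,
                                    response_time=0.5,  # s, delay before the rear car reacts
                                    a_accel_max=3.0,    # m/s^2, worst-case acceleration while reacting
                                    a_brake_min=4.0,    # m/s^2, braking the rear car is guaranteed to apply
                                    a_brake_max=8.0):   # m/s^2, hardest braking assumed for the front car
        """Return the minimum safe gap, in meters, between the two cars."""
        # Worst case for the rear car: it accelerates during the response time,
        # then brakes at only its guaranteed rate until it stops.
        v_after_response = v_rear + response_time * a_accel_max
        d_rear = (v_rear * response_time
                  + 0.5 * a_accel_max * response_time ** 2
                  + v_after_response ** 2 / (2 * a_brake_min))
        # Best case for the front car: it brakes as hard as physically possible.
        d_front = v_front ** 2 / (2 * a_brake_max)
        return max(d_rear - d_front, 0.0)

    # Example: both cars traveling at 30 m/s (about 67 mph).
    print(round(rss_safe_following_distance(30.0, 30.0), 1))  # roughly 83 meters

Under a rule of this kind, a vehicle that respects the pre-computed gap and is still hit is, by definition, not the party at fault; that is the sense in which the paper speaks of “clear rules for fault in advance.”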

Missy Cummings

Duke’s Cummings, however, noted that the notion of “provably safe” is not new. She pointed to a number of academic papers already published on the topic, including “Provably Safe Robot Navigation with Obstacle Uncertainty” by Brian Axelrod, Leslie Pack Kaelbling, and Tomás Lozano-Pérez.

Cummings told us that the thorniest problem with provable safety has not changed: “What computer scientists consider to be provably safe from a mathematical perspective does not mean proving safety in the way that test engineers would consider safe.”

Assumptions must be questioned
Both Koopman and Cummings cautioned that assumptions made by Mobileye should not be taken for granted. They need to be questioned. Koopman noted, “There are some assumptions that I'd be surprised hold up in the real world.”

One example Cummings pointed to was software bugs.

Here’s how the authors framed the safety issue in their technical paper:

…We now discuss sensing mistakes that lead to non-safe behavior. As mentioned before, our policy is provably safe, in the sense that it won’t lead to accidents of the AV’s blame. Such accidents might still occur due to hardware failure (e.g., a breakdown of all the sensors or an exploding tire on the highway), software failure (a significant bug in some of the modules), or a sensing mistake. Our ultimate goal is that the probability of such events will be extremely small — a probability of 10^-9 for such an accident per hour.
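To put the 10^-9 figure in perspective, here is a back-of-the-envelope calculation. The fleet size and daily driving time are hypothetical numbers chosen only to illustrate scale; they come neither from Mobileye nor from this article.

    # Rough illustration of what a 1e-9 at-fault-accidents-per-hour target implies.
    # The fleet size and usage figures are assumptions, used only to show scale.
    target_rate = 1e-9        # at-fault accidents per hour of operation (from the paper)
    fleet_size = 1_000_000    # assumed number of AVs on the road
    hours_per_day = 1.0       # assumed hours of driving per vehicle per day

    fleet_hours_per_year = fleet_size * hours_per_day * 365
    expected_per_year = target_rate * fleet_hours_per_year

    print(f"{fleet_hours_per_year:,.0f} fleet hours per year")              # 365,000,000
    print(f"{expected_per_year:.3f} expected at-fault accidents per year")  # 0.365

With these assumptions, a million-vehicle fleet would log about 365 million driving hours a year, so the target amounts to roughly one at-fault accident across the entire fleet every two to three years. Whether any engineering process can demonstrate a rate that low is precisely the kind of assumption Koopman and Cummings say must be questioned.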

Cummings challenged Mobileye’s claim that the probability of accidents caused by software bugs will be extremely small. She referred to a report on the history of automobile safety recalls caused by software problems.
