
How Do You Know Self-Driving Cars Are Safe?

Thursday, May 25, 2023

Simulation is a common, if not the only, option for training and testing autonomous vehicles. But how different is it from training an autonomous system in the real world, and who will decide whether autonomous vehicles are ready to hit real roads? Does “safety” mean the same thing in simulation and in reality?

Across manufacturers, the training of AI-based systems happens mainly in simulation. It is believed to be a much safer and cheaper option than gathering data from actual vehicles in the real world, says Henry Liu, a professor of engineering at the University of Michigan and director of Mcity, a facility for testing autonomous vehicles. Even though many companies compete on how many miles their autonomous and semi-autonomous vehicles have driven in the real world, most miles are still driven in simulation. For example, Waymo has said that its vehicles have logged more than 2 million miles without a driver. “We’ve driven tens of millions of miles on public roads, and then billions of miles in simulation,” says Trent Victor, director of safety research at Waymo.

In the real world, vehicles rarely encounter safety-critical situations, but in simulation autonomous driving systems can be exposed to an endless number of them. Training and testing with the toughest scenarios is important for verifying that self-driving systems are safe. Another major obstacle to the rollout of autonomous vehicles has been so-called “edge cases”: rare but potentially dangerous situations that have already led to accidents. Simulation makes it possible to train autonomous systems on these edge cases. Waymo has an approach called “Collision Avoidance Testing,” in which it exposes its autonomous-driving software to simulated situations that could lead to injury or death and then compares the results against those of a completely attentive human driver. A number of initiatives have also been developed to collect and share pools of data on the kinds of scenarios that lead to crashes and other accidents, including the Safety Pool initiative and the SET Level project. Companies like Waymo, Cruise, Tesla, and Mobileye maintain their own data on such cases.
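To make the idea of scenario-based testing concrete, here is a minimal Python sketch under simplified, illustrative assumptions: a toy one-dimensional simulator in which a lead vehicle brakes hard and a simple ego braking policy either avoids or causes a rear-end collision. The dynamics, parameters, and policy here are all assumptions for illustration, not any manufacturer's actual simulator or software.

```python
# A minimal, hypothetical sketch of scenario-based edge-case testing:
# expose a simple ego "policy" to many randomized hard-braking scenarios
# in a toy 1-D simulator and count how often a collision occurs.
import random

def run_scenario(rng, dt=0.1, steps=200):
    """One edge case: a lead vehicle brakes hard; does the ego car avoid it?"""
    gap = rng.uniform(15.0, 40.0)        # initial headway, meters
    ego_v = rng.uniform(20.0, 30.0)      # ego speed, m/s
    lead_v = ego_v                       # lead starts at the same speed
    lead_brake = rng.uniform(4.0, 8.0)   # lead deceleration, m/s^2

    for _ in range(steps):
        # Ego policy: brake at 6 m/s^2 once time-to-collision drops below 3 s.
        closing = ego_v - lead_v
        ttc = gap / closing if closing > 0 else float("inf")
        ego_a = -6.0 if ttc < 3.0 else 0.0

        ego_v = max(0.0, ego_v + ego_a * dt)
        lead_v = max(0.0, lead_v - lead_brake * dt)
        gap += (lead_v - ego_v) * dt
        if gap <= 0.0:
            return True                  # collision
    return False                         # scenario survived

rng = random.Random(42)
crashes = sum(run_scenario(rng) for _ in range(10_000))
print(f"Collisions in 10,000 simulated hard-braking scenarios: {crashes}")
```

The point of such a loop is that thousands of randomized safety-critical encounters can be replayed in seconds and varied systematically, something no real-world fleet could do safely.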

However, mathematically proving that self-driving systems are safer than human drivers is critical to getting them onto real roads. Without such proof, regulators have no standard by which to evaluate a system objectively; with it, self-driving vehicles could be granted a kind of driver’s license by safety regulators. In the US, for example, the National Highway Traffic Safety Administration is already working on new rules that could include such a requirement.
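As a hedged illustration of what such a mathematical argument can look like, the sketch below uses a simple Poisson model: if a fleet drives N miles with zero fatalities, the 95% upper confidence bound on its fatality rate is roughly 3/N (the statistical “rule of three”). The human baseline rate used here, about 1.1 fatalities per 100 million miles in the US, is an approximate published figure used purely for illustration; neither the model nor the baseline is a regulatory standard from the article.

```python
# Illustrative back-of-the-envelope safety calculation (not a regulatory
# standard): how many fatality-free miles would a fleet need to drive to
# claim, with 95% confidence, a fatality rate below the human baseline?
import math

human_rate = 1.1 / 100e6   # fatalities per mile; approximate US figure
confidence = 0.95

# Poisson model with zero observed events in N miles:
#   P(0 events) = exp(-rate * N) <= 1 - confidence
#   =>  N >= -ln(1 - confidence) / rate
miles_needed = -math.log(1 - confidence) / human_rate
print(f"Fatality-free miles needed: {miles_needed / 1e6:.0f} million")
```

This works out to roughly 270 million miles, vastly more than the 2 million driverless miles the article cites, which is one reason simulated miles carry so much weight in the safety case.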

In the end, the definition of “safe” is relative, and no autonomous vehicle will ever be accident-free, especially in a world with human drivers, pedestrians, and cyclists, says Shai Shalev-Shwartz, chief technology officer of Mobileye. At some point, no matter how resilient self-driving systems are, it is up to governments to determine whether they are safe enough.

Personal Comment:

I think that when it comes to autonomous vehicle safety, most people expect these cars to be completely accident-free. This is supported by how people react when they see news about an accident involving a self-driving car. However, deciding whether self-driving vehicles are ready for the real world depends largely on whether society is ready to embrace this new technology, not just on how we train and test them. “Trusting self-driving cars is going to be a big step for people,” as the article puts it. And that trust will come only when humans understand the limitations of autonomous vehicles.

Even if self-driving cars ultimately eliminate 90% of accidents, that is not going to happen overnight. It will happen gradually, which means there will be accidents regardless of how well the autonomous systems have been tested.

Written by Kateryna Melnyk,
RISE Mobility & Systems