Tech giants and carmakers have poured massive amounts of money and effort into developing cars that can drive themselves. But before Google, Tesla, Uber and others can persuade humans to share their streets with bots, they have to prove this technology—although definitely still learning and maturing—doesn’t amount to flooding the nation’s roadways with dangerously adolescent robot drivers.
“Sometimes I hear [the] industry talk about autonomous vehicles as though they’re about to put the safest driver on the road,” says Nidhi Kalra, senior information scientist at the nonprofit RAND Corp. “The reality is it’s more like putting a teenage driver on the road.” But she still thinks artificially intelligent autos should be able to improve their driving and decision-making skills very quickly—without having to be grounded.
In the self-driving world, safety is a complicated issue for several reasons. For starters, regulators will have to come up with a definition of “safe”—whether that means the machines must drive flawlessly or simply break fewer laws and get into fewer accidents than human drivers do. Further muddling matters, companies are developing many different levels of automation that range from assisting drivers with braking, parking and lane-changing (referred to as “level 1” abilities) to full autonomy (“level 5”), which is still several years away.
No single test can determine the safety of self-driving cars, says Steven Shladover, a research engineer and manager of the Partners for Advanced Transportation Technology program at the University of California, Berkeley. He has been encouraging U.S. regulators and industry members to follow the example of Germany's government, which is sponsoring research to determine how best to ensure the safety of automated driving systems. “There is a need for fundamental research to support the development of dependable and affordable methods for assessing the safety of an automated driving system when it is confronted with the full range of traffic hazards,” Shladover says.
The U.S. Department of Transportation (DOT) has given Silicon Valley start-ups and Detroit incumbents some guidance to help them build safer self-driving vehicles. But in the absence of federal law regulating the use of autonomous vehicles and driver-assist technology, the U.S. government could simply stand back and allow companies to put more self-driving cars on public roads to collect the necessary safety data, says Alain Kornhauser, a transportation engineer and adviser for Princeton University’s Autonomous Vehicle Engineering team. Kornhauser and some other experts contend these vehicles could ultimately make roads safer by reducing human error. With 38,300 deaths and 4.4 million serious injuries on U.S. roads in 2015 alone, it would be worth the risk to let autonomous cars roam more freely and “learn” faster, Kornhauser says.
Ride-sharing giant Uber seems to favor this bolder approach. After launching an initial pilot test in Pittsburgh in October 2016, Uber briefly tested some self-driving vehicles on the streets of San Francisco—before agreeing to halt after California’s regulators protested the lack of testing permits. But in the company’s view, “real-world testing on public roads is essential both to gain public trust and improve the technology over time,” says Chelsea Kohler, an Uber spokesperson.
The dangers of letting the market determine its tolerance for risk became apparent last May, when a Tesla Model S sedan using its driver-assist Autopilot technology failed to avoid the side of a tractor trailer looming ahead.* The driver died in the resulting crash. Still, Tesla founder and CEO Elon Musk has pressed ahead with Autopilot, announcing in October 2016 that new Tesla Model S and Model X cars would be able to train their Autopilot technology in “shadow mode” even when Autopilot is technically switched off. Shadow mode enables each Tesla car’s computer to compare the driving and braking decisions it would have made with the human driver’s actual actions. The vehicles can then share this newly acquired knowledge with one another via “fleet learning,” updating their programmed behavior across the fleet.
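The shadow-mode idea described above—compute a decision, actuate nothing, and log where software and human diverge—can be sketched roughly as follows. This is an illustrative reconstruction of the concept, not Tesla's actual software; every name and threshold here is hypothetical.

```python
# Hypothetical sketch of "shadow mode" logging: the autonomy stack proposes
# a decision but never actuates it; the proposal is compared with the human
# driver's real input, and disagreements are flagged for later fleet-wide
# analysis. All names and tolerances are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Decision:
    steering_deg: float   # steering angle, degrees
    brake_pct: float      # brake pressure, 0-100

def shadow_compare(autopilot: Decision, human: Decision,
                   steer_tol: float = 5.0, brake_tol: float = 10.0) -> bool:
    """Return True when the software's choice diverges from the driver's."""
    return (abs(autopilot.steering_deg - human.steering_deg) > steer_tol
            or abs(autopilot.brake_pct - human.brake_pct) > brake_tol)

# Example: software would have braked hard, the driver did not -> divergence.
disagreed = shadow_compare(Decision(0.0, 80.0), Decision(0.0, 0.0))
print(disagreed)  # True
```

In a real system a flagged divergence like this would presumably be queued for upload, so the fleet can learn from situations where software and human disagreed.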
Another big challenge is determining how long self-driving vehicles must be tested before they can be considered safe. They would need to drive hundreds of millions—or sometimes hundreds of billions—of miles to acquire enough data to demonstrate their safety in terms of deaths or injuries, according to an April 2016 report from think tank RAND Corp. The report explains that existing test fleets of self-driving vehicles would take tens or even hundreds of years to drive the number of miles necessary for performing a statistically significant safety comparison. A fleet of 100 cars would have to drive 275 million miles without failure—approximately 12.5 years of round-the-clock driving at 25 miles per hour—to meet the safety standards of today’s vehicles in terms of deaths. At the time of the fatal May 2016 crash, Tesla car owners had logged 130 million miles in Autopilot mode.
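The RAND figures above reduce to simple arithmetic, which can be checked directly. The assumptions are the ones stated in the article: a 100-car fleet driving nonstop at an average of 25 miles per hour, needing 275 million failure-free miles.

```python
# Back-of-the-envelope check of the RAND mileage figures cited above.
# Assumptions (from the article): 100 cars, round-the-clock driving at an
# average of 25 mph, 275 million failure-free miles required.
FLEET_SIZE = 100          # cars
SPEED_MPH = 25            # average speed, miles per hour
MILES_NEEDED = 275e6      # failure-free miles required

miles_per_year = FLEET_SIZE * SPEED_MPH * 24 * 365   # ~21.9 million miles
years_needed = MILES_NEEDED / miles_per_year

print(f"{years_needed:.1f} years")   # roughly 12.5 years, matching the report
```

By comparison, the 130 million Autopilot miles Tesla owners had logged by May 2016 fall well short of that threshold, which is why the report calls for decades of fleet driving—or far larger fleets—to reach statistical significance.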
One way to accelerate the learning curve would be for tech companies and carmakers to share their test data with competitors. “There is no doubt in my mind that if companies openly shared data, the development would go faster and the cars would take off,” says Sebastian Thrun, CEO and co-founder of online education provider Udacity and a self-driving technology pioneer who formerly worked at Google. Not surprisingly, companies are reluctant to share this information without extra prodding from regulators.
However safety gets defined and established, self-driving and driverless vehicles will need ways to overcome the skepticism of humans—drivers, passengers, cyclists and pedestrians—by being more transparent, says Brian Lathrop, senior manager of the Electronics Research Lab at Volkswagen Group of America. That means letting people on the road know when a vehicle is in self-driving or driver-assist mode. Autonomous vehicles will also have to let those in the cockpit know what they plan to do and give the person in the driver’s seat a chance to regain control if necessary, Lathrop says.
It is daunting to think about a future when human drivers will share the road with robotic vehicles that have varying degrees of autonomous capabilities, Lathrop says. It will eventually happen, but the technology will have to earn our trust first, just like a teenager with a brand-new junior license.
*Editor's Note (01/19/17): The National Highway Traffic Safety Administration (NHTSA) on January 19 concluded its investigation into the May 7, 2016, Tesla accident and found no safety defects in Autopilot's design or performance.