Early attempts at driverless cars have had little difficulty gathering the loads of data required to operate autonomously. Automakers and researchers—at Google, most notably—have logged hundreds of thousands of kilometers driven in vehicles laden with Internet servers, GPS, radar, lasers, cameras and a variety of other onboard sensors. The results are encouraging: Self-driving vehicles have demonstrated the ability to measure and maintain their distance from other automobiles and even obey traffic laws.

Still, there are good reasons these vehicles should always have a person behind the wheel, just in case. Humans are capable of making split-second decisions based on memory and the body's collective senses. Despite all the advanced hardware developed to create autonomous automobiles, such vehicles lack a central processing system, a mind of sorts, that can quickly make sense of and act on the data their sensors collect.

Road to autonomy
One challenge in programming automotive brains to make the kind of snap decisions it takes human drivers years to develop is getting a vehicle to understand its surroundings rather than simply detect objects, says Francis Govers, a former NASA engineer and designer of unmanned military vehicles. The vehicle does not need to avoid every object it encounters, he adds: "You can drive over a speed bump, but you don't want to drive over a dog—and yet they may be roughly the same size and shape."
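
To make Govers's distinction concrete, here is a minimal sketch, in Python, of a planner that keys its action to an object's semantic class rather than its geometry. The class names, the "drivable" set and the numbers are invented for illustration, not taken from any production system:

```python
from dataclasses import dataclass

# Hypothetical object classes a perception stack might emit; the names and
# the notion of a "drivable" set are assumptions made for this sketch.
DRIVABLE = {"speed_bump", "manhole_cover"}

@dataclass
class Detection:
    label: str       # semantic class assigned by the classifier
    width_m: float   # rough footprint reported by the sensors
    height_m: float

def plan_for(obstacle: Detection) -> str:
    """Choose an action from what the object is, not how big it is."""
    if obstacle.label in DRIVABLE:
        return "slow_and_drive_over"
    return "brake_or_steer_around"

# Nearly identical geometry, opposite decisions:
bump = Detection("speed_bump", width_m=0.5, height_m=0.3)
dog = Detection("dog", width_m=0.5, height_m=0.3)
print(plan_for(bump))  # -> slow_and_drive_over
print(plan_for(dog))   # -> brake_or_steer_around
```

The hard part, of course, is not this last lookup but producing a reliable label in the first place.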

The hurdle is making the right decisions in a constantly changing environment, according to Raj Rajkumar, professor of electrical and computer engineering at Carnegie Mellon University. Michigan’s hills are harder to navigate than the Nevada flats, and New England snow can obliterate the road markings a car's cameras use to stay in the lane. Even the change from daylight to twilight can throw off a car's sensors.

Traffic adds another dynamic dimension to decision-making. A vehicle's software must act on the data it receives even as conditions around the vehicle are continuously in flux. "The software can do the right thing under a particular scenario but the wrong thing in others," says Rajkumar, who has overseen eight generations of Carnegie Mellon autonomous cars, including the Boss SUV that won the 2007 Urban Challenge held by DARPA, the Defense Advanced Research Projects Agency. For instance, to avoid a collision, is it better to speed up or slow down?
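
A toy one-dimensional calculation shows why that question has no fixed answer: score each candidate action by its closest predicted approach to another vehicle. The constant-speed model of the other car and every number below are illustrative assumptions, not measurements:

```python
def min_gap(v0, accel, other_pos, other_v, horizon=5.0, dt=0.05):
    """Smallest predicted distance to a lead vehicle over the horizon.
    A negative result means the model predicts a collision."""
    x, v, ox = 0.0, v0, other_pos
    gap = ox - x
    t = 0.0
    while t < horizon:
        v = max(0.0, v + accel * dt)  # our speed under the chosen action
        x += v * dt                   # our position
        ox += other_v * dt            # lead vehicle, assumed constant speed
        gap = min(gap, ox - x)
        t += dt
    return gap

# Ego car at 15 m/s; another car 40 m ahead doing 5 m/s.
for action, accel in (("slow down", -3.0), ("speed up", 2.0)):
    g = min_gap(v0=15.0, accel=accel, other_pos=40.0, other_v=5.0)
    print(f"{action:>9}: closest approach {g:6.1f} m")
```

In this scenario braking wins decisively, but move the other vehicle behind the car or into a merging lane and the comparison can flip, which is exactly the context dependence Rajkumar describes.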

What cars can do now:

  • Identify a big enough parking space
  • Park (with a human foot on the gas and brake pedals)
  • Maintain a safe distance from the car ahead during highway driving
  • Brake to avoid a forward collision at city speeds


What cars can't do yet:

  • Obey traffic signs
  • Identify pedestrians in the street
  • Stay in a highway lane under low-visibility conditions
  • Decide on a course of action to avoid hitting a shopping cart or a stroller


Most major automakers have shown off demos or at least announced plans to build cars that can drive without human guidance. Many of today's cars come equipped with features that automate various driving tasks and enhance visibility of their surroundings. Some vehicles can park themselves and brake to avoid pedestrians. Cadillac's Super Cruise option may be the closest thing on the market to going driverless: it can take over the accelerator and steering during highway driving. "For long trips, you can be hands-free and foot-free," says Nady Boules, director of General Motors' Electrical and Controls Integration Lab.

Yet as smart as today's cars may seem, they are cognitive toddlers. In a car brain, software, processors and an operating system must run the algorithms that determine what the car should do, and those decisions must be made quickly. Sensors and processors made by automotive brake and electronics supplier Continental Automotive Systems, for example, typically take in new data and rerun their algorithms once every 10 to 60 milliseconds. Fast, but not as fast as the human nervous system, which can pass a message from a sensory neuron, through several interneurons, to a motor neuron within several milliseconds.
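
In the abstract, that cycle is a fixed-rate sense-decide-act loop that must finish its work within each tick's budget. The following is a bare-bones sketch of such a loop; the sensor reading, the trivial braking policy and the actuator call are placeholders, not any supplier's real interface:

```python
import time

CYCLE_S = 0.02  # a 20-millisecond tick, within the 10-60 ms range cited above

def read_sensors():
    # Stand-in for real radar/camera input.
    return {"gap_m": 30.0, "ego_speed_mps": 15.0}

def decide(state):
    # Placeholder policy: brake if the gap ahead is short, else coast.
    return -2.0 if state["gap_m"] < 20.0 else 0.0

def actuate(accel_cmd):
    pass  # would command throttle/brake controllers

for _ in range(100):  # run 100 ticks (about two seconds) for demonstration
    start = time.monotonic()
    actuate(decide(read_sensors()))
    # Sleep away whatever remains of this tick's budget.
    time.sleep(max(0.0, CYCLE_S - (time.monotonic() - start)))
```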

Virtual test track
Much of the training and testing needed to smarten a driverless car’s brain can be done using computer modeling. "You can mimic the operating world in software, run the vehicle virtually in the environment and inject all things it might encounter into it," Rajkumar says.
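
In skeletal form, such a virtual run might look like the sketch below, in which invented scenario events are injected into a stand-in for the decision software. A real simulator would model physics, sensor noise and timing, none of which appear here:

```python
import random

# Invented event catalogue for illustration only.
EVENTS = ("pedestrian_steps_out", "lane_markings_fade", "car_brakes_hard_ahead")

def vehicle_response(event):
    # Stand-in for the decision software under test.
    return {
        "pedestrian_steps_out": "emergency_brake",
        "lane_markings_fade": "alert_driver_and_slow",
        "car_brakes_hard_ahead": "increase_following_gap",
    }.get(event, "flag_for_review")

def run_virtual_drive(n_events, seed=0):
    rng = random.Random(seed)  # seeded so any failure can be replayed exactly
    for _ in range(n_events):
        event = rng.choice(EVENTS)
        print(f"{event:>22} -> {vehicle_response(event)}")

run_virtual_drive(5)
```

Because the run is deterministic for a given seed, a scenario that trips up the software can be replayed unchanged after a fix, something no real-world test drive can guarantee.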

Continental has spent years categorizing sensor data using a combination of human labeling and machine learning. First, people go through the data captured from millions of kilometers of driving, matching information from cameras and radar, and identifying the most important elements—in particular pedestrians and other vehicles. These manually created labels are used to train software that can then begin to classify images of pedestrians and other vehicles on its own. As the software generates these new labels, Continental developers step in to verify their accuracy. "The first initial inputs are pretty laborious," says Zach Bolton, a Continental project engineer. "The more output you are able to do, the smarter [the software gets], and the less laborious it becomes in the long run."
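
The bootstrapping loop Bolton describes can be compressed into a short sketch. The nearest-centroid "model," the toy height-and-width features and the verification stub below are stand-ins for Continental's actual pipeline, which the article does not detail:

```python
def centroid(rows):
    return [sum(col) / len(rows) for col in zip(*rows)]

def train(labeled):
    # One centroid per class; a stand-in for a real learned classifier.
    return {lbl: centroid(rows) for lbl, rows in labeled.items()}

def predict(model, x):
    sq_dist = lambda c: sum((a - b) ** 2 for a, b in zip(x, c))
    return min(model, key=lambda lbl: sq_dist(model[lbl]))

def human_verifies(x, proposal):
    return True  # stand-in for the developer sign-off step

# Round 1: costly manual labels. Features are toy [height_m, width_m] pairs.
labeled = {"pedestrian": [[1.7, 0.5], [1.6, 0.4]],
           "vehicle":    [[1.5, 1.8], [1.4, 2.0]]}
model = train(labeled)

# Round 2: the model proposes labels for fresh data; only verified
# proposals are folded back in before retraining.
for x in ([1.8, 0.5], [1.3, 1.9]):
    proposal = predict(model, x)
    if human_verifies(x, proposal):
        labeled[proposal].append(x)
model = train(labeled)  # each round demands less manual effort than the last
```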

Eventually, through this labor-intensive training, the automotive brain knows that a Mini Cooper and a Ferrari are both cars, and it can tell whether the object ahead is a speed bump or a dog. Cars may not be too clever today, Rajkumar notes, but they are well on their way to becoming better drivers than we are.