This visual (computational) restriction, which LAGR founder Larry Jackel likened to a person driving in dense fog or a blinding blizzard, motivated the program managers to challenge the contestant programs' depth perception. In San Antonio they did so by placing a goal (a fixed global positioning system, or GPS, waypoint) directly behind a cul-de-sac formed by four-foot- (1.2-meter-) high plastic barriers. Starting several feet from the entrance, a program with only short-range vision would drive straight toward the goal, and into the dead end, until it hit a barrier, leaving the clueless robot to search aimlessly for a way out along the wall. A smarter robot with greater depth perception would see the dead end from afar and immediately adjust its course around the barrier to reach the goal sooner.
Many teams failed to equip the standard-issue LAGR robot with long-range vision sufficient to ace the cul-de-sac challenge outright, but participants could still fall back on a mapping system that stored acquired information about the barrier. That way, a robot adapted its behavior and avoided repeating the same mistake: after two runs, it had usually mapped the wall as one continuous obstacle and figured out that it had to go around to reach the goal.
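The idea of remembering obstacles across runs can be illustrated with a minimal sketch. This is not the actual LAGR software; the grid-cell representation, class name and method names below are assumptions chosen for clarity.

```python
# Illustrative sketch (not the actual LAGR code): a persistent obstacle
# map that lets a robot avoid repeating a collision on a later run.
# The cell coordinates and the recording API are assumptions.

class ObstacleMap:
    """Grid map that remembers barrier cells across runs."""

    def __init__(self):
        self.blocked = set()  # (x, y) cells where a barrier was encountered

    def record_barrier(self, cell):
        """Store a cell where the robot hit or observed a barrier."""
        self.blocked.add(cell)

    def is_traversable(self, cell):
        """A planner consults this before routing through a cell."""
        return cell not in self.blocked


# Run 1: the robot drives straight at the goal and bumps the wall.
world_map = ObstacleMap()
world_map.record_barrier((5, 0))  # section of the cul-de-sac wall
world_map.record_barrier((5, 1))

# Run 2: the stored map steers the planner around the known wall.
assert not world_map.is_traversable((5, 0))
assert world_map.is_traversable((6, 0))
```

Because the map persists between runs, the second attempt can plan around the wall before ever reaching it, which is the behavior the article describes.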
In addition to the obstacles, a portion of the final LAGR challenge, called the "petting zoo," let contestants demonstrate the specific strengths of their robot algorithms. LeCun exhibited his program's quick response to obstacles that suddenly popped up. The trait reflects a design akin to a human reflex: a faster (but less analytical) system scans six times per second for any obstacles within 15 feet (4.6 meters), while a slower system processes long-range data in more detail once every second. "We ran the robot through the crowd," he says, referring to spectators and LAGR teams who attended the event. "People weren't afraid of it since they saw it was driving really well and didn't bump anyone. It drives itself better than we can."
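The two-rate design described above can be sketched in a few lines. This is a hedged illustration, not LeCun's actual code: the function names, the sensor callbacks and the command strings are all assumptions; only the rates (roughly 6 Hz for the short-range reflex, 1 Hz for long-range analysis) and the 15-foot (4.6-meter) reflex range come from the article.

```python
# Illustrative sketch of a dual-rate controller: a fast "reflex" layer
# checks for nearby obstacles on every tick, while a slower deliberative
# layer replans from long-range data only once per second.
# Sensor callbacks and command names are assumptions for the example.

REFLEX_PERIOD = 1.0 / 6   # fast loop: ~6 Hz, short range (15 ft / 4.6 m)
PLANNER_PERIOD = 1.0      # slow loop: 1 Hz, detailed long-range analysis


def control_step(t, last_plan_time, nearby_obstacle, long_range_view):
    """One tick of the dual-rate controller.

    Returns (command, updated_last_plan_time).
    """
    # Reflex layer: every tick, veer immediately if something is close.
    if nearby_obstacle(max_range_m=4.6):
        return "swerve", last_plan_time
    # Deliberative layer: replan from long-range data once per second.
    if t - last_plan_time >= PLANNER_PERIOD:
        return "replan:" + long_range_view(), t
    return "continue", last_plan_time


# Simulated sensors for a short demo run.
cmd, tp = control_step(0.0, -1.0, lambda max_range_m: False, lambda: "clear")
assert cmd.startswith("replan")   # a second has elapsed, so replan fires
cmd, tp = control_step(0.17, tp, lambda max_range_m: True, lambda: "clear")
assert cmd == "swerve"            # reflex overrides the plan immediately
```

The key design point is that the cheap short-range check runs many times between each expensive long-range analysis, so a suddenly appearing obstacle triggers an immediate swerve rather than waiting for the next full replan.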
The LAGR competition differs from the sportier and better-publicized DARPA Urban Challenge, whose course resembles city streets, and from the agency's Grand Challenge, in which autonomous vehicles race through the desert. Both of those competitions allow vehicles to use cameras, sensors, GPS, radar and lasers, whereas LAGR vehicles are limited essentially to stereo cameras, GPS and onboard computers.
The goal of autonomous vehicle research is to make unmanned transport an option during dangerous situations, such as war, to avoid putting a person's life at risk. Great strides are being made in visual navigation, thanks to projects like LAGR, but ever more sophisticated systems will eventually have to be developed to deal with increasingly complex problem-solving demands.
Now that LAGR has wrapped up, researchers are unsure whether DARPA will pony up more cash for such research. "It's hard to tell whether [LAGR] will be perceived as a great success or failure because the devil is in the details," says LeCun, who points out that the best systems ran 2.5 times faster than the baseline ones already built into the robot. "I think there is a huge potential in some of the techniques that were developed during this program. It would be a shame if people disappeared into the woods and nothing came of it."