But imagine how quickly that frustration could turn to fear if some of those drivers were robots guided by software. Navigating such a complex environment is largely uncharted territory for artificial intelligence, let alone for big steel.
"It's a little bit scary to think about (our robotic car among) other human drivers or other really large vehicles," Mike Montemerlo, senior research engineer at Stanford University's Artificial Intelligence Lab, said here Thursday.
Montemerlo is one of the lead scientists developing Stanford's newest robotic contestant for DARPA's Urban Challenge, a road race of artificial intelligence set for November. The contest is a follow-up to the defense agency's 2005 desert race, the Grand Challenge, which Stanford's robot Stanley won in a milestone for 21st-century AI. Finishing fastest, in under seven hours, Stanley was one of only five vehicles to complete the 132-mile Nevada desert course in 2005; the previous year, all of the competitors failed entirely.
Now, Stanford's AI team has built Stanley's successor, named "Junior" (after Stanford University founder Leland Stanford Jr.), a modified 2006 Volkswagen Passat wagon in a bright, German-manufactured blue. Montemerlo and team gave CNET News.com a preview of Junior and its technology at Stanford before the annual conference of the American Association for the Advancement of Science, where Stanford's Sebastian Thrun, AI director and head of the Stanford Racing Team, will give Junior its public unveiling.
Junior is still in development, but the robot is already far ahead of its parent in terms of technology. (The Stanford Racing Team plans to begin its testing phase in March.) Junior has to be smarter if it is to meet the stiff challenge of navigating city streets alongside other vehicles, including rival robots and human-driven cars supplied by DARPA.
In the desert race, Stanley only had to process the terrain in front of it, such as rocks or bumpy roads, because it wasn't driving among other robots. But in this race, Junior must be aware of fast-moving objects all around it, including its robotic rivals, and it must understand street signs, traffic lights and other basic rules of the road, even when other robots are breaking those rules. As Thrun puts it, "The current challenge is to move from just sensing the environment to understanding the environment."
As a result, Junior must have much more sophisticated sensors that can "see" the world in a 360-degree view and process that data in as close to real time as possible. The Junior prototype, for example, has a new, high-definition lidar detection system by Velodyne, which spins around to give the robot an omnidirectional view of its surroundings. It also has a Point Grey Ladybug 2 video system, with six video cameras to capture near high-def video in all directions.
As opposed to Stanley, which built a 3D model of the world over time, Junior will attempt to use its more sophisticated sensors to create a picture in real time. The speed of response is crucial in a city setting.
Junior's software must also include decision-making and predictive abilities that Stanley didn't possess. For one, Junior will need to identify objects and make decisions based on that information. If Junior were to encounter a curb, for example, it would need to swerve to avoid a collision. But it wouldn't swerve in order to pass another robot if that meant crossing a double line, because that would break the rules of the road.
For that reason, Junior has new software components that handle perception and decision making. One algorithm the AI lab has developed is for object tracking, which helps the robot recognize bikes, cars, curbs, road markings and other objects, whether moving or stationary. The algorithm classifies objects--e.g., that is a car moving 10 mph--and runs that information through a planning tool, which matches the data against the rules of the road to decide how to proceed.
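The classify-then-plan flow Montemerlo describes can be sketched in a few lines of Python. This is purely an illustrative sketch, not the Stanford team's actual code; every class name, object category and threshold here is an assumption made up for the example.

```python
# Illustrative sketch of a perception-to-planning pipeline: classify a
# tracked object, then consult simple rules of the road to pick an action.
# All names and thresholds are hypothetical, not the Stanford team's code.
from dataclasses import dataclass

@dataclass
class TrackedObject:
    kind: str           # classifier output, e.g. "car", "bike", "curb"
    speed_mph: float    # estimated speed from the tracker
    lane_marking: str   # marking between us and the object, e.g. "double_line"

def plan(obj: TrackedObject) -> str:
    """Match a classified object against simple rules of the road."""
    if obj.kind == "curb":
        return "swerve"                  # static obstacle: steer around it
    if obj.kind == "car" and obj.speed_mph < 5:
        # Slow vehicle ahead: pass only if the road markings allow it.
        if obj.lane_marking == "double_line":
            return "follow"              # crossing a double line breaks the rules
        return "pass"
    return "proceed"

print(plan(TrackedObject("car", 2.0, "double_line")))  # follow
```

The point of the structure is the one the article makes: the decision depends not just on what the object is, but on whether a maneuver around it would violate traffic rules.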
Stanford's team has also built simulation software to test its algorithms in a virtual world, a much less dangerous way to try out new city-driving software, according to Montemerlo. In one simulation, for example, 20 cars all running the same copy of the software drive in a figure eight through a two-way stop sign, and the robots eventually get stumped. "You can't test 20 robotic vehicles, but you learn a lot about the software from the simulation," he said.
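Why identical copies of the same software get stumped at a stop sign can be shown with a toy loop: if every car's policy is "wait while another car is waiting," symmetric cars never break the tie. This is a deliberately simplified illustration of that failure mode, not the Stanford simulator.

```python
# Toy deadlock demo (not the Stanford simulator): every car at the stop
# runs the same policy -- keep waiting if any other car is also waiting.
def step(waiting):
    """One tick: a car stays waiting only if another car is waiting too."""
    return [w and sum(waiting) > 1 for w in waiting]

cars = [True, True]          # two cars arrive at a two-way stop together
for _ in range(10):
    cars = step(cars)
print(cars)                  # [True, True] -- the symmetric policy never yields
```

A lone car (`[True]`) clears the intersection on the first tick, which is exactly what makes the multi-robot case the interesting one to simulate.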
The car also has an electronic power-steering system this year, unlike Stanley's hydraulic system, which will give the vehicle finer control in an urban environment. The electronic steering booster lets the Stanford engineers regulate how much torque the car exerts on the steering wheel, for example, control that is far more necessary in the crowded environment of city streets.
Despite DARPA's mission of advancing robotics for military vehicles via its challenges and grants, the Stanford team is highly focused on advancing AI for consumer cars. More than 40,000 people die in vehicular accidents in the United States every year, and Thrun's team believes that effective autonomous controls for cars could significantly reduce that number.
Since Stanford won the 2005 Grand Challenge, DARPA-sponsored races have become much higher profile. This year's competitors are stronger as a result. New contestants include a handful of defense contractors and universities such as MIT, known for its AI department. AI stalwarts like University of California at Berkeley, Georgia Tech and Carnegie Mellon will also be among the contenders.
Also, more universities have teamed with car manufacturers for this race, much like Stanford did with Volkswagen in 2005. Carnegie Mellon, for example, has partnered with General Motors.
"The level of competition is definitely getting tougher every year," said Montemerlo.