But imagine how quickly that frustration could turn to fear if some of those drivers were robots running self-driving software. Navigating such a complex environment is largely uncharted territory for artificial intelligence, let alone big steel.
"It's a little bit scary to think about (our robotic car among) other human drivers or other really large vehicles," Mike Montemerlo, senior research engineer at Stanford University's Artificial Intelligence Lab, said here Thursday.
Montemerlo is one of the lead scientists developing Stanford's newest robotic contestant in DARPA's Urban Challenge, a road race of artificial intelligence set for November. The contest is a follow-up to the defense group's 2005 desert race, the Grand Challenge, a milestone for AI in the 21st century. Stanford's robot, Stanley, finished fastest, in under seven hours, and was one of only five vehicles to complete the 132-mile Nevada desert course in 2005; the previous year, all of the competitors failed entirely.
Now, Stanford's AI team has built Stanley's successor, named "Junior" (after Stanford University founder Leland Stanford Jr.), a modified 2006 Volkswagen Passat wagon in a bright, German-manufactured blue. Montemerlo and team gave CNET News.com a preview of Junior and its technology at Stanford before the annual conference of the American Association for the Advancement of Science, where Stanford's Sebastian Thrun, AI director and head of the Stanford Racing Team, will give Junior its public unveiling.
Junior is still in the development phase, but the robot is already far ahead of its parent in terms of technology. (The Stanford Racing Team plans to begin its testing phase in March.) Junior has to be smarter if it is to meet the stiff challenge of navigating city streets alongside other vehicles, including other robotic contestants and human-driven cars fielded by DARPA.
In the desert race, Stanley only had to process the terrain in front of it, such as rocks or bumpy roads, because it wasn't driving among other robots. But in this race, Junior must be aware of fast-moving objects all around it, including its robotic rivals, and it must understand street signs, traffic lights and other basic rules of the road, even when other robots are breaking those rules. As Thrun puts it, "The current challenge is to move from just sensing the environment to understanding the environment."
As a result, Junior must have much more sophisticated sensors that can "see" the world in a 360-degree view and process that data in as close to real time as possible. The Junior prototype, for example, has a new, high-definition lidar detection system by Velodyne, which spins around to give the robot an omnidirectional view of its surroundings. It also has a Point Grey Ladybug 2 video system, with six video cameras to capture near high-def video in all directions.
Unlike Stanley, which built a 3D model of the world over time, Junior will use its more sophisticated sensors to create that picture in real time. Speed of response is crucial in a city setting.
Junior's software also must include new decision-making and predictive abilities that Stanley didn't possess. For one, Junior will need to be able to identify objects and make decisions based on that information. For example, if Junior were to encounter a curb, it would need to swerve around it to avoid a collision. But it wouldn't want to swerve in order to pass another robot if it meant crossing a double line because that would be breaking the rules of the road.
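The trade-off described above can be sketched as a simple rule check. This is only an illustrative Python sketch of that reasoning, with hypothetical names; it is not Junior's actual software, which the team has not published.

```python
def plan_avoidance(obstacle_kind, would_cross_double_line):
    """Decide whether to swerve around an obstacle.

    A static hazard such as a curb justifies swerving, but passing
    another vehicle is not worth breaking a rule of the road.
    (Hypothetical logic for illustration only.)
    """
    if obstacle_kind == "curb":
        return "swerve"      # avoid a collision with a static hazard
    if obstacle_kind == "vehicle" and would_cross_double_line:
        return "wait"        # passing is not worth crossing the double line
    if obstacle_kind == "vehicle":
        return "pass"        # legal overtake
    return "stop"            # unknown obstacle: be cautious

print(plan_avoidance("curb", False))      # swerve
print(plan_avoidance("vehicle", True))    # wait
```

The point of encoding the rules explicitly is that the same obstacle-avoidance reflex produces different maneuvers depending on what the obstacle is and what traffic law applies.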
For that reason, Junior has new software components that deal with perception and decision making. One algorithm the AI lab has developed is for object tracking, which helps the robot recognize bikes, cars, curbs, road markings and other objects, moving or stationary. The algorithm will classify objects--e.g., that is a car moving 10 mph--and run that through a planning tool that can match the data to rules of the road in order to make a decision about how to proceed.
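The classify-then-plan pipeline described above can be sketched roughly as follows. The `Track` class, the rule table and the action names are all assumptions made for illustration; the Stanford team's real perception and planning code is far more involved.

```python
from dataclasses import dataclass

@dataclass
class Track:
    """A classified object from the tracker (illustrative fields)."""
    kind: str          # e.g. "car", "bike", "curb"
    speed_mph: float   # estimated speed of the object

# Hypothetical rules-of-the-road table mapping object classes to behavior.
RULES = {
    "car":  lambda t: "follow" if t.speed_mph > 0 else "stop_behind",
    "bike": lambda t: "slow_and_give_room",
    "curb": lambda t: "steer_around",
}

def plan(track: Track) -> str:
    """Match a classified track against the rule table to pick an action."""
    rule = RULES.get(track.kind)
    return rule(track) if rule else "stop"   # unknown objects: be cautious

# e.g. "that is a car moving 10 mph" -> follow it
print(plan(Track("car", 10.0)))   # follow
```

Separating classification ("that is a car moving 10 mph") from the rule lookup is what lets the planner treat the same sensor blob differently once it knows whether it is looking at a curb or a competitor.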