DUBLIN, Ireland -- Sure, dealing with lane changes, firetrucks and construction projects is difficult for engineers building self-driving cars. But what about deciding which people to kill when an accident is unavoidable?
That's the kind of thorny problem that's a real issue for the auto industry as it moves to vehicles that steer, brake and accelerate for themselves. Perhaps because computer drivers are so closely compared with human ones, people have begun wrestling with those moral issues.
One of them is Chris Gerdes, an associate professor of mechanical engineering at Stanford, who's brought philosophy colleagues in to help get a grasp on emergency life-or-death decisions. One approach is to give rules in advance -- don't hit pedestrians, don't hit cars, don't hit other objects. Another is to try to project consequences -- the approach humans generally take. "The car is calculating a lot of consequences of its actions," Gerdes said, speaking here Tuesday at the Web Summit conference. "Should it hit the person without a helmet? The larger car or the smaller car?"
And what if the fewest people will be killed if a car's driver and passengers are the ones to die? A computer might make a very different decision than a human driver less inclined to self-sacrifice. That's a twist that goes beyond the classic trolley problem of deciding to kill one person to save others.
Although that's not the kind of programming they teach in universities these days, the auto industry has wrestled with life-and-death matters for decades. In some sense self-driving cars are just a new wrinkle.
That high-tech change to our automotive way of life is getting closer all the time. Nevada has given the green light for testing of autonomous vehicles on public roadways. Carmaker Nissan, meanwhile, has promised the arrival of self-driving cars by the end of this decade, and others including Audi and equipment supplier Continental are also pushing hard toward robotic capabilities. Then there's technology giant Google, which has long dabbled in self-driving cars and earlier this year unveiled a two-seater prototype that provoked some controversy over just how necessary a steering wheel might be.
Self-driving cars use an array of sensors -- lasers, radar, 2D and 3D cameras -- to sense what's around them and make decisions 200 times per second about the best course of action. Eventually, Gerdes believes, it's likely those decisions will come to resemble those that humans make.
That includes breaking laws -- crossing a double-yellow line to pass a stopped car, for example, or breaking the speed limit to pass another car as swiftly as possible.
"If we want to actually drive with these cars as other participants in this dance of social traffic, we may need them to behave more like we do, to understand that rules are more like guidelines," Gerdes said. "That could be a much more significant challenge."
Who's at fault?
Issues of life and death inevitably bring up discussions of car insurance. Is the car manufacturer or the driver responsible for paying when somebody is injured?
Self-driving cars actually can bring some clarity to the issue by virtue of the immense amount of sensor data they gather, including a record of where cars were and how fast they were going, said Seval Oz, a former leader of Google's self-driving car program who now is in charge of similar work for automotive technology supplier Continental.
"The beauty in this technology is that logs, logs, logs are kept. You will have a repository of data that says exactly what happened and who is at fault," Oz said at the conference.
Brad Templeton of Singularity University sees things differently, looking at the question more globally.
Today, whether a carmaker or a driver is deemed to be at fault, consumers end up paying for it either way -- in more expensive cars or more expensive insurance premiums, he said. If self-driving cars fulfill their promise of a lower accident rate than human drivers, those costs will shrink, and ultimately they're the right technology to pick.
That moral argument clearly holds sway with Templeton, who consulted for Google on its self-driving program. There will be difficulties when, inevitably, a self-driving car is found responsible for someone's death. But it's important to consider what happens if we let humans keep on driving, too, he argued.
"People do not like being killed by robots. They'd rather be killed by drink," Templeton said. "We'll be choosing between our fear of machines and our non-fear of being killed by drunks because we're so used to it."
One big motivation for self-driving cars is to help people who can't drive for themselves, or who can't drive well.
"Who is being deprived? The aged, the weak, the handicapped, people with macular degeneration or eye problems. We have a global population that is aging," Oz said. Self-driving cars fit into this moral equation, too, she said.
"Think what happens to people who are diminished and can't move around. Eventually they atrophy and die," Oz said. "We shouldn't be judged on how we treat our top 5 percent but how we accommodate our bottom 20 percent."