Self-driving car advocates tangle with messy morality

It's not all about 3D laser scanners and infrared cameras. Self-driving car engineers also must decide whether it's better for a car to kill one person to save five.

Stephen Shankland


Stanford engineering professor Chris Gerdes has been examining the complexities of programming self-driving cars to make moral decisions -- in this case, the "trolley problem" involving a decision that saves some lives at the expense of another's. Stephen Shankland/CNET


DUBLIN, Ireland -- Sure, dealing with lane changes, firetrucks and construction projects is difficult for engineers building self-driving cars. But what about deciding which people to kill when an accident is unavoidable?

That's the kind of thorny problem that's become a real issue for the auto industry as it moves toward vehicles that steer, brake and accelerate for themselves. And perhaps because computer-driven cars invite such close comparison with human drivers, people have begun wrestling with those moral issues.

One of them is Chris Gerdes, an associate professor of mechanical engineering at Stanford, who's brought philosophy colleagues in to help get a grasp on emergency life-or-death decisions. One approach is to give rules in advance -- don't hit pedestrians, don't hit cars, don't hit other objects. Another is to try to project consequences -- the approach humans generally take. "The car is calculating a lot of consequences of its actions," Gerdes said, speaking here Tuesday at the Web Summit conference. "Should it hit the person without a helmet? The larger car or the smaller car?"

And what if the fewest people will be killed if a car's driver and passengers are the ones to die? A computer might make a very different decision than a human driver less inclined to self-sacrifice. That's a twist that goes beyond the classic trolley problem of deciding to kill one person to save others.
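Gerdes doesn't spell out how either approach would be coded, but the contrast is easy to sketch. The toy Python below is purely illustrative -- every name in it, from Obstacle to rule_based_choice, is invented for this example and has nothing to do with Stanford's actual software -- and it sets a rules-in-advance policy next to one that scores projected consequences:

```python
# Toy illustration of the two decision styles Gerdes contrasts.
# All names and structures here are hypothetical, invented for this sketch.
from dataclasses import dataclass


@dataclass
class Obstacle:
    kind: str           # "pedestrian", "car", "barrier", ...
    people_harmed: int  # how many people would be hurt if this obstacle is struck


@dataclass
class Maneuver:
    name: str
    hits: list[Obstacle]  # obstacles this maneuver would strike


# Approach 1: rules given in advance, checked in priority order.
RULE_PRIORITY = ["pedestrian", "car", "barrier"]


def rule_based_choice(options: list[Maneuver]) -> Maneuver:
    """Prefer maneuvers that avoid the highest-priority obstacle types.

    Assumes at least one maneuver is available.
    """
    for forbidden in RULE_PRIORITY:
        safe = [m for m in options if all(o.kind != forbidden for o in m.hits)]
        if safe:
            options = safe
    return options[0]


# Approach 2: project consequences and pick the least harmful outcome.
def consequence_based_choice(options: list[Maneuver]) -> Maneuver:
    """Score each maneuver by total people harmed and minimize it."""
    return min(options, key=lambda m: sum(o.people_harmed for o in m.hits))


swerve = Maneuver("swerve into oncoming car", hits=[Obstacle("car", people_harmed=2)])
stay = Maneuver("stay in lane", hits=[Obstacle("pedestrian", people_harmed=1)])
print(rule_based_choice([swerve, stay]).name)         # "swerve into oncoming car": the rule protects the pedestrian
print(consequence_based_choice([swerve, stay]).name)  # "stay in lane": fewer people harmed overall
```

Even in this toy, the two policies diverge immediately: the rule list protects the pedestrian no matter the cost, while the consequence scorer accepts harming one person to spare two. Neither answer is obviously right, which is the point.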

Although that's not the kind of programming they teach in universities these days, the auto industry has wrestled with life-and-death matters for decades. In some sense self-driving cars are just a new wrinkle.

That high-tech change to our automotive way of life is getting closer all the time. Nevada has given the green light for testing of autonomous vehicles on public roadways, and California is moving quickly in that direction. Carmaker Nissan, meanwhile, has promised the arrival of self-driving cars by the end of this decade, and others including Audi and equipment supplier Continental are also pushing hard toward robotic capabilities. Then there's technology giant Google, which has long dabbled in self-driving cars and earlier this year unveiled a two-seater prototype that provoked some controversy over just how necessary a steering wheel might be.

Self-driving cars use an array of sensors -- lasers, radar, 2D and 3D cameras -- to sense what's around them and make decisions 200 times per second about the best course of action. Eventually, Gerdes believes, it's likely those decisions will come to resemble those that humans make.

That includes breaking laws -- crossing a double-yellow line to pass a stopped car, for example, or exceeding the speed limit to get past another car as swiftly as possible.

"If we want to actually drive with these cars as other participants in this dance of social traffic, we may need them to behave more like we do, to understand that rules are more like guidelines. That could be a much more significant challenge."

Who's at fault?

Issues of life and death inevitably bring up discussions of car insurance. Is the car manufacturer or the driver responsible for paying when somebody is injured?

Seval Oz, formerly leader of business development for Google's self-driving car project, speaks at Web Summit about the similar work she's now doing for Continental. Stephen Shankland/CNET

Self-driving cars actually can bring some clarity to the issue by virtue of the immense amount of sensor data they gather, including a record of where cars were and how fast they were going, said Seval Oz, a former leader of Google's self-driving car project who is now in charge of similar work for automotive technology supplier Continental.

"The beauty in this technology is that logs, logs, logs are kept. You will have a repository of data that says exactly what happened and who is at fault," Oz said at the conference.

Brad Templeton of Singularity University sees things differently, looking at the question more globally.

Today, whether a carmaker or driver is deemed to be at fault, consumers end up paying for it either in more expensive cars or more expensive insurance premiums, he said. If self-driving cars fulfill their promise of a lower accident rate than human drivers, then ultimately, they're the right technology to pick.

That moral argument clearly holds sway with Templeton, who consulted for Google on its self-driving program. There will be difficulties when, inevitably, a self-driving car is found responsible for someone's death. But it's important to consider what happens if we let humans keep on driving, too, he argued.

In a talk at Web Summit, Singularity University's Brad Templeton argues that self-driving cars will be safer than human-driven cars. Stephen Shankland/CNET

"People do not like being killed by robots. They'd rather be killed by drink," Templeton said. "We'll be choosing between our fear of machines and our non-fear of being killed by drunks because we're so used to it."

More mobility

One big motivation for self-driving cars is to help people who can't drive for themselves, or who can't drive well.

"Who is being deprived? The aged, the weak, the handicapped, people with macular degeneration or eye problems. We have a global population that is aging," Oz said. Self-driving cars fit into this moral equation, too, she said.

"Think what happens to people who are diminished and can't move around. Eventually they atrophy and die," Oz said. "We shouldn't be judged on how we treat our top 5 percent but how we accommodate our bottom 20 percent."