How will driverless cars make life or death choices? Google exec admits he doesn't know

As Uber rolls out its driverless cars in Pittsburgh, Google futurist Ray Kurzweil says he's still working out the moral dilemma posed by a potentially fatal accident involving an autonomous vehicle.

Chris Matyszczyk
3 min read

Technically Incorrect offers a slightly twisted take on the tech that's taken over our lives.


Where on the dashboard is its moral compass?

Karen Bleier/AFP/Getty Images

I think of morality as a feather that shifts in the wind and occasionally brushes your face delightfully as it falls to earth.

I'm still perturbed, however, by the question of whether technology has a fine grasp of it. Or whether it ever can.

I'm moved by the news that Uber has launched its driverless vehicles in Pittsburgh.

It crossed my mind to wonder whether these cars have already been programmed to decide whom to kill should they face a moral dilemma.

For example, the car is about to hit another car with three people inside. It can swerve out of the way, but then it'll hit three children standing on the sidewalk.

How does it make the choice?

Coincidentally, Google's director of engineering, Ray Kurzweil, offered an answer just this week.

My brief translation: "We don't know."

Kurzweil was speaking at Singularity University, the place where they can't wait until robots become us and we become them. Until one of us thinks the other unbecoming.

Kurzweil explained that the commandment "Thou Shalt Not Kill" was simply wrong. If someone is about to blow up a building or a city, he said, of course it's morally right to take them out.

When it comes to driverless cars, he said there was a need for "moral programming." He said that Isaac Asimov's Three Laws of Robotics were a good "first pass."

These are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
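
To see why Kurzweil calls this only a "first pass," here is a minimal, purely hypothetical Python sketch of the laws as a strict priority check. It is not anything Kurzweil or Google described: every predicate name is invented for illustration, and deciding whether any of them is actually true in a split second is exactly the unsolved "moral programming" he's talking about.

```python
# Purely illustrative: a toy prioritization of Asimov's Three Laws.
# None of these predicates exist in any real autonomous-vehicle stack;
# they stand in for judgments the car would somehow have to make.

def action_allowed(harms_human: bool, allows_harm_by_inaction: bool,
                   disobeys_human_order: bool, endangers_self: bool) -> bool:
    """Apply the three laws in strict priority order."""
    if harms_human or allows_harm_by_inaction:   # First Law trumps everything
        return False
    if disobeys_human_order:                     # Second Law yields only to the First
        return False
    if endangers_self:                           # Third Law yields to both above
        return False
    return True

# Example: swerving in a way that would injure bystanders fails the First Law.
print(action_allowed(harms_human=True, allows_harm_by_inaction=False,
                     disobeys_human_order=False, endangers_self=False))  # False
```

The sketch also makes the problem obvious: in the scenario above, braking, swerving and doing nothing can all harm someone, so a strict reading of the First Law forbids every option.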

Kurzweil does wonder whether an act of omission -- when you could have done something to save another person's life -- is the equivalent of killing.

In essence, though, he admitted he had no ready answer at all to driverless cars' moral character.

"I'm going to think about that more. I haven't done that analysis yet," he said.

For its part, Google said that Kurzweil isn't part of the driverless car project, which is now housed under Google X.

The company pointed me to the words of Andrew Chatham, a principal engineer.

"The main thing to keep in mind is that we have yet to encounter one of these problems," he began.

Is it the main thing? It may not have happened yet, but it would be nice to know how the car might decide who's going to breathe their last.

"Even if we did see a scenario like that," Chatham continued, "usually that would mean you made a mistake a couple of seconds earlier. And so as a moral software engineer coming into work in the office, if I want to save lives, my goal is to prevent us from getting in that situation, because that implies that we screwed up."

A noble goal, perhaps. Or a frighteningly self-regarding one. But what is the current, actual, real-world situation as these cars are rolling down our streets?

Chatham concluded that "the answer is almost always 'slam on the brakes.'"

Some may be touched by the open-endedness of the word "almost." And what if it's too late to brake?

He did concede that this might not always be the right answer, but said that it would have to be an extreme situation if "brake" wasn't the correct call.

Isn't the world full of extreme situations?

The implication is that no one is terribly clear when "brake" would be a really bad idea.

Not everyone wants to rest at Google's incomplete level of certainty.

Researchers at MIT recently created the Moral Machine. It seeks to discover what real human beings -- as opposed to engineers -- would do when confronted with certain driverless car dilemmas.

The researchers don't merely want to know how humans might make such choices. They're looking for "a clearer understanding of how humans perceive machine intelligence making such choices."

This seems rather wise.

Uber didn't respond to a request for comment, so it's unclear where its moral compass might be.

At some point, one of these catastrophic events may happen.

When it does, how many people will look to see what the machine decided to do and ask: "Why?"