Will self-driving cars be programmed to kill you?

Technically Incorrect: Ethicists are beginning to muse about whether you will be more dispensable than the five people your out-of-control car is about to hit.

Chris Matyszczyk
3 min read

Technically Incorrect offers a slightly twisted take on the tech that's taken over our lives.


Will it choose to dispose of you for the greater good? (Image: CNET)

Can the greater good be a little evil?

Will the coming technological dictatorship make grandiose decisions about an individual's worth to humankind?

These depressingly subtle questions are now being asked by those who foresee a difficult future when, for example, self-driving cars are compulsory. (And there's every prospect of that becoming so, at least according to Elon Musk.)

One question posed by ethicists at the University of Alabama at Birmingham (UAB) revolves around what happens when an out-of-control car is about to hit a group of schoolchildren.

This so-called Trolley Problem has fascinated philosophers for many a long evening. It poses the dilemma of someone standing at the switch of a trolley track. A bus full of kids is stuck at a level crossing. Simultaneously, your own child has fallen onto the rails. (It was Bring Your Kid to Work Day.)

Your switch can either save your child or the bus full of kids. Which will it be?

That scenario might be less probable than one involving self-driving cars. Your self-driving car's steering is on the fritz. It's heading for a group of kids on a sidewalk. You are in the car.

Should your car's software decide to divert your car into a wall, in order to save the kids? Yes, you would die. But there are several kids and you are just one person. The kids have a future and you -- well, you've seen better days, haven't you?

What criteria should be used to dictate such a decision?

UAB alumnus Ameen Barghi offered: "Utilitarianism tells us that we should always do what will produce the greatest happiness for the greatest number of people." Translation: "Sorry, you're dead."

There is another strain of philosophy, however. It's called deontology. This suggests that there are certain absolute values. The notion that murder is always wrong may be one. Therefore, Barghi explained of the Trolley Problem: "Even if shifting the trolley will save five lives, we shouldn't do it because we would be actively killing one."
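To make the contrast concrete, here is a minimal sketch in Python of how the two rules diverge on exactly this scenario. Everything in it is hypothetical -- the Outcome class, the function names, the numbers -- and it assumes nothing about how any actual self-driving system is written.

from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible action and its consequences (illustrative only)."""
    action: str            # e.g. "swerve into wall", "stay on course"
    lives_lost: int        # expected fatalities if this action is taken
    actively_kills: bool   # does the car itself cause these deaths?

def utilitarian_choice(outcomes):
    # Utilitarianism: minimize total harm, regardless of who causes it.
    return min(outcomes, key=lambda o: o.lives_lost)

def deontological_choice(outcomes):
    # Deontology (as Barghi frames it): never actively kill,
    # even if doing so would save a larger number of people.
    permissible = [o for o in outcomes if not o.actively_kills]
    return permissible[0] if permissible else min(outcomes, key=lambda o: o.lives_lost)

scenario = [
    Outcome("stay on course (hit the kids)", lives_lost=5, actively_kills=False),
    Outcome("swerve into wall (kill the passenger)", lives_lost=1, actively_kills=True),
]

print(utilitarian_choice(scenario).action)    # swerve into wall (kill the passenger)
print(deontological_choice(scenario).action)  # stay on course (hit the kids)

The same inputs, two defensible rules, two opposite verdicts. That is the whole problem in eight lines of logic.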

Of course, there are further and even more twisted kinks. What if the person in the self-driving car is, say, the King of England, LeBron James or someone society deems (rightly or wrongly) superior? Do they merit additional consideration? Or is it always just one life versus many?

I have contacted Google to ask whether its self-driving software team has considered some of these painfully difficult issues and will update, should I hear.

At the heart of this debate is how much the very nature of human life will change, as technology becomes ever more sophisticated, intrusive and in many ways controlling.

Will we accept, as Google's Ray Kurzweil predicted will happen by 2030, that we are mere hybrids -- part human and part machine -- and therefore in some sense less animate?

Will rationality, powered by ever more detailed algorithms, simply be the dominant philosophy, so that it would be clear that five lives saved is always better than one lost? (Or, on the other hand, that five lives lost will help reduce the population.)

Or will we cling to the notion that some things in life are simply fundamental, and one of those is that awful things sometimes simply happen?

(Via Science Daily)