
Google's vision of self-driving cars is wrong, says MIT professor

Technically Incorrect: MIT's David Mindell says NASA's moon missions weren't self-flying, so why should cars be self-driving? He calls handing over power to an "opaque" corporation like Google very troubling.

Chris Matyszczyk

Technically Incorrect offers a slightly twisted take on the tech that's taken over our lives.


Shouldn't we still be in control of these clever cars? Wayne Cunningham/CNET

Automation is terribly attractive.

So attractive, in fact, that some people act as if there should be no limits.

One can feel Google's shrieky frustration that its self-driving cars are getting into accidents that are all -- so Google insists -- the fault of humans.

Now, the company says that it's teaching its cars to behave more like us.

This is surely a first step. However, a professor at the Massachusetts Institute of Technology says that it would be terribly silly for cars to ever be entirely self-driving. In an interview with MIT News, David Mindell says there's no evidence that complete automation would improve humanity's lot.

"We need to rethink the notion of progress, not as progress toward full autonomy, but as progress toward trusted, transparent, reliable, safe autonomy that is fully interactive: The car does what I want it to do, and only when I want it to do it," said the professor of the history of engineering.

Some have adopted a utopian (for them) view that there will be perfect order, as cars all stick to the speed limit and behave politely toward each other.

Mindell, author of "Our Robots, Ourselves: Robotics and the Myths of Autonomy" -- published on Tuesday -- said this is a very peculiar dream. He concedes that self-driving cars might "reduce the workload" on us overwrought humans, but insists that ought to be the limit.

He noted that it was once thought unmanned, fully automated submersibles would be the perfect way forward for underwater exploration. It turned out that they needed humans to guide and control them in order to get the right information.

What about NASA's missions to the moon? Again, Mindell said, it was imagined that automation would handle everything, but astronauts still needed to make vital steering inputs.

Perhaps the most common example is, of course, commercial air travel.

"There are a lot of highly technical systems, but those systems are all imperfect, and the people are the glue that hold the system together," he said. "Airline pilots are constantly making small corrections, picking up mistakes, correcting the air traffic controllers."

Mindell even accused Google of being a little old-fashioned, saying the idea of complete automation is a touch 20th century. (He knows how to hurt.)

"The notion of ceding control of something as fundamental to life as driving to a big, opaque corporation -- people are not comfortable with that," Mindell said.

Google declined to comment. However, the lead engineer behind Google's self-driving project, Chris Urmson, offered some of the company's vision in a TED talk this year.

In essence, Google thinks that too many people die on the roads and traffic jams are getting worse, so let Google control it all. At least that's my impression of Urmson's talk.

Humans, said Urmson, just aren't competent enough -- save those who work at Google, one assumes. Time is being wasted, lives are being wasted, Google believes, so automating everything optimizes everything.

If only that were true. Still, as far as Google is concerned, any human pleasures associated with driving should be eliminated for the greater good.

It's quaint, modern and rather beautiful to hear a large brain like Mindell's champion human-centered existence. Some critics, such as Jaron Lanier, have long hoped that we wouldn't allow ourselves to live at the behest of machine systems.

Machines ought to exist to make life more human, not less.

That won't, however, convince some Google execs like Ray Kurzweil. He believes that as soon as we have robots in our brains, we'll be godlike.

Wouldn't it be terrible if HAL turned out merely to be Shallow HAL?