A call for machine morality

Wendell Wallach, lecturer at Yale University's Interdisciplinary Center for Bioethics, hypothesizes about what we face in an age of AI machines such as self-driving cars or household robots.

SAN FRANCISCO--Prediction: We are just a few years away from a catastrophic disaster brought about by an autonomous computer system making a decision--a disaster that will provoke a political response on par with 9/11.

That prediction is from Wendell Wallach, lecturer at Yale University's Interdisciplinary Center for Bioethics, who hypothesized about the challenges and opportunities we face in an age of artificially intelligent machines, such as self-driving cars or household robots. Wallach spoke here Saturday at the Singularity Summit, a two-day conference about AI and the possibility of developing smarter-than-human machines.

"I'm your friendly skeptic. I'm not convinced that we understand enough about intelligence to know whether we can pull this off," he said, referring to computers that can out-think people.

Wallach's specialty is bioethics, so he talked at length about the subject during his speech, "The Road to Singularity: Comedic Complexity, Technological Thresholds, and Bioethical Broad Jumps." Wallach said that we shouldn't underestimate the political power of fear when it comes to research and development of intelligent systems.

"How will this be handled from a public policy approach? Fear's not likely to stop scientific research, but it's certainly likely to slow it down," he said. "We need some mechanism for evaluating real potential dangers and (helping) leaders and the public to discriminate between what are the real challenges and the speculative ones."

As a result, a new field of inquiry is emerging, he said. It's referred to by various names, including "machine morality" and "robo-ethics," a term coined by people in the European Union. The field, Wallach said, is about devising and implementing moral decision-making faculties for artificial agents. Such standards are necessary for a world in which autonomous systems can make choices.

"Computers will have to be explicit reasoners. We must build AI to be sensitive to our moral systems," he said.

"Our intelligence emerges out of emotion and instincts. Computers start as logical platforms, and if they have emotions and instincts, it's only because we elect to insert them. Computers will need suprarational faculties, not just emotions--they'll need things like social (skills) and a theory of mind," he said.

Wallach acknowledged that concepts like machine morality and rights for robots are speculative. "These are fascinating thought experiments," he said.

Barney Pell, founder of natural language search engine Powerset, added later during a panel: "We can talk about (these things) and no one can prove us wrong for 20 to 30 years."
