
Artificial intelligence experts sign open letter to protect mankind from machines

The Future of Life Institute wants humanity to tread lightly while developing really smart machines.

Nick Statt, Former Staff Reporter

"Charlie" is an ape-like robotic system that walks on four limbs, demonstrated here in March 2014 in Hanover, Germany. The robot could conceivably be used in the kind of rough terrain found on the moon, or it could be a stepping stone toward humanity's destruction. Getty Images

We're decades away from being able to develop a sociopathic supercomputer that could enslave mankind, but artificial intelligence experts are already working to stave off the worst when -- not if -- machines become smarter than people.

AI experts around the globe are signing an open letter issued Sunday by the Future of Life Institute that pledges to safely and carefully coordinate progress in the field to ensure it does not grow beyond humanity's control. Signatories include the co-founders of DeepMind, the British AI company Google purchased in January 2014; MIT professors; and experts at some of technology's biggest corporations, including IBM's Watson supercomputer team and Microsoft Research.

"The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence....We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do," the letter said in part. A research document attached to the open letter outlines potential pitfalls and recommends guidelines for continued AI development.

The letter comes after experts have issued warnings about the dangers of super-intelligent machines. Ethicists, for example, worry about how a self-driving car might weigh the lives of cyclists against those of its passengers as it swerves to avoid a collision. Two years ago, a United Nations representative called for a moratorium on the testing, production and use of so-called autonomous weapons that can select targets and begin attacks without human intervention.

Famed physicist Stephen Hawking and Tesla Motors CEO Elon Musk have also voiced their concerns about allowing artificial intelligence to run amok. "One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand," Hawking said in an article he co-wrote in May for The Independent. "Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all."

In August, Musk tweeted that "we need to be super careful with AI. Potentially more dangerous than nukes."

"I'm increasingly inclined to think there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don't do something very foolish," he told an audience at the Massachusetts Institute of Technology in October.

The Future of Life Institute is a volunteer-run research organization whose primary goal is mitigating the potential risks of human-level man-made intelligence that could subsequently advance exponentially. It was founded by scores of mathematicians and computer science experts around the world, chiefly Skype co-founder Jaan Tallinn and MIT professor Max Tegmark.

The long-term plan is to stop treating fictional dystopias as pure fantasy and to begin seriously addressing the possibility that an intelligence greater than our own could one day act against its programming.