Killer robots? Cambridge brains to assess AI risk

Sure, fears of human-zapping robots might be a "flakey concern," but that doesn't mean technology is short on existential risks. The University of Cambridge is out to find the truth.

Zack Whittaker, Writer-editor
Zack Whittaker is a former security editor for CNET's sister site ZDNet.
Aww, it's a cute little robot. And it's saying, "I love you" in Japanese sign language. But how long will humans and robots peaceably coexist? (Credit: Honda)

Remember the cuddly Furby? Imagine it's grown a killer case (literally) of artificial intelligence, decided your house and your family are far better than its own, and resolved to murder you for them.

OK, so researchers think such a scenario is a "flakey concern" and wildly far-fetched. Still, the U.K.'s University of Cambridge is setting up a new center to analyze the dangers posed by artificial intelligence and the increasingly capable machines humans interact with.

Founded by distinguished philosophy professor Huw Price, cosmology and astrophysics professor Martin Rees, and Skype co-founder Jaan Tallinn, the project will aim to separate fact from science fiction to determine whether supersmart technology, fueled by artificial intelligence, could be a threat to humankind, reports the Associated Press (via NBC News).

The prospective Cambridge Project for Existential Risk has set a wide field of study for itself, ranging from gadgets gone bad to global warming. "Many scientists are concerned that developments in human technology may soon pose new, extinction-level risks to our species as a whole. Such dangers have been suggested from progress in AI, from developments in biotechnology and artificial life, from nanotechnology, and from possible extreme effects of anthropogenic climate change," the founders said on the project's Web site earlier this year.

"The seriousness of these risks is difficult to assess, but that in itself seems a cause for concern, given how much is at stake," the joint statement adds.

Speaking to the AP, Price questioned what happens when "we're no longer the smartest things around," and warned that we could be at risk from "machines that are not malicious, but machines whose interests don't include us."

From "2001: A Space Odyssey" to the "Terminator" trilogy, many films have considered the mayhem that could be unleashed by a killer robot-type-thing. And while many will no doubt scoff at the thought of malevolent robot on the loose decades down the line, Price insists that the potential risks inherent in the development of artificial intelligence should not be dismissed so easily.

"It tends to be regarded as a flakey concern, but given that we don't know how serious the risks are, that we don't know the time scale, dismissing the concerns is dangerous. What we're trying to do is to push it forward in the respectable scientific community," he said.

One example given is a computer that achieves some level of self-awareness and puts its own self-centered goals ahead of those of its human creators or "masters." Just as humans evolved and slowly took over the planet, chopping down great swaths of forest and driving dozens of species to extinction within living memory alone, Price warns, computer intelligence could follow a similar trajectory at a vastly accelerated rate.

Consider, too, that Human Rights Watch and Harvard Law School's International Human Rights Clinic have called for an end to the "development, production and use of fully autonomous weapons," such as those deployed in South Korea to guard the demilitarized zone.

While these automated weapons are far from boasting artificial intelligence, there is concern that these algorithm-controlled "killer robots" could eventually overstep their bounds.

The University of Cambridge risk center is planned for launch next year.