
Friday Poll: Could AI threaten humanity?

Elon Musk opened up a wave of discussion with his assertion that artificial intelligence has the potential to do horrible things to humanity.

Deadly AI from "Terminator Salvation." Warner Bros.

Science fiction is littered with stories of artificial intelligence gone awry. There are the unhappy replicants from "Blade Runner," HAL 9000 from "2001: A Space Odyssey" and the bad-boy robot portrayed by Arnold Schwarzenegger in "The Terminator." Elon Musk, the man behind SpaceX and Tesla Motors, believes dangerous AI could potentially pop up in the real world.

Last week at the MIT Aeronautics and Astronautics Department's 2014 Centennial Symposium, Musk warned that artificial intelligence could be humanity's biggest existential threat. That's a dramatic statement to make, but it's one he's been building up to over a series of comments. In a tweet earlier this year, he wrote that AI is "potentially more dangerous than nukes."

Musk isn't just warning of potential doom. He has suggested that government regulations might be needed to keep AI in check as the technology advances.

Musk's words have sparked some interesting comments from CNET readers. Reader captainhurt dismisses Musk's warnings as "fearmongering anti-tech propaganda." Others believe Musk is on to something. "As soon as we give self-determination to devices they will have cause to cheat, lie and go rogue," writes Alex20016.

Some people simply aren't concerned at the moment. "After so many years, AI is not yet out of infancy. We're talking about perhaps 50 years away. Why should we worry about it now?" writes CNET reader timaitoday.

Though we're not currently under threat from time-traveling, liquid-metal robots with critical-thinking skills, it's at least an interesting exercise to consider what the future of AI might look like. Will it be man's best friend, or humankind's worst enemy? Vote in our poll, let us know if you agree with Musk's concerns and share your thoughts in the comments.