
Boffins join Musk, Hawking in saying AI is threat to humanity

Technically Incorrect: Releasing a list of 12 threats to human civilization, academic researchers put artificial intelligence as an emerging and powerful risk.

Technically Incorrect offers a slightly twisted take on the tech that's taken over our lives.

And one day he'll have had enough of his creators. Documentary & Discovery HD Channel/YouTube screenshot by Chris Matyszczyk/CNET

If we're all going to die, there's something slightly exciting in the idea that the end will be unexpected.

For many, the most pulsating thought is that we'll build robots that will take a look at us one day and see us as mere detritus.

Stephen Hawking has warned of it. So has Elon Musk. Now artificial intelligence has appeared on the list of 12 Risks That Threaten Human Civilization.

Published by the Global Challenges Foundation and written by academics from Oxford University and elsewhere, the report seeks to identify risks to humanity that are, in its words, "infinite."

There, amid such well-known threats as extreme climate change, nuclear war, major asteroid impact and nanotechnology, is artificial intelligence. Yes, straight into the chart at No. 11.

The report's authors write of robots that we might create: "Such extreme intelligences could not easily be controlled (either by the groups creating them, or by some international regulatory regime), and would probably act to boost their own intelligence and acquire maximal resources for almost all initial AI motivations."

The use of "probably" is interesting. Is it logical that any artificial intelligence we create would necessarily want to boost its own intelligence -- just because that's what humans try to do (allegedly)?

The report continues: "And if these motivations do not detail the survival and value of humanity, the intelligence will be driven to construct a world without humans. This makes extremely intelligent AIs a unique risk, in that extinction is more likely than lesser impacts."

You'll have heard similar portentous notions before. How can we possibly know what we'll end up creating? It's a little like a relationship. You think your lover is perfect for you. Until, in year three, you discover their strange predilection for telling you what to do or else.

One rarely, it seems, hears a positive side to all this doom-laden thinking.

These researchers, however, offered the marvelous thought that robots might be our ultimate salvation: "An intelligence of such power could easily combat most other risks in this report, making extremely intelligent AI into a tool of great potential."

There's a certain beauty about all this. We create robots. They cure global warming. They completely negate the possibility of nuclear war. Global pandemics are eradicated. Hurtling asteroids are deflected with just one laser beam from a robot's left eye.

When all this is done, the robots take one look at us, think "nah, bored of you" and smite us with one easy zap.