Scary 'Slaughterbots' video shows danger of autonomous killer drones

Commentary: An institute backed by Stephen Hawking and Elon Musk offers a graphic warning against machines that decide whom to kill.

Chris Matyszczyk

Technically Incorrect offers a slightly twisted take on the tech that's taken over our lives.


Vultures are rarely people-friendly. (Stop Autonomous Weapons/YouTube screenshot by Chris Matyszczyk/CNET)

When you're smart, you can make decisions all by yourself.

That could be a problem if you're a so-called smart weapon.

A video presented on Friday at a meeting of the Convention on Certain Conventional Weapons at the United Nations in Geneva shows the frightening power of tiny AI-equipped drones that decide for themselves whom to kill.

Released by the Future of Life Institute -- which counts Elon Musk and Stephen Hawking among its backers -- the video shows how technologies created to fight the bad guys can suddenly be co-opted by the not-so-good guys for appalling purposes.

The video, which underscores how seriously those who would ban killer robots view the issue, isn't for the faint of heart or stomach. 

It shows dark and unseen forces directing the AI-equipped microdrones to murder particular senators and students. 

It shows how accurately these killer bots can operate and how simply they can change the course of life and history.

As it ends, you can remind yourself that it's all a piece of well-made fiction. 

But then AI expert Stuart Russell -- a professor at UC Berkeley -- appears to offer these soothing words: "This short film is more than just speculation. It shows the results of integrating and miniaturizing technologies that we already have."

The Associated Press reports that the UN meeting agreed that something should be done to set limits on such potentially devastating technology.

UN meetings often decide that something should be done.

The pace of technological development, however, always seems to outpace the ability of law and government to regulate its potential consequences.

Musk is among those already imploring governments to regulate artificial intelligence.

But once it can fly and decide for itself whom to kill, why should an AI drone care what governments think?