More than 2,400 scientists working on artificial intelligence projects have signed a pledge that they won't develop robots capable of attacking people.
The Lethal Autonomous Weapons Pledge is meant to discourage military firms and countries from producing AI-enhanced killer robots. It counts Google DeepMind's Demis Hassabis and SpaceX's Elon Musk among its signatories, who range from students to business leaders.
"We the undersigned agree that the decision to take a human life should never be delegated to a machine," the pledge says.
"There is a moral component to this position, that we should not allow machines to make life-taking decisions for which others -- or nobody -- will be culpable. There is also a powerful pragmatic argument: lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilizing for every country and individual."
In April, a group of researchers called for a boycott of the Korea Advanced Institute of Science and Technology, which was working with a defense company on artificial intelligence for military weapons. Their letter described autonomous weapons as a "Pandora's box."
This isn't the first time that Musk has spoken out against autonomous weapons. The Tesla Motors and SpaceX CEO was among 116 specialists from 26 countries who signed an open letter in 2017 urging the United Nations to ban lethal autonomous weapons. A few months later, he warned that competition among nations for AI superiority could spark a third world war.
Last month, SpaceX launched an artificial intelligence robot to help astronauts on the International Space Station.
SpaceX didn't immediately return a request for comment.