
Killer robots can be taught ethics

Georgia Tech engineer develops "ethical governor" to help military robots decide when, and whom, to shoot.

Mark Rutherford

Adherence to the Three Laws of Robotics as put forth by Isaac Asimov has been, until now, entrusted to whoever held the joystick. That may change.

A robotics engineer at the Georgia Institute of Technology has developed an "ethical governor," which could be used to program military robots to act ethically when deciding when, and whom, to shoot or bomb.

Ron Arkin has demonstrated the system using attack UAVs and actual battlefield scenarios and maps from recent U.S. military campaigns in Afghanistan.

In one scenario, a drone spots Taliban soldiers but holds its fire because they are in a cemetery, where fighting is prohibited under international law.

In another, the UAV identifies an enemy convoy close to a hospital, but limits itself to shooting up the vehicles to avoid collateral damage to the hospital. The mindful bot would also house a built-in "guilt system" that would force it to behave more cautiously after making a mistake.
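The behavior in these scenarios amounts to a layered rule check: hard prohibitions first, then proportionality constraints, then a caution threshold that rises after errors. Below is a minimal, hypothetical Python sketch of that logic; the names (EthicalGovernor, forbidden_zones, guilt, and so on) are illustrative assumptions for this article, not Arkin's actual architecture.

# A toy, hypothetical sketch of an "ethical governor"-style check,
# loosely modeled on the scenarios above. Names and structure are
# illustrative; this is not Arkin's actual system.
from dataclasses import dataclass, field

@dataclass
class Target:
    kind: str                    # e.g. "soldiers", "convoy"
    location: str                # e.g. "cemetery", "road"
    nearby_protected: list       # protected sites within the blast radius

@dataclass
class EthicalGovernor:
    # Hard prohibitions: zones where firing is forbidden outright.
    forbidden_zones: set = field(default_factory=lambda: {"cemetery"})
    # Sites that constrain weapon choice rather than forbid engagement.
    protected_sites: set = field(default_factory=lambda: {"hospital"})
    # "Guilt" counter: confirmed mistakes raise the caution threshold.
    guilt: int = 0

    def decide(self, target: Target) -> str:
        # 1. Absolute constraint: never fire into a forbidden zone.
        if target.location in self.forbidden_zones:
            return "HOLD FIRE: target in a protected zone"
        # 2. Proportionality: limit the strike near protected sites.
        if any(s in self.protected_sites for s in target.nearby_protected):
            return "ENGAGE vehicles only, precision weapon (collateral risk)"
        # 3. Guilt system: after a mistake, defer to a human operator.
        if self.guilt > 0:
            return "DEFER to human operator (elevated caution)"
        return "ENGAGE"

    def record_mistake(self) -> None:
        self.guilt += 1

# The article's scenarios, replayed:
gov = EthicalGovernor()
print(gov.decide(Target("soldiers", "cemetery", [])))      # holds fire
print(gov.decide(Target("convoy", "road", ["hospital"])))  # limits the strike
gov.record_mistake()
print(gov.decide(Target("soldiers", "field", [])))         # defers after a mistake

The "guilt" counter here is the simplest possible version of the idea: a confirmed mistake doesn't change the rules, it just lowers the robot's autonomy until a human signs off.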

While the work shows promise, it also highlights the limits of trying to program machines with morals, especially machines expected to perform in a complex battlefield environment, according to experts.

"Robots don't get angry or seek revenge but they don't have sympathy or empathy either," Noel Sharkey, a roboticist at Sheffield University, U.K., told New Scientist. "Strict rules require an absolutist view of ethics, rather than a human understanding of different circumstances and their consequences."

Arkin acknowledges that it may take a while before we can trust Predators and other unmanned killers with life-and-death decisions.

"These ideas will not be used tomorrow, but in the war after next, and in very constrained situations." Arkin is quoted in New Scientist. "The most important outcome of my research is not the architecture, but the discussion that it stimulates."