Utter the words "killer robot" and it's almost impossible not to picture the Terminator, the time-traveling autonomous murder machine that absolutely will not stop until its target is destroyed.
The T-800 is the stuff of science fiction.
But the Campaign to Stop Killer Robots, a coalition of more than 15 non-governmental organizations, believes nations must act now to ensure a machine is never given the choice to take a human life.
The campaign is trying to get a legally binding international prohibition to stop the production, development, and use of autonomous weapon systems.
An autonomous weapon is a system that, once it has been activated or launched, selects and tracks its own targets and kills them without further human intervention.
So our aim is to prevent the kill decision from being given to machines. In other words, we don't want machines to be delegated the decision to kill.
Launched two years ago, the Campaign to Stop Killer Robots says we're all too close to giving already advanced robotic weaponry the responsibility of killing.
Campaign coordinator Human Rights Watch has previously pointed to hardware such as US anti-missile technology, South Korean sentry robots, and the UK's prototype unmanned Taranis aircraft as examples of the kinds of systems that could be made fully autonomous, letting a computer pull the trigger.
A scenario the campaign says would be both pragmatically and morally problematic. Can they weigh up the military advantage against the potential harm to civilians?
Is that possible?
Who would be responsible for a strike when, essentially, it's the machine that's been preprogrammed to pull the trigger?
Do we want to live in a world where we have given machines the power to take human lives, without a human being even being there to pull the trigger when the target is attacked?
[MUSIC]
As technology advances, AI is poised to become a bigger part of our lives.
From everyday gadgets like robot vacuum cleaners to emerging tech like Google's self-driving cars.
Robotics professor Noel Sharkey is keen to point out, however, that the campaign's gripe is with autonomous machines that kill, not with AI in general.
If they develop autonomous bomb disposal robots, that would be a great technology. But there aren't the billions of dollars going into that that there will be into autonomous weapons.
So the only thing we want to stop is the kill decision being made by a machine.
All the other technological developments before that point are fine with us.
The campaign hopes that the world's nations can agree to ban killer robots, akin to international agreements on other types of warfare, for instance chemical weapons.
So that kind of global agreement is a huge task, but those involved say they are making progress.
Since the campaign's launch in 2013, the United Nations body tasked with banning weapons has taken it up; it's on the agenda. They've had two international discussions on the topic, and they will decide at the end of this year whether to continue discussions or move into negotiating mode towards an international treaty.
That's what we're pushing for, and we think it's inevitable.
It's almost 100 years since the term robot was first introduced.
And humankind's fascination with the idea of a deadly automaton has never waned.
From theater to the silver screen, we get a giddy thrill at the notion of our own mechanical creations getting out of hand, turning against their human masters.
It's all rather poetic when you see it on screen.
But the killing machines of real life are more mundane, and a lot more frightening.
The real ones are worse in many ways.
They will look like tanks.
They will look like ships, battleships.
They will look like jet fighters.
They might look a bit futuristic, like something Batman would fly, but that's what they'll look like.
It's not so long ago that people didn't know what drones were. The idea of drones flying around firing missiles was totally crazy. But now, suddenly, it's an almost inevitable part of conflict.
And I think we need to be aware that what we're looking at are systems on the brink of emerging, and we need to draw the line now. Otherwise, we're going to be sleepwalking into a situation where we have fully autonomous weapons and we can't turn back.
Do you agree with the campaign's goals?
Or do you have some reservations?
Let us know, and stay tuned to CNET!