Do androids dream of electric sheep? That's unclear, but I know for sure that every kid dreams of intelligent, thinking robots -- certainly every kid who goes on to work at CNET, in any case.
As we veer ever closer to the year 2016, my sci-fi-fuelled childhood fantasies of a bot with a "brain the size of a planet" are closer than ever to being realised. Artificial intelligence programs are already achieving above-average scores on college entrance exams. Artificial intelligence (or AI) is the practice of making a machine behave in a smart, responsive way. It's already changing our world and is, by my reckoning, the most fascinating field of technology right now.
But, as one professor I spoke to for this story put it, the "audacity of the attempt to build an intelligent machine" comes with a responsibility to know what we're meddling with. For everyone who ever thumbed through a copy of "I, Robot", mouth agape, here's what you need to know about AI in the modern world.
Robots are very close to killing us
Mention the phrase "killer robot" in conversation and you'll almost certainly raise a smile, your peers doubtless imagining a glowing blue humanoid cyborg sadly pondering, "What is love?" before its eyes turn red and it self-destructs, obliterating the northern hemisphere.
Deeply ingrained in modern pop culture is the notion that some manner of AI uprising is on the cards -- James Cameron's iconic image of a Terminator stamping on a mound of human skulls is never far from any geek's thoughts.
That playful, cinematic and deeply poetic cultural artefact belies the very real threat humanity faces, however. Not from killer robots overthrowing their human masters, but from intelligent robots following orders.
The immediate threat, experts warn, comes in the form of autonomous weapons -- military machines capable of killing without permission from a human. From unmanned planes to missile defence systems to sentry robots, we've already got military hardware that functions with very little input from a human mind.
Groups such as the Campaign to Stop Killer Robots say we're inching ever closer to closing the loop and letting machines handle our killing for us -- a scenario that's legally, pragmatically, and of course ethically problematic.
The less sensationally named Future of Life Institute recently published an open letter signed by hundreds of AI researchers and famous tech personalities, warning, "If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow."
The Campaign to Stop Killer Robots, a coalition of more than 50 non-governmental organisations, claims it's making progress towards a treaty, one akin to the international agreement outlawing chemical weapons. The UN has already hosted discussions on the subject of autonomous murderbots. One huge obstacle facing these groups, however, is that to rave about imminent robot slaughter makes you look like a crackpot who's watched "The Matrix" one too many times.
Humanity needs to revise its pop-culture-instilled notion that robots becoming self-aware and robots wiping out humanity will occur simultaneously. Machines that become smart enough to ponder their own existence may certainly be a problem decades down the line, but phenomenal advances in AI mean that robots that kill without even being programmed to understand the barest concept of mercy are uncomfortably close.
We're a long way from robot sentience
Artificial intelligence takes many forms, and while we've successfully programmed machines to clean our floors, set alarms on our phones, park our cars and take out military installations from above the clouds, things like introspection and self-awareness are proving a little tougher.
"Telling a joke, making an ethical judgement, deciding that you want to collaborate with some individuals and not others -- this rich texture of human life isn't there in our machines at all," said Sir Nigel Shadbolt, Professor of Computer Science at Oxford University.
For decades, humans have looked forward to the so-called "singularity", the moment of self-awareness that creates an explosion in self-improving machine intelligence. This will be triggered -- it's presumed -- by the exponential growth of computing power, coupled with advancing software complexity.
Futurist Ray Kurzweil predicted in a 2005 book that a model of human intelligence would be achieved as soon as the mid-2020s. What appears to be the case now, however, is that the complexity of our own minds, the key that gives rise to consciousness, is a lot more, well, complicated than we imagined.
"That spark of awareness in your head, we don't know where that comes from," Shadbolt said. "The complexity we embody that allows [consciousness] to happen isn't just by the fact that we've got this kind of cortex, this rational brain. We have an endocrine system, we're emotional, we have the three-layer brain... We are extraordinarily complex, and we have only begun to unpack just a tiny amount of that at this point."
"It's still the hard problem," Shadbolt said -- later joking when I ask what the biggest public misconception is concerning AI, "That it's just 10 years down the road."
That sentiment is shared by Murray Shanahan, Professor of Cognitive Robotics at Imperial College London, who told me, "The media often gives the impression that human-level AI of the sort we see in sci-fi movies is just around the corner. But it's almost certainly decades away.
"Two of the major problems," Shanahan explained, "are endowing computers and robots with a common sense understanding of the everyday world, and endowing them with creativity. By creativity I don't mean the sort of thing we see in the Picassos or Einsteins of the world, but rather the sort of thing that every child is capable of."
Robots won't be like us -- they'll be better
From the Terminator series to movies such as "I, Robot", "Ex Machina" and even "Short Circuit", the way we portray AI on screen has traditionally been human-centric. We tend to imagine a being that essentially looks and acts a lot like a person. As AI spreads into every aspect of our lives, we should be prepared to broaden our horizons when it comes to imagining the bounds and types of intelligence that can be valuable. After all, we've got plenty of human-grade intelligence already.
"The point can't be just to replicate ourselves," Shadbolt said. "We've got very interesting biological ways of doing that, so why on Earth would we want to do it in silicon?"
From the humble floor-cleaning robot to Siri to the neural networks that oversee data centres, AI is branching out in ways we couldn't have imagined decades ago. "If you define intelligence in a way that's more machine-centric," Professor Alan Woodward told me last year, "you'll find some very intelligent machines out there already."
That diversity in the kinds of AI now emerging may in part come down to the breadth of disciplines currently investigating machine intelligence. "There's a broad range of subjects now that look at the problem," Shadbolt said. "Psychologists look at it from a human context, there are animal psychologists, physiologists, neuroscientists, AI practitioners, all looking at it with a different angle.
"Fundamentally, we'll need an interdisciplinary approach, so for me there isn't one single discipline that will have all the answers."
That's the face of modern AI. Task-centric, wildly diverse intelligent systems, essentially mindless for now, but busily changing every aspect of human life nonetheless, whether it's public transport or patrolling the skies. The AI of today is nothing like the gloomy, glowing cyborg we once pictured -- it's weirder, more fascinating, more surprising. It's better than we imagined.