
Why 'outdated' Turing test is no longer the gold standard of AI

A chatbot pretending to be a 13-year-old boy recently passed the famous artificial intelligence test, its creators claim. But does Turing's test really tell us anything about AI?

Luke Westaway, Senior Editor

(Image credit: Shutterstock/VLADGRIN)

A chatbot impersonating a 13-year-old Ukrainian boy is -- according to its creators -- the first program to pass the famous Turing test for artificial intelligence. But while smart machines may well be out there, experts say the Turing test isn't the tool to find them, and we need a new definition of "intelligence".

The most recent case of a machine passing the test, which was devised by British computer scientist Alan Turing and requires a computer to fool a judge into believing it's human, is fishy in itself. AI experts have criticised the attempt, in which each interaction with the chatbot lasted just 5 minutes, and the program's language slips were covered by its fictional background.

As computer expert and ex-ZDNet Editor Rupert Goodwins notes, "Constraining interaction to a 13 year old boy whose first language wasn't English... what exactly does that prove?"

Let's face it, this probably isn't the singularity. (Image: Eugene Goostman)

Beyond the validity of this particular case, however, it's worth asking whether the significance that some have assigned to the Turing test is entirely deserved. 64 years on, the famous test may now be distracting us from much more exciting examples of artificial intelligence.

Behave yourself

Turing formulated his legendary test in his 1950 paper, "Computing Machinery and Intelligence". The paper sets out to answer the question, "Can machines think?" It begins by noting that the word "think" is too ambiguous to be useful, so Turing proposes replacing the question with a more concrete one, which he dubs "the Imitation Game" -- a language-based test that a machine passes if it imitates a human well enough to confound a judge.

The word "imitate" betrays the biggest problem with using the Turing test as a test for intelligence -- it only requires a computer to behave like a human. This encourages trickery, such as instructing your program to make slow, deliberate errors when asked to solve maths problems, or (as in the most recent case) disguising a dodgy grasp of grammar by claiming not to have English as a first language. You might trick a human, but this no longer feels like the right way to build a genuinely smart machine.
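To see how easily the test rewards mimicry over smarts, here's a purely illustrative sketch -- not code from any real entrant, and the function name is invented for this example. A chatbot that pauses before replying and occasionally fumbles simple arithmetic looks more "human" to a judge without being any cleverer:

import random
import time

def humanlike_answer(a: int, b: int) -> str:
    """Answer a multiplication question the way a Turing-test chatbot might:
    slowly, and with the occasional deliberate slip."""
    time.sleep(random.uniform(2, 6))       # humans don't reply instantly
    answer = a * b
    if random.random() < 0.2:              # every so often, get it slightly wrong
        answer += random.choice([-2, -1, 1, 2])
    return f"um, {answer} i think? maths isn't really my thing"

print(humanlike_answer(7, 8))

None of this makes the program better at multiplication -- quite the opposite -- which is exactly the problem with treating imitation as the measure of intelligence.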

"Since Turing's paper," Professor Alan Woodward of the Department of Computing, University of Surrey told CNET, "artificial intelligence has burgeoned into a much more detailed discipline, with many branches and subtleties that he could not have envisaged. Within that we have developed a greater understanding of how to define machine intelligence, and the various forms in which this can manifest itself.

"The test is a little outdated," Woodward adds. "Most people in the field of AI would say that things have moved on a long way. Some people take it as the gold standard, I'm not sure it is today."

Ego-centric intelligence

The view that the path to machine intelligence is to mimic humans isn't just outdated, it's ego-centric too, assuming that our own particular brand of smarts is what we should be aiming to replicate. Computer scientists and philosophers agree that we need a different definition of intelligence -- one that encompasses the different breeds of computer systems that are emerging today.

As philosopher Andy Clark explains in the introduction to his 1997 book "Being There: Putting Brain, Body and World Together Again", human minds aren't special because they have a vast vocabulary, but because they are incredibly efficient at getting things done.

"We imagined the mind as a kind of logical reasoning device coupled with a store of explicit data," Clark wrote. "A kind of combination logic machine and filing cabinet. In so doing, we ignored the fact that minds evolved to make things happen. We ignored the fact that the biological mind is, first and foremost, an organ for controlling the biological body. Minds make motions, and they must make them fast -- before the predator catches you, or before your prey gets away from you. Minds are not disembodied logical reasoning devices."

With the definition broadened like this, the world of artificial intelligence suddenly becomes a lot more colourful. You could argue, for instance, that there's more practical, elegant intelligence to be found in a room-navigating Roomba than there is in Eugene Goostman, the Turing chatbot that deceived a panel of judges earlier this week.

I, for one, welcome our new Roomba overlords

Believe it or not, we're already doing better than the Roomba. Cutting-edge machine-learning and decision-making programs are being networked with flight data to accurately determine wind speed, neural networks help oversee data centres, and Google is building sensor-packed, road-navigating self-driving cars that behave more like primitive animals than automobiles.

None of them speak a word of English, though that's not to say there's no room for machines that do work with language -- IBM's Watson supercomputer impressed us when it won the $1 million prize on "Jeopardy!"

"If you define intelligence in a way that's more machine-centric," Woodward says, "you'll find some very intelligent machines out there already."

The (adorable) face of modern AI. (Image: Google)

Heuristic learning and neural networks are where we should be looking for signs of intelligent machine behaviour, Woodward says: "Things that weren't really envisioned in the early days."

When we finally construct an AI that matches humans in the intelligence stakes, don't be surprised if it can't speak. Besides, what could a neural network possibly have to say to puny humans like us?