Big news in AI this week: IBM's Watson computer defeated "Jeopardy" champions Ken Jennings and Brad Rutter in a three-night prime-time exhibition match. What does that win mean for computing and, more importantly, for humanity? That's the topic of this week's Reporters' Roundtable, and to discuss it we have two great guests, both with current books on the topic of computer-vs.-human competition.
First up is Stephen Baker, author of Final Jeopardy: Man vs. Machine and the Quest to Know Everything. Baker reported on Watson's development from inside IBM headquarters to write the book; before that, he was BusinessWeek's senior technology writer.
And branching out a bit from the Watson news, we also have Brian Christian with us. He's the author of The Most Human Human: What Talking with Computers Teaches Us About What It Means to Be Alive, which will be out on March 1. He's also the author of the recent Atlantic cover story Mind vs. Machine, which is a great primer on this topic. Both works tell the story of Brian's participation in the annual Loebner Prize, a Turing test competition in which humans face off against computers to convince judges that they are human. Brian, it should be noted, won the "Most Human Human" award.
Ep. 65: Debating the nature of robot minds
Some of our discussion points
First, the "Jeopardy" tournament. What problems had to be solved to make Watson a viable contender? What human skill does Watson emulate? What did the team overlook?
It strikes me that the game was created for human contestants, and bringing a computer into the arena upset that balance. Buzzer reaction time is a highly variable human skill, but it's not an interesting challenge for a computer, whose reaction time can be nearly instantaneous. If we took the buzzer out of the competitive equation somehow, or made speech recognition or OCR part of it, how do you think Watson would have done?
On the other hand, Watson was disconnected from the Internet; it couldn't Google anything. Why this artificial constraint?
What did Watson learn from its older brother, the chess player Deep Blue? Was there a breakthrough that made Deep Blue competitive against Kasparov?
Let's talk about the Turing test, Brian. First, what is it, and what is the Loebner Prize?
How good are computer programs getting at the Turing test? Are the conversations open-ended?
How did you win against them? Can computers have personality?
What do demonstration projects like Watson and Loebner Prize competitions teach us about our own brains and minds?
I want to talk about adaptability. It's widely believed that computers are formidable foes because they are fast and highly adaptable, and that humans get stuck in intellectual and conceptual ruts. But Deep Blue never played again after beating Kasparov, and Brian, in 2009 the humans beat the machines in the Loebner competition by a much wider margin than they did in 2008. Why? What's going on here?
Where will we see computers competing with human minds in games next? Poker?
How about in other pursuits, say, war?
What's next for Watson? Health care? Or will Watson become a litigator?
What's next in human mind simulation: research projects like Racr?
Other projects we should be watching for?
- Stephen Baker's post-'Jeopardy' TED interview
- Jeopardy, Schmeopardy: Why IBM's next target should be a machine that plays poker
- Reddit By Request: We Are the IBM Research Team that Developed Watson. Ask Us Anything
- IBM researchers show love for 'Jeopardy' champion Watson
- IBM's Watson gets smashed on 'Conan'