The summer rendezvous in the Connecticut River Valley town of Hanover, N.H., served as a springboard for discussions on ways that machines could simulate aspects of human cognition: How can computers use language? Can machines improve themselves? Is randomness a factor in the difference between creative thinking and unimaginative competent thinking?
The underlying assumption was that, in principle, learning and other aspects of human intelligence could be described precisely enough that a machine could be programmed to simulate them.
Principal figures at the Dartmouth conference included such notables as Marvin Minsky, then of Harvard University; Claude Shannon of Bell Laboratories; Nathaniel Rochester of IBM; and Dartmouth's own John McCarthy.
It was McCarthy who put the name "artificial intelligence" to the field of study, just ahead of the conference. With Dartmouth hosting a 50th anniversary conference this month, McCarthy--now a professor emeritus at Stanford University--spoke with CNET News.com about the early expectations for AI, the accomplishments since then and what remains to be done.

You're credited with coining the term "artificial intelligence" just in time for the 1956 conference. Were you just putting a name to existing ideas, or was it something new that was in the air at that time?
McCarthy: Well, I came up with the name when I had to write the proposal to get research support for the conference from the Rockefeller Foundation. And to tell you the truth, the reason for the name is, I was thinking about the participants rather than the funder.
Claude Shannon and I had done this book called "Automata Studies," and I had felt that not enough of the papers that were submitted to it were about artificial intelligence, so I thought I would try to think of some name that would nail the flag to the mast.
And looking back, do you think that that's the right term? It seems fairly self-evident, but would there be a better way to describe this kind of research?
McCarthy: Well, there are some people who want to change the name to "computational intelligence"...It seems to me I couldn't have used (that term in 1955) because the idea that computers would be the main vehicle for doing AI was far from unanimous. In fact, it would have been a minority view at that time.
At the time, in that proposal, you had said (about using computers to simulate the higher functions of the brain) that "the major obstacle is not the lack of machine capacity but our inability to write programs taking full advantage of what we have." So the machinery was there, but the programming skills weren't?
McCarthy: It wasn't a question of skills, it was a question of basic ideas, and it still is. One of them that comes up very clearly is when you compare how well computers play chess with how badly they play go, in spite of comparable effort having been put in. The reason is that in go, you have to consider the situation, the position...and furthermore, you have to identify the parts--and that's something that isn't really well understood how to do even yet.
So the attendees in 1956--and I'm sure you, too--were very optimistic about what could be done by, say, the 1970s with chess playing, with composing classical music, understanding speech. How far did we get in the 50 years? Were the initial expectations too optimistic?
McCarthy: Mine were, certainly. I think there were some others there who were rather pessimistic.
What was there to be pessimistic about?
McCarthy: Well, the thing is, you can only take into account the obstacles that you know about, and we know about more than we knew then.
What are some of the big things that have been learned over the last 50 years that have helped shape research in artificial intelligence?
McCarthy: Well, I suppose one of the big things was the recognition that computers would have to do nonmonotonic reasoning.