
Getting machines to think like us (Q&A)

AI pioneer John McCarthy discusses early goals, recent gains and how "artificial intelligence" got its name.

In 1956, a group of computer scientists gathered at Dartmouth College to delve into a brand-new topic: artificial intelligence.

The summer rendezvous in the Connecticut River Valley town of Hanover, N.H., served as a springboard for discussions on ways that machines could simulate aspects of human cognition: How can computers use language? Can machines improve themselves? Is randomness a factor in the difference between creative thinking and unimaginative competent thinking?

The underlying assumption was that, in principle, learning and other aspects of human intelligence could be described precisely enough that a machine could be programmed to simulate them.


Principal figures at the Dartmouth conference included such notables as Marvin Minsky, then of Harvard University; Claude Shannon of Bell Laboratories; Nathaniel Rochester of IBM; and Dartmouth's own John McCarthy.

It was McCarthy who put the name "artificial intelligence" to the field of study, just ahead of the conference. With Dartmouth hosting a 50th anniversary conference this month, McCarthy--now a professor emeritus at Stanford University--spoke with CNET News.com about the early expectations for AI, the accomplishments since then and what remains to be done.

You're credited with coining the term "artificial intelligence" just in time for the 1956 conference. Were you just putting a name to existing ideas, or was it something new that was in the air at that time?
McCarthy: [The name first appeared in the] proposal to get research support for the conference.


Claude Shannon and I had done this book called "Automata Studies," and I had felt that not enough of the papers that were submitted to it were about artificial intelligence, so I thought I would try to think of some name that would nail the flag to the mast.

And looking back, do you think that that's the right term? It seems fairly self-evident, but would there be a better way to describe this kind of research?
McCarthy: Well, there are some people who want to change the name to "computational intelligence"...It seems to me I couldn't have used [that term in 1955] because the idea that computers would be the main vehicle for doing AI was far from unanimous. In fact, it would have been a minority view at that time.

At the time, in that proposal, you had said [about using computers to simulate the higher functions of the brain] that "the major obstacle is not the lack of machine capacity but our inability to write programs taking full advantage of what we have." So the machinery was there, but the programming skills weren't?
McCarthy: It wasn't a question of skills, it was a question of basic ideas, and it still is. One of them that comes up very clearly is when you compare how well computers play chess with how badly they play go, in spite of comparable effort having been put in. The reason is that in go, you have to consider the situation, the position...and furthermore, you have to identify the parts--and that's something that isn't really well understood how to do even yet.

So the attendees in 1956--and I'm sure you, too--were very optimistic about what could be done by, say, the 1970s with chess playing, with composing classical music, understanding speech. How far did we get in the 50 years? Were the initial expectations too optimistic?
McCarthy: Mine were, certainly. I think there were some others there who were rather pessimistic.

What was there to be pessimistic about?
McCarthy: Well, the thing is, you can only take into account the obstacles that you know about, and we know about more than we knew then.

What are some of the big things that have been learned over the last 50 years that have helped shape research in artificial intelligence?
McCarthy: Well, I suppose one of the big things was the recognition that computers would have to do nonmonotonic reasoning.

Could you elaborate on that, on nonmonotonic reasoning?
McCarthy: OK. In ordinary logical deduction, if a sentence P is deducible from a collection of sentences--call it A--and we have another collection of sentences B that includes all the sentences of A, then P is still deducible from B, because the same proof will work. However, humans do reasoning in which that is not the case. Suppose I said, "Yes, I will be home at 11 o'clock, but I won't be able to take your call." If you heard only the first part, "I will be home at 11 o'clock," you would conclude that I could take your call; but if I added the "but" phrase, you would not draw that conclusion.

So nonmonotonic reasoning is where you draw a conclusion, which may be a correct conclusion to draw, but it isn't guaranteed to be true because some added facts may prevent it. Now, that was around 1980, or a little bit before, that formalizing nonmonotonic reasoning began, and it's turned into a fairly big field now.
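The pattern McCarthy describes can be sketched in a few lines of code. This is not his formalism (such as circumscription), just a minimal illustration of a default rule whose conclusion is retracted when a larger set of facts blocks it; the fact names are hypothetical.

```python
def can_take_call(facts):
    """Default rule: if I'm home at 11, conclude I can take your call,
    unless a blocking fact is present. Adding facts can retract the
    conclusion, which ordinary (monotonic) deduction never does."""
    return "home_at_11" in facts and "cannot_take_call" not in facts

A = {"home_at_11"}                    # "I will be home at 11 o'clock"
B = A | {"cannot_take_call"}          # ...plus "but I won't be able to take your call"

print(can_take_call(A))  # True: the default conclusion is drawn
print(can_take_call(B))  # False: B includes all of A, yet the conclusion is withdrawn
```

In a monotonic logic, anything deducible from A would remain deducible from the superset B; here the extra fact defeats the default, which is exactly the nonmonotonic behavior.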

What would be the biggest achievements in the last 50 years? Or how much of the original goals were accomplished?
McCarthy: Well, we don't have human-level intelligence. However, I would say driving the car 128 miles shows a considerable advance. [Editors' note: In last fall's DARPA Grand Challenge, the winning vehicle--Stanford's robotic car, "Stanley"--drove itself 131.6 miles across the Mojave Desert.]


What's the next big thing, then, to accomplish?
McCarthy: I would like to see further progress in formalizing commonsense knowledge and reasoning, taking context into account. That's something I've been working on for a long time and that some other people also work on, and which DARPA supports, but I think the ideas that are available are not sufficient to reach human-level intelligence.

A goal in AI is not so much to make machines be like humans, having human intellectual capabilities, but to have the equivalent of human intellectual capabilities, correct? In other words, not reinventing the human but creating something that thinks similar to humans and surpasses human thought?
McCarthy: That's the way I see the problem. There are certainly other people who are interested in simulating human intelligence, even aspects of it that are not optimal. In particular, Allen Newell and Herbert Simon tended to look at it that way.

Another sort of high-level goal that may or may not be reachable seems to be to try to program originality into machine thinking.
McCarthy: Yes. That would be worth some effort. I did something that was, so to speak, part way to that in 1963, in which I talked about a creative solution to a problem--a solution that involved elements that were not in the problem statement itself. But that was just a start.

And originality--is that as simple as trying to introduce some randomness into the programs, or was it a different order of magnitude?
McCarthy: Well, in principle, in a logical system, you could generate sentences systematically or randomly...and any idea would eventually turn up, but the "eventually" is likely to be extremely far in the future. So that hasn't done much, either using randomness or otherwise. What's needed is to figure out good ways of constructing new ideas from old ones.

Going back for a second to the notion of having machine capability versus programming and the right source of ideas--today we have so much more computational capability than was available 50 years ago. What difference is that making, with the state of the art of computer chips and memory these days?
McCarthy: I would say that 50 years ago, the machine capability was much too small, but by 30 years ago, machine capability wasn't the real problem.

The real problem still being the basic ideas?
McCarthy: Yes.

How do robots factor into thinking about artificial intelligence? I guess in the popular vision, in movie images of humanoid robots, that's where people would tend to see human-level intelligence, but are robots a real factor, or does it really matter what shape or form the machine takes?
McCarthy: Certainly, robots present some problems. That is, they have to operate in an environment, and some of the even rather elementary problems have not been solved yet--that is, combining the ability to walk the way a human walks, which is falling forward rather than just shuffling, with the ability to understand a three-dimensional scene and so forth. These ideas have been worked on sort of separately, but there still isn't a robot that could move around confidently in a cluttered room and climb stairs, let alone climb trees.


The most obvious thing to do in a movie, when they write about robots, is give them human-like motivations of some kind or other, so that the robot can be a character in the movie. It's very easy to assume that robots would just naturally be like humans, like in where you have this sort of Pinocchio robot that gets lonely.

They create this imitation 10-year-old, and they don't even bother to think that, well, now what will happen when this woman who gets this imitation 10-year-old gets older--when she is 70 or 80--and she still has this imitation 10-year-old. From the point of view of the plot of the movie, it wasn't necessary to think about that, and so that's just sort of one more way in which people are misled by stories.

What do you think of Ray Kurzweil's notion of "the singularity" [which envisions a kind of melding of man and machine by 2045]?
McCarthy: When I was in Israel [in June], I met this young guy who likes my paper on ascribing mental qualities to machines, and well, I only got a few minutes to talk to him, but I think there is, so to speak, more hope from people like him than from people who have been in the field a long time.

And what about research into the brain--has that yielded any notions for artificial intelligence?
McCarthy: Certainly, we've gotten to know a lot about how the brain works. I don't think it's got very much that connects with artificial intelligence yet. Let me give you an example. The positron emission tomography [PET scan] has found a little area in the brain that uses a lot of energy when people are doing mental arithmetic. That's fine, but what goes on in that area when people are doing mental arithmetic is still beyond the present neurophysiology to determine.

I've been reading through some of your writings. You seem to be very optimistic about the future--that material progress is sustainable. But it's a very pessimistic age we live in.
McCarthy: Public moods and journalistic moods can change very fast. Let us suppose that the only really short-term practical way of maintaining automotive transportation is to use liquid hydrogen as a fuel and to produce liquid hydrogen by nuclear reactors. That may very well be the case. I think that if the public, the Congress and the journalists are suddenly faced with really not being able to use cars unless we adopt this new technology, then all of a sudden, the mind will be concentrated, as Samuel Johnson says.

So as the problem gets to a point where we really need to deal with it, we'll deal with it.
McCarthy: Yes, I think so. I don't think that we will let ourselves suffer a real disaster if there is a way of doing [something about it]. You can look at the response of the U.S. and other countries in time of war as an example that shows that ideas can change very fast when there is a necessity.

You've also written that you think global warming can be avoided or even reversed, if it turns out to be a serious problem. You wrote that a few years ago--do you still think that's the case, given current research?
McCarthy: I think there is pretty good evidence that there is some warming. I guess there is controversy about the cause, but it can be reversed, if necessary. But it still isn't clear that it's harmful.

The way of thinking, even among the scientists, is predominantly doom-oriented. Not entirely, but predominantly. They still are not thinking of how we can fix things other than to refrain. I mean, scientists are affected by the same moods that affect the rest of the public.
