Their efforts to re-create human intelligence in hardware and software have led to some very smart machines--just think of IBM's Deep Blue beating chess grandmaster Garry Kasparov, whose genius for the game couldn't match the computer's high-speed calculations. But aside from that rarefied skill, the machine would be no match for the average 3-year-old in figuring out how to get the best of a grown-up human.
The newer generation of AI researchers is taking a more humble approach to the cognitive conundrum, according to Anne Foerst, who's a rare combination of computer scientist and theologian--two types that don't always see eye to eye. They recognize, she says, that it is impossible to rebuild human intelligence in machine form even as they labor to build robots and other devices that mimic real-world skills.
In her new book, "God in the Machine: What Robots Teach Us About God and Humanity," Foerst draws on her experience at MIT's Artificial Intelligence Laboratory to paint a picture of how people and robots can and should interact--and whether, at some point down the road from today's contraptions, the human community might confer "personhood" on robots.
Foerst spent six years at MIT, where she broke ground with her class, "God and Computers," and now teaches at St. Bonaventure University in Olean, N.Y. She spoke recently with CNET News.com about changes in the field of AI, social learning for robots and the need for embodied intelligence--that is, the ability for thinking creatures, and machines, to interact with and survive in the real world.
Q: How does a theologian end up at the MIT AI Labs?
Foerst: Even as a small child I was always fascinated with machines and building stuff, but then I got hooked on theology because I just think this is the most interesting field when you want to learn about human ambiguity and human frailty--the fun stuff about being human.
So I studied theology, but I had space to do something else, and so I thought, well, why not do a little bit of computer science?...I went to MIT basically just to do research because that is where AI was founded. I met Rod Brooks (head of MIT's AI Lab and co-founder of iRobot) and a lot of other people--they really liked my research, and they were surprised that I was not critical--I didn't attack them. But I could offer a very unique perspective because I was really studying why people are interested in AI, what they get out of that for themselves.
What did you find out about the researchers?
Foerst: What I found out was that there is this big wish to have a unified, coherent world view in which everything fits together, which is a desire you find a lot in science. In AI it's particularly strong because they include human nature, the whole idea that humans are actually logical--if we just can understand them. That there is a way to deal with our ambiguities and paradoxes and miscommunications, that ultimately those paradoxes and ambiguities can be overcome, which for a Lutheran theologian, for me, is kind of interesting because I define sin not traditionally, (as) guilt, but sin is really the living in ambiguity, the very fact that humans are not logical. I see the whole AI, and the classical AI, endeavor very much as an attempt to overcome sin.
Rod and other people...kind of criticized that classical camp, (which is) concentrated on high intellectual powers, on math and logic as the pinnacle of intelligence. They kind of embrace the whole embodiment stuff. I shared their critique of the classical approach.
I found out that they were much more tolerant toward religion, even though they weren't religious themselves--they were very supportive of me being religious and of me describing them in religious terms because they realize they don't know everything. What I really like about that--there was inherent modesty in them. They didn't think they would solve the world's problems, but they really realize it's so hard to build a humanoid robot and that actually made them appreciate human nature more.
In the book you described AI as a spiritual quest.
Foerst: (There was a notion in) the more traditional approaches, "Oh! It's fun to play God"--that was completely gone in this embodiment camp...We really have undergone, not only in AI but in the general cognitive science in the last five to six, perhaps 10, years--slowly we're undergoing a paradigm shift where the understanding of humans goes toward more modesty because it is so complex, because we have to include the body and social interaction.
So is Marvin Minsky's notion of a human as a "meat machine" a minority view now in AI?
Foerst: Basically, Marvin Minsky says, "That is what we are, and we are nothing but that," while modern AI research says it makes sense in the context of AI to talk about us as meat machines--it just makes sense, but that doesn't mean we are. If you try to build artificial humans, you have to assume we are nothing but machines; otherwise you can give up your (effort), you can give up your hopes. But it's a pragmatic assumption, and I think in the beginning of AI, it was an ontological assumption.
What does it take for robots to be like us, to make a robot that functions like a human being?
Foerst: I think the robot would have to have the capability to interact, to form meaningful relationships and to understand the value of those relationships, to understand the difference between me and other, to have empathy. Those would be the things I would describe as most crucial, and I do believe that we can build something like that. But I also do believe that if we cannot build it already ready-made, we have to build them in the way that they, like human babies, go through a process of social learning, and probably for the first critter to be built, that social process will take years and years and years, much longer than for a human baby.