
Divining AI, and the future of consumer robotics

Sebastian Thrun led his Stanford team to victory in the DARPA robot-car race. How long will it be till we see such cars on the street?

Last fall, Sebastian Thrun led the Stanford University Racing Team to victory in the Grand Challenge, a robot-car race sponsored by the Defense Advanced Research Projects Agency (DARPA), the U.S. Defense Department's research arm.

"Stanley," Stanford's robotic car, autonomously drove a 131.6-mile course across the Mojave Desert. Averaging 19.1 miles per hour, the team took first place, completing the course in six hours and 53 minutes--11 minutes faster than the second-place robot, fielded by Carnegie Mellon University.

Recently, Thrun was named a Fellow by the American Association for Artificial Intelligence.

CNET News.com sat down with Thrun to talk about artificial intelligence and the future of consumer robotics.

Q: Communications and information technology have advanced tremendously in the private sector over the last 50 years. But aside from products like the Roomba vacuum cleaner from iRobot, consumer robotics are not that visible to the average person. What, if anything, has been holding back the field?
Thrun: I think there are a number of factors that are still in the way. The most important one is cost. We can build wonderful applications at a price point that's completely unreasonable for a consumer to afford. And there has been an obstacle in robustness. If you look at robotics as a field, you can roughly group it into three different stages.

The first stage is industrial robotics. We have robot arms that operate very effectively on factory floors--about a million dollars apiece--and they do something in very controlled environments.

The second stage is professional service robots, like the type being used to map the Titanic--or robots in space and robots in the military. They have to deal with more uncertainty and more diversity in their environment. Yet it's still somewhat regulated, and of course they cost more than a normal person could afford for a household robot.

The last stage--and I think there are a good number of them--is commercial service robots. That is what is commercially available.

But since you asked about consumer robotics, well, we are not there yet. I think Roomba is a fantastic step in the right direction, but we still need to wait a little bit.

Briefly, what is needed to make a home assistance robot a reality?
Thrun: I think more robustness in perception and a better understanding of, for example, the domestic environment. Today's robots don't do very well at understanding what objects are in your kitchen, for example, or what people's intentions are and how to run a dishwasher. There is a huge perceptual problem, something people take for granted but robots have a really hard time with. It is called scene recognition. Scene recognition takes an image and labels the different objects in that image. A 4-year-old can do that, and robots still cannot do it very well. That's a burden, because as you go into a domestic environment, you find that the first step in moving an object is to recognize it.

The second part that we are missing right now is manipulation. We have made a lot of progress in navigating robots--the Roomba is a navigating robot that happens to pick up dust on the side--but none of them does anything interesting with an arm. And the science of manipulating objects is in its infancy at this point. This (manipulation) is a big AI field, too.

What are the next likely applications we will see in consumer robots?
Thrun: Certainly the cleaning field will take off, and I hope we add arms to these robots so that they can carry things around in a domestic environment. Maybe clean up after a party, you know? Then I think you'll see robots being used in the domestic area for the care of elderly people. There are many different incarnations of that idea. One could argue for a robot that's just an operational device through which a health care professional, or a relative, can interact with an elderly person.

Can you expand on that?
Thrun: Suppose I want to make sure that the stove is switched off and windows are closed in my grandmother's apartment. Wouldn't it be good if I could have a robot where I could just log on to the robot remotely using the Internet, check things out, make sure the fridge is closed and so on? That's one possibility for interaction. There's a whole field of more social interactions, where people--mostly in Japan--are exploring robots to be kind of a companion.... I am ambivalent about this because it gives me a strange feeling to think about the future of human interaction as being humans and robots. But at the same time...elderly care is in such a disastrous state in this country that a robot might be the better alternative to a television set.

Then of course the last answer I give you is one that I am enthusiastically behind, and this is self-driving cars. It's something that I think will change things fundamentally. And it's technologically and economically feasible.

You've said again and again that your goal is to produce self-driving cars. Aside from military use and safety reasons, why do you think this is so important? Why not just make cars that implement safety features to avoid crashing?
Thrun: If you work toward a self-driving car, you automatically work toward a safer car. I tend to say, when I talk to the automotive industry, that a self-driving car is the ultimate driver's assistant. I don't think that they are mutually exclusive. In fact, I don't think that the technology I have worked on for self-driving cars has a market right now. I think the way this will go into the marketplace is through a sequence of driver-assistance systems--systems that will get more and more capable, until at some point we all realize that we have a self-driving car.

What do you think the next likely applications will be?
Thrun: Today you have applications like active cruise control that keeps the distance fixed for you, and will brake and accelerate for you. There are lane-departure warning systems that are being improved. There are, in the mix and already demonstrated, a number of parking-assistance systems that park the car for you at very low speeds. There's emergency-braking technology in development, so that when the car realizes that a crash is inevitable, it still tries to act for you in a way that diminishes the impact of the crash. There is just a progression of things going on at this very moment. So to predict that cars will become more and more intelligent doesn't take rocket science; that is just a realistic observation.

What do you think is driving these likely applications? Need? Novelty? (As in the case of the Japanese social-interaction robots.) Or is it driven by people like you because that is their interest?
Thrun: All of the above. In robotics there are amazingly many different driving forces.... People have always dreamed of having a robot, and (they) treat robots as something special, different from any other machines. For instance, to some extent a dishwasher is a robot, but people don't think of a dishwasher as a robot. There is this decades-old dream of a robot as a replication of ourselves. There is a lot of work in robotics that (involves solving) our problems--driving cars, for example. The benefits are very obvious. I think the Roomba is a mix of curiosity and cutting-edge technology and actual utility. It's not the world's best vacuum cleaner, but it's incredibly cool to have a robot vacuum cleaner.

I think taking that point to the extreme is the Sony Aibo, which I think intentionally had no purpose whatsoever. I mean, that robot had no purpose. You could not argue that this robot had a practical purpose that we couldn't do as well before. And I think Sony very cleverly stayed away from giving the dog a single purpose. It was...for entertainment, and to fascinate you and to (get people to) talk about adding to the dimension of human experience. What's great about robotics is that people have these wonderful experiences interacting with robots that they wouldn't have with a dishwasher.

How important is it for consumer robots to look and act like the consumers who buy them?
Thrun: I think no one knows at this point. And there have been many different directions in robotics. Some people humanize a robot in the hope that the closer it is to a person, the more interesting it is. Another direction is just the opposite. For example, when I talk of the self-driving car, I think of the car as complementing people. The last thing I want is something that replicates the person, or something the person is already strong in. If it is not even a recognizable robot in the science-fiction sense, that's fine.

I think eventually we have to see whether there is a very strong desire to replicate human behavior and human looks and (whether we) feel (that will) really serve us well. Sometimes I don't want a machine that's smart and thinks for me; I need a machine that's really reliable and does the same thing predictably. The Microsoft paper clip, for example, was more of an annoyance to me than anything. So there is a fine line to be walked, and, like I said, no one knows yet. In the next 10 or 15 years we will find out whether, for example, an elderly person would be more accepting of a humanoid robot as a companion, or a robotic wheelchair or robotic walker that doesn't look like a human but provides specific functions.

Microsoft has launched a new robotics research group and its first-ever robotics software, Microsoft Robotics Studio, a Windows-based toolkit designed to let commercial and individual developers create intelligent applications for a range of products. What kind of impact will Microsoft's entry into robotics have on the field?
Thrun: I think there is a really good chance that this will significantly advance the field.... Like most technologies, a number of things have to come together. There has to be the right technology. There also has to be the right perception in the public and the right public support. Robotics has always been on the back burner for corporations. This might be the first time or second time that a large corporation takes robotics seriously. I have talked a lot to Microsoft. I think that what they have is a good first step, but I think they need to streamline the product and build up the people who use it: in the classroom, among hobbyists, the nonscientists and so on. Many of the things that Microsoft has done with this product are really, really good, really well chosen. So I am very hopeful that this is going to have a positive impact on the field.

What effect will the release of programmable robots, like Lego's Mindstorms NXT, have on the consumer robotics market?
Thrun: I think the hobbyist programmable robot is fantastic in that it opens robotics to a huge number of young people who will think differently about technology. I think of robotics often as (being like) the personal computer before the invention of VisiCalc. (The personal computer) was bought by enthusiasts who didn't really have a purpose for it but were fascinated by the technology. The same is true for Lego Mindstorms right now. In the computer sector it's completely obvious that 95 percent of the interesting ideas came out after VisiCalc. It really changed the field. Word processing, networking and so on--all these things came much, much later. When we speculate about what robots can do, it is predictable that we will invent fantastic applications around the household that no one thinks about right now. And I think Lego Mindstorms will really help with that.

Do you think it's part of your job as a guiding force in robotics to choose to develop projects that could be beneficial to society instead of things that are of particular interest to you as an intellectual?
Thrun: To me the intersection of what's interesting for society and what's interesting to me is almost 100 percent. I see my mission as a scientist as advancing society. So if I were to engage in something that was of no interest to society but would be interesting in its own right, I would not be interested in it. One of the things I am driven by is that I think robotics is incredibly young. We are in, like, the 16th century of robotics right now. And computer science is young--maybe in the 17th century. Society is changing so rapidly with these new technologies that all my work focuses on making the changes go in a positive way.

How many years away are we from a fully realized version of PEARL--a personal robotic assistant for the elderly that you worked on while at Carnegie Mellon?
Thrun: Oh, that's a hard question. I have a much better handle on cars. Realistically, PEARL was much more of an exploratory project, where we tried to understand what needs exist for the elderly. In exposing ourselves to nursing environments, we learned more about what the actual needs were. For example, one of the offspring technologies out of PEARL was a robotic walker that is still being tested right now at a facility near Pittsburgh, Pa. That robot did not look like PEARL at all; it was much more of a mechanical device that provided guidance to people and could drive itself out of the way if it was in the way. These specialty devices have a much better chance in the next five or six years. A general-purpose kind of humanoid robot is 15 years or so away.

Only 15 years?
Thrun: Or 20. I mean, technology has been accelerating amazingly fast in recent years, given the money that has been put into it.... Almost all the interesting humanoid stuff came along in the last five years. But my fear with the current wave of humanoid robotics is that I don't think we have a good story as to why they are really useful to a lot of people. We are still in the phase of exploring and playing with them to see what happens. As long as we don't find a concrete use, the question remains: Do we find an answer? Do we say this is really useful? Once we find this, then I think the technology will come along in maybe 10 years or so.

CNET News.com interviewed Stanford professor John McCarthy about the 50th anniversary of artificial intelligence. In that interview, he talked about formalizing common-sense knowledge and reasoning as the next goal. What do you think is the next big thing to accomplish in AI?
Thrun: There are two views of AI that are different. I adhere more to the second view. The first view is that AI is about human intelligence and about a universal machine. And I think that is a fantastic goal. It's a goal that might well be 200 years old. The second goal is to be at the forefront of information technology. For example, turning data into information: making sense out of the Web, which to some extent Google has achieved, making sense out of the genome and making sense out of robotic data.... So, I agree with my colleague John that there should be more activities in play to try to understand common sense, but I think there are many other worthy goals in artificial intelligence today. If Google, for example, had waited until they understood, say, common-sense knowledge, you would never have seen Google. And Google is a, what, multibillion-dollar company at this point?

In my lab--of which I guess John is now a member--we have a very liberal definition of artificial intelligence. And we believe that AI has been vastly successful. Almost everything people do with computers involves AI today. Google is just one data point out of many. If you go shopping at Amazon, some AI program finds out what your interests are and presents results for you. If you pick up a telephone, some AI program processes your voice so that it is crisper on the other end. There have been endless uses since the start of AI.

Do you think the abundance of labor in terms of agriculture, manufacturing and domestic help has deterred corporate enterprises from being more interested in developing these types of robots?
Thrun: That's a good question. I think to the extent that you build robots that compete with people, you are in a very different business, because people are by and large not just very skilled, but very cheap. If you think of outsourcing labor, many of the visions of these (types of machines) fail under the simple fact that people are just better and cheaper. Although, if you look at the past development of robots, you see that there is such a drop in prices that at some point there will be an inversion, where mechanical labor will be cheaper than human labor, especially in places like the United States. That's the replacement vision. There is also the augmentation vision.

Can we build robots that are more effective? When I think about building a self-driving car, I don't think about replacing people--I hope to make people more effective. And there the economics are different. There you can really think of robotic technology that goes out and does cool stuff. Sometimes people call softbots robots--things that go out and spider the Web. That's a good example of a completely new thing that robots/softbots can do, something that completely complements people's abilities and enables them to do something much stronger and entirely different.
