Intel's resident Nostradamus

David Tennenhouse is one of Intel's big-picture guys, charged with figuring out what will be the next decade's big thing. What he forecasts may surprise you.

By Michael Kanellos, Staff Writer, CNET News.com

In the future, computers will be everywhere, including in your socks.

Micro-Electro-Mechanical Systems, or MEMS--small, networked computers that can relay data from the real world--are one of the current obsessions of David Tennenhouse, vice president and director of Intel Research. Through MEMS and other types of agents, Intel hopes to open the door to a new computing world where machines can sense changes in tectonic plates, automatically monitor the progress of a fire, or keep tabs on several Elvis memorabilia auctions at once. Another problem on his agenda: how to handle all of the data traffic that will be generated by MEMS.


Formerly the chief scientist for the Information Technology Office at the Defense Advanced Research Projects Agency (DARPA), Tennenhouse is one of the big-picture guys at the Santa Clara, Calif.-based chipmaker charged with coming up with the next decade's big thing. Just as important, he has to ferret out research at universities before competitors do.

Tennenhouse recently sat down with CNET News.com to discuss some of the projects under way at Intel, philosophical changes coming to computing, and how Intel handles its relationships with research institutions.

Q: First off, what are the main items on your plate?
A: If you think about the five- to 10-year agenda, the key thing is this aspect of getting physical, getting much better sensors and actuators so computers can be connected to the world around them, and getting deep networking. That is just going to open up this huge wealth of information.

It sounds like we are heading for a world where there is going to be far more data, but also a much larger constant flow of data. Managing it all will be an immense task.
Absolutely. If you think about it, you are the user, you are on your wireless link, and you want to tap into a million sensors. You can't get the bandwidth to get the readings from all those sensors. It is not only convenient, but it is necessary to have active processing in the network.
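To make the idea concrete, here is a minimal sketch of the kind of in-network processing Tennenhouse describes, with all names and readings invented for illustration. Each node forwards a fixed-size summary instead of raw readings, so the traffic crossing the user's wireless link stays small no matter how many sensors feed it.

```python
# Minimal sketch (hypothetical): in-network aggregation of sensor data.
# Instead of forwarding a million raw readings over the user's wireless
# link, each intermediate node forwards only a compact summary.

from dataclasses import dataclass

@dataclass
class Summary:
    count: int
    total: float
    peak: float

def summarize(readings):
    """Reduce a batch of raw sensor readings to a fixed-size summary."""
    return Summary(len(readings), sum(readings), max(readings))

def merge(a, b):
    """Combine summaries from two downstream nodes -- the work an
    in-network processing node does instead of relaying raw data."""
    return Summary(a.count + b.count, a.total + b.total, max(a.peak, b.peak))

# Two sensor clusters, each pre-aggregated near the sensors...
cluster_a = summarize([21.3, 22.1, 20.8])
cluster_b = summarize([19.7, 23.4])

# ...and merged en route, so the user receives one small record.
result = merge(cluster_a, cluster_b)
print(f"{result.count} sensors, mean {result.total / result.count:.1f}, peak {result.peak}")
```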

Looking at the classic PC, how is that going to change in the future?
You'll see the PC or the home personal server, whatever you want to call it, evolve to take on a bigger and bigger role. There will be millions of agents per person. Most of the time most of those agents will be dormant, waiting for something to happen. But many of them will be active on your behalf, reaching out to the network, etc. You are going to have an immense number of background activities going on.

It's the same thing with robotics. If you think about the robotic space, people today are wondering what to do with a robot. We think that somewhere down the road, it won't be a robot, but it will be many robots in your house. And how will the robots exchange information? Well, they are probably going to go through agents that are running on a traditional platform. That is going to be the cheapest way to do it. Those agents are going to be using this node (the PC) as a depository for their learning.

I don't see this as particularly threatening. In fact, it probably is going to be a huge driver to our existing businesses. What you are going to find is that all of these new applications are going to require housing for a tremendous amount of computation.

This world of millions of personal agents. When is this going to happen? Ten years from now? Five?
We're already seeing people using agents today, and they tend not to realize it. They essentially get on eBay and they will leave an auction bid. Is that an agent? Well, it depends on your definition. In my mind, an agent is something sitting there on your behalf with a trigger that has been empowered to do something, subject to rules that are under your control or supervision.
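That definition maps naturally onto code: a standing trigger plus rules the owner sets. A toy sketch of an eBay-style bidding agent, with every name and number invented for illustration:

```python
# Toy sketch (all names hypothetical): an agent in Tennenhouse's sense --
# something dormant that acts on the owner's behalf when triggered, but
# only within rules the owner has set.

class AuctionAgent:
    def __init__(self, max_bid, increment=1.0):
        self.max_bid = max_bid          # rule under the owner's control
        self.increment = increment

    def on_outbid(self, current_price):
        """Trigger: fires when someone outbids the owner."""
        next_bid = current_price + self.increment
        if next_bid <= self.max_bid:    # act only within the owner's rules
            return f"bid {next_bid:.2f}"
        return "stop: max bid reached, notify owner"

agent = AuctionAgent(max_bid=50.0)
print(agent.on_outbid(42.0))   # -> bid 43.00
print(agent.on_outbid(49.5))   # -> stop: max bid reached, notify owner
```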

The real limiting thing right now is essentially that they require too much of your attention. The next step will be getting them to negotiate with each other. When my agents run up against your agents, they want to do a deal. You certainly don't need to hear back from the agent that it got you seat 14C on the airplane, but you want to be sure it automatically deposited that into your travel records, and you want to know you can count on getting on the plane. You're going to see e-business drive the charge on that.

XML (Extensible Markup Language, a popular Web standard by which businesses can easily exchange data between employees, customers, partners and suppliers) is a huge enabler in this space. It's not that XML itself makes anything possible that we didn't know how to do before, but it establishes this new platform (with) a common language. So when I go to a new project, I don't have to define what a name or an address is, right? It's all done.
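A small illustration of the point: once two parties agree on an XML vocabulary, either one can pull out a name or an address without negotiating a format first. The record below is invented; the parsing uses Python's standard library.

```python
# Illustration (record contents invented): with a shared XML vocabulary,
# two parties never have to re-define what a "name" or an "address" is.
import xml.etree.ElementTree as ET

record = """
<contact>
  <name>Jane Doe</name>
  <address>
    <street>123 Main St</street>
    <city>Santa Clara</city>
  </address>
</contact>
"""

root = ET.fromstring(record)
# Either side can extract fields by the agreed element names alone.
print(root.findtext("name"))           # Jane Doe
print(root.findtext("address/city"))   # Santa Clara
```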

Recently, Intel opened branch offices of Intel Research at Carnegie Mellon University, UC Berkeley and the University of Washington. Are these the first? And do you plan to do more?
I don't really think of them as branch offices. These are the first of what we hope will be a small network of research labs in universities. In the near term I would probably like to add one, maybe two more domestically and then get started internationally. The key thing with these labs is the people running them...We'll maybe grow the network to six, eight, maybe 10. We're not planning on having 50 of these. (In addition, Intel has research projects going on in approximately 90 universities.)

You also talked about how statistics is going to become more important in computing. Could you expand upon that?
Getting across this gulf from deterministic computing to statistical/probabilistic computing. Huge gulf, huge opportunity. The truth is that nature is not binary. We are not binary. There is noise in the signals that we sample. There is noise in the data that you enter. There is noise in this tape you are recording right now, and you won't be able to transcribe it perfectly. That's one side. That's the micro aspect of it.

The macro aspect of this is that we are finally getting large numbers. When you go out and search the Web, (getting the same result every time) is not that important. Many pieces of information are replicated on the Web. Do a search a second time and you might get a different set of servers responding, but you will still get your answer...Look at Google. It's got the numbers, and they are applying some tremendous statistics and machine learning to make it work. It's a great example of taking that technology and making it work.
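The micro and macro aspects meet in the law of large numbers: any single sample is noisy, but the aggregate estimate becomes predictable as the count grows. A minimal sketch, with the sensor model invented for illustration:

```python
# Minimal sketch: deterministic answers give way to statistical ones,
# but large numbers make the statistical answer predictable.
import random

TRUE_VALUE = 20.0   # the physical quantity being sensed (hypothetical)

def noisy_reading():
    """One sensor sample: the true value plus Gaussian noise."""
    return TRUE_VALUE + random.gauss(0, 2.0)

for n in (10, 1_000, 100_000):
    estimate = sum(noisy_reading() for _ in range(n)) / n
    # No single run is exact, but the error shrinks roughly as 1/sqrt(n).
    print(f"n={n:>7}: estimate {estimate:.3f}")
```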

How does this change how research is performed or approached?
It is an interesting change. Think about what it was like in the past to be a scientist. Your student comes to you and they've got their paper (idea). Your question is, "What have other people done?" In the past, they could answer, "I went to the library and looked at these three journals, and there are absolutely no papers on this." And they were very convinced, you were convinced, that they gave you a deterministic answer.

When the Web happened, I had scientists complaining to me. They sort of said, "You know, I search the Web, but I'm never quite sure I've found everything." And the answer is: before, you were fooling yourself; you thought you had found everything, but you hadn't. Now you're seeing ten, a hundred, a thousand times more of what's out there than you were before. You're not quite catching everything, but you are doing a hundred times better job of your research.

People have to get comfortable with the sense that there is uncertainty and that, nonetheless, you can get predictability. You take large numbers and statistical techniques, and you apply them.

Will scientific papers become a "to be continued" sort of thing?
What you'll find is that we will get a new basis and a new level of sophistication, much like when we went from classical physics to quantum physics. It just changed the ground rules. And by the way, a lot of people were left behind (in that transition). That scares me. When I look at what our best physicists are doing at Intel, it's way beyond me. I will admit it. And I think that computer science, as it evolves, is going to get beyond me. In some sense I want to drive it beyond me.

The public in some sense, though, is uncomfortable with privately funded work at universities. People fear that research might become too product-focused. Is there a danger in that?
We're actually trying to drive it in the other direction. You might be surprised to find out that our position going in is, "Why don't you put it all in the public domain?" That's putting the shoe on the other foot. Our philosophy is that we want the universities out there doing the research that will grow the entire industry. As for our own job: we have a $4 billion R&D plan; if we can't get competitive advantage out of that and run faster than everyone else, we don't deserve to. So we want the universities running full tilt.

This is a bit of a change for the universities. We sort of back up when they start going, "Hmm. Mmm. We don't know about that." Then we tell them we at least have to get a nonexclusive license, because I can't really explain to my shareholders why Intel is getting sued over something we funded. But we are not trying to appropriate intellectual property. We want them pushing forward and getting more long-term in their work, and we want to see government involved in doing that. The government sort of provides the lubrication, making sure there are enough funds to get started and to convince other companies to get involved.

So you want them to stay out of product development?
Absolutely. Get out of the start-up mode. In some sense, the universities are a huge opportunity collectively for economic growth, so there is an opportunity cost every time research gets appropriated (by a private party). We don't want to see that opportunity cost.