Pat Gelsinger, Intel's digital enterprise head, talks about tech in the developing world and the one-chip blade server of tomorrow.
Now that he's in charge of Intel's biggest and most important division, the Digital Enterprise Group, he's responsible for both the Core and Itanium processor lines.
In London for a conference on power management in data centers, Gelsinger sat down with CNET News.com's sister site ZDNet UK to discuss a wide range of issues, in particular how Itanium is converging with the new Core architecture, how quickly the developing world is catching up in supercomputing, and the debate he doesn't expect to see concluded before 2020.
Q: How are you getting on with moving developments in the Core architecture onto the Itanium?
Gelsinger: Itanium used to be a shared development process between HP and Intel. We've consolidated that with the agreement we announced two years ago, which allowed us to integrate all of the Itanium development activities and get a consistent development methodology. Since that move, we've basically hit all of our timings--the first Montecito slip aside, we're back on track.
Part of that on-trackness means we can leverage the same circuit design libraries, process technologies, all of those other things we were not doing a good job with before. So going forward, the circuit techniques, the power-management technologies, all those sorts of things are much better leveraged. The first realization of that is Tukwila (quad-core Itanium) in late 2008, the next step in the product family, where we move to common system architecture elements, as well as full alignment on design tools and process. It's still a different microarchitecture, a different instruction set, still aiming at a different market segment than the core of our product line. I'm driving for more convergence in Poulson (post-Tukwila Itanium) and beyond.
Presumably the cache architectures are converging as you move to a common bus?
Gelsinger: Yep. You just get more and more, and some of the differences that we had before weren't for good reasons, and we're bringing those together, so I'm pretty happy that this gives us much better leverage for the R&D investments. As you move to a common systems architecture, it's a much better investment for the customers as well. HP can say: "I can do a platform development, so I have a lower-end Xeon platform that can be used to bring Itanium lower in my product line," so you start to get that not just in our developments, but also in the OEM (original equipment manufacturer) developments.
Do you see convergence continuing to the point where there's one chip with a mode bit (making it compatible with Core and Itanium)?
Gelsinger: I don't see it getting that far, but I am driving these things to be as common as possible.
How's the heterogeneous versus homogeneous multicore debate going within Intel?
Gelsinger: I expect that debate to be going until 2020, and I expect--in my crystal ball--different market segments coming to different conclusions in that discussion. You can clearly envision--and this is an easier discussion to have after IDF (Intel Developer Forum) than it is today, so we'll have to have the next installment of this discussion after April 17--but you can see the lower end of the product line having homogeneous, little cores.
You could imagine the midrange of the product saying: "We need some big cores, for performance, but little cores are more efficient for certain portions of the workload." You can imagine some embedded applications where you have big cores but with some special-purpose cores for other, specific applications, maybe XML acceleration or packet processing or other things like that--a range of building blocks, from little cores to big cores to special purpose cores. You now have a fabric of choices to mix and match for the market segment.
Won't you need some complicated design and verification tools to maintain a large library of very complex cores? Is that a limiting factor in the speed at which you can develop them?
Gelsinger: Sure. Verification is already a limiting factor. That ends up being the rate-limiting portion of new products coming to market. That continues to be the case looking forward, although I do expect that to be helped by formal verification methods and formalization of on-die interconnect. What happens is that near the CPU is a great sucking sound.
Some CPUs suck more than others.
Gelsinger: In multiple respects...Anyway, the CPU starts hauling everything in. What you saw as the system architecture yesterday, tomorrow is the on-die architecture. As that starts to come together, some of these formalizations, interfaces, etc. become part of the die. It's not that far away until you'll see the one-chip blade.
You've just announced a new transistor design on the 45-nanometer process. How far will it take you before you have to have another look at the transistor architecture?
Gelsinger: There's the structure of the transistor and the materials of the transistor. The move to hafnium and metal gate is good for quite some time. We don't expect to change the material structure for a while--improve it, tune it, perhaps, but it's going to last us for several generations.
You've talked a lot about power and environmental factors in the data center. What's Intel doing?
Gelsinger: We've been doing a lot as a company ever since (Intel co-founder) Gordon Moore; he had a penchant in this direction. We plan our own operations in terms of environmental efficiency; we sponsor a lot of initiatives in the industry, and obviously our energy-efficient product line has been a big deal for us.
How well is the move to more efficient computing going?
Gelsinger: You need metrics to measure it. Like any of these kinds of things, there are lies, damn lies and benchmarks. We've worked on SpecPower, vConsolidate, Ecomark, which have all been important efforts for us in defining how things work. We've had good success with a number of the big data centers and started on our own operations. What we've seen is this incredible densification of the data center, and it's led to the compute space being compressed by something of the order of 20 times over the past decade.
Generally, the thermal envelopes have gone down by about two (times), but because the computing space is getting denser you're seeing almost 50 times the amount of power density. That's pretty stunning. Data center managers are putting 100 servers where they used to have 10, and the amount of compute you're getting in that space is typically two times what you had before, so with Moore's law and other microarchitectural improvements, the performance you're delivering is pretty stunning.
Where are the tools for power management?
Gelsinger: Intel wouldn't claim that we've solved all of those problems. But we're also working with the key OEMs, HP, IBM, and so on, as well as working directly with some key users, giving them our BKMs--our best-known methods--and applying them to their environments. You'll see a number of different announcements in the very near future, to put these ideas under a broader umbrella.
Are there major differences in data centers around the world?
Gelsinger: The developing world isn't as far behind as you might think. Their sophistication in planning and building their data centers is rapidly catching up to the mature markets, but there's still a gap. One unexpected key sign is that every one of the major emerging countries--Russia, India, Brazil and so on--has major high-performance computing projects as well as major mega-data center efforts under way. You're seeing Baidu trying to position itself as the Google of China. You're seeing China and India putting petaflop programs in place to be at the leading edge.
Why can't they leapfrog, as with communications, by taking everyone's best practices without their legacy?
Gelsinger: I don't see them leapfrogging at this time, but I see the five-year gap we used to expect becoming a much shorter gap in these scenarios, maybe a one- or two-year gap at this point. But they're coming on strong. It's amazing. You go and see a Baidu data center, and it's pretty impressive. But you look at India saying, "We're going to have a petaflop machine in 2008." That's pretty impressive for a country that not long ago wasn't even in the high-performance computing race, and they could be literally No. 2 or No. 3 in the world. They see the challenge in racing China as well as looking north, and both of those have brought a lot of impetus in installing IT infrastructures.
When do you see power becoming an important issue for smaller data centers, ones with handfuls of servers?
Gelsinger: If you're just talking 30 or 40 servers, then power's not that big a deal--only hundreds of dollars' difference per year. But people are environmentally concerned, so they're putting those priorities ahead of just the savings associated with them.
If I ran a Google data center I could be talking about millions of dollars of operational costs per year, plus as a company they're trying to position themselves at the front end as eco-friendly and environmentally conscious, as part of their corporate positioning, and I think you're going to see that trend increase.
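Gelsinger's scale argument--hundreds of dollars a year for a small shop, millions for a Google-size operation--can be sanity-checked with a rough back-of-the-envelope model. The figures below (per-server wattage savings, electricity price, fleet sizes) are illustrative assumptions, not numbers from the interview:

```python
# Back-of-the-envelope model of annual electricity cost savings from
# more efficient servers. All inputs are illustrative assumptions.

def annual_power_cost(servers, watts_per_server, price_per_kwh, pue=1.0):
    """Yearly electricity cost in dollars: fleet draw times hours,
    scaled by PUE (facility overhead such as cooling)."""
    hours_per_year = 24 * 365
    kwh = servers * watts_per_server * hours_per_year / 1000
    return kwh * pue * price_per_kwh

# A small shop: 40 servers, assume efficient chips save ~25 W each
# at an assumed $0.10/kWh -- hundreds of dollars per year.
small_savings = annual_power_cost(40, 25, 0.10)

# A hyperscale fleet: 50,000 servers with the same per-server saving
# -- the savings cross the million-dollar-per-year mark.
large_savings = annual_power_cost(50_000, 25, 0.10)
```

Under these assumed numbers the small fleet saves on the order of $900 a year while the large one saves over a million, which is consistent with the distinction Gelsinger draws.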
We're seeing the digitization of industries. Amazon is becoming a retailer of mammoth proportions. Google's out to digitize the world. The environmental impacts of these data centers are increasingly concerning governments, as environmental issues become more important in general.
Rupert Goodwins of ZDNet UK reported from London.