
Intel's summer of servers

CNET News.com asks Intel server chief Mike Fister what the looming debut of a faster Itanium processor will mean for servers. Also: How rapidly is corporate technology changing?

Stephen Shankland
Mike Fister once began a sales presentation on server benchmarks by applying paint to a wall. "Look at that! Paint drying. Now that is exciting!" he said to the roomful of astonished onlookers. Although Fister, a senior vice president and general manager of Intel's Enterprise Platforms Group, can joke about the often arcane world of back-end computing systems, he's dead serious when it comes to expanding Intel's share of that highly competitive market.

Indeed, with its Xeon chip, Intel has come to dominate the market for servers with four or fewer processors. With Fister, Intel has begun driving deeper into the server business as it attempts to push rivals Sun Microsystems and IBM toward the fringes.

The Santa Clara, Calif.-based company has braved slow acceptance for its Itanium chip. An upcoming processor called Madison, however, may begin to speed customer adoption. Madison performs about 50 percent better than its predecessor, and early indications are that more server makers are adopting it.

Fister's challenge will be to win faster acceptance. To achieve this goal, Intel will need to work more closely with the chip's users and the software community to help them adapt to the new processor. That's the sort of thing that Sun has done for years but a task that Intel has traditionally left to computer makers.

Fister, who joined Intel when DOS (Disk Operating System) ruled the PC world, recently sat down with CNET News.com to discuss Itanium and Intel's other chip plans.

Q: This is shaping up to be a server summer for Intel. What's on tap?
A: We've got another version of Gallatin (the current Xeon) coming, and Madison will debut exactly when we said it would two years ago. We've got some very exciting big OEM (original equipment manufacturer) systems coming out. And we're moving up in the top 500 (supercomputer) list. There are lots and lots of Intel architecture-based clusters--a lot of Itanium, a few Xeons.

Itanium sales haven't been the greatest, but interest among computer makers seems to be growing with Madison. Is this because the performance is better, or is there more software now? There's always been a dearth of applications, according to some.
The performance is, on par, better than anyone thought it would be, and the applications mix keeps moving up. We have more than 300 production release (applications) out now--and they are the big ones.

Intel serves up new chips
A cheat sheet to Intel's plans for the server market

Madison
Aka: A new version of the Itanium 2; successor of McKinley.
Features: 1.5GHz; 3MB to 6MB of cache memory.
Details: Set to launch June 30, say sources.

Gallatin
Aka: Xeon for servers with four or more processors.
Features: 2GHz; a faster version is expected June 30.
Details: A version with 4MB of cache is expected later in the year.

Deerfield
Aka: Low-voltage Itanium 2 for blade servers.
Features: 1GHz; 1.5MB of cache; planned for second half of 2003.
Details: Costs less and is more energy-efficient than Madison.

Prescott
Aka: An overhaul of the Pentium 4.
Features: 1MB of cache memory; more than 3GHz; promises better multimedia performance; uses LaGrande security technology.
Details: Designed to compete with AMD's Athlon 64.

Nocona
Aka: A 2004 Xeon for one- or two-processor boxes.
Features: Uses the same chip core as Prescott.
Details: Paired with the Lindenhurst chipset.

Potomac
Aka: A version of Nocona for servers with four-plus processors.
Features: Large cache memory.
Details: Scheduled for launch in 2004.

Sources: Intel and others

They are in the target areas: databases, business intelligence, supply chain management and ERP (enterprise resource planning), of course. What you see more and more of is testimonials from end users in different verticals.

With Itanium--and to a lesser degree the high-end Xeons--there is the issue of volume. Intel has historically sold chips in very high volumes. When you talk about large monolithic systems with 32 processors, you're talking about a lower-volume, higher-cost product. Are there any changes that Intel has had to make in order to deal with that difference?
The process developers make it pretty seamless for us product guys to do that. We chunk them out in the same fabs.

The biggest structural change took place more than three years ago, when we said: "Hey, to really drive the penetration into this part of the market, we have to be more solutions-aware." We then dramatically increased the number of people who touch the software industry. We do a lot more end-customer engagement. When I go out, I do high-performance computing roundtables with people from NASA and whoever else. We talk to CIOs and CFOs at retail and financial institutions.

I was talking to an analyst a few weeks ago who was grousing that, given the humongous amount of cache on Madison (6MB) and the fact that Intel got to start from scratch with the EPIC (Explicitly Parallel Instruction Computing) architecture, you should have a much bigger performance gap over existing architectures.
Whoever you are talking about may be talking a little bit out of both sides of their mouth. McKinley (the current version of Itanium) has beaten everything that has been around for 10 years. That's pretty impressive. And most of the techno-cognoscenti have said that's pretty neat...I won't even tell you what we are doing with the guys who came across from the Alpha development. That is an intact team working on a middle-of-the-decade Itanium family processor.

Weren't they working on Montecito, the dual-core version of Itanium?
No. They are working on one after that. I haven't told you the name of it yet. Maybe when we do the Madison launch. The development effort is rooted in the guys who were working on EV8.

What happened to Shavano? That was slated to follow Montecito earlier.
Shavano and Montecito kind of came together to become what's now Montecito.

The EV8 guys were working on some fairly sophisticated multithreading. Are you going to continue to evolve that?
Threading is a big idea for the company. One of the more interesting things about the server part of the business is that it is a technology incubation ground. You take new technologies and pilot them and prove them to the industry, and then they waterfall their way down to the desktops--and then to the notebooks and, ultimately, to the handheld computers.

It is also another example of why we need to work with the end user community more and more: We are creating more computer process technology than we have historically.

How so?
Back 20-something years ago, when I was working on the 8086 as a design guy, we could look at IBM mainframes for hints about what to do. As time has moved along, knowing exactly what to do has become less and less obvious.

The EV8 also contained an integrated Rambus memory controller. I don't know if we will see exactly that in the future, but what do you think of integrated controllers in general? Are they a good idea?
They can be, but timing is everything. We obviously have looked at it and haven't done it yet, because the memory technology evolves faster than the micro core--and when you get out of synch, that is a problem. And I think that somebody in the industry will figure out that that is a problem--somebody who has already done it. (AMD put an integrated memory controller on Opteron.)

We haven't said whether we would or wouldn't do it, but it is a natural thing to think about. Just like putting cache on a die is something we said we would do--or multi-core integration. Some people have done it already. Why? It was the only way they could drive performance.

When we do it, I bet that it will make sense to you. You've got to not only look at the speed characteristics, but also at the power characteristics of the process. The EV8 guys were talking about a part that would run in excess of 200 watts. It is almost beyond practical. When we do it, it isn't going to do that. We are holding a power threshold. It takes a ton of work.

What do you think of Sun's Afara technology--the multi-core, multi-thread chip they have coming out?
Well, you know, too much of a good thing doesn't do anything for you. I read it with curiosity and interest. Right now, the natural thread limits in a Microsoft environment are about 64, and in a Unix environment, it's 128 or 256. The tricky part is that you've got to work with the applications industry in order to be able to understand where you are going so that they can write applications that will use that kind of capacity.

In some respects, you seed the industry and pour some water on it, and maybe a tree happens. That is an extremely complex task. For someone to go out there with many hundreds of threads of capability--it is awe-inspiring to think about how that can work.
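
For readers who want to see what "using that kind of capacity" looks like in practice, here is a minimal, illustrative C++ sketch (my example, not Intel's or Sun's code) of the basic pattern server software follows: size a worker pool to whatever hardware concurrency the machine reports, rather than hard-coding a thread count.

    #include <iostream>
    #include <thread>
    #include <vector>

    // Placeholder for per-thread server work (request handling, etc.).
    void worker(unsigned id) {
        // ...process requests on this thread...
    }

    int main() {
        // Ask the runtime how many hardware threads the machine exposes;
        // hardware_concurrency() may return 0 if the count is unknown.
        unsigned n = std::thread::hardware_concurrency();
        if (n == 0) n = 4;  // conservative fallback

        std::vector<std::thread> pool;
        for (unsigned i = 0; i < n; ++i)
            pool.emplace_back(worker, i);
        for (auto& t : pool)
            t.join();

        std::cout << "Ran " << n << " worker threads\n";
        return 0;
    }

The pattern underlines Fister's point: the application has to be written to discover and exploit the available threads, which is why chipmakers court software developers as thread counts climb.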

How big is the opportunity for Itanium? It's made to take on the Unix market, which Gartner says this year will be less than half of the server revenue and about 18 percent of the units. It is shrinking and shrinking, partly because of your own 32-bit chips.
It's a place where we have the opportunity to gain market share. It is also a place where the revenue for the computing elements--servers and storage--is absolutely huge. Even though it is a relatively small unit volume, it is mega, mega billions of dollars of capital equipment.

It is also a technology incubation area. I don't know when--maybe a while yet--but if history is right, that technology incubator sometimes finds a crossover.

So someday, the Itanium processor family has the opportunity to cross over with our IA-32 line and take a large part of that segment. You can see that we are expanding the breadth of the product and creating more overlap.

You mean Deerfield, the low-power, inexpensive version of Madison?
That is the first indication of it. You drive the breadth of the product line, and you get overlap with some of it. Boy, I don't know when or if the client ever needs 64 bits, but servers--if that technology gets wide applications--could use it.

If you are ever going to drive overlap, you've got to be able to drive price consistency. People who think that Itanium is "niched up" in the high-end forever because of the size of the die or the power are wrong.

Do you expect to extend the physical addressing technology in Xeon? Right now it goes to 36 bits, which lets you have 64GB of memory. Will you extend that again or will you let Itanium encroach from above?
That's an often-asked question. We've kind of been guarded about all of the architectural features in the next generation of Xeon. But it is a leap you shouldn't make to say that the only benefit of Itanium is the 64-bit physical addressing. It is a cleaner sheet of paper. It works around some of the deficiencies that people have historically pointed out with IA-32 chips: There is a limited number of registers and not enough arithmetic capability in the product--and how do you involve the RAS (reliability, availability and serviceability) features of components?

I guarantee you that the IA-32 product line lives for a long time yet. The coolest facet of it is the threaded technology. We have some virtualizing technology coming. You just started to see that with LaGrande (a coming security feature), and there's more stuff there.
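
As a quick aside on the arithmetic in the question above: each physical address bit doubles the memory ceiling, so 36 bits reach 2^36 bytes, or 64GB. A trivial check (illustrative only, not Intel code):

    #include <iostream>

    int main() {
        // 2^32 bytes = 4GB; 2^36 bytes = 64GB; 2^40 bytes = 1TB.
        unsigned long long bytes = 1ULL << 36;
        std::cout << (bytes >> 30) << " GB\n";  // prints 64
        return 0;
    }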

Let's take a walk down memory lane for a second. Back in 1991 or so, when Intel was debating what to do, the development group in Oregon said it could extend the 32-bit family to 64 bits, while a group in Santa Clara recommended going with what became Itanium. What were the reasons that Intel tipped toward Itanium back then?
It was a hotly contended subject. The question was: How could we continue to find ways to evolve a 32-bit architecture? Is there a way to live with the legacy and the artifacts of the architecture? You know--only so many addresses, segmentation in the addressing, bladee, bladee, blah. Could you do that and do like we did with the P6?

The P6 family is a marvel. I am proud to be associated with it. We took core elements and pretty much dropped out the parts that became the Pentium III, the Xeon and even the Celeron. That was pretty cool. The question is: How could you keep doing that in an environment where desktops and big computers are diverging? It is complicated. The desktop continues to have pricing pressure on it, and with big computers like servers, you continue to evolve into the niche of a niche.

I think we came to the realization that, to do that, we are really going to have to let our lines diverge--and you are going to have to build these kinds of dedicated server processors, make the leap to some new technology generation, and hope that some day it crosses back over.

But there was the problem of software incompatibility. Did people draw straws and say, "OK, Craig, you are the loser in this one. You go tell the developers that they have to rewrite their apps"?
I don't know. I wasn't an Itanium guy in the beginning. I was on the other side. I was the 32-bit guy, and we had very powerful arguments on how we were evolving it. The company intelligently embraced the reality that we would have to do an architectural transition and include the software industry--and we seeded this with fundamental technology around compilers.

I'm curious how important you think the IA-32 Execution Layer, which lets you run IA-32 code on an Itanium, will be.
As you target crossover, or a consolidated environment, your future server may run some legacy code that you don't want to convert to native. I'm a student of history. Mainframes are running 20-year-old Cobol code for which the developer is nonexistent. Hell, he may have left the company or retired. You don't even know the genealogy of it--you just know that it works.

So we had compatibility mode in hardware in the first members of the Itanium family, and what we have come to show you is that we have been working on something else. It is a software environment that can progress with the evolution of the components.

Because the compatibility wasn't great.
The compatibility was great. The performance was, uh, limited. We have some real clever ideas for how to augment the hardware in a different way.
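
To make the idea of a software execution layer concrete, here is a toy sketch of dynamic binary translation in general, using a made-up "guest" instruction set; it is not based on the IA-32 Execution Layer itself, whose internals Intel had not disclosed. The essential loop: look up a guest code location in a translation cache, translate it on a miss, and run the native result thereafter.

    #include <cstddef>
    #include <cstdint>
    #include <functional>
    #include <iostream>
    #include <unordered_map>
    #include <utility>
    #include <vector>

    // A made-up "guest" instruction set standing in for legacy binary code.
    enum class GuestOp : std::uint8_t { ADD, SUB, PRINT, HALT };

    // A translated block: host-native code acting on guest state (one register).
    using NativeBlock = std::function<void(std::int64_t&)>;

    // Translate one guest instruction into host-native behavior (here, a lambda).
    NativeBlock translate(GuestOp op, std::int64_t operand) {
        switch (op) {
        case GuestOp::ADD:   return [operand](std::int64_t& acc) { acc += operand; };
        case GuestOp::SUB:   return [operand](std::int64_t& acc) { acc -= operand; };
        case GuestOp::PRINT: return [](std::int64_t& acc) { std::cout << acc << '\n'; };
        default:             return [](std::int64_t&) {};
        }
    }

    int main() {
        // A tiny guest program: acc += 7; acc -= 2; print acc (prints 5).
        std::vector<std::pair<GuestOp, std::int64_t>> guest = {
            {GuestOp::ADD, 7}, {GuestOp::SUB, 2},
            {GuestOp::PRINT, 0}, {GuestOp::HALT, 0}};

        std::unordered_map<std::size_t, NativeBlock> cache;  // translation cache
        std::int64_t acc = 0;

        for (std::size_t pc = 0; guest[pc].first != GuestOp::HALT; ++pc) {
            auto it = cache.find(pc);
            if (it == cache.end())  // miss: translate once, reuse on later passes
                it = cache.emplace(pc, translate(guest[pc].first,
                                                 guest[pc].second)).first;
            it->second(acc);  // run the translated (native) code
        }
        return 0;
    }

Because the translation lives in software, it can be retuned for each processor generation, which is the "progress with the evolution of the components" advantage Fister points to over the original hardware compatibility mode.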