Why Itanium still matters

As the processor underpinning Hewlett-Packard's Integrity line, Itanium remains an important component that can't be easily replaced.

Gordon Haff
Gordon Haff is Red Hat's cloud evangelist although the opinions expressed here are strictly his own. He's focused on enterprise IT, especially cloud computing. However, Gordon writes about a wide range of topics whether they relate to the way too many hours he spends traveling or his longtime interest in photography.

It's been a long time coming, but earlier this week, Intel finally launched "Tukwila," the latest iteration in its Itanium family of high-end microprocessors.

(Image: Intel Itanium 9300 die. Credit: Intel)

Coming on the same day that IBM introduced both its Power7 chip and the first of an associated line of servers, Tukwila didn't garner as much attention as it might have otherwise. It's also true that today's Itanium is something of a specialty product. But that doesn't make it irrelevant.

Tukwila will be the first Itanium to incorporate Intel's serial processor communications link (QuickPath Interconnect, or QPI) and integrated memory controllers. These features boost performance considerably and are standard fare for the current generation of server microprocessors. They also mean that the Itanium 9300, as Tukwila is officially known, and Intel's upcoming Nehalem-EX Xeon (x86) processor can, in principle, be supported by the same system design.

In practice, this convergence was a more interesting selling point in the days when Intel envisioned a broader market for Itanium processors. Nonetheless, it will still let Intel and its manufacturing partners take advantage of Xeon design work and dollars for Itanium. The specifics of the chip aside, though, it's not unreasonable to ask whether any of this matters. Given that both AMD and Intel's high-end x86 processors get more capable by the year, why does anyone need Itanium?

Certainly, Itanium's market position today is not the one envisioned by Intel and Hewlett-Packard when they first started designing the processor in the mid-1990s. They had conceived of it as a 64-bit processor family running Windows and (perhaps) a united Unix, one that would emerge as the de facto standard when the time came to move beyond the increasingly restrictive memory limits imposed by 32-bit processors.

The reasons why this didn't happen are numerous, and it would take an extended discussion to give them their due. However, some of the big ones include an overly ambitious concept; delays coupled with bad timing; a focus on instruction-level parallelism, when the world would soon move to more of an emphasis on threads; and AMD's introduction of 64-bit extensions for x86.

Today, by contrast, just one company, HP, accounts for about 85 percent of the market for Itanium processors, with the balance mostly going to several large Japanese computer system vendors. HP uses Itanium in its Integrity line, which mostly runs HP-UX (HP's Unix) and NonStop (the descendant of Tandem's fault-tolerant operating system) applications.

One company may not sound like much of a market but, in fact, vendor-specific processors were long the norm in the computer industry and only fell by the wayside when x86 matured. And even today, IBM continues to aggressively roll out new Power processors, and Oracle says it plans to continue developing Sparc chips. Each of these cases is a bit different, but the basic point is that it's not outlandish for a major vendor's product line to support a unique microprocessor.

But why would HP want to, given that this is a company that also has a major x86 product line? In a word: software.

It is likely, perhaps even certain, that if HP could wave a magic wand and have HP-UX and all its myriad applications run on Xeon tomorrow, it would do so. However, there is no such magic wand.

The closest to such a wand, dynamic binary translation (DBT), works in some limited contexts. IBM uses it for certain Linux applications on Power chips, and Apple used DBT to aid the migration from PowerPC to Intel chips. But, for the most part, IT shops won't or can't use it for the sort of critical applications that run on HP-UX today. Indeed, HP developed its own DBT technology, called "Aries," when it first moved applications to Itanium. Few used it.

It took many painful years--the better part of the last decade--for HP and its software partners to re-establish HP-UX's software catalog on Itanium when it migrated off PA-RISC. To start this process anew for Xeon is simply unthinkable.

And even if the features of Xeon have largely achieved parity with Itanium, the same isn't generally true of the platforms as a whole. HP-UX is a mature commercial Unix operating system in the mold of AIX and Solaris. Linux and Windows gain in capability and robustness with each passing year, but they're not yet at the same point. The contrast with NonStop is even more striking. This is, after all, a line of systems that powers about 75 of the 100 largest fund transfer networks around the world.

In short, Integrity brings a lot of money into HP, and it provides customers with capabilities that they can't necessarily get on Xeon-based platforms. And, in any case, HP-UX customers can't necessarily just pick up and move. Migrations take effort and money, and they have a degree of risk, even if the end state is ultimately a more desirable place to be.

In addition to introducing Tukwila, Intel provided additional detail about its successor, "Poulson." Scheduled for about two years hence, it will skip a process generation and launch on 32-nanometer technology. That should put it closer to its contemporary Xeon processors than the 65nm Tukwila is to today's 45nm Nehalem. (The process generation is significant because it largely determines the real estate available on the chip, and therefore features such as the number of cores and the amount of cache.)
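As a back-of-the-envelope illustration (my own sketch, not Intel's numbers): transistor density scales roughly with the square of the feature-size ratio, which is why skipping the 45nm node matters.

```python
# Rough rule of thumb: shrinking the feature size from F1 to F2 scales
# transistor density by about (F1/F2)^2, since density goes with area.
# This ignores real-world design factors; it's only an illustration.

def density_gain(old_nm: float, new_nm: float) -> float:
    """Approximate transistor-density multiplier from a process shrink."""
    return (old_nm / new_nm) ** 2

# Tukwila (65nm) to Poulson (32nm): skipping the 45nm node roughly
# quadruples the transistor budget for cores and cache in the same area.
print(round(density_gain(65, 32), 1))  # ~4.1x
```

By this crude measure, a 65nm-to-32nm jump buys about a 4x transistor budget, versus roughly 2x for a single-node shrink.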

Without providing much in the way of details, Intel also indicated that Poulson will have other architectural enhancements that go beyond simply being a process shrink. "Kittson" will be the generation after that.

Plans can change, of course, and processors can slip. However, barring seismic changes, Intel sketched out a road map for something like a decade's worth of Itanium processors. I don't really expect these Itaniums to set a lot of performance records, but there's no reason to think that they won't be "in the ballpark." It's worth remembering that Sun sold lots of Sparc systems long after it last had a "hot" microprocessor. The inertia in applications, skills, and general risk aversion in high-end servers is enormous.

Itanium doesn't matter when it comes to volume computing. It fought that battle and lost. But it remains an integral component in a major product line at a major systems vendor. And it remains a component that, in a world without magic wands, can't be easily replaced.