InfiniBand eight years later

The tech once touted as a faster and more efficient way to connect servers may not have lived up to early promises, but it's got a solid niche.

Gordon Haff
Gordon Haff is Red Hat's cloud evangelist, although the opinions expressed here are strictly his own. He's focused on enterprise IT, especially cloud computing, but he writes about a wide range of topics, from the way too many hours he spends traveling to his longtime interest in photography.

In October of 2000, I hopped a Las Vegas-bound flight to attend a developers' event being thrown by the InfiniBand Trade Association.

By way of background, InfiniBand was one of the hot technology properties of the pre-bubble-bursting days. It was touted as a better (faster, more efficient) way to connect servers than the ubiquitous Ethernet. Its more vocal backers, of which there were many, went so far as to position it as a "System Area Network"--a connective fabric for data centers. A whole mini-industry of silicon, software, host bus adapter, and switch vendors supported InfiniBand. One sizable cluster of these companies resided in Austin, Texas, but many more were scattered around the U.S. and elsewhere--to say nothing of significant InfiniBand initiatives at companies such as IBM and Intel.

I don't remember all the details of that first InfiniBand event, but it filled a decent-sized hall at the Mandalay Bay and was followed by a party that took over the hotel's "beach" on a balmy Vegas evening.

Last week, I attended another InfiniBand event, TechForum '08. It was also in Las Vegas. The more modest digs at Harrah's reflected that InfiniBand hasn't exactly lived up to those past hopes. However, the fact that there even was a TechForum '08 also reflects that InfiniBand is still with us--primarily as a server interconnect for high-performance computing (HPC) applications where low latency and high bandwidth are especially important.

Given that I've been following InfiniBand since its early days, this seems like a good opportunity to reflect on where InfiniBand stands today and where it may be going.

As with another Big "I" technology, Intel's Itanium processor, it's tempting to glibly dismiss InfiniBand as a failure because it didn't live up to early (probably unrealistic) hopes and promises. In fact, InfiniBand now dominates performance-sensitive connections between servers in HPC. It has largely taken the place of a plethora of competing alternatives, most notably Myricom's Myrinet and Quadrics' QsNet. Plain old Gigabit Ethernet has successfully held onto its position as the default data center interconnect, and FibreChannel has remained the default for storage area networks. But InfiniBand has been quite successful at establishing itself as the standard interconnect for optimized clusters.
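For readers who don't live in the HPC world, the latency argument is easiest to see in the kind of microbenchmark cluster builders run when they compare fabrics. What follows is a minimal sketch of my own devising, not anything presented at TechForum: a standard MPI "ping-pong" test that bounces a one-byte message between two ranks and reports the average one-way latency. It assumes only a generic MPI toolchain (mpicc to build, mpirun to launch); the number it prints is entirely a function of the interconnect underneath, which is why the same little program tends to show such a wide gap between an InfiniBand fabric and plain Gigabit Ethernet.

/* pingpong.c: minimal MPI ping-pong latency sketch (illustrative only). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int iters = 10000;
    char byte = 0;

    MPI_Barrier(MPI_COMM_WORLD);
    double start = MPI_Wtime();

    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            /* Rank 0 sends a 1-byte message and waits for the echo. */
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            /* Rank 1 echoes the message straight back. */
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    double elapsed = MPI_Wtime() - start;
    if (rank == 0) {
        /* One-way latency is half the measured round-trip time. */
        printf("Average one-way latency: %.2f microseconds\n",
               (elapsed / iters / 2) * 1e6);
    }

    MPI_Finalize();
    return 0;
}

Run it with two ranks spread across two machines (something like mpirun -np 2 ./pingpong; the exact host-selection flags vary by MPI implementation), and the character of the fabric shows up immediately in the result.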

One also finds InfiniBand technology beneath the covers in a variety of products. A number of blade chassis, for example, use InfiniBand in their backplanes. This may not be InfiniBand the standard, exactly, but it is InfiniBand the technology. And this type of use contributes to InfiniBand component volumes--which tends to drive down prices.

But what of 10 Gigabit Ethernet? Isn't it inevitable that 10 GbE will replace InfiniBand? Indeed, most InfiniBand component suppliers, such as Mellanox, are hedging their bets by embracing both technologies.

But 10 GbE, after many years in development, remains in its early days. Costs are still high. The converged 10 GbE that is most relevant to InfiniBand's future, sometimes called "Data Center Ethernet," isn't even a single thing. It's at least six different standards initiatives from the IEEE and IETF (not including the related FibreChannel over Ethernet efforts). In many cases, 10 GbE will also require that data centers upgrade their cable plant to optical fiber.

In short, although 10 GbE will certainly emerge as an important component of data center infrastructures, a lot of technical work (and no small number of political battles) remains.

So does Ethernet conquer all? Maybe. Someday. But a lot can happen between now and someday. InfiniBand may never markedly expand beyond the sorts of roles it plays today. But 10 GbE is far from ready to take over when latency has to be lowest and bandwidth has to be highest.