
New InfiniBand technology to remake servers--eventually

A group of the most powerful computing companies finishes the first version of InfiniBand, but it'll still be a while before the new technology reshapes computer designs.

Stephen Shankland


Gartner analyst Thomas Henkel says InfiniBand is a legitimate attempt to promote a least-common-denominator standard for network interconnection, but at the same time, the InfiniBand partners are not particularly willing to give up control over their proprietary products.


A consortium of computing powerhouses has completed version 1.0 of InfiniBand, a standard that governs how CPUs in a server communicate with network cards and storage systems and eventually other computers as well. Computing companies have lofty aspirations for InfiniBand, expecting it not only to eventually replace today's PCI technology but also to completely rework corporate networks of computers and data storage equipment.

Though the standard is finished, it will be months before it shows up in the first hardware products. InfiniBand adoption is likely to lag further as customers wait for the technology to mature.

But most expect InfiniBand to prevail in the long run, based on promises of better reliability, faster communication speeds, and the backing of giants such as IBM, Intel, Sun Microsystems, Cisco Systems, Hewlett-Packard, Compaq Computer and Microsoft.

Intel, like IBM, foresees a day when InfiniBand connections won't just link CPUs and network cards but join together dozens or hundreds of servers. Large numbers of servers made of almost nothing but a CPU and memory will be linked via InfiniBand to large switches that handle the group's networking needs, predicted Jim Pappas, director of initiative marketing for Intel's enterprise platforms group.

Likewise, InfiniBand will connect these arrays of smaller servers to big back-end servers, the kind with dozens of processors and lots of memory to hold a large database, he said. InfiniBand also will connect all these servers to dedicated storage devices, Pappas said.

One indication of how InfiniBand will fare can be seen in PCI-X, an extension to the existing PCI standard that doubles the data transfer speed and makes a few other improvements. Compaq, IBM and HP announced PCI-X with great fanfare in September 1998, promising products using PCI-X would be available in the second half of 1999.

That timetable proved to be overly ambitious. Though products have yet to ship and chips supporting PCI-X are just beginning to emerge, there already is an 83-page list of corrections and updates to the standard.

"Is there anything that shows up on time?" asked Raju Vegesna, founder and chief executive of ServerWorks, a company building chips for servers that will use PCI-X. He said his company's chips are ready, but manufacturers of PCI-X cards such as network adapters are lagging. PCI-X will arrive next year, he said.

"I don't think there was really much prospect of 1999," said Tom Bradicich, director of architecture and technology for IBM's Intel servers. "Ratification of the specification took until about August 1999. It's more likely (products will arrive) in the beginning of 2001.

The InfiniBand schedule also has slipped. It was originally set to arrive at the end of the summer, said Bradicich, who also is co-chairman of the InfiniBand effort.

It simply took some time to review the draft InfiniBand specification and the numerous comments from the dozens of companies participating in the development of the standard. "We got 3,500 comments. We wanted to be very responsible," he said.

InfiniBand likely will show up at the end of 2001, Bradicich said. "It will be in...midrange to high-end servers," he said.

InfiniBand initially will be most popular with Intel's 32-bit Pentium CPUs, but in the longer term it will be used with 64-bit chips such as Itanium, Pappas said. Although he agreed products will be in customers' hands by the end of 2001, he said "volume won't start until 2002."

Technologies similar to InfiniBand have been used in proprietary computer designs for years. The difference with InfiniBand is that it's so widely backed. In addition to hardware makers such as Compaq and Cisco adopting it, a host of companies will produce chips to enable it. Big-name companies such as Agilent Technologies, Lucent Technologies, IBM and Intel will see competition from start-ups such as Banderacom, Mellanox and Crossroads.

InfiniBand differs fundamentally from "bus" technologies such as PCI. PCI's data pathway is shared among a number of devices. InfiniBand, on the other hand, establishes connections between one device and another--a CPU and the network card, for example--without having to share the connection with other devices.

InfiniBand establishes these connections by using what amounts to a miniature network within the computer. The brain behind InfiniBand is a switch, essentially a large high-speed chip that manages all the connections.
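The contrast can be sketched in a few lines of code. The toy model below is purely illustrative and is not drawn from the InfiniBand or PCI specifications; all device names and bandwidth figures are made up. It simply shows why a shared bus slows down as devices pile on, while a switched design lets transfers run side by side.

```python
# Illustrative sketch only: a toy model contrasting a shared bus (like PCI)
# with a switched, point-to-point design (like InfiniBand). Device names and
# bandwidth figures below are hypothetical, not taken from either specification.

class SharedBus:
    """All devices share one data path, so transfers queue behind one another."""

    def __init__(self, bandwidth_bytes_per_sec):
        self.bandwidth = bandwidth_bytes_per_sec  # total capacity shared by every device

    def total_transfer_time(self, transfers):
        # Every transfer contends for the same bus, so their times add up.
        return sum(size / self.bandwidth for size in transfers.values())


class SwitchedFabric:
    """A central switch gives each pair of endpoints its own dedicated connection."""

    def __init__(self, link_bandwidth_bytes_per_sec):
        self.link_bandwidth = link_bandwidth_bytes_per_sec  # capacity of each link

    def total_transfer_time(self, transfers):
        # Transfers run in parallel over separate links; the slowest one sets the pace.
        return max(size / self.link_bandwidth for size in transfers.values())


# Hypothetical workload: bytes moving between a CPU and three devices at once.
workload = {
    "network_card": 4_000_000_000,   # 4 GB to the network adapter
    "storage": 8_000_000_000,        # 8 GB to a storage device
    "other_server": 2_000_000_000,   # 2 GB to another server
}

bus = SharedBus(bandwidth_bytes_per_sec=1_000_000_000)                # one shared 1 GB/s path
fabric = SwitchedFabric(link_bandwidth_bytes_per_sec=1_000_000_000)   # 1 GB/s per dedicated link

print(f"Shared bus:      {bus.total_transfer_time(workload):.0f} seconds")
print(f"Switched fabric: {fabric.total_transfer_time(workload):.0f} seconds")
```

Under these made-up numbers, the shared bus needs 14 seconds because every transfer waits its turn on the single data path, while the switched design finishes in 8 seconds, limited only by its busiest link.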

Another big difference between PCI and InfiniBand is political. IBM, Compaq and HP developed PCI-X on their own, with Intel joining in later. InfiniBand has more unified support, though it took months of wrangling before Intel on one side and IBM, Compaq and HP on the other reconciled differences to come up with a unified standard.

Though InfiniBand has widespread backing, it still faces competition. Motorola and other telecommunications hardware makers have backed a competing standard, RapidIO. Other backers include Cisco, Nortel Networks and Lucent.

RapidIO, for the most part, doesn't compete with InfiniBand, Pappas argued. RapidIO will be used chiefly within telecommunications equipment. "I haven't seen any indication that RapidIO is the right technology for the server interconnect," he said.