InfiniBand reborn for supercomputing
After failing to deliver on its promise to remake mainstream business computing, the high-speed networking technology may be making a comeback as a way to create supercomputers.
Los Alamos National Laboratory has installed a major supercomputer made of 128 computers interconnected by InfiniBand.
The surge in support is a reversal of fortune for InfiniBand, a standard initially developed by computing giants including IBM, Intel, Hewlett-Packard, Compaq Computer, Dell Computer and Sun Microsystems to succeed the omnipresent PCI technology used to plug devices such as network cards into computers.
InfiniBand failed in its original mainstream mission, with Microsoft and Intel stepping away from the technology.
If InfiniBand catches on in supercomputing, it will threaten niche companies such as Myricom that sell proprietary high-speed interconnects.
"The high-performance computing interconnects these days are really a hodgepodge of proprietary interconnects that all do basically the same thing," said Illuminata analyst Gordon Haff. "The idea of having a high-performance, low-overhead interconnect that everyone can agree on is pretty appealing in that space. I don't see how those smaller niche interconnects can prevail."
In recent years, "Beowulf clusters" have caught on as a way to assemble supercomputers out of interconnected inexpensive Linux servers.
InfiniBand may not have dazzled the computer industry, but it has reached data transfer speeds yet to be attained through more ordinary networking technologies such as Ethernet or Fibre Channel. The "4x" version of InfiniBand can transfer data at 10 gigabits per second, and there's a 12x version in the works. Mainstream Ethernet adoption is just reaching 1 gigabit per second, while Fibre Channel is now standardized at 2 gigabits per second.
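As a back-of-the-envelope illustration of what those link rates mean in practice, the sketch below computes ideal transfer times for a terabyte of data at each speed cited above. This assumes sustained, overhead-free throughput, which real networks never achieve, so treat the numbers as upper bounds on what each link could deliver.

```python
def transfer_seconds(size_bytes, gigabits_per_second):
    """Seconds to move size_bytes over a link at the given raw rate,
    assuming ideal sustained throughput with no protocol overhead."""
    bits = size_bytes * 8
    return bits / (gigabits_per_second * 1e9)

terabyte = 10**12  # 1 TB (decimal)

for name, rate in [("4x InfiniBand", 10),
                   ("2Gbps Fibre Channel", 2),
                   ("Gigabit Ethernet", 1)]:
    print(f"{name}: {transfer_seconds(terabyte, rate):,.0f} seconds")
```

At these raw rates, a terabyte that takes more than two hours to move over Gigabit Ethernet crosses a 4x InfiniBand link in under 14 minutes.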
InfiniBand isn't cheap, but supercomputer customers are used to paying a premium for better performance. One appealing feature of Beowulf clusters is that the same basic software works on inexpensive models with Ethernet connections and a few computers, and on high-end models with fast networking and thousands of systems.
Hopping on the InfiniBandwagon
Dell Computer, whose coming "modular" computers will incorporate InfiniBand, is testing InfiniBand clusters in its labs as an option for high-performance computing, the company said.
Other companies are also getting involved, many of them announcing their plans at the SC2002 conference this week. Among them are Paceline Systems and InfiniSwitch, which make high-speed switches to connect InfiniBand-enabled devices.
Paceline announced a promotional kit for high-performance computing customers and an agreement with Abba Technologies to sell its hardware to supercomputer customers.
Paceline supercomputer customers include Sandia National Laboratories and the University of Washington.
Paceline also is working with a smaller company, MPI Software Technology, on a starter kit.
The starter kit costs $9,995 for a system with a Paceline 4100 switch, four adapter cards so servers can be connected, the MPI/Pro software and cables. Evaluation units are available now, with general availability scheduled for February 2003.
A start-up called Topspin Communications is using InfiniBand networking processors from Mellanox to build a 72-port InfiniBand switch.
Topspin, which wants to reach mainstream commercial customers as well as supercomputer buyers, also is working with MPI Software Technology. Topspin's hardware is used in the Los Alamos cluster.
The Los Alamos system uses 128 dual-Xeon computers from Promicro Systems.
Another company trying to benefit from the supercomputing market is JNI, which makes InfiniBand cards that plug into servers. The company announced two new cards--one using Mellanox chips and the other using IBM chips--each with two InfiniBand ports. MPI Software Technology supports the cards, JNI said.