InfiniBand meets storage networking

Storage networking via iSCSI gets data transfer boost from addition of high-speed communication technology.

Stephen Shankland Former Principal Writer
InfiniBand, a high-speed communication technology, has been extended to speak the language of iSCSI networked storage systems.

The addition, called iSER, gives a high-end boost to the lower-end iSCSI technology. In its current incarnation, iSCSI uses conventional 1-gigabit-per-second Ethernet networking and is typically slower than its 4Gbps rival, Fibre Channel. With iSER, it can tap the performance of InfiniBand, which communicates at 10Gbps, but can reach 20Gbps if used with the InfiniBand DDR (double data rate) hardware introduced this year, said Len Rosenthal, the marketing representative for the InfiniBand Trade Association.

"The primary advantage over Gig E (1-gigabit-per-second Ethernet) is 10 times the performance, and 20 times the performance in DDR mode. You bring high bandwidth and low latency to the iSCSI standard," Rosenthal said.

Storage networking tech at a glance
iSER: Stands for iSCSI Extensions for RDMA; it lets computers send iSCSI traffic over high-speed InfiniBand networking gear.
iSCSI: Stands for Internet Small Computer System Interface; it lets computers send SCSI hard drive commands over a conventional Ethernet network.
RDMA: Stands for remote direct memory access; using RDMA over Ethernet--a technology dubbed iWarp--cuts down on networking communication delays between computer systems.

The technology bump will likely be of most use to large organizations that have big data centers, such as financial institutions or supercomputing sites.

There are some catches, though. To use iSER, customers must buy InfiniBand adapters for servers and storage. That means they don't get to use the networking equipment already built into their hardware, a major advantage of conventional iSCSI.

For another thing, software support so far is limited to Linux--though Microsoft Windows and Unix versions are in the offing, Rosenthal said. The iSER software ships as a standard part of the OpenFabrics open-source software stack, which supports not just InfiniBand but also a slower Ethernet-based equivalent called iWarp.

Ethernet technology, particularly products with new 10Gbps capabilities, competes with InfiniBand, said Greg Quick, an analyst at The 451 Group. That version of Ethernet helps address one of Ethernet's weaknesses: communication latency, the delay from when a packet of information is sent to when it arrives.

But with the arrival of iSCSI support in InfiniBand, the real competitor to InfiniBand is likely to be the storage networking incumbent, Fibre Channel.

"Fibre Channel development--(which is) just now looking at the 8Gbps space--is falling behind in the performance race and could be the big loser in this fight," Quick said in a recent report. "However, it is very unlikely that an established Fibre Channel network would throw out all its gear to adopt an InfiniBand one, so the two will likely need to learn to coexist."

InfiniBand has long had faster communication speeds and shorter delays than rivals, but the technology has generally been relegated to high-end niches such as supercomputers.

Rosenthal predicted InfiniBand will become a mainstream server technology in 2007, when it chiefly will be used to provide high-speed links between servers that collectively house a clustered database.

He also said he expects InfiniBand's next version, QDR (quad data rate), will double bandwidth again to 40Gbps when it arrives late next year.
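For a rough sense of what the link rates cited in this article mean in practice, the short Python sketch below compares how long a 1TB transfer would take at each raw rate. Note these are signaling rates only; real-world iSCSI and iSER throughput would be lower once protocol and encoding overhead are factored in.

```python
# Raw link rates cited in the article, in gigabits per second.
# These are signaling rates, not usable iSCSI/iSER throughput,
# which is reduced by encoding and protocol overhead.
link_rates_gbps = {
    "1Gbps Ethernet (conventional iSCSI)": 1,
    "4Gbps Fibre Channel": 4,
    "InfiniBand (iSER)": 10,
    "InfiniBand DDR (iSER)": 20,
    "InfiniBand QDR (planned)": 40,
}

DATA_BITS = 1_000_000_000_000 * 8  # 1 terabyte expressed in bits

for name, gbps in link_rates_gbps.items():
    seconds = DATA_BITS / (gbps * 1_000_000_000)
    print(f"{name}: {seconds / 60:.1f} minutes to move 1TB")
```

At these raw rates, the jump from 1Gbps Ethernet to InfiniBand DDR shrinks a 1TB transfer from over two hours to under seven minutes, which is the "20 times the performance" Rosenthal describes.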