Computing powers finish connection spec

A host of computing industry heavyweights complete a specification for technology that could speed connections between different computers and storage systems.

Stephen Shankland

A host of computing industry heavyweights completed a specification Friday for technology that could speed connections between different computers and storage systems.

The specification, called Remote Direct Memory Access, or RDMA, over Ethernet, borrows heavily from a similar standard called InfiniBand, but has the advantage of working over nearly ubiquitous Internet Protocol (IP) networks running on the widely used Ethernet.

The idea of RDMA is to send data over a network directly into a patch of a computer's memory without requiring much processing by the computer. That would be faster than the current method, in which the receiving computer's processor must shepherd incoming data through a comparatively slow, multistage path of protocol handling and buffer copies before it's stored in the right area.
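
For a sense of what that means in code, here is a minimal sketch of the memory-registration step that makes such direct placement possible. It uses the libibverbs programming interface as a stand-in, an assumption made purely for illustration; the consortium's specification defines a wire protocol, not this particular API, and a real transfer would also need connection setup that is omitted here.

    /* Illustrative sketch (assumed API: libibverbs): register a buffer so
       the network adapter can place incoming data into it directly,
       without per-packet work by the host processor.
       Build: gcc rdma_reg.c -libverbs */
    #include <stdio.h>
    #include <stdlib.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num = 0;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (!devs || num == 0) {
            fprintf(stderr, "no RDMA-capable adapter found\n");
            return 1;
        }

        struct ibv_context *ctx = ibv_open_device(devs[0]);
        struct ibv_pd *pd = ctx ? ibv_alloc_pd(ctx) : NULL;
        if (!pd) {
            fprintf(stderr, "could not open adapter\n");
            return 1;
        }

        /* Pin and register an ordinary buffer; the adapter may now write
           into it directly, bypassing the usual receive-side copies. */
        size_t len = 4096;
        void *buf = malloc(len);
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_WRITE);
        if (!mr) {
            fprintf(stderr, "registration failed\n");
            return 1;
        }

        /* The rkey is what a remote peer presents to target this buffer
           with an RDMA write -- the "directly into memory" step. */
        printf("registered %zu bytes, remote key 0x%x\n", len, mr->rkey);

        ibv_dereg_mr(mr);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        free(buf);
        return 0;
    }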

One hope for the technology is that it will let companies build higher-powered databases by wiring low-cost servers together instead of having to buy a single, expensive machine. Another is that it will allow the use of regular networks rather than the comparatively rare and expensive Fibre Channel networks to connect computers and storage systems.

"It looks to be the direction that data center interconnects are going, rather than something new like InfiniBand. But we're a few years away from widespread adoption," said Illuminata analyst Gordon Haff. "Ethernet always--or at least usually--wins."

The technology has been under development at the RDMA Consortium. Founding members of the group include many big names in the computing industry: Adaptec, Broadcom, Cisco Systems, Dell, EMC, Hewlett-Packard, IBM, Intel, Microsoft and Network Appliance.

Now, however, the consortium has finished its version of the technology and handed it over to the Internet Engineering Task Force standards body to oversee future development, the group said.

Next come products. HP, a vocal RDMA proponent, expects to start testing RDMA-enabled network adapters in January or February, said Paul Perez, vice president of storage, networks, and infrastructure for HP's industry standard server group.

Mainstream use, however, isn't likely until today's 1-gigabit-per-second networks are upgraded to 10-gigabit-per-second speeds, IBM and others expect.

The consortium finished work on the core parts of the RDMA specification in April, and since then has completed some significant additional components. One, called iSER, extends the iSCSI standard, which is used to build storage networks on IP networks, so that iSCSI networks can benefit from RDMA performance. Another, called the Sockets Direct Protocol, makes it easier for software to use RDMA without having to be rewritten, said John Gromala, marketing manager in Perez's group.
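
To make the Sockets Direct Protocol point concrete, here is a minimal sketch of a client written against the ordinary BSD sockets API, the interface SDP preserves. Nothing in it is RDMA-specific; the promise of SDP is that an implementation can slip in beneath this same API and carry the stream over RDMA hardware instead of TCP. The address, port and message are made up for illustration.

    /* An ordinary stream-socket client. Under the Sockets Direct Protocol,
       code like this needs no rewriting: an SDP layer beneath the same API
       moves the bytes over RDMA rather than TCP. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        /* A plain stream socket; SDP's job is to redirect exactly this
           kind of traffic onto an RDMA transport transparently. */
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(7);            /* echo port, illustrative */
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
            perror("connect");
            close(fd);
            return 1;
        }

        const char msg[] = "hello over a plain stream socket\n";
        write(fd, msg, sizeof msg - 1);
        close(fd);
        return 0;
    }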