The Internet push is the latest effort by the backers of Future I/O, detailed today at a conference here. Backers include some of the world's biggest computer manufacturers, which seek to advance their server standard over a competing one called Next-Generation Input/Output (NGIO) from Intel and its cohorts.
The battle over the new standard is a major issue for software and hardware manufacturers, as well as their Fortune 500 customers, who must decide whether to support one technology, the other, or both. The companies behind the victorious standard also stand to gain because, the theory goes, their superior knowledge of it will give them an advantage in future product development. Although the two camps have held peace talks, they currently don't appear to be close to an agreement.
While the competing standards have some similarities, technical disagreements remain about the best method for transferring information to and from server CPUs. Currently, the Future I/O camp is criticizing NGIO for being too slow. However, Future I/O executives said today that they're also pursuing a slower-speed "thin pipe" version of Future I/O designed to reduce component costs.
Future I/O and NGIO are designed to increase the reliability and speed of transferring data from a server CPU to devices such as network cards or storage arrays. Both technologies use a "switched-fabric" method of exchanging information that resembles a miniature network. A big advantage of the switched-fabric method is that it makes it harder for a malfunctioning component to crash the whole computer.
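The fault-isolation claim follows from the topology: in a switched fabric each device has its own point-to-point link into a switch, so a wedged device can't stall everyone else the way it can on a shared bus. A minimal Python sketch of that idea (all class and device names here are hypothetical illustrations, not drawn from either specification):

```python
# Hypothetical sketch: fault isolation in a switched fabric vs. a shared bus.

class Device:
    def __init__(self, name, faulty=False):
        self.name, self.faulty = name, faulty

    def receive(self, data):
        return f"{self.name} got {len(data)} bytes"

class SharedBus:
    """Every device shares one medium; a wedged device can hang the bus."""
    def __init__(self, devices):
        self.devices = devices

    def transfer(self, src, dst, data):
        # Any malfunctioning device holding the shared medium blocks everyone.
        if any(d.faulty for d in self.devices):
            raise RuntimeError("bus hung by a faulty device")
        return dst.receive(data)

class SwitchedFabric:
    """Each device has its own link to a switch; faults stay local."""
    def transfer(self, src, dst, data):
        # Only the two endpoints of this link matter to this transfer.
        if src.faulty or dst.faulty:
            raise RuntimeError("endpoint down; other links unaffected")
        return dst.receive(data)

cpu, nic, disk = Device("cpu"), Device("nic"), Device("disk", faulty=True)

fabric = SwitchedFabric()
print(fabric.transfer(cpu, nic, b"hello"))  # succeeds despite the faulty disk

bus = SharedBus([cpu, nic, disk])
try:
    bus.transfer(cpu, nic, b"hello")        # fails: faulty disk hangs the bus
except RuntimeError as err:
    print(err)
```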
Future I/O will send data in the same format as the Internet Protocol (IP), said Frank Maly, director of marketing at Cisco. That fact will make it easier to transfer information straight to the Internet, giving a big boost to activities such as sending streams of voice or video data.
The support for Internet technology as a basis for Future I/O was a key part of Cisco's decision to become a promoter of Future I/O, Maly said. In addition, Maly pointed to the higher speed of Future I/O.
Milking the Internet Protocol
Using IP means that Cisco's vast line of hardware that routes Internet signals can be tied into server infrastructure as well. "We feel this technology has an opportunity converging data networks and I/O networks," Maly said.
Cisco equipment therefore could get a more prominent presence in high-end server tasks such as mirroring data on different storage systems or making servers disaster-proof by spreading servers across different locations, said Cisco's Duane DeCapite.
Future I/O will be based on the upcoming version of IP, called IPv6, and will likely further the adoption of that standard over the current standard, IPv4, Maly said.
Intel, though, has evaluated and rejected IP as the basis for NGIO, said Mitch Shults, marketing director of the NGIO initiative at Intel. Each chunk of data transferred with IP has 128 bits of addressing information at the front, meaning a lot of wasted space. "We're kind of scratching our heads" about why Future I/O chose to adopt IP, Shults said, because it increases the delays in shuttling data between servers.
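Shults' objection can be put in rough numbers. The IPv6 fixed header defined in RFC 2460 is 40 bytes, 32 of which are the two 128-bit addresses, so the per-packet tax is heaviest on small transfers (the payload sizes below are illustrative, not figures from either camp):

```python
# Back-of-the-envelope IPv6 header overhead, per RFC 2460's fixed header.

IPV6_HEADER = 40        # bytes in the fixed IPv6 header
ADDRESS_BYTES = 2 * 16  # source + destination addresses, 128 bits each

print(f"addresses alone: {ADDRESS_BYTES}/{IPV6_HEADER} bytes "
      f"({ADDRESS_BYTES / IPV6_HEADER:.0%} of the header)")

for payload in (64, 256, 2048):
    overhead = IPV6_HEADER / (IPV6_HEADER + payload)
    print(f"{payload:5d}-byte payload: {overhead:.1%} header overhead")
```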
Future I/O could provide the networking speed to unify several different server communication tasks that at present are typically split across several subsystems, said Martin Whittaker, manager of research and development at HP's enterprise NetServer division. Future I/O is able to unite the ordinary network that connects servers to computer users, the separate network that ties servers together in "clusters" to share tasks and protect against crashes, and the storage area networks (SANs) that are gaining popularity as a way to connect servers to their disk storage systems without burdening the rest of the network.
"You can't have a unified standard unless you go up in bandwidth," said 3Com's Bill Huber.
In addition, using IP means that computer management equipment currently designed to monitor Internet network traffic will be useful in Future I/O networks.
A higher-speed connection
The first version of Future I/O, which will be able to transfer data at 2,500 megabytes per second, is due by the end of the year. Future I/O participants will unveil a draft version of the specification, already 300 pages long, tomorrow.
By comparison, NGIO runs at roughly one-fifth that speed, 475 MB/sec, said Karl Walker, vice president of technology development at Compaq's enterprise computing group. "Frankly, we think that Intel is pushing in the wrong direction," Walker said. "NGIO is actually slower than existing PCI implementations today and much slower than PCI-X coming out this year and early 2000."
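Walker's comparison is easy to check against the commonly cited peak rates of the parallel buses of the day (the PCI and PCI-X figures below are the standard 64-bit-bus peaks, computed from width times clock, not numbers taken from the article):

```python
# Rough arithmetic behind the bandwidth comparison.

def bus_peak_mb(width_bits, clock_mhz):
    """Peak throughput of a parallel bus: width in bytes times clock rate."""
    return width_bits / 8 * clock_mhz

future_io = 2500              # MB/sec, first Future I/O version
ngio = 475                    # MB/sec, per Walker
pci = bus_peak_mb(64, 66)     # 64-bit, 66-MHz PCI -> ~528 MB/sec
pci_x = bus_peak_mb(64, 133)  # 64-bit, 133-MHz PCI-X -> ~1,064 MB/sec

print(f"Future I/O vs. NGIO: {future_io / ngio:.1f}x")
print(f"PCI peak:   {pci:.0f} MB/sec")
print(f"PCI-X peak: {pci_x:.0f} MB/sec")
print("NGIO slower than 64-bit PCI:", ngio < pci)
```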
However, at the same time that Future I/O is working on a thin pipe version, NGIO is working on a fat pipe version, Shults said. That initiative, led by Sun, will allow data transfer speeds of up to 2,000 MB/sec, he said.
Intel believes one problem with Future I/O is that it will arrive too late. "We have almost complete convergence in terms of concept. Where we differ is in our perspective on the required time frame," Shults said. Intel has already provided prototype NGIO chips and hardware to developers, Shults said, and wants the technology to be in place in time for the McKinley chip rollout, currently scheduled for late 2001.
Future I/O advocates insist they're moving right along, though. Version 1.0 of the Future I/O specification is due by the end of 1999, and the first shipping products using Future I/O are expected by early 2001.
Although future versions of Future I/O are planned with double and quadruple the speed of the first version, IBM, Compaq, and HP said today they're evaluating a slower standard as well.
Walker said the total cost of Future I/O won't be higher than that of existing technology, but the scaled-down "thin pipe" version of Future I/O indicates cost is still an issue. The "thin pipe" version of Future I/O will "drive down the hardware cost," Whittaker said.
Future I/O is not an "expensive, gold-plated type solution," Walker said. "We're focused on making sure this is a very cost-effective implementation."
Across the great divide
Technical issues rather than legal ones appear to be what's keeping the two camps from merging their standards now.
In the past, sticking points separating them have included legal issues such as handling intellectual property or voting on details of the specification, as well as financial issues such as royalties for licensing hardware.
The Future I/O camp has incorporated to form a special-interest group, or SIG, to design the specification, said Tom Bradicich, director of IBM's Netfinity server architecture and technology group. "There will be a single annual fee that will cover membership and access to the specification, but it will not be an economic burden or inhibitor to companies wanting to join," Bradicich said. Becoming a member will grant a company access to the specification as well as a way to participate in defining it, he said.
The Future I/O SIG will finish the details of the membership agreement in July, Bradicich said.
In addition, Walker said the Future I/O camp has contracted with a management company that will be a "neutral holder of the specification" so that "no one single company is a point of contact or point of control."
The two camps still are talking, though, and each insists the other is welcome to come aboard. "There's always hope. The phone lines are open," Shults said.