
IT's successful standards

Many standards fail. But some become so ubiquitous that they change how we use computers.

Gordon Haff
Gordon Haff is Red Hat's cloud evangelist although the opinions expressed here are strictly his own. He's focused on enterprise IT, especially cloud computing. However, Gordon writes about a wide range of topics whether they relate to the way too many hours he spends traveling or his longtime interest in photography.

The nice thing about standards is that there are so many of them.

This old saw is arguably less true than in years past. Today, for a lot of reasons, there's more pressure to reach agreement on one way to do a certain thing. (Think the HD DVD vs. Blu-ray debacle for an example of what happens when vendors can't agree on a single approach.)

Standards aren't a single thing. Some have been blessed with the appropriate incantations by some official or quasi-official body. Others come from an industry consortium. And still others are "de facto" (or at least began life that way)--the result of a dominant company or just a default way of doing things.

[Photo: USB flash drive. Credit: Ambuj Saxena/Flickr, under CC]

The purist will argue that just being widely used doesn't make something a standard. I agree up to a point, and only use the term "standard" here for things that are truly ubiquitous. Contrariwise, a rigorous formal ratification process is no guarantee of success.

But some standards do win big and become part of just how IT gets done. Here are some of them.

Like many other successful standards, Ethernet has remained a fixture in local area networks for so many years in part by adapting to many successive waves of technology. First developed in the famous Xerox PARC labs in the mid-1970s, it initially ran over coaxial cable but soon moved to twisted pair cable with the 10 Mbit/second generation. 10 Gbit/second Ethernet is now starting to roll out along with a variety of additions to the specification that make it more suitable as a high-performance unified fabric.

Ethernet's initial success resulted in no small part from coordinated standardization efforts within the IEEE. This helped it beat out alternatives, most notably IBM's Token Ring. Over time, Ethernet's ubiquity and the cost benefits that came with that volume helped it largely stave off server interconnect challengers. InfiniBand has had wins in high-performance computing and certain other clustering applications, but it didn't displace Ethernet as a "server area network" as early promoters had hoped.

PCI (Peripheral Component Interconnect) had its beginnings as an Intel-developed bus for connecting internal cards within systems. The version 1.0 spec came out in 1992. Given the ubiquity of PCI these days, it's easy to forget that it took the better part of a decade to replace a plethora of other busses, both standardized and proprietary, first in x86 systems and later in large Unix servers based on other processors.

Nor was the process steady. Although PCI was initially introduced in part to replace the VESA Local Bus for graphics cards--which it eventually did--PCI was itself displaced for graphics by AGP (Accelerated Graphics Port) for a time prior to the PCI Express generation.

PCI Express makes for an interesting case study in the marketing of standards. With technology bumping up against the limits of parallel I/O busses like conventional PCI, the Arapahoe Working Group--spearheaded by Intel--started pushing a new serial interconnect standard in about 2001.  Arapahoe's success was by no means pre-ordained. AMD's HyperTransport was one alternative among several.

Arapahoe required hardware that was largely different from PCI but it was compatible with PCI's software model in a number of respects. And this was enough to get Arapahoe adopted by the keeper of the PCI standard, the PCI-SIG, and get the SIG's imprimatur on what would now be called PCI Express. And that helped make it the obvious heir to PCI. Names matter. (Here's a more detailed accounting of PCI Express and its history.)

It's easy to forget just how painful it could be, in the years before USB (Universal Serial Bus), to connect external peripherals to a computer system. RS-232, a long-used and successful standard in its own right, was the most common way. It was also one that could easily devolve into close examination of cable pin-outs, interrupt channels, and memory addresses.

USB was a cooperative effort by a group of large technology vendors who founded a non-profit corporation to manage the specification. Version 1.0 was introduced in 1996. Now up to version 3.0, USB has become the standard way to connect external peripherals to PCs; it's also commonly used on servers for devices such as printers.

There's a spec for wireless USB but, like other standards intended to connect peripherals to computers wirelessly, it's never taken off. The current such "personal area network" getting the most buzz is My WiFi from Intel.

USB's primary competition has been FireWire, Apple's name for IEEE 1394. Unlike USB, it does not need a host computer and is faster than the USB 2.0 generation. However, it didn't catch on widely in the computer industry outside of Apple (which is phasing it out in favor of USB) and video equipment.

TCP/IP refers to the combination of two protocols: Transmission Control Protocol and Internet Protocol. Together, they are among the most important pieces of software underpinning the Internet, which transitioned to using TCP/IP in 1983. Work on TCP began under the auspices of the Defense Advanced Research Projects Agency (DARPA) a decade earlier; along the way, the software stack was re-architected to add IP as the early Internet grew.

Like many of the Internet's building blocks, TCP/IP was firmly entrenched before commercial interests got involved to any significant degree and, indeed, before most of the world at large had any real notion of the Internet's existence. The general public came to know the Internet through the World Wide Web, an outgrowth of Tim Berners-Lee's development of HTML at CERN, in the 1990s. Thus HTML, as well, is a key standard.

At the time that TCP/IP was gaining momentum, the International Organization for Standardization (ISO) spearheaded a large project to standardize networking. The "OSI model" remains the standard way to think about layers of the networking stack. If you talk about a switch being "Layer 4," you're using OSI terminology. But the specific protocols developed to go with the model were never widely used. (TCP/IP largely maps to the layers defined in the OSI model.)
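For the more hands-on reader, here's a rough sketch of what TCP/IP looks like from an application's point of view: a few lines of Python using the standard socket library to open a TCP connection to a web server (the host name below is just a placeholder). Everything beneath these calls--reliable, ordered delivery (TCP, roughly OSI Layer 4) and routing packets between machines (IP, Layer 3)--is the protocol stack doing its work.

    # A minimal sketch, not production code: open a TCP connection and make a bare HTTP request.
    import socket

    HOST = "example.com"   # placeholder host; any reachable web server will do
    PORT = 80              # HTTP runs on top of TCP

    # create_connection resolves the name and opens a TCP (SOCK_STREAM) socket over IP.
    with socket.create_connection((HOST, PORT), timeout=5) as conn:
        # TCP provides a reliable byte stream; IP gets the individual packets to the right machine.
        conn.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        print(conn.recv(1024).decode(errors="replace"))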

The x86 architecture is perhaps the canonical example of a de facto standard driven primarily by a single vendor: Intel. Microsoft Windows is also in the running, but arguably it was x86's ubiquity in a segment of the market open to relatively low-cost packaged software that made the rise of Windows possible. Over the past decade, AMD has also driven x86 innovations--most notably 64-bit extensions. However, it was Intel that had the biggest hand in shifting the industry from a structure in which each company did everything--fabricating processors, writing operating systems, developing databases--to one in which different companies tend to specialize in one part of the technology ecosystem.

x86 emerged as a dominant chip architecture for a variety of reasons. IBM designed Intel's 8088 into the first important business PC. It got this win and others at a time when the world was rapidly computerizing. And Intel optimized itself to ride key technology trends while divesting itself of businesses, such as memory, as they commoditized.

Finally, here are a few others that could make a list like this one:

Wi-Fi played a big role in making personal computers more mobile--which is why Intel pushed it so hard.

VGA is the computing video standard that finally helped unify a rather splintered landscape, and it had a good long reign. (The latest video interconnect trend is a shift to HDMI--representing a coming together of computing and consumer electronics standards.)

SCSI was the first storage interconnect to consolidate, in a big way, a disparate set of existing connection schemes, both proprietary and more or less standardized. However, storage has remained an area where different standards are used for different purposes. That's changing to a degree with SATA, though, which we now see in both PCs and data centers.