
The other P2P revolution that wasn't

Today, "peer to peer" is inextricably linked to file sharing. But another P2P was once the subject of great buzz.

Gordon Haff
Gordon Haff is Red Hat's cloud evangelist, although the opinions expressed here are strictly his own. He's focused on enterprise IT, especially cloud computing. However, Gordon writes about a wide range of topics, whether they relate to the way too many hours he spends traveling or to his longtime interest in photography.

Today, "peer to peer" is inextricably linked to a variety of techniques for P2P file-sharing, whereby the recipients of a large file supply chunks of data to other recipients.

This distributes the load compared with everyone downloading a file from a central server. For this and other reasons, P2P networks have proven popular for sharing MP3 music files, although they're suited to distributing any sizable digital content; for example, one also sees P2P employed to distribute Linux distributions, which can run into the gigabytes.
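To make that load-spreading idea concrete, here's a minimal Python sketch of chunked distribution. It isn't modeled on any particular protocol; the chunk and peer counts and the fetch logic are illustrative assumptions. The point is simply that once a peer holds a chunk, other peers can fetch it from that peer rather than from the original source.

```python
# Minimal sketch (not any real protocol) of swarming file distribution:
# only the first copy of each chunk needs to come from the central "seed";
# after that, peers serve the chunk to one another.
import random

NUM_CHUNKS = 4
NUM_PEERS = 5

seed = set(range(NUM_CHUNKS))              # the original source has every chunk
peers = [set() for _ in range(NUM_PEERS)]  # downloaders start with nothing
seed_requests = 0                          # load placed on the central source

while any(len(p) < NUM_CHUNKS for p in peers):
    for i, peer in enumerate(peers):
        missing = set(range(NUM_CHUNKS)) - peer
        if not missing:
            continue
        chunk = random.choice(sorted(missing))
        # Prefer another peer that already has the chunk; fall back to the seed.
        other_holders = [p for j, p in enumerate(peers) if j != i and chunk in p]
        peer.add(chunk)
        if not other_holders:
            seed_requests += 1             # only this transfer hits the seed

print(f"Chunks served by the central seed: {seed_requests}")
print(f"Chunks exchanged peer to peer: {NUM_PEERS * NUM_CHUNKS - seed_requests}")
```

Run it and most transfers end up happening between peers, which is the whole appeal: the source pays for each chunk roughly once rather than once per downloader.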

However, a few weeks ago I attended MIT Technology Review's EmTech07 Emerging Technologies Conference, where a session reminded me that another "P2P" was once the subject of great buzz.

At the Fall 2000 Intel Developer Forum, outgoing Intel CEO Craig Barrett called peer-to-peer computing a "new wave which is going to have material impact on our industry." And he wasn't talking about file sharing.

Pat Gelsinger, who was Intel's CTO at the time, was even more enthusiastic in his keynote:

My subject for today is peer-to-peer--what we think is possibly the next computing frontier. Our agenda, we'll suggest, and hopefully by the end you'll agree with us, (is) that this is the revolution that could change computing as we know it.

P2P computing, as the term was popularized, was based on a pair of simple observations: 1) There were lots of PCs sitting out there on desks doing nothing most of the time. (Laptops were far less ubiquitous in 2000 than today.) And 2) certain types of computing jobs could be broken down into a lot of small, distinct chunks. These generally fell into the realm of what's often called high-performance computing--tasks like looking at the different ways molecular structures interact or fold.

Given those two facts, why not bring together the idle hardware and the computational need?

That's exactly what P2P computing did. There were a few efforts to use the technology for enterprise applications. Intel itself used P2P to power some of its chip design simulations. However, what really captured the public imagination was using distributed PCs in consumers' homes or on business desktops for causes like AIDS research and other scientific projects. The typical approach was to load the P2P application as a screen saver; when the computer was idle, it would start cranking through the calculations, shipping the results off to a central site as they were completed.
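As a rough illustration of that model, here's a minimal Python sketch of such a client loop. The WORK_URL and RESULT_URL endpoints, the is_idle() check, and the toy compute() kernel are hypothetical stand-ins of my own, not the API of any real project; real clients hooked into the screen saver and did genuinely heavy number crunching.

```python
# Minimal sketch of the volunteer/P2P-computing client model described above:
# when the machine is idle, fetch a small work unit from a central server,
# crunch it locally, and ship the result back.
import json
import time
import urllib.request

WORK_URL = "https://example.org/get_work"   # hypothetical work dispatcher
RESULT_URL = "https://example.org/submit"   # hypothetical result collector

def is_idle() -> bool:
    # Real clients ran as a screen saver or watched CPU load;
    # here we simply pretend the machine is always idle.
    return True

def compute(work_unit: dict) -> dict:
    # Stand-in for the actual science kernel (signal analysis, protein
    # folding, etc.): just sum a list of numbers sent by the server.
    return {"id": work_unit["id"], "answer": sum(work_unit["numbers"])}

while True:
    if not is_idle():
        time.sleep(60)
        continue
    with urllib.request.urlopen(WORK_URL) as resp:   # fetch a work unit
        work_unit = json.load(resp)
    result = compute(work_unit)                      # crunch it locally
    req = urllib.request.Request(
        RESULT_URL,
        data=json.dumps(result).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)                      # report the result back
    time.sleep(1)
```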

SETI@home was perhaps the canonical example. But there were many others, such as United Devices, Entropia, and Blackstone Computing.

At a February 2001 O'Reilly Conference on P2P Computing, there were 900 attendees. At the same conference, Larry Cheng of Battery Ventures estimated that there were more than 150 companies in P2P. There was even talk of monetizing the distributed computation like some form of electrical grid.

P2P computing never wholly went away; SETI@home remains an active project. Univa UD (formed by the merger of Univa and United Devices) has had some success in pharma and finance (although it's less client-centric than United Devices' original vision).

But P2P, at least in the sense of harvesting excess client compute cycles, never amounted to something truly important, much less a revolution. There were security concerns and worries about the applications slowing PCs or hurting their reliability. One person was even prosecuted for running a P2P application on college computers. And, as much as anything, the whole thing just faded from being the cool flavor of the month.

Aspects of P2P computing live on. The basic concept that many computing jobs could be best handled by distributing them across large numbers of standardized building blocks was valid. In fact, it's the most common architecture for running all manner of large-scale applications today, from genomics to business intelligence. "Grid computing," a broad if variously defined set of technologies for harnessing and managing large compute clusters, shares common roots with P2P. In fact, The Grid by Foster and Kesselman was a bible of sorts for P2P computing.

But, as with so many other aspects of computation, the cycles are moving back to the data center. Perhaps we could summarize today's approach as being less about harvesting excess capacity on the periphery than about not putting it out there in the first place.