Peer-to-peer (P2P) technology allows any computer on a network to share access to all or part of its resources with any other PC on the same network--without the use of intermediary servers.
Although P2P owes its current popularity to Napster, the technology is about much more than grabbing a free copy of the latest pop song. The possibilities for P2P are varied and seemingly limitless. P2P applications can distribute antivirus software or allow co-workers to collaborate on projects online. Additionally, peer-to-peer networks can harness the excess processing power of PCs to, for example, model the interaction of drugs or run other scientific experiments.
There are a number of advantages to this type of network architecture, thanks mostly to its efficient use of computer and network resources. However, the peer-to-peer model faces significant obstacles. The threat of copyright infringement is most evident in the current case against Napster. Other problems include the search for a viable business model to harness the power of P2P, as well as the bandwidth limitations of P2P networks.
Given peer-to-peer's potential to transform industries such as music, film and publishing, companies in these markets will have no choice but to embrace the technology.
Other industries, such as life sciences or financial services, may selectively incorporate peer-to-peer technology when the cost savings and productivity enhancements are significant and clear.
In the midst of the hype that has surrounded peer-to-peer, some real businesses are emerging. Companies such as DataSynapse, Entropia, Parabon Computation and United Devices have hit the scene with compelling propositions for a stable base of clients with the ability to pay.
Although most of these companies have not announced clients publicly, they are gathering business and the overall outlook appears bright.
The benefits of distributed computing
Of the three applications that have been born from peer-to-peer ideas--distributed computing, search and file sharing, and collaboration--the first holds the greatest potential in the short term.
In the distributed computing model, a large computing project is broken into smaller tasks, which are then "distributed" to individual computers on a unified network. As the tasks are completed, they are transmitted back to a central server for analysis.
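The model described above can be sketched in a few lines of code. This is a minimal illustration, not any particular vendor's system: a coordinator splits one large job (here, counting primes below a limit) into small tasks, "distributes" them to worker processes standing in for the networked PCs, and aggregates the partial results centrally. The function names and task layout are invented for the example; real systems such as SETI@home add scheduling, retries, and result verification on top of this basic pattern.

```python
from multiprocessing import Pool

def count_primes(task):
    """One small task: count primes in the half-open range [lo, hi)."""
    lo, hi = task
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

def run_distributed(limit, n_tasks=8, n_workers=4):
    """Break [0, limit) into n_tasks chunks, farm them out, sum the results."""
    step = limit // n_tasks
    tasks = [(i * step, (i + 1) * step) for i in range(n_tasks)]
    tasks[-1] = (tasks[-1][0], limit)  # last chunk absorbs any remainder
    with Pool(n_workers) as pool:
        # "Distribute" the tasks; each worker plays the role of a peer PC.
        partial_results = pool.map(count_primes, tasks)
    # Central server step: aggregate the completed tasks for analysis.
    return sum(partial_results)

if __name__ == "__main__":
    print(run_distributed(100_000))
```

Because each chunk is independent, adding workers (or, in a real deployment, more volunteer machines) shortens the job roughly in proportion, which is the efficiency argument the rest of this section makes.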
Distributed computing can produce measurable results (specifically, cost savings), and it can boost incremental revenue by wringing better productivity from existing assets. In other words, by using distributed computing methods, companies are able to get answers to their problems faster--and cheaper.
Over time, aggregating enough computational power will do more than let companies solve existing problems faster and at a lower cost; it will make tractable tasks that have previously been too computationally intensive to address.
Distributed-computing companies tap power from computers within an enterprise (specifically, its own employees' computers) or from computers connected to the Internet.
The SETI@home project, an Internet-based distributed-computing application, includes more than 2 million consumers who have volunteered their computing resources to scour radio-telescope data for signs of alien life. The popularity of SETI@home notwithstanding, the enterprise solution holds even greater potential over the next three years.
Larger companies will be the first to embrace distributed computing; they will be far more comfortable with an internal solution that minimizes security concerns and lets them fully exploit their existing investments.