By shuttling data between Pittsburgh and Los Angeles at a sustained rate of 101 gigabits per second--the equivalent of three DVD movies every second--the so-called High Energy Physics team shattered the record for data transfer and won the Supercomputing Bandwidth Challenge, a contest geared toward improving network connection speeds.
The team sustained the 101-gigabit speed for only a few minutes during the 90-minute demonstration, peaking at just above the 101-gigabit mark. But "we are confident if we did it again...we would sustain the 100+gbps throughput for hours," Harvey Newman, professor of physics at the California Institute of Technology and the head of the team, wrote in an e-mail. Transfer rates of 130gbps to 140gbps are likely possible, he added.
The research also points the way toward future applications for much faster transfers of audio, video and other data. "There are also profound implications for how we could integrate information sharing and on-demand audiovisual collaboration in our daily lives, with a scale and quality previously unimaginable," Newman said.
The team is made up of computer scientists, physicists and network engineers from Caltech, Fermilab, CERN, the University of Manchester, and universities from Korea and Brazil, among other places.
The old data transfer record, set by the same group a year ago, was 23.2gbps, less than a quarter of the current record. The amount of data transferred under the new record is also greater than the sum of all the other marks set in the contest over the previous two years.
Put another way, the data transfer speed is equivalent to transmitting the entire Library of Congress in about 15 minutes. The record for data transfer on Internet2 stands at 6.63gbps.
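The article's equivalences are easy to sanity-check with back-of-the-envelope arithmetic. The sketch below assumes a 4.7GB single-layer DVD and a commonly cited figure of roughly 10 terabytes for the Library of Congress's digitized print collection; neither number comes from the article itself.

```python
# Back-of-the-envelope check of the article's bandwidth equivalences.
# Assumptions (not from the article): a single-layer DVD holds 4.7 GB,
# and the Library of Congress print collection is roughly 10 TB.

GBPS = 101                       # demonstrated rate, gigabits per second
bytes_per_sec = GBPS * 1e9 / 8   # ~12.6 GB moved every second

dvd_bytes = 4.7e9
dvds_per_sec = bytes_per_sec / dvd_bytes      # ~2.7 DVDs each second

loc_bytes = 10e12
loc_minutes = loc_bytes / bytes_per_sec / 60  # ~13 minutes

print(f"{dvds_per_sec:.1f} DVDs/s, Library of Congress in {loc_minutes:.0f} min")
```

Both figures come out close to the article's "three DVD movies per second" and "15 minutes," which is about as good as such round-number comparisons get.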
The data transfer rates were achieved in part through the FAST TCP protocol developed by Caltech professor Steven Low, which handles congestion better than standard TCP. Standard TCP gauges congestion by the rate at which packets of data get dropped; FAST TCP instead observes the delay that packets experience as they travel through the network.
This technique "provides a more accurate and more timely measure of congestion than packet drops--i.e., the traffic source can react to congestion before it builds to such a severe level that buffers at routers overflow and packets are dropped," professor Low wrote in an e-mail interview.
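The delay-based idea can be sketched in a few lines. The window update rule below follows the form published for FAST TCP (smoothly move the window toward a target that keeps about `alpha` packets queued in the network), but the single-link queue model, parameter values, and variable names are illustrative simplifications, not Caltech's implementation.

```python
# Toy sketch of delay-based congestion control in the spirit of FAST TCP.
# Update rule: w <- (1-g)*w + g*(baseRTT/RTT * w + alpha), capped at 2w.

def fast_window_update(w, base_rtt, rtt, alpha=100.0, gamma=0.5):
    """One update step: steer the window toward ~alpha packets in queue."""
    target = (base_rtt / rtt) * w + alpha
    return min(2 * w, (1 - gamma) * w + gamma * target)

# Simulate one flow on one link: RTT inflates as the router queue fills.
base_rtt = 0.010       # 10 ms propagation delay, no queueing
capacity = 1000.0      # packets the link drains per base RTT
w = 10.0
for _ in range(50):
    queue = max(0.0, w - capacity)           # packets waiting at the router
    rtt = base_rtt * (1 + queue / capacity)  # queueing delay inflates RTT
    w = fast_window_update(w, base_rtt, rtt)

# The window settles near capacity + alpha packets: the source backs off
# as soon as delay rises, before any buffer overflows and drops packets.
print(round(w))
```

At equilibrium the update rule gives a queue of exactly `alpha` packets, so the flow keeps the link full without ever forcing a drop, which is the contrast with loss-based TCP that Low describes above.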
Greasing the skids
A beefed-up hardware infrastructure--including several 10-gigabit links, four dedicated wavelengths of an all-optical network that links U.S. universities, Web services software and a vast array of other technology--also helped boost overall speeds.
The goal of the experiment is to create technology that will enable physicists across the globe to cooperate on massive, data-intensive projects that will involve computers located around the world.
CERN, for instance, will begin to conduct experiments in 2007 to search for the Higgs particle, believed to be responsible for giving matter its mass, among other phenomena. The search will ultimately involve more than 2,000 scientists from 160 institutions exchanging terabyte-size data samples in an effort to look for unusual particle interactions.
Because many users worldwide will request data, the transfer of massive files will have to occur within a few hours, not days. The project will generate several petabytes of data fairly rapidly, researchers on the project said.
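The "hours, not days" requirement follows directly from the arithmetic of moving terabyte-scale files. The illustrative figures below are not measurements from the project; they simply divide file size by link rate.

```python
# Rough transfer times for a 1-terabyte physics data set at several
# link rates; figures are illustrative, not measured on the testbed.

def transfer_hours(size_bytes, rate_gbps):
    """Hours to move size_bytes over a link of rate_gbps gigabits/sec."""
    return size_bytes * 8 / (rate_gbps * 1e9) / 3600

terabyte = 1e12
for rate in (0.1, 1, 10, 101):
    print(f"{rate:>6} gbps: {transfer_hours(terabyte, rate):7.2f} hours")
```

At 100 megabits per second a terabyte takes the better part of a day; at the demonstrated 101gbps it takes under two minutes, which is what makes worldwide on-demand access to the data sets plausible.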
Besides particle physics, the networking technology will benefit researchers in bioinformatics, astronomy, global-climate modeling and geosciences.
Reducing network latency in global computing is also the focus of an initiative led by Princeton University, Intel and the University of California at Berkeley. Others are looking at ways to extend Internet links even farther afield.
To demonstrate its technology, the group also transferred simulated physics data to CERN, the University of Florida, Fermilab, Caltech, U.C. San Diego and Brazil for processing. The results were then aggregated in Pittsburgh and transformed into a visual display of the data. In another demonstration, the organization transferred large data sets between Pittsburgh and Manchester.
Private companies that contributed to the project include Cisco Systems, Hewlett-Packard and Newisys, which makes servers based on Advanced Micro Devices' Opteron processor.