Group says popular technique threatens U.S. security by sidelining other approaches more suited to decryption and the like.
Large clusters of conventional servers, machines that most often use mainstream Intel processors and the Linux operating system, are sweeping the industry and now account for 296 of the 500 fastest supercomputers, according to the latest Top500 list, released Monday.
But the United States needs to underwrite research into new hardware and software to solve problems that clusters can't handle, such as decrypting codes, said a group of researchers planning to unveil a study at a supercomputing conference in Pittsburgh on Friday.
The two-year, 222-page study, "Getting Up to Speed: The Future of Supercomputing," will be presented at the SC2004 supercomputing conference. The National Research Council, part of the National Academy of Sciences, performed the work with funding from the Energy Department.
The report calls for the government, including Congress, to take a more active role in the development of supercomputing. The National Science Foundation should spend $140 million per year on a variety of small and large programs, while overall government spending should ensure top agencies can meet their total supercomputing need of about $800 million per year, the report says.
Government subsidies have to be handled carefully, though, especially when cultivating work the mainstream market isn't interested in, warned Dave Turek, vice president of deep computing at IBM.
"Engendering investments in areas not synchronized with what the market wants...runs the risk of making the industry less competitive over time by stealing resources" that could have been put to more fruitful use elsewhere, Turek said. "The delicate issue is how far do you go before you go down the path of propping up uncompetitive companies. I think that the marketplace is a great place to shake out competing ideas to see what makes sense."
Big Blue is involved in one government-funded supercomputing project run by the Defense Advanced Research Projects Agency, which is funding IBM, Cray and Sun Microsystems to work on advanced supercomputer designs. Study co-chair Susan Graham of the University of California, Berkeley, praised the effort but said it's only a one-time program and doesn't support the research needed for a successor.
The report praises clusters but says they're not sufficient for all tasks.
"The advances in mainstream computing caused by improved processor performance have enabled some former supercomputing needs to be addressed by clusters of commodity processors," the report says. "Yet important applications, some vital to our nation's security, require technology that is only available in the most advanced custom-built systems."
Study co-chair Marc Snir, head of the computer science department at the University of Illinois at Urbana-Champaign, pointed to decryption as one onerous task. "Clusters are good for problems that can be decomposed so you can work on chunks of the program reasonably independently without too much communication between the nodes. If the encryption can be decomposed, then the encryption isn't good, because it didn't scramble things well," he said.
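To make Snir's distinction concrete, here is a minimal sketch, not drawn from the report, of the kind of decomposable workload clusters handle well: a brute-force search over a toy key space in which each worker checks its own slice independently, with no communication between workers. The single-byte XOR "cipher", the known plaintext and the check_chunk helper are all hypothetical stand-ins.

```python
# Minimal sketch (hypothetical): a decomposable brute-force key search.
# Each worker tests a disjoint slice of a 1-byte key space with no
# communication between workers, the pattern clusters handle well.
from multiprocessing import Pool

KNOWN_PLAINTEXT = b"attack at dawn"
SECRET_KEY = 0x2A  # toy 1-byte key; real keys are vastly larger
CIPHERTEXT = bytes(b ^ SECRET_KEY for b in KNOWN_PLAINTEXT)

def check_chunk(key_range):
    """Try every key in one independent slice of the key space."""
    for key in key_range:
        if bytes(b ^ key for b in CIPHERTEXT) == KNOWN_PLAINTEXT:
            return key
    return None

if __name__ == "__main__":
    # Split the 256-key space into 4 independent chunks, one per worker
    # (each worker stands in for a cluster node).
    chunks = [range(start, start + 64) for start in range(0, 256, 64)]
    with Pool(processes=4) as pool:
        results = pool.map(check_chunk, chunks)
    recovered = next(k for k in results if k is not None)
    print(f"recovered key: {recovered:#04x}")
```

Real ciphers are designed so that no such clean split exists, which is Snir's point: the decryption workloads that matter demand far more communication between nodes than commodity clusters handle efficiently.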
Here, though, mainstream business technology could be relevant. IBM is adapting its conventional processors so they can be yoked together into a virtual vector processor.
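For readers unfamiliar with the term, the following is a rough, conceptual sketch of what vector-style processing means, not a depiction of IBM's design: a single operation is applied across whole arrays at once rather than one element at a time.

```python
# Conceptual sketch of vector-style computation (illustrative only,
# not IBM's design): one array-wide expression replaces an explicit
# element-by-element loop.
import numpy as np

a = np.arange(100_000, dtype=np.float64)
b = np.arange(100_000, dtype=np.float64)

# Scalar style: one multiply-add per iteration.
scalar = [2.0 * a[i] + b[i] for i in range(len(a))]

# Vector style: the same multiply-add expressed over the full arrays,
# which vector hardware can execute as wide, pipelined operations.
vector = 2.0 * a + b

assert np.allclose(scalar, vector)
```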
Snir believes clusters may even have set supercomputing back in some areas. "Because clusters coming from Dell or HP or IBM are so good, the market for custom machines has shrunk," Snir said.
But clusters are growing more advanced, said Don Becker, chief technology officer of Penguin Computing and a pioneer of the "Beowulf" idea of Linux clusters. "I think only a tiny number of problems won't be handled by clusters five years from now," he predicted.