
Dell goes nuts for clusters

The PC company, which has been criticized by rivals for performing little of its own independent research, is gearing up to make a greater mark in the scientific world through clustering.

Michael Kanellos, Staff Writer, CNET News.com
Dell, the PC company that's been criticized by competitors for performing little of its own independent research, is gearing up to make a greater mark in the scientific world--through clustering.

The Austin, Texas-based company and the University at Buffalo, The State University of New York (SUNY Buffalo), on Tuesday will unveil a cluster of 2,008 Dell PowerEdge servers running Red Hat Linux. Researchers will use the cluster to study the structure and orientation of human proteins, a crucial step in finding cures for many diseases. The Buffalo cluster, one of the largest of its kind in the world, is the latest in a string of high-tech projects for upstate New York.

Additionally, Dell announced it was setting up the Dell Centers for Research Excellence, a program under which the company will recognize clustering breakthroughs and collaborate on research with selected universities.

"The purpose is to promote the benefit of clustered computers in academia," said Reza Rooholamini, director of cluster and operating system engineering for the Dell Enterprise Systems Group. Clustering "has been sort of incubated since the early '90s. Now it is at the stage where it is moving to the mainstream."

Clustering--the art of tying together standard computers, switches and storage systems into functional supercomputers--has rapidly become more popular. Unlike traditional supercomputers, the development of which can eat up millions of dollars and several years of research, clusters are largely built using off-the-shelf items and can be assembled, or upgraded, fairly rapidly.
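In rough outline, the model is simple: a pool of interchangeable workers pulls from a queue of self-contained tasks. The toy Python sketch below is illustrative only, not anything Dell or SUNY Buffalo actually runs; on a real cluster the workers would be separate servers reached over a network (via MPI or a batch scheduler, say) rather than local processes.

```python
# Toy illustration of the clustering idea: a pool of independent
# workers chews through a queue of self-contained tasks. On a real
# cluster each "worker" would be a separate server on a network,
# not a local process as it is here.
from multiprocessing import Pool

def analyze(task_id):
    # Stand-in for one unit of scientific work, e.g. scoring one
    # candidate protein fold. Each task is independent of the others.
    return task_id, sum(i * i for i in range(100_000))

if __name__ == "__main__":
    with Pool(processes=8) as pool:   # 8 local workers stand in for cluster nodes
        for task_id, result in pool.imap_unordered(analyze, range(64)):
            print(f"task {task_id}: {result}")
```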

The Buffalo cluster cost only about $4 million, and the work of assembling it began just this past June, said Dr. Jeffrey Skolnick, who will use the system to conduct protein research in the university's Center for Excellence in Bioinformatics.

"It is the poor man's supercomputer," Skolnick said. "We're basically stringing together a large number of quasi-independent computers."

Adjusting knobs
Clustering also seems tailored to Dell's corporate personality. Competitors such as IBM, Hewlett-Packard and Sun Microsystems spend millions annually on research and development, honing their own microprocessors and operating systems for high-performance computing. Dell doesn't, instead relying almost exclusively on Intel, Red Hat and other companies for basic research.

In clustering, though, that's not much of a disadvantage. The Buffalo cluster can theoretically churn out 5.6 trillion calculations per second (5.6 teraflops), making it one of the fastest clusters to date and likely placing it comfortably in the "Top 500" supercomputer list, Rooholamini said.
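The peak figure itself is simple arithmetic: processor count times clock speed times floating-point operations per cycle. The sketch below uses assumed values for the last two (the article gives neither), chosen only to show how a 5.6-teraflop theoretical peak can fall out of roughly 4,000 processors.

```python
# Back-of-the-envelope theoretical peak for a cluster. The clock
# speed and flops-per-cycle below are illustrative assumptions,
# not Buffalo's documented configuration.
processors = (1_900 + 100) * 2   # two-processor servers -> ~4,000 CPUs
clock_hz = 1.4e9                 # assumed 1.4 GHz clock
flops_per_cycle = 1              # assumed one floating-point op per cycle

peak_flops = processors * clock_hz * flops_per_cycle
print(f"{peak_flops / 1e12:.1f} teraflops")   # -> 5.6 teraflops
```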

Instead, the competitive differences come down to fine-tuning and assembly. Researchers need to figure out how many processors and how much memory should go into each server, and what sort of interconnects will provide optimal data flow. Dell not only participates in that tuning process, it also applies the lessons learned to other projects.

"There are quite a few knobs that need to be adjusted," said Rooholamini. "Clusters also have a software component."

"This is absolutely emerging as a core competency for them," said Jean Bozman, vice president of research at market analyst firm IDC. "What Dell has done is made it easy to acquire clusters. They are leveraging their business model to deliver scientific clusters more efficiently."

Dell first began to promote Linux clustering about a year and a half ago, Bozman said. While there are substantial differences between the enormous university clusters and the smaller ones used in businesses, the company will increasingly apply lessons from one field to the other. And as commercial databases spread to clusters, the size and heft of the commercial systems will grow.

SUNY Buffalo is the first university to receive Dell's research excellence award. The company has also funded ongoing research projects at the University of Texas, the Georgia Institute of Technology and the Pennsylvania State University, Rooholamini said.

Choosing between a supercomputer and a cluster depends on the research project. Clusters work best when the work divides into a large number of independent tasks. Fully analyzing the structure of a single protein from the human genetic code, for instance, could take 1,000 years on a single computer, Skolnick said. Using a cluster reduces the time to less than a year.

By contrast, supercomputers work better on projects such as weather prediction and nuclear simulation, where the results of one calculation will affect the results of others.
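The difference is easy to see with the article's own round numbers. A back-of-the-envelope sketch, idealized by assuming perfectly independent tasks and negligible scheduling overhead:

```python
# Ideal speedup for perfectly independent tasks (the protein case),
# using the article's round numbers. Real clusters lose some time to
# scheduling and I/O, which is consistent with "less than a year"
# rather than an exact half-year.
single_machine_years = 1_000
nodes = 2_008
print(single_machine_years / nodes)   # ~0.5 years if tasks never wait on each other

# Dependent work (weather prediction, nuclear simulation) does not
# divide this way: each step needs the previous step's output, so
# adding nodes cannot shorten the chain of calculations.
state = 0.0
for step in range(10):                # each iteration must wait for the last
    state = 0.5 * state + step        # toy dependency: result feeds the next step
print(state)
```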

Although not as glamorous as supercomputers, clusters still carry heft. The Buffalo cluster consists of 1,900 two-processor Pentium III servers; 100 two-processor Xeon servers; four servers for managing data traffic between the 2,000 computing nodes and a 14-terabyte storage area network from Dell and EMC; and four servers for monitoring overall workloads. Switches from Cisco and Extreme Networks, meanwhile, link the system together.

In all, it fills 41 computer racks and consumes about as much energy as it would take to heat 70 to 80 homes.

"We're talking 80,000 pounds of computer here," said Skolnick.