IBM nabs huge supercomputer deal

The computing giant signs what it hopes will be its biggest supercomputer deal ever: a contract for up to $224 million with the National Weather Service.

Stephen Shankland
IBM has signed what it hopes will be its biggest supercomputer deal ever: a contract for up to $224 million to improve the National Weather Service's forecasting technology.

IBM Global Services will operate the 2,572-processor machine, to be installed in an IBM facility in Gaithersburg, Md., IBM and the National Oceanic and Atmospheric Administration (NOAA) said Friday. By July 2003, the system will completely take over the task of producing the National Weather Service forecasts distributed to AccuWeather, the Weather Channel, and innumerable other media outlets.

The new weather system will improve forecasting, NOAA Chief Information Officer Carl Staton said in an interview. Today's system forecasts a week into the future. The new system will improve the detail and accuracy of those forecasts through better physical models and the incorporation of more data, and ultimately, Staton said, the system will be able to forecast two weeks in advance.

"As we add more data, more physics, and increase the model resolution--all those require more computing power, but we still have to do it in the same time frame," Staton said.

Supercomputers, mammoth systems that often take up large rooms, are used to tackle onerous computing operations such as cracking encrypted communications, designing nuclear weapons or developing new medicines.

NEC currently tops market analysis firm IDC's ranking of the world's fastest supercomputers, but IBM expects its sustained effort will eventually propel it into the lead.

"We have a lot of plans to beat that machine," said Peter Ungaro, vice president of high-performance computing at IBM, adding that IBM has the most supercomputers on the list.

The new weather system will have a sustained performance of 700 billion calculations per second, compared with 150 billion for the current machine, also an IBM model, Staton said.

NOAA's contract begins with a three-year base period but includes options that could extend it twice, through 2009, and add a backup computer as well. If all the options are exercised, IBM will receive $224 million. IBM declined to say how much it will receive for the three-year base contract.

If both extensions are granted, the final machine will have 48 times the computing power of the current system, NOAA said.
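
Taken at face value--these are back-of-the-envelope figures derived from the numbers NOAA and IBM cited, not an official specification--the initial system's sustained rate is roughly 4.7 times the current machine's, and a 48-fold increase over the current machine would put the final system at about 7.2 trillion sustained calculations per second:

    \frac{700}{150} \approx 4.7, \qquad 48 \times 150\ \text{billion} \approx 7.2\ \text{trillion calculations per second}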

The supercomputer is made of dozens of p690 "Regatta" servers connected by a high-speed IBM communication switch, with data stored on IBM storage systems totaling 42 terabytes of capacity. NOAA will first use 50 Regattas with 1.3GHz Power4 processors, and in 2004 will add 36 more Regattas with 1.8GHz processors, Staton said.

The first option to extend the system would use IBM's Power5 processors, Ungaro said. Beyond that, the systems will use whatever is available to meet the performance requirements.

But rarefied supercomputer designs such as the one set up for NOAA are seeing competition from machines that cost vastly less.

These systems, called Beowulf machines, typically use groups--or "clusters"--of Intel-based computers running the Linux operating system.
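
A minimal sketch of how such a cluster splits up work--illustrative only, not the software NOAA or Dell's customers actually run--might use the MPI message-passing library that is common on Beowulf systems, with each Linux node computing its own slice of a problem and one node collecting the results:

    /* Minimal MPI sketch of how a Beowulf cluster divides work across nodes.
       The workload (summing integers) is a hypothetical stand-in; real codes
       are far more elaborate. Compile with mpicc, launch with mpirun. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this node's ID */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total nodes in the job */

        /* Each node sums its own slice of the range 0..999999. */
        long local = 0, total = 0;
        for (long i = rank; i < 1000000; i += size)
            local += i;

        /* Combine the partial sums on node 0. */
        MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("sum computed across %d nodes: %ld\n", size, total);

        MPI_Finalize();
        return 0;
    }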

IBM sells these cheaper systems, but it's not alone. Dell Computer, for example, has entered the fray and plans to announce a host of companies that have purchased its products.

Although Dell isn't known for in-depth research and development expertise, its entry into a market typically signals that the company believes the market is mature enough to be dominated with mainstream computing technology. Dell has faltered with high-end storage products, but the strategy has proved effective with Intel-based servers.

Dell's supercomputer customers include Compagnie Generale de Geophysique, a major French petrochemical company; Swinburne University in Victoria, Australia; Johns Hopkins University; the Cornell Theory Center; the Georgia Institute of Technology; Penn State University; Sandia National Laboratories; the University of Alabama at Birmingham; and the University of Missouri-St. Louis.

Compagnie Generale de Geophysique has expanded its system to include 512 Dell servers running Red Hat's version of Linux. That's in addition to another CGG system in Texas that is three times as large.

And Johns Hopkins University's Department of Earth and Planetary Sciences is using Dell systems to run simulations of the Earth's oceans, a choice based in part on lower cost, according to professor Tom Haine.

"Building our cluster on open standards server technology and the RedHat Linux operating system was an easy decision when you compare it to a traditional supercomputer purchase," Haine said in a statement. "It allowed us to maximize cluster performance and robustness within our technology budget."