
'Ultradense' server era dawning

Intel, IBM and Compaq Computer are furiously working on "ultradense" servers that cram vastly more computing power into each rack of servers.

Stephen Shankland
The era when a server the height of a pizza box seemed thin is about to come to an end.

Intel, IBM and Compaq Computer are furiously working on "ultradense" servers, ones that will allow companies to cram vastly more computing horsepower into each rack of servers without taxing power supplies and air conditioning systems.

IBM, Hewlett-Packard, Compaq, Dell Computer, Sun Microsystems and others already sell servers just 1.75 inches thick, similar to the thickness of a standard pizza box. The measurement is known as 1U. Right now, a rack can hold 42 1U servers, each in its own enclosure, sitting horizontally. In the future, a rack of the same height will be able to hold hundreds of ultradense servers--basically exposed motherboards stacked vertically in groups inside enclosures that are each several Us high.

With this configuration, IBM will be able to pack the equivalent of two to eight servers into each U, said Tom Bradicich, a director in IBM's Intel server group.
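The arithmetic behind those figures is simple. Here is a rough sketch in Python using only the numbers cited above--a 42U rack and IBM's two-to-eight-servers-per-U range; nothing in it comes from IBM's or Intel's actual designs:

# Rough rack-density arithmetic based on the figures cited in this article.
# Assumptions: a standard 42U rack; today's "pizza box" servers occupy
# one U apiece; ultradense designs pack 2 to 8 servers into each U.

RACK_HEIGHT_U = 42

servers_per_rack_today = RACK_HEIGHT_U * 1   # 42 servers, one per U
ultradense_low = RACK_HEIGHT_U * 2           # 84 servers at 2 per U
ultradense_high = RACK_HEIGHT_U * 8          # 336 servers at 8 per U

print(f"1U servers per rack today: {servers_per_rack_today}")
print(f"Ultradense rack, low end:  {ultradense_low}")
print(f"Ultradense rack, high end: {ultradense_high}")

At the high end, the same 42U rack would hold 336 servers--hence the "hundreds" of machines per rack.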

Initially, these superskinny servers will be full-featured designs like today's models. But later, the devices will be blown apart into separate components, IBM and Intel say. The philosophy will resemble stereo components, in which different boxes handle different tasks--only instead of different modules for CD players and tuners, there will be different boxes for CPUs, storage and network communications.

Intel also is working on ultradense server designs, which Mike Fister, general manager of Intel's enterprise platforms group, will describe Thursday in a keynote speech at the Intel Developer Forum in San Jose, Calif.

"It's a definite phenomenon in the data centers," Fister said in an interview, referring to the superclean and climate-controlled rooms where dozens or even thousands of servers are bolted to racks. "They want to put more stuff in the same square footage."

For years, server designers focused on boosting the power of single servers by increasing the number of CPUs, squeezing every last iota of performance out of software and pushing to hold downtime to less than five minutes a year. But with the arrival of the Internet, companies have decided to fill racks with dozens of less powerful machines to accommodate immense amounts of Web traffic.

Mary McDowell, head of Compaq's Intel server division, calls the new designs "hyperdense" and expects them to arrive in 2002. IBM's Bradicich expects the first designs to start arriving in September, though not that soon from his company.

In his keynote, Fister is expected to discuss "bladed" designs--servers with several thin electronics boards--based on extensions to designs that Intel obtained through its acquisition of Ziatech, a representative said.

Hot stuff, those servers
Superthin designs are difficult chiefly because of one problem: heat. Air must flow over CPUs to keep them cool, but faster CPUs also run hotter, and thinner designs leave less room for cooling fins that radiate heat away.

"It's extremely difficult to fit the heat sink, much less the processor itself, in there," Bradicich said. Though the servers use comparatively basic, low-end parts, "how to cool them is beginning to be a high-end problem," he said.

Indeed, Network Engines, a pioneer of skinny servers, hired an aerospace engineer from Raytheon to design "heat pipes" that use evaporating alcohol to cool the dual CPUs of its new Sierra server introduced Wednesday.

To achieve even higher density, designers will place several servers into a single enclosure. "We don't see a lot of paper-thin servers. But we do see a lot of density per U. The ones who will win in this game will be the ones who will be able to pack creatively," Bradicich said.

One start-up that is focused on the ultradense server market is RLX Technologies, which uses Transmeta's Crusoe processor.

IBM isn't evaluating Transmeta's chip, Bradicich said. Though Transmeta chips run cooler than Intel's, a server designer also has to weigh performance.

"It's a trade-off. It's a question of whether you use 100 turtles or 10 rabbits to get the job done," he said.

Initial designs will focus on computing nodes with single-processor servers, Bradicich said, but later models will include dual-processor machines. In addition, he said, servers within a single enclosure likely will be joined together in bunches that offer collective features not available to standalone systems.

Some of the groundwork for the new designs has been laid by the telecommunications industry, Bradicich noted. In particular, the CompactPCI design that telecom companies favor has potential. "We are making some headway enhancing the performance as well as the reliability," he said.

Grouping several servers into one enclosure is an important first step toward increasing server density, but more drastic measures will be required later, Bradicich said. That is when the stereo-component philosophy will take hold.

New communications technologies such as IBM's Remote Input/Output, InfiniBand and even Ethernet can be used to separate CPUs from storage and networking components, Bradicich said.