
Enterprises build optimized data centers too

GE's new Platinum LEED-certified data center provides a great example of how highly optimized data center design isn't just something that Google and Facebook do.

Gordon Haff
Gordon Haff is Red Hat's cloud evangelist, although the opinions expressed here are strictly his own. He's focused on enterprise IT, especially cloud computing. However, Gordon writes about a wide range of topics, whether they relate to the way too many hours he spends traveling or his longtime interest in photography.

Web giants and mega-size cloud-computing providers garner most of the attention when it comes to highly tuned and optimized data center designs. In April, Facebook shared the specifications for the servers it builds as part of an effort it's calling the Open Compute Project. More recently, Facebook engineers have written about testing an extreme multi-core chip design from Tilera. Google has long been known for taking unique approaches to server and data center operations and design, although the company is generally secretive about the specifics.

This sort of hyper-optimization around scale was supposedly going to drive all computing rapidly toward a small number of very large providers. The economies of scale, so the reasoning went, would render anything smaller cost-prohibitive.

It hasn't played out that way--for a variety of reasons. And one of those reasons is that enterprises can play the data center optimization game too--and, in fact, may be better off with an approach that optimizes for their unique situation rather than for a mass audience. GE Appliances & Lighting's announcement yesterday that it's opening a new data center at its Louisville, Ky., Appliance Park headquarters offers a nice illustration of this trend.

GE's data center includes 128 cabinets of high-density servers. (Photo: GE)

The new facility reuses most of the walls, floor, and roof of existing factory space at Appliance Park. The location is historically interesting because it's where the first UNIVAC installed for business use went in 1954. (For the computer history buffs, the UNIVAC I had about 5,200 vacuum tubes, used mercury delay lines--basically big columns of mercury--for storage, and ran at 2.25 MHz.) The systems in the new facility are rather more advanced, and even cutting-edge compared with the rack-mount servers that are the norm in typical data centers.

Two 27,000-gallon thermal storage tanks form part of the cooling system. (Photo: GE)

The most common servers today are 1U (1.75 inches) or 2U (3.5 inches) high, contain two multi-core processors, and are packaged into 42U-high cabinets. By the time networking equipment and other gear is added, a cabinet typically draws about 4 to 7 kilowatts of power and dissipates an equivalent amount of heat. This latter point is important because that sort of power density was, for a long time, considered to be about the limit for conventional air cooling. And few companies wanted to deal with the complexity of more sophisticated cooling techniques.
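To make the arithmetic concrete, here's a minimal back-of-the-envelope sketch in Python. The figures are illustrative assumptions drawn from the ranges above, not measurements from any particular facility:

```python
# Back-of-the-envelope rack power and heat math (illustrative figures only).

RACK_UNITS = 42        # standard cabinet height in rack units (U)
U_HEIGHT_IN = 1.75     # one rack unit is 1.75 inches tall
BTU_HR_PER_KW = 3412   # 1 kW of sustained draw is roughly 3,412 BTU/hr of heat

def cabinet_heat_btu_per_hr(power_kw: float) -> float:
    """Essentially all power drawn by IT gear ends up as heat to be removed."""
    return power_kw * BTU_HR_PER_KW

# A conventionally loaded cabinet at the top of the 4-7 kW range:
conventional_kw = 7.0
print(f"Cabinet height: {RACK_UNITS * U_HEIGHT_IN:.1f} inches")
print(f"{conventional_kw:.0f} kW cabinet -> "
      f"{cabinet_heat_btu_per_hr(conventional_kw):,.0f} BTU/hr of heat")
```

Every kilowatt a cabinet draws is a kilowatt the cooling plant has to carry away, which is why power density, rather than floor space, has usually been the binding constraint.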

However, GE's data center houses servers designed to operate in the 18-to-24-kilowatt-per-cabinet range. High density obviously means the servers take up less space. Combined with high-efficiency cooling systems, it also means less energy is needed to cool the servers. This is one reason that the new GE facility is one of the 6 percent of LEED-certified buildings globally to achieve Platinum certification. (LEED is an internationally recognized green building certification system.)
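Assuming mid-range figures (purely illustrative; the announcement gives only the 18-to-24-kilowatt range and the 128-cabinet count), the footprint savings from that density are easy to estimate:

```python
# How many conventional cabinets would it take to host the same IT load
# as 128 high-density cabinets? Mid-range assumptions, not GE's numbers.

HIGH_DENSITY_KW = 21.0   # midpoint of the 18-24 kW per cabinet range
CONVENTIONAL_KW = 5.5    # midpoint of the typical 4-7 kW range
GE_CABINETS = 128        # cabinet count from the GE announcement

total_load_kw = GE_CABINETS * HIGH_DENSITY_KW
equivalent_cabinets = total_load_kw / CONVENTIONAL_KW

print(f"Total IT load: {total_load_kw / 1000:.2f} MW")
print(f"Equivalent conventional cabinets: {equivalent_cabinets:.0f}")
# Roughly 2.69 MW across 128 cabinets, versus about 489 cabinets at
# conventional densities: nearly four times the footprint for the same load.
```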

For a long time, optimization and technical innovation at the chip and server level were the norm while data centers were more about real estate and a fairly standardized set of infrastructure related to power and cooling. That's changing.