
IBM building 'green' data center at Syracuse

Big Blue and the New York school are working together to help research and develop techniques and tools for energy-efficient data center design and operation.

Gordon Haff
Gordon Haff is Red Hat's cloud evangelist although the opinions expressed here are strictly his own. He's focused on enterprise IT, especially cloud computing. However, Gordon writes about a wide range of topics whether they relate to the way too many hours he spends traveling or his longtime interest in photography.

Not long ago the infrastructure pieces needed to construct a data center were pretty straightforward: Computer Room Air Conditioning (CRAC) units, power conditioning equipment, uninterruptible power supplies (UPS), and the electrical and plumbing work to tie it all together. It wasn't unimportant. But it was largely a well-understood extension of the HVAC infrastructure of a typical commercial building.

That's changing in a big way for two major reasons.

The first is that servers may have gotten smaller, but IT shops are trying to cram ever more of them into a given space. The result is that more power has to be delivered to, and more heat carried away from, ever smaller volumes of space.

The second is that data center operators are starting to factor power efficiency into their buying decisions. Power Usage Effectiveness (PUE) has entered the lexicon as a metric for how much of the power delivered to a data center actually goes into running the computers themselves, as opposed to the cooling, power distribution, and other infrastructure needed to support them; it's calculated as total facility power divided by IT equipment power, so a figure closer to 1.0 is better.
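
As a rough illustration of what that metric captures, here's a minimal sketch with made-up numbers (not measurements from the Syracuse facility or any other real site):

    def pue(total_facility_kw, it_equipment_kw):
        """Power Usage Effectiveness: total power drawn by the facility
        divided by the power that actually reaches the IT equipment.
        1.0 is the ideal; anything above that is overhead such as
        cooling, UPS conversion losses, and lighting."""
        return total_facility_kw / it_equipment_kw

    # Hypothetical example: a site drawing 1,000 kW in total whose
    # servers, storage, and network gear account for 625 kW of that.
    print(pue(1000.0, 625.0))  # 1.6 -- the lower the ratio, the less power goes to overhead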

In short, figuring out innovative ways to build efficient data centers is suddenly sexy. I've been offered more data center tours in the past year than ever before, by companies such as Intel intent on showing off newly developed approaches to cooling and modularity.

Thus, it's not especially surprising that IBM is now announcing that, together with Syracuse University and the state of New York, they "have entered into a multi-year agreement to build a new computer data center on the university's campus that will incorporate advanced construction and smarter computing technologies to make it one of the most energy efficient data centers in the world. The data center is expected to use 50 percent less energy than a typical data center today, making it one of the 'greenest' computer centers in operation."

The $12.4 million, 6,000-square-foot data center will have an on-site electrical co-generation system that uses a natural gas-fueled microturbine engine to generate all the electricity for the center and provide cooling for the computer servers.

Syracuse will manage and analyze the performance of the data center, "as well as research and develop new data center energy efficiency analysis and modeling tools. IBM will provide more than $5 million in equipment, design services and support, which includes supplying the electrical co-generation equipment and servers such as IBM BladeCenter, IBM Power 575, and IBM z10 systems. The New York State Energy Research and Development Authority (NYSERDA) is contributing $2 million to the project."

This will be an operational data center, albeit a relatively modest-sized one compared to mega-service provider facilities. (New Microsoft and Google data centers are reportedly in the 100,000- to 500,000-square-foot range.)

This may not be a particularly surprising announcement given the level of activity in this area. But it's nonetheless notable that an aspect of computing that was, in many respects, a sleepy backwater of incremental advances and impenetrable jargon is suddenly the subject of a lot of new fundamental research.