The technology uses a network of sensors: 900 of them are installed in a 2,000-square-foot HP data center in Palo Alto, Calif. The sensors feed data to a control system that governs the airflow and temperature settings of the air-conditioning units.
So instead of running air-conditioning units at full speed and at the standard 70 degrees Fahrenheit, the units might be set so blowers run at slower speeds and produce 77-degree air when possible, said Chandrakant Patel, a distinguished technologist at HP Labs, in a meeting here with reporters Wednesday. He called the technology the dynamic smart-cooling controller.
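The idea Patel describes can be sketched as a simple feedback rule: read the sensor grid, then pick the gentlest air-conditioning setting that keeps the hottest reading safe. The sketch below is a hypothetical illustration only; the thresholds, sensor interface, and `CracSetting` structure are assumptions, not HP's actual design.

```python
# Hypothetical sketch of a sensor-driven cooling controller, loosely
# modeled on the "dynamic smart cooling" idea described above. All
# thresholds and interfaces here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class CracSetting:
    """Air-conditioning (CRAC) unit setting: blower speed and supply-air temp."""
    blower_speed_pct: int       # 0-100
    supply_air_temp_f: float    # degrees Fahrenheit

def plan_cooling(sensor_temps_f, max_safe_f=85.0, margin_f=5.0):
    """Pick the gentlest CRAC setting that keeps the hottest sensor safe.

    Instead of always blowing 70 F air at full speed, back off toward
    slower blowers and warmer (e.g. 77 F) supply air when every sensor
    reads comfortably below the safety limit.
    """
    hottest = max(sensor_temps_f)
    if hottest >= max_safe_f:
        # Hot spot detected: fall back to brute-force cooling.
        return CracSetting(blower_speed_pct=100, supply_air_temp_f=70.0)
    if hottest >= max_safe_f - margin_f:
        # Getting warm: moderate response.
        return CracSetting(blower_speed_pct=80, supply_air_temp_f=72.0)
    # Plenty of headroom: save energy with slower, warmer air.
    return CracSetting(blower_speed_pct=50, supply_air_temp_f=77.0)

print(plan_cooling([68.0, 71.5, 73.0]))  # cool room -> energy-saving setting
print(plan_cooling([68.0, 88.0, 73.0]))  # hot spot -> full cooling
```

The energy savings come from the default case: most of the time the room is cool enough that the units can run at partial speed with warmer supply air, while the hot-spot branch preserves the conservative behavior operators expect.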
Patel argues that there's plenty of room for improvement in the brute-force methods that prevail today in data-center cooling. "The state of the art is lacking," he said.
The problem is growing as servers are packed more densely and consume more power, putting hardware makers' products in the hot seat even as vendors trumpet their products' relative energy efficiency.
Customer trends spotlight the growing power problem, said IDC analyst John Humphries, whose firm's customer surveys show:
- Servers that consumed an average of 100 watts of power 10 years ago now consume an average of 400 watts.
- Of the money spent to operate data centers, 15 percent to 20 percent goes toward power and cooling.
- Each rack of computer gear 10 years ago held an average of seven servers but now holds an average of 20 to 22.
- Electricity distribution systems 10 years ago were designed to deliver 5 to 8 kilowatts of power, but new data centers are designed for 20 kilowatts and up.
- Ten years ago, there were about 6 million servers worldwide. Now there are 24 million, and IDC projects that number to grow to 35 million in 2010.
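Taken together, the first and last of those figures imply a steep rise in aggregate server power draw. A quick back-of-the-envelope check, using only the server counts and per-server wattages from the survey:

```python
# Back-of-the-envelope arithmetic on the IDC figures above: total
# server power draw ten years ago vs. now. Only the counts and
# per-server wattages come from the survey; the rest is arithmetic.

servers_then, watts_then = 6_000_000, 100
servers_now, watts_now = 24_000_000, 400

total_then_mw = servers_then * watts_then / 1e6  # megawatts
total_now_mw = servers_now * watts_now / 1e6

print(f"Then: {total_then_mw:,.0f} MW")                 # 600 MW
print(f"Now:  {total_now_mw:,.0f} MW")                  # 9,600 MW
print(f"Growth: {total_now_mw / total_then_mw:.0f}x")   # 16x
```

In other words, a fourfold jump in server count multiplied by a fourfold jump in per-server wattage means roughly sixteen times the aggregate power draw, which is why power and cooling now claim 15 to 20 percent of data-center operating budgets.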
HP is building its sensor technology into a new 35,000-square-foot data center in Bangalore, India, and is looking at other internal sites as well, Patel said. Key to the technology is making sure no hot spots arise and cause server failure: Data center operators aren't willing to accept higher risk with a more flexible cooling system.