
Electric slide for tech industry?

Rising power consumption puts pressure on server makers to tame gear that's running hotter and hungrier.

Stephen Shankland
SANTA CLARA, Calif.--The computing industry is on a power trip, but it doesn't want to be.

Technology providers, customers, government officials and researchers gathered at Sun Microsystems' headquarters here Tuesday to try to tackle some of the problems posed by the runaway consumption of electricity by computing gear.

The problems arise from a confluence of business demands, rising energy prices and technology changes, which have led to chips and computers that consume more electricity. The result is a conflict of priorities. Some people try to pack servers in more densely to use floor space better, while others try to space them out to reduce overheating problems.

"We're at a perfect storm," Ben Williams, vice president of commercial business for chipmaker Advanced Micro Devices, said in a speech at the Global Conference on Energy Efficiency in the Data Center. "Business units are requiring IT departments to do more with less. We have (chief financial officers) question why all the racks aren't filled. The IT guys are saying 'I need another data center.' Then we have the rising cost of energy that's bringing all this to the forefront."

But it's in the interest of anyone consuming power to improve efficiency, argued Andrew Fanara of the EPA's Energy Star program. "Companies have to ask themselves, 'Am I willing to bet the cost of energy is going to go down?' That's the cost of doing nothing," Fanara said.

To get a grip on the problems, the tech industry has come up with solutions ranging from more energy-efficient Xeon processors to liquid cooling of high-end server systems.

Trends aren't promising. Four or five years ago, a 6-foot-tall rack full of computing gear would consume 2 to 3 kilowatts, "but now we're talking about 10-, 15- or 20-kilowatt-draw racks," said Sun Chief Technology Officer Greg Papadopoulos. Google has warned that its server electricity costs soon might outpace its server purchase costs.

"Companies have to ask themselves, 'Am I willing to bet the cost of energy is going to go down?'"
--Andrew Fanara, Energy Star program, EPA

The problem is getting worse as data centers packed full of computers proliferate and grow in size. Data centers typically have raised floors with holes in them to direct specially cooled air coming from below straight into server compartments.

"Between now and 2009, we expect 12 million additional square footage of raised floor going into marketplace," IDC analyst Vernon Turner said. By comparison, the Mall of America in Minnesota, the world's largest shopping mall, measures 2.5 million square feet. "Think of that filled to the brim with servers," Turner said.

A large fraction of the energy consumed in data centers goes to waste, said Bob Sullivan, a data center design expert from the Uptime Institute. In a survey of 19 data centers, the research consultancy found that 1.4 kilowatts are wasted for every kilowatt consumed by computing activities.
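Put another way, that figure implies a facility draws well over twice the power its computers put to productive use. A rough back-of-the-envelope sketch of the arithmetic (the 1.4-to-1 overhead ratio is the only input taken from the survey; the computing load is a made-up example):

```python
# Rough arithmetic implied by the Uptime Institute survey figure:
# for every 1 kW of computing load, another 1.4 kW goes to cooling,
# power conversion and other overhead.
it_load_kw = 500           # hypothetical data center computing load
overhead_per_kw = 1.4      # survey figure: kW wasted per kW of computing

overhead_kw = it_load_kw * overhead_per_kw
total_kw = it_load_kw + overhead_kw

print(f"Computing load: {it_load_kw:.0f} kW")
print(f"Overhead:       {overhead_kw:.0f} kW")
print(f"Total draw:     {total_kw:.0f} kW")
print(f"Total per useful kW: {total_kw / it_load_kw:.1f}")
# -> 2.4, i.e. the facility draws 2.4 kW for every kW of actual computing
```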

Measure the problem
The first step, several at the conference agreed, is to develop a useful common measurement of system performance. The industry could then balance that against power consumption, to judge how bad the power efficiency problem is and how effective solutions might be.

"Most companies agree the priority...is an objective measurement of the service being delivered," Jonathan Koomey, a computer power expert who works at Stanford University and Lawrence Berkeley National Laboratory, said in an interview. He expects that measurement to emerge from discussions he's involved in, which have included server makers, the Environmental Protection Agency and others. "I would hope a year from now, we'll have at least a draft metric," he said.

But most speakers here were reluctant to suggest a performance measurement process, and several agreed it's a thorny issue. For one thing, different companies inevitably try to pick tests that make their own equipment look good. For another, it's never easy to pick tests that represent the performance of everything from processors to storage and networking.

"It's going to be tough to pick something that's broad enough and yet simple enough to be practical on a day-to-day basis," said Peter Bannon, vice president of architecture at P.A. Semi, a start-up that develops low-power processors.

Sun's Papadopoulos said that even something as apparently simple as a vehicle's fuel efficiency isn't straightforward to measure--and compare--in reality. A motorcycle beats out a sport-utility vehicle in raw miles per gallon, but an SUV can be more efficient overall because it can carry more passengers. Then again, SUVs aren't always filled to capacity, so utilization must be factored in as well.
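To make the analogy concrete, here is a small illustrative calculation; every number in it is hypothetical, chosen only to show how utilization changes the ranking:

```python
# Hypothetical fuel-efficiency comparison showing why utilization matters.
# Raw mpg favors the motorcycle; efficiency per passenger can favor the SUV,
# but only if the SUV actually carries enough people.

def passenger_miles_per_gallon(mpg, seats, utilization):
    """Miles per gallon multiplied by the average number of riders carried."""
    return mpg * seats * utilization

motorcycle = passenger_miles_per_gallon(mpg=50, seats=1, utilization=1.0)  # 50
full_suv   = passenger_miles_per_gallon(mpg=18, seats=7, utilization=1.0)  # 126
real_suv   = passenger_miles_per_gallon(mpg=18, seats=7, utilization=0.2)  # ~25

print(motorcycle, full_suv, real_suv)
# The same logic applies to servers: performance per watt only tells the whole
# story once average utilization is factored in.
```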

One fix from decades ago that's come back into vogue is liquid cooling. Hewlett-Packard, Egenera, Silicon Graphics and IBM all have added liquid-cooling options to their hardware.

"Water cooling and liquid cooling is coming back," data center design expert Sullivan said. "From an efficiency standpoint, the closer you can get water cooling to the processor, the more efficient it's going to be."

Plugging processor leaks
Chip designers are working on ways to reduce power consumption. Processors have become a major electrical problem in part because newer manufacturing technologies leak more electrical current, wasting it rather than putting it to fruitful use.

For servers with two processor sockets, Intel's current "Irwindale" models of Xeon chips consume 110 watts. But Michael Patterson, a thermal engineer in Intel's Digital Enterprise Group, said a significant power improvement will arrive with the upcoming "Woodcrest" model. That chip, due in the second half of 2006, has two processing cores, employs an architecture derived from the Pentium M mobile processor, and is built with a newer manufacturing process with 65-nanometer features, compared with Irwindale's 90-nanometer process.

Woodcrest CPUs will use 80 watts, Patterson said. "That's not a low-voltage part. That's the performance-optimized processor," he said.

In addition, Intel said last week that the next "Montecito" generation of its higher-end Itanium processor will consume 100 watts, compared with 130 watts for current models.

The next problem will be the computer's memory subsystem, which will guzzle more than half of a computer's power by 2008, Papadopoulos said. "These things are pigs," he said, and they're only getting worse with the move to DDR2 memory and, later, fully buffered DIMMs--a newer version of the double data rate memory standard and its higher-speed sequel.

Sun just introduced its UltraSparc T1 "Niagara"-based servers, which need much less power than most mainstream servers, and is working on two technologies it hopes will reduce electricity consumption further.

"Computer memory subsystems are pigs."
--Greg Papadopoulos, chief technology officer, Sun

One is proximity input-output, which replaces communications wires and their accompanying processing chips with direct connections between the bottom of one processor and the top of another. Another is technology that has optical, rather than electrical, communication links.

Regarding energy use, "proximity I/O is way favorable. You get much higher bit (transfer) rates, and the power-per-bit (cost) goes way down," Papadopoulos said.

Also under way are methods to increase server utilization, so that systems can run closer to top capacity. Turner said many customers sheepishly report their servers are only running at 17 percent capacity on average, but in fact that's better than most.

One tool for increasing utilization is virtualization, a technology that, among other things, lets several operating systems run on the same server. "Only 20 percent of data centers we survey aren't doing virtualization, and I think they're the 'going out of business' data centers," Turner said.
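The arithmetic behind consolidation is straightforward. A hypothetical sketch using the 17 percent average utilization Turner cited (the server count and the post-consolidation target are assumptions for illustration, not figures from the conference):

```python
import math

# Hypothetical consolidation estimate: how many lightly loaded servers
# could share hardware if virtualization raises average utilization.
servers = 100              # existing physical servers (assumed)
avg_utilization = 0.17     # average utilization figure cited by IDC's Turner
target_utilization = 0.60  # assumed safe ceiling after consolidation

total_work = servers * avg_utilization          # 17 "servers' worth" of work
hosts_needed = math.ceil(total_work / target_utilization)

print(f"{servers} servers at 17% could run on about {hosts_needed} hosts at 60%")
# -> roughly 29 hosts, i.e. about 70 percent fewer machines drawing power
```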

The quickest, easiest step to improve power problems today is to install more-efficient power supplies, Berkeley's Koomey said. The EPA's Energy Star program lets manufacturers give such supplies, which convert AC power from the wall into the DC power used inside the computer, an "80+" label if they're more than 80 percent efficient--meaning they lose less than 20 percent of the power they draw as waste heat. But efficiencies of only 70 to 72 percent are typical for power supplies today.
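As a rough illustration of what that efficiency gap means in wasted power (the per-server load and the server count are assumptions chosen for the example):

```python
# Rough comparison of wall-socket draw at different power supply efficiencies.
dc_load_watts = 300        # power the server's components actually use (assumed)
servers = 1000             # number of servers in the example (assumed)

def wall_draw(dc_load, efficiency):
    """AC power drawn from the wall to deliver dc_load at a given efficiency."""
    return dc_load / efficiency

typical = wall_draw(dc_load_watts, 0.70)   # ~429 W per server
labeled = wall_draw(dc_load_watts, 0.80)   # 375 W per server

saved_per_server = typical - labeled       # ~54 W
print(f"Savings: {saved_per_server:.0f} W per server, "
      f"{saved_per_server * servers / 1000:.0f} kW across {servers} servers")
# ...before counting the extra cooling needed to remove that heat from the room
```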

Those EPA Energy Star labels won't work for servers, though, because customers order them in too wide a variety of configurations, Patterson said.

"There's a huge number of permutations of what it's going to look like. There's not going to be a yellow sticker for each one of those," Patterson said. "We need something, but that's not it."

Labels on power supplies give customers good leverage to persuade computer equipment suppliers to increase efficiency, Koomey said: "They should be leaning on the vendors, but I'm surprised how little it happens."