
Nehalem servers to anchor Intel cloud computing

Chipmaker is readying Nehalem technology as the company focuses on mega data centers with hundreds of thousands of servers.

Brooke Crothers, former CNET contributor

New "Nehalem" servers will anchor Intel's renewed push into cloud computing, as the chipmaker focuses on mega data centers with hundreds of thousands of servers.

Intel's cloud-computing efforts this year will center on a new server built around the upcoming Nehalem technology, the company said Tuesday in a teleconference on its cloud strategy. Nehalem is Intel's new chip architecture, currently used only in its Core i7 desktop processors.

Mega data centers potentially mean mega-growth. The world's largest chipmaker sees between 20 percent and 25 percent of server shipments going to mega data centers by 2012. Today mega data centers represent about 10 percent of the server market, according to Intel.

And what is cloud computing to Intel? A cloud architecture aimed at mega data centers with hundreds of thousands of servers that "can be balanced automatically. Automatically resized and scaled," according to Jason Waxman, general manager of high-density computing at Intel's Server Platforms Group. "Your service is stateless: it's not the same server every time. At any point in time I'm not necessarily accessing the same server."
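
Waxman's "stateless" point is the key architectural idea: because no request depends on landing on any one machine, a front end can hand work to whichever server happens to be free. A minimal sketch of that idea in Python, with made-up server names and a made-up routing function rather than anything Intel described:

    import random

    # Hypothetical pool of interchangeable servers: any one can answer any
    # request because no session state lives on a particular machine.
    SERVERS = ["server-%05d" % n for n in range(100000)]

    def route(request_id):
        """Pick any available server; the caller never cares which one."""
        server = random.choice(SERVERS)
        return "request %s handled by %s" % (request_id, server)

    for i in range(3):
        print(route(i))  # a different server may answer each time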

Intel's goal is to optimize this massive mesh of server hardware. "Optimization is key. When you're talking about hundreds of thousands of servers, every server, every watt, every network connection represents cost," he said.

Waxman said Intel will use its upcoming Nehalem silicon to spearhead its renewed push into mega data centers. "We've designed a server for a Nehalem-based board that's optimized for our cloud-computing infrastructure," said Waxman. The "Willowbrook" motherboard will be launched later this quarter, according to Waxman.

Willowbrook is designed with "very efficient voltage regulation," he said, and "we've optimized the layout of the boards" so air can flow more efficiently across the board. Waxman added that "idle power" has been reduced--a crucial metric for mega data centers. "We've been able to take out power. At idle, a standard Nehalem platform consumes 110 to 115 watts; we've been able to get it down to the sub-85 watt range," he said.
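
To put those idle-power numbers in context, a rough back-of-the-envelope calculation shows why a few watts matter at this scale. The fleet size, idle fraction, and electricity price below are illustrative assumptions, not Intel figures:

    # Savings from cutting idle power from ~112 W to ~85 W per server.
    standard_idle_w = 112.5   # midpoint of the quoted 110-115 watt range
    optimized_idle_w = 85.0   # the "sub-85 watt range" Waxman cites
    servers = 100000          # assumed mega data center fleet
    idle_fraction = 0.5       # assume servers sit idle half the time
    price_per_kwh = 0.10      # assumed electricity price in dollars

    watts_saved = standard_idle_w - optimized_idle_w
    kwh_per_year = watts_saved / 1000 * 24 * 365 * idle_fraction * servers
    print("%.0f MWh and roughly $%.0f saved per year"
          % (kwh_per_year / 1000, kwh_per_year * price_per_kwh))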

Overall, optimization and power savings boil down to cost. For a large cloud service provider, 50 percent of the total cost is the compute infrastructure--servers and storage--and 25 percent is delivering the power and cooling, he said. "75 percent of the (total cost of ownership) is compute, power, and cooling. And this is what Intel is focused on. Optimize the servers and get every watt we can out of the servers."
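
Applied to a hypothetical budget, that breakdown looks like this; the $100 million annual figure is a made-up example, and only the percentages come from Waxman:

    # Waxman's TCO split: 50% servers and storage, 25% power and cooling.
    annual_tco = 100000000  # hypothetical annual budget in dollars
    split = {"compute infrastructure": 0.50,
             "power and cooling": 0.25,
             "everything else": 0.25}
    for bucket, share in split.items():
        print("%-24s $%11.0f" % (bucket, annual_tco * share))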

Waxman said repeatedly that Intel is not going to be a service provider but wants to enable customers to take advantage of Intel cloud-computing technology. "We're not trying to become a service provider but we bring all this core technology and expertise together. The capability to look at a cloud and optimize it," he said.

He cited Salesforce.com, IBM, and Microsoft as service providers and added that "it's sort of a wild west frontier" as many of the more comprehensive cloud-computing service products from major companies are not in production yet.

Other technologies Intel will roll out with its Nehalem server chips include Virtual Machine Device Queues (VMDQ), which queue network traffic separately for each virtual machine and aim to resolve a longstanding problem in which one virtual machine can hog all the bandwidth. Waxman also discussed the "I/O hub" technology Intel is implementing with Nehalem. "It has a tremendous number of PCI Express Gen 2 lanes. Gen 2 for speed and more lanes--that's kind of our strategy," he said. PCI Express, the successor to the Peripheral Component Interconnect (PCI) bus, is a data path to a computer's peripheral devices such as a network card or graphics card.
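
The problem VMDQ targets is easier to see with a toy model: give each virtual machine its own queue and service the queues in turn, so one chatty VM cannot starve the rest. The sketch below is a simplified software illustration of that fairness idea, not Intel's hardware implementation:

    from collections import deque

    # Toy per-VM queuing: each virtual machine gets its own packet queue,
    # and the "NIC" drains them round-robin so no single VM hogs the link.
    queues = {vm: deque() for vm in ("vm-a", "vm-b", "vm-c")}

    def drain_round_robin():
        """Yield one packet per VM per pass until every queue is empty."""
        while any(queues.values()):
            for vm, q in queues.items():
                if q:
                    yield vm, q.popleft()

    for i in range(6):                 # vm-a floods the link...
        queues["vm-a"].append("a%d" % i)
    queues["vm-b"].append("b0")        # ...but vm-b and vm-c still get served
    queues["vm-c"].append("c0")
    print(list(drain_round_robin()))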

Waxman also discussed a Node Manager. "Within a data center, I'm trying to figure out how to use as many servers as I possibly can, and one of the challenges of optimizing a cloud is how do you make sure you don't overload a server and create a server hot spot," he said. The Node Manager will reside in the motherboard BIOS, he said.
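
The hot-spot problem Waxman describes is, at bottom, a placement decision made under a power cap. Here is a hypothetical sketch of that logic in Python; the cap, headroom, and readings are invented, and Intel's Node Manager resides in the BIOS rather than in application code like this:

    # Hypothetical placement check: pack work onto busy servers, but skip any
    # server whose reported power draw is already near its cap (a hot spot).
    POWER_CAP_W = 250  # assumed per-server power cap

    def pick_server(power_readings, headroom_w=30):
        """Return the most-loaded server that still has power headroom."""
        candidates = [(watts, name) for name, watts in power_readings.items()
                      if watts + headroom_w <= POWER_CAP_W]
        if not candidates:
            return None  # everything is near its cap; don't create a hot spot
        return max(candidates)[1]

    readings = {"node-1": 245, "node-2": 180, "node-3": 120}
    print(pick_server(readings))  # node-2: well used, but still under the cap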