
The new cloud infrastructure: Do you care?

This spring has seen some big advances in hardware infrastructure to support cloud-computing applications, with more on the way. As a cloud customer, should you pay any attention to them?

James Urquhart
James Urquhart is a field technologist with almost 20 years of experience in distributed-systems development and deployment, focusing on service-oriented architectures, cloud computing, and virtualization. James is a market strategist for cloud computing at Cisco Systems and an adviser to EnStratus, though the opinions expressed here are strictly his own. He is a member of the CNET Blog Network and is not an employee of CNET.

While cloud-computing news this spring has been dominated by the antics of individuals and small groups, by a new class of services supporting a new class of applications, and, as of today, by the future of Java, there has been much less excitement about the advances being made in data center hardware to support cloud computing.

[Image: Rackable CloudRack C2. Credit: Rackable Systems]

There is, quite possibly, a very good reason for this: if you are a consumer of cloud-based resources, the mantra has long been that you can simply deploy or consume your applications and services without any regard to the infrastructure on which they are hosted. A very cool concept for an application developer, to be sure, but I think it's a mistake to ignore what lies under the hood.
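To see how completely the hardware is abstracted away, consider what a typical deployment call looks like from the developer's side. Below is a minimal sketch in Python, assuming the boto3 library and configured AWS credentials; the image ID is a hypothetical placeholder.

    # Minimal sketch: requesting capacity without knowing the hardware.
    # Assumes boto3 is installed and AWS credentials are configured.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # You ask for an abstract "size" (instance type), not a rack, board,
    # or power design -- the provider's hardware choices are invisible.
    response = ec2.run_instances(
        ImageId="ami-12345678",   # hypothetical image ID
        InstanceType="t3.micro",  # an abstract shape, not a physical spec
        MinCount=1,
        MaxCount=1,
    )
    print(response["Instances"][0]["InstanceId"])

Nothing in that call reveals whether the instance lands on a converged blade system or a stripped-down 12V tray, which is precisely the point of the mantra.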

At the very least, the future of hardware ought to touch the inner geek in all of us.

What is happening in data center infrastructure is a complete rethinking of the architectures utilized to deliver online services, from the overall data center architectures all the way down to the very components that serve the "big four" elements of the data center: facilities, servers, storage, and networking.

Here's a quick breakdown of my favorites:

  • My employer, Cisco Systems, released one of the more visible examples of this, its Unified Computing System product set, which converges compute, network, and virtualization into a single integrated infrastructure. I won't go into detail on UCS here--CNET has covered it in depth, as have others. The point is that the industry is rethinking the practice of buying each component in isolation, assembling systems in a custom fashion each time, and (most importantly) managing them independently. This converged infrastructure is probably the most disruptive change to infrastructure since the commodity x86 server platform itself.

  • Another company doing amazing things in this space is Rackable Systems, which is taking a wholly different tack. Rather than focusing on the management aspects of the infrastructure, it is focusing on maximum density with minimal energy consumption. Its new platform, the CloudRack C2, is targeted at cloud-computing providers and is based on lessons learned from some of its largest customers--who in turn are some of the largest cloud providers in the world.
    A couple of weeks ago, I spent some time talking to George Skaff, vice president of marketing at Rackable. He talked me through what differentiates the C2, and I have to say I was impressed. A tray-based architecture with isolated, temperature-controlled, variable-speed fans; no per-server power supplies (12V distribution throughout the cabinet, with redundant rectifiers for AC-DC conversion); and all wiring on the front of the systems makes the C2 a truly dense, cloud-ready, drop-in server infrastructure. (For a rough sense of why that power design matters, see the sketch after this list.)

    [Image: Google server design. Credit: Stephen Shankland/CNET]
  • Google added to the fun by revealing its own server architecture. The one thing that stood out here was the placement of a 12V battery right on the motherboard, supporting two processors and two drives per board. It is also interesting to note that the entire board runs on 12V alone (not the 12V/5V combination of most commodity boards), and that any power conversion happens on the motherboard itself.

    According to CNET News' Stephen Shankland, "Google's data centers now have reached efficiency levels that the Environmental Protection Agency hopes will be attainable in 2011 using advanced technology." That is extremely cool (no pun intended).
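To make the power story behind both the CloudRack C2 and Google's board concrete, here is a back-of-the-envelope sketch in Python. Every number in it is an illustrative assumption of mine--not Rackable's or Google's published figures--and the EPA target is rounded to the commonly cited level of about 1.2.

    # Back-of-the-envelope sketch of why 12V-only power distribution and
    # facility efficiency matter. All figures below are illustrative
    # assumptions, not vendor-published numbers.

    def conversion_loss(load_watts, efficiency):
        """Watts lost converting AC to usable DC at a given efficiency."""
        return load_watts / efficiency - load_watts

    servers = 1000
    watts_per_server = 300.0
    it_load = servers * watts_per_server  # 300 kW of useful IT load

    # Assumed efficiencies: ~75% for a typical commodity per-server power
    # supply versus ~92% for shared, redundant AC-DC rectifiers.
    psu_waste = conversion_loss(it_load, 0.75)
    rectifier_waste = conversion_loss(it_load, 0.92)
    print(f"Waste, per-server supplies: {psu_waste / 1000:.1f} kW")       # 100.0 kW
    print(f"Waste, shared rectifiers:   {rectifier_waste / 1000:.1f} kW") # 26.1 kW

    # Power usage effectiveness (PUE): total facility power / IT load.
    # Assume a facility drawing 360 kW to deliver the 300 kW IT load.
    total_facility_power = 360_000.0
    print(f"PUE: {total_facility_power / it_load:.2f}")  # 1.20

Under those assumed numbers, centralizing the AC-DC conversion recovers roughly 74 kW across the cabinet--exactly the kind of saving that lets a provider chase the efficiency levels Shankland describes.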

Now, why should you care (besides the aforementioned "geekness" factor)? In part, because these are the systems that your future depends on, whether you are a technologist or a business manager. Yeah, the chip sets are familiar and virtualization hides the vagaries, but this is where your bread and butter lies as you move to data centers architected for the cloud.

Note that there are minor deviations from "traditional" server design here. What I wonder is whether (or when) the large cloud vendors will begin to fork their infrastructure designs as they gain more and more control over the data centers that host global IT. When will it become more advantageous to take their custom server designs in a direction that supports their custom management and virtualization software--and will that increase the risk to application payloads that should be portable between vendor platforms?

The need for interoperability standards remains great, in part because of this risk. The good news is that we have time; I certainly don't think such a fork will happen soon. However, I do believe it is important that those responsible for IT service-level agreements keep track of what their cloud vendors (or their internal IT cloud infrastructure teams) are up to when it comes to hardware.
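One practical hedge while those standards mature is to write to a provider-neutral abstraction layer rather than to any one vendor's API. Below is a minimal sketch using Apache Libcloud; the credentials are placeholders, and this is one possible approach rather than a prescription.

    # Minimal sketch: provider-neutral compute calls via Apache Libcloud.
    # Credentials below are placeholders, not working values.
    from libcloud.compute.types import Provider
    from libcloud.compute.providers import get_driver

    def running_node_names(provider, *credentials):
        """The same call works against any supported cloud; the driver
        hides the vendor-specific details."""
        driver = get_driver(provider)(*credentials)
        return [node.name for node in driver.list_nodes()]

    # Swapping providers changes one constant, not the application code.
    print(running_node_names(Provider.EC2, "access-key", "secret-key"))
    print(running_node_names(Provider.RACKSPACE, "username", "api-key"))

If the vendors' hardware designs do fork, an abstraction like this is cheap insurance that your payloads stay portable.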

Whether or not you agree with me, you have to admit that the disruption cloud computing is causing in the data center has made infrastructure somewhat fun to follow again.