
Cloud server of tomorrow will look little like full-feature server of today

Google is heavily influencing server design for large data centers, as vendors offer systems with features like multiple motherboards per server, single-voltage power supplies, and heat-tolerant chips. The best part is you don't care.

If you have an interest in the architectures that may very well come to dominate the world's most sophisticated data centers, you should take some time to check out an article in EETimes, entitled "Server makers get Goooogled."

The article, by Rick Merritt, describes new technologies being introduced by Rackable and other companies that are strongly influenced by Google's custom server designs of the last several years.

We're talking cool stuff here. As the article notes:

Google has not disclosed details of its motherboard design, but it did release a white paper calling for designs built on 12V-only power supplies. Besides such supplies, Google's design is said to use at least two full servers per board and remove many of the unneeded parts found in many mainstream server motherboards in an effort to shave cost, reduce power consumption and increase reliability.

Bob Warfield of SmoothSpan has a great post on the trend, and why on earth a data center would settle for more of less in server design:

One thing the cloud does, is it will force standardization and penny shaving at the hardware (and software) end. When Amazon, Google, or one of the others is building a big cloud data center, they want utility-grade computing. It has to be dense on MIPS value, meaning it is really compact and cheap for the amount of cpu power delivered. Designs that add 25% to the cost to deliver an extra 10% in power won't cut it. The Cloud will be too concerned about simply delivering more cores and enough memory, disk, and network speed to keep them happy. Closing a deal to build standard hardware for a big cloud vendor will be hugely valuable, and in fact, Rackable started out life building systems for Google.
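Warfield's "25% more cost for 10% more power" point is really a claim about cost per unit of delivered compute. A minimal sketch, using entirely hypothetical dollar and performance figures (none of these numbers come from the article), shows why the premium design loses on density value even though it is faster in absolute terms:

```python
# Illustrative only: the 25%-cost / 10%-power tradeoff from the quote above,
# expressed as cost per unit of delivered compute. All figures are hypothetical.

def cost_per_unit_compute(cost, compute):
    """Dollars per unit of CPU performance -- lower is better for a cloud buyer."""
    return cost / compute

baseline = cost_per_unit_compute(cost=100.0, compute=100.0)   # 1.00 $/unit
premium = cost_per_unit_compute(cost=125.0, compute=110.0)    # ~1.14 $/unit

# The "premium" design delivers more absolute power but worse density value,
# which is why Warfield argues it won't win utility-grade deals.
print(round(baseline, 2), round(premium, 2))  # 1.0 1.14
```

At data-center scale that ~14 percent gap is multiplied across thousands of boards, which is the whole argument for stripped-down designs.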

(Bob also notes that the server world for cloud providers will be a little like the aircraft world for Southwest Airlines--a single standard server architecture utilized routinely throughout the data center for several types of workloads. That's an apt analogy, and one that I've considered many times when speculating about utility computing and cloud computing market trends.)

I've actually wondered for some time about when this would happen--the shift by systems vendors from focusing on enterprise IT to focusing on cloud providers. We are a long way off from that being the dominant model (if, indeed, it ever comes to pass that enterprise IT moves entirely to the cloud). However, Rackable seems to have its sights set on meeting the demands of the Googles and Amazons and Microsofts of the world.

As does IBM, according to GigaOm's Stacey Higginbotham. She notes that IBM has the iDataPlex product line, which was expressly designed for the cloud:

(iDataPlex servers) have stripped away unnecessary hardware--a move aimed at reducing power-consuming components and saving space. Heat-tolerant processors allow a data center operator to keep air conditioning bills down, saving as much as 4 percent of total energy costs for each degree dropped. So as computing requires more scale, Google's innovations influence other buyers and sellers of technology even as the search giant slows its own data center construction.
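The cooling claim above is easy to put in concrete terms. A rough sketch, assuming the ~4-percent-per-degree savings compound with each degree the cooling setpoint is raised (the compounding is my assumption, and the annual energy spend below is hypothetical):

```python
# Rough arithmetic for the iDataPlex-style cooling claim quoted above:
# ~4% of total energy cost saved per degree of reduced cooling.
# Assumption: savings compound per degree; the dollar figure is hypothetical.

def energy_cost_after_raise(annual_cost, degrees_raised, savings_per_degree=0.04):
    """Remaining annual energy cost after raising the setpoint by N degrees."""
    return annual_cost * (1 - savings_per_degree) ** degrees_raised

base = 1_000_000.0  # hypothetical annual energy spend
print(round(energy_cost_after_raise(base, 3)))  # 884736
```

Even three degrees of tolerance knocks more than 11 percent off the bill in this sketch, which is why heat-tolerant processors matter at scale.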

Count another one for the Googleplex.

So which vendor should you go with if you are seeking the best cloud experience? Ah, here's the best part. You don't care. Period. As a cloud consumer, the underlying physical hardware should arguably be a non-issue, and at worst a fleeting thought as you review your cloud options. This is the beauty of the world we are moving to; it's someone else's problem now.

So, those nicely recognizable Dell 2950s or HP DL360 G5s you have stacked in racks in your dev lab will probably be replaced by weird mutant motherboards that couldn't read a USB stick if their lives depended on it. You may never know the ultimate hardware architectures that you come to rely on every single day. Yet your livelihood will depend on hundreds or thousands of them.

I, for one, am actually OK with that.