The new virtualization design point

We're seeing more and more system designs that clearly target virtualized workloads. Dell's the latest example.

One of the sure signs that a new technology is having real impact on the industry as a whole is when it starts changing other technologies, products, and processes that touch it. In part, this is a simple reflection that a vendor, or a small group of vendors, isn't the only one that cares about its new shiny-ness. Press releases, consortia, and partnerships are all well and good. But the real proof of acceptance is when other companies and customers start spending real money and changing their own plans and products.

We're seeing this happening with server virtualization. IT shops have started to rethink the processes that they use to allocate new computing resources to users. Some at the forefront have even made virtual servers, rather than physical ones, their default unit of computing.

We're also seeing changes at the processor level. In the x86 world, Intel VT and AMD-V attacked some of the most fundamental difficulties of virtualizing x86 hardware. Both companies continue to introduce hardware virtualization enablers that address things like I/O performance, memory handling under virtualization, and the compatibility of virtual machines across multiple generations of hardware.

And we're seeing changes in the way that servers themselves are designed and built. Fundamentally, the issue is this: server virtualization's first big win was in providing a path to consolidate x86 servers that were otherwise very lightly utilized--5 percent or below in many cases. Virtualization can push that figure closer to 50 percent.
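
The arithmetic behind that claim is straightforward. Here's a rough back-of-the-envelope sketch in Python; the server count and overhead factor are illustrative assumptions, and only the 5 percent figure comes from the discussion above:

    # Back-of-the-envelope consolidation math (illustrative numbers only).
    physical_servers = 10        # assumed number of standalone boxes to consolidate
    avg_utilization = 0.05       # ~5 percent CPU busy on each, per the text above
    hypervisor_overhead = 1.10   # assume roughly 10 percent virtualization overhead

    # Roughly speaking, the work of all ten lightly loaded servers lands on one host.
    host_utilization = physical_servers * avg_utilization * hypervisor_overhead
    print(f"Estimated host CPU utilization: {host_utilization:.0%}")
    # -> Estimated host CPU utilization: 55% (in the neighborhood of the ~50 percent cited)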

So how does this change server design? Servers are designed and configured to be "balanced." This means that processing speed, memory performance and capacity, and I/O ideally don't limit each other. (In practice, fundamental technology limits dictate certain inequalities, but the system designer's job is to work around these as much as possible.) Consider what would happen if you put the latest quad-core screamer in a PC with only 256MB of memory and a slow serial port coming out the back: it wouldn't run most applications well--however speedy the processor.

Virtualization doesn't actually make the processor faster. But it does tend to make the processor do more work and thereby makes the other system components do more work as well. In practice this means that virtualized servers need correspondingly more memory and more network connections. And that's exactly the sort of thing that we're seeing.
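
To put a hedged number on "correspondingly more memory," here's a small sizing sketch; the per-VM footprint, hypervisor reservation, and DIMM capacity are assumptions for illustration, not figures from Dell or any hypervisor vendor:

    import math

    # Rough memory-sizing sketch for a consolidated host (illustrative assumptions only).
    vms_per_host = 20        # target number of virtual machines on one host
    mem_per_vm_gb = 2        # assumed average memory footprint per VM
    hypervisor_mem_gb = 4    # assumed memory set aside for the hypervisor itself
    dimm_size_gb = 4         # assumed capacity of each DIMM

    required_mem_gb = vms_per_host * mem_per_vm_gb + hypervisor_mem_gb
    dimm_slots_needed = math.ceil(required_mem_gb / dimm_size_gb)

    print(f"Host memory needed: ~{required_mem_gb} GB")   # ~44 GB in this example
    print(f"DIMM slots needed: {dimm_slots_needed}")      # 11 slots at 4 GB each

That's one reason DIMM slot counts figure so prominently in the Dell announcement quoted below.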

To pick just one recent announcement, consider this September 10 press release from Dell:

The PowerEdge M905 delivers the ultimate four-socket blade-based virtualization performance and is the first blade server to support 11 tiles and 66 Virtual Machines (VM) in VMmark testing. The PowerEdge M805 delivers the same number of DIMM slots in a two-socket blade that requires a four-socket blade from either HP or IBM. With a choice of hypervisors including Citrix XenServer, VMware, and now Microsoft Hyper-V, PowerEdge servers can deliver the optimal platform for virtualized environments. The Dell PowerEdge M805 and M905 servers are now available worldwide with a starting price of $1,699 and $4,999 USD respectively...In addition to the new servers, Dell announced full, high-speed 10Gb Ethernet and 8Gb Fibre Channel switches and mezzanine cards designed to provide customers increased bandwidth and performance.

I'll just note a few things here:

  • More memory in servers is all the rage. That's because there's a fairly strong correlation between how much work a processor is doing and how much memory it needs to store the associated data and instructions. Memory requirements have been going up forever, of course, but server virtualization has accelerated the process.
  • 10Gb Ethernet may not yet be needed for many single workloads--especially in the volume server world. But as a pipe for an aggregated group of virtual machines? It's not mainstream yet, but it's clearly an early use case for high-bandwidth networking on x86 (there's a rough sketch of the aggregation math after this list).
  • Finally, observe that even performance claims are couched in virtualization terms. Sure, this is partly about using virtualization to lend a little dazzle to what might otherwise be taken as a just-another-server announcement. But the fact remains that system performance running a mix of workloads increasingly matters more than how fast a system runs a single database application.
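
On the networking point, here's the aggregation math mentioned above--a rough sketch only, with an assumed per-VM traffic figure rather than anything measured:

    # Rough network-aggregation sketch (per-VM traffic is an illustrative assumption).
    vms_per_host = 20
    avg_mbps_per_vm = 100    # assumed average sustained traffic per VM
    aggregate_mbps = vms_per_host * avg_mbps_per_vm

    for link_name, link_mbps in [("1Gb Ethernet", 1000), ("10Gb Ethernet", 10000)]:
        share = aggregate_mbps / link_mbps
        print(f"{link_name}: {share:.0%} of the link for {vms_per_host} VMs")
    # -> the 1Gb link is oversubscribed (200%), while 10Gb has plenty of headroom (20%)

No single one of those VMs needs a 10Gb pipe; together, they make short work of a Gigabit one.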

Enough proof that server virtualization is starting to change everything (or at least an awful lot) about the data center?

About the author

Gordon Haff is Red Hat's cloud evangelist, although the opinions expressed here are strictly his own. He's focused on enterprise IT, especially cloud computing. However, Gordon writes about a wide range of topics, whether they relate to the way-too-many hours he spends traveling or his longtime interest in photography.

 
