As more companies move to virtual machines and blade servers to reduce space and costs, there are few mentions of the downside or "dark side" of virtualizing hardware and operating systems. I spoke with Bob Waldie, CEO and founder of Opengear, an open-source out-of-band management services provider, about the problems associated with the adoption of blades and VMs.
Q: So why do you think that virtualization is such a problem?
Waldie: The dark side of virtualization is the complexity of the environment it creates. While server virtualization improves flexibility and asset utilization, it also adds complexity to the environment, specifically by adding an extra hypervisor layer to the operating and management load and by creating complicated virtual appliances and virtualized I/O. For example, when the enterprise VPN is routed through software VPN appliances running on virtual servers, and those appliances are then migrated between virtual machines, connecting the virtual machines back to the LAN presents a new layer of management challenges.
Q: Well, what are these new management challenges?
Waldie: For the vast majority of enterprises, virtualization coexists with physical deployments, and data center managers need tools for managing both physical and virtual environments. However, in addition to needing new virtual management tools, the added network complexity means the tools that previously managed the physical infrastructure are no longer appropriate. Adding to this management issue is the increase in disaster sensitivity that comes with consolidation. While the prevalence of infrastructure outages may not increase, the consequences of a hypervisor or blade failure will, so managers have to find and implement a completely new set of tools while under incredible pressure to avoid any downtime or IT issues.
Q: Even given the additional management issues, isn't there a real benefit to the consolidation that virtualization brings?
Waldie: Sure, there are lots of reasons people give for virtualizing a data center, the big one being increased infrastructure utilization, along with load balancing, power management, simplified scheduled maintenance, and improved disaster recovery. But we've seen significant barriers to delivering on those promises, and there are real costs in moving to more virtual environments.
Q: Do you mean costs beyond the problems raised by managing virtual and physical machines?
Waldie: Yes. Organizations are actually losing the ROI on their existing infrastructure management tools and facing higher upfront energy costs. For example, there is currently around $3 billion worth of KVM switches installed in the racks of enterprise data centers, and these tools are being progressively disconnected from the systems they used to manage. Server virtualization isolates the KVM switch from the operating system and applications. Not only does the sysadmin lose a management tool, the organization also loses its investment in the switch.
As far as energy consumption goes, as virtualization moves server utilization up from its historical floor of 10 percent to 20 percent, power consumption also rises. There is also more risk that these increasingly mission-critical servers become hot spots, which can degrade server life and performance. Blade servers are also a problem: built for density, they ramp up the power dissipation in each rack and demand sophisticated direct cooling solutions to keep the servers running smoothly.
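The utilization-to-power relationship can be illustrated with the commonly used linear server power model, in which a server draws a fixed idle wattage plus a component proportional to utilization. The wattage figures and server counts below are illustrative assumptions, not measurements from any specific hardware:

```python
# Illustrative linear server power model: P(u) = P_idle + (P_max - P_idle) * u
# All wattage figures and utilization levels here are assumed for illustration.

def server_power(u, p_idle=150.0, p_max=300.0):
    """Approximate power draw in watts at utilization u (0.0 to 1.0)."""
    return p_idle + (p_max - p_idle) * u

# Before consolidation: 10 physical servers each idling along at 10% utilization.
total_before = 10 * server_power(0.10)   # 10 * 165 W = 1650 W

# After virtualization: the same workload packed onto 5 hosts at 20% utilization.
total_after = 5 * server_power(0.20)     # 5 * 180 W = 900 W

# Total facility power drops (fewer idle chassis), but the draw per remaining
# server rises, concentrating heat in fewer rack positions:
per_server_before = server_power(0.10)   # 165 W
per_server_after = server_power(0.20)    # 180 W
```

Under this sketch, consolidation cuts aggregate power, but each surviving host dissipates more heat in a denser footprint, which is exactly the hot-spot and direct-cooling concern raised above.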
Q: So what kinds of solutions out there help solve these problems?
Waldie: There's no single integrated, vendor-neutral solution that will monitor and measure power, direct cooling, control the physical and virtual servers that generate the load, and then enable data center managers to shed load and balance power demand against power supply. On average, virtualized environments have 11 different platforms, technologies, and vendors present, and most proprietary tools can't deal with this level of heterogeneity, making open-source tools like Opengear's KCS6000 or Minicom's KVM.net II a good, flexible fit.