One dynamic of the server virtualization marketplace that doesn't get the attention it probably should is the question of where the hypervisor "lives" and how it gets delivered to buyers. Services such as load balancing and replication, which build on a virtualized foundation to construct what goes by names like Dynamic IT, may ultimately be more important than the foundation's components. However, the choice of hypervisor matters today, if only because it serves as a sort of control point for the profitable components above.
Hypervisors get delivered in three different ways.
The first is as software purchased from an independent software vendor. This is the primary VMware model, and with VMware still the 800-pound gorilla of virtualization, it remains the dominant delivery model today. These ISVs will argue, as does Simon Crosby of Citrix, that if "the user expects to deploy a virtualization platform that is entirely guest OS agnostic, using a complete virtual infrastructure platform then a type-1 hypervisor that is OS agnostic (xen.org, Xen Cloud Platform, Citrix XenServer, OracleVM, VMware vSphere) is what they will go for." Former VMware CEO Diane Greene also argued this point with me--vigorously--after I suggested that the operating system might be a more logical virtualization entry point for some users.
The second way to acquire virtualization is as part of an operating system. This is the Microsoft model (and therefore the reason that Diane Greene got so irked at me for suggesting it might be a viable virtualization on-ramp). It also describes Red Hat's approach with KVM.
From the operating system vendor's perspective, you could sum up this approach as "the path of least resistance": you're already buying the operating system anyway, so why not get core virtualization as part of the package? (Of course, you then have to buy other pieces from the operating system vendor to effectively manage and make use of that virtualized infrastructure.) It strikes me as a powerful acquisition model for homogeneous environments, or even for environments managed as homogeneous pools. And while OS-based virtualization has some catching up to do in areas such as management, I'm not sure that limitations and newness, such as those noted by Andi Mann, are obstacles to the same degree they would be for a standalone product. As Crosby also notes:
It's important to realize that for a Linux vendor, KVM significantly simplifies the engineering, testing and packaging of the distro. KVM is a driver in the kernel, whereas Xen, even with paravirt_ops support in the Linux kernel, requires the vendor to pick a particular release of Xen and its tool stack, and then integrate that with a specific kernel.org kernel, and exhaustively test them together - rather than just getting a pre-integrated kernel and hypervisor from kernel.org. So it is entirely reasonable to expect that over time the distros will focus on KVM as a hypervisor. I think KVM is extremely powerful in this context.
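Crosby's point that KVM is "a driver in the kernel" is something you can see directly on a Linux box: once the kvm module (plus kvm_intel or kvm_amd) is loaded, the kernel exposes a /dev/kvm character device, which userspace hypervisors such as QEMU open to create and run virtual machines. A minimal sketch of checking for that device (the helper name here is my own, for illustration):

```python
import os

def kvm_available() -> bool:
    """Return True if the kernel's KVM driver appears to be loaded.

    When the kvm module is present, the kernel exposes /dev/kvm;
    userspace hypervisors like QEMU open this device to create
    and run virtual machines. No extra packaging or out-of-tree
    integration is involved, which is the simplification Crosby
    describes for distro vendors.
    """
    return os.path.exists("/dev/kvm")

if __name__ == "__main__":
    print("KVM driver loaded:", kvm_available())
```

Contrast this with Xen, where the hypervisor boots beneath the kernel and the distro has to integrate and test a matched hypervisor, tool stack, and kernel.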
The third delivery path is embedded, which can involve either a standalone hypervisor or one derived from a standard operating system kernel. We first saw this idea making the rounds in 2007, and today most of the major hypervisors are available in embedded form on various models of servers from the large x86 system makers. It seemed an appealing idea at the time--virtualization as a server feature, a sort of super-BIOS. This was particularly true given all the ongoing work to standardize the way hypervisors are monitored and managed.
To date, however, embedded hypervisors haven't really taken off. The standalone hypervisors exist in the context of a much broader suite of virtualization software from the ISV, and customers find it more natural to acquire all their virtualization software from that source rather than from a system maker. For their part, the operating system vendors are already delivering an OS, so virtualization is a natural extension of that. Perhaps as virtualization becomes more ubiquitous, embedding it in servers will seem more natural, but it hasn't played out that way so far.
What all this does show, though, is that for all the talk of the "commoditization" of the hypervisor, we're not at that point today. Commoditization implies, among other things, that a product from one source can be transparently interchanged with that from another. And that doesn't describe hypervisors--not even close.