
I/O virtualization's competing forms

The emerging category is the latest to make headlines. But it takes many forms, some of which aren't even really virtualization as such.

Gordon Haff

Server virtualization means something fairly specific. Storage virtualization is a bit more diffuse. But it's I/O virtualization that really covers a lot of ground.

At a high level, virtualization means turning physical resources into logical ones. It's a layer of abstraction. In this sense, it's something the IT industry has been doing essentially forever. For example, when you write a file to disk, you're taking advantage of many software and hardware abstractions, such as the operating system's file system and logical block addressing in the disk controller. Each of these virtualization layers simplifies how what's above interacts with what's below.
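
To make that layering a bit more concrete, here's a toy sketch in Python. The classes and methods are invented purely for illustration--this isn't how any real file system or disk controller works--but it shows each level presenting a simpler, logical view of the physical resource beneath it:

    # Toy illustration of stacked abstractions: each layer turns the
    # physical resource below it into a simpler logical one above it.
    # Class and method names are invented for illustration only.

    class PhysicalDisk:
        """Raw medium: fixed-size blocks addressed by number."""
        def __init__(self, num_blocks, block_size=512):
            self.block_size = block_size
            self.blocks = [bytes(block_size) for _ in range(num_blocks)]

        def write_block(self, lba, data):
            # Logical block addressing: callers never see cylinders,
            # heads, or sectors--just a flat range of block numbers.
            self.blocks[lba] = data.ljust(self.block_size, b"\x00")

        def read_block(self, lba):
            return self.blocks[lba]

    class ToyFileSystem:
        """Maps human-friendly names onto block numbers."""
        def __init__(self, disk):
            self.disk = disk
            self.table = {}        # filename -> list of block numbers
            self.next_free = 0

        def write_file(self, name, data):
            blocks = []
            step = self.disk.block_size
            for i in range(0, len(data), step):
                blocks.append(self.next_free)
                self.disk.write_block(self.next_free, data[i:i + step])
                self.next_free += 1
            self.table[name] = blocks

        def read_file(self, name):
            chunks = [self.disk.read_block(lba) for lba in self.table[name]]
            return b"".join(chunks).rstrip(b"\x00")

    fs = ToyFileSystem(PhysicalDisk(num_blocks=16))
    fs.write_file("hello.txt", b"physical resources, presented logically")
    print(fs.read_file("hello.txt"))

The application asks for a file by name; it neither knows nor cares which blocks--let alone which platters and sectors--end up holding the bytes.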

I/O virtualization brings these principles to the edge of the network. Its general goal is to eliminate the inflexible physical association between specific network interface controllers (NICs) and host bus adapters (HBAs) and specific servers. As a practical matter in a modern data center, this usually comes down to virtualizing Gigabit Ethernet (and 10 GbE to come) and Fibre Channel links.

Virtualizing these resources brings some nice benefits. Physical resources can be carved up and allocated to servers based on what they need to run a particular workload. This becomes especially important when the servers themselves are virtualized. I/O virtualization can also decouple network and storage administration from server administration--tasks that are often performed by different people. For example, IP addresses and World Wide Names (the unique identifiers used in Fibre Channel storage networks) can be pre-allocated to a pool of servers.
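
To give a rough flavor of that pre-allocation idea, here's a small illustrative Python sketch. The addresses, names, and functions are all made up rather than drawn from any vendor's tooling; the point is simply that network and storage identities live in a pool and get handed to whichever server picks up a workload:

    # Sketch of pre-allocated I/O identities: addresses and World Wide
    # Names are defined up front (by the network and storage admins) and
    # handed to whichever server needs them, rather than being tied to a
    # particular physical NIC or HBA. All values here are made up.

    from dataclasses import dataclass

    @dataclass
    class IOProfile:
        ip_address: str      # for the virtual NIC
        mac_address: str
        wwpn: str            # World Wide Port Name for the virtual HBA

    pool = [
        IOProfile("10.0.0.11", "02:00:00:00:00:11", "20:00:00:25:b5:00:00:01"),
        IOProfile("10.0.0.12", "02:00:00:00:00:12", "20:00:00:25:b5:00:00:02"),
        IOProfile("10.0.0.13", "02:00:00:00:00:13", "20:00:00:25:b5:00:00:03"),
    ]

    assignments = {}   # server name -> IOProfile

    def assign_profile(server):
        """Give a server the next unused identity from the pool."""
        profile = pool.pop(0)
        assignments[server] = profile
        return profile

    def release_profile(server):
        """Return the identity to the pool, e.g. when a workload moves."""
        pool.append(assignments.pop(server))

    p = assign_profile("blade-07")
    print(f"blade-07 boots with IP {p.ip_address} and WWPN {p.wwpn}")
    release_profile("blade-07")   # the identity can now follow the workload elsewhere

The identities belong to the pool, not to any particular piece of hardware--which is what lets a workload move without anyone recabling or rezoning.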

That's I/O virtualization conceptually. Vendors are approaching it from a lot of different directions.

For starters, like many things, I/O virtualization has its roots in the mainframe. From virtual networking within servers to channelized I/O without, many aspects of I/O virtualization first appeared in what is now IBM's System z, whence they made their way into other forms of "Big Iron" from IBM and others. Thus, many servers today have some form of virtual networking within the box, whereby virtual machines communicate with each other over internal high-performance connections that appear as network links to software.

However, I/O virtualization in the distributed systems sense first arrived in blade server designs. Egenera was the pioneer here. HP's Virtual Connect for its c-Class BladeSystem and IBM Open Fabric for its BladeCenter are more recent and more widely sold examples. And virtualization, including I/O virtualization, lies at the heart of Cisco's Unified Computing System (UCS).

Blade architectures incorporate third-party switches and other products to various degrees. However, they're largely an integrated technology stack from a single vendor. Indeed, this integration has arguably come to be seen as one of the virtues of blades. In this sense, they can be thought of as a distributed system analog to large-scale SMP.

A new crop of products in a similar vein aren't tied to a single vendor's servers.

Aprius, Virtensys, and NextIO are each taking slightly different angles, but all are essentially bringing PCI Express out of the server to an external chassis where the NICs and HBAs then reside. These cards can then be sliced up in software and divvied up among the connected servers. Xsigo is another company taking a comparable approach but using InfiniBand-based technology rather than PCIe.

Whatever the technology specifics, the basic idea is to create a virtualized pool of I/O resources that can be allocated (and moved around) based on what an individual server requires to run a given workload most efficiently.
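
As a rough sketch of what that allocation looks like in principle--again in Python, with invented device names, servers, and numbers rather than any actual vendor's interface--a shared adapter sitting in the external chassis gets carved into slices sized to each server's workload:

    # Rough sketch of carving shared I/O up among servers: a 10 GbE NIC
    # in an external chassis is split into virtual NICs, each with a
    # bandwidth share sized to the workload on the server it's handed to.
    # Device names, servers, and numbers are all invented for illustration.

    class SharedAdapter:
        def __init__(self, name, capacity_gbps):
            self.name = name
            self.capacity_gbps = capacity_gbps
            self.allocated_gbps = 0.0
            self.vnics = {}          # server -> allocated bandwidth

        def carve(self, server, gbps):
            """Allocate a slice of the physical adapter to a server."""
            if self.allocated_gbps + gbps > self.capacity_gbps:
                raise RuntimeError(f"{self.name}: not enough headroom")
            self.vnics[server] = gbps
            self.allocated_gbps += gbps

        def reclaim(self, server):
            """Take the slice back when the workload moves or shrinks."""
            self.allocated_gbps -= self.vnics.pop(server)

    nic = SharedAdapter("chassis-slot-3-10gbe", capacity_gbps=10.0)
    nic.carve("db-server", 4.0)      # I/O-heavy database gets a big slice
    nic.carve("web-server", 1.0)     # lighter front end gets a thin one
    print(nic.vnics, f"{nic.capacity_gbps - nic.allocated_gbps} Gbps free")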

There's a final interesting twist to I/O virtualization. And that's access to storage over a network connection. While network-attached file servers are suitable for many tasks, heavy-duty production applications often need the typically higher performance provided by so-called block-mode access. For more than a decade, this has tended to translate into storage subsystems consisting of disk arrays connected to servers by a dedicated Fibre Channel-based storage area network (SAN).

However, with the advent of 10 GbE networks and associated enhancements to Ethernet protocols, we're starting to see interest in the idea of a "unified fabric"--a single infrastructure to handle both networking and storage traffic. One of the key technology components here is a protocol called Fibre Channel over Ethernet (FCoE) that allows block-mode storage access originally intended for Fibre Channel networks to traverse 10 GbE instead.
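
In deliberately simplified terms, the encapsulation relationship looks something like the Python sketch below: a complete Fibre Channel frame rides as the payload of an ordinary Ethernet frame whose EtherType (0x8906) marks it as FCoE. Real FCoE also carries version bits, start- and end-of-frame delimiters, and padding, all omitted here, and the MAC addresses and payload are made up:

    # Deliberately simplified picture of FCoE encapsulation: a complete
    # Fibre Channel frame becomes the payload of an Ethernet frame whose
    # EtherType (0x8906) identifies it as FCoE. Real FCoE also carries a
    # version field, SOF/EOF delimiters, and padding, omitted here; the
    # MAC addresses below are made up.

    import struct

    FCOE_ETHERTYPE = 0x8906

    def ethernet_frame(dst_mac, src_mac, ethertype, payload):
        """Build a bare Ethernet II frame (no VLAN tag, no FCS)."""
        header = dst_mac + src_mac + struct.pack("!H", ethertype)
        return header + payload

    # Pretend this is a full FC frame (header + SCSI payload + CRC)
    # produced by the storage stack exactly as it would be for a real SAN.
    fc_frame = b"\x00" * 24 + b"SCSI block I/O goes here" + b"\x00" * 4

    dst = bytes.fromhex("0efc00000001")   # made-up MAC of the FCoE switch port
    src = bytes.fromhex("0efc00000002")   # made-up MAC of the server's adapter

    frame = ethernet_frame(dst, src, FCOE_ETHERTYPE, fc_frame)
    print(f"{len(frame)} bytes on the wire, EtherType 0x{FCOE_ETHERTYPE:04x}")

The storage stack still thinks it's talking Fibre Channel; only the wire underneath changes.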

There's more to unified fabrics than that, including alternate protocols such as iSCSI and various acceleration technologies, but for our purposes here I'll use FCoE as a blanket term.

So what does FCoE have to do with I/O virtualization? After all, an adapter card optimized for FCoE can be virtualized alongside other NICs and HBAs. So, at first glance, you might think that FCoE and I/O virtualization were simply complementary.

At one level, you'd be right. Aprius, for example, advertises that it provides "virtualized and shared access to data and storage network resources (Ethernet, CEE, iSCSI, FCoE, network accelerators) across an entire rack of servers, utilizing the ubiquitous PCI Express (PCIe) bus found in every server."

However, considered more broadly, I/O virtualization and FCoE solve much the same problem--that of connecting servers to different types of networks without a lot of cards and cables associated with each individual server.

Adapters that connect to converged networks will themselves converge on card designs that can handle a wide range of both networking and storage traffic. Furthermore, if Ethernet's history is any indication, prices are likely to drop significantly over time, which would make finely allocating networking resources among servers less critical.

To the degree that each server can get a relatively inexpensive adapter that can handle multiple tasks, the rationale for bringing PCIe out to an external I/O pool is, at the least, much reduced. There are still reasons to virtualize I/O in some form--especially in an integrated environment such as blades. Cisco, for example, puts both FCoE and virtualization front and center with its Unified Computing System. But narrow justifications for I/O virtualization, such as reducing the number of I/O cards required, are significantly weakened by FCoE.

In the end, FCoE may not be I/O virtualization as such, but it's closely related in function if not in form.