
Cloud computing and the big rethink: Part 2

If operating systems were originally designed to be applicable across the widest variety of hardware, and VMs are designed as containers for them, is the homogeneity of cloud infrastructure the VM's bane?

James Urquhart
James Urquhart is a field technologist with almost 20 years of experience in distributed-systems development and deployment, focusing on service-oriented architectures, cloud computing, and virtualization. James is a market strategist for cloud computing at Cisco Systems and an adviser to EnStratus, though the opinions expressed here are strictly his own. He is a member of the CNET Blog Network and is not an employee of CNET.

In the opening post of this series, I joined Chris Hoff and others in arguing that cloud computing will change the way we package server software, with an emphasis on lean, "just enough" systems software. This means that the big, all-purpose operating system of the past will either change dramatically or disappear altogether, as the need for a "handle all comers" systems infrastructure is redistributed both up and down the execution stack.

The reduced need for specialized software packaged with bloated operating systems in turn means the virtual server is a temporary measure: a stopgap until software "containers" adjust to the needs of the cloud-computing model. In this post, I want to highlight a second reason why server virtualization (and storage and network virtualization) will give way to a new form of resource virtualization.

I'll start by pointing out one of the unexpected (for me at least) effects of cloud computing on data center design. Truth be told, this is actually an effect of mass virtualization, but as cloud computing is an operations model typically applied to virtualization, the observation sticks for the cloud.

Today's data centers have been built piecemeal, very often one application at a time. Without virtualization, each application team would typically identify what servers, storage and networking were needed to support the application architecture, and the operations team would acquire and install that infrastructure.

Specific choices of systems used (e.g. the brand of server, or the available disk sizes) might be dictated by internal IT "standards," but in general the systems that ended up in the data center were far from uniform. From my time at utility-computing infrastructure vendor Cassatt, I can't remember a single customer that didn't need its automation to handle a heterogeneous environment.

But virtualization changes that significantly, for two reasons:

  • The hypervisor and virtual machine present a uniform application programming interface and hardware abstraction layer for every application, yet can adjust to the specific CPU, memory, storage, and network needs of each application.

  • Typical virtualized data centers are inherently multitenant, meaning that multiple stakeholders share the same physical systems, divided from one another by VMs, hypervisors, and their related management software.

So, the success of applications running in a virtualized environment does not depend on the specialization of the underlying hardware. That is a critical change to the way IT operates.
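
To make that concrete, here's a rough sketch of what the uniform abstraction buys you (illustrative Python with made-up names, not any particular hypervisor's API): each application states its needs against the same VM-shaped interface, and any box in a shared, undifferentiated pool can satisfy them.

    from dataclasses import dataclass, field

    @dataclass
    class VMSpec:
        """What an application asks of the virtual layer: resources, not hardware models."""
        name: str
        vcpus: int
        memory_gb: int
        disk_gb: int

    @dataclass
    class Host:
        """One box in a homogeneous pool; every host looks identical to the scheduler."""
        name: str
        free_vcpus: int = 16
        free_memory_gb: int = 64
        free_disk_gb: int = 500
        guests: list = field(default_factory=list)

        def can_fit(self, spec: VMSpec) -> bool:
            return (spec.vcpus <= self.free_vcpus
                    and spec.memory_gb <= self.free_memory_gb
                    and spec.disk_gb <= self.free_disk_gb)

        def place(self, spec: VMSpec) -> None:
            self.free_vcpus -= spec.vcpus
            self.free_memory_gb -= spec.memory_gb
            self.free_disk_gb -= spec.disk_gb
            self.guests.append(spec.name)

    def schedule(spec: VMSpec, pool: list) -> Host:
        """First-fit placement: because the hardware is uniform, any host with
        spare capacity will do; no per-application hardware selection required."""
        for host in pool:
            if host.can_fit(spec):
                host.place(spec)
                return host
        raise RuntimeError("pool exhausted; add identical boxes, not special ones")

    # Two tenants with very different needs share the same undifferentiated pool.
    pool = [Host("rack1-node%d" % i) for i in range(4)]
    print(schedule(VMSpec("billing-db", vcpus=8, memory_gb=32, disk_gb=200), pool).name)
    print(schedule(VMSpec("web-frontend", vcpus=2, memory_gb=4, disk_gb=20), pool).name)

Nothing in that placement loop cares what brand of server is in the rack; that's the whole point.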

In fact, in the virtualized world, the push is in the opposite direction: toward a homogeneous infrastructure. Ideally, you rack the boxes, wire them up once, and sit back as automation and virtualization tools give the illusion that each application is getting exactly the hardware and networking it needs.

Now, if the physical architecture no longer needs to be customized for each application, the question quickly becomes: what is the role of the virtual server in delivering what the application needs? Today, virtual machines are required because applications are written against operating systems as their deployment frameworks, so to speak, and those operating systems are tuned to distribute hardware resources to applications.

But imagine if applications could instead be built against more specialized containers that handled both "glue" functions and resource management for that specialization--e.g., a Web app "bundle" that could deal with both network I/O and storage I/O (among other things) directly on behalf of the applications it hosts. (Google App Engine, anyone?)
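
To illustrate that "bundle" idea, here's a minimal sketch using Python's standard WSGI interface (App Engine's original Python runtime leaned on the same idea; the wsgiref server below is just a stand-in for whatever the platform actually runs). The application supplies only a request handler; the hosting container owns the sockets, the processes, and the plumbing.

    from wsgiref.simple_server import make_server

    def application(environ, start_response):
        """The whole application, as far as the platform is concerned: a handler.
        Network I/O, threading, and process management belong to the container."""
        body = ("Hello from %s" % environ.get("PATH_INFO", "/")).encode("utf-8")
        start_response("200 OK", [("Content-Type", "text/plain"),
                                  ("Content-Length", str(len(body)))])
        return [body]

    if __name__ == "__main__":
        # Stand-in for the platform's container; in a real PaaS the provider runs
        # this part, and the application never sees a server at all.
        make_server("", 8080, application).serve_forever()

The point isn't WSGI itself; it's that nothing in the handler cares what operating system, or how many machines, sit underneath it.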

A homogeneous physical architecture greatly simplifies the task of delivering these distributed-computing environments, as there is consistency of behavior from both a management and an execution perspective. However, as it turns out, a homogeneous virtual container environment has the same effect.

So, if the VM isn't hiding diversity at the hardware layer, and diversity at the software layer is hidden by the "middleware," what is its purpose? Well, there is still a need for a virtual container of some sort, to allow for a consistent interface between multiple types of cloud middleware and the hardware. But it doesn't need to look like a full-fledged server at all.
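
Here's a hypothetical sketch, again in Python purely for concreteness, of what that thinner contract might look like: the interface the cloud middleware programs against asks for service-level things (deploy this unit, give it this much capacity, route this traffic), not machine-level things (boot, attach a disk, configure a NIC).

    from abc import ABC, abstractmethod

    class ServiceContainer(ABC):
        """A container contract shaped like service delivery, not like a server.
        Illustrative only; real platforms differ in the details."""

        @abstractmethod
        def deploy(self, artifact: bytes, runtime: str) -> str:
            """Accept a deployable unit and return an identifier for it."""

        @abstractmethod
        def scale(self, unit_id: str, capacity: int) -> None:
            """Ask for more or less capacity; the platform maps that onto hardware."""

        @abstractmethod
        def route(self, unit_id: str, hostname: str) -> None:
            """Attach traffic to the unit; no virtual NICs or IP plans exposed."""

    # Contrast with the VM-shaped contract the middleware would otherwise need:
    # boot(image), attach_disk(volume), plug_nic(vlan), patch_the_guest_os()...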

Thus, the VM is a stopgap. Virtual containers will evolve to look less and less like hardware abstractions, and more and more like service delivery abstractions.

In my next post, I want to look at things from the software layers down, and get into more detail about why applications will be created differently for the cloud than they were for "servers." Stay tuned.