Understanding how cloud computing differs from, say, virtualization comes down to understanding it as a model for how to use your IT technology, not as a technology in and of itself.
James Urquhart is a field technologist with almost 20 years of experience in distributed-systems development and deployment, focusing on service-oriented architectures, cloud computing, and virtualization. James is a market strategist for cloud computing at Cisco Systems and an adviser to EnStratus, though the opinions expressed here are strictly his own. He is a member of the CNET Blog Network and is not an employee of CNET.
One of the most common questions I get from those exploring cloud computing for the first time is "What is the difference between cloud computing and virtualization?" It is an excellent question, as most IT departments are currently exploring the ways in which virtualization enables automation and provisioning agility. Given that cloud is often touted as providing similar benefits, it can be hard to see why the two terms aren't equivalent.
My response to that question requires a bit of explanation, so let's step through the differences between the two concepts.
Virtualization is a technology.
When you run software in a virtual machine, the bits that represent the program's instructions run through a layer of software, the hypervisor, that "pretends" to be dedicated server hardware. The hypervisor is the heart and soul of server virtualization, and it is the enabler of the consolidation and agility values that virtualization brings to the data center.
It is because of the hypervisor that virtualization is the true disruptive technology that enables cloud computing on a massive scale. Hypervisors allow servers to be multi-tenant without rewriting applications to be multi-tenant. Hypervisors allow operating systems and applications to install to a consistent hardware profile, even though they end up running on a variety of actual physical system implementations. Hypervisors also allow servers to be manipulated by software APIs, which greatly simplifies the act of automating IT operations.
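That last point is worth making concrete. The sketch below is a toy, not a real hypervisor API (real ones include libvirt and the vSphere SDK, whose calls differ); every name in it is invented for illustration. What it shows is the operational shift: once servers answer to software, provisioning becomes a loop in a script rather than a trip to the data center.

```python
# Illustrative toy only: a stand-in for a hypervisor management API,
# showing why software-controllable servers simplify IT automation.
# All class and method names here are hypothetical.

class Hypervisor:
    """Minimal stand-in for a hypervisor's management interface."""

    def __init__(self):
        self._vms = {}

    def create_vm(self, name, vcpus, memory_mb):
        # A real API would carve a guest out of physical capacity;
        # here we just record the requested hardware profile.
        self._vms[name] = {"vcpus": vcpus, "memory_mb": memory_mb,
                           "state": "stopped"}

    def start(self, name):
        self._vms[name]["state"] = "running"

    def stop(self, name):
        self._vms[name]["state"] = "stopped"

    def running(self):
        return [n for n, vm in self._vms.items() if vm["state"] == "running"]


# Standing up a small web tier becomes three lines of automation:
hv = Hypervisor()
for i in range(3):
    hv.create_vm(f"web-{i}", vcpus=2, memory_mb=4096)
    hv.start(f"web-{i}")

print(hv.running())  # -> ['web-0', 'web-1', 'web-2']
```

The same consistency cuts the other way, too: because every guest sees the same "hardware," the loop above works identically whatever physical boxes sit underneath.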
Cloud computing is an operations model, not a technology.
When you run an application in a public or private cloud, there is no "cloud layer" that your software must pass through in order to leverage the physical infrastructure available to it. In the vast majority of cases, there is probably some virtualization involved, but the existence of hypervisors clearly does not make your data center resources into a cloud. Nor is the fact that Amazon EC2 uses Xen hypervisors the reason that it is a cloud.
What makes a cloud a cloud is the fact that the physical resources involved are operated to deliver abstracted IT resources on-demand, at scale, and (almost always) in a multi-tenant environment. It is how you use the technologies involved. For the most part, cloud computing uses the same management tools, operating systems, middleware, databases, server platforms, network cabling, storage arrays, and so on, that we have come to know and love over the last several decades.
Specific technologies, of course, gain significant importance in a cloud computing environment, such as policy-driven automation, metering systems, and self-service provisioning portals. However, all of these technologies (with the possible exception of the self-service portal) existed before cloud computing became a much-hyped paradigm.
There is no doubt cloud borrows much from long-established technologies. It is also true that cloud has borrowed from many long-standing operations models, such as mainframe service bureaus. However, the combination of on-demand, at scale, in a multi-tenant infrastructure is largely new for the post-client-server era, and it is the reason why cloud computing is disruptive, rather than just another operations fad.