There has been significant discussion over the short life of the term "cloud computing" about how little it differs from concepts like managed hosting and ASPs. And there is some truth to these observations; if you really look closely, what are the key differences between EC2 and a more traditional managed hosting provider? Some would say multi-tenancy, self-service and pay-per-use (including billing and elastic capacity). With specific regard to EC2, I would tend to agree.
(I would also hasten to point out that Amazon provides some very PaaS-like services in conjunction with EC2, such as Simple Queue Service (SQS) and SimpleDB.)
However, if this is the great "paradigm shift" of cloud computing, as offered by smart people like Krishnan Subramanian of CloudAve, then let me offer that these basic extensions to existing hosting models will be peanuts next to a shift that will create one of the most significant market opportunities since the explosive growth of the Internet itself. I'm not dealing in hyperbole here; I honestly believe that there is a clear evolutionary step to the cloud occurring well after stand-alone self-service clouds are mainstream (which they arguably are today) that will inspire massive innovation.
That game-changing technology disruption will be the federation of disparate clouds, and the distribution of software, data and billing across commercial and private cloud boundaries. In other words, the introduction of secure, reliable workload mobility in an extension of the Internet itself--an "Intercloud", so to speak.
Workload mobility is one of the key innovations of the virtual server world (though it borrowed heavily from its technical ancestry). Live migration technologies like VMware VMotion allow system administrators to move running workloads from one machine to another, but today they are generally limited to a single subnet.
However, expand the reach of live migration to cross not only subnet boundaries but even organizational boundaries, and you get an interesting new world of possibilities. Some of these have been anticipated for some time, but as I talk to more people about what could happen here, new use cases keep cropping up. For example:
- Follow the Sun: Move workloads to where they are being most utilized at a given time, usually the "day" side of the planet.
- Follow the Moon: Move workloads to where power is cheapest, usually the "night" side of the planet.
- Follow the Law: Move workloads to where the legal and regulatory environment is optimal for the task being executed or the data being stored.
- Optimize Latency: Move workloads to where network routing is optimized for a system of components.
- Optimize Utilization: Move workloads to where the optimal use of compute and/or storage utilization is achieved.
- Optimize Cost: Move workloads to where the cost of computing is as cheap as possible for the workload at hand.
Beyond these, workload mobility will no doubt trigger dozens of ideas for entrepreneurs and established service providers alike; I won't claim to have thought through all of the possibilities. The truth is, though, we will probably end up creating complex assemblies of basic policies, mixing and matching as required to meet service levels.
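To make the "mixing and matching" concrete, here is a minimal sketch of how basic placement policies might be composed into a weighted score and used to pick a destination cloud. Every region, metric and weight below is invented purely for illustration; no real provider API is implied.

```python
# Hypothetical sketch: composing basic placement policies ("follow the moon",
# "optimize latency", etc.) into one weighted scoring function, then choosing
# the best cloud region for a workload. All numbers are made up.

def score_region(region, weights):
    """Combine normalized per-policy scores (higher is better) by weight."""
    return sum(weights[policy] * region[policy] for policy in weights)

def place_workload(regions, weights):
    """Return the name of the region with the highest composite score."""
    return max(regions, key=lambda name: score_region(regions[name], weights))

# Each policy score is pre-normalized to [0, 1]; e.g. "cost" means cheapness.
regions = {
    "us-east":  {"cost": 0.6, "latency": 0.9, "utilization": 0.5},
    "eu-west":  {"cost": 0.4, "latency": 0.7, "utilization": 0.8},
    "ap-night": {"cost": 0.9, "latency": 0.3, "utilization": 0.6},
}

# A "follow the moon" mix: weight cheap night-side power heavily.
follow_the_moon = {"cost": 0.7, "latency": 0.1, "utilization": 0.2}
print(place_workload(regions, follow_the_moon))  # -> ap-night
```

Swapping in a different weight vector (say, latency-heavy for a "follow the sun" mix) changes the placement without touching the mechanism, which is the point of assembling policies rather than hard-coding them.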
To get to this level of workload mobility, four key areas need to be addressed:
- The mechanism behind workload mobility itself. We've got a great head start from the likes of VMware VMotion, but the infrastructure needs to become more migration-aware for this to happen ubiquitously. For example, how do you handle what I like to call impedance mismatches between different infrastructure providers, such as one using AMIs and another using KVM guest images?
- Integrated and ubiquitous security and control mechanisms. Security for the obvious reasons, but preserving the sense of control is just as big a part of the workload mobility story. Workload owners should always feel as if their systems are running in their own data center, regardless of where the workload is actually running--though they should control that placement, too.
- Service Level Automation. This is a critical aspect of trust, perhaps the most elusive enterprise requirement in the cloud today. Service levels must be defined, at least in part, in terms that automation systems can use to tune elasticity, availability and resource consumption. That automation, in turn, guarantees within reason that customer service levels are continuously met. Without service level automation across organizational boundaries, it will be impossible to trust systems distributed among multiple providers.
- Integration and interoperability protocols and services. We long ago left the world in which production software could be moved around in units called "applications". Almost any system today comprises multiple end-user applications and back-end services that must coordinate to complete their respective functions. That does not even take into account the management backplane supporting those complex systems, which must also coordinate across the same organizational boundaries. All of this has to be available on the shared network in which the workload is mobile. If we want workloads to be mobile across the Internet, then these capabilities must exist as protocols or services on the Internet itself.
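To illustrate the service level automation requirement above, here is a minimal sketch of a service level expressed in machine-readable terms, plus a trivial control loop that turns observed breaches into elasticity actions. The metric names, thresholds and action labels are all invented for illustration; a real system would act through provider APIs rather than returning strings.

```python
# Hypothetical sketch: a service level definition an automation system can
# act on, and a naive loop that maps SLA breaches to corrective actions.

sla = {
    "max_latency_ms": 200,      # response-time ceiling
    "min_availability": 0.999,  # uptime floor
    "max_cpu_utilization": 0.80,
}

def plan_actions(metrics, sla):
    """Compare observed metrics to SLA targets and propose actions."""
    actions = []
    if metrics["latency_ms"] > sla["max_latency_ms"]:
        actions.append("scale_out")         # add capacity to cut latency
    if metrics["availability"] < sla["min_availability"]:
        actions.append("add_replica")       # improve redundancy
    if metrics["cpu_utilization"] > sla["max_cpu_utilization"]:
        actions.append("migrate_workload")  # move to a less loaded provider
    return actions

observed = {"latency_ms": 250, "availability": 0.9995, "cpu_utilization": 0.85}
print(plan_actions(observed, sla))  # -> ['scale_out', 'migrate_workload']
```

The interesting part for the Intercloud is that the same machine-readable SLA would have to travel with the workload, so that whichever provider currently hosts it can run this loop on the owner's behalf.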
The final step of the cloud computing maturity model requires that these four areas be addressed. There is some debate about which part of the compute landscape these services should be delivered from, and how the various "impedance mismatches" of disparate cloud platforms will be handled (or whether they can be handled at all). Of course, I believe that the network will play a major role, but others see options in pure server software or virtual appliance implementations.
Any way you cut it, though, if you think self-service changed computing and created opportunities, wait until you see the "Intercloud".