'Cloudbursting'...or just portable clouds?

A rational debate over whether "cloudbursting" is a reasonable use case to build toward shouldn't obscure the infinitely less debatable desire for cloud interoperability.

Gordon Haff
Gordon Haff is Red Hat's cloud evangelist, although the opinions expressed here are strictly his own. He's focused on enterprise IT, especially cloud computing, but he writes about a wide range of topics, from the way too many hours he spends traveling to his longtime interest in photography.

Talk of "cloudbursting" makes Chris Hoff angry.

"It's used by people to describe a use case in which workloads that run first and foremost within the walled gardens of an enterprise, magically burst forth into public cloud based upon a lack of capacity internally and a plethora of available capacity externally," he wrote recently on his personal blog. Hoff is director of cloud and virtualization solutions of the security technology business unit at Cisco Systems.

More colorful language follows. But the gist is that, if an application passes the hurdles of being able to run in a public cloud--regulatory compliance, acceptable performance, legal implications, and so forth--then why wouldn't you just run the application in a public cloud? Period. After all, there's a sort of default assumption that public clouds like Amazon's are cheaper than in-house IT. The only wrinkle is that they won't always meet your IT governance requirements.

Put another way, private and public clouds have different operational models and it's unclear why you'd want to mix them.

I'm going to (mildly) take issue with the core argument here but, more broadly, suggest that cloudbursting, as the term is typically used, is something of a red herring.

My mild disagreement is this. It's not clear that the economics of running data centers are such that a large public cloud provider can operate modern, standardized, well-managed data centers at markedly lower cost than a Fortune 100 Megacorp does. Does Megacorp have more operational complexities and associated costs? Probably. But that's also sort of beside the point. Unless a given workload can run on a standardized infrastructure, it can't run at an external cloud provider anyway.

If internally hosted capacity, especially excess capacity, can indeed be cheaper (at least in terms of marginal costs) than a public cloud, then cloudbursting may make economic sense for certain applications. Put another way, it may be cheaper to use internal capacity first, but it may not be cheaper to build enough internal capacity to handle all spikes in demand.
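To make that arithmetic concrete, here's a minimal sketch in Python of the kind of cost comparison involved. Every price and demand figure is hypothetical, chosen only to illustrate the shape of the trade-off:

```python
# Hypothetical comparison: build internal capacity for peak demand versus
# build for the base load and "burst" spikes to a public cloud.
# All numbers here are made up for illustration.

INTERNAL_COST = 0.12   # $/server-hour, full cost of owned capacity
CLOUD_COST = 0.10      # $/server-hour, on-demand public cloud rate

BASE_DEMAND = 100      # servers needed steadily, year-round
PEAK_DEMAND = 250      # servers needed during spikes
PEAK_HOURS = 200       # hours per year spent at peak
TOTAL_HOURS = 8760     # hours in a year

# Option A: build out internal capacity to cover the peak.
build_for_peak = PEAK_DEMAND * TOTAL_HOURS * INTERNAL_COST

# Option B: build for the base load; rent the spike from a public cloud.
burst = (BASE_DEMAND * TOTAL_HOURS * INTERNAL_COST
         + (PEAK_DEMAND - BASE_DEMAND) * PEAK_HOURS * CLOUD_COST)

print(f"Build for peak:    ${build_for_peak:,.0f}/year")
print(f"Base + cloudburst: ${burst:,.0f}/year")
```

Under these made-up numbers, owning the peak costs about $262,800 a year while bursting costs about $108,120--even though the public cloud's hourly rate is close to the internal one--because the peak capacity sits idle most of the year.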

That said, I agree with Hoff's basic point. There is some activity in spot markets for computing capacity--such as Enomaly's SpotCloud. However, as was the case during the spike of interest in peer-to-peer computing about ten years ago (think SETI@Home), the hurdles to mainstream adoption are considerable. Standards for interoperability are just the beginning. There are also all manner of trust issues. And then there are simple matters of efficiency. Computing isn't just about cycles; it needs associated data and moving that around takes bandwidth and time.
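That last point deserves a number or two. As a back-of-the-envelope illustration (assuming an idealized, fully utilized link with no protocol overhead), moving even a modest dataset to wherever the spare cycles happen to be is slow:

```python
# Rough data-transfer times for relocating a workload's associated data.
# Assumes an ideal, fully utilized link; real transfers would be slower.

DATASET_BYTES = 10**12  # 1 TB of associated data

for link_mbps in (100, 1_000, 10_000):
    seconds = DATASET_BYTES * 8 / (link_mbps * 10**6)
    print(f"{link_mbps:>6} Mbps link: {seconds / 3600:5.1f} hours")
```

A terabyte takes more than 22 hours on a 100 Mbps link, and still over two hours at a gigabit--a long time to wait if the point was to chase a momentarily cheaper spot price.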

Debates over cloudbursting, though, obscure a broader point.

Cloudbursting debates are really about the dynamic shifting of workloads. Indeed, in their more fevered forms, they suggest moving applications from one cloud to another in response to real-time market pricing. The measured response to this sort of vision is appropriately cool--not because it isn't a reasonable rallying point to set our sights on, and even architect for, but because it's not a practical, credible near- or even mid-term objective.

What is both useful and achievable in the near term, however, is the idea of portability across clouds. Perhaps not dynamic portability--at least in practice--but the ability to deploy an application on one cloud, or in a virtualized data center, and then move that application to a different cloud at some future point. In the context of enterprise applications, this includes things like carrying a software vendor's certifications over from one cloud to another.

Voltaire supposedly once said "Le mieux est l'ennemi du bien"--the better is the enemy of the good. Over the longer term, I'd argue that the better should indeed be the goal, with cloud interoperability an important part of that advance. For nearer and more interesting time horizons, though, I think we do ourselves a disservice by obsessing over "automagical" workload shifting--when what we really care about is the ability to move from one place to another if a vendor isn't meeting our requirements or is trying to lock us in.