Workload mobility and the next Internet upgrade
Moving workloads across data center and organizational boundaries requires a level of bandwidth few appreciate today, but service providers that invest wisely will profit greatly from the "Intercloud."
The concept of workload mobility came up again recently in a discussion about the network capabilities required to achieve that vision. My colleague Doug Gourlay recently posted several observations about what networking in the cloud represents, and what it doesn't. In that discussion, he makes the following observation about the role of bandwidth in moving compute workloads around the Internet:
It's not all big pipes. I know, I wish the world were all 10Gb Ethernet too. I also wish I had 100Gb here today so we didn't have to focus so much on elegant link-bundling technologies. (this is a major area of network improvement in general in my opinion by the way, and may be worth another blog post on how to improve these...) Video is neat - it drives 5-10Mb/s, 15Mb/s for a big Telepresence. But moving a virtual-machine from one place to another may move up to 40GB of data, or 320Gb (sic). This would mean that in the course of an hour each VM movement is equal to about six concurrent TelePresence sessions in network demand. Compound this with VM sprawl, Dynamic Resource Scheduling, and data center consolidation and yes, there will be a heck of a lot of data moving between servers, between data centers, and with cloud computing from enterprises to service providers.
More than bandwidth though, which we can make the case for, how will the data move? Does the Internet itself have enough bandwidth and traffic management to support this data movement? And how will the addressing statefully move from one autonomous system to another? How will the security policy bound to a particular object (re: VM) stay consistent and coherent as the VM moves across the network and from one network to another. This is the longer term problem much more so than just the bandwidth issue, and one that is not currently being served by the hype-machines.
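The arithmetic behind the "six concurrent TelePresence sessions" claim is easy to verify. Here is a quick sanity check (a sketch, assuming decimal gigabytes, so 40 GB = 320 Gb on the wire, and the 15 Mb/s figure quoted for a big TelePresence session):

```python
# Back-of-envelope check of the bandwidth figures quoted above.
VM_DATA_GB = 40                        # data moved for one VM migration (gigabytes)
vm_gigabits = VM_DATA_GB * 8           # 40 GB ~= 320 Gb

HOUR_S = 3600
# Sustained rate needed to move the VM's data within one hour, in Mb/s.
vm_rate_mbps = vm_gigabits * 1000 / HOUR_S

TELEPRESENCE_MBPS = 15                 # quoted rate for a big TelePresence session
equivalent_sessions = vm_rate_mbps / TELEPRESENCE_MBPS

print(f"Sustained rate: {vm_rate_mbps:.1f} Mb/s")              # ~88.9 Mb/s
print(f"TelePresence equivalents: {equivalent_sessions:.1f}")  # ~5.9, i.e. about six
```

Roughly 89 Mb/s sustained for a single VM move, and that is before you compound it with VM sprawl, resource scheduling, and consolidation as the quote describes.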
His observation about the immense bandwidth required for an open cloud with free workload mobility is a very interesting one. The live migration you know today typically avoids moving bulk data by leveraging shared network storage, which stays attached to a given VM regardless of which host it lands on.
The future is a bit different, however.
VMware, Cisco, and others are working on technologies to allow workloads to move across data center and even organizational boundaries. Moving the data, virtual servers, software, policies, and everything else required to make mobility work from both a trust and a resiliency perspective in that scenario demands immense bandwidth.
Move workloads frequently (for use cases like "Follow the Sun," "Follow the Moon," or "Follow the Law") and the bandwidth requirements are far greater than many imagine today.
So why would Internet service providers be excited about this future? What on earth would propel them to invest heavily in infrastructure they have argued is already too expensive to upgrade to meet today's demand?
The answer is "opportunity." Workload mobility, secure federated clouds, new collaboration scenarios, and the like create the need for new network services, which in turn can generate new revenue streams, attract new customers, and create new market disruptions. As I've said before, it's a ways out yet, but workload mobility will be one of the next major disruptions in the information technology world.
Those I speak with on a regular basis about this concept have a name for it. It's a name that may take some getting used to, but a name that clearly reflects the parallels between workload mobility and data communications; parallels between this new world and what came to be known as "the Internet."
We call it "the Intercloud."