
Hedge your bets in cloud computing

The future role of cloud computing is in many ways unpredictable and ever changing. What balance of traditional infrastructure, private clouds, and public cloud services will your IT department consume in the next three years? Five years? The trick is to hedge your bets wherever you can.

James Urquhart
James Urquhart is a field technologist with almost 20 years of experience in distributed-systems development and deployment, focusing on service-oriented architectures, cloud computing, and virtualization. James is a market strategist for cloud computing at Cisco Systems and an adviser to EnStratus, though the opinions expressed here are strictly his own. He is a member of the CNET Blog Network and is not an employee of CNET.

Debates flare up all the time about what is the "right" way to consume cloud computing. Public cloud providers push for ditching your data center in favor of pay-per-use services delivered over the network. Many hardware vendors claim that the enterprise's road to cloud computing is through the operation of private clouds. Still others argue that the whole concept is a crock of...well, you get the idea.

(Photo credit: Flickr/Adrian Sampson)

Which argument do you buy? How should you plan to deploy and operate your IT resources over the next three, five, even 10 years? In whose basket should you place your eggs?

In part, your answer will depend on who you are, what your role is in IT delivery or consumption, and on well-known factors such as sensitivity to data loss, regulatory requirements, and the maturity of your IT organization.

I would argue, however, that if you have existing IT investment, or you have requirements that push beyond the limits of today's cloud computing technology or business models, you should consider not choosing at all.

My argument starts with the simple fact that there are so many variables in the cloud computing equation that no one can predict how the transition to cloud computing will take place--if it happens at all. (I certainly believe there will be a slow but inevitable change to IT, eventually dominated by public cloud services.)

If the public cloud providers are correct, and everything IT will be a public utility at some point, then predicting the next decade or two of transition is next to impossible.

If the vendors are right, and you must implement cloud in your existing facilities before you can understand how to move mission-critical systems to public clouds, then when and how to do so is itself a complicated question, one that probably differs for each set of business requirements.


If the "cloud is a fad" crowd is right, then implementing any cloud experiments at all will be wasted investment.

The odds are almost certain that the actual result for most, if not all, businesses will be somewhere in the mix of traditional data center, private cloud, and public cloud environments. Think of it as landing somewhere in the "Hybrid IT Triangle."

So how does one do this? How does a modern IT organization formally change its ways to be flexible to the uncertain future of its operations model?

The simplest way to do this is to embrace a few basic principles, many of which have been known for decades, and some of which are being made painfully clear in the cloud computing model:

  1. Focus on the application, not the server. In my earlier DevOps series, I laid out an argument for why virtualization and cloud are forcing both developers and operations teams to change their "unit of deployment" from the bare metal server to the application itself. This is a key concept, as you can manage the application at all three points of the triangle above.

    What does that look like? Well, virtualization makes it much easier to do, as you can build VM images for a single application, or a single application partition or service. At that point, it's not the VM that's the unit being "operated," as much as it's the file system or even the application code itself running in that VM.

    Thus, if you want to move the application from an internal VMware-based environment to a Xen-based cloud provider, your challenge is simply to get that same file system, or even just the application itself, running in the new infrastructure. Is this natural for most IT organizations today? No, but learning to think this way has huge benefits in a hybrid IT environment. (The first sketch after this list shows what an application-centric deployment might look like.)

  2. Decouple payload operations from infrastructure operations. Another key argument of the DevOps series is that cloud is forcing a change in operations roles, from the traditional "server, network, and storage" silos to more horizontal "applications" and "infrastructure" designations.

    Infrastructure operators run the "hardscape" (servers, storage devices, switches, etc.) that makes up the data center, campus networks, and so on. They also manage the software systems that automate and monitor resource consumption, such as virtualization platforms and IT management systems.

    Application operators focus much more on the code, data, and connectivity required to deliver software functionality to end users or other application systems. These are the men and women who must choose where to deploy applications, and how to operate them once they are deployed. Because public cloud systems don't give them access to the bare metal, they have to design processes that don't depend on access to that "hardscape." (The second sketch after this list illustrates this boundary.)

  3. Choose management tools that allow you to operate in all three options. There are many management and governance options available today that enable deploying, managing, and monitoring applications in virtualized data centers, private clouds, and public clouds. Use them.

    One of the biggest concerns about the cloud today is so-called "lock-in." In the cloud, lock-in has an especially insidious side; if a cloud vendor goes out of business, your infrastructure may disappear. One way to mitigate this risk is to choose an application-centric (or, at the very least, VM-centric) management tool or service that will allow you to take your data and applications elsewhere--quickly--should such an event take place.

    As cool as true portability between clouds and between virtualization platforms would be, relying on a management environment that can engineer solutions to portability is a much better transitional strategy. It's especially good if these tools or services help with things like backups, data synchronization, and disaster avoidance. (The final sketch below shows one such provider-agnostic layer.)
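
To make these principles concrete, here are three short sketches in Python. First, the application as the unit of deployment. Every name below (AppManifest, deploy, the target strings) is hypothetical, invented for illustration rather than taken from any vendor's API:

```python
# A minimal sketch of application-centric deployment. All names here
# are hypothetical illustration, not any particular product's API.
from dataclasses import dataclass, field

@dataclass
class AppManifest:
    """Describes the application itself, not the server it runs on."""
    name: str
    artifact: str    # e.g., a tarball or image of the app's file system
    runtime: str     # e.g., "python3.11"
    env: dict = field(default_factory=dict)

def deploy(app: AppManifest, target: str) -> None:
    """Deploy the same application to any corner of the hybrid triangle.

    The plumbing differs per target, but the unit being operated --
    the application -- never changes.
    """
    if target == "datacenter":
        print(f"Building a VMware image for {app.name}...")
    elif target == "private-cloud":
        print(f"Scheduling {app.name} on internal cloud capacity...")
    elif target == "public-cloud":
        print(f"Pushing {app.artifact} to a Xen-based public provider...")
    else:
        raise ValueError(f"unknown target: {target}")

deploy(AppManifest("storefront", "storefront-1.2.tgz", "python3.11"), "public-cloud")
```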
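
Second, the applications/infrastructure boundary from principle 2. The interface below is again a hypothetical sketch of the idea, not a real management product:

```python
# A sketch of the applications/infrastructure role boundary. The point
# is that application operators consume capacity through an interface
# and never touch the "hardscape" directly.
from abc import ABC, abstractmethod

class InfrastructureOperations(ABC):
    """What infrastructure operators expose: capacity, not hardware."""
    @abstractmethod
    def request_capacity(self, cpus: int, memory_gb: int) -> str:
        """Return a handle to provisioned capacity."""

class PrivateCloud(InfrastructureOperations):
    def request_capacity(self, cpus: int, memory_gb: int) -> str:
        return f"vm-{cpus}cpu-{memory_gb}gb"  # stand-in for real provisioning

class ApplicationOperations:
    """Application operators work only through the interface above, so
    the same process runs against private or public clouds."""
    def __init__(self, infra: InfrastructureOperations):
        self.infra = infra

    def launch(self, app_name: str) -> None:
        handle = self.infra.request_capacity(cpus=2, memory_gb=4)
        print(f"Deploying {app_name} to {handle}; no bare-metal access needed.")

ApplicationOperations(PrivateCloud()).launch("storefront")
```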
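
Finally, cross-environment management from principle 3. Apache Libcloud is one real open-source example of a provider-abstraction layer; the credentials below are placeholders, and exact parameters vary by provider:

```python
# A rough sketch using Apache Libcloud, an open-source library that
# abstracts many cloud providers behind one API. Credentials are
# placeholders; the provider constant is the only thing that changes
# when you move between clouds.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

def list_my_nodes(provider, access_key, secret_key):
    """The same code path works for any provider Libcloud supports."""
    driver = get_driver(provider)(access_key, secret_key)
    return driver.list_nodes()

# Swapping clouds is a one-constant change -- which is the point:
for node in list_my_nodes(Provider.EC2, "ACCESS_KEY", "SECRET_KEY"):
    print(node.name, node.state)
```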

Now, the maturity of the tools and services on the market today might not make these strategies easy to implement, but I would argue that beginning the cultural and procedural changes behind these recommendations today will make your future life in a hybrid IT landscape much easier to deal with. Betting heavily on any one outcome, on the other hand, is a great way to miss out on the utility of the others.

(Disclaimer: I do indeed work for a systems vendor, Cisco Systems. However, these are my views, and not necessarily Cisco's.)