

Exploring cloud interoperability, part 1

This spring has seen an explosion of activity around cloud-computing interoperability. What exactly is cloud interoperability, and who are the major players working to make this capability a reality?

The announcement by the Distributed Management Task Force (a systems management standards group) that it would form an "incubator" to research and develop interoperability standards for cloud-computing management is just the latest step in an accelerating effort to unify "the cloud." Everyone is getting involved, from virtualization vendors to public cloud providers to the major enterprise IT systems vendors.

But what exactly is cloud interoperability, and what exactly are each of these efforts addressing? Where are the standards going to be created, or (perhaps more importantly) where is the technology going to come from?

I thought it would be useful to give my current understanding of the space, and to give you my 100,000-foot view of the cloud interoperability landscape today.

Cloud interoperability is one of those terms that actually covers more ground than it would appear to at first. However, at its heart, it refers to the ability of customers to use the same artifacts--management tools, server images, etc.--with a variety of cloud-computing providers and platforms. Something like "I want to reuse my Linux image from Amazon on Slicehost without changes" requires standards and agreements to enable Slicehost's platform to read Amazon AMI files.

I usually divide the concepts into three interoperability targets:

  • Application/Service

    • Definition: Some might argue with me that this is technically not cloud interoperability, but I think the cloud forces application developers to reconsider how they couple applications together. Thus, traditional application connectivity and integration issues come into play for me. The key question I ask myself in this category is how I can loosely couple applications and services while maintaining some resiliency to changes in connectivity and location. If I were to live-migrate a portion of a distributed application system, would the cloud provide services to help me maintain the connections and contracts required for the application to remain viable?
    • Examples: There are a tremendous number of "traditional" distributed application standards and technologies at play here. The service-oriented architecture stalwarts SOAP and REST, for example, combined with good ol' DNS, will enable a lot of systems to "rediscover" dependencies as needed. More sophisticated integration comes through innovative services like the cloud integration offerings from Boomi or Cast Iron, which allow the Internet to be seen as an enterprise service bus (ESB) of sorts.
  • Management

    • Definition: This is where a lot of early work is happening in cloud interoperability. What are the APIs by which a management application can control multiple cloud environments--both public and private? This includes how images can be delivered between providers (but not how they are packaged--see below), how servers and/or applications can be started and stopped, how storage can be manipulated, etc.
    • Examples: At this point, the Amazon Web Services APIs look like the de facto standard in the management tools space. However, GoGrid has released its API under a Creative Commons license, and there are at least three competing "open" efforts in this space, including the Cloud Computing Interoperability Forum, the Open Grid Forum's new Open Cloud Computing Interface Working Group, and the aforementioned DMTF incubator.
  • Image/Data

    • Definition: This is the one that most people assume when they say "cloud interoperability." How do you define a virtual server image, or a Java application, or a Customer Relationship Management (CRM) database, such that it can be deployed on another host, often a competitor's host, without modification? For virtual machines, the DMTF's OVF standard is fairly far along in defining not only how to represent a "raw" server image, but also a "live" machine; the latter is necessary to move a workload across any network connection, even the Internet itself, without losing a client connection.
    • Examples: The DMTF's Open Virtualization Format (OVF) standard defines a format for describing a virtual machine in terms that can be interpreted by a variety of virtualization platforms, such as VMware's vSphere and Red Hat's KVM distribution. Interestingly, it looks like Google App Engine is making a concerted effort to keep Java applications developed there portable to a variety of middleware options.
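To make the application/service idea concrete, here is a minimal sketch of DNS-based "rediscovery": a client keys its connections off a stable name rather than a fixed address, so a service that moves only needs its DNS record updated. The function name is my own illustration, not part of any standard mentioned above.

```python
import socket

def resolve_service(hostname: str) -> str:
    """Look up the current IP address for a service by its stable DNS name.

    If the service is migrated to another host and its DNS record is
    updated, clients that re-resolve the name find the new location
    without any change to application code.
    """
    return socket.gethostbyname(hostname)

# The client depends on the name, not on whatever IP backs it today.
ip = resolve_service("localhost")
print(ip)  # -> 127.0.0.1
```

Real deployments layer TTLs, health checks, and service registries on top of this, but the loose-coupling principle is the same.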
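As for the management target, the value of a common API is easiest to see in code. This is a hypothetical sketch, not any actual DMTF, OGF, or CCIF interface: the class and method names are my own, and the "provider" is a toy in-memory stand-in for a real cloud API.

```python
from abc import ABC, abstractmethod

class CloudProvider(ABC):
    """Hypothetical provider-neutral management surface: the sort of
    thing a management-interoperability standard would pin down."""

    @abstractmethod
    def start_server(self, image_id: str) -> str:
        """Launch a server from an image; return its server ID."""

    @abstractmethod
    def stop_server(self, server_id: str) -> None:
        """Stop a running server."""

class FakeProvider(CloudProvider):
    """Toy in-memory implementation standing in for a real cloud."""

    def __init__(self) -> None:
        self._counter = 0
        self._running: dict[str, str] = {}

    def start_server(self, image_id: str) -> str:
        self._counter += 1
        server_id = f"srv-{self._counter}"
        self._running[server_id] = image_id
        return server_id

    def stop_server(self, server_id: str) -> None:
        del self._running[server_id]

# A management tool written against CloudProvider works unchanged with
# any provider that implements the interface, public or private.
cloud = FakeProvider()
sid = cloud.start_server("ami-1234")
print(sid)  # -> srv-1
cloud.stop_server(sid)
```

The point of the standards efforts above is to agree on one such surface so that management tools do not have to be rewritten per provider.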

Of course, the politics of interoperability will be playing out for some time yet. All of this activity is by no means a guarantee that we will see any interoperability in the near future. However, I hold out hope that at least the application/service and management interfaces can be defined and adopted in a few short years. I also believe that image/data portability can be achieved in the near term in many cases.

Image/data mobility, however, is another story. I'll leave that for part 2.