MOUNTAIN VIEW, Calif.--For me, one of the more interesting discussion threads at the CloudConnect event I'm attending here is that of lock-in.
If you move an application or write a new one targeting a specific cloud provider, are you pretty much committed to that provider, for better or worse, through good service and through bad, until your application becomes obsolete or the provider closes shop?
The answer is not a simple one. And it's made no easier by, on the one hand, vendors blithely throwing around terms like open, standard, and interoperable. And, on the other, by calls for an interoperability nirvana that's hard to see ever existing absent software becoming an undifferentiated commodity.
One of the problems with this discussion is that "interoperability" or "portability" in the "cloud" covers such a wide gamut that it's very difficult to make meaningful general statements. IBM's Bob Sutor noted in a panel on the topic that there are different taxonomies for interoperability. For example, you can migrate content, migrate processes, migrate code, or migrate machines.
And, as you move beyond the automagical movement of running applications from one cloud to another, there's a lot of vendor gloss around interoperability issues. Any application can (more or less) be moved from one platform to another. It's a "simple matter of programming," as the joke goes--which is to say that there's nothing necessarily trivial or quick about something as simple as moving from one vendor's Linux distribution to another (or even between versions) once real-world requirements like testing and qualification come into play.
Thus claims that "all" that's needed to move off one vendor's cloud platform to another is to remap some of the value-added application programming interfaces (such as to a specialized database) or to integrate with a different management framework ring a bit hollow. IT shops stick with legacy platforms today even when the porting requirements to move to something new are far less onerous than that.
If there is one sine qua non though, it's the ability to extract the data that you own. On that, there is broad agreement, even if there are practical issues over formats and, sometimes, what data is "yours," exactly.
The discussion is further complicated because cloud computing takes place at several levels.
Roughly, the emerging consensus is that we can throw it into three big buckets: complete applications such as a hosted Microsoft Exchange or SharePoint service; a developer platform such as Google App Engine or Microsoft Azure; or something that's more akin to a bare-bones operating system, storage, or a database, such as Amazon Web Services.
The common shorthand for these three levels is software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS). (Hardware as a service is sometimes used for this last category, but given the role now played by virtualization here, infrastructure seems the more accurate term.)
And, as Alistair Croll of Bitcurrent noted on the panel, the higher you move up the stack--the more abstractions you make use of above the basic infrastructure--the more lock-in you tend to inherit. The fact is that those abstractions (such as mechanisms to handle scaling for you automatically) save you work, but they also tend to be platform-specific.
As Alistair put it: "Developers are lazy in a good way. That's another word for optimization. Any cloud is going to evolve a set of value-add services. And that's the devil's payment. You have to be conscious of the lock-in you inherit."
But this is nothing new. It's not specific to cloud computing. It's not even specific to proprietary software. Want to move from a PostgreSQL to a MySQL database? Want to be renegade, switching from Linux to FreeBSD? Want to switch open-source content management systems? No one is keeping you. It's just a simple matter of programming and a little thing called time.