Is the cloud computing maturity model unnecessary or simply misunderstood?
Roger Smith challenges the need for a maturity model for cloud computing. His points are worth considering, but in making his case he clearly misunderstands the tenor and theme of the model I introduced.
In a recent post titled "Cloud maturity models don't make sense," Roger Smith of InformationWeek's Analytics Weblog takes umbrage at my recent post. In his post, Roger quotes my model and the "cloud adoption model" of Jake Sorofman, then uses a post by Ron Schmelzer--in which Ron debunks an earlier SOA maturity model--to express a strong objection to any cloud maturity model.
Just for review, here is the graphic from that post:
Another way to look at the model is this: is it possible to have an open cloud market not formed from competing compute utilities, themselves profiting from the efficiencies of automating the management of abstract components in an optimized--or consolidated--physical infrastructure?
Unfortunately, I think Roger completely misunderstood the tenor and theme of the post. I think this core argument from his post best illustrates the problem:
It strikes me as more nebulous than Nirvana, if the highest level of a Cloud Maturity Model is a measure of an organization's overall skills, policies, consistency and practices when developing cloud applications -- but your measurement isn't capable of rating specific projects. If your Cloud Maturity Model does let you rate specific projects, how do you factor in projects where cloud-less local storage (or even cheaper, slower cloud storage) makes the most sense? Case in point is the real-world example I recently wrote about: a cloud application that used both solid-state drives (SSDs) and hard-disk drives (HDDs) in a complementary Video on Demand application, where "hot content" that needed the fastest possible IOPS (streaming new releases or the most popular movies) relied on performance-optimized SSDs, while "cold content" that needed the largest possible capacity for storing thousands of classic movies used capacity-optimized HDDs.
That last sentence is exactly where I think he applied a few incorrect assumptions to his analysis. The model I presented was not in any way prescriptive on technology or even process. It just represented abilities to be gained by at least one project in the IT organization before the next step could be most successfully considered.
Unless I am completely missing something, what disk drives you use to achieve automation, utility, or even market is completely up to the individual. Perhaps he was thinking of Jake's model when he wrote that.
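For what it's worth, the hot/cold tiering Roger describes is just a placement policy, and nothing in a maturity (or dependency) model dictates it one way or the other. Here is a minimal Python sketch of such a policy; the function name, the request-rate signal, and the threshold are all hypothetical, chosen purely for illustration:

```python
def pick_tier(requests_per_hour, hot_threshold=100):
    """Place content on the fast tier if demand is high, else the cheap tier.

    Illustrative only: 'requests_per_hour' stands in for whatever popularity
    signal a real VOD system would track, and 'hot_threshold' is arbitrary.
    """
    return "ssd" if requests_per_hour >= hot_threshold else "hdd"


# A new release streams heavily; a classic title rarely does.
catalog = {"new_release": 5000, "classic_film": 12}
placement = {title: pick_tier(rate) for title, rate in catalog.items()}
```

The point is that the policy lives entirely at the project level; whether the organization is at "consolidation" or "market" on the model says nothing about which tier any given title lands on.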
It is reasonable to believe that some organizations will achieve at least automation with some runbook automation triggering shell scripts (though I would recommend such a project look closely at Puppet as an alternative approach). Others will jump from abstraction to utility by applying technology from the likes of Cassatt, 3TERA, Appistry, Enomaly, et al. Heck, , when it arrives, should jump-start a lot of people from consolidation to automation.
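The runbook-triggering-scripts style of automation can be reduced to a decision rule plus an action, something like the following Python sketch. The function name, the CPU signal, and the 80 percent threshold are assumptions for illustration; a real runbook step would shell out to a provisioning script rather than return a string:

```python
def runbook_step(cpu_percent, max_cpu=80.0):
    """Decide which action a runbook would trigger for one host.

    Illustrative only: a production version would invoke a shell script
    or a configuration-management tool (e.g. Puppet) instead of
    returning an action name.
    """
    if cpu_percent > max_cpu:
        return "provision_extra_node"
    return "no_action"


# Evaluate a fleet of hosts against the rule.
fleet = {"web-1": 95.0, "web-2": 42.5}
actions = {host: runbook_step(load) for host, load in fleet.items()}
```

However crude, a rule like this is enough to count as "automation" in the model's sense: the organization has at least one project where operational response no longer requires a human at the keyboard.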
I think the real problem Roger has is with my use of the term "maturity model," not with the model itself. So, fine, call it a "dependency model," or an "ability model," or an "evolutionary path." Whatever. If there is a formal term I should use in place of maturity model, I welcome anyone to leave it in the comments below.
In the meantime, I stand by the model itself, and the points I made about it in the accompanying post (as do some others, I might add). I welcome criticism of the model, and any comments you may have here or on that post. But understand that I used "maturity model" simply to convey an organization "growing into" the cloud, not as a prescriptive grading system for the challenging, innovative years to come.