Storage predictions for 2011
Storage predictions for 2010 are everywhere. But here you'll find storage predictions for the following year.
My worthy competitors within the ranks of the storage analyst community have been out there for the last few weeks making predictions for 2010. Since I like to differentiate myself (and let's face it, I'm a bit late to the 2010 prognostication party) I've decided instead to get a jump on next year. Here are a couple of predictions for 2011:
Data deduplication (dedupe)--squeezing data objects down to a fraction of their original size--surfaced in 2003. Four years later, dedupe went mainstream as a process embedded within backup. Last year we saw emerging implementations of dedupe for primary storage, archival storage, and wide area networking, while the backup version turned Data Domain into a billion-dollar baby. By this time next year, enterprise IT organizations will be deduping wherever and whenever they can. What's more, storage vendors will want to sell dedupe wherever and whenever they can, too. That may seem counterintuitive: dedupe allows customers to store less, so storage vendors will presumably sell less. But in 2011 storage vendors will want to sell all the dedupe their enterprise customers want. Why?
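For readers who want to see the mechanism behind the buzzword, here's a minimal sketch of how dedupe squeezes data down. It uses fixed-size blocks and SHA-256 content addressing; the block size, function names, and dictionary-backed store are illustrative assumptions, and real products typically use variable-size chunking and far more sophisticated indexes.

```python
import hashlib

BLOCK_SIZE = 4096  # fixed-size chunking for illustration; products often chunk variably

def dedupe_store(data: bytes, store: dict) -> list:
    """Split data into blocks and store each unique block once, keyed by its hash.
    Returns the 'recipe': the ordered list of block hashes that reconstructs the data."""
    recipe = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:      # only previously unseen blocks consume new space
            store[digest] = block
        recipe.append(digest)
    return recipe

def restore(recipe: list, store: dict) -> bytes:
    """Reassemble the original bytes from the recipe of block hashes."""
    return b"".join(store[d] for d in recipe)

store = {}
data = b"A" * 8192 + b"B" * 4096 + b"A" * 8192   # heavy repetition, as in backup streams
recipe = dedupe_store(data, store)
assert restore(recipe, store) == data
print(len(data), sum(len(b) for b in store.values()))  # 20480 bytes in, only 8192 stored
```

The point of the toy example is the ratio: 20 KB of logical data reduces to 8 KB of physically stored blocks, and the more repetitive the data (think nightly full backups), the steeper that reduction gets.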
Dedupe allows enterprise storage managers to continue to push off a problem that has been looming over data centers for years--the eventual need to decide what data to keep and what data to delete, as in gone forever. Since the dawn of the glass-walled data processing center, the default policy on data retention has been to save everything somewhere. Yes, that's an exaggeration, but only a slight one. We know that "save everything" is not a sustainable storage policy. Yet it's become easier for IT organizations to do just that than it is to go through a potentially contentious process of deciding what data gets saved and for how long, and what (or whose) data gets dragged and dropped into the trash bin. Dedupe allows enterprise IT to forgo that confrontational process for at least the next five years. So storage vendors are happy when they can get customers to save their ever-accumulating mountains of data--deduped or otherwise. Storing is good. Deleting is bad. Dedupe everywhere.
GBs of cloud storage for pennies
Last year we saw the old storage service provider model blossom once again. Cloud storage vendors are everywhere. Computing infrastructure by way of the cloud is now so abundantly available and inexpensive that the U.S. federal government may well mandate its use within the bureaucracy it controls as one way to reduce the federal budget deficit. Nothing brings down the price of anything in IT like a few big government contract deals. What's more, standalone cloud service providers will face increasing competition from organizations that carve off a segment of their own IT infrastructure and offer it up to the cloud. Think Amazon EC2 and S3. Now think of that model multiplied by 50 or perhaps 100 as other big enterprise IT shops--on Wall Street, for example--get into the game. Infrastructure as a service in 2010 becomes infrastructure as a commodity in 2011.
And in keeping with commodity status, cloud computing services, including storage, will in 2011 be bought and sold by brokers, much as mainframe computing time was brokered 30 years ago. Imagine: storage arbitrage. Maybe arbitrage is going a bit too far, but the street price for cloud storage in 2011 will fall to a nickel per GB per month or less, and the added bandwidth charge will likely be history as well, making the buying and selling of excess cloud storage capacity relatively easy. And where will the excess capacity come from? See the prediction above.
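To put that nickel-per-GB prediction in concrete terms, here's a quick back-of-the-envelope calculation. The capacity figure is a hypothetical example; the price is the one predicted above.

```python
price_per_gb_month = 0.05        # the predicted "nickel per GB per month"
capacity_gb = 10 * 1024          # a hypothetical 10 TB of cloud capacity

monthly_cost = capacity_gb * price_per_gb_month
annual_cost = monthly_cost * 12
print(f"${monthly_cost:,.2f}/month, ${annual_cost:,.2f}/year")  # $512.00/month, $6,144.00/year
```

At roughly $500 a month for 10 TB--with no separate bandwidth charge--a broker reselling a few percent of margin on someone else's excess capacity starts to look like a plausible business.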
More on 2011 later. Right now I have to clear the decks and get ready for next week's Larry E Show.