If we examine the broader context, we find customers asking hardware and software suppliers to help them build more flexible, standardized platforms that provide scalable enterprise computing and eventually grids. This request is far different from outsourcing IT to a single company under the guise of a utility.
For starters, the utility analogy assumes every company uses technology in the same way. We know that is not true. Some companies invest most of their IT resources in development projects that accentuate their competitive advantage. Others are slower to adopt new technologies but have customized infrastructures that enable better service for their industry.
In a utility model, the first company would have to accept the same service level as the second--something that would be unacceptable to both. An alternative composed of a proprietary mix of services would surely cost more and be less able to change with the dynamic business world.
Another assumption is that companies are willing to give up the keys to their technology infrastructure to an outside company. That leap of faith assumes outsiders will know the business as well as the insiders and will be similarly motivated to protect critical data and applications. As with the failed service provider model, we know that most customers simply don't buy this approach.
A third flaw in the argument is the assertion that the economics of IT and utilities are the same. The reality suggests otherwise.
Utilities develop around products that require large capital investments to achieve optimal cost-efficiencies, which is why you don't find many companies building their own private power plants. Most enterprises cannot use more than a fraction of the capacity of an optimal-size power plant, so it is simply more economical to purchase electricity from an electric utility.
Customers want a pragmatic approach to utility computing closely aligned with business goals and the realities of available technology. They accept the notion that the assorted technologies associated with the so-called utility computing grid will provide improved flexibility, utilization and overall service. But rather than turn to a third party to buy IT capabilities, they still want to choose what to buy and manage their own IT infrastructure.
These underlying technologies, which include resource virtualization, dynamic resource allocation and policy-driven automation, are not standardized today. That's an issue the industry needs to resolve.
The standardization of the required technologies would provide a double boon. On the one hand, it would let customers meet their particular infrastructure goals in a way that maintains competitive advantage. On the other, it would let them protect critical data as they see fit--via low-cost, standardized technologies implemented and managed internally.
If the emerging technologies for managing flexible data centers continue to follow the traditional path of standardization, I expect the goal of truly scalable enterprise computing to be realized in the next three to five years. But such a nirvana of corporate computing requires more than one technology provider. Single sources are not practical for building the types of specialized capabilities envisioned for grids. All providers must be committed to open, standardized technologies if such approaches are to become ubiquitous.