
The grail of utility computing

Dell VP Jeff Clarke says the contention that IT can be delivered to customers as if it were electricity on tap is a misreading of the tech map.

One of the most popular topics of industry conversation is the idea of "utility computing." Many technology providers are now spending millions of dollars on advertising and marketing to convince the world that IT can somehow be delivered by a single provider in much the same manner as electricity. But the utility analogy is flawed in several ways.

If we examine the broader context, we find customers asking hardware and software suppliers to help them build more flexible, standardized platforms that provide scalable enterprise computing and eventually grids. This request is far different from outsourcing IT to a single company under the guise of a utility.

For starters, the utility analogy assumes every company uses technology in the same way. We know that is not true. Some companies invest most of their IT resources in development projects that accentuate their competitive advantage. Others are slower to adopt new technologies but have customized infrastructures that enable better service for their industry.

In a utility model, the first company would have to accept the same service level as the second--something that would be unacceptable to both. An alternative built from a proprietary mix of services is sure to cost more and to be less able to keep pace with a dynamic business world.

Another assumption is that companies are willing to give up the keys to their technology infrastructure to an outside company. That leap of faith assumes outsiders will know the business as well as the insiders and will be similarly motivated to protect critical data and applications. As with the failed service provider model, we know that most customers simply don't buy this approach.

A third flaw in the argument is the assertion that the economics of IT and utilities are the same. The reality suggests otherwise.

Utilities develop around products that require large capital investments to achieve optimal cost efficiencies--which is why you don't find many companies building their own private power plants. Most enterprises cannot use more than a fraction of the capacity of an optimally sized power plant. It is simply more economical to purchase electricity from an electric utility.

In contrast, IT infrastructure is increasingly being built on industry-standard building blocks that provide more capability at ever-decreasing costs. This means that almost any enterprise can achieve a cost-efficient infrastructure dedicated to its exclusive use.

Customers want a pragmatic approach to utility computing closely aligned with business goals and the realities of available technology. They accept the notion that the assorted technologies associated with the so-called utility computing grid will provide improved flexibility, utilization and overall service. But rather than turn to a third party to buy IT capabilities, they still want to choose what to buy and manage their own IT infrastructure.

These underlying technologies, which include resource virtualization, dynamic resource allocation and policy-driven automation, are not standardized today. That's an issue the industry needs to resolve.
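
To give a rough sense of what policy-driven automation and dynamic resource allocation mean in practice, here is a toy sketch in Python. Everything in it--the Server class, the utilization thresholds, the scale-out and scale-in actions--is hypothetical and illustrative, not a description of any vendor's actual product or API.

# Illustrative sketch only: a toy policy-driven allocator.
# The Server class, thresholds and action names below are hypothetical.
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    cpu_utilization: float  # fraction of capacity in use, 0.0 to 1.0

SCALE_OUT_THRESHOLD = 0.80  # add capacity above 80 percent average load
SCALE_IN_THRESHOLD = 0.30   # reclaim capacity below 30 percent average load

def apply_policy(pool):
    """Pick an action for a server pool based on a simple utilization policy."""
    average = sum(s.cpu_utilization for s in pool) / len(pool)
    if average > SCALE_OUT_THRESHOLD:
        return "scale_out"  # allocate another standardized server to the pool
    if average < SCALE_IN_THRESHOLD and len(pool) > 1:
        return "scale_in"   # return an idle server to the shared resource pool
    return "hold"

if __name__ == "__main__":
    pool = [Server("web-01", 0.91), Server("web-02", 0.88)]
    print(apply_policy(pool))  # prints "scale_out"

The point of the sketch is not the arithmetic but the shape of the idea: the policy is data, and the same standardized building blocks can be reassigned automatically as demand shifts.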

The standardization of the required technologies would provide a double boon. On the one hand, it would let customers meet their particular infrastructure goals in a way that maintains competitive advantage. On the other, it would let them protect critical data the way they prefer--with low-cost, standardized technologies implemented and managed internally.

Customers can start to move to grids by simplifying and consolidating their infrastructures. From my perspective, that means standards-based server and storage systems as well as best practices such as common configurations and software images. They can then add more dynamic capabilities with clustering and standardized systems management. All this would set the stage for even greater automation in the future by building in flexibility, investment protection and total-cost-of-ownership benefits.
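
As an illustration of what common configurations and software images amount to when they are treated as data, here is a small hypothetical Python sketch that checks a fleet of servers against a golden image. The field names and build labels are invented for the example; no real configuration schema is implied.

# Illustrative sketch only: a "golden image" expressed as data,
# with a drift check across a hypothetical fleet.
GOLDEN_IMAGE = {
    "os_image": "standard-build-01",
    "cpu_sockets": 2,
    "memory_gb": 8,
    "mgmt_agent": "enabled",
}

fleet = [
    {"host": "db-01", "os_image": "standard-build-01", "cpu_sockets": 2,
     "memory_gb": 8, "mgmt_agent": "enabled"},
    {"host": "app-07", "os_image": "custom-build-17", "cpu_sockets": 2,
     "memory_gb": 8, "mgmt_agent": "disabled"},
]

def drift(host):
    """List the settings on a host that differ from the golden image."""
    return [key for key, value in GOLDEN_IMAGE.items() if host.get(key) != value]

for h in fleet:
    delta = drift(h)
    print(f"{h['host']}: " + ("conforms" if not delta else "drift in " + ", ".join(delta)))

Standardizing on a single checked configuration like this is what makes the later steps--clustering, systems management and eventually automation--tractable.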

If the emerging technologies for managing flexible data centers continue to follow the traditional path of standardization, I expect to see the realization of the goal of truly scalable enterprise computing in the next three to five years. But such a nirvana of corporate computing requires more than one technology provider. Single sources are not practical for building the types of specialized capabilities depicted in grids. All must be committed to open, standardized technologies if such approaches are to become ubiquitous.