'Compute efficiency' and cloud computing

Cloud computing is all about reducing costs and increasing efficiency, right? A 19th-century economist might disagree--and "compute efficiency" may someday be as important to achieve as energy efficiency.

Energy analogies abound with respect to cloud computing and its effect on enterprise IT operations and economics. Nick Carr's seminal work, "The Big Switch," laid out the case for why computing will be subject to many of the same forces that shaped the electricity market in the early 20th century. While I've pointed out that the analogy isn't perfect, I will say there are often interesting parallels worth exploring.

[Image: Earth's night side seen from space. Credit: NASA/Visible Earth]

One example is the ongoing discussion about the effect of cheaper computing on the reduction (or lack thereof) of future IT expenditures. Simon Wardley, a researcher at CSC's Leading Edge Forum, has often pointed out that, while cheaper operating costs and reduced capital requirements should signal a drop in overall IT spending, the truth is quite the opposite.

Wardley points to a 19th-century economist by the name of William Stanley Jevons, who outlined why this won't be so. The so-called Jevons paradox is explained as follows:

In economics, the Jevons paradox, sometimes called the Jevons effect, is the proposition that technological progress that increases the efficiency with which a resource is used tends to increase (rather than decrease) the rate of consumption of that resource.

Author and blogger Andrew McAfee presented an excellent overview of Jevons' 1865 study of the effects of more efficient coal furnaces on the consumption of coal:

As coal burning furnaces become more efficient, for example, British manufacturers will build more of them, and increase total iron production while keeping their total coal bill the same. This greater supply will lower the price of iron, which will stimulate new uses for the metal, which will stimulate demand for more furnaces, which will mean a need for more coal. The end result of this will be, according to Jevons, "the greater number of furnaces will more than make up for the diminished consumption of each."
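
To make the arithmetic concrete, here's a minimal sketch in Python--toy numbers of my own, not Jevons' or McAfee's--of how an efficiency gain can increase total consumption when demand is elastic enough:

```python
# Toy illustration of the Jevons effect (hypothetical numbers, for illustration only).
# Assume demand for iron follows a constant-elasticity curve:
#   quantity demanded = k * price^(-elasticity)
# and that coal is the only input cost, so iron price tracks coal needed per unit.

def total_coal(coal_per_unit, elasticity=1.5, k=100.0, coal_price=1.0):
    """Total coal burned when each unit of iron needs `coal_per_unit` coal."""
    iron_price = coal_per_unit * coal_price           # cheaper furnaces -> cheaper iron
    iron_demand = k * iron_price ** (-elasticity)     # cheaper iron -> more iron demanded
    return iron_demand * coal_per_unit                # coal consumed across all furnaces

before = total_coal(coal_per_unit=2.0)   # old, inefficient furnaces
after = total_coal(coal_per_unit=1.0)    # furnaces twice as efficient

print(f"coal used before: {before:.1f}")   # ~70.7
print(f"coal used after:  {after:.1f}")    # ~100.0 -- efficiency doubled, coal use rose
```

With demand elastic enough (here, elasticity greater than 1), the new uses stimulated by cheaper iron more than offset the savings per furnace--exactly the outcome Jevons described.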

McAfee goes on to point out that the history of lighting--from ancient Babylonia to today--reinforces this effect. Does the fact that it now takes a tiny fraction of the man-hours it once did to produce an hour of light mean we consume less energy on lighting? No.

For me, there is an interesting parallel between computing and lighting, at least with respect to the Jevons paradox. Cloud computing is the latest in a series of innovations that have reduced the overall cost of a "unit of work" (whatever that is) of computing and data storage.

Yet, with each innovation, we have continued to increase the amount of work our computer systems do, and continue to increase spending on new computers. And, as with lighting, we don't always seem to make the most efficient decisions about when and where to use computing.

If you've ever seen one of those "dark side of earth" photographs from space (like the one above), you know we use much more lighting than we absolutely need. Electricity is cheap, so why not use it? Whether it is to increase safety, secure property, or simply make entertainment possible, we light because we can.

I would argue the same can be said about how we use computers in business. The larger the company, the broader the application portfolio, and--quite likely--the less efficient the design of the overall IT environment. Redundant functions, duplicated data, excess processing--these are all rampant among our enterprise IT systems.

Many would also argue that cloud will make this worse before it makes it better. Bernard Golden, CEO of HyperStratus, describes the difficulty cloud brings to capacity planning (and some possible solutions). Chris Reilly, an IT professional at Bechtel, uses the Jevons paradox to explain virtual-machine consumption data from a real-world IT operation.

Perhaps the most cautionary tale for me, however, is what happened to IT operations with the introduction of "cheap" x86 servers in the 1990s. I was doing software development back then, and I cringe thinking of all the times I or a colleague justified increasing the capacity of an application infrastructure with "hey, servers are cheap."

Does anyone remember how "complex" IT was before all those servers arrived? Anyone want to argue IT operations got easier?

Similarly, the availability of cheap compute capacity in the cloud is going to drive inefficient consumption of cloud resources. Yes, each app may optimize its use of resources for its specific need, but thousands of apps will be developed, deployed, and integrated just because it's easy to do so, and "sprawl" will become a fun buzzword again.

The day will come, however, when your CFO will stop asking "how do we reduce the cost of IT infrastructure?" and start asking "how do we reduce our monthly bill?"
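
To see why that question will have teeth, here's some back-of-the-envelope math--my own made-up rates and utilization figures, not any provider's actual pricing--for a fleet of mostly idle virtual machines:

```python
# Hypothetical "monthly bill" arithmetic; rates and utilization are assumptions, not real pricing.

HOURLY_RATE = 0.10       # assumed cost per VM-hour, in dollars
HOURS_PER_MONTH = 730    # average hours in a month

def monthly_bill(vm_count, avg_utilization):
    """Return (total cost, cost attributable to idle capacity) for always-on VMs."""
    total = vm_count * HOURLY_RATE * HOURS_PER_MONTH
    wasted = total * (1 - avg_utilization)   # the bill arrives whether the VMs are busy or not
    return total, wasted

total, wasted = monthly_bill(vm_count=500, avg_utilization=0.15)
print(f"monthly bill: ${total:,.0f}, spent on idle capacity: ${wasted:,.0f}")
# monthly bill: $36,500, spent on idle capacity: $31,025
```

In a capital-expense world, that idle capacity is sunk cost that nobody sees; in a pay-per-use world, it shows up as a line item every single month.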

I want to plant a seed in your mind today, a tiny germ of an idea that I think will grow into a fully mature meme in the years to come. As you utilize cloud computing to meet latent business demands, remember to be "compute efficient."

Follow the model of FedEx and Bechtel: use infrastructure to increase the efficiency of IT systems architectures, not to promote greater inefficiency. That's not to say you shouldn't take chances on innovative ideas that may fail, but that you should understand the cost of compute inefficiency is measured not just in money, but in the pain of complexity in integration and operations.

In fact, if you are an enterprise architect, I think it lands on you to make sure your move to the cloud is responsible; that your company practices compute efficiency and avoids complexity at the business systems level. If you are a developer, it lands on you to think before you build and deploy, and to make sure someone pays attention throughout the entire application lifecycle.

Then again, this has been the mantra of the service-oriented architecture world for over a decade now, hasn't it? Maybe compute efficiency isn't that important. After all, inefficiency is cheap.

About the author

James Urquhart is a field technologist with almost 20 years of experience in distributed-systems development and deployment, focusing on service-oriented architectures, cloud computing, and virtualization. James is a market strategist for cloud computing at Cisco Systems and an adviser to EnStratus, though the opinions expressed here are strictly his own. He is a member of the CNET Blog Network and is not an employee of CNET.
