Three lessons from the shipping container

The history of the shipping container has some useful lessons as we move toward virtualized IT and cloud computing.

As human beings, we like analogies. Admittedly, we sometimes overextend them and end up obfuscating rather than clarifying. Such is arguably the case with cloud computing and the electric grid. However, a good analogy can not only make the new and unfamiliar more comprehensible but can even bring fresh insights based on history and past patterns.

(Photo: Shipping containers in Clyde. Credit: photohome_uk/Flickr, CC)

Many of you are probably familiar with the computing-in-shipping-containers theme that Sun popularized and that a variety of vendors have since picked up on in various forms. The idea is that a shipping container is the largest thing that can be easily transported around the world, and therefore it's the largest unit of computing that can be practically prebuilt at the factory.

Thus, in this storyline, the shipping container represents the new increment for large-scale computing infrastructures.

At one level, this shouldn't be taken too literally. Even if an increasing number of high-performance computing sites and high-scale Web sites add servers in this kind of quantity, most aren't buying them actually installed in shipping containers; they're putting them in data centers a rack at a time. And vendors are designing new server form factors to reflect this shift.

However, a discussion with HP in the context of their ProLiant SL launch got me to thinking: Literal shipping containers aside, the evolution of containerization has a lot of interesting lessons for how technologies evolve more broadly.

Existing infrastructure matters. The size of container ships is largely constrained by the width and depth of the Panama and Suez Canals. A "Panamax" container ship is the maximum size that can go through the Panama Canal; a "Suezmax" the largest that can go through the Suez Canal. "Malaccamax" ships have the maximum draught that can traverse the Strait of Malacca. (Currently, there are bulk carriers and supertankers this large but not container ships.) In a totally different context, there's a good argument that the Segway failed, not so much because of price or poor design, but because it wasn't a good fit with either existing sidewalks or roads.

Standards matter. Containers have been around in various forms since at least the 1800s, beginning with the railroads. In the U.S., the container shipping industry's genesis is usually dated to Malcom McLean in 1956. However, for about the next twenty years, many shipping companies used incompatible container sizes and corner fittings. This in turn required different equipment to load and unload, and otherwise made it hard for a complete logistics system to develop. This changed around 1970, when standards for size, corner fittings, and reinforcement were developed (with all the political jostling between the incumbents that you'd expect).

Process matters. At least as important as standards were changes to the labor agreements at major ports. When containers were first introduced, existing labor contracts negated much of their economic benefit by requiring excess dockworkers or otherwise mandating processes that involved more handling than was actually necessary. (For reasons of both labor negotiations and infrastructure, containerization allowed the Port Newark-Elizabeth Marine Terminal to largely eclipse the New York and Brooklyn commercial ports.)

Marc Levinson's "The Box: How the Shipping Container Made the World Smaller and the World Economy Bigger" has lots of detail on the labor and other aspects of shipping containers.

Cloud computing is one area where the story of the shipping container has particular relevance. As with the container, the basic concepts aren't new, but they are being made more relevant to a wider audience by things like network infrastructure.

Standards will matter--at least to get to the point of interoperable clouds (which admittedly may not be as pressing a need as in the case of the electrical grid and the world's logistics system).

And the business processes are, as always, highly relevant to the computing resources that are ultimately there simply to support them. Processes that are rooted in manual approaches that have lots of human back and forth won't see much benefit from new technology no matter how virtualized, service-oriented, or self-service.

About the author

Gordon Haff is Red Hat's cloud evangelist although the opinions expressed here are strictly his own. He's focused on enterprise IT, especially cloud computing. However, Gordon writes about a wide range of topics whether they relate to the way too many hours he spends traveling or his longtime interest in photography.
