Why virtualization is shaking up IT data centers

Unified computing architectures focused on hosting virtualized workloads will change the way data centers are provisioned--and as a result will change who runs them.

If you begin with the premise that the abstraction of data center resources into software representations (such as virtual machines) decouples IT workloads from the physical systems they rely on, then it makes sense to reconsider the way you buy and build your data centers.

Simply having a uniform (or near-uniform) software layer between the physical infrastructure and your compute workloads means you can begin to assemble a homogeneous physical infrastructure to support a heterogeneous abstract IT environment.

No more custom-tailoring your systems for each application, only to find those systems difficult to alter to either meet the needs of a new workload or the changing needs of the existing one.

No more adding a unique network card to each server to support a shared management plane, just to find it locks you into that management architecture long after something better comes along.

No more trying to figure out which servers have storage area networking and which have local disk... they can all have both, making it much easier to reuse the physical system for workloads that require either one.

This is not a spiel for any one vendor or even for a group of competitive vendors. Instead, focus on what this evolution means to the way you will buy and operate enterprise computing equipment in the coming years. While the highly customized computing systems of our siloed past meant buying "pieces/parts" was the logical way to go, it's been a little like buying a car by getting the engine from Honda, the chassis from Ford, and the wheels from Costco. You could probably build a pretty decent ride, assuming you could get it all to work together.

Furthermore, the standardization of data center "parts" over time made custom assembly increasingly easy to do, keeping this approach cost effective. However, assembling the parts into a whole took significant expertise from a variety of skill sets. Even basic distributed application delivery required some pretty deep knowledge of OS, network, and storage terminology and operations to succeed.

Virtualization, however, is a little like standardizing the driving controls on automobiles, allowing the same human to operate a wide variety of vehicles. In the automotive world, such standardization allows vehicle manufacturers to build complete, operationally ready "systems" to meet the demands of numerous drivers. The vast majority of automobile buyers haven't built their own cars for well over three quarters of a century now.

At the same time, vehicle manufacturers differentiate based on aesthetics (aka "user experience"), features, and price points. This gives us a variety of choices in the vehicle market. I expect the new generation of data center systems to move in the same direction: systems will share standardized ways of hosting workloads, but vary in terms of management features, performance capabilities, and total cost of ownership.

This metaphor is not perfect, however. For example, a large number of customers will buy compute systems that can be assembled into larger-scale systems. In other words, the metaphor breaks because the larger-scale customer is buying a semi truck made up of a whole bunch of Toyota Camrys with their steering, throttle, and brake systems tied together. A converged environment is fractal in a way vehicles never were...components are assembled into "pods" (a term I hate), which then may be assembled into a greater system.

The unit of procurement for large installations, however, will almost certainly be the "pod." In that context, adding capacity one server or disk at a time starts to make little sense. In my opinion, this means that concerns about who is entering the server, storage, or network market matter less than identifying those who are entering the unified computing market.

There is one other interesting side effect of virtualization and automation, which I wrote about in a Strategic News Service Special Letter earlier this week, and re-posted on Infrastructure 2.0. Data center culture is going to be profoundly changed.

The barriers between server, storage, and network administrators are going to quickly blur, and in 10 years or so, data center infrastructure operations will be the purview of a few hardware repair specialists, the software developers themselves, and those in the strategic role of infrastructure architect. Enterprise and solutions architects will design the virtual configurations of application containers, and the linkages between them.

The result is that the role of tactical system administrator, specializing in one data center technology and reacting to trouble tickets as fast as they come in, will fade away. CIOs will expect their administrators not to fix the data center as fast as it breaks, but to determine the policies that allow automation systems to keep the data center operational, as well as to recommend hardware and software that will enforce those policies. In fact, this is already happening in Web applications, as the term "Web operator" increasingly defines a "jack of all trades" type of administrator.
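To make the shift concrete, here is a purely illustrative sketch of what "determining policies rather than chasing tickets" might look like in code. Every name, metric, threshold, and action below is invented for the example; real automation platforms expose far richer policy models than this.

```python
# Illustrative only: a toy "policy-driven" remediation loop.
# All metric names, thresholds, and actions here are hypothetical --
# they are not drawn from any vendor product mentioned in this article.

from dataclasses import dataclass

@dataclass
class Policy:
    """An administrator-authored rule, in place of a reactive trouble ticket."""
    metric: str       # e.g. "cpu_utilization", expressed as 0.0 - 1.0
    threshold: float  # trip point above which the policy fires
    action: str       # remediation the automation system performs

def evaluate(policies, host_metrics):
    """Return the remediation actions the automation layer should take."""
    actions = []
    for p in policies:
        if host_metrics.get(p.metric, 0.0) > p.threshold:
            actions.append(p.action)
    return actions

# The administrator's job shifts to authoring rules like these up front.
policies = [
    Policy("cpu_utilization", 0.85, "migrate_vms_to_spare_capacity"),
    Policy("disk_errors_per_hour", 5, "drain_and_flag_for_repair"),
]

# One overloaded host: the policy, not an on-call admin, decides the response.
print(evaluate(policies, {"cpu_utilization": 0.92, "disk_errors_per_hour": 1}))
# -> ['migrate_vms_to_spare_capacity']
```

The point of the sketch is the division of labor: the human encodes intent once, and the automation system applies it continuously, which is exactly the cultural change described above.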

Those not finding such next-generation systems cost effective will likely migrate to external cloud environments that themselves use these concepts to deliver service. In fact, in terms of pure numbers, that may be the way a majority of companies choose to go. Many cloud experts note, however, that the technicians working in those environments will be developers, not system administrators, which again is a change in most data center cultures. More on that later.

If you haven't thought yet about how virtualization changes the very nature of the data center, I recommend doing so now. Some form of unified computing and/or cloud computing is in your future, and will have the effect of reforming your perceptions of how to build and operate data centers.

You can follow James Urquhart on Twitter.

About the author

    James Urquhart is a field technologist with almost 20 years of experience in distributed-systems development and deployment, focusing on service-oriented architectures, cloud computing, and virtualization. James is a market strategist for cloud computing at Cisco Systems and an adviser to EnStratus, though the opinions expressed here are strictly his own. He is a member of the CNET Blog Network and is not an employee of CNET.
