The yin and yang of system specialization

Storage and networking devices increasingly leverage and repurpose server technology. But the trend towards generalization is more complicated than that.

Gordon Haff

Systems are getting more general-purpose. At least in terms of units sold, two-socket x86 servers dominate the landscape.


And it's more than just servers. For example, on Tuesday Vyatta announced a new series of network appliances, the Vyatta 3500. These systems, like the other appliances Vyatta sells, combine standard off-the-shelf x86 server hardware with an integrated software subscription that provides networking functions such as firewall, VPN, IP address management, administration, and diagnostics. Vyatta pitches its appliances as a much lower-priced alternative to dedicated networking hardware from the likes of Cisco.
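
To make that concrete, here's the flavor of configuration such an appliance exposes. This is a minimal sketch in Vyatta-style CLI syntax; the interface names, addresses, and rule numbers are illustrative, not taken from the 3500 itself:

    # Assign addresses to the WAN- and LAN-facing ports
    set interfaces ethernet eth0 address 203.0.113.2/24
    set interfaces ethernet eth1 address 192.168.1.1/24

    # Define a stateful firewall policy: drop inbound traffic
    # except replies to connections initiated from inside
    set firewall name WAN-IN default-action drop
    set firewall name WAN-IN rule 10 action accept
    set firewall name WAN-IN rule 10 state established enable

    # Apply the policy to traffic arriving on the WAN port
    set interfaces ethernet eth0 firewall in name WAN-IN
    commit

The point isn't the specific commands; it's that the entire feature set lives in software, so the same commodity box can be a firewall, a VPN endpoint, or a router depending on what you configure.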

We've seen similar examples in the storage arena. Sun has perhaps been the loudest proponent of open storage; its "Thumper" (the Sun Fire X4500) is essentially a standard server with a mechanical design optimized to maximize storage density. However, even beyond such a clear-cut example, storage at companies like HP and IBM has increasingly aligned with the technology and components used in their servers.

One also sees servers, storage, and networking coming together in the form of blades. This is a bit ironic because blades, as initially envisioned, were explicitly intended to disaggregate compute from networking and storage. But outside of high-performance computing, blades have instead become an integration point.

That said, generalization isn't the whole story.

I'm also seeing a lot of interest today in what are sometimes called "workload-optimized systems." The basic idea is straightforward: different types of workloads perform better on different types of systems. For example, a system that needs to handle high-volume financial transactions won't necessarily look the same as one running financial models.

And we're increasingly seeing very large-scale applications that mix workloads of different types. If you want to get technical, you can think of them as composite applications, or as an interrelated catalog of services associated with a data repository. IBM favors the term "smart applications," which isn't such a mouthful. Whatever you call them, the idea is that a single application has parts as disparate as transaction processing, business analytics, and Web serving. While all of these can be handled by a single type of server, as scale increases it can make sense to optimize individually for the different workloads.

Thus, we're seeing, and will continue to see, a blurring of the lines between servers, storage, and networking. The strict separation of these functions is a relatively recent development in the history of information technology and isn't an inherent requirement. At the same time, the idea that a single generic server design could be the right tool for every job would once have seemed an odd assertion. And it's one I'm increasingly seeing challenged again.