
How blade servers have evolved

Blade servers are popular today, but they've changed considerably from the original concept.

Gordon Haff
Gordon Haff is Red Hat's cloud evangelist, although the opinions expressed here are strictly his own. He's focused on enterprise IT, especially cloud computing. However, Gordon writes about a wide range of topics, whether they relate to the way too many hours he spends traveling or to his longtime interest in photography.

A little under 10 years ago, I paid a visit to a Boston hotel suite where Gary Stimac was showing off a new server that his company, RLX Technologies, would soon be announcing. Stimac had been employee No. 5 at Compaq. He signed on as chief executive officer of RLX to bring the company's so-called blade servers to market.

HP BladeSystem (photo: Hewlett-Packard)

Blade servers use a modular, pluggable design that often packs more computing capacity into a smaller footprint than conventional rackmount servers, reduces the number of cables needed, and shares physical infrastructure, such as power supplies, among multiple servers. RLX was a high-profile and well-funded start-up; Stimac and his fellow executives saw blades as a revolution in the way servers were designed and operated.

RLX's first blade servers used low-power processors from another then-high-flying start-up, Transmeta. The idea was that a lot of companies wanted to rent dedicated Web servers from a service provider but didn't necessarily need a lot of processing horsepower. Transmeta's energy-efficient design would let more, albeit individually less powerful, servers be installed in a given space.

In practice, Transmeta's radical design--it translated x86 instructions on the fly and executed them on a different underlying processor architecture--didn't turn out to be as compelling as advertised. Furthermore, even today, low-power servers remain something of a niche, perhaps all the more so given that server virtualization makes it easy to carve up more powerful systems.

RLX was also fundamentally a hardware form factor play. It was about density. It was about eliminating cables. It was about ease of service. The hardware itself was the special sauce.

Blades and other types of modular servers are an important part of the server landscape today. But the journey was at once more evolutionary and more complicated than the pioneers imagined. The initial wave of blade servers largely failed, along with the companies that made them. RLX itself became a software-only company and was subsequently purchased by Hewlett-Packard.

The heirs to those initial blades fall into three general classes--all of which break from the initial pattern.

Blades are widely used in high-performance computing (HPC). In fact, among the large supercomputers tabulated in the Top500 list, over 40 percent are blades from just one vendor, HP. Blades from the likes of IBM and Sun are also represented. One big reason is that blades can easily incorporate InfiniBand switches, which have become the de facto high-performance interconnect in HPC. Early blade concepts like high density and simplified cabling matter too, but HPC blades are far more likely to incorporate high-end processors and optimized interconnects.

We also see blade infrastructures used as part of an integration strategy. Deployed in places like retail store chains, they can function as a sort of "data center in a box." In other words, servers, storage, and networking can all be packaged together and delivered to a site that may not have much, if any, local IT expertise. Replacing a failed part in a blade chassis can be as easy as snapping out a module, so it's not hard to see why this design is popular at locations with little on-site IT support.

More recently, we're seeing blades used for integration of a different sort in enterprises. Under banners like "converged networking," the idea is to bring all communications together on a single virtualized network, typically 10-Gigabit Ethernet. Products such as Cisco's UCS and HP's Virtual Connect are based on this concept (as is, more generically, FCoE). Blades-as-integration-point is, in many respects, the polar opposite of the initial idea that blades would separate computing from storage and networking.

What of the network-facing workloads that were the catalyst for blades coming to market in the first place? Ironically, they haven't been a particular sweet spot for blade servers. Instead, we see mostly plain old rackmount servers (except in the case of Google, which builds its own)--or, increasingly, a new class of servers that go by names like "microservers."

Whatever you call them, microservers are small, cheap, and often optimized for the very specific needs of a large-scale Web company. Because such companies largely roll their own management software, these servers tend not to include a lot of vendor-specific management tools. In their most extreme form, they even share some physical infrastructure by packing several motherboards into a single rackmount chassis. HP's ProLiant SL is one example of a shipping product in this mold, but most system vendors have some variant on the theme.

They're not blades as blades have come to evolve. But they're closer to the original blade concept than today's blades are.