
The language of facilities

Facilities doesn't even tend to get mentioned when bemoaning IT silos. I suspect that part of the issue is language.

Gordon Haff
Gordon Haff is Red Hat's cloud evangelist, although the opinions expressed here are strictly his own. He's focused on enterprise IT, especially cloud computing. However, Gordon writes about a wide range of topics, whether they relate to the way too many hours he spends traveling or his longtime interest in photography.

We often talk about silos in IT. The storyline usually goes something like this: the server guys (computer gear) don't talk to the storage guys (SANs and Fibre Channel), who don't talk to the network gals (all that Ethernet and other comms stuff). It's all true enough, of course. But notice something? Facilities doesn't even tend to get mentioned when bemoaning IT silos. All that HVAC and power gear is just part of the landscape. IT folks never needed to know about bricks, so why should they need to know about power and cooling? Maybe a little UPS here and there, but the big stuff is Someone Else's Problem.

I suspect that part of the issue is language. Back before IBM did its full-court press to make the System z mainframe cool (and relevant) again, its presentations and documentation were clearly intended only for the priesthood. Whether it was talking about CECs or DASD, FICON or CICS, or arcane pricing models, the effect (intended or not) was to hang a "No Trespassing" sign outside the mainframe tree house. When IBM began modernizing System z for new workloads and uses, one of the many challenges it faced (and still faces to a more limited degree) was to make the mainframe not just appealing, but even intelligible, to outsiders. The task was made no easier by the fact that so many of the people involved in the effort had spent their entire careers working with the mainframe in its many incarnations. Basic assumptions about the very nature of the mainframe were so deeply held that it took real effort to externalize them in a comprehensible and meaningful way. (This presentation isn't from IBM but illustrates just how foreign-sounding deep mainframe discussions can be.)

I think we're going to see something similar happen with power and cooling. P&C is becoming an important part of the datacenter agenda. Yes, we're in a bit of an overheated hype curve about the whole topic, but that doesn't mean it's not important. As a result, companies like Liebert--a longtime maker of computer room power gear--are starting to show up at IT trade shows and brief IT analysts.

I had one such briefing recently from Liebert that covered a lot of interesting material, including the Liebert NX "Capacity on Demand" UPS and a forward-looking discussion of datacenter power distribution. But, based on my own experience around computer systems design, I think that Liebert and other P&C vendors should understand that even the electrical engineers who design servers don't know much more about analog electrical systems than the average homeowner--and probably less than the typical electrician.

HVAC vocabulary can be arcane, and truly in-depth discussions of redundant facilities power even more so. (For example, by Liebert's count, high-availability power configurations can come in five different bus configurations, each of which is ideal for a specific type of environment.) There's a certain inherent complexity in these matters, of course. However, that doesn't change the reality that if IT managers are going to be increasingly involved with power and cooling decisions and configurations, the companies selling that gear are going to have to speak the right language.