
Supercomputing wrap-up

The SC08 conference in Austin was one of the highest-energy shows in a while, says Gordon Haff.

Gordon Haff
Gordon Haff is Red Hat's cloud evangelist although the opinions expressed here are strictly his own. He's focused on enterprise IT, especially cloud computing. However, Gordon writes about a wide range of topics whether they relate to the way too many hours he spends traveling or his longtime interest in photography.

At some point during the flight over the Pacific from Tokyo, I seriously questioned my decision to take a detour rather than heading straight home to Boston. It wasn't that I had no interest in attending the Supercomputing show, SC08, held in Austin last week. It's just that I was coming off what was already a two-week trip to Japan. However, Supercomputing has been getting more and more buzz in recent years--and I hadn't been able to attend previously because of conflicts--so duty beckoned.

I was glad I made it. It was an immensely interesting and educational (albeit exhausting) couple of days. What follows are a few things that caught my eyes and ears. I plan to follow up on at least some of these in more depth when I have a chance.

Energy and attendees. First of all, it's worth noting the general ambience of the show. It was hopping. Economic slump, you say? One wouldn't know it from walking the exhibit floor or attending the sessions. To be sure, both booth and attendance commitments are often made well in advance. Nonetheless, I find it striking that SC08 set an attendance record--over 10,000 people--and that a lot of the exhibitors I spoke with were not only happy about the level of traffic to their booths and meetings, but were, in many cases, actually closing business. I found the general feel of the show at least somewhat reminiscent of a long-ago UniForum--albeit with more of an academic and application flavor.

InfiniBand is very much alive. I wrote after the October TechForum '08 event that "InfiniBand may not ever markedly expand on the sorts of roles that it plays. But 10 Gigabit Ethernet is far from ready to take over when latency has to be lowest and bandwidth has to be highest." The biggest of those roles is high-performance computing (HPC) and, indeed, InfiniBand was omnipresent at SC08. No particular surprise there but certainly lots of confirmation that InfiniBand is anything but dead. Also significant was QLogic's announcement at the show of an InfiniBand switch family. What's notable is that these switches use QLogic's own chips, rather than sourcing them from Mellanox as everyone else does. That QLogic made this design investment must count as a considerable vote of confidence in InfiniBand's future.

Clusters continue their advance. Supercomputers used to be largely bespoke hardware designs constructed specifically for HPC tasks. There's still some of that: IBM's Blue Gene is one example, and SiCortex, a start-up exhibiting at the show, provides another. In the main, however, supercomputing is increasingly about clustering together many--mostly standard, off-the-shelf--rackmount or blade servers rather than building monolithic specialized systems. This isn't a new trend, but it continues apace (and is certainly one of the reasons that InfiniBand has been regaining visibility of late).

Microsoft makes modest gains. Microsoft made it into the top 10 of the (publicly acknowledged) largest supercomputers with the Dawning 5000A at the Shanghai Supercomputer Center. There was still far more Linux--and, to a lesser degree, other flavors of Unix--at the show than Windows. But this example and others help reinforce the notion that Microsoft products are technically capable of playing in HPC. That's not to say that Microsoft will easily insert itself into environments that are predisposed to, and have in-house skills aligned with, Unix tools and techniques. However, as HPC becomes increasingly common in commercial environments, where Windows typically already has a footprint, Microsoft has an opportunity.

Parallel programming is still a challenge. So much so that all-around computing guru David Patterson devoted his plenary session to the topic. That said, based on Patterson's session as well as the work of companies such as RapidMind and Pervasive Software, we may be starting to see at least the outlines of how developing for processors with many cores--and for amalgams of many systems--might progress. The issue is that parallel programming is hard and most programmers can't do it well. One approach is training, but the emerging consensus seems to be that neither training nor new programming tools (e.g., languages) really gets to the heart of the matter. Rather, the general direction is toward something you might call multicore virtualization--the abstraction of parallel complexities by carefully crafted algorithms and runtimes that handle most of the heavy lifting. (MapReduce is a good example of the sort of thing I'm talking about.)
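To make the MapReduce point a bit more concrete, here's a minimal sketch of the pattern in Python--mine, not anything shown at the conference. The programmer writes two small, purely sequential functions (map_words and merge_counts are names I've made up for illustration), and a toy runtime built on the standard multiprocessing module handles spreading the map work across cores:

```python
from collections import Counter
from functools import reduce
from multiprocessing import Pool


def map_words(line):
    # Map step: count the words in a single line. Nothing here is parallel.
    return Counter(line.lower().split())


def merge_counts(a, b):
    # Reduce step: combine two partial word counts.
    return a + b


def word_count(lines, workers=4):
    # The toy "runtime": farm the map phase out to worker processes,
    # then fold the partial results together sequentially.
    with Pool(workers) as pool:
        partial_counts = pool.map(map_words, lines)
    return reduce(merge_counts, partial_counts, Counter())


if __name__ == "__main__":
    text = [
        "parallel programming is hard",
        "mapreduce hides the parallel heavy lifting",
        "the programmer writes simple map and reduce functions",
    ]
    print(word_count(text).most_common(3))
```

The division of labor is the point: the process management, scheduling, and data movement live in the runtime, while the application code stays simple enough that it never needs to know it's running in parallel.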

Supercomputing and HPC used to be their own world. Increasingly they illuminate the future direction of all (or at least most) computing--including the challenges ahead. That's a big reason that I find Supercomputing such a fascinating show.