
Flexing a super (computing) muscle

The U.S. remains No. 1 in the world in high-performance computing, but IBM's Dave Turek examines whether it can maintain its current dominance.

Charles Cooper Former Executive Editor / News
Charles Cooper was an executive editor at CNET News. He has covered technology and business for more than 25 years, working at CBSNews.com, the Associated Press, Computer & Software News, Computer Shopper, PC Week, and ZDNet.
When it comes to high-performance computing, the U.S. enjoys a position of unparalleled dominance in the field of supercomputers. But there's little guarantee that today's technology leaders will occupy the same place of prominence a decade down the road.

Case in point: Japan, whose share of the HPC (high-performance computing) market has fallen precipitously since the early 1990s.

At IBM, which has engineered a remarkable climb to the top of the HPC rankings in the last decade and a half, Dave Turek is charged with thinking about present and future technologies in his role as vice president of "Deep Computing." And he has perhaps the best vantage point at Big Blue to assess potential overseas challengers in the supercomputing field.

CNET News.com caught up with Turek, who last week announced the first sale of IBM's Blue Gene/P supercomputer to Russia. The installation at Moscow State University's Department of Computational Mathematics and Cybernetics will go toward research in nanotechnology, new materials, and life sciences.

Q: I was paging through a recent report, and when they looked at the first Top500 list, IBM and Hewlett-Packard, today's top-ranked leaders in HPC, didn't even make that list. That was in 1993, which wasn't all that long ago. How do you account for that dramatic change in a relatively short time?
Turek: If you think about 1993 and what was happening in the industry, it was an inflection point where pricing was moving away from mainframe-style or workstation-style pricing. The consequence was a big jump in price-performance efficiency, and it was reflected in a variety of systems.

IBM's entree into the space in 1993 was a system that became known in the vernacular as Deep Blue, because of the chess matches we played with (chess champion Garry) Kasparov. But the architecture of that system was a radical departure from the way people were building systems. Until that time, if you were building a supercomputer, you assumed the responsibility to build the microprocessor, the interconnect, the system design, the operating system software--you did everything.

What we did in 1993 was borrow technologies from a lot of areas, make new investments in a narrower set of areas, and leverage those broader industry investments to bring a radically improved price-performance construct.

Something analogous to what IBM did with the PC in terms of taking off-the-shelf parts?
Turek: Well, yeah, and it's like what happened with Linux clusters in 2000, when the market made the transition from workstation pricing to PC pricing with lower-end x86 servers...So it was the embrace of a different kind of cost-performance curve, coupled with the popularization of parallel computing to attack high-end problems.


In the early '90s, experts were saying this was going to turn into a race between the U.S. and Japan. The Japanese had about 20 percent of the market back then; now it's under 5 percent. Why do you think the predictions failed to live up to the advance billing?
Turek: I think there was a general belief among the companies operating in the marketplace that the past was a prelude to the future and that it was going to be vector-style architectures. Our view was that that was an absolute dead end--that you had to go parallel and make a bet along those lines. We made the bet and the Japanese did not. It was only after we achieved success that other people started coming down our pathway, because there was a lot of debate in the '90s.

Do you think that explains U.S. dominance of the list? If you look at the November '07 rankings, U.S. representation is even greater than it was back in the early 1990s.
Turek: The economics had a huge amount to do with it. Further dislocations came about because there was a greater embrace of this kind of technology for strategic benefit by companies in the U.S. than we saw in Japan. You know, success begets success. So the robustness of the U.S. economy and its diversity--the volume of activity going on in automotive, aerospace, financial services, petroleum, and so on--became the financial driver of demand for this kind of technology, and so this was a virtuous cycle.

You've gone before Congress, arguing how critical it is to extend U.S. leadership in high-performance computing. Have the powers that be in Washington fully embraced that message?
Turek: Yes. From a policy perspective in Washington, there's a focus on this.

What about the potential for success of challenges from overseas, where there are state-sponsored rivals--say, Bull in France or Lenovo in China?
Turek: In a controlled and planned economy, you could try to make the argument that by setting up a protectionist kind of policy you could stimulate an indigenous industry...I think that is a pretty problematic strategy to implement...I think one of the impacts of globalization has been that the cream rises, and people are going to try to make use of the best technology they can, as opposed to engaging in the very speculative endeavor of embracing these other policies and practices to build an unrelated indigenous industry that's going to serve some nebulous goal under the partisan state commissar. That may happen, but I think the likelihood is fairly low.

The U.S. employs so many first-rank computer scientists from both China and India, and yet both of those countries still have relatively small shares in the supercomputer field. When do you expect that to change?
Turek: I think they are still some ways away from having the robust spread of industries that can effectively make use of this technology.

There is a peculiarity about American industry and its ability to give birth to new industries that I think is different from what you see in some of these other places, at least today. I'm not saying it's in perpetuity or anything intrinsic; it's just the way it is today.

What about 5 to 10 years from now in China and India?
Turek: China has been a little bit more forthcoming about their national strategy and they've talked about the need to build an indigenous industry. That effort is under way and we'll see what happens.

India has been somewhat different. I'm not particularly aware of any dramatic effort on the part of the government to drive anything as major as an Indian supercomputing industry. I think it's still in the hands of the private sector.


What about Russia--not just making, but buying supercomputers? You've got the announcement of the first Blue Gene supercomputer in Russia, but what about their abilities to actually make their own supercomputers?
Turek: There is no evidence that there is anything material going on in Russia to build the industry. There is, of course, a lot of interest in acquiring the technology to use it, and that dichotomy is pretty stark. So the question is, what does it take to actually create an industry to do this kind of stuff? In 2001, the answer would have been easy: "It doesn't take much at all." You just get some off-the-shelf Intel or AMD servers and you assemble them, you get some open-source software, and you deploy, and you're ready to go.

We know that's true, because that's how a lot of universities were deploying Linux clusters in the early part of this decade. What's happened differently now--and what I think constitutes a progressively greater degree of challenge for people who want to get into this space--is the whole consequence of the limits we're coming up against with respect to overall microprocessor system design.
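
To make that 2001-era recipe concrete: the open-source glue in a commodity Linux cluster was typically an MPI library running across off-the-shelf x86 boxes. What follows is a minimal sketch, not anything from IBM; it assumes Python with the mpi4py package and an MPI runtime such as Open MPI, and the workload and file name are purely illustrative.

# Minimal commodity-cluster sketch: one computation spread across
# ordinary servers via MPI. Assumes mpi4py and an MPI runtime.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's index across the cluster
size = comm.Get_size()   # total number of processes on all nodes

# Each process sums its slice of a midpoint-rule series for pi,
# then a single reduction combines the partial results at rank 0.
n = 10_000_000
local = sum(4.0 / (1.0 + ((i + 0.5) / n) ** 2)
            for i in range(rank, n, size)) / n
pi = comm.reduce(local, op=MPI.SUM, root=0)

if rank == 0:
    print(f"pi ~= {pi:.10f} from {size} processes")

Launched with something like "mpirun -n 16 python pi.py" (a hypothetical invocation) across a rack of cheap servers, that is essentially the whole entry ticket Turek describes--which is why the barriers of that era were so low.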

I think it was January of 2004 that marked the point where Intel abandoned its more-megahertz-is-better approach. And it did that because pretty soon you were going to need a nuclear power plant to run your laptop. That was just the consequence of the arithmetic of Moore's Law, but physics has this nasty way of intruding, and the industry finds itself up against a lot of limits that are pretty daunting right now.
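
The arithmetic Turek is gesturing at can be sketched in a few lines. Dynamic chip power scales roughly with capacitance times voltage squared times clock frequency; once supply voltage stopped shrinking with each process generation, chasing frequency alone made heat grow faster than performance. All figures below are illustrative assumptions, not measurements of any real chip.

# Rough power-wall arithmetic: dynamic power P ~ C * V^2 * f.
# Illustrative placeholder numbers only, not real chip data.

def dynamic_power(capacitance, voltage, freq_ghz):
    """Relative dynamic power in arbitrary units: C * V^2 * f."""
    return capacitance * voltage ** 2 * freq_ghz

base = dynamic_power(1.0, 1.2, 3.0)          # a notional 3 GHz core

# Doubling the clock at the same voltage doubles the power draw...
print(dynamic_power(1.0, 1.2, 6.0) / base)   # -> 2.0

# ...while two slightly lower-voltage cores at the original clock
# give roughly twice the throughput for about 1.4x the power--the
# basic case for multicore over ever-higher frequencies.
two_cores = 2 * dynamic_power(1.0, 1.0, 3.0)
print(two_cores / base)                      # -> ~1.39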

Such as?
Turek: It's what's given rise to this whole push on multicore and radical multithreading and blah, blah, blah. But all that stuff implies greater and greater complexity. It requires greater and greater sophistication in systems design, utilization approach, etc., and so there's been a de facto rise of barriers to entry that has not been caused by anything economic or political. It's been caused by science and engineering barriers that are proving progressively more daunting to overcome.

Look at the Blue Gene system, for example. That project began in December 1999, when the world was a radically different place and nobody was thinking about the green computer room or any of that stuff. But to the everlasting credit of a fairly significant number of people in our own research division, we started building a massively scaled parallel system with the kinds of chips you would use in a cell phone. Everybody thought we were lunatics. And what happened? Not only is it the most powerful system in the world, but it's the greenest system at the same time.

Green meaning more energy efficient?
Turek: Green meaning the smallest consumer of energy per unit of computation. So Blue Gene has an advantage over the next-closest competitor of probably a factor of 5 or 6 in terms of energy efficiency. When you think about what energy costs these days and how much energy some of these systems require, that's a hugely important factor to include.
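
The metric Turek is using--computation delivered per watt, the idea behind flops-per-watt rankings--is simple arithmetic. The comparison below uses made-up placeholder figures, chosen only to show how a factor-of-five gap falls out; they are not published specs for Blue Gene or any other system.

# Energy efficiency as flops per watt. Hypothetical numbers only.

def gflops_per_watt(sustained_gflops, power_kw):
    """Sustained gigaflops delivered per watt of power drawn."""
    return sustained_gflops / (power_kw * 1000.0)

efficient_system   = gflops_per_watt(500_000, 1_400)  # hypothetical low-power design
conventional_rival = gflops_per_watt(500_000, 7_500)  # hypothetical competitor

print(f"{efficient_system / conventional_rival:.1f}x more work per watt")  # -> ~5.4x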

That's a big barrier to entry to overcome.
Turek: You have to have some pretty serious engineering skills, because the future design of these computers does not get easier. It gets harder as you try to accommodate the constraints of power, cooling, space, and programmability.

You can say there is a lot of brainpower in China, there is a lot of brainpower in India, there is a lot of brainpower in Russia--that's all true, and nothing is forever. There may come a point in the future where (they) get organized and pursue these things and come up with some really terrific insights. The only obstacles are brainpower, some money, vision, and hard work--that's all.