
NCSA director: GPU is future of supercomputing

The director of the National Center for Supercomputing Applications says the hybrid supercomputer in China is a harbinger of future high-performance computing that will combine high-end CPU and GPU functions on a single chip.

Brooke Crothers, former CNET contributor

The director of the National Center for Supercomputing Applications has seen the future of supercomputing, and it can be summed up in three letters: GPU.

Thom Dunning directs the Institute for Advanced Computing Applications and Technologies and the NCSA.

Thom Dunning, who directs the NCSA and the Institute for Advanced Computing Applications and Technologies at the famed supercomputing facilities on the campus of the University of Illinois at Urbana-Champaign, says high-performance computing will begin to move toward graphics processing units, or GPUs. Not coincidentally, this is exactly what China has done to achieve the world's fastest speeds with its "Tianhe-1A" supercomputer. That computer combines about 7,000 Nvidia GPUs with 14,000 Intel CPUs, making it the only hybrid CPU-GPU system of that scale in the world.

"What we're really seeing in the efforts in China as well as the ones we have in the U.S. is that GPUs are what the future will look like," said Dunning in a phone interview Thursday. "What we're seeing is the beginning of something that's going to be happening all over the world."

NCSA already has a small CPU-GPU hybrid system. "It's something we have been working on for a number of years. We have a CPU-GPU cluster for the NCSA academic community, made up of Intel CPUs and Nvidia GPUs. It's a 50-teraflop machine," he said. (Note that Oak Ridge National Laboratory is also installing a hybrid system now.)

But it's not going to be a snap to tap into the processing potential of GPUs. "Programming these machines to do [GPU] calculations is still a very substantial effort. There will be some applications that will be rewritten to use GPUs, [but] a lot of times it will be only part of an application that will use it, so you won't get nearly the power and computing advantage of running it all on the GPU," he said.
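
To see why, here is a minimal, hypothetical CUDA sketch (not code from NCSA or Dunning): only one data-parallel loop is offloaded to the GPU, while setup, I/O, and the rest of the application stay on the CPU. The kernel name and sizes here are illustrative only.

```cuda
// Hypothetical sketch: a single hot loop moved to the GPU.
// Everything else in the program still runs on the CPU, so the
// overall speedup is capped by the un-accelerated portion.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// GPU kernel: the one data-parallel loop worth porting.
__global__ void scale(float *x, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

int main() {
    const int n = 1 << 20;
    float *h = (float *)malloc(n * sizeof(float));
    for (int i = 0; i < n; ++i) h[i] = 1.0f;      // setup: CPU work

    float *d;
    cudaMalloc(&d, n * sizeof(float));
    cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice);
    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);  // only this step is accelerated
    cudaMemcpy(h, d, n * sizeof(float), cudaMemcpyDeviceToHost);

    // I/O, bookkeeping, and the rest of the application: still CPU code.
    printf("h[0] = %f\n", h[0]);
    cudaFree(d);
    free(h);
    return 0;
}
```

In rough terms, if only half of a program's run time moves to the GPU, even an infinitely fast GPU cannot make the whole program more than twice as fast.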

The catalyst that moves programmers en masse toward GPUs, Dunning said, will be chips that combine both high-performance CPU and high-level GPU functions on the same piece of silicon. "If they start to solve some of these other problems like putting [the GPU and CPU] together on a chip, that's when you'll start to see a lot of software rewritten," according to Dunning. "That combination will address a number of the more significant shortcomings that we currently see in these CPU-GPU combinations. Basically, the way they're implemented presently is a very small pipe between [the CPU and GPU], and that really restricts the effectiveness with which you can use the GPU," he said.
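
The "very small pipe" is the bus, typically PCI Express, that connects a discrete GPU to the host. The illustrative sketch below, with made-up sizes, times the copies into and out of the GPU against the kernel itself; for light work like this, the transfers can dominate, which is the cost that on-chip integration is meant to remove.

```cuda
// Hypothetical sketch of the CPU-GPU "pipe": time the host-to-device and
// device-to-host copies against a trivial kernel. For light kernels the
// bus transfers, not the GPU computation, set the overall run time.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void add_one(float *x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] += 1.0f;
}

int main() {
    const int n = 1 << 24;                  // about 64 MB of floats
    size_t bytes = n * sizeof(float);
    float *h = (float *)calloc(n, sizeof(float));
    float *d;
    cudaMalloc(&d, bytes);

    cudaEvent_t t0, t1, t2, t3;
    cudaEventCreate(&t0); cudaEventCreate(&t1);
    cudaEventCreate(&t2); cudaEventCreate(&t3);

    cudaEventRecord(t0);
    cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);   // across the pipe
    cudaEventRecord(t1);
    add_one<<<(n + 255) / 256, 256>>>(d, n);           // brief on-GPU work
    cudaEventRecord(t2);
    cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);   // back across the pipe
    cudaEventRecord(t3);
    cudaEventSynchronize(t3);

    float in_ms, run_ms, out_ms;
    cudaEventElapsedTime(&in_ms, t0, t1);
    cudaEventElapsedTime(&run_ms, t1, t2);
    cudaEventElapsedTime(&out_ms, t2, t3);
    printf("copy in: %.2f ms  kernel: %.2f ms  copy out: %.2f ms\n",
           in_ms, run_ms, out_ms);

    cudaFree(d);
    free(h);
    return 0;
}
```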

He continued: "What we'll find is that the AMDs, the Intels, the IBMs, they will start incorporating some of those features into the chips that they manufacture. AMD has an architecture called Fusion. It's going to be [available] fairly soon."

Dunning also mentioned IBM's Power7 processor--which includes vector processing units similar to those in GPUs--and Intel's SSE technology. "If you restructure your code to work well on the GPU, you find it actually performs better on the CPU because you can take advantage of some of these new vector units," he said.
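
As an illustration of that point (a hypothetical sketch, not Dunning's code), the same simple, contiguous, one-element-per-iteration structure that maps naturally onto a GPU kernel is also what lets an optimizing compiler use the CPU's SSE vector units:

```cuda
// Hypothetical illustration: the same loop structure serves both targets.

// GPU form: one thread per element.
__global__ void saxpy_gpu(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

// CPU form: the same simple, contiguous loop, which an optimizing
// compiler can map onto SSE vector instructions.
void saxpy_cpu(int n, float a, const float *x, float *y) {
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}
```

Built with optimization enabled (for example, -O3 in GCC or Clang), the CPU version is typically auto-vectorized, which is the "performs better on the CPU" effect Dunning describes.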

Intel, AMD, IBM, and Nvidia chips will all vie to get inside future supercomputers, but Intel has one distinct advantage, according to Dunning. "They have much easier programming models. More standard programming models. The real issue in GPUs right now, besides this very narrow pipe, is the difficulty of programming them. At the University of Illinois, we've seen pretty dramatic speed-ups in the performance of GPUs, but only if you make a very substantial investment in people who are reprogramming them," he said.