
In newest tally, supercomputing progress tapers off

Hardware, software and funding limits mean it's not easy to make the fastest computers even faster. That's too bad for the industries that rely on them.

Stephen Shankland

The exponential increases in supercomputing performance have slowed in the last two years. The purple dots show the collective performance of all systems on the Top500 list; the blue shows the single fastest machine; and the orange shows the slowest. Top500

Steady advances in computing power can no longer be taken for granted, the latest edition of the Top500 supercomputing list shows.

The new Top500 list, released Monday, is updated twice a year by academic researchers who gather speed-test data from machines worldwide. Year after year, the list reflects improvements in supercomputing performance. Of late, though, the steady climb has slowed.

Much of the collective capability of the 500 machines lies in the most powerful systems near the top of the list. In the latest update, though, only the No. 10 system is new to the top 10.

"With few new systems at the top of the past few lists, the overall growth rate is now slowing," organizers said in a statement.

The bottom end of the list reflects similar issues. The slowest member of the Top500 list got 90 percent faster each year on average through 2008. But since then, it has improved by only 55 percent a year.

This illustration shows which companies made the machines on the Top500 list. The bigger the rectangle, the more powerful the machine. The large system labeled NUDT is the Tianhe-2 at the National Super Computer Center in Guangzhou, China, the fastest machine on the Top500 list. Top500

The slower growth is of some concern for a world reliant on computers for industrial, scientific, medical and military success. Supercomputers are used for tasks like testing new jet engine designs, simulating the high-energy physics of nuclear weapons explosions, analyzing financial investment risks, mapping underground oil reserves, forecasting weather and virtually testing drugs.

The most powerful supercomputers cost tens of millions of dollars and are housed in data centers the size of basketball courts. Operating them is expensive, too: the top machine, China's Tianhe-2, consumes as much electrical power as 3,400 typical US houses.

That system has now topped the list four times running. It can perform calculations at a speed of 33.86 petaflops (quadrillions of floating-point mathematical operations per second).

It won't be at the top of the list forever, though. IBM is building two machines, called Sierra and Summit, with $325 million from the US Energy Department that are designed to surpass 100 petaflops. That's a significant step toward the industry's "exascale" goal of 1 exaflop, which is equal to 1,000 petaflops.
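For a sense of scale, the arithmetic works out like this. A quick back-of-the-envelope check in Python, using only the figures above:

```python
# Back-of-the-envelope check of the figures in the article.
PETAFLOPS = 1e15   # floating-point operations per second
EXAFLOPS = 1e18    # 1 exaflops = 1,000 petaflops

tianhe_2 = 33.86 * PETAFLOPS    # today's fastest system
ibm_target = 100 * PETAFLOPS    # the Sierra/Summit design goal

print(EXAFLOPS / PETAFLOPS)     # 1000.0 -- petaflops per exaflops
print(EXAFLOPS / tianhe_2)      # ~29.5  -- how far Tianhe-2 is from exascale
print(EXAFLOPS / ibm_target)    # 10.0   -- how far the planned IBM systems would be
```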

Hardware and software challenges

One of the big challenges in supercomputing today is finding a way to make full use of the hardware. For decades, computer processors benefited from faster clock speeds, but in the last 10 years or so, they've mostly stalled at less than 4GHz. That's led chip designers to cram multiple processing cores into each chip, an approach that requires software to be broken down into subtasks that run in parallel.
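To make that concrete, here is a minimal sketch in Python of the kind of decomposition involved: one job is split into independent chunks that a pool of worker processes runs side by side. The work function and data are hypothetical stand-ins, not anything from a real supercomputing code.

```python
# Minimal sketch: splitting one job into subtasks that run on multiple cores.
# The work function and inputs are hypothetical stand-ins for a real workload.
from multiprocessing import Pool


def simulate_chunk(chunk):
    # Placeholder for a compute-heavy kernel operating on one slice of the data.
    return sum(x * x for x in chunk)


if __name__ == "__main__":
    data = list(range(1_000_000))
    # Break the problem into independent subtasks...
    chunks = [data[i::8] for i in range(8)]
    # ...and let a pool of worker processes run them in parallel.
    with Pool(processes=8) as pool:
        partial_results = pool.map(simulate_chunk, chunks)
    print(sum(partial_results))
```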

Parallel computing is a staple of supercomputing, but it's still difficult and it's getting more complicated. Modern high-end machines are composed of many independent systems linked with high-speed networks; each of those systems has multicore processors and, often, graphics-chip accelerators that add an entirely new level of parallel computation ability.
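A rough sketch of that layered structure, assuming the mpi4py and NumPy libraries are installed: each process (one per node, say) handles its own slice of a problem, and a reduction over the network combines the partial answers. The accelerator layer is left out to keep the example short.

```python
# Sketch of node-level parallelism with MPI (assumes mpi4py and NumPy).
# Launched with something like: mpirun -n 4 python sketch.py
# Each rank computes a partial result on its own slice; a reduction combines them.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Hypothetical problem: a large sum, split across ranks.
n = 10_000_000
local = np.arange(rank, n, size, dtype=np.float64)
local_sum = float(np.sum(np.sqrt(local)))

# Combine the per-rank results over the interconnect.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)
if rank == 0:
    print("global result:", total)
```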

One increasingly popular way to boost supercomputer performance is to add special-purpose processors like graphics chips alongside ordinary processors. That makes programming more complicated, though. Nvidia and Intel supply the most widely used accelerators on the Top500 list. Top500

The upshot: supercomputers are hard to program, and programming tools are not closing the gap.

"The greatest barrier to improved effectiveness of HPC [high-performance computing] systems for US industry is software," concluded an October report on supercomputing from the US Council on Competitiveness, a nonprofit group that surveyed 101 supercomputer users. "It seems evident that in both the short term and the long term, there is a need for focused investment in software scalability."

One big change to supercomputers is the idea that you don't have to own one to benefit from it. Amazon Web Services, along with rival pay-as-you-go computing power from Google, Microsoft and others, lets customers effectively rent computing horsepower. Financially, it's often easier to justify paying for a bigger system that runs for a shorter period than for a smaller system that runs longer. All of this, in turn, can accelerate research results.
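The arithmetic behind that trade-off is straightforward, at least in the idealized case. A hypothetical example, with made-up hourly prices and the assumption that the work parallelizes cleanly:

```python
# Hypothetical pricing to illustrate the trade-off: renting more machines for less
# time costs roughly the same, but the answer arrives sooner. Prices are made up,
# and the job is assumed to parallelize perfectly.
price_per_node_hour = 1.00   # dollars
node_hours_needed = 1_000    # total amount of work

for nodes in (10, 100, 1_000):
    hours = node_hours_needed / nodes
    cost = nodes * hours * price_per_node_hour
    print(f"{nodes:>5} nodes: {hours:>6.1f} hours to finish, ${cost:,.0f} total")
```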

A company called Cycle Computing specializes in facilitating this approach for its customers. Earlier this month, Cycle announced that it had helped a Fortune 500 customer use Amazon's infrastructure to reach a performance level of 729 teraflops. That's about one-fiftieth the performance of Tianhe-2 -- and would be enough to rank 71st on the new Top500 list.

A better benchmark?

The Top500 machines are ranked by how well they can perform complex mathematical calculations using a speed test called Linpack. Like many benchmarks, though, Linpack is imperfect.
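Linpack boils down to timing how fast a machine can solve a large, dense system of linear equations. A toy, single-node version of that idea in Python with NumPy looks like this; the real benchmark, HPL, is a heavily tuned distributed code, not a few lines of script.

```python
# Toy, single-node illustration of what Linpack measures: time a dense solve
# of Ax = b and convert it to floating-point operations per second.
# The real HPL benchmark is a tuned distributed implementation; this is only a sketch.
import time
import numpy as np

n = 2000
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)
elapsed = time.perf_counter() - start

flops = (2 / 3) * n**3 + 2 * n**2   # conventional operation count for the solve
print(f"{flops / elapsed / 1e9:.1f} gigaflops at this problem size")
```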

"Our own research shows that many classic HPC [high-performance computing] applications are only moderately related to the measure of Linpack," said David Turek, IBM's vice president of exascale computing.

That's why one of the list's organizers, Jack Dongarra, is working on a new supercomputing benchmark called HPCG. The test is still under development, but Dongarra released the first HPCG results in June. On that list, Tianhe-2 still ranked fastest.

"The computational and data access patterns of HPL [High Performance Linpack] are no longer driving computer system designs in directions that are beneficial to many important scalable applications," Dongarra said in an interview. "HPCG is designed to exercise computational and data access patterns that more closely match a broad set of important applications."

The HPCG benchmark is also designed to push supercomputer designers to invest in hardware and software that will produce higher HPCG scores, he said.

But it will be years before HPCG becomes as popular as Linpack, Dongarra said.

"Linpack and HPL have been around and developed over a long period of time -- more than 30 years. There were many changes in that time. HPCG is still evolving and will continue to do so in the future," he said. "I don't think we will ever have 500 entries for the HPCG, I will be happy if we capture the top25 or top50 systems."

Correction at 8:10 a.m. PT: The comparison between 729 teraflops and 33.86 petaflops has been revised.