IBM supercomputing goes retro

Big Blue puts a new twist on an older technology. But will it steal thunder from the computing giant's other big machines?

Stephen Shankland
Even as IBM directs attention to the arrival of its Blue Gene/L supercomputer, the company is quietly preparing a new twist on an older technology that will let it more directly compete with rivals such as Cray and NEC.

For decades, high-performance computing customers have used machines with "vector" processors, which excel at certain mathematical operations and can quickly retrieve large amounts of data from memory. But the vast majority of business computers--and indeed most supercomputers today--use a "scalar" design better adapted to general-purpose computing.
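To make the distinction concrete, consider the kind of loop that dominates many scientific codes. The C sketch below is a generic illustration, not drawn from any IBM or Cray software: a scalar processor steps through the loop one element at a time, while a vector processor can load, multiply and store whole blocks of the arrays with single instructions.

    /* Generic illustration: y = a*x + y over large arrays (the classic "axpy").
       A scalar CPU executes the loop body once per element; a vector CPU can
       apply each operation to a long run of elements at once, streaming the
       arrays through wide load/store pipes. */
    #include <stddef.h>

    void axpy(size_t n, double a, const double *x, double *y)
    {
        for (size_t i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }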

IBM now plans to bridge that divide using a feature of its new Power5 processors. With a technology called Virtual Vector Architecture, or ViVA, the 16 processor cores of a scalar server such as IBM's Power5-based p5-570 can be yoked together to act like a single vector processor.

News.context

What's new:
IBM is trying the virtual vector idea: Big Blue wants to bridge the divide between vector computing, a decades-old technology, and the more common scalar computing, using a special feature of its new Power5 processors.

Bottom line:
At least one IBM competitor says the hybrid approach won't result in a machine as "effective" as a true vector supercomputer, but Big Blue is forging ahead--and a second generation of the so-called ViVA effort promises even more power.

"We can take a 16-way and run it as a vector," Power5 designer Ravi Arimilli said.

Such a system would have 32 parallel calculation engines, called floating-point units, and if there's sufficient demand from top-drawer customers such as national laboratories, IBM could build the software tools necessary to create larger systems, he said.
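IBM has not disclosed how ViVA coordinates the cores, so the sketch below is only a generic illustration of the underlying idea, with OpenMP standing in for whatever hardware and compiler support the real feature relies on: a vectorizable loop is carved into chunks and handed to many scalar cores at once, so that collectively they chew through the arrays the way a single wide vector pipeline would.

    /* Generic illustration, not IBM's actual ViVA mechanism: split one
       vectorizable loop across many scalar cores so that, collectively,
       they act like a single wide vector unit. */
    #include <omp.h>

    void axpy_virtual_vector(long n, double a, const double *x, double *y)
    {
        /* Each core takes a contiguous slice of the arrays; with 16 cores,
           each holding two floating-point units, up to 32 multiply-adds can
           be in flight at a time, the figure Arimilli cites. */
        #pragma omp parallel for
        for (long i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }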

IBM is trying to keep quiet about its ViVA effort. Company representatives declined to comment on details. A publicity effort Monday is devoted to a scalar machine called Blue Gene/L, with which IBM has claimed the top spot for now in a supercomputer speed test.

Hewlett-Packard and IBM, which together dominate the high-performance computing market, today sell only scalar machines, but vector machines, including Cray's X1 and NEC's SX-8, are still available. And despite threats from IBM and SGI, NEC's Earth Simulator vector system has led a list of the 500 fastest supercomputers for two years.

IBM's move has caught Cray's attention. "I'm not worried that IBM's ViVA processors will give true vector processors a run for their money, but I do think it's a good idea," said Steve Scott, an X1 chief architect. "Trying to hook multiple scalar processors into a vector processor is never going to be as effective as a real vector processor."

But the effort doesn't stop with ViVA. A sequel called ViVA-2 should be able to handle all Cray chores, said Bill Kramer, general manager of the National Energy Research Scientific Computing Center (NERSC) in Berkeley, Calif., whose researchers urged IBM to add ViVA and are collaborating in its development through a program called Blue Planet.

"ViVA-2 is basically putting a scientific accelerator very close to the central processing unit," Kramer said. NERSC has proposed to install a machine called LCS-2 in 2007 with Power6+ processors and ViVA-2 that can perform 50 trillion calculations per second.

The present trend in high-end computing is to link dozens or even thousands of scalar computers together into a massive cluster. While that approach is good for some tasks, vector machines have kept the lead in some areas, IDC analyst Chris Willard said.

"They have advantages in ease of programming," Willard said. And they excel at mathematical operations involving collections of numbers called matrixes--a "fundamental operation in a lot of technical computing."

Like scalar systems, vector systems also can be linked into a cluster--the approach used by the Earth Simulator. Its speed of 35.9 trillion calculations per second triggered U.S. government fretting about the country's loss of the supercomputing crown.

Among those with concerns is Energy Secretary Spencer Abraham, who spoke about the issue during a recent visit to christen new IBM supercomputers at Lawrence Livermore National Laboratory in California. "I hardly need to state how vital it is for the U.S. to stay on the cutting edge of computing," he said.

Congress is working on new bills that could increase supercomputing funding. "I think there is a growing recognition in Congress that the leadership, in terms of scientific pre-eminence, is going to be challenged a lot more going into the 21st century than in the 20th," Abraham said in an interview.

Seeds of the project
The ViVA project began with a meeting in the fall of 2000 that drew about 30 people from IBM and NERSC, Kramer said.

The meeting was held "to address the fact that the current road maps of commodity-based computing were not going to serve the scientific community well," Kramer said. "In that discussion, we came up with the idea of adding to commodity processors some additional capability for no or very little cost that would make them more amenable to the scientific computing that goes on now and into the future."

That idea became what is now ViVA, he said. And although NERSC has discussed virtual vectors with other computer makers, IBM was by far the most interested, Kramer said.

IBM isn't the first to try the virtual vector idea. A Hitachi system installed at the Leibniz Computing Centre in Munich employed dozens of nodes, each with eight processors linked into a virtual vector processor. And each Cray X1 processor combines four "single-stream processors" (SSPs), smaller vector processors effectively linked into a single larger one.

Indeed, the virtual-vector idea is more than a decade old, Illuminata analyst Jonathan Eunice said. Silicon Graphics' then-chief technology officer, Forrest Baskett, "looked forward and foretold all this," extrapolating from improvements to RISC (reduced instruction set computing) chips such as IBM's Power products.

"This is the proof of the Baskett theorem," Eunice said. "He put up charts in 1991, showing how much RISC had come up to speed with traditional vector processors (and predicted that) eventually, we're not going to really need specialized ones. We can pretend to have vector processors."

Vector machines excel at bulk operations: retrieving large quantities of data from memory quickly, processing it, then storing the results, Willard said. While many scalar computers end up waiting for the appropriate information to arrive from memory, vector systems can stream that data in and out fast enough to keep a processor's calculation engines close to fully loaded.

Another vector advantage comes from a technology called "gather/scatter," which lets vector processors easily read and write data at widely dispersed memory locations. Scalar processors, in comparison, deal best with data in contiguous patches of memory.
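The C sketch below, again only an illustration, shows the access pattern in question: an index array sends each read or write to a different, possibly far-apart memory location, which gather/scatter hardware can issue as one bulk operation while a scalar processor pays for each irregular access individually.

    /* Illustrative gather and scatter: the index array idx steers each access
       to a scattered memory location. Vector machines bundle these into bulk
       gather/scatter operations; scalar processors handle them one at a time. */
    #include <stddef.h>

    /* Gather: pull scattered elements of src into a contiguous buffer dst. */
    void gather(size_t n, const size_t *idx, const double *src, double *dst)
    {
        for (size_t i = 0; i < n; i++)
            dst[i] = src[idx[i]];
    }

    /* Scatter: spread a contiguous buffer src back out to scattered slots of dst. */
    void scatter(size_t n, const size_t *idx, const double *src, double *dst)
    {
        for (size_t i = 0; i < n; i++)
            dst[idx[i]] = src[i];
    }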

ViVA-2 will address this memory issue, Kramer said.

But Cray sees obstacles on the horizon for virtual vector machines. One problem is overhead--the time independent processors must spend on control and synchronization tasks instead of computation, Scott said.

"Each one of the processors is operating independently, fetching its own instructions, decoding its own instructions," Scott said. "You won't get the sort of execution efficiency by having eight different processors executing independent instructions than you will by having a vector processor issuing a single instruction."

Having it both ways
But for the National Energy Research Scientific Computing Center, ViVA offers versatility, Kramer said.

"The reason we think it's important is NERSC, unlike other sites, runs a very diverse workload. We run large-scale codes for all areas of science, ranging from biology to cosmology, materials science, genomics, proteomics, climate research, astrophysics, high-energy physics, computational fluid dynamics and accelerator design," he said. "Some methods can make good use of vectorization, and those are pretty well-known. There are a lot of methods that cannot vectorize well."

Cray is betting that vector supercomputers will remain important but also has embraced clusters of scalar machines. The company is building a system called Red Storm for Sandia National Laboratories with thousands of scalar Opteron processors from Advanced Micro Devices, and has begun selling its XD1, a kind of smaller cousin to the mammoth Red Storm cluster.

Price has been a barrier for vector systems. They bring powerful abilities, IDC's Willard said, but, he added, "those advantages at current prices aren't able to really drive a big market."

Vector systems are good for a multitude of scientific problems, said Mark Seager, assistant department head for advanced technology, integrated computing and communication at Lawrence Livermore National Laboratory. Among them: studying wave shapes for image analysis; solving notoriously difficult math problems called large partial differential equations; and predicting fluid flow and shock wave motion.

However, vector machines are no longer the necessity they once were, even at massive research labs with challenges such as a three-dimensional simulation of a nuclear bomb explosion.

"We do it all on scalar machines. We don't have any vector machines here at Livermore," Seager said. "I think we got rid of the last Cray in 1996 to 1998."