Cray's Red Storm machine, to be set up at Sandia National Laboratories, will get dual-core chips next year.
The first incarnation of the $90 million machine is expected to have a processing capacity of 41.5 trillion mathematical calculations per second, or 41.5 teraflops. By the end of 2005, the system should reach 100 teraflops after it's upgraded with AMD's dual-core chips, Sandia said this week.
The first quarter of the system is due in September; when complete in January 2005, it will have 11,648 Opteron chips. Dual-core chips, a technology that's rare today but headed for mainstream use, boost performance by employing two processing units on the same slice of silicon.
Cray has specialized for years in making completely customized supercomputers such as the X1, but Red Storm uses many commonly available components such as Opteron server chips and the Linux operating system. However, Red Storm does have some special sauce in the form of Cray's SeaStar chip, which passes messages among the thousands of four-processor computers that collectively make up Red Storm.
Cray competes with Hewlett-Packard and IBM, the No. 1 and No. 2 companies in the overall high-performance computing market, as well as Sun Microsystems, Dell, SGI and a new generation of smaller companies that assemble inexpensive supercomputers out of low-end machines.
Red Storm follows in the footsteps of another Sandia supercomputer, ASCI Red, which in the late 1990s led the list of the 500 fastest supercomputers. That machine, like IBM's Blue Gene/L now under construction at Lawrence Livermore National Laboratory, used a stripped-down operating system on some computers and a fuller operating system for the control computers, said Jim Tomkins, Sandia's Red Storm project leader. Red Storm and Blue Gene/L use Linux as the operating system on the control nodes.
Red Storm uses a hybrid approach that can run software on both the master computers and the lightweight computers, Tomkins said. Programs written for the machine use the standard Message Passing Interface, or MPI, software used by many cluster computers.