Big Blue, Big Bang, big data: Telescope funds computing R&D

IBM is trying to advance the supercomputing state of the art in memory, optical links, and processing with research stemming from a massive radio telescope project.

This circuit board is covered with antennas geared to listen to radio signals between about 450MHz and 1.5GHz. The Square Kilometer Array project aims to deploy antennas with a combined collecting area of one square kilometer across Earth's southern hemisphere. Stephen Shankland/CNET

HANOVER, Germany -- IBM is trying to advance supercomputing technology in processing, optical communications, and memory in conjunction with an international project to peer at the Big Bang's radio remnants.

The radio telescope, called the Square Kilometer Array (SKA), will be built from 2016 to 2024 in southern Africa and Australia. Before that, though, IBM is working to develop the necessary computing technology through a five-year partnership with the Netherlands Institute for Radio Astronomy (Astron). At the CeBIT show here, the two groups are showing off some of the fruits of the collaboration, called Dome.

The idea is to create computing systems that can handle the tremendous quantity of data from the radio telescope, said Ronald Luijten of IBM Research in Zurich. It will produce 14 exabytes of data each day -- about 14 million times as much as an ordinary PC's hard drive can hold.

And the data won't just sit idle; it must be boiled down to about a thousandth of its size for further processing, and much of that processing will happen far from the telescope antennas themselves. IBM is therefore engineering several technologies in hopes that they'll be ready by the time the telescope needs them.
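Some quick arithmetic puts those figures in perspective. The sketch below uses only the numbers quoted above; the decimal storage units and the assumption of steady, around-the-clock output are ours.

```python
# Back-of-the-envelope arithmetic for the data rates quoted above.
# The 14 exabytes/day figure comes from the article; the rest is
# unit conversion under the assumption of a steady output rate.

EXABYTE = 10**18            # bytes (decimal units, as storage vendors count)
SECONDS_PER_DAY = 86_400

daily_volume = 14 * EXABYTE                        # bytes per day
sustained_rate = daily_volume / SECONDS_PER_DAY    # bytes per second

print(f"Sustained ingest: {sustained_rate / 10**12:.0f} TB/s")
# -> roughly 162 TB/s, on the order of a petabit per second

# Boiling the stream down "to about a thousandth the size" still
# leaves a formidable volume to ship downstream for processing:
reduced_daily = daily_volume / 1000
print(f"Reduced daily volume: {reduced_daily / 10**15:.0f} PB/day")
# -> about 14 PB/day
```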

The telescope itself, funded by a 10-nation consortium, is spread over a tremendous area to gather radio signals from 13 billion light-years away. Those signals date from 13 billion years ago and are very faint, but they record the formation of the universe's earliest structures: galaxies and quasars.

The project will actually involve thousands of telescopes scattered across the southern hemisphere, coordinated in where they gather data. The radio signals will be buried in noise, but with so many telescopes, scientists expect to be able to discern the signal amid that noise.
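The statistics behind that expectation can be sketched in a few lines. The toy model below illustrates the averaging gain from combining many noisy measurements; it is not the SKA's actual correlation pipeline, and the signal and noise levels are invented for illustration.

```python
# Toy illustration of why many antennas help: averaging N independent
# noisy measurements of the same signal improves the signal-to-noise
# ratio by roughly sqrt(N).
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)
signal = 0.1 * np.sin(2 * np.pi * 5 * t)   # faint underlying signal

def snr_after_averaging(n_antennas: int) -> float:
    # Each "antenna" sees the same signal buried in independent noise.
    observations = signal + rng.normal(0.0, 1.0, size=(n_antennas, len(t)))
    averaged = observations.mean(axis=0)
    residual_noise = averaged - signal
    return float(np.std(signal) / np.std(residual_noise))

for n in (1, 100, 10_000):
    print(f"{n:>6} antennas -> SNR ~ {snr_after_averaging(n):.2f}")
# SNR grows roughly as sqrt(n): ~0.07, ~0.7, ~7 for this toy signal
```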

The unified telescope effort also is designed to investigate pulsars, black holes, dark energy, gravitational waves, and more. It comprises three categories of antennas to gather radio waves in frequencies of 70MHz-450MHz, 450MHz-1.5GHz, and 1.3GHz-10GHz, said Albert Jan Boonstra of Astron's research and development group.
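Expressed as a simple lookup, the three categories overlap slightly at the top of the range. The band labels in this sketch are illustrative, not official SKA terminology; only the frequency limits come from Boonstra's figures.

```python
# The three frequency bands quoted above, as a simple lookup table.
# Band names are illustrative labels; note the middle and upper
# ranges overlap between 1.3GHz and 1.5GHz.
BANDS = [
    ("low",  70e6,  450e6),   # 70 MHz - 450 MHz
    ("mid",  450e6, 1.5e9),   # 450 MHz - 1.5 GHz
    ("high", 1.3e9, 10e9),    # 1.3 GHz - 10 GHz
]

def bands_for(frequency_hz: float) -> list[str]:
    """Return every antenna category that covers the given frequency."""
    return [name for name, lo, hi in BANDS if lo <= frequency_hz <= hi]

print(bands_for(1.4e9))   # ['mid', 'high'] -- the 21cm hydrogen line
```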

IBM microservers use tiny circuit boards with ordinary Freescale processors that slot into a 3.5-inch-tall server. The white prototype below shows how a copper cooling system attaches to the circuit board. Stephen Shankland/CNET

Microservers
One technology, called microservers, handles processing duties. Servers have shrunk steadily from mainframes to today's rack-mount and blade models, but IBM hopes to push the idea further with servers only about as big as two business cards side by side. Each has a Freescale-built PowerPC processor and memory. Electrical contacts along one edge of the circuit board mean many microservers can be plugged into a chassis 3.5 inches tall.

Making a small server is easy, but packing multiple servers densely enough to make them useful is harder. IBM's approach is hot-water cooling. First, the top of the chip package is removed and a copper cooling plate is attached that reaches to both sides of the microserver. Hot water -- about 50 degrees Celsius -- runs along the edges of each row of microservers, connected closely enough to the cooling plates to carry the chips' heat away.

Why hot water? Cold water is nice and has a long history of use in data-center cooling, but, "It's very expensive to use chilled water," Luijten said.

Ronald Luijten of IBM Research in Zurich speaking at CeBIT 2013. Stephen Shankland

Warmer water gets the job done as long as the chip and packaging are designed appropriately and as long as there's a direct connection between the chip and the cooling system, Luijten said. The chip itself runs at about 85 degrees Celsius.
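A rough thermal budget shows why that pairing works. In the sketch below, the 85-degree chip and 50-degree water figures come from Luijten; the 30-watt power draw is our own assumption for illustration, not an IBM specification.

```python
# A rough thermal-budget sketch for hot-water cooling. Chip and water
# temperatures come from the article; the power figure is assumed.
chip_temp_c  = 85.0   # chip operating temperature (from the article)
water_temp_c = 50.0   # coolant temperature (from the article)
power_watts  = 30.0   # assumed microserver heat output (hypothetical)

delta_t = chip_temp_c - water_temp_c   # usable temperature headroom

# Heat flows through the copper plate at Q = delta_T / R_th, so the
# whole chip-to-water path must stay below this thermal resistance:
max_thermal_resistance = delta_t / power_watts
print(f"Budget: {max_thermal_resistance:.2f} K/W chip-to-water")
# -> ~1.17 K/W. Chilled water would relax this budget, but producing
#    it costs far more energy -- the trade-off Luijten describes.
```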

The microservers communicate over a system data pathway that carries 10-gigabit Ethernet traffic as well as communications with disks, USB devices, and other peripherals plugged into the system.

Using ordinary Freescale chips means that, unlike supercomputers built with IBM's Blue Gene technology, commodity pricing applies and ordinary software runs, he added.

Handling the data
To get the data to the processors, IBM plans to use optical interconnects rather than copper wiring. Fiber-optic links are getting steadily cheaper, but they're still expensive. IBM expects to pull data from the antennas themselves with optical links. That's not just because of the long-distance, high-throughput transmission requirement. It's also a practical necessity given that "you start to self-pollute your system" with electromagnetic radiation from copper wires, Luijten said.

The Dome project is also investigating how to economically push photonic interconnects deeper into the system: system-to-system and chip-to-chip connections.

When it's time to store data, IBM is looking at the nascent technology of phase-change memory. It's faster and more durable than flash memory and, like flash, it can store data even when the power is switched off. It's not as fast to respond as DRAM, though -- something like 3 to 10 times slower -- but it takes about a tenth the energy to store a bit of data, and energy consumption is a critical constraint in computer design these days.
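Since those are ratios rather than absolute figures, the trade-off is easiest to see in normalized terms. The sketch below uses the article's 3-to-10x latency and one-tenth energy numbers; the workload mix is hypothetical.

```python
# Normalized trade-off sketch using the figures in the article:
# phase-change memory (PCM) responds ~3-10x slower than DRAM but
# stores a bit for ~1/10th the energy. All units are relative to
# DRAM (= 1.0), since the piece gives ratios, not absolute values.
def relative_cost(latency, energy_per_write, reads, writes):
    """Crude score: total relative time and energy for a workload."""
    return {"time": (reads + writes) * latency,
            "energy": writes * energy_per_write}

workload = {"reads": 1_000_000, "writes": 9_000_000}  # store-heavy mix

dram = relative_cost(1.0, 1.0, **workload)
pcm  = relative_cost(6.0, 0.1, **workload)   # mid-range of 3-10x

print(f"PCM vs DRAM time:   {pcm['time'] / dram['time']:.1f}x")
print(f"PCM vs DRAM energy: {pcm['energy'] / dram['energy']:.1f}x")
# -> ~6x the time but ~0.1x the energy for this write-heavy mix,
#    which is why PCM appeals when power is the binding constraint.
```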

The Dome project also is investigating the use of programmable accelerator chips specialized for very fast performance on tasks such as pattern recognition, data filtering, and mathematical transformations.
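A frequency-domain filter is a representative example of such a task. The NumPy sketch below shows the computation itself; a real deployment would offload this kind of kernel to accelerator hardware rather than run it in Python, and the sample rates and tones here are invented for illustration.

```python
# A frequency-domain filter built on the fast Fourier transform, a
# staple of radio astronomy signal processing. Functional sketch of
# the task an accelerator would handle, not production code.
import numpy as np

def bandpass(samples: np.ndarray, rate_hz: float,
             lo_hz: float, hi_hz: float) -> np.ndarray:
    """Keep only spectral content between lo_hz and hi_hz."""
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate_hz)
    spectrum[(freqs < lo_hz) | (freqs > hi_hz)] = 0.0
    return np.fft.irfft(spectrum, n=len(samples))

# Example: isolate a 5 kHz tone buried in wideband noise.
rate = 48_000.0
t = np.arange(48_000) / rate
noisy = np.sin(2 * np.pi * 5_000 * t) + np.random.normal(0, 2, t.size)
clean = bandpass(noisy, rate, 4_500, 5_500)
```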

About the author

Stephen Shankland has been a reporter at CNET since 1998 and covers browsers, Web development, digital photography and new technology. In the past he has been CNET's beat reporter for Google, Yahoo, Linux, open-source software, servers and supercomputers. He has a soft spot in his heart for standards groups and I/O interfaces.
