
Microsoft flexes muscles on supercomputing jobs

Microsoft unveils new supercomputing work that makes use of its high-performance computing server product. The highlight is a project that saved money and electricity by moving to the cloud.

Josh Lowensohn

Microsoft today unveiled its behind-the-scenes work on porting a popular suite of supercomputing software tools to its Azure cloud platform. The work culminated in a test job that the company says would have cost an estimated $3 million on traditional on-premises hardware, but that got done for a little more than $18,000 using a hybrid approach.


That job, which is part of Microsoft's focus at the Supercomputing 2010 conference in New Orleans, was done as a collaboration between Microsoft and Seattle Children's Hospital. Together, the teams ran a large protein sequence chain through BLAST, a software tool set designed to churn through sequence databases, which in this case covered all known DNA base pairs in the human genome. Drug companies frequently use BLAST when designing new drugs, to gauge how the human body will react before moving into additional phases of testing.
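For a sense of what such a search looks like in practice, here is a minimal sketch that drives the standard NCBI BLAST+ command-line tools from Python. The program variant (blastp), database name, and file names are illustrative assumptions, not details from Microsoft's run.

```python
# Illustrative only: running a protein search with NCBI BLAST+.
# Database and file names here are assumptions, not Microsoft's setup.
import subprocess

def run_blast_search(query_fasta: str, database: str, output_file: str) -> None:
    """Search a protein query against a pre-formatted BLAST database."""
    subprocess.run(
        [
            "blastp",               # protein-vs-protein search from NCBI BLAST+
            "-query", query_fasta,  # FASTA file holding the protein sequence(s)
            "-db", database,        # database to churn through
            "-out", output_file,    # where to write the hit report
            "-outfmt", "6",         # tabular output, one hit per line
        ],
        check=True,                 # raise if blastp exits with an error
    )

run_blast_search("proteins.fasta", "human_genome_db", "hits.tsv")
```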

The twist, though, is that instead of running this job on just local machines, Microsoft ported the BLAST software to work in Azure, where it could cook away not only on the hospital's computers but in Microsoft's Azure cloud as well. As a result, Microsoft is now offering the findings from that test to the scientific community, along with a version of the BLAST tool that runs on Azure.

To implement the test run, Microsoft had to port BLAST to Azure. The company has also updated its high-performance computing (HPC) server product so that users can scale up a job with extra Windows Azure nodes, with the two technologies working together in parallel. As Bill Hilf, Microsoft's general manager of technical computing, told CNET last week, this ends up being a "concert" of processing power when it works right, which he said it will once users grab the first service pack for Windows HPC Server 2008 R2, due out by the end of the year.
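Conceptually, that "concert" is a burst pattern: local nodes take on what they can, and the overflow is farmed out to cloud nodes running in parallel. The sketch below illustrates the general idea in Python; it is not the Windows HPC Server or Azure API, and the slot counts and worker names are assumptions.

```python
# Generic illustration of hybrid "burst to the cloud" scheduling.
# NOT the Windows HPC Server API; slot counts and names are assumptions.
from concurrent.futures import ThreadPoolExecutor

ON_PREMISES_SLOTS = 8   # assumed capacity of the local cluster
CLOUD_SLOTS = 64        # assumed extra capacity rented from the cloud

def process_chunk(chunk_id: int, where: str) -> str:
    # Stand-in for one unit of work, e.g. one slice of a BLAST job.
    return f"chunk {chunk_id} finished on {where}"

def run_hybrid(num_chunks: int) -> list[str]:
    """Fill local slots first, then overflow onto cloud slots in parallel."""
    local = list(range(min(num_chunks, ON_PREMISES_SLOTS)))
    overflow = list(range(len(local), num_chunks))
    with ThreadPoolExecutor(max_workers=ON_PREMISES_SLOTS) as on_prem, \
         ThreadPoolExecutor(max_workers=CLOUD_SLOTS) as cloud:
        futures = [on_prem.submit(process_chunk, i, "on-premises") for i in local]
        futures += [cloud.submit(process_chunk, i, "cloud") for i in overflow]
        return [f.result() for f in futures]

print(run_hybrid(100)[:3])
```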

Along with the $3 million example, Hilf also pointed to BLAST running on Azure at a smaller scale, with researchers at UW's Harwood Lab running a 5,000-chain sequence that was completed in half an hour for $150.
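Assuming the 5,000 figure counts individual chains, that price pencils out to about three cents apiece:

```python
# Back-of-the-envelope cost for the Harwood Lab run,
# assuming 5,000 chains processed for $150 total.
total_cost_usd = 150
chains = 5_000
print(f"${total_cost_usd / chains:.2f} per chain")  # $0.03
```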

The Tokyo Institute of Technology's Tsubame 2.0 supercomputer. (Photo: Tokyo Institute of Technology)

Besides the protein test, Microsoft also worked with the Tokyo Institute of Technology on its Tsubame 2.0 supercomputer, benchmarking how much computational power the system could push out while running the Windows HPC Server software.

"There's always this technical machismo," Hilf said of supercomputer projects. The main appeal for this one, Hilf explained, was that it uses a handful of operating systems and software. "They run our stuff on it. They run Linux on it," he said.

Microsoft and the Tokyo Institute of Technology got Tsubame 2.0 to reach a petaflop, which works out to a quadrillion floating-point operations per second, all the while running at an efficiency of more than a gigaflop per watt. That metric, Hilf explained, is a better computing-to-energy ratio than a standard laptop's, which becomes increasingly important to track with supercomputing projects that can use a high number of machines at once, for long periods of time.
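Taken together, those two figures imply the machine's rough power draw; a quick back-of-the-envelope check:

```python
# What a petaflop at ~1 gigaflop per watt implies for power draw.
peak_flops = 1e15        # 1 petaflop = 10**15 floating-point ops per second
flops_per_watt = 1e9     # reported efficiency: "more than" 1 gigaflop per watt
max_power_watts = peak_flops / flops_per_watt
# Higher efficiency means lower draw, so this is an upper bound.
print(f"at most ~{max_power_watts / 1e6:.0f} MW at peak")  # ~1 MW
```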

"One of the most important aspects for us is not just hitting that level of performance but doing it with software anyone can buy, and it's not some special mutant variety just to hit this number," Hilf said.

Updated at 11:25 a.m. PDT with a correction about where the large-scale test was done, and additional information about where other tests were conducted.