HP dings IBM on server speed test

Stephen Shankland

After a drubbing at the hands of IBM on a server speed test, the leader of Hewlett-Packard's high-end server group launched a counterattack at Big Blue on his blog Monday. Rich Marcello, though, missed an opportunity to attack the speed test itself.

In November, IBM's new p5-595 server nearly tripled the number of transactions per minute that HP's top-end Superdome could perform on a speed test called TPC-C. But Marcello questioned the usefulness of IBM's accomplishment.

"The result has little real-world significance," Marcello said. He presented several reasons: The next-generation "Montecito" version of Intel's Itanium processor, to be used in a forthcoming Superdome model, beat out IBM's Power5 processor in the Microprocessor Report's award of server processor of the year. Power5 is expensive. IBM's test system used the company's DB2 database software, even though many customers use Oracle's. It used huge amounts of expensive memory. It used the newest edition of IBM's version of Unix, AIX 5.3, for which all software isn't yet optimized. And HP's Superdome fares quite well on a number of other tests.

Perhaps. It's hard to blame IBM too much for choosing the latest hardware and software to run its tests, even if its $16.7 million system cost twice as much as HP's $8.3 million machine. Besides, nearly tripling the throughput at twice the price still works out to a lower cost per transaction. And while IBM may indeed have ventured far from customer practices by using a gargantuan 2 terabytes of memory, HP wasn't far behind with 1 terabyte.

Marcello's blog post describes how the executive's rhetorical prowess triumphs over a customer's doubts. A more surprising and helpful conclusion might have been a call for a better speed test.

Sun Microsystems was the first to drop out of the TPC-C race, calling the test obsolete. IBM offered an alternative benchmark in 2004, the Virtualization Grand Slam. That benchmark measures how well a server handles multiple jobs simultaneously, a very relevant idea, given that today's mammoth servers can be divided into separate independent partitions.

HP itself agrees with this direction. Brian Cox, product line manager for HP's high-end servers, said he, too, wants speed tests to take this approach. Almost no customers run today's biggest servers for a single job, he said.

Every benchmark grows out of date, and it's good there's work under way to give customers a more useful yardstick. Maybe Marcello's next blog posting will take aim at TPC-C itself, not just at IBM.