
IBM edges out HP in server speed test

Big Blue's top-end Unix server moves ahead of its Hewlett-Packard competition in a widely watched server speed test--but HP expects to reclaim its position by the end of March.

Stephen Shankland
IBM's top-end Unix server has moved ahead of its Hewlett-Packard competition in a widely watched server speed test--but HP expects to reclaim its position by the end of March.

By doubling the memory in its 32-processor p690 "Regatta" server from 256GB to 512GB, IBM increased its speed measurement from 403,000 to 428,000 transactions per minute, a smidgen faster than the 423,000 from HP's 64-processor Superdome. HP's system remains less expensive, though, at $6.6 million to IBM's $7.6 million.

The test, run by the Transaction Processing Performance Council (TPC), measures the performance of a server handling simulated database transactions such as checking inventory, placing orders and recording payments. The top-ranked machine still is a 128-processor behemoth from Fujitsu, but it uses unconventional database software. HP and IBM use Oracle 9i.

Servers--powerful networked computers that typically run 24 hours a day--are a lucrative market, with $10.5 billion in sales in the third quarter of 2002, according to research firm Gartner Dataquest. IBM, Sun Microsystems, HP and Dell Computer all are scrapping for position in a market that has been shrinking for two years amid the recession and declining Internet-related spending.

The TPC-C test has been dominated by servers running the Unix operating system, but those with Intel processors and Microsoft's Windows operating system have begun climbing the ladder in recent months. An NEC system with 32 Intel Itanium processors ranks sixth on the list, while a Unisys system using 32 Intel Xeon processors holds seventh place.

HP disparaged IBM's TPC-C result as a "minor performance tune" and said in a statement that its Superdome server will surpass IBM by the end of March. Future Superdome upgrades will give the system a score of more than 1 million, the company said.

Sun, which has the top market share in Unix servers, has stopped participating in the TPC-C test, arguing that it's no longer representative of true server tasks.

"It is measuring a workload that is more than 10 years old and is no longer relevant," said Chief Competitive Officer Shahin Khan. The company favors benchmarks of specific applications, such as those from Oracle, PeopleSoft, J.D. Edwards and Manugistics, which Khan said are "more immediately translated to customer requirements."

Sun's systems still are respectable, but it's probably not a coincidence that the company withdrew from that benchmark challenge.

"There's no real reason to believe that Sun wouldn't perform adequately on TPC-C, but it's a good guess that they wouldn't be the leader," said Gordon Haff, an Illuminata analyst. But Sun's tirade against the benchmark means there's no graceful way it could re-enter the fray. "They've made so much noise about how TPC-C is meaningless that they can't really reverse themselves," he said.

More specifically, Sun's complaint with TPC-C is that each transaction in the test triggers just a handful of operations--a debit command followed by a credit command, for example. Nowadays, though, a single transaction typically requires thousands of instructions, Haff said.
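The contrast Sun draws can be seen in a toy example. The sketch below is hypothetical and is not TPC-C code; it simply shows a two-statement debit/credit transaction of the kind Khan describes, using SQLite for illustration:

```python
import sqlite3

# Illustrative only: a minimal transaction touching just two rows --
# a debit followed by a credit -- the sort of tiny workload Sun says
# no longer resembles modern, instruction-heavy transactions.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100), (2, 100)])

with conn:  # one transaction: commits on success, rolls back on error
    conn.execute("UPDATE accounts SET balance = balance - 10 WHERE id = 1")  # debit
    conn.execute("UPDATE accounts SET balance = balance + 10 WHERE id = 2")  # credit

balances = dict(conn.execute("SELECT id, balance FROM accounts"))
print(balances)  # {1: 90, 2: 110}
```

Because so little data is touched per transaction, a benchmark built from operations like these can be served almost entirely from cache, which is the "loophole" critics point to.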

While a small number of transactions can be stored in high-speed cache memory, greatly speeding benchmark results, real-world applications often require information to be retrieved from slower main memory or even slower hard drives, Khan said.

IBM, unsurprisingly, disagreed that TPC-C is irrelevant.

"Benchmarks matter," said Carole Gottleib, manager of IBM's Unix server performance. "In the mind of customers they are still an important part of the sales process. The TPC-C benchmark is a good indicator of overall system performance for transaction processing workloads. It takes into account the processing architecture and speed, memory, storage subsystems and the database engine."

Analyst firm IDC advises against relying too heavily on TPC-C, which it describes as "perhaps the most famous and frequently used" server transaction benchmark. The way TPC-C uses databases opens a "loophole" that testers can exploit but that customers aren't likely to benefit from, and IDC agrees with Sun that the TPC-C transactions are simpler than those required in the real world.

"For true benchmark validity, customers should use a variety of independent software vendor benchmarks, such as Oracle, Baan or PeopleSoft, in conjunction with industry standards from organizations such as TPC and SPEC, the Standard Performance Evaluation Corporation," IDC said in a report.