
Repairs under way for server speed tests

Current tests haven't kept pace with server technology, but an industry consortium is coming to the rescue.

Servers, the brawny computers that power network services, have changed radically since 1992. One thing that hasn't kept pace, though, is the speed test most widely used to measure server performance.

That problem is now being addressed by an industry consortium called the Transaction Processing Performance Council, which plans a modernized successor to its 13-year-old TPC-C test.

The sequel has the working name of TPC-E and is expected to be available in 2006, said Jerrold Buggert, a Unisys representative to the consortium. It's designed to be more representative of modern database server work, less expensive to run and less susceptible to artificially high scores caused by oddball hardware and software configurations.


What's new:
Revisions are in the works for an outdated method that ranks the performance of server computers.

Bottom line:
The upcoming TPC-E test, due next year, is meant to be cheaper, more standardized and more cognizant of today's computing demands.


Benchmark scores aren't useful just for bragging rights or for engineers evaluating new designs. They're widely used by customers evaluating new purchases as a helpful, if imperfect, way to compare widely different collections of hardware and software.

The existing speed test, warts and all, is in fact critical to many customers' evaluations. "The most widely referenced result in RFPs"--the requests for proposals by which customers solicit bids--"is TPC-C," Buggert said. One reason it's more useful than alternatives is that it not only measures performance, but also provides a ratio of price to performance.

The importance of the test can be seen in the money and effort Hewlett-Packard invested in trying to find out why scores for its top-end Superdome server were lower than expected in 2001. The company spent more than $1 million trying to track down the problem, a figure it disclosed in a lawsuit that accused an employee of sabotaging the test.

The consortium isn't working just on an improvement to TPC-C. On Monday, it plans to launch a new benchmark for the midrange machines called application servers.

TPC-App and TPC-DS
The TPC-App test will measure how well these midrange machines perform typical tasks such as communicating with database servers, Web site servers and other application servers. Its score will be measured in Web-services interactions per second. And it will let people compare two dueling technologies used in application servers, .Net from Microsoft and Java from Sun Microsystems and several allies.

TPC-App will replace a test called TPC-W. That test had flaws: It was expensive, often requiring 25 servers to run, and it measured not just application server performance but also the performance of servers that cached information, as well as other ancillary machines. The process was so wide-ranging "it was hard to tell what you were measuring," Buggert said.

Companies involved in TPC-App's creation were IBM, Microsoft, HP, BEA Systems, Oracle, Dell, Unisys, Advanced Micro Devices and Intel, Buggert said.

A third test also is on the way, an alternative to the TPC-H test introduced in 1999 to measure "data warehouses," servers that process large amounts of data to extract information such as purchasing trends.

The new alternative, tentatively called TPC-DS, reflects more modern data warehouse usage. These machines might, for example, perform complex queries analyzing how different marketing campaigns affected sales in different regions, Buggert said. The new test also will support 135 different types of queries, compared with TPC-H's 25, a change that will make it harder to build systems optimized just for the benchmark--a practice that has produced artificially high scores.

TPC-C then and now
TPC-C has survived tumultuous years in the history of servers. In 1992, mainframes topped the server pecking order and Unix servers were just catching on. Since then, the server market has been remade by the advent of inexpensive machines using Intel processors, the demise of Digital Equipment Corp. and Compaq Computer, the arrival of Microsoft Windows and Linux, and the emergence of behemoths with as many as 128 processors.

The first TPC-C scores were tiny compared with today's scores. It wasn't until 1998 that servers produced scores of about 100,000 transactions per minute. In 2001, a Fujitsu system arrived to dominate the rankings with a score of 456,000, about the time Sun withdrew after objecting that the test no longer represented reality. In 2004, though, IBM blew the roof off the test with a score of 3.2 million transactions per minute.

The price-performance ratio--the system's cost divided by its throughput, or the amount of work a computer can do in a given time period--also has changed dramatically as server prices have dropped. The ratio was more than $200 per unit of throughput for systems released before 1996--and nearly $1,200 in one case.

IBM published its first TPC-C result in May 1994, with a score of 485 transactions per minute and a price-performance ratio of $654, the consortium said. Today, the ratio for IBM's p5-575 is $5.19.
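The ratio itself is simple division: total system cost over throughput. A minimal sketch in Python--the 1994 system cost here is back-derived from the score and ratio above, not a figure reported by the consortium:

```python
def price_performance(system_cost_usd: float, tpmc: float) -> float:
    """TPC-C price/performance: total system cost divided by throughput (tpmC)."""
    return system_cost_usd / tpmc

# IBM's May 1994 result: 485 transactions per minute at $654 per unit of
# throughput implies a total system cost of 485 * 654 = $317,190 (derived).
cost_1994 = 485 * 654
print(price_performance(cost_1994, 485))  # 654.0
```

The same arithmetic explains why the ratio fell so far: throughput grew by orders of magnitude while system prices did not.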

TPC-C essentially simulates a computer database that manages a warehouse's inventory, processing basic transactions such as order placement, Buggert said. TPC-E simulates an electronic brokerage and includes much more sophisticated processes.

One such process is "two-phase commit" transactions in which a database operation can't be completed until a related operation on another database is completed, Buggert said. Another is "referential integrity," which makes sure a database isn't thrown off if one element is changed or deleted by one process while another process is using that element.
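Referential integrity is enforced by any database that checks foreign keys. A minimal SQLite sketch--the customer and trade tables are hypothetical illustrations, not TPC-E's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces foreign keys only when enabled
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE trade (
    id INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customer(id))""")
conn.execute("INSERT INTO customer VALUES (1, 'Alice')")
conn.execute("INSERT INTO trade VALUES (10, 1)")

# Deleting a customer that an open trade still references is rejected,
# so the trade table is never left pointing at a missing row.
try:
    conn.execute("DELETE FROM customer WHERE id = 1")
except sqlite3.IntegrityError as e:
    print("blocked:", e)
```

Two-phase commit extends the same idea across databases: each participating database first stages its half of the transaction, and only when all participants report success does any of them make the change permanent.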

TPC-C's flaws
One problem with TPC-C was the ease with which certain servers could generate unrealistically high scores using hardware and software configurations that are highly improbable in the real world. For example, high TPC-C scores come from servers with colossal numbers of hard drives--6,548 in the case of IBM's top score.

Another problem hinged on the fact that TPC-C's test was easily distributed among relatively independent servers linked in a cluster. That gave the impression that a number of inexpensive machines were as good as a single multiprocessor behemoth, leading the consortium to list results separately for clustered and non-clustered systems.

Real-world database tasks today don't divide so easily, a fact that the upcoming TPC-E will reflect, Buggert said. "It should be a more realistic representation of what you get when you cluster things in the real world," he said.

A real comparison between clustered and non-clustered results should be useful to customers evaluating clustered databases, which are becoming more powerful with the gradual maturation of technology such as Oracle's Real Application Clusters and the InfiniBand high-speed communication link hardware.

Buggert said TPC-E is being created by the major sellers of databases, processors and servers--including Sun.