When Microsoft announced Monday that it had surpassed America Online and Yahoo as the most popular Web destination, critics blasted the software giant for relying on internal data instead of third-party information. Typically, companies hoping to score points with investors tout numbers from Media Metrix, PC Data or Nielsen/NetRatings, among the most popular Web site ranking services.
But some skeptics argue that independent rankings are not necessarily more accurate than companies' internal numbers. Because no one has created industry-wide usage standards, and several companies purport to be the most credible ranking services, experts say that gauging Web traffic is an imprecise craft in need of an operational overhaul.
"If there is a good standard, I haven't seen one," said Anthony Carbone, an advertising specialist and chairman of the technology group at New York law firm Richards & O'Neil. "There's still a lot of movement to rate and test these things. I think there's going to be lots of changes on this front, and you're just starting to see it."
To be sure, the ratings game has become a lot more manageable than it was in the early to mid-'90s, when dot-coms threw out statistics that had pomp but little merit.
Back then, no one agreed on the most worthwhile measurement of Web use--hits, unique visitors, the amount of time a visitor spent at a site, and so on. The industry now agrees that unique visitors--the actual number of individuals who visit a site at least once, regardless of how many times they return--is the most telling measure of a site's worth.
But there are still discrepancies between ratings companies on the exact definition of "unique visitor" and the best way to track them. And there are even more discrepancies in the published rankings.
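The distinction between raw hits and unique visitors is easy to see in miniature. The sketch below uses a hypothetical logfile of (visitor, date) pairs--the names and dates are illustrative, not drawn from any ranking service's actual data--to show why the two measures diverge:

```python
from datetime import date

# Hypothetical log entries: (visitor_id, date) pairs, as a measurement
# service might record them. All identifiers here are made up.
log = [
    ("user_a", date(2000, 6, 1)),
    ("user_a", date(2000, 6, 1)),   # same person, same day: one more hit
    ("user_b", date(2000, 6, 2)),
    ("user_a", date(2000, 6, 15)),  # a return visit adds no new visitor
    ("user_c", date(2000, 6, 20)),
]

hits = len(log)                                         # every request counts
unique_visitors = len({visitor for visitor, _ in log})  # each person counts once

print(hits, unique_visitors)  # 5 hits, but only 3 unique visitors
```

The numbers drift apart further once services disagree on details--for example, whether a visitor who returns after the reporting period's boundary counts once or twice--which is exactly the definitional gap the ranking companies have not closed.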
For example, Media Metrix ranked About.com sites as the seventh largest among U.S. visitors who logged on from home in June, with 13.03 million visitors.
For the same period, About.com did not even figure in the top 10 at Nielsen/NetRatings, which ranked it 11th. According to Nielsen, About.com sites had 11.93 million visitors.
Bruce MacEvoy, director of research for Yahoo, found many more discrepancies between rankings and detailed his findings in a scathing report on the industry. The survey, published at an online advertising conference in October, found "huge differences in site traffic," ranging from about 10,000 to 20 million page requests per day.
The report also found that companies that conduct Web surveys report 137 percent of a site's domestic traffic, compared with audited server measurements of logfile page requests--themselves a hotly contested means of determining traffic. The report concluded that measuring traffic from international visitors is even more difficult because geographic location is unknown for about 15 percent of all Internet users.
Backed by the Future of Advertising Stakeholders (FAST), the Internet Advertising Bureau (IAB) and the Advertising Research Foundation (ARF), the report found that rankings over two months became more divergent after the top six sites. Over a single month of data, rankings diverged after the top three sites.
That led many experts to conclude that ranking discrepancies hurt smaller Internet companies more than the largest ones. Such mismatches can prove especially difficult for companies that are struggling for new visitors, new investors or both.
For example, executives at Salon.com, an online magazine whose stock price has slumped more than 80 percent since March, have touted internal estimates that indicate the site had 3.7 million unique visitors this spring. Widely used data from Media Metrix put the company's usage at about half that.
Rich LeFurgy, chairman of the New York-based IAB and general partner at San Francisco-based venture capital firm Walden VC, said discrepancies between rankings harm more than just small companies: Anyone who advertises on the Web or invests in Internet companies has a stake in the rankings.
Because data from different companies often conflicts, LeFurgy said, it is difficult to determine valid pricing models for Internet ads. If a site actually has dramatically more or fewer visitors than the numbers suggest, shouldn't its ad space cost more or less than the advertiser paid?
Similarly, when venture capital firms trawl for the next hot property on the Internet, they often refer to rankings companies, which publish lists of the fastest-growing sites in a given week or month. If these sites are not necessarily as popular as the rankings suggest, they may not be as strong an investment.
"When you look at the essential issues on the Internet today, there's only a few of them--privacy and measurements," LeFurgy said. "Those are the key issues that need to be resolved and accelerated to get us to the next level. It's exacerbated and compounded by the proliferation of platforms, the expansion of the digital domain from the Internet to wireless, and broadband and enhanced TV. We're just beginning to get visibility into the issue."
Experts blame the problem in part on the fact that there are several large ranking companies, primarily Media Metrix and Nielsen/NetRatings, and each company calculates rankings differently. For other media, one large ranking company provides undisputed data: Nielsen for television, Arbitron for radio and the Audit Bureau of Circulation for many print publications.
Media Metrix uses a sample of more than 100,000 Internet surfers in nine countries, including 55,000 in the United States. PC Data has about 120,000 people worldwide. Nielsen/NetRatings has a sample of 150,000 Internet users in 15 countries, with 65,000 in the United States--the largest Internet media research sample.
To properly measure traffic on the Web, according to MacEvoy's report, companies may need to sample as many as 1 million Internet users.
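A back-of-the-envelope calculation suggests why sample size matters so much here. The sketch below is not MacEvoy's methodology--it assumes a simple random sample and uses the textbook standard error of a proportion, with an assumed U.S. online population of 100 million purely for illustration--but it shows how the uncertainty band around a mid-sized site's audience shrinks as panels grow:

```python
import math

def margin_of_error(share, sample_size, z=1.96):
    """95% margin of error for an estimated audience share,
    assuming a simple random sample of the online population."""
    standard_error = math.sqrt(share * (1 - share) / sample_size)
    return z * standard_error

POPULATION = 100_000_000  # assumed U.S. online population (illustrative)
SHARE = 0.02              # a site reaching 2% of users, i.e. ~2 million visitors

for sample in (55_000, 65_000, 1_000_000):
    moe = margin_of_error(SHARE, sample)
    low = (SHARE - moe) * POPULATION
    high = (SHARE + moe) * POPULATION
    print(f"panel of {sample:>9,}: {low/1e6:.2f}M to {high/1e6:.2f}M visitors")
```

With a 55,000-person panel, the estimate for a two-million-visitor site spans a range of roughly a quarter-million visitors either way--easily enough to shuffle the order of closely ranked mid-tier sites, which is consistent with the report's finding that rankings diverge below the very top slots.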
Rankings companies say consolidation will pare the industry until one giant ranking company emerges. They also insist that the industry is definitely headed toward a unified standard.
But deciding who creates the standard and which company administers it has become a fierce battle. No company wants to disclose its methodology, and no company wants to admit it is redundant.
A cry for standards
Companies also may have a financial interest in not having a uniform standard. Most big Internet companies, including AOL and Yahoo, pay hundreds of thousands of dollars to numerous data crunchers; they typically contract with at least two rankings companies to ensure that discrepancies are minimized through the law of averages.
Jim Carey, director of public relations for PC Data, said the solution lies in an independent consortium of measuring experts.
"Being in Washington, where there's a trade association for everything, I think we need a group to say, 'Yes, there are differences, but we need to settle those,'" Carey said.
"We all acknowledge it needs to happen, but no one has stepped up to the plate to handle it. Right now, it's disadvantageous to come up on one side or the other," Carey said. "It's about who's winning the rhetoric war. To step aside and say we need to clear this up, you have to expose yourself and start from scratch."
Stacie Leone, director of marketing and communications at New York-based Media Metrix, said she would like to see a standard. But she does not want any other company to create it.
"Yes, everyone wants a standard. We believe that Media Metrix has become the standard," Leone said. "When we go to a new country...there's a huge sigh of relief: 'Finally, the third-party objective measurer is here,' people say."
"I would characterize the market right now as in bake-off mode. Clients are trying to get a handle on which data source they believe (is) most accurately describing what's going on," Meadows said. "Part of our battle cry is...that our sample is much more representative of what's going on in the marketplace and is therefore more accurate."
Online marketing and advertising expert Thomas Bailey Jr., president of Alliance-Strategies in Portola Valley, Calif., said the fierce battle lines between rankings companies do not bode well for a standard anytime soon. When asked how to solve the ratings problem, Bailey was blunt.
"You don't," he said. "You just realize that none of them are perfect. The numbers are almost meaningless--like trying to compare apples with bananas."