
Microsoft: Google's Bing test was 'good subterfuge'

CNET talks to Bing Director Stefan Weitz about Google's claims that Microsoft copied search results from its index.

Josh Lowensohn

Microsoft wants to make it very clear that Bing is no copycat.

"I want to make sure you understand that we were not copying results from any of our competitors, period," Bing Director Stefan Weitz told CNET in an interview this afternoon. "It's almost insulting a little bit, because we've got all these guys and gals that are working their butts off to do this, and it's categorically not accurate. It's an illogical statement to make," he said.

Following yesterday's claims by Google that Microsoft had been copying the company's search results in its Bing search engine, Microsoft denied the allegations publicly, both on stage at its Bing Farsight conference in San Francisco and in a blog post by Harry Shum, Microsoft's corporate vice president of Bing.

So why the follow-up post issued this morning by Yusuf Mehdi, Microsoft's senior VP of its Online Services Division, saying more of the same?

"I think we were still seeing these reverberations," Weitz said. "I think part of it is that we want to make sure people and engineers who worked on this project get credit for what they've done. And it's important to make sure people are clear about what's happening with this data."

As for that click stream data, Weitz was adamant that Google's "honeypot" test, which seeded a batch of synthetic search terms through IE and the company's Bing bar, was not representative of how Bing builds its ranking or its overall search index. "It's a constantly evolving set of signals that we use to weight it, so it's hard to even say how much any of this stuff weighs unless you run a particular query and run probes against it to see what's happening there," Weitz said.
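Weitz doesn't spell out the signal mix, but the idea of a constantly tuned, weighted blend of many signals can be sketched in a few lines of Python. Everything below is a made-up illustration, not Bing's actual system; the signal names, weights, and pages are assumptions:

    # Toy sketch of blending several ranking signals into one score.
    # Signal names, weights, and pages are illustrative, not Bing's real mix.
    SIGNAL_WEIGHTS = {
        "text_match": 0.45,      # query/document text relevance
        "link_authority": 0.30,  # inbound-link quality
        "freshness": 0.10,       # recency of the page
        "clickstream": 0.15,     # aggregated user-click behavior, one of many
    }

    def score(signals):
        """Weighted sum over whichever signals are present for a page."""
        return sum(w * signals.get(name, 0.0) for name, w in SIGNAL_WEIGHTS.items())

    pages = [
        {"url": "a.example", "signals": {"text_match": 0.9, "link_authority": 0.7}},
        {"url": "b.example", "signals": {"clickstream": 1.0}},  # clickstream only
    ]
    for page in sorted(pages, key=lambda p: score(p["signals"]), reverse=True):
        print(page["url"], round(score(page["signals"]), 3))

In a blend like this, a page supported only by click stream data scores far below one backed by several signals, which is the point Weitz keeps returning to.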


Weitz went on to explain that even obscure, rarely issued queries can pick up weight in Bing's relevancy engine when few other signals are available, which is what happened during Google's copycat test.

"When the ranker looked at all the signals that we had and said, 'well OK, there might not be much there, and that might not only be one and returned that result,' the lesson we learned here is maybe we shouldn't be firing results at all if we only have one signal," Weitz said. "One signal is too few to triangulate on, to do a decent job with results. In that sense, it's kind of nice to have them help us refine our algorithm, to go with their intent."

As for why Google's 100-query honeypot test managed to make an impact on Bing's index, Weitz said the same procedure would not have an impact on more established search terms. "What the Google folks did, as you probably saw, is they chose words that would never be issued by a human, a bunch of gobbledygook basically, and then they artificially ranked the pages they had picked highly in the Google index, so they basically faked the ranking in Google," Weitz said. "It doesn't scale up to a popular term, in the same way that we have tremendously sophisticated systems to detect click fraud, and they were pretty clever in how they did it. Good subterfuge."

So does all this mean Microsoft is going to change how it uses click stream data? Not necessarily. "This is a very common practice because it has enormous user benefits," Weitz said. "That's the whole point, right? You're actually able to say 'I know 84 percent of people who go to AlaskaAir.com, the first thing they do is go to flight status.' So now, when I'm building a product like Bing, I can make flight status more prominent, and answer Alaska Air with a flight status answer."
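That AlaskaAir.com example boils down to simple aggregation over click stream data: count where visitors go first, and if a strong majority converges on one page, promote it as a direct answer. A toy version, with made-up data matching Weitz's 84 percent figure and an arbitrary promotion threshold:

    # Toy version of the AlaskaAir.com example: aggregate click stream data
    # to find the most common first destination on a site, then promote it.
    # The data and the 50 percent threshold are made up for illustration.
    from collections import Counter

    first_clicks = ["flight status"] * 84 + ["book a flight"] * 10 + ["mileage plan"] * 6

    counts = Counter(first_clicks)
    top_page, top_count = counts.most_common(1)[0]
    share = top_count / len(first_clicks)

    if share >= 0.5:
        print(f"Promote '{top_page}' as a direct answer ({share:.0%} of visits)")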

Ultimately, Weitz said that the fact that Google saw only a handful of its 100 honeypot terms surface in Bing verifies that Microsoft's system is not just a copy. "If it was a copy you'd have 100 percent fidelity," he said. "The 93 that didn't fire either we felt the signal wasn't strong enough or we had something else going on there, which is why they didn't fire. That's exactly the point. And that's why it's so perplexing that they keep using the word 'copy' when they know very well how ranking works, and how the system works, and that a signal like that is one of many. It's perplexing."