
Outsmarted: Captcha security not much of a gotcha

Web sites including Blizzard, eBay, and Visa's Authorize.net rely on a flawed anti-bot mechanism, Stanford researchers say, and companies should take Captcha security more seriously.

Declan McCullagh Former Senior Writer
Elie Bursztein says he was able to decode 66 percent of the anti-bot tests used by Visa's Authorize.net payment site. Declan McCullagh/CNET

PALO ALTO--A team of Stanford University researchers has bad news to report about Captchas, those often unreadable, always annoying distorted letters that you're required to type in at many a Web site to prove that you're really a human.

Many Captchas don't work well at all. More precisely, the researchers developed a general way to decode those irksome letters and numbers found in Captchas on many major Web sites, including Visa's Authorize.net, Blizzard, eBay, and Wikipedia.

This chart shows how successful Decaptcha was in decoding each Web site's anti-bot mechanism. The column labeled "precision" shows the success rate. Stanford University

Their decoding technique borrows concepts from the field of machine vision, which has developed techniques to control robots by removing noise from images and detecting shapes. The Stanford tool, called Decaptcha, uses these algorithms to clean up the image so it can be split into more readily recognized letters and numbers.
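In broad strokes, that pipeline can be sketched in a few lines of code. The snippet below is not Decaptcha; it is a minimal illustration, assuming the Pillow and NumPy libraries, of the denoise-then-segment idea the researchers describe: clean up the image with a median filter, binarize it, and split it at columns that contain no ink.

```python
import numpy as np
from PIL import Image, ImageFilter

def segment_captcha(path, threshold=128, min_width=2):
    """Roughly isolate character regions in a Captcha image.

    Illustrative sketch of a generic denoise-and-segment pipeline;
    not the Stanford Decaptcha tool.
    """
    # Grayscale, then a median filter to knock out speckle noise.
    img = Image.open(path).convert("L").filter(ImageFilter.MedianFilter(size=3))

    # Binarize: dark pixels (ink) become True.
    pixels = np.array(img) < threshold

    # Column projection: a column with no ink marks a gap between letters.
    ink_per_column = pixels.sum(axis=0)
    segments, start = [], None
    for x, ink in enumerate(ink_per_column):
        if ink and start is None:
            start = x
        elif not ink and start is not None:
            if x - start >= min_width:
                segments.append((start, x))
            start = None
    if start is not None:
        segments.append((start, len(ink_per_column)))

    # Each slice would then be handed to a character classifier.
    return [pixels[:, a:b] for a, b in segments]
```

A real attack would follow this with a trained classifier for each extracted glyph, and would need far more robust segmentation to cope with the overlapping, rotated, or warped letters that stronger Captchas use.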

"Most Captchas are designed without proper testing and no usability testing," Elie Bursztein, 31, a postdoctoral researcher at the Stanford Security Laboratory, told CNET yesterday. "We hope our work will push people to be more rigorous in their approach in Captcha design." Captcha stands for Completely Automated Public Turing test to tell Computers and Humans Apart.

Decaptcha was able to decode 66 percent of the Captchas used by Visa's Authorize.net payment site, 70 percent of Blizzard Entertainment's Captchas -- the company's games include World of Warcraft and Diablo -- and 25 percent of Wikipedia's. About one-fifth of Digg.com's Captchas and almost that many of CNN.com's were decodable. Any decoding rate over 1 percent, the Stanford team says, means that particular Captcha is too broken to continue to use.

A representative for Blizzard said the company uses more than just Captchas to secure its systems. "It's common knowledge that Captchas are fundamentally unable to fully guarantee application security, but they do protect against certain threats," Shon Damron told CNET yesterday. "While we use Captchas as an initial layer of security, primarily to minimize spam with regard to new account creation, they represent one of many different security technologies that we employ to protect our infrastructure and customers." (Representatives for eBay and Visa did not comment.)

The security of Captchas is important because they're used to defend against malicious 'bots, including operators of botnets who try to automatically create accounts on Web e-mail services to send spam. Captchas are also used to curb bot-generated comments and automated ballot-stuffing in online polls.

Examples of popular Captchas. Stanford University

The only tested Captchas that withstood the researchers' attacks were Google's. The researchers ran into a remarkable zero percent success rate when trying to decode Google's slanted-red-letters Captcha, used in Gmail, and the fuzzy-lettered ReCaptcha, which was created at Carnegie Mellon University and acquired by Google in 2009.

ReCaptcha, which is free, is used by what Google estimates to be over 100,000 Web sites, including Twitter, Facebook, Craigslist, Ticketmaster, and Microsoft. The research team did not test the Captchas used by Yahoo, Amazon, and LinkedIn because it was too difficult to get them to appear consistently.

Bursztein hopes to encourage Web developers to think about Captchas more systematically -- as a computer science challenge, not just a simple security problem that can be solved without adequate testing. He likens it to the state of encryption research in the 1980s, when developers tried to invent their own algorithms. Over time, researchers realized that peer review and a security analysis by someone trying to break the code was necessary.

"It is important to not roll your own Captchas unless you know what you are doing," Bursztein says.

A paper published earlier this year by three Newcastle University researchers, Ahmad Salah-El-Ahmad, Jeff Yan, and Mohamad Tayara, reported more success attacking Google's Captchas. They said their research "implies" a success rate of 33 percent against ReCaptcha, but it's not clear that they tested the current version.

The Stanford paper, co-authored by Matthieu Martin and John Mitchell, was presented at a computer security conference this month in Chicago.

The researchers say they have no plans to release Decaptcha.

"We don't want bad guys to use it against companies," Bursztein says. "Decaptcha is not meant to be released to the general public. We do provide it to companies that wish to test their Captchas. Our goal is to make the Web a better place, not to harm users."