Spam, zombie robots, and the rest of the dark underbelly of the Internet have led to one of the Web's big annoyances: the captcha. That's the barely readable block of random letters you must transcribe to prove your humanness, and it's supposedly the one thing that separates us from the machines. It's also used in nearly every site registration process--and more recently at site logins. The bottom line: it's annoying, but utterly necessary to keep evil at bay.
Enter reCAPTCHA, a project of the School of Computer Science at Carnegie Mellon University. A mix between the disease-curing Folding@Home and MyCroft[review], reCAPTCHA asks users to solve two jumbled words: one is the actual captcha, the other is a word that still needs to be transcribed into text. These words come from scanned books and documents residing on the Internet Archive. Many of those books were written before computers, and in their current state (PDFs and image files) they're just glorified photographs--a medium that's still hard to sort through. Once transcribed, they'll be digital text, and completely searchable.
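The two-word trick can be sketched in a few lines of Python. This is a toy illustration of the idea described above, not the real service: the function names, the vote tally, and the agreement threshold are all my assumptions.

```python
from collections import Counter

def check_submission(control_word, control_answer, unknown_answer, votes):
    """Two-word check (toy sketch). The control word -- whose answer is
    already known -- proves the solver is human; a correct solver's reading
    of the unknown scanned word is then tallied as one vote toward its
    transcription."""
    if control_answer.strip().lower() != control_word.lower():
        return False  # failed the captcha; discard both answers
    votes[unknown_answer.strip().lower()] += 1
    return True

def digitized(votes, threshold=3):
    """Once enough humans agree on a reading, treat the word as transcribed.
    The threshold of 3 is an assumption for illustration."""
    if not votes:
        return None
    word, count = votes.most_common(1)[0]
    return word if count >= threshold else None
```

So a bot that can't read the control word never gets to pollute the transcription tally, while humans transcribe the archive one word at a time as a side effect of logging in.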
Words for transcription aren't chosen at random. Scanned documents get run through an Optical Character Recognition (OCR) engine, which picks up most of the words. Those the OCR misreads, or can't read at all, are plucked out and put into the reCAPTCHA word pool. Sites can implement reCAPTCHA several ways: there are plug-ins for WordPress, MediaWiki, phpBB, and PHP.
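The filtering step might look roughly like this. It's a hedged sketch: I'm using a simple dictionary check as a stand-in for however the real OCR engine flags low-confidence words, and in practice it would be the image snippet of each flagged word, not the garbled text, that goes into the pool.

```python
# Stand-in dictionary; the real system would use a much larger word list
# or the OCR engine's own confidence scores.
ENGLISH = {"the", "quick", "brown", "fox", "morning", "upon"}

def build_word_pool(ocr_words):
    """Pluck out OCR output that doesn't look like a real word -- these
    are likely misreads of hard-to-scan print, so their source images
    become captcha candidates. (Toy sketch, not reCAPTCHA's actual code.)"""
    return [w for w in ocr_words if w.lower() not in ENGLISH]
```

For example, `build_word_pool(["the", "qvick", "fox", "m0rning"])` keeps only the two garbled readings, marking those words for human eyes.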
I've embedded a sample reCAPTCHA below. You'll notice both words look similar, since reCAPTCHA pulls both from the same source--so you can't tell which one has already been solved.
[found on del.icio.us]