Please don't steal this Web content

Movement is afoot to stifle "scraper sites," which copy the content of blogs and repost it on other sites to profit from ad impressions.

By Elinor Mills
Lorelle VanFossen is passionate. An author, travel writer and nature photographer, she also has a popular blog about, well, blogging. Her pet peeve is online plagiarism, which she encounters nearly every day.

"It's one of my favorite subjects," she said. "I make my living from my writing, and when people take it because they are ignorant of copyright laws--or think that because it's on the Internet, it's free--it makes me really mad. It's stealing content, in my mind."

VanFossen isn't referring to the kind of plagiarism in which a lazy college student copies sections of a book or another paper. This is automated digital plagiarism, in which software bots can copy thousands of blog posts per hour and publish them verbatim on Web sites, where contextual ads placed alongside them generate money for the site owner.

Such Web sites are known among Web publishers as "scraper sites" because they effectively scrape the content off blogs, usually through RSS (Really Simple Syndication) and other feeds through which those blogs are distributed.
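
For readers unfamiliar with the mechanics, here is a minimal sketch in Python of how a scraper might pull posts from a public feed. The feed URL is a placeholder and feedparser is a common third-party RSS-parsing library; this is an illustration of the technique, not any particular scraper's code.

    # Minimal sketch of pulling posts from an RSS/Atom feed.
    # The feed URL is a placeholder; feedparser is a common third-party library.
    import feedparser

    feed = feedparser.parse("http://example-blog.com/feed/")

    for entry in feed.entries:
        title = entry.get("title", "")
        link = entry.get("link", "")
        # Full-content feeds expose the entire post; otherwise only a summary is available.
        contents = entry.get("content")
        body = contents[0]["value"] if contents else entry.get("summary", "")
        print(title, link, len(body))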

VanFossen's blog, Lorelle on WordPress, is an Internet authority on blogging dos and don'ts. One of the no-nos is using content from other sites without getting permission.

"I make my living from my writing, and when people take it because they are ignorant of copyright laws--or think that because it's on the Internet, it's free--it makes me really mad."
--Lorelle VanFossen,
blogger

VanFossen has several ways of checking whether other sites have scraped her posts. She embeds full links to her other articles in her posts, so that when one of her stories is reposted on another Web site, it links back to her original and she can see the Trackback. Trackback is a "linkback" method Web publishers use to identify who is linking to or referring to their articles.
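
As a rough illustration of how a Trackback ping travels between two sites, the sketch below sends one using Python's standard library. The endpoint, URLs and blog name are placeholders; the parameter names (title, url, excerpt, blog_name) follow the classic TrackBack specification.

    # Rough sketch of sending a Trackback ping (classic TrackBack spec).
    # All URLs and names below are placeholders.
    import urllib.parse
    import urllib.request

    params = urllib.parse.urlencode({
        "title": "My follow-up post",
        "url": "http://example-blog.com/my-follow-up/",   # the post that links to the original
        "excerpt": "A short excerpt of the linking post...",
        "blog_name": "Example Blog",
    }).encode("utf-8")

    request = urllib.request.Request(
        "http://original-site.example/trackback/123/",    # the original article's trackback endpoint
        data=params,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )

    with urllib.request.urlopen(request) as response:
        # The endpoint replies with a small XML document; <error>0</error> means the ping was accepted.
        print(response.read().decode("utf-8"))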

She has set up Google Alerts on her byline so that she gets a notification any time Google comes across a news site or blog that mentions her. She also does keyword searches for her name on Google search, Google Blog Search and Technorati. In addition, she uses a WordPress plug-in that lets her insert a digital fingerprint, a series of unrelated words, into her posts that she can search for in case her byline is stripped.
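
As a simple illustration of the fingerprint technique, the sketch below checks a handful of suspect pages for a unique word sequence. The phrase and the URLs are made up for the example; it is not the plug-in VanFossen uses.

    # Sketch: check suspect pages for a "digital fingerprint" phrase.
    # The fingerprint and URLs are invented for illustration.
    import urllib.request

    FINGERPRINT = "cobalt umbrella tangerine"   # a unique, unrelated word sequence hidden in each post

    SUSPECT_PAGES = [
        "http://suspect-site-one.example/post/42",
        "http://suspect-site-two.example/archive/7",
    ]

    for url in SUSPECT_PAGES:
        try:
            html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
        except OSError as exc:
            print(f"{url}: could not fetch ({exc})")
            continue
        if FINGERPRINT in html:
            print(f"{url}: fingerprint found -- content was likely scraped")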

Invariably, VanFossen comes across her posts on other sites.

If she hasn't had a previous problem with a site, she will send the site's publisher an e-mail asking them not to use her content without her permission. If she doesn't get a response, or she has had problems with the site in the past, she sends a "cease and desist" letter informing the owners that they are violating her copyright and warning them that she will take legal action under the Digital Millennium Copyright Act, or DMCA, unless they remove her content.

VanFossen also contacts the company that hosts the Web site, as well as advertisers on that site and search engines, providing the necessary evidence via mail or fax, as required. "The DMCA puts the onus on advertisers, Web hosts and search engines to remove copyright violations," she said. "I have a form letter I use."

In December, Michelle Leder, editor of Footnoted.org, used a cease-and-desist letter to get her content taken off a site that was continuously republishing her posts. "Even the post I wrote about him stealing my content was posted on his site," she said with a laugh.

"It wasn't the issue of money," Leder added. "When other people's business model is based on stealing content, that's a significant problem."

One site that offers a free service for tracking copyrighted content online is Copyscape. About 200,000 Web site owners use the free service every month, and thousands more pay for a higher-level offering, said Gideon Greenspan, chief technology officer of Indigo Stream Technologies, which runs the service.

There are many aggregator Web sites that collect content from a variety of sources, often related to a specific topic area, like real estate or cars, around which they can serve contextual ads. While some of the sites reproduce entire blog posts or articles from other sites (CNET News.com included), others offer just headlines or the first paragraph or a few paragraphs. Many include attribution and a link back to the original article. But providing attribution does not preclude a copyright violation, experts say.

While most publishers of scraper sites stay underground, Michael Gray, a search optimization consultant who runs GrayWolf's SEO Blog, outed himself as a Web scraper in a blog post about a year ago.

"I've moved away from this. It wasn't worth the time and effort of doing it," he said in a recent interview. He said he aggregated "snippets" of others' content so he could flesh out his sites and make money off Google ads.

Gray also downplayed the significance of scraping. "Bloggers have a tendency to overreact to things and make mountains out of molehills," he said.

Gray said his sites fell under "fair use," a provision of copyright law that allows limited use of a copyrighted work without permission. But the nature of the use should be noncommercial, said Dennis Kennedy, an information technology lawyer knowledgeable about intellectual-property issues.

"It's extremely difficult to track down the people doing this. And even then, you're probably not going to be able to establish jurisdiction, if they are outside the U.S.," he said. "It could be more expensive than it's worth, and you have to show damages."

Pretty much any site that puts out an RSS feed is going to get scraped, said Jonathan Bailey, Webmaster of Plagiarism Today. Typically, it's the same people sending out the herbal Viagra junk e-mail, he said.

"The black-hat SEOs (search engine optimizers) are doing this to build up Google juice (improve search engine rankings) or display Google AdSense ads," Bailey said.

Not only do scraper bots let people grab thousands of posts an hour, but there is also software that can disguise the copied text by replacing certain words with synonyms, such as "feline" instead of "cat," Bailey said. This makes it harder for bloggers to track their scraped content.
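
To show how crude that disguise can be, here is a small Python sketch of the kind of synonym substitution Bailey describes; the synonym table is illustrative only, not taken from any real scraping tool.

    # Sketch of the crude synonym substitution ("spinning") used to disguise copied text.
    # The synonym table is illustrative only.
    import re

    SYNONYMS = {"cat": "feline", "dog": "canine", "house": "dwelling"}

    def spin(text):
        # Replace whole words only, leaving the rest of the text verbatim.
        pattern = re.compile(r"\b(" + "|".join(map(re.escape, SYNONYMS)) + r")\b")
        return pattern.sub(lambda match: SYNONYMS[match.group(1)], text)

    print(spin("The cat slept in the house."))
    # -> The feline slept in the dwelling.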

The scraped copy can even appear on Technorati before the original content, he said. And in some cases, images are scraped and "hotlinked" back to the original site, consuming that site's bandwidth and costing its owner money, he added.

Some people point the finger at Google. "They've been slow to shutter a lot of these accounts. It's in their best interest to keep them open for as long as they can," Bailey said.

"Google should do something about this," said Footnoted's Leder. "The entire revenue model for these sites is based on Google ads misdirecting content."

But Google has worked to cut back on the problem of Web spam over the last year, said Matt Cutts, a senior software engineer at Google.

"It's true: people can scrape very easily. But it's also much harder to spam than it has been in the past," he said. "For months and months, we've kicked people out of AdSense because they violated our quality guidelines."

Sites being scraped can report it to Google using the tools section on Google's Webmaster Central site and by clicking on an "Ads by Google" ad, Cutts said.

For sites that syndicate their content through feeds, adding a link to the original source at the top or bottom of each article, with wording to the effect of "this article was originally published here," will help ensure that Google's search engine displays the original item rather than the copy on a scraper site, he said.
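
A publisher could add that attribution automatically when the feed is generated. The sketch below appends a source link to each item's body before it goes out; the post data is made up, and real blogging platforms typically provide a plug-in or template hook for the same purpose.

    # Sketch: append a source-attribution footer to each item before it goes into the feed,
    # so any copy carries a link back to the original. The post data is made up.
    posts = [
        {
            "title": "Sample post",
            "link": "http://example-blog.com/sample-post/",
            "body": "<p>Post body...</p>",
        },
    ]

    for post in posts:
        footer = (
            '<p>This article was originally published at '
            '<a href="{0}">{0}</a>.</p>'.format(post["link"])
        )
        post["body"] += footer   # the footer travels with the content wherever the feed is scraped

    print(posts[0]["body"])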

Not every blogger is worried about scraper sites. Om Malik, executive editor of GigaOM, a blog that analyzes Net access and telecommunications services, said he doesn't waste time going after scrapers. Why not? "There are so many of those sites. Like (the Lernaean) Hydra's head, kill one, and more pop up."