Specifications for future Dell notebooks were publicly accessible before the content was pulled from a Dell FTP site and from Google's cache.
Google, like the other major search engines, relies on automated software robots called "spiders" that crawl the Web and find sites to add to the index of Web sites it maintains. Because the spiders follow links running from one Web site to others, they pick up sites on their own without Webmasters having to manually submit them to search engines.
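In outline, that link-following behavior is a graph traversal. The sketch below models it with a toy in-memory link graph rather than real network requests; the site names are purely illustrative.

```python
# A toy sketch of how a spider "follows links running from one Web site
# to others": a breadth-first traversal over a link graph. The dictionary
# below stands in for real pages; all site names are hypothetical.
from collections import deque

links = {
    "siteA.com": ["siteB.com", "siteC.com"],
    "siteB.com": ["siteD.com"],
    "siteC.com": [],
    "siteD.com": ["siteA.com"],  # links back: cycles are common on the Web
}

def crawl(seed):
    """Return every site reachable from `seed` by following links."""
    seen = {seed}
    queue = deque([seed])
    while queue:
        site = queue.popleft()
        for linked in links.get(site, []):
            if linked not in seen:  # index each site only once
                seen.add(linked)
                queue.append(linked)
    return seen

print(sorted(crawl("siteA.com")))
```

Starting from a single seed site, the traversal reaches all four sites in the toy graph, which is why pages can end up indexed without anyone submitting them.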
Webmasters also can provide the URL, or Web address, for pages they want crawled, and they can submit detailed site maps to Google, according to Google's "information for Webmasters" pages.
Webmasters who want to keep some or all of their site private from the Googlebot can put a standard file called "robots.txt" at the root of their Web server that instructs the crawler not to download content. If the removal request is urgent, the Webmaster can submit a request via Google's automatic URL removal system, but must provide an e-mail address and password first.
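A robots.txt file is a short list of rules naming which crawlers a directive applies to and which paths they should not fetch. The sketch below shows one such rule set and uses Python's standard-library parser to evaluate it the way a well-behaved crawler would; the paths and URLs are illustrative, not from the Dell incident.

```python
# urllib.robotparser (Python standard library) interprets robots.txt
# rules as a compliant crawler would. The rules below are a hypothetical
# example: block every crawler ("User-agent: *") from /private/.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
# parse() accepts the file's lines directly, so no network access is needed
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

# A crawler checks each URL against the rules before downloading it.
print(rp.can_fetch("Googlebot", "http://example.com/private/specs.html"))  # False
print(rp.can_fetch("Googlebot", "http://example.com/index.html"))          # True
```

Note that robots.txt is advisory: reputable crawlers such as the Googlebot honor it, but it does not actually secure the content.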
Content that has been removed can still be viewed through Google's cache, which is a "snapshot" and archive of each page crawled. Webmasters can prevent pages from being cached by inserting specific code on them.
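The "specific code" in question is a robots meta tag; per Google's Webmaster documentation, the "noarchive" value tells crawlers not to store a cached copy of the page, though the page can still be indexed. A minimal example (the surrounding markup is illustrative):

```html
<!-- Placed in the <head> of a page the Webmaster does not want cached.
     "noarchive" blocks the cached snapshot; the page may still appear
     in search results. -->
<meta name="robots" content="noarchive">

<!-- To block caching by Google's crawler alone, the tag can name it: -->
<meta name="googlebot" content="noarchive">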
Webmasters must remember that Google's is not the only search engine crawler they have to worry about. Removing content from Google's cache does not mean that other search engines won't index and cache it.