
Search prototype gets the picture

Researchers develop a search engine that retrieves results based on a sketch: Draw a wing nut, and the engine retrieves all database images that match the drawing.

Michael Kanellos Staff Writer, CNET News.com
Michael Kanellos is editor at large at CNET News.com, where he covers hardware, research and development, start-ups and the tech industry overseas.
Think of it as Google meets Etch A Sketch.

Researchers at Purdue University have developed a search engine that retrieves results based on an image or a sketch. Draw a picture of a wing nut, and the engine will search a database and retrieve all the images that match the drawing.


Although the shape search engine now works only on confined databases that contain a few thousand images, the technology is intended to handle greater capacities. Its earliest appearance outside the research lab is expected to be in industrial databases rather than in commercial search tools.

But in 10 to 15 years, image searches will likely be taking place on the Internet, according to Karthik Ramani, a professor of mechanical engineering at Purdue and director of the university's Research and Education Center for Information Systems in Engineering.

That would take search well beyond its roots in text queries, such as those that Google and Yahoo allow.

"You can search on something that you have in your mind," Ramani said. "A shape has so many details in it."

The search engine will be detailed in a paper to be presented Thursday at the International Conference on Data Engineering in Boston.

At the moment, searching for images on the Internet remains a text-based activity. General Web search engines, such as Google and AltaVista, do let people sift through millions of images online, but the companies base their tools largely on keywords. That means images must be tagged with a handful of descriptive words, or what's known as metadata, to match up with users' keyword queries.

That method can be limited, because many images are not associated with text, and labeling them can be costly. Also, it can be difficult to square users' vocabulary with the relatively few terms associated with a given image. For this reason, stock photo Web sites such as Corbis invest heavily in metadata for images.

Companies such as Virage, Vima Technologies and LTU Technologies have made progress in visual search that's closer to what the Purdue researchers plan to unveil. For example, Vima, based in Santa Barbara, Calif., licenses technology that can search and classify images based on their features, without using text. Virage, the biggest company in the market for image and video search technologies, was recently bought by Autonomy, one of the largest players in corporate-focused search.

David Telleen-Lawton, CEO of Vima, said image search is coming along but is still in its early days. Several years ago, only AltaVista offered a rudimentary text-based image search; now, all the general engines offer one. Image analysis is the next frontier, Telleen-Lawton said. But he's not sure it will take the form Purdue envisions.

"I have not seen a convincing number of examples of people that would likely draw out the shape ahead of time, and I'm not sure ahead of time what tools they would need to do that," he said. "We're focused more on providing an interface that lets a user ask for an image 'more like this or more like that,' based on the image features rather than text."

ImgSeek, on the other hand, takes the sketchy approach. The downloadable tool manages and searches photo collections by letting users draw a rough sketch of what they're looking for. ImgSeek displays the best matches in a thumbnail view.

Losers, weepers
The Purdue project came about as a way to help large manufacturers keep track of the plethora of components they've designed or bought, Ramani said. Often, large companies simply lose track of these parts, forcing them to search manually for machine drawings--or even redesign a part.

Purdue studies have shown that design engineers cumulatively spend close to six weeks a year looking for lost parts. The image search engine could cut that time by 80 percent.

"The corporate memory is quite short," Ramani said.

The search engine could help a company gravitate toward standardized components and design fewer parts in-house. "In another 10 years, you could have someone build an entire car by sourcing" the relevant information through database searches, Ramani said.

A large construction equipment company is conducting field tests with the engine to compare the prices of different parts from different suppliers, he said. Imaginestics has agreed to license the technology.

The shape search engine essentially looks at the texture and geometry of a submitted sketch and then matches it against digitized 3D images that have been cut up into cubes, or volume elements, called voxels. The sketches can be scanned pen drawings, computer-aided design (CAD) drawings, pictures or other graphical representations.
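The article doesn't publish the researchers' code, but the voxel idea itself is simple to illustrate. The sketch below (an assumption, not the Purdue implementation) cuts a 3D point set into a binary grid of volume elements:

```python
import numpy as np

def voxelize(points, grid_size=32):
    """Convert a 3D point set into a binary voxel grid.

    Illustrative only: the Purdue system operates on full 3D models,
    but the core idea -- cutting a shape into cube-shaped volume
    elements -- is the same.
    """
    points = np.asarray(points, dtype=float)
    # Normalize the shape into the unit cube so grid size is the
    # only resolution parameter.
    mins = points.min(axis=0)
    spans = points.max(axis=0) - mins
    spans[spans == 0] = 1.0  # avoid divide-by-zero on flat axes
    normalized = (points - mins) / spans
    # Map each point to a cell index; clip so 1.0 lands in the last cell.
    idx = np.clip((normalized * grid_size).astype(int), 0, grid_size - 1)
    grid = np.zeros((grid_size, grid_size, grid_size), dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid
```

A finer grid captures more surface detail at the cost of a larger representation, which is the same accuracy-versus-speed trade-off the article describes for detailed queries.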

As with text-based search engines, the more detailed the drawing, the better the chances of getting accurate results.

The project's proverbial secret sauce is the algorithms that convert the voxels into searchable skeletal graphs and "feature vectors," or numbers that digitally represent the physical shape. Unbeknownst to the user, a single query actually generates multiple searches.
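The paper's actual skeletal-graph descriptors aren't detailed in the article, but the feature-vector matching step can be sketched with a stand-in descriptor. Here simple occupancy statistics play the role of the real features, and matching is a nearest-neighbor ranking (both are assumptions for illustration):

```python
import numpy as np

def feature_vector(grid):
    """Reduce a voxel grid to a small numeric descriptor.

    A stand-in for the paper's skeletal-graph features: overall fill
    ratio plus an occupancy profile along each axis.
    """
    grid = grid.astype(float)
    n = grid.shape[0]
    # One profile per axis: fraction of filled cells in each slice.
    profiles = [grid.sum(axis=ax).sum(axis=-1) / (n * n) for ax in range(3)]
    return np.concatenate([[grid.mean()], *profiles])

def rank_matches(query_vec, database):
    """Rank database entries by Euclidean distance to the query vector."""
    return sorted(database, key=lambda k: np.linalg.norm(database[k] - query_vec))
```

In the real system, several different descriptors would be computed and searched in parallel -- the "multiple searches" per query the article mentions -- with the results merged into one ranking.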

Results for a quick search based on a simple image can be retrieved fairly rapidly, but searches containing a large amount of detail, such as a multi-megapixel image or a CAD drawing, will take much longer.

Besides expanding the range of the shape search engine, researchers are looking at how to perform multimodal searches that combine image elements with text. In other words, someone could submit a sketch with text, stating "like this, but with four holes, not two" to refine the search.
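One simple way to picture the multimodal idea: shape similarity produces a ranked list, and the text constraint prunes it. The helper below is hypothetical (the names and the metadata format are assumptions), not the researchers' design:

```python
def refine_search(shape_ranked, part_metadata, **required):
    """Filter a shape-ranked result list by textual attributes.

    `shape_ranked` is a list of part IDs already ordered by shape
    similarity; `part_metadata` maps part IDs to attribute dicts.
    A query like "like this, but with four holes" becomes holes=4.
    """
    return [pid for pid in shape_ranked
            if all(part_metadata.get(pid, {}).get(k) == v
                   for k, v in required.items())]
```

Because the shape ranking is preserved, the text terms narrow the candidates without overriding the visual match.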

The engineering team has also linked up with Zygmunt Pizlo, an associate professor of psychological sciences at Purdue, to tune the search engine more acutely to how people actually perform searches.

The shape search paper was written by Ramani, doctoral student Kuiyang Lou and Sunil Prabhakar, an assistant professor of computer science.