It's an idea that should be familiar to anyone who has used Google, which doesn't understand content but can often point people seeking particular information in the right direction. The CMU researchers, however, are applying the concept to the tasks of adding information to digital photos or removing it from them.
The researchers plan to present their results in two papers at the Siggraph computer graphics conference in August. In one technique, called Photo Clip Art, the computer plucks images of people from a database, selecting only appropriate ones based on image parameters such as lighting. In the other, called Scene Completion, the computer can patch over unsightly elements in a photo by finding appropriate substitute material from a database of 2.3 million pictures stored at the Flickr photo-sharing site.
"We're trying to match the properties of the input photo with all the objects of our collection," said Alexei A. Efros, an assistant professor of computer science and robotics at CMU. But the researchers are doing so without the computer actually having to understand the photo's content, geometry, or other aspects.
"Just by having lots of data, we are implicitly understanding the picture even though we don't have any notion of what is there," Efros said.
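The matching idea Efros describes can be sketched in code. The actual Scene Completion system uses far richer scene descriptors and a multimillion-image collection; the toy version below, with invented helper names `global_descriptor` and `best_matches`, simply ranks candidate images by how closely their coarse color layout resembles the query, with no notion of what the pixels depict:

```python
def global_descriptor(image, grid=4):
    """Average color per cell of a coarse grid over the image.

    A crude stand-in for the global scene descriptors used in
    data-driven matching. `image` is a list of rows, each row a
    list of (r, g, b) tuples with values in [0, 1].
    """
    h, w = len(image), len(image[0])
    desc = []
    for gi in range(grid):
        for gj in range(grid):
            cell = [image[i][j]
                    for i in range(gi * h // grid, (gi + 1) * h // grid)
                    for j in range(gj * w // grid, (gj + 1) * w // grid)]
            for c in range(3):  # mean of each color channel in the cell
                desc.append(sum(p[c] for p in cell) / len(cell))
    return desc

def best_matches(query, collection, k=3):
    """Rank a collection by descriptor distance to the query image.

    No semantic understanding is involved: images whose overall
    color layout is similar simply land near the top.
    """
    qd = global_descriptor(query)
    def dist(img):
        d = global_descriptor(img)
        return sum((a - b) ** 2 for a, b in zip(qd, d)) ** 0.5
    ranked = sorted(range(len(collection)), key=lambda i: dist(collection[i]))
    return ranked[:k]
```

With a large enough collection, the top-ranked images tend to share the query's lighting and layout, which is what makes the retrieved material usable as patch or clip-art source, even though the program never labels a single object.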
However, Efros has also conducted research on giving computers a deeper understanding of image properties such as geometry, illumination, motion blur and perspective. His group is now working on uniting that analysis with the new work on matching images.
A graduate student, Jean-Francois Lalonde, led the Photo Clip Art research, and Microsoft Research employees contributed as well, Efros said. Another graduate student, James Hays, led the Scene Completion work.