The scene in Blade Runner is famous: Rick Deckard takes a grainy photo, then zooms, enhances and pans around corners as if navigating a 3D space. It's impossible for many reasons, not least that a camera can only capture what is visible at the moment the photo is taken.
However, what if software could reasonably extrapolate a 3D shape from a 2D image? That's what researchers from Carnegie Mellon University, led by associate research professor of robotics Yaser Sheikh, have created, using freely available stock 3D models of everyday objects such as furniture, cookware, cars, clothing and appliances.
"In the real world, we're used to handling objects -- lifting them, turning them around or knocking them over," said Robotics Institute PhD student and study lead author Natasha Kholgade. "We've created an environment that gives you that same freedom when editing a photo. Instead of simply editing 'what we see' in the photograph, our goal is to manipulate 'what we know' about the scene behind the photograph."
It's not perfect -- stock models will not always match the objects in a photo exactly, and some objects are simply not available as stock models -- but it can be used for a variety of objects, not just in digital photographs, but in paintings and historical photos uploaded as image files.
Some imperfections occur in soft objects that distort as they are moved, and sometimes lighting or ageing can cause a shift in the appearance of the object. To correct for these issues, the researchers created a technique that semi-automatically aligns the model to the shape of the object in the photo. It then extrapolates lighting and what the hidden parts of the object might look like.
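The core alignment step can be sketched in simplified form. The snippet below is an illustrative stand-in, not the researchers' actual algorithm: it assumes corresponding 3D points on the stock model and the target object are already known, and recovers the rigid transform between them with the classic Kabsch/Procrustes method. The real system instead works from user guidance in the 2D photo and additionally handles perspective, scale, non-rigid deformation and appearance differences.

```python
import numpy as np

def align_model(model_pts, target_pts):
    """Rigidly align a stock model to target geometry (Kabsch/Procrustes).

    model_pts, target_pts: (N, 3) arrays of corresponding points.
    Returns rotation R and translation t such that R @ m + t ~= target.
    Illustrative sketch only; see lead-in for what the paper's method adds.
    """
    # Center both point sets on their centroids.
    mu_m = model_pts.mean(axis=0)
    mu_t = target_pts.mean(axis=0)
    M = model_pts - mu_m
    T = target_pts - mu_t

    # Cross-covariance and its SVD give the optimal rotation.
    H = M.T @ T
    U, _, Vt = np.linalg.svd(H)

    # Sign correction guards against a reflection (det = -1).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

    t = mu_t - R @ mu_m
    return R, t
```

Given a handful of correspondences, `align_model` returns the pose that best overlays the model on the object; in the published system this initial fit is then refined semi-automatically against the photograph.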
The researchers also believe that, as 3D scanning and printing become more widespread, more stock models will become available for the software to tap into, filling the gaps in its database.
"The more pressing question will soon be, not whether a particular model exists online, but whether the user can find it," Sheikh said.
Kholgade will present the research (PDF), partially funded by a Google Research Award, at the SIGGRAPH 2014 Conference on Computer Graphics and Interactive Techniques in Vancouver, Canada, on August 13. You can read more about the project on its official website.