Today, if you want to trim all the distracting background out of a picture--say, the crowd behind your daughter playing soccer--you have to do a lot of artful selection with high-powered software such as Photoshop. But what if your computer understood the depth of the image, just as you did when you took the picture, and could be told to just erase everything that's a certain distance behind your kid?
That's one possible way to use technology that Adobe Systems has begun showing off--and that can be seen in video of a news conference posted by the Audioblog.fr site last week.
Dave Story, vice president of digital imaging product development at Adobe, showed off aspects of how the technology works. First comes a lens that, like an insect's compound eye, transmits several smaller images to the camera's sensor. The result is a photograph with multiple sub-views, each taken from a slightly different vantage point at exactly the same time.
From this information, the computer reconstructs a model of the scene in three dimensions.
Story then showed a video with significant transformations of an image based on this 3D understanding. The image had three major elements--a statue in the foreground, a statue in the middle distance, and a wall in the background. The video showed a simulation of a person shifting vantage point left and right--natural enough given that the multiple views captured that information.
Then, however, the video showed a more unusual transformation: an artificial shift of focus from the original picture, which was aimed at the middle-distance statue, to both the foreground and the background. It took the engineer who developed the technology a week to write the software, and another week to run the simulation, Story said.
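The refocusing trick is the classic "shift-and-add" operation from light-field research: shift each sub-view in proportion to its vantage-point offset, then average. Points at the depth whose parallax matches the shift line up across views and stay sharp; everything else smears into blur. A minimal sketch, assuming grayscale sub-views and known per-view offsets (none of these names come from Adobe's software):

```python
import numpy as np

def refocus(views, offsets, alpha):
    """Synthetically refocus a multi-view capture by shift-and-add.

    views   : list of 2D sub-view images of the same scene
    offsets : list of (dy, dx) vantage-point offsets, one per view
    alpha   : refocus parameter; each view is shifted by alpha times
              its offset before averaging, selecting which depth
              plane ends up sharp."""
    acc = np.zeros_like(views[0], dtype=float)
    for img, (dy, dx) in zip(views, offsets):
        # roll each view by its scaled offset so one depth plane aligns
        acc += np.roll(img, (round(alpha * dy), round(alpha * dx)),
                       axis=(0, 1))
    return acc / len(views)
```

Sweeping `alpha` moves the focal plane from foreground to background after the fact, which is presumably what took a week of simulation to render at full quality.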
Story suggested that the perspective-shifting idea would be useful for dealing with a news photograph of a subject who, you find later, is standing directly in front of a pole. But the 3D comprehension could lead to more useful transformations: "Why don't we have a 3D healing brush and, say, get rid of everything behind his head?"
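The primitive underneath such a brush is simple once every pixel has a depth value: mask out everything farther than a chosen distance. A toy sketch of that idea (a real tool would inpaint the hole rather than fill it with a constant, and the function name is hypothetical):

```python
import numpy as np

def erase_behind(image, depth, threshold, fill=0.0):
    """Blank out every pixel whose estimated depth is beyond the
    threshold, keeping only the foreground subject."""
    out = image.copy()
    out[depth > threshold] = fill  # boolean depth mask selects background
    return out
```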
He didn't demonstrate that idea, but he showed another application of the 3D technology. "If we know the 3D nature of every pixel, what if we could make a focus brush? What if I had a three-dimensional brush where I could reach into the scene and adjust the focus?"
He then showed what he said this focus brush--along with a corresponding defocus brush--might look like. (To my jaundiced eye he could have just been copying from one focus layer to another, but creating the multiple focal planes from a single image is impressive.)
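Whatever Adobe's demo actually did, a focus brush of this kind is easy to sketch once per-pixel depth exists: keep pixels near the chosen depth sharp, and swap in a blurred copy everywhere else, simulating a shallow depth of field chosen after the shot. A minimal, illustrative version (box blur stands in for a proper lens-blur model):

```python
import numpy as np

def box_blur(img, k):
    """Box blur with a (2k+1)-square window, via edge padding."""
    pad = np.pad(img, k, mode='edge')
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            out += pad[k + dy:k + dy + h, k + dx:k + dx + w]
    return out / (2 * k + 1) ** 2

def focus_brush(image, depth, focus_depth, tolerance, k=2):
    """Keep pixels whose depth is near focus_depth sharp; replace
    the rest with a blurred copy."""
    blurred = box_blur(image, k)
    in_focus = np.abs(depth - focus_depth) <= tolerance
    return np.where(in_focus, image, blurred)
```

The corresponding defocus brush is the same operation with the mask inverted.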
"This is something you cannot do with a physical camera. With the combination of that lens and your digital darkroom, you have what we call computational photography. Computational photography is the future of photography," Story said. "The more things we can do that are impossible to do in a camera, the more powerful people's ability to express themselves becomes."
(Via The Online Photographer)