
Photo industry braces for another revolution

Replacing film with digital sensors was a major change, but now combining computing with photography is beginning to transform the industry even more radically.

Stephen Shankland Former Principal Writer

Think of it as digital photography 2.0.

In the last decade, photography has been transformed by one revolution, the near-total replacement of analog film cameras by digital image sensors. Now researchers and companies are starting to stretch their wings by taking advantage of what a computer can do with sensor data either within the camera or on a full-fledged PC.

Some elements of this new era, which researchers often call computational photography, are refinements of existing technology. For example, some cameras can delay taking a photo until subjects are smiling and not blinking, in effect placing the shutter release button in the hands of the subjects rather than the photographer.
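
To get a feel for the idea, here's a minimal sketch in Python of a smile-triggered shutter, using the open-source OpenCV library's stock face and smile detectors. It only illustrates the concept; camera makers' in-camera detectors are proprietary, and the file names and detection thresholds below are placeholders.

```python
# Sketch of a smile-triggered shutter using OpenCV's bundled Haar cascades.
# Illustrative only; real cameras use proprietary detectors.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

def everyone_smiling(frame):
    """Return True if every detected face also contains a detected smile."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        return False
    for (x, y, w, h) in faces:
        roi = gray[y:y + h, x:x + w]
        smiles = smile_cascade.detectMultiScale(roi, scaleFactor=1.7, minNeighbors=20)
        if len(smiles) == 0:
            return False
    return True

cap = cv2.VideoCapture(0)           # live preview from a webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if everyone_smiling(frame):     # the "shutter" fires on the subjects' cue
        cv2.imwrite("shot.jpg", frame)
        break
cap.release()
```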

But other changes under way could shift the definition of a camera far more dramatically. One major area of research, for example, uses computational processing to create a 3D representation of a scene rather than just the two dimensions of traditional photography.

"There's a shift in thinking going on," said Kevin Connor, who manages professional digital imaging products for Adobe Systems. "People are starting to see the broader possibilities and where we can push things...People are realizing that maybe we shouldn't just be trying to make the best traditional photography experience."

What changes will the new era bring? It's hard to say for sure, but if history is anything to judge by, it'll be a rough but fun ride. On the unpleasant side, I expect market disruption, accelerated product obsolescence, and customer confusion. But I also anticipate genuinely exciting technology that could open up new creative and practical possibilities.

Digital photography 1.0 already has meant hard times for the photography industry. The film business expired almost overnight; Polaroid's decision to close its film plants this year is only the most recent example. Konica Minolta, a venerable camera maker, sold its camera assets to electronics giant and image-sensor manufacturer Sony. People now share photos online rather than mailing prints. And camera makers no longer have years to recoup research and design investments in a particular model: although SLR (single-lens reflex) cameras hold their value reasonably well, compact cameras have a shelf life not much longer than a banana's.

Early phases
Depending on your definitions, you can argue the computational photography revolution already has begun.

For example, editing software can correct camera lens flaws such as barrel and pincushion distortion, which make parallel lines bow outward and inward, respectively, or chromatic aberration, which causes colored fringes along high-contrast edges. But that's still largely a manual process.
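
As a rough illustration of the geometric side of that correction, here's a Python sketch using OpenCV's undistort function. The camera matrix and distortion coefficients are placeholder values; real correction tools work from measured profiles for a specific lens model, focal length, and focus distance.

```python
# Minimal sketch of geometric lens correction with OpenCV. The camera matrix
# and distortion coefficients are placeholders, not measurements.
import cv2
import numpy as np

img = cv2.imread("photo.jpg")
h, w = img.shape[:2]

# Assumed pinhole model: focal length in pixels, principal point at the center.
camera_matrix = np.array([[w, 0, w / 2],
                          [0, w, h / 2],
                          [0, 0, 1]], dtype=np.float64)

# Radial terms k1, k2 describe barrel/pincushion distortion; values here are
# illustrative only.
dist_coeffs = np.array([-0.15, 0.02, 0.0, 0.0, 0.0])

corrected = cv2.undistort(img, camera_matrix, dist_coeffs)
cv2.imwrite("photo_corrected.jpg", corrected)
```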

At the 6sight conference in Monterey, Calif., last year, Adobe's Connor showed computational photography techniques that let a photo's depth of field be expanded or changed, or the photographer's vantage point be shifted.

More sophisticated possibilities are emerging. Hasselblad's high-end cameras come with software that can perform what it calls Digital Auto Correction, which fixes chromatic aberration and various other problems based specifically on the setting of the lens when the photo was taken.

Because it's a tough computational problem, though, and there's only so much horsepower in the camera, Hasselblad relies on post-processing in software to perform some of the fixes. In essence, the computer has become an extension of the act of pushing the shutter-release button.

Another early area for computational photography involves using a computer to combine multiple photos into one composite shot of the same scene.

Two well-established examples are panoramas and high-dynamic range (HDR) photography. With panoramas, computers can stitch multiple photos together to create a much larger view of a scene than a camera could take on its own. Taken to its extreme, work such as Carnegie Mellon's GigaPan project can produce images gigantic enough to get lost in, at least figuratively.
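
For a sense of how little code the well-trodden cases now take, here's a minimal panorama sketch in Python using OpenCV's high-level Stitcher, which handles feature matching, warping, and blending internally. The file names are placeholders for overlapping shots of the same scene.

```python
# Minimal panorama-stitching sketch using OpenCV's high-level Stitcher.
import cv2

frames = [cv2.imread(name) for name in ("left.jpg", "middle.jpg", "right.jpg")]

stitcher = cv2.Stitcher_create()        # finds features, matches, warps, blends
status, panorama = stitcher.stitch(frames)

if status == 0:                         # 0 == cv2.Stitcher_OK
    cv2.imwrite("panorama.jpg", panorama)
else:
    print("Stitching failed with status", status)
```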

HDR is more complicated. With it, photographers take multiple pictures of the same scene at different exposure levels, then use special software to produce a composite image that doesn't suffer the common problems of blown-out bright areas and murky shadows. With HDR, photographers can create an image that shows both a cathedral's brilliant stained-glass window and its subdued stonework.
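
One way to produce such a composite, sketched below in Python with OpenCV, is Mertens exposure fusion, which blends the well-exposed regions of each bracketed frame directly. A full HDR pipeline would instead merge the frames into a true high-dynamic-range image (Debevec's method, for example) and then tone-map it. The file names are placeholders for a bracketed set of shots.

```python
# Sketch of combining bracketed exposures with OpenCV's Mertens exposure fusion,
# which needs no exposure metadata and outputs a displayable image directly.
import cv2

bracket = [cv2.imread(name) for name in ("under.jpg", "normal.jpg", "over.jpg")]

fusion = cv2.createMergeMertens().process(bracket)   # float image roughly in [0, 1]
cv2.imwrite("fused.jpg", (fusion * 255).clip(0, 255).astype("uint8"))
```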

HDR is a painstaking process today. But that might not always be the case. Panasonic is working on an image sensor that takes three separate images of the same scene for better dynamic range. And it's certainly possible that a camera itself could take several images, align them, and create its own HDR image.

A more radical example is merging multiple images to take the best of each. For example, the high-end version of Adobe's Photoshop CS3 can convert multiple pictures of a tourist attraction, each picture cluttered by visitors, into a single scene with the ephemeral humans gone. In one sense, it's fiction, because the moment never happened, but seen another way, it's capturing some of the essence of a scene.
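
One way such a merge can work is a per-pixel median across an aligned stack of shots: passers-by rarely occupy the same pixel in most frames, so the median keeps the static scene. Here's a sketch of that idea in Python with NumPy and OpenCV, assuming the frames are already aligned; the file names are placeholders, and this isn't necessarily the exact method Photoshop uses.

```python
# Sketch of removing transient objects by taking a per-pixel median across
# several aligned shots of the same scene. Frames must already be registered.
import cv2
import numpy as np

frames = [cv2.imread(f"shot_{i}.jpg") for i in range(5)]   # placeholder names
stack = np.stack(frames, axis=0).astype(np.float32)

clean = np.median(stack, axis=0)        # moving people rarely hit the same pixel
cv2.imwrite("clean_scene.jpg", clean.astype(np.uint8))
```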

Another way multiple images can be combined comes from MotionDSP, whose software helps intelligence agencies and mobile-phone videographers get more out of their imagery. The technology relies on the fact that multiple frames of a video capture the same subject matter, and that processing them together can produce an image of higher fidelity than any individual frame possesses.

MotionDSP CEO Sean Varah said it could be possible for a camera to take a burst of five or six images, then computationally combine them into a single, higher-resolution shot. "I think camera guys would love to have that in the camera because they're always trying to sell you a better camera or keep the price point up," Varah said.
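
A much-simplified sketch of that kind of burst merging appears below in Python with OpenCV: each frame is registered to the first, then the aligned frames are averaged. Averaging mainly suppresses noise; genuine multi-frame super-resolution of the sort Varah describes additionally exploits sub-pixel offsets between frames to recover extra detail. File names and the motion model are placeholder assumptions.

```python
# Simplified sketch of merging a burst: align each frame to the first with ECC
# registration, then average. Averaging reduces noise; real multi-frame
# super-resolution also uses sub-pixel shifts to recover detail.
import cv2
import numpy as np

burst = [cv2.imread(f"burst_{i}.jpg") for i in range(6)]   # placeholder names
ref_gray = cv2.cvtColor(burst[0], cv2.COLOR_BGR2GRAY)

accum = burst[0].astype(np.float32)
for frame in burst[1:]:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    warp = np.eye(2, 3, dtype=np.float32)                  # Euclidean motion model
    _, warp = cv2.findTransformECC(ref_gray, gray, warp, cv2.MOTION_EUCLIDEAN)
    aligned = cv2.warpAffine(frame, warp, (frame.shape[1], frame.shape[0]),
                             flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
    accum += aligned

merged = (accum / len(burst)).astype(np.uint8)
cv2.imwrite("merged.jpg", merged)
```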

Software that can sharpen edges in digital photos has been around for years, but more sophisticated processing is possible, too. MIT researcher Rob Fergus has been working on software to deblur photos marred by camera shake: it analyzes a photo to infer exactly how the camera jiggled when the shot was taken, then backs out those changes.
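
The "back out those changes" step can be illustrated with Richardson-Lucy deconvolution, sketched below in Python with scikit-image. The sketch assumes the blur kernel is already known and uses a stand-in motion streak; the hard part of blind deblurring, which Fergus's research tackles, is estimating that kernel from the photo itself.

```python
# Sketch of removing blur once a kernel is known, via Richardson-Lucy
# deconvolution. The point-spread function here is an assumed stand-in;
# blind deblurring must estimate it from the photo.
import numpy as np
from skimage import io, img_as_float
from skimage.restoration import richardson_lucy

blurred = img_as_float(io.imread("shaky.jpg", as_gray=True))

# Assumed point-spread function: a short horizontal streak mimicking camera shake.
psf = np.zeros((9, 9))
psf[4, :] = 1.0
psf /= psf.sum()

restored = richardson_lucy(blurred, psf, num_iter=30)
io.imsave("deblurred.jpg", (restored * 255).clip(0, 255).astype(np.uint8))
```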

Go deep
It's the 3D realm where some of the more dramatic changes appear. Stereo photography, otherwise known as stereoscopy, has been around since the Victorian age, but that technique relied on taking two images of a scene and letting the human brain reconstruct a 3D image.

(Photo: The array of subtly different images of the same scene that Adobe's plenoptic lens produces. Credit: Adobe)

Research under way now could let the camera, or a computer afterward, understand the third dimension. That could be useful as a way to help the camera figure out how best to focus and expose a shot. More dramatically, it could lead to three-dimensional hologram shots, assuming somebody crafts an economical way to view such data.

One 3D idea comes from Stanford University, where Keith Fife and colleagues have created a camera image sensor that can gauge depth. That sensor works by using hundreds of tiny lenses over the sensor pixels; by comparing the subimages from each subarray of pixels, a computer can judge how far away various features are.
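
The parallax principle behind such a sensor can be illustrated with just two offset views and OpenCV's block-matching stereo matcher, as in the Python sketch below: nearer objects shift more between the views, so larger disparity implies less distance. The image files and matcher parameters are placeholders, and a real micro-lens sensor compares many more sub-images than two.

```python
# Illustration of depth from parallax: compute a disparity map from two
# horizontally offset views using simple block matching. Brighter pixels in
# the output correspond to closer objects.
import cv2

left = cv2.imread("view_left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("view_right.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right)        # fixed-point disparity values

vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("depth_map.png", vis)
```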

A related technology, from start-up Refocus Imaging, produces data files that can be processed to focus the camera after the photo has been taken. It also can be used to deliberately bring a background into focus or to blur it so it's not distracting.

(Photo: Adobe's plenoptic camera lens can help create a 3D representation of a scene. Credit: Adobe)

Essentially, Refocus Imaging substitutes a computer for camera optics. "Computational optics is the next frontier...We can process in software to do what the hardware usually has to do," said Chief Executive Ren Ng.

Making that change could mean the centuries-old, highly refined, sedate field of optics gets swept along at the breakneck rates of change of the computer industry.

"You get the ability to scale performance much faster--a curve that looks like Moore's Law," the famous and largely accurate observation by Intel co-founder Gordon Moore that computer chips get double the number of transistors every two years.

The Refocus Imaging technology is based on a concept called the light field, a much richer description of the light entering a camera. Capturing the light field requires a very different design from conventional cameras, but Adobe expects the capability eventually to be built into cameras.

"If light field photography becomes much more prevalent, which we believe will happen over time, we think will be much more convent to have it built into your camera," Connor said in a recent speech at the 6sight conference on digital imaging. "We're trying to be a catalyst to get this to happen."

Adobe also is working in the new domain. It's been showing a prototype camera with a "plenoptic" lens--one made of many smaller lenses. A computer processing the subimages, each with a slightly different perspective, can reconstruct 3D attributes.
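
The classic way to refocus such a capture after the fact is shift-and-add: every sub-aperture view is shifted in proportion to its position in the lens array and the results are averaged, with the shift factor selecting the virtual focal plane. Here's a hedged Python sketch of that idea, assuming the sub-aperture views have already been extracted into a 4D array; the data in the example is synthetic.

```python
# Sketch of synthetic refocusing from a light field by shift-and-add. Each
# sub-aperture view is shifted according to its offset in the lens array,
# then all views are averaged; alpha picks the virtual focal plane.
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(views, alpha):
    """views: array of shape (U, V, H, W) of grayscale sub-aperture images."""
    U, V, H, W = views.shape
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            dy = alpha * (u - (U - 1) / 2)      # shift grows with distance from
            dx = alpha * (v - (V - 1) / 2)      # the center of the lens array
            out += nd_shift(views[u, v], (dy, dx), order=1, mode="nearest")
    return out / (U * V)

# Example with synthetic stand-in data: refocus at two different depths.
light_field = np.random.rand(5, 5, 120, 160)
near = refocus(light_field, alpha=1.5)
far = refocus(light_field, alpha=-1.5)
```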

Adobe, viewing the technology through the lens of its image-editing business, envisions a tool that could let you edit only the areas of a photo that were close to the photographer. For those who have struggled for hours with detailed masking operations to separate foreground from background, that sort of idea probably sounds like a potential godsend.

But such technology currently exceeds the power of ordinary computers, Connor said in an interview: "It's definitely more computationally intense than the stuff we're typically doing in Photoshop."

But as so many industries have discovered, it's generally a bad idea to bet against Moore's Law.