
Researchers take the blur out of shaky photos

An MIT researcher and others develop a technique that can restore some sharpness to photos blurred by camera shake.

Stephen Shankland
Researchers have unveiled an image-processing technique that shows promise for fixing images spoiled by camera shake.

The technique is based on an algorithm that deduces the path that a wobbling camera took when a photo was shot, then uses that path to reverse much of the resultant blurring. The method isn't a miracle cure, but researchers at the Massachusetts Institute of Technology and the University of Toronto have used it to significantly help a wide variety of sample images.

One example shows a bird with black, white and rust-colored feathers, blurred to the point where its legs are barely discernible. After processing, it's possible to see not just the legs, but also a dark patch around its eyes, white patterning amid the black feathers and other details.

Un-blurring images

"This is the first time that the natural image statistics have been used successfully in deblurring images," lead researcher Rob Fergus of MIT said in an interview after a demonstration at the Siggraph computer graphics convention last week. The authors have filed for a patent on the process.

The technique, which takes 10 to 15 minutes for typical images, uses a statistical property that describes transitions from light to dark in the photo, Fergus said. That property is the same for all real-world images, so by seeing how it varies in a particular photo, the process can infer the camera motion.

Image processing is a big business, and compensating for human and camera error is a significant part of this. Image-editing software products, and even some cameras, can routinely remove red-eye problems caused by flash photography. A big selling point in new cameras is technology to counteract the unsteady hands of photographers as the photo is being taken. And numerous plug-in modules exist to help Adobe Systems' Photoshop with photo improvement tasks such as removing the speckles of image noise, or sharpening edges to make images crisper.

Right now, Photoshop's latest version, CS2, comes with some deblurring technology in its "smart sharpen" filter. It can compensate in a limited way for focusing problems and for image blur, if the camera was moved in a straight line.

The researchers' approach, in contrast, deals with more complicated jiggling motions. "The real patterns really are weird," Fergus said.

Dave Story, vice president of digital imaging product development at Adobe, believes the group's work is a step in the right direction. "It's a little nicer than what we've seen before," he said. "You can start to more accurately and more automatically judge which way the blur was coming from, and it can handle nonlinear paths."

However, the process still leaves some artifacts in the image, meaning more work is necessary, Story said. "We've been exploring this area for three or four years, and there continue to be challenges in making this predictable enough that people will want to use it and it doesn't produce any unnatural artifacts," he said.

For example, when testers are shown sample images, they say that people look "creepy and kind of unnatural," he added.

A study in contrasts
Fergus' technique takes advantage of a statistical property of snapshots that remains constant even across widely varying photos. The property is the collective measurement of the differences in brightness from each pixel to its neighbor.

"It turns out that images in the real world tend to have a distinctive distribution (of light-to-dark gradients)," Fergus said. "If you take lots of different photos, the distribution (of gradients) is very similar--it doesn't change a huge amount between what you would think are different images."

But random images generated by a computer, for example, have a different distribution, Fergus said. "Real images just aren't like random points in million-dimensional space. They have a certain structure," he said.

Specifically, photos of actual scenes all have a similar mix of sharp transitions between bright and dark neighboring pixels on the one hand, and smooth transitions between similar neighboring pixels on the other, Fergus said.

Blurry images, though, have a different collection of such contrast gradients. "All those crisp transitions have been smeared out," Fergus said.
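For readers who want to see what that statistic looks like in code, here is a minimal Python sketch of the pixel-to-pixel brightness differences and their histogram. The NumPy and SciPy calls, the placeholder image, and the uniform blur standing in for camera shake are all assumptions for illustration, not part of the researchers' system.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def gradient_histogram(img, bins=101):
    """Histogram of brightness differences between each pixel and its neighbor."""
    gx = np.diff(img, axis=1).ravel()   # horizontal neighbor differences
    gy = np.diff(img, axis=0).ravel()   # vertical neighbor differences
    hist, edges = np.histogram(np.concatenate([gx, gy]),
                               bins=bins, range=(-1.0, 1.0), density=True)
    return hist, edges

# Placeholder array; in practice you would load a real grayscale photo scaled to
# [0, 1] -- random noise does not have the natural-image statistics Fergus describes.
sharp = np.random.default_rng(0).random((256, 256))
blurred = uniform_filter(sharp, size=9)  # crude stand-in for a shake blur

# The blurred histogram piles up near zero: the large, crisp transitions
# have been smeared out, which is the signature the algorithm looks for.
h_sharp, _ = gradient_histogram(sharp)
h_blur, _ = gradient_histogram(blurred)
```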

The heart of the process is estimating how the camera moved based on the missing contrasts. At first, a very coarse estimate of the camera motion is calculated from a low-resolution version of the original image. The process is then repeated on progressively higher-resolution versions.

"We use the distribution of sharp gradients to guide what the true, sharp image should look like," Fergus said. "By breaking it up into smaller steps, we can successfully get out really complicated patterns that are characteristic of real camera shake."

The result is called a blur kernel, a grid that shows where the camera spent its time pointing.
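The coarse-to-fine structure Fergus describes can be sketched as a loop over image scales, with each pass seeding the next. The sketch below is only an outline under assumptions: the per-scale statistical estimation step, refine_kernel(), is a hypothetical placeholder the article does not detail, and the level count and scale factor are made-up parameters.

```python
import numpy as np
from scipy.ndimage import zoom

def estimate_blur_kernel(image, levels=6, scale=0.7):
    """Coarse-to-fine outline of blur-kernel estimation (illustrative only)."""
    kernel = np.zeros((3, 3))
    kernel[1, 1] = 1.0                        # coarsest guess: no camera motion
    for i, level in enumerate(reversed(range(levels))):
        small = zoom(image, scale ** level, order=1)   # low resolution first
        if i > 0:
            # Upsample the coarser estimate to seed the next, finer scale.
            kernel = zoom(kernel, 1.0 / scale, order=1)
            kernel = np.clip(kernel, 0.0, None)
            kernel /= kernel.sum()
        kernel = refine_kernel(small, kernel)  # hypothetical per-scale refinement
    return kernel  # the "blur kernel": a grid of where the camera spent its time
```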

That blur kernel is then used as the basis for the second phase of the technique, a process developed in the early 1970s called "deconvolution." This attempts to reverse the specific blurring effect.
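The article does not say which deconvolution method the researchers used, but Richardson-Lucy deconvolution is a well-known technique from that same early-1970s era. A minimal sketch with scikit-image, assuming the blur kernel has already been estimated and using a toy horizontal-streak kernel for demonstration:

```python
import numpy as np
from scipy.signal import convolve2d
from skimage.restoration import richardson_lucy

rng = np.random.default_rng(0)
sharp = rng.random((128, 128))   # placeholder for a sharp grayscale image

# Toy blur kernel: a short horizontal streak, as if the camera slid sideways.
psf = np.zeros((15, 15))
psf[7, 3:12] = 1.0
psf /= psf.sum()

blurred = convolve2d(sharp, psf, mode="same", boundary="symm")

# Deconvolution attempts to reverse that specific blur; 30 iterations here.
restored = richardson_lucy(blurred, psf, 30)
```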

Ups and downs
Overall, the process takes about 10 to 15 minutes to fix typical digital camera images. However, Fergus hasn't yet tried to optimize the process, so significant speed improvements are possible.

"In Photoshop, you want a plug-in that runs in maybe a minute, tops," Fergus said. With more time spent "doing a really careful, efficient implementation, I'm sure it would become much faster and nearer to the speed you need," he added--perhaps even fast enough to run in a camera.

A little manual input is required, as well: A person must select a rectangular region of the original image where there are edges. Too small a patch yields poor results, but too large a patch takes too long, Fergus said.

In addition, the process doesn't work well for pictures with extremely bright or dark patches--a problem known as "clipping" in image-editing parlance. For that reason, the algorithm works better on "raw" images from higher-end digital cameras, which possess a greater range of light intensities.

Another hurdle comes from noisy images, such as those taken from lower-end cameras with small image sensors--especially when they've been shot in dark conditions. The speckles of noise can look like edges to the deblurring process, Fergus said.

People should not expect to see the technology in software or cameras soon. Fergus has no illusions about the maturity of the process. "This is a first effort. There's quite a bit of work to be done before it gets into a real application," he said.