
Stanford camera chip can see in 3D

Chip designers develop image sensor that can judge the distance of different elements in a scene, but it takes a good deal of computing brawn to process the image.

Most folks think of a photo as a two-dimensional representation of a scene. Stanford University researchers, however, have created an image sensor that also can judge the distance of subjects within a snapshot.

To accomplish the feat, Keith Fife and his colleagues have developed technology called a multi-aperture image sensor that sees things differently than the light detectors used in ordinary digital cameras.

Each subarray on the multi-aperture sensor captures a small portion of the overall image, a portion that overlaps slightly with that of the neighboring subarrays. By comparing the differences, a camera can judge the distance of elements in the subject. (Note that this mock-up differs from reality, in which each subimage would be rotated 180 degrees, but this makes the idea easier to grasp.) Keith Fife/Stanford University

Instead of devoting the entire sensor to one big representation of the image, Fife's 3-megapixel sensor prototype breaks the scene up into many small, slightly overlapping 16x16-pixel patches called subarrays. Each subarray has its own lens to view the world--thus the term multi-aperture.

After a photo is taken, image-processing software analyzes the slight differences in where the same element appears from one patch to the next--for example, where a spot on a subject's shirt sits relative to the wallpaper behind it. These differences from one subarray to the next can be used to deduce the distance of the shirt and the wall.
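The shirt-and-wallpaper reasoning is the classic depth-from-disparity relation used in stereo imaging. The sketch below illustrates the idea; the baseline and focal-length numbers are hypothetical placeholders, not figures from the Stanford prototype.

```python
# Sketch of depth-from-disparity: neighboring subarrays see the same
# feature shifted slightly; nearer objects shift more between views.
# All numbers here are hypothetical, for illustration only.

def depth_from_disparity(disparity_px, baseline_mm, focal_px):
    """Classic stereo relation: depth = baseline * focal length / disparity."""
    if disparity_px <= 0:
        raise ValueError("feature must shift between apertures to judge depth")
    return baseline_mm * focal_px / disparity_px

# A spot on the shirt shifts 4 pixels between subarrays; the wallpaper
# behind it shifts only 1 pixel -- so the shirt must be closer.
shirt_depth = depth_from_disparity(4.0, baseline_mm=0.5, focal_px=2000)  # 250 mm
wall_depth = depth_from_disparity(1.0, baseline_mm=0.5, focal_px=2000)   # 1000 mm
```

The larger the shift (disparity), the smaller the computed distance, which is why featureless surfaces that produce no measurable shift defeat the technique.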

"In addition to the two-dimensional image, we can simultaneously capture depth info from the scene," Fife said when describing the technology in a talk at the International Solid State Circuits Conference earlier this month in San Francisco.

The result is a photo accompanied by a "depth map" that describes not only each pixel's red, green, and blue light components but also how far away that pixel is. Right now, the Stanford researchers have no specific file format for the data, but the depth information can be attached to a JPEG as accompanying metadata, Fife said.
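Since the article notes there is no file format yet, the in-memory layout below is purely my assumption: it just pairs an ordinary RGB image with a same-sized depth map to show what "each pixel plus its distance" could look like.

```python
import numpy as np

# Hypothetical RGBD layout -- not the Stanford format, which doesn't exist yet.
h, w = 4, 4
rgb = np.zeros((h, w, 3), dtype=np.uint8)   # red, green, blue per pixel
depth = np.full((h, w), 1000.0)             # distance per pixel, here in mm
depth[1:3, 1:3] = 250.0                     # a nearer object in the center

# Stack into a single 4-channel array: R, G, B, and depth per pixel.
rgbd = np.dstack([rgb.astype(np.float64), depth[..., None]])
```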

Recording photos in three dimensions is a pretty radical overhaul of the concept. Depending on your preferences, it could be anything from an exciting new frontier to the latest annoying digital gimmick.

Either way, you'd best start thinking about the implications because Fife isn't the only one working on the challenge. Image-editing powerhouse Adobe Systems has shown off some 3D camera technology too. It should be noted, of course, that stereoscopy itself is an old and respected photographic subject.

Even if you don't want to print holographic pictures of your new kitten, I suspect that 3D technology could help with some traditional photography challenges. Just as face detection can make a camera decide better where to focus and how to expose a shot, having a depth map could make this sort of calculation that much more sophisticated.

This diagram shows the multi-aperture sensor, which puts a small lens over a group of image sensor pixels. Each subarray gets its own microlens. Keith Fife/Stanford University

Other advantages
Depth isn't the only potential advantage of the multi-aperture approach, Fife said. It could also help reduce noise, which in digital photography takes the form of colored speckles that are a particular plague when shooting at higher ISO sensitivity settings.

The noise is reduced because multiple subarrays capture the same views, making it easier to distinguish the true color of the subject from off-color noise. In addition, each subarray can be set to record a specific color, which could reduce the "color crosstalk" of current image sensors, he said. Today's "Bayer" pattern sensors employ a checkerboard of red, green, and blue pixel sensors, but bright red light captured by a red pixel can, for example, leak out a bit and affect the neighboring blue and green pixels.
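The averaging-out effect of redundant captures can be demonstrated numerically. This is a generic simulation of my own, not the chip's actual processing: several noisy readings of the same flat patch, averaged together, land closer to the true value than any single reading.

```python
import numpy as np

rng = np.random.default_rng(0)

# The same scene patch is seen by several overlapping subarrays.
# Averaging their redundant readings suppresses random sensor noise
# (roughly by the square root of the number of views).
true_patch = np.full((16, 16), 128.0)   # a flat gray 16x16 patch
n_views = 9                             # hypothetical overlap count
views = true_patch + rng.normal(0.0, 8.0, size=(n_views, 16, 16))

single_err = np.abs(views[0] - true_patch).mean()
averaged_err = np.abs(views.mean(axis=0) - true_patch).mean()
# averaged_err comes out noticeably smaller than single_err
```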

Each subarray gets its own microlens. Although that complicates the manufacturing of the sensor, it could simplify the lenses used in existing cameras, Fife said. And lens manufacturing today certainly has no shortage of difficulties with a variety of exotic glass and even fluorite crystal elements, aspherical elements, and other avant-garde optics.

"There is opportunity for most of the complexity of the lens design to sit at the semiconductor rather than at the objective lens," Fife said. "Although the local optics (on the sensor) may be challenging, it is possible that the optics can be better controlled with lithography and semiconductor processes than with the injection molding and grinding that is used in the conventional camera lenses."

The microlenses might even be all that's needed for some applications, such as taking super-closeup "in vivo" photos inside plant and animal subjects where there's no room for a camera, Fife said. "The multiaperture sensor can form images at close proximity...because no objective lens is needed," Fife said.

This photo shows the prototype chip with 12,616 subarrays. Each pixel on the chip is 0.7 microns on edge, and the chip consumes 10.45 milliwatts of power. Keith Fife/Stanford University

No free lunch
Lest you get carried away by the technology, you should be aware of a number of caveats:

• Because the same subject matter is captured redundantly by multiple pixels, the ultimate sensor resolution is lower than the raw number on the overall sensor.

• Processing the image, both to figure out how to merge the subimages into one overall image and to create the depth map, takes about 10 times as much processing horsepower as conventional on-chip image processing. Cameras already are battery hogs, and nobody wants to draw any more power or slow down camera performance.

• 3D images are possible only with subjects that have texture and other detail. "If a picture is captured of a perfectly smooth white wall, it is impossible to estimate the distance to that wall," Fife said.
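The first caveat's resolution trade-off can be checked with quick arithmetic using the chip figures given above. The redundancy factor below is a hypothetical placeholder; the article does not state how much resolution is actually lost.

```python
# Figures from the article: 12,616 subarrays of 16x16 pixels each.
subarrays = 12_616
raw_pixels = subarrays * 16 * 16   # about 3.2 million raw pixels

# Neighboring subarrays re-capture overlapping scene content, so the
# effective resolution is lower than the raw count. The factor of 4
# here is a made-up illustration, not a figure from the research.
redundancy = 4
effective_pixels = raw_pixels // redundancy
```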

So those are the downsides, but that's par for the course with new technology. And even if the technology never makes it into products, it's a strong indicator of the radical transformations in store for digital photography.