Google's Pixel 3 camera rewrites photo rules with nifty new tricks

Attention, photo nerds. Here's exactly how Google's pioneering photo technology helps both mainstream shooters and enthusiasts.

Stephen Shankland

Every digital camera is flawed. Image sensors can't capture light perfectly, lenses distort scenery, and photos so often seem blah compared with what you remember seeing.

But Google, with its Pixel 3 and Pixel 3 XL smartphones, has found new ways to use software and hardware to overcome those flaws and get you better pictures. Its Pixel and Pixel 2 phones had already advanced the state of the art for smartphone photography, but the Pixel 3 goes even further.

The Pixel 3 camera holds its own against Apple's iPhone XS despite having one camera tied behind its back. It all but dispenses with the camera's flash, using new low-light shooting abilities instead. And it offers enthusiasts a radically new variety of raw image that opens up photographic flexibility and artistic freedom.

It's all possible because of a field called computational photography, a term invented in 2004 by Google distinguished engineer Marc Levoy while he was at Stanford, before he moved full-time to Google Research. Long gone are the days when photography was all about glass lenses and film chemistry. Fast receding are first-generation digital cameras that closely mirror the analog approach.

Now our cameras rely as much on computers as optics. And what we've seen so far is only the beginning.

Here's what that means specifically for Google's Pixel 3 and its larger sibling, the Pixel 3 XL.

Super Res Zoom for pushing those pixels

The term "digital zoom" has a bad reputation, because you can't just say "enhance," zoom into an image and expect new detail to appear that wasn't captured in the first place.

That's why it's worth paying a premium for optical zoom methods -- notably the second (or third, or fourth) camera in phones from companies including Apple, Samsung and LG Electronics. The Pixel 3 comes with a feature called Super Res Zoom that sports a new way to capture detail in the first place. The upshot is that Google's single main camera has image quality that "comes very, very close" to a second camera optically zoomed in twice as far, Levoy said.

The Google Pixel 3 Super Res Zoom feature, used to take the photo at left, comes "very close" to the image quality of a shot from a camera with 2X optical zoom, Google says. The shot at right was taken with an iPhone XS Max at 2X, and both are zoomed in to 100 percent. (Stephen Shankland/CNET)

Here's how it works -- but first, buckle up for a little background on the innards of digital cameras.

All image sensors have an array that records the intensity of the light that each pixel sees. But to record color, too, camera makers place a checkerboard pattern of filters in front of each pixel. This Bayer filter, invented at Eastman Kodak in the 1970s, means each pixel records either red, green or blue -- the three colors out of which digital photos are constructed.

This shot flips back and forth between a Super Res Zoom photo taken with a Pixel 3 and an ordinary photo digitally zoomed by 2X with a Pixel 2. (Google)

A problem with the Bayer filter is that cameras have to make up data so that each pixel has all three colors -- red, green and blue -- not just one of them. This mathematical process, called demosaicing, means you can see and edit a photo, but it's just a computer making its best guess about how to fill in color details pixel by pixel.
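
For the photo nerds who want to see what that guesswork looks like, here's a minimal sketch in Python of one of the simplest approaches -- plain neighborhood averaging over an RGGB Bayer mosaic. It illustrates the general idea, not the tuned demosaicing any particular camera actually ships.

```python
import numpy as np

def demosaic_bilinear(mosaic):
    """Rough bilinear-style demosaic of an RGGB Bayer mosaic.

    mosaic: 2D array where each pixel holds a single color sample,
    laid out in repeating 2x2 blocks of R G / G B.
    Returns an (H, W, 3) RGB image with the missing samples guessed
    by averaging each pixel's 3x3 neighborhood of known samples.
    """
    h, w = mosaic.shape
    rgb = np.zeros((h, w, 3), dtype=np.float64)

    # Masks marking which color each sensor pixel actually recorded.
    r_mask = np.zeros((h, w), dtype=bool)
    r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), dtype=bool)
    b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    for channel, mask in enumerate((r_mask, g_mask, b_mask)):
        samples = np.where(mask, mosaic, 0.0)
        known = mask.astype(np.float64)
        total = np.zeros((h, w))
        count = np.zeros((h, w))
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                total += np.roll(np.roll(samples, dy, axis=0), dx, axis=1)
                count += np.roll(np.roll(known, dy, axis=0), dx, axis=1)
        # The "best guess": average of whatever real samples are nearby.
        rgb[..., channel] = total / np.maximum(count, 1.0)

    return rgb
```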

Super Res Zoom gathers more information in the first place. It combines multiple shots, counting on your imperfectly steady hands to move the phone slightly so it can gather red, green and blue color data -- all three colors -- for each element of the scene. If your phone is on a tripod, the Pixel 3 will use its optical image stabilizer to artificially wobble the view, Levoy said.

The result: sharper lines, better colors and no demosaicing. That offers the Pixel 3 a better foundation when it's time to digitally zoom.
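
As a rough illustration of that idea (and definitely not Google's pipeline), the Python sketch below assumes a burst of Bayer frames whose whole-pixel offsets are already known from alignment. Shift each frame onto a common grid, keep track of which color each shifted sample carries, and many pixels end up with directly measured red, green and blue instead of demosaiced guesses.

```python
import numpy as np

def merge_burst_rggb(frames, offsets):
    """Toy burst merge in the spirit of multi-frame super-resolution.

    frames:  list of 2D RGGB Bayer mosaics, all the same shape.
    offsets: list of (dy, dx) whole-pixel shifts that align each frame
             to the first one. A real pipeline estimates these (to
             sub-pixel precision) from the burst itself.
    Returns an (H, W, 3) RGB image built from measured samples wherever
    the burst happened to cover that pixel in that color.
    """
    h, w = frames[0].shape
    total = np.zeros((h, w, 3))
    count = np.zeros((h, w, 3))
    yy, xx = np.mgrid[0:h, 0:w]

    for frame, (dy, dx) in zip(frames, offsets):
        # Shift the frame onto the common grid (edges wrap; fine for a toy).
        aligned = np.roll(np.roll(frame, dy, axis=0), dx, axis=1)
        # The Bayer color of each shifted sample depends on where it
        # came from in the original frame.
        ys, xs = (yy - dy) % 2, (xx - dx) % 2
        channel = np.where((ys == 0) & (xs == 0), 0,        # red
                  np.where((ys == 1) & (xs == 1), 2, 1))    # blue, else green
        for c in range(3):
            hit = channel == c
            total[..., c][hit] += aligned[hit]
            count[..., c][hit] += 1

    # Pixels never covered in some color stay zero here; a real merge
    # would fall back to interpolation for those.
    return np.where(count > 0, total / np.maximum(count, 1), 0.0)
```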

Those who shoot at the camera's natural focal length might be eager for the extra quality, too, but Super Res Zoom only kicks in at 1.2X zoom or higher, Levoy said. Why not at 1X zoom? "It's a performance thing," he said. Super Res Zoom slows photo capture and uses more power.

And Super Res Zoom doesn't work with video, either, so if you want telephoto there, a second camera still can be worth paying for.

New computational raw for flexible photos

More than a decade ago, a generation of digital photography enthusiasts and pros discovered the power of shooting with a camera's raw photo format -- data taken directly from the image sensor with no extra processing. Google's Pixel 3 smartphones could expand that revolution to mobile phones, too.

Android phones have been able to shoot raw images since 2014, when Google added support for Adobe's Digital Negative (DNG) file format to record the unprocessed data. But limits in smartphone image sensors have hobbled the technology.

With an SLR or mirrorless camera with a large sensor, shooting raw offers lots of advantages if you're willing or eager to get your hands dirty in some photo-editing software like Adobe Lightroom. That's because "baking" a JPEG locks in lots of camera decisions about color balance, exposure, noise reduction, sharpening and other attributes of the image. Shooting raw gives photographers control over all that.

Raw has been a bit of a bust on mobile phones, though, because tiny image sensors in phones are plagued by high noise and low dynamic range, or the ability to capture both bright highlights and murky details in the shadows. Today, advanced cameras sidestep the problem by combining multiple shots into one high-dynamic range (HDR) image. Google's approach, HDR+, merges up to nine underexposed frames, an approach Apple has mimicked to good effect with its new iPhone XS and XS Max.
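
HDR+ itself involves careful alignment and robust merging, but the core statistical trick is easy to sketch: average a burst of identically underexposed frames, then brighten the result. Averaging N frames knocks random noise down by roughly the square root of N, which is what keeps the brightened shadows from turning to mush. Here's a toy Python version that assumes the frames are already aligned:

```python
import numpy as np

def merge_underexposed(frames, gain=4.0):
    """Toy stand-in for merging a burst of underexposed frames.

    frames: list of aligned (H, W) or (H, W, 3) linear-light arrays in
            [0, 1], all shot with the same short exposure so highlights
            survive in every frame.
    gain:   how much to brighten the merged result afterward.
    """
    merged = np.mean(np.stack(frames, axis=0), axis=0)  # noise ~ 1/sqrt(N)
    # A crude stand-in for real tone mapping: gain, then display gamma.
    return np.clip((merged * gain) ** (1.0 / 2.2), 0.0, 1.0)
```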

The Pixel 3 camera merges multiple shots and applies other tricks to create a single "computational raw" photo file that has less noise and better color than the standard raw file at left, taken with Adobe's Lightroom app. To be fair, Adobe also offers an HDR option, and its noisier image also retains some detail. (Stephen Shankland/CNET)

With the Pixel 3, Google's camera app now also can shoot raw -- except that it applies Google's own special HDR sauce first. If you enable the DNG setting in the camera app's settings, the Pixel 3 will create a DNG that's already been processed for things like dynamic range and noise reduction without losing the flexibility of a raw file.

"Our philosophy with raw is that there should be zero compromise," Levoy said. "We run Super Res Zoom and HDR+ on these raw files. There is an incredible amount of dynamic range."

There are still limits. If you zoom in with the camera, your JPEGs will have more pixels than your DNGs. For JPEGs, the Pixel 3 zooms in with a combination of Google's own RAISR AI technology and the more traditional Lanczos algorithm, Levoy said, but for raw, you'll have to do the digital zoom yourself.
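
If you do want a zoomed rendition from one of those DNGs, the crop-and-upscale step is yours to do in software. Here's one hedged way it might look in Python, assuming the rawpy and Pillow libraries and a made-up file name; Lanczos here is the generic resampling filter, not Google's RAISR.

```python
import rawpy              # develops DNG/raw files
from PIL import Image

# Develop the computational-raw DNG to RGB, then crop and upscale by
# hand -- the do-it-yourself digital zoom the Pixel 3 skips for raw.
with rawpy.imread("pixel3_shot.dng") as raw:          # file name is an example
    rgb = raw.postprocess()                           # demosaic + basic development

img = Image.fromarray(rgb)
w, h = img.size
crop = img.crop((w // 4, h // 4, 3 * w // 4, 3 * h // 4))  # central 2X crop
zoomed = crop.resize((w, h), resample=Image.LANCZOS)       # Lanczos upscale
zoomed.save("pixel3_shot_2x.jpg", quality=95)
```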

Another caveat to Pixel 3 raw: Although Google could use Super Res Zoom's wealth of color data to bypass demosaicing, most photo-editing software only can handle raw files that haven't been demosaiced yet. The Pixel 3 supplies a Bayer-pattern DNG file as a result.

"The JPEGs from the Pixel camera may actually be more detailed than the DNGs in some cases," Levoy said.

Google's images also get a dynamic range boost with an image sensor that performs better than the one in last year's Pixel 2, said Isaac Reynolds, Google's Pixel camera product manager.

Seeing in the dark with Night Sight

All Pixel models use HDR+ by default to produce images with a good dynamic range. The Pixel 3 will take it a step further with a tweak of the technology called Night Sight for shooting in the dark, though the feature won't be released for some weeks yet, Google said.

"Night sight is HDR+ on steroids," Levoy said, taking up to 15 frames in as long as a third of a second. The camera combines these multiple frames into one shot and handles things like aligning the frames and avoiding "ghosting" artifacts caused by differing details between frames.

A 1/3-second exposure is pretty long, even with optical image stabilization. To avoid problems, the Pixel 3 uses "motion metering," which monitors images and the camera gyroscope to shorten the shutter speed when motion blur is a problem for the camera or the subjects.
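
Conceptually, motion metering is a budget calculation: estimate how fast the image is sweeping across the sensor, then cap the exposure so the blur stays within a few pixels. The Python sketch below is illustrative only; the names and thresholds are invented, not Google's.

```python
def choose_exposure(gyro_rate_dps, subject_motion_px_s, max_exposure_s=1/3):
    """Toy version of "motion metering": shorten the per-frame exposure
    when the phone or the subject is moving, so motion blur stays small.

    gyro_rate_dps:       angular rate from the gyroscope, degrees/second.
    subject_motion_px_s: estimated subject motion in pixels/second, from
                         comparing consecutive viewfinder frames.
    All constants here are made up for illustration.
    """
    pixels_per_degree = 30.0   # depends on focal length and sensor resolution
    blur_budget_px = 2.0       # how much blur we are willing to tolerate

    camera_blur_rate = gyro_rate_dps * pixels_per_degree
    total_rate = max(camera_blur_rate + subject_motion_px_s, 1e-6)

    # Blur grows linearly with exposure time, so cap the time accordingly.
    return min(max_exposure_s, blur_budget_px / total_rate)
```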

"In practice, it does take detailed images," Reynolds said.

Google also had to come up with a new way to gauge the proper white balance -- correcting for various tints a photo can have depending on lighting conditions like daytime shade, fluorescent lightbulbs or sunset. Google now uses AI tech to set white balance, Levoy said.
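
For contrast, the classical baseline that learned white balance is meant to beat is the "gray world" heuristic: assume the scene averages out to neutral gray and scale each color channel accordingly. A minimal Python sketch of that baseline (emphatically not Google's method):

```python
import numpy as np

def gray_world_white_balance(rgb):
    """Classical gray-world white balance: scale each channel so the
    image's average color comes out neutral. This is the kind of simple
    heuristic that a learned white-balance model replaces.

    rgb: (H, W, 3) linear-light image with values in [0, 1].
    """
    means = rgb.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / np.maximum(means, 1e-6)
    return np.clip(rgb * gains, 0.0, 1.0)
```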

The company plans to make the feature available in the camera app's More menu, but could make Night Sight more accessible, too, Reynolds said. "We realize that might be a pain, that you might forget it when in very low light," he said. "There will be an easier way to get in."

AI brains for portraits and more

Last year's Pixel 2 was the first Google phone to ship with the Pixel Visual Core, a Google-designed processor for speeding up AI tasks. The Pixel 3 has the AI booster, too, and this year Google is using it for new photo purposes.

Pixel Visual Core helps with HDR+ and is instrumental for the camera app's Lens feature, which lets you search based on a photo or recognize a phone number to dial.

A shot taken with the Google Pixel 3 XL portrait mode. (Stephen Shankland/CNET)

And it plays a big role in this year's updated portrait mode, which mimics the background blur possible with conventional cameras that can shoot with a shallow depth of field. Apple pioneered portrait mode by using two cameras to calculate how far parts of a scene were from the camera. Google did it with one camera and a "dual pixel" image sensor that produced similar depth information.

But now Google is doing it all with AI smarts analyzing that depth information, an approach the company says works better.

"The background will be more uniformly defocused, especially for subjects in middle distances, like 5 to 10 feet away," Levoy said.

Another advantage to AI: Google can train the system more to get better results and ship them in software updates. And Google doesn't just train the system on faces, Levoy said. "Our learning-based depth-from-dual-pixels method works on all scenes, including flowers. Especially flowers!"

The Pixel 3 embeds the depth information into its JPEG file so you can edit the depth and focus point after the fact in the Google Photos app.
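
Editing focus after the fact boils down to blurring each pixel according to how far its depth sits from the depth you want sharp. Here's a deliberately simple Python sketch that assumes you already have the image and its depth map as arrays; real synthetic bokeh is far more sophisticated.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def refocus(rgb, depth, focus_depth, max_sigma=8.0):
    """Toy depth-based refocus: blend between the sharp image and a
    blurred copy, weighting by how far each pixel's depth is from the
    chosen focus depth.

    rgb:   (H, W, 3) floats in [0, 1].
    depth: (H, W) depth map normalized to [0, 1].
    focus_depth: the depth value to keep sharp.
    """
    blurred = gaussian_filter(rgb, sigma=(max_sigma, max_sigma, 0))
    # Pixels within ~0.25 of the focus depth stay partly sharp;
    # everything farther out gets the full blur.
    weight = np.clip(np.abs(depth - focus_depth) / 0.25, 0.0, 1.0)
    return rgb * (1 - weight[..., None]) + blurred * weight[..., None]
```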

AI also figures into Top Shot, the feature that kicks in when the camera detects faces and then tries picking a winner out of a sequence. It's been trained on a database of 100 million images of people smiling, showing surprise and not blinking.

New chip horsepower also allows the Pixel to detect where human faces and bodies are and brighten both slightly for a more pleasing photo, Reynolds said.

"We dubbed that synthetic fill flash," he said. "It emulates what a reflector might do," referring to the reflective materials that portrait and product photographers use to bounce more light onto a photo subject.

Our computational photography future

It's clear computational photography is reaching ever deeper into all smartphone cameras. The term has risen to such a level that Apple marketing chief Phil Schiller mentioned it during the iPhone XS launch event in September.

But only one company actually employs the guy who coined the term. Levoy is modest about it, pointing out that the technology has spread well beyond his research.

"I invented the words, but I no longer own the words," he said.

He's got plenty of other ideas. He's particularly interested in depth information.

Knowing how far away parts of a scene are could improve that synthetic fill flash feature, for example, or let Google adjust the white balance for nearby parts of a scene in blue-tinted shadow differently from farther parts in yellower sunlight.

So you should expect more in the Pixel 4 or whatever else Levoy, Reynolds and their colleagues are working on now.

"We have just begun to scratch the surface," Levoy said, "with what computational photography and AI have done to improve the basic single-press picture taking."
