How Google's Pixel 2 camera outpaces last year's photo tech

Better hardware and new AI smarts mean this year's Google phone is better with low-light photos while adding a new portrait mode.

Stephen Shankland

What do you do to top Google's highly regarded first-generation Pixel phone camera? The same, only more.

More HDR+ image processing. More chip power. More artificial intelligence. And more image stabilization. The result for photos: "All the fundamentals are improved," said Tim Knight, head of Google's Pixel camera engineering team. On top of that are new features including motion photos, face retouching and, perhaps most important, portrait mode.

In the days of film, a photo was the product of a single release of the camera's shutter. In the digital era, it's as much the result of computer processing as old-school factors like lens quality.

The Google Pixel 2 portrait mode blurs backgrounds using a single camera, machine learning and a dual-pixel sensor to help judge depth.

Stephen Shankland/CNET

It's a strategy that plays to Google's strengths. Knight hails from Lytro, a startup that tried to revolutionize photography with a new combination of lenses and software, and he works with Marc Levoy, who as a Stanford professor coined the term "computational photography." It may sound like a bunch of technobabble, but all you really need to know is that it really does produce a better photo.

It's no wonder Google is investing so much time, energy and money into the Pixel camera. Photography is a crucial part of phones these days as we document our lives, share moments with our contacts and indulge our creativity. A phone with a bad camera is like a car with a bad engine -- a deal-killer for many. Conversely, a better shooter can be the feature that gets you to finally upgrade to a new model. 

Cameras are a big-enough deal to launch major ad campaigns like Apple's "shot on iPhone" billboards and Google's "#teampixel" response.

Your needs and preferences may vary, but my week of testing showed the Pixel 2 to be a strong competitor and a significant step ahead of last year's model. Be sure to check CNET's full Pixel 2 review for all the details on the phone.

AI brains

Some of Google's investment in camera technology takes the form of AI, which pervades just about everything Google does these days. The company won't disclose every area where the Pixel 2 camera uses machine learning and "neural network" technology, which works something like the human brain, but it's at least used in setting photo exposure and portrait-mode focus.

Neural networks do their learning via lots of real-world data. A neural net that sees enough photographs labeled with "cat" or "bicycle" eventually learns to identify those objects, for example, even though the inner workings of the process aren't the if-this-then-that sorts of algorithms humans can follow.
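To make that concrete, here's a minimal sketch of that kind of training loop, written in Python with the PyTorch library. It's purely illustrative: the tiny network, the two labels and the random stand-in "photos" are assumptions for the example, not anything from Google's camera pipeline.

```python
# A toy version of supervised learning: show a network labeled photos,
# measure how wrong its guesses are, and nudge its weights toward the answers.
# Illustrative only -- not Google's code; the data here is random noise.
import torch
import torch.nn as nn

model = nn.Sequential(                      # a tiny image classifier
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 2),                       # two labels: "cat" and "bicycle"
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Stand-in for a real labeled dataset: random pixels with random labels.
images = torch.rand(32, 3, 64, 64)          # 32 "photos," RGB, 64x64 pixels
labels = torch.randint(0, 2, (32,))         # 0 = cat, 1 = bicycle

for step in range(100):
    predictions = model(images)
    loss = loss_fn(predictions, labels)     # how wrong the network is
    optimizer.zero_grad()
    loss.backward()                         # trace the blame back through the net
    optimizer.step()                        # adjust the weights slightly
```

After enough of those passes over real labeled photos, the network gets good at telling the categories apart, but what it learned lives in millions of adjusted numbers rather than in rules a person can read.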

"It bothered me that I didn't know what was inside the neural network," said Levoy, who initially was a machine-learning skeptic. "I knew the algorithms to do things the old way. I've been beat down so completely and consistently by the success of machine learning" that now he's a convert.

Google's Pixel 2 XL has a single camera, unlike rival flagship phones from Apple and Samsung. The second big circle is a fingerprint reader.

Stephen Shankland/CNET

One thing Google didn't add more of was actual cameras. Apple's iPhone 8 Plus, Samsung's Galaxy Note 8, and other flagship phones these days come with two cameras, but for now at least, Google concentrated its energy on making that single camera as good as possible.

"Everything you do is a tradeoff," Knight said. Second cameras often aren't as good in dim conditions as the primary camera, and they consume more power while taking up space that could be used for a battery. "We decided we could deliver a really compelling experience with a single camera."

Google's approach also means its single-lens camera can use portrait mode even with add-on phone-cam lenses from Moment and others.

Light from darkness 

So what makes the Google Pixel 2 camera tick?

A key foundation is HDR+, a technology that deals with the age-old photography problem of dynamic range. A camera that can capture a high dynamic range (HDR) records details in the shadows without turning bright areas like somebody's cheeks into distracting glare.

Google's take on the problem starts by capturing up to 10 photos, all very underexposed so that bright areas like blue skies don't wash out. It picks the best of the bunch, weeding out blurry ones, then combines the images to build up a properly lit image.

Compared to last year, Google went even farther down the HDR+ path. The raw frames are even darker on the Pixel 2. "We're underexposing even more so we can get even more dynamic range," Knight said.
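In code, the burst-merge idea looks roughly like the sketch below, written in Python with NumPy. It's a loose illustration, not Google's HDR+ pipeline: real HDR+ aligns the frames, models sensor noise and tone-maps the result, and the sharpness test and brightening factor here are made-up simplifications.

```python
# A loose illustration of burst merging: rank underexposed frames by sharpness,
# average the best ones to suppress noise, then brighten the result.
# Not Google's HDR+ -- alignment, noise modeling and tone mapping are omitted.
import numpy as np

def sharpness(frame):
    """Crude sharpness score: average pixel-to-pixel difference."""
    gray = frame.mean(axis=2)
    return np.abs(np.diff(gray, axis=0)).mean() + np.abs(np.diff(gray, axis=1)).mean()

def merge_burst(frames, keep=6):
    """Average the sharpest dark frames, then lift the deliberate underexposure."""
    ranked = sorted(frames, key=sharpness, reverse=True)    # weed out the blurry shots
    merged = np.mean(ranked[:keep], axis=0)                 # averaging smooths out noise
    return np.clip(merged * 2.5, 0, 255).astype(np.uint8)   # arbitrary brightening factor

# Stand-in burst: up to 10 dark frames (random values here, sensor data in real life).
burst = [np.random.randint(0, 80, (1080, 1440, 3), dtype=np.uint8) for _ in range(10)]
result = merge_burst(burst)
```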

Google also uses artificial intelligence to judge just how bright is right, Levoy said. Google trained its AI with many photos carefully labeled so the machine-learning system could figure out what's best. "What exposure do you want for this sunset, that snow scene?" he said. "Those are important decisions."

In challenging conditions like this dawn sky, the Pixel 2 photo at left is sharper, with better shadow details and a less washed-out sky compared to the shot from 2016's first-gen Pixel. This image is zoomed in, with shadows boosted to show details.

Stephen Shankland/CNET

HDR+ works better this year also because the Pixel 2 and its bigger Pixel 2 XL sibling add optical image stabilization (OIS). That means the camera tries to counteract camera shake by physically moving optical elements. That's a sharp contrast to the first Pixel, which only uses software-based electronic image stabilization to try to un-wobble the phone.

With optical stabilization, the Pixel 2 phones get a better foundation for HDR. "With OIS, most of the frames are really sharp. When we choose which frames to combine, we have a large number of excellent frames," Knight said.


New camera hardware

Image stabilization, along with an f1.8 lens that lets in a bit more light than last year's f2 Pixel, helps compensate for another change: a smaller image sensor.

Last year's Pixel used an unusually large light-gathering chip, a move that improves dynamic range but that makes the phone's camera module bulkier. This year, Google again chose a Sony image sensor, but for the Pixel 2 it's a bit smaller.

The Google Pixel 2 uses a Sony image sensor and a lens that gathers more light than last year's Pixel.

Google

The reason: Google wanted a dual-pixel sensor design, and only the smaller size was an option. Dual-pixel designs divide each pixel into a left and right side, and the separation helps the phone judge the distance to the subject. That's crucial for one important new feature, portrait mode, which blurs backgrounds similar to how a higher-end SLR camera works.

Apple uses two lenses for its portrait mode, introduced a year ago with the iPhone 7 Plus and refined this year with the iPhone 8 Plus and the forthcoming iPhone X. The two lenses are separated by about a centimeter. Combining the data yields distance information the same way your brain can if you shift your head from side to side just a little bit.

Google's dual-pixel approach needs only a single camera, but the separation of the two views is only about a millimeter. That's still enough to be useful, Levoy said, especially because Google gets a boost from AI technology that predicts what's a human face. It also can judge depth better because the Pixel's HDR+ images are relatively free of the noise speckles that degrade 3D scene analysis, he added.
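A little back-of-the-envelope stereo geometry shows what that millimeter buys. The focal length and pixel size in this Python sketch are illustrative guesses, not Pixel specifications; only the standard disparity relation is doing the work.

```python
# Rough stereo math: how many pixels a subject shifts between the two views
# for a ~1 cm dual-camera baseline versus a ~1 mm dual-pixel baseline.
# Focal length and pixel pitch are illustrative assumptions, not Pixel specs.
focal_length_mm = 4.0        # assumed lens focal length
pixel_pitch_mm = 0.0014      # assumed pixel size (1.4 microns)

def disparity_pixels(baseline_mm, depth_mm):
    """Standard relation: disparity = focal_length * baseline / depth."""
    return (focal_length_mm * baseline_mm / depth_mm) / pixel_pitch_mm

for baseline_mm, label in [(10.0, "dual camera, ~1 cm"), (1.0, "dual pixel, ~1 mm")]:
    for depth_mm in (500, 1000, 2000):   # subject at 0.5 m, 1 m and 2 m
        d = disparity_pixels(baseline_mm, depth_mm)
        print(f"{label}: subject at {depth_mm / 1000:.1f} m -> {d:.1f} pixels of shift")
```

With these assumed numbers, the dual-pixel split produces only a few pixels of shift at portrait distances, which is why the clean, low-noise HDR+ frames and the face-predicting AI matter so much to making a millimeter of separation usable.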

Portrait mode smarts

Google's machine-learning smarts mean it can offer a portrait mode with the front camera, too. There, it's based only on machine learning. Without the distance information, the Pixel 2 front camera can't blur elements of the scene more when they're farther away, a refined touch you might not miss for quick selfies but that matters in some other types of photography.
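For illustration, the segmentation-only approach boils down to something like this Python sketch, where get_person_mask() is a hypothetical stand-in for the machine-learning model: cut out the person, blur everything else by one fixed amount and composite the two. A depth map is what would let the blur grow with distance instead.

```python
# Segmentation-only portrait blur, as on the front camera: no depth map, so
# everything outside the person mask gets the same amount of blur.
# get_person_mask() below is a hypothetical stand-in for the ML segmentation model.
import numpy as np
from scipy.ndimage import gaussian_filter

def portrait_blur(image, person_mask, blur_sigma=8.0):
    """Composite a sharp subject over a uniformly blurred background."""
    blurred = gaussian_filter(image.astype(float), sigma=(blur_sigma, blur_sigma, 0))
    mask = person_mask[..., None].astype(float)            # 1.0 on the person, 0.0 elsewhere
    return (image * mask + blurred * (1 - mask)).astype(np.uint8)

# Hypothetical usage:
# image = load_selfie()                   # H x W x 3 uint8 array
# person_mask = get_person_mask(image)    # H x W array of 0s and 1s from the model
# result = portrait_blur(image, person_mask)
```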

Machine learning has its limits, though. Google's training data has improved, which helps with the real-world results, but you can't train a neural network for every possible situation. For example, the Pixel 2 technology misjudged where to place focus in one unusual scene, Levoy said.

Google's Pixel 2 portrait mode works on a dog, even though it can't use machine learning that recognizes human faces. At left, portrait mode is on.

Stephen Shankland/CNET

"If it hasn't seen an example of a person kissing a crocodile, it might not recognize the crocodile is part of the foreground," he said.

The Pixel 2 also includes a custom-designed Google chip called the Pixel Visual Core. But here's a curiosity: Google doesn't actually use the chip for its own image processing -- at least yet. "We wanted to put it in so Pixel 2 will keep getting better," spokeswoman Emily Clarke said. One way it'll get better is by letting other developers besides Google take photos with HDR+ quality, the company said. That change will come through a software update in coming months.

For now, you'll have to be satisfied with moving ahead of last year's phone. The Pixel 2 doesn't match everything you can do with a bulky SLR, but it's a few steps closer for many photographers.
