A background-blurring portrait mode arrived last year in Google's Pixel 2 smartphone, but researchers used a hacked-together clump of five phones to improve how the feature worked in this year's Pixel 3.
The portrait mode simulates the shallow depth of field of higher-end cameras and lenses that concentrate your attention on the subject while turning the background into an undistracting blur. It's tricky to do that simulation well, though, and mistakes can stick out. For the Pixel 3, Google used artificial intelligence technology to fix the glitches.
To get it to work, though, Google needed some photos to train its AI. Enter the quintet of phones sandwiched together to all take the same photo from slightly different perspectives. Those slight differences in perspective let computers judge how far away each part of a scene is from the cameras and generate a "depth map" used to figure out which background material to blur.
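The geometry behind that depth map is simple triangulation: under the standard pinhole stereo model, a point's distance is inversely proportional to its disparity, the pixel shift of that point between two views. A minimal sketch, with made-up focal-length and baseline numbers purely for illustration:

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Triangulate depth from stereo disparity.

    disparity_px: horizontal shift (in pixels) of a scene point
        between two cameras separated by baseline_m meters.
    """
    disparity_px = np.asarray(disparity_px, dtype=float)
    # Pinhole-stereo relation: nearer objects shift more between views.
    return focal_length_px * baseline_m / disparity_px

# Hypothetical numbers: 3,000 px focal length, 1 cm camera baseline.
depths = depth_from_disparity([30.0, 10.0, 3.0], 3000.0, 0.01)
# A 30 px shift puts a point at 1 m; a 3 px shift puts it at 10 m.
```

The Frankenphone's five viewpoints give many such pairs at once, in both horizontal and vertical directions, which is what makes its depth maps good enough to serve as training data.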
"We built our own custom 'Frankenphone' rig that contains five Pixel 3 phones, along with a Wi-Fi-based solution that allowed us to simultaneously capture pictures from all of the phones," researcher Rahul Garg and programmer Neal Wadhwa said in a Google blog post Thursday.
The technique shows how profoundly new image-processing software and hardware are changing photography. Smartphones have small image sensors that can't compete with traditional cameras for image quality, but Google is ahead of the pack with computational photography methods that can do things like blur backgrounds, increase resolution, tweak exposure, improve shadow details and shoot photos in the dark.
So where does the Frankenphone come into it all? As a way to give a view of the world more like what we see with our own eyes.
Humans can judge depth because we have two eyes separated by a short distance. That means they see slightly different scenes -- a difference called parallax. With its iPhone 7 two years ago, Apple took advantage of parallax between its two rear-facing cameras for its first crack at portrait mode.
Google's Pixel 2 and Pixel 3 only have single rear-facing cameras, but each pixel in a photo from the phones is actually created by two light detectors, one on the left half of a pixel site and one on the right. The left-side view is slightly different from the right-side view, and that parallax is enough to judge some depth information.
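The same triangulation relation suggests why this signal is so faint: the effective "baseline" between the two halves of a dual-pixel sensor is roughly the width of the lens aperture, on the order of a millimeter (an assumed figure for illustration), so even a nearby subject shifts by only a pixel or two between the left- and right-half views:

```python
def disparity_from_depth(depth_m, focal_length_px, baseline_m):
    # Inverse of the pinhole-stereo relation: shift shrinks with distance.
    return focal_length_px * baseline_m / depth_m

# Assumed dual-pixel baseline of ~1 mm, subject 2 m from the camera.
shift = disparity_from_depth(2.0, 3000.0, 0.001)
# -> about 1.5 px of left/right parallax to work with
```

Compare that with the centimeters of separation between two rear cameras, or between human eyes, and the need for extra cues becomes clear.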
But not without problems, Google said. For example, it can judge only left-right parallax in a scene, not up-down parallax. So Google gives the Pixel 3 a leg up with AI.
The AI is good at adding other information into the mix -- for example, slight differences in focus, or an awareness that a cat in the distance is smaller than one that's close up. The way AI works today, though, a model must be trained on real-world data. In this case, that meant taking quintets of photos from the Frankenphones with the left-right and up-down parallax information.
The trained AI, combined with data from another AI system that detects humans in photos, produces the Pixel's better portrait-mode abilities.
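One rough way to picture how those ingredients combine: keep pixels sharp when they sit near the focus distance or fall inside the person-segmentation mask, and take everything else from a blurred copy of the frame. This is an illustrative composite, not Google's actual pipeline; the crude `box_blur` here stands in for a real lens-blur simulation.

```python
import numpy as np

def box_blur(image, radius):
    # Crude separable box blur (with wraparound), enough for a sketch.
    out = image.astype(float)
    for axis in (0, 1):
        acc = np.zeros_like(out)
        for shift in range(-radius, radius + 1):
            acc += np.roll(out, shift, axis=axis)
        out = acc / (2 * radius + 1)
    return out

def portrait_composite(image, depth, person_mask, focus_depth, tol=0.3):
    """Blend a sharp foreground with a blurred background.

    Pixels near the focus depth, or covered by the person mask,
    stay sharp; all other pixels come from the blurred copy.
    """
    blurred = box_blur(image, radius=2)
    keep_sharp = person_mask | (np.abs(depth - focus_depth) < tol)
    return np.where(keep_sharp, image, blurred)
```

In a real pipeline the blur strength would also grow with distance from the focal plane rather than being all-or-nothing, which is part of what makes a convincing depth-of-field effect hard.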