Apple has introduced the first truly major update to its iPhone camera since the original model -- one that promises gains in photo quality beyond the incremental annual changes and ancillary updates such as flash and image stabilization. The iPhone 7 Plus brings new capabilities -- some more accurately described than others -- which you'll need to sort through. Here's what they mean.
The two-camera-module implementations are a form of computational photography, which is using in-device algorithms to do what cost and size constraints prevent from being done with a single sensor and lens. It's been used as far back as the introduction of automatic panorama stitching, and includes popular multi-shot capabilities like auto HDR. So dual-camera systems are just another step in a long line of updates that, taken to its throw-more-cameras-at-the-problem extreme, looks something like Light's 16-camera-module approach.
Apple's dual-camera implementation doesn't look like golden awesomeness, but it does seem pretty good; the only computational aspect of the system is the background defocus it will be able to perform once the software's updated (see depth of field below). The phone's got one 12-megapixel camera with a 28mm f1.8 lens and a 12-megapixel camera with a 56mm f2.8 lens.
That "telephoto" lens
Apple refers to the camera with the 56mm lens as "telephoto." It's not; it's simply twice the magnification of the wide camera. A 56mm lens has a "normal" angle of view; around 70mm or longer is considered telephoto. However, 56mm is a good length for portraits and other scenes where you don't want the distortion and shrinking of the subject that you get with the typical wide-angle phone camera lens.
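If you want to see why 56mm counts as "normal" rather than telephoto, you can work out the angle of view from the full-frame-equivalent focal length. This is a quick sketch of that standard formula (the 43.27mm figure is the diagonal of a full 35mm frame, which is what equivalent focal lengths are quoted against):

```python
import math

def diagonal_angle_of_view(focal_length_mm, sensor_diagonal_mm=43.27):
    """Diagonal angle of view (degrees) for a full-frame-equivalent focal length."""
    return math.degrees(2 * math.atan(sensor_diagonal_mm / (2 * focal_length_mm)))

for f in (28, 56, 70):
    print(f"{f}mm: {diagonal_angle_of_view(f):.1f} degrees")
```

That gives roughly 75 degrees for the 28mm wide camera, about 42 degrees for the 56mm camera -- squarely in "normal" territory, close to a classic 50mm -- and about 34 degrees at 70mm, where telephoto begins.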
Apple is a bit confusing on this point; it doesn't claim "2x optical zoom" -- the specs are worded "optical zoom at 2x," a subtlety lost on many people. I suppose technically the system could be construed as optical zoom: you have a lens for 28mm and a lens for 56mm, so you're getting two different magnifications using lenses. (And the LG G5 got here first.) But "zoom" implies you can get from one to the other with stops in between; the only reason "zoom" may make sense in this context is because 56mm is the next step up from 28mm.
If the second camera had a 70mm lens, for instance, the jump from 28mm to 70mm couldn't plausibly be called optical zoom. In practice, the system is bi-focal-length. Multiple-camera systems can sometimes zoom between the two focal lengths computationally, but Apple simply switches from one camera to the other with a tap and calls them 1x and 2x. The Hasselblad True Zoom Moto Mod for the Moto Z, by contrast, is a true optical zoom solution. Past 2x the phone does up to 10x digital zoom, and you might get slightly better results than you do now with a wide-angle lens because you're starting with optical magnification on the 56mm camera.
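The advantage of starting digital zoom from the 56mm camera is easy to quantify: digital zoom is just a center crop plus upscaling, so the pixels you keep shrink with the square of the extra magnification. A rough sketch (the 280mm target is just an illustrative 10x framing, not an Apple spec):

```python
def digital_crop_megapixels(sensor_mp, optical_focal_mm, target_focal_mm):
    """Megapixels remaining after cropping to simulate a longer focal length.

    Digital zoom center-crops and upscales, keeping only
    (optical / target)^2 of the original pixels.
    """
    factor = optical_focal_mm / target_focal_mm
    return sensor_mp * factor ** 2

# Simulating a 280mm-equivalent (10x) field of view:
print(digital_crop_megapixels(12, 28, 280))   # from the 28mm camera
print(digital_crop_megapixels(12, 56, 280))   # from the 56mm camera
```

Cropping from the 28mm camera leaves about 0.12 megapixel of real data to upscale; starting from the 56mm camera leaves four times as much, about 0.48 megapixel, for the same framing.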
In the absence of other technical information, the number of lens elements tells you nothing about the quality or performance of a lens. It's a meaningless specification in this context (though not in others).
Raw image support
The JPEG photos you're used to getting from an iPhone are automatically compressed and processed, which decreases the number of colors in the photos and clips the bright and dark areas. That makes them hard to retouch without exacerbating the imperfections (called artifacts).
Raw image data comes straight from the sensor -- or at least is minimally processed -- so in theory you can edit it yourself without making the artifacts worse. The reality is that with photos off such a small sensor, or even a pair of small sensors, you can't gain much by editing to improve exposure or reduce noise to your taste rather than the company's. You do get access to the uncompressed colors, but even then the sensors aren't capturing the complete range, because they're tiny.
There's just too much sensor noise and not enough tonal range for you to get better results than in-camera processing, except in a limited number of situations. However, access to the raw files means third-party photo-app developers can access the data so they can deliver better JPEGs and give you control over settings that either they didn't have before or that made photos look worse than the stock camera app's.
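The tonal-range part of this comes down to bit depth. JPEG stores 8 bits per color channel no matter what the sensor delivered; raw files keep whatever the sensor read out. A toy calculation (the 10-bit figure is a hypothetical readout depth for illustration -- Apple hasn't published the sensor's actual bit depth):

```python
def tonal_levels(bits_per_channel):
    """Distinct brightness levels per color channel at a given bit depth."""
    return 2 ** bits_per_channel

jpeg_levels = tonal_levels(8)    # JPEG is fixed at 8 bits per channel
raw_levels = tonal_levels(10)    # hypothetical 10-bit sensor readout
print(jpeg_levels, raw_levels)   # 256 vs. 1024 levels per channel
```

Those extra levels are what give a raw editor room to pull up shadows or recover highlights before banding appears -- provided the tiny sensor actually captured usable data there, which is exactly the limitation described above.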
Apple highlighted Adobe Photoshop Lightroom raw editing on the new phone; now it can have feature parity with the Android version. And since the raw files use the semi-standard DNG format, they're readable by tons of apps and applications on the desktop and other mobile platforms.
"Wide color capture"
I'm not quite sure what this means in practice. Apple has a programming interface that lets app developers perform "wide color capture," so presumably they get access to more bits of color data so the gamut doesn't get compressed.
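My best guess is that this is tied to the Display P3 gamut Apple's new screens support, which covers noticeably more of the visible spectrum than the sRGB gamut JPEGs normally target. As a rough illustration -- comparing triangle areas in CIE 1931 xy chromaticity space, which is not perceptually uniform, so treat the ratio as indicative only:

```python
def triangle_area(p1, p2, p3):
    """Shoelace formula for the area of a gamut triangle in CIE xy space."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

# CIE 1931 xy chromaticities of the red, green and blue primaries
srgb = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]
p3 = [(0.680, 0.320), (0.265, 0.690), (0.150, 0.060)]

area_srgb = triangle_area(*srgb)
area_p3 = triangle_area(*p3)
print(f"Display P3 covers {area_p3 / area_srgb:.2f}x the xy area of sRGB")
```

P3 mainly extends the reds and greens, which is where a "wide color capture" pipeline would avoid clipping saturated colors that sRGB can't represent.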
Shallow depth of field
This is a variation on a feature found in some mirrorless cameras. It simulates a defocused background behind a sharp subject by using the second camera (or a second shot, in the case of real cameras) to capture information about where things are in the scene relative to the subject -- a depth map. The device then algorithmically isolates the subject and blurs out everything that's not-subject. And because the blur is algorithmic rather than optical, it's easier to produce round out-of-focus highlights and smooth defocused areas (together referred to as bokeh).
Computational depth of field looks different from depth of field produced optically: optical defocus occurs when elements of the scene don't share the subject's focal plane, which means elements that do share that plane stay sharp even when you don't want them to (among other things). You can sometimes get better results computationally. Apple's initial implementation looks limited, though: it relies on a dedicated portrait mode and can only produce the effect in scenes with people, because it's based on the company's face- and body-detection algorithms.
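The core idea is straightforward even if Apple's actual pipeline is far more sophisticated. Here's a minimal, hypothetical sketch -- tiny 2D lists standing in for an image and its depth map, and a crude 3x3 box blur standing in for real bokeh rendering:

```python
def portrait_blur(image, depth, subject_depth, tolerance=1.0):
    """Keep pixels near the subject's depth sharp; blur everything else.

    `image` and `depth` are equal-sized 2D lists of numbers (a toy
    grayscale image and its per-pixel depth map).
    """
    h, w = len(image), len(image[0])

    def box_blur(r, c):
        # Average the 3x3 neighborhood, clipped at the image edges.
        neighbors = [image[i][j]
                     for i in range(max(0, r - 1), min(h, r + 2))
                     for j in range(max(0, c - 1), min(w, c + 2))]
        return sum(neighbors) / len(neighbors)

    return [[image[r][c] if abs(depth[r][c] - subject_depth) <= tolerance
             else box_blur(r, c)
             for c in range(w)] for r in range(h)]

# Subject on the left (depth 1), background on the right (depth 5)
image = [[10, 10, 200], [10, 10, 200], [10, 10, 200]]
depth = [[1, 1, 5], [1, 1, 5], [1, 1, 5]]
out = portrait_blur(image, depth, subject_depth=1)
```

The subject pixels pass through untouched while the background pixels get averaged with their neighbors. The hard part in practice -- and where the artifacts show up -- is the segmentation step: deciding exactly where subject ends and background begins, which is why Apple leans on its face- and body-detection algorithms.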
A new image signal processing engine (ISP)
Every new sensor incorporated into a camera requires a new image-processing engine, because each combination of sensor and lens (and flash and stabilization and so on) has different characteristics. The fact that it's new provides no information, and the list of features that it enables is far more important. In this case, it sounds like Apple refined some of its processing algorithms to deliver better results. As happens with almost every iteration of a camera.
...One more thing?
There's a lot of important stuff we still don't know. For instance, what size sensors are in the modules? It's possible the 56mm module uses a smaller sensor, because that's an easy way to get more magnification -- but that wouldn't be good.
In spite of its enthusiasm for the new camera, Apple was fairly cautious with its claims -- something I really appreciate. "What we are saying is this is the best camera we've ever made in a smartphone." That's indisputable.
We await the comparisons to see if it finally makes strides against competitors.