
iPhone 11 Pro's new Deep Fusion feature is coming to boost your photos, take on Google

Apple's hardware and software prowess is on full display in the upcoming feature.

Eli Blumenthal, Senior Editor
Apple's Deep Fusion on the iPhone 11 Pro takes advantage of machine learning to improve photos. (Apple/Screenshot by Stephen Shankland/CNET)

Apple's three-camera system on the new iPhone 11 Pro and Pro Max, announced Tuesday, is set to showcase the latest of what Apple can offer in mobile photography. Combining improved sensors, a new ultra-wide lens and the company's A13 Bionic chip, the phones look to bring a number of improvements over last year's iPhone XS and XS Max.

A new night mode and an improved portrait mode are two of the highlights available when the phones go on sale on Sept. 20, but Apple also teased a new feature coming in the fall that seems poised to take on Google's impressive artificial intelligence-based photography. 

Called Deep Fusion, the new software feature takes advantage of Apple's progress in machine learning to help people take better photos. Like Google's camera on the Pixel, it uses machine learning to better decipher images and produce better-looking shots. 


On stage at Apple's launch event, Phil Schiller, the company's senior vice president of worldwide marketing, called the mode "computational photography mad science." He touted how the system captures four short-exposure and four long-exposure photos before you even press the shutter button, then takes one longer exposure shot once you do. 

All nine images are then combined in a second to produce the best possible image that has the least amount of noise and the sharpest details. The company says it is using machine learning to do "pixel-by-pixel processing of photos, optimizing for texture, details and noise in every part of the photo." 
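Apple hasn't detailed how that pixel-by-pixel processing actually works, but the general idea of fusing a burst of exposures can be sketched in a few lines of Python. Everything below, from the fuse_burst function to the gradient-based detail weighting, is an illustrative assumption for readers who want intuition, not Apple's actual pipeline:

# Toy sketch of multi-frame exposure fusion, loosely inspired by the
# Deep Fusion description above. Function names and weighting choices
# are illustrative assumptions, not Apple's real algorithm.
import numpy as np

def fuse_burst(short_frames, long_frame, detail_weight=0.7):
    """Fuse several short exposures with one long exposure.

    short_frames: list of HxW float arrays (noisy but sharp frames)
    long_frame:   HxW float array (cleaner, but prone to motion blur)
    """
    # Averaging the short exposures suppresses random sensor noise
    # (noise falls roughly with the square root of the frame count).
    short_avg = np.mean(short_frames, axis=0)

    # Estimate local detail with a simple gradient magnitude; a real
    # system would use far richer per-pixel cues than this.
    gy, gx = np.gradient(short_avg)
    detail = np.sqrt(gx**2 + gy**2)
    detail = detail / (detail.max() + 1e-8)

    # Per-pixel blend: lean on the sharp short-exposure stack where
    # detail is high, and on the low-noise long exposure elsewhere.
    w = detail_weight * detail
    return w * short_avg + (1.0 - w) * long_frame

# Hypothetical usage with synthetic data standing in for a real burst.
rng = np.random.default_rng(0)
scene = rng.random((64, 64))
shorts = [scene + rng.normal(0, 0.1, scene.shape) for _ in range(4)]
long_exp = scene + rng.normal(0, 0.02, scene.shape)
fused = fuse_burst(shorts, long_exp)

Even this toy version shows why burst fusion pays off: the averaged short frames keep edges crisp while the long exposure fills in smooth, low-noise regions, which is broadly the trade-off Schiller's "texture, details and noise" pitch describes.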

Google, of course, has been taking advantage of artificial intelligence in its Pixel line for years, and that computing prowess has helped make the cameras on the Pixel 3 and 3A arguably the best on a phone.

With Google seemingly set to unveil the Pixel 4 in the near future, it remains to be seen how the search giant counters Apple's latest move into its area of strength. But with Google already teasing the Pixel 4's improved camera capabilities, it seems a new battle is brewing between the two heavyweights. 
