
iPhone 11 Pro's new Deep Fusion feature is coming to boost your photos, take on Google

Apple's hardware and software prowess is on full display in the upcoming feature.


Apple's Deep Fusion on the iPhone 11 Pro takes advantage of machine learning to improve photos. 

Apple/Screenshot by Stephen Shankland/CNET

Apple's three-camera system on the new iPhone 11 Pro and Pro Max, announced Tuesday, is set to showcase the latest in what Apple can offer for mobile photography. Combining improved sensors, a new ultra-wide lens and the company's A13 Bionic chip, the phones look to bring a number of improvements over last year's iPhone XS and XS Max.

A new night mode and an improved portrait mode are two of the highlights available when the phones go on sale on Sept. 20, but Apple also teased a new feature coming in the fall that seems poised to take on Google's impressive artificial intelligence-based photography. 

Called Deep Fusion, the new software feature draws on Apple's progress in machine learning to help people take better photos. Like the camera on Google's Pixel, it uses machine learning to better decipher a scene and produce sharper, cleaner-looking shots.


On stage at Apple's launch event, Phil Schiller, Apple's senior vice president of worldwide marketing, called the mode "computational photography mad science." He explained that the system begins capturing four short-exposure and four secondary photos before you even press the shutter button, then takes one longer-exposure shot once you do.

All nine images are then combined in a second to produce the best possible image that has the least amount of noise and the sharpest details. The company says it is using machine learning to do "pixel-by-pixel processing of photos, optimizing for texture, details and noise in every part of the photo." 
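The core idea behind merging several exposures can be illustrated with a toy sketch. This is not Apple's actual algorithm (Deep Fusion's pixel-by-pixel machine-learning processing is far more sophisticated, and the scene and noise model below are invented for illustration); it only shows why averaging co-located pixels across multiple frames suppresses random sensor noise while preserving the underlying detail:

```python
def capture_frames(scene, offsets):
    """Simulate one noisy exposure of the same scene per offset value.

    `scene` is a flat list of true pixel values; each frame is the scene
    shifted by a per-frame noise offset (a stand-in for sensor noise).
    """
    return [[pixel + offset for pixel in scene] for offset in offsets]

def fuse(frames):
    """Average co-located pixels across frames.

    Random noise tends to cancel out across exposures, so the fused
    image is closer to the true scene than any single frame.
    """
    n = len(frames)
    return [sum(pixels) / n for pixels in zip(*frames)]

# A tiny 3-pixel "scene" and nine simulated exposures whose noise
# offsets sum to zero, so fusion recovers the scene almost exactly.
scene = [0.2, 0.5, 0.8]
offsets = [0.08, -0.08, 0.04, -0.04, 0.02, -0.02, 0.01, -0.01, 0.0]
frames = capture_frames(scene, offsets)
fused = fuse(frames)

for fused_px, true_px in zip(fused, scene):
    print(round(fused_px, 6), round(true_px, 6))
```

In this simplified model the nine frames' errors cancel when averaged; a real pipeline must also align the frames and weight them per pixel, which is where the machine learning comes in.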

Google, of course, has taken advantage of artificial intelligence in its Pixel line for years, and that computing prowess has helped make the cameras on the Pixel 3 and 3A arguably the best on any phone.

With Google seemingly set to unveil the Pixel 4 in the near future, it remains to be seen how the search giant counters Apple's latest move into its area of strength. But with Google already teasing the Pixel 4's improved camera capabilities, it seems a new battle is brewing between the two heavyweights.