Apple's three-camera system on the new iPhone 11 Pro and 11 Pro Max, announced Tuesday, is set to showcase the latest of what Apple can offer in mobile photography. Combining improved sensors, a new ultra-wide lens and the company's A13 Bionic chip, the phones look to bring a number of improvements over last year's iPhone XS and XS Max.
A new night mode and an improved portrait mode are two of the highlights available when the phones launch, but Apple also teased a new feature coming in the fall that seems poised to take on Google's impressive artificial intelligence-based photography.
Called Deep Fusion, the new software feature draws on Apple's progress in machine learning to help people take better photos. Like Google's camera on the Pixel, it uses that processing to better interpret images and produce better-looking shots.
On stage at the event, Apple's Senior Vice President of Worldwide Marketing Phil Schiller called the mode "computational photography mad science." He explained that the system begins capturing four long-exposure and four short-exposure photos before you press the shutter button, then takes a longer exposure once you do press the button.
All nine images are then combined in about a second to produce the best possible image, with the least noise and the sharpest detail. The company says it is using machine learning to do "pixel-by-pixel processing of photos, optimizing for texture, details and noise in every part of the photo."
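Apple hasn't published how Deep Fusion's pixel-by-pixel processing actually works, but the general idea it describes, blending a burst of sharp-but-noisy short exposures with one cleaner long exposure and favoring whichever source carries more detail at each pixel, can be sketched in a few lines. The frame counts below mirror the description above; everything else (the `fuse_frames` helper, the Laplacian detail measure, the blend weights) is an illustrative assumption, not Apple's method.

```python
# Toy sketch of multi-frame fusion in the spirit of Deep Fusion's
# description: short exposures are sharp but noisy, the long exposure
# is clean but blur-prone, and each pixel leans toward whichever
# source shows more local detail. Apple's real pipeline is proprietary;
# this weighting scheme is purely illustrative.
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def fuse_frames(short_frames, long_frame, detail_window=5):
    """Blend short exposures with a long exposure, pixel by pixel.

    short_frames: list of HxW float arrays (sharp, noisy captures)
    long_frame:   HxW float array (low-noise, possibly blurred capture)
    """
    # Averaging the burst suppresses noise while keeping detail that
    # is consistent across the short frames.
    short_avg = np.mean(short_frames, axis=0)

    # Estimate local detail via the Laplacian magnitude, smoothed over
    # a small window so the blend weights vary gradually.
    detail_short = uniform_filter(np.abs(laplace(short_avg)), detail_window)
    detail_long = uniform_filter(np.abs(laplace(long_frame)), detail_window)

    # Per-pixel weight: favor the short burst where it carries more
    # texture, and the long exposure in flat, noise-dominated regions.
    w = detail_short / (detail_short + detail_long + 1e-8)
    return w * short_avg + (1.0 - w) * long_frame

# Demo on synthetic data: eight noisy "pre-shutter" frames plus one
# cleaner frame, echoing the nine-image count Schiller described.
rng = np.random.default_rng(0)
scene = rng.random((64, 64))
shorts = [scene + rng.normal(0, 0.1, scene.shape) for _ in range(8)]
long_exp = uniform_filter(scene, 3)  # slightly blurred stand-in
result = fuse_frames(shorts, long_exp)
```

A production pipeline would also need to align the frames before merging, since the phone moves between captures; the sketch skips that step to keep the fusion idea legible.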
Google, of course, has been taking advantage of artificial intelligence in its Pixel line for years, and that computing prowess has helped make the cameras on the Pixel 3 and 3a among the best on any smartphone.
With Google seemingly set to unveil the Pixel 4 this fall, it remains to be seen how the search giant counters Apple's latest move into its area of strength. But with the company already teasing the Pixel 4's improved camera capabilities, it seems a new battle is brewing between the two heavyweights.