My name is Timothy Knight; I'm an Engineering Director at Google on the Android camera team.
Okay, so first off the fundamentals.
So image quality, video quality, speed, shutter latency, time to open.
We've doubled down on all of those, right? So everywhere we're better than last year: better photos, better videos.
On top of that we added some new features.
So we added OIS for even crisper photos and more stable videos.
We added portrait mode.
We added motion photos.
We added face retouching. I think the experience has really evolved.
So the technique we use is: we capture a burst of photos, and then we combine them together in software to make a really [INAUDIBLE] grain, top-quality final photograph.
So with OIS, every single frame in that burst is sharper and cleaner.
So the final result is even sharper and cleaner than before.
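The burst technique described above can be sketched as simple frame averaging. This is a hypothetical illustration, not Google's actual pipeline (which also aligns frames and handles motion); the function name `merge_burst` and the noise model are assumptions for the sketch:

```python
import numpy as np

def merge_burst(frames):
    """Average a burst of aligned frames to reduce noise.

    Averaging N frames with independent noise cuts the noise
    standard deviation by roughly a factor of sqrt(N).
    """
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# Simulate a burst: the same scene plus random sensor noise per frame.
rng = np.random.default_rng(0)
scene = np.full((4, 4), 100.0)
burst = [scene + rng.normal(0, 10, scene.shape) for _ in range(16)]

merged = merge_burst(burst)
single_noise = np.std(burst[0] - scene)   # noise in one raw frame
merged_noise = np.std(merged - scene)     # noise after merging 16 frames
print(merged_noise < single_noise)        # merged frame is cleaner
```

With 16 frames, the residual noise drops to roughly a quarter of a single frame's, which is why each frame being sharper (thanks to OIS) compounds into an even cleaner final result.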
In video mode, a problem if you don't have OIS is that if there's motion blur within a frame, you get a little bit of a bubbly, jiggly look to the video. So by running optical stabilization during video recording, too, it's even smoother.
We get rid of that motion shake, and it's a more stable feel.
We're capturing much darker versions of the scene, where the highlights are not blown out and the sky is still blue.
And then we do some very sophisticated noise reduction by combining frames together.
And then, in the final rendition, we were able to preserve the highlights and the blue skies.
And also in the dark areas, we see detail in the shadows.
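The exposure strategy described here can be sketched in a few lines: capture darker so the sky never clips, then lift the shadows in software. This is a toy model under assumed numbers; the function names and the simple gamma tone curve are illustrative, not the actual tone-mapping used:

```python
import numpy as np

def capture_underexposed(scene_radiance, exposure=0.25, full_well=1.0):
    # A shorter exposure keeps bright regions below the clipping point.
    return np.clip(scene_radiance * exposure, 0.0, full_well)

def tone_map(image, gamma=0.5):
    # Gamma < 1 lifts shadows much more than highlights.
    return image ** gamma

sky = 3.0      # very bright region (would clip at normal exposure)
shadow = 0.04  # dark region
scene = np.array([sky, shadow])

normal = np.clip(scene * 1.0, 0.0, 1.0)  # sky clips to 1.0: blown out
dark = capture_underexposed(scene)       # sky records as 0.75: preserved
result = tone_map(dark)                  # shadows brightened in post
print(normal[0])  # 1.0, the highlight is lost
print(dark[0])    # 0.75, the highlight survives
```

Combined with the multi-frame noise reduction above, the lifted shadows stay clean rather than grainy, which is what makes deliberate underexposure viable.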
So, there are actually two techniques. The first technique is machine learning: by training a model on something like a million images, a lot of images, we're able to understand foreground-background segmentation on both the front and rear cameras, and do a really nice background blur.
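Once a segmentation model has produced a foreground mask, the background blur itself is simple compositing. This is a minimal sketch assuming a binary mask and a box blur stand-in for the real lens-like blur; the names `box_blur` and `portrait` are made up for illustration:

```python
import numpy as np

def box_blur(img, k=3):
    """Naive k-by-k box blur via summed shifted copies."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def portrait(img, fg_mask):
    # Keep foreground pixels sharp; replace background with the blur.
    blurred = box_blur(img)
    return np.where(fg_mask, img, blurred)

# A single bright foreground pixel on a dark background.
img = np.zeros((5, 5)); img[2, 2] = 9.0
mask = np.zeros((5, 5), dtype=bool); mask[2, 2] = True
out = portrait(img, mask)
print(out[2, 2])  # foreground stays sharp: 9.0
```

The quality of the effect lives almost entirely in the mask and the blur kernel; the compositing step is this simple.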
Additionally, for the rear camera, we have a special sensor technology referred to as dual pixel, where every single pixel has both a left and a right half.
And conceptually, it lets you have two slightly different viewpoints of the scene, as if you moved your head a little bit left to right. That is enough to actually give you a depth map of the scene, and combining that with the machine learning model, you can get an even more accurate portrait photo, as well as portraits of things that aren't people.
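The dual-pixel idea above amounts to treating the left and right half-pixel images as a tiny stereo pair: the shift (disparity) between them relates to depth. A minimal 1-D sketch of recovering that shift by brute-force matching, with a made-up function name and toy signals:

```python
import numpy as np

def disparity_1d(left, right, max_shift=3):
    """Find the integer shift that best aligns two 1-D signals."""
    best_shift, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(right, s)
        err = np.sum((left - shifted) ** 2)
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift

# A foreground feature shifted by 2 samples between the two sub-images.
left = np.array([0, 0, 1, 5, 1, 0, 0, 0], dtype=float)
right = np.roll(left, -2)  # the other half-pixel view sees it shifted
print(disparity_1d(left, right))  # recovers the shift: 2
```

Real dual-pixel baselines are tiny, about a millimeter, so the shifts are sub-pixel and noisy; that is why the depth map is combined with the learned segmentation rather than used alone.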
There are a lot of dual cameras on the market, you know, they may not all be that great [LAUGHS].
You know, a dual camera brings a lot of trade-offs with it. For example, it takes more space, so maybe the battery is smaller. And often the second camera is not so good in low light, and for us low light is super important.
It has smaller pixels and a narrower aperture.
But in the end, I think the image quality, the video quality, the capabilities we could bring to the table were really, like I said, a single-camera experience.
And I believe that we met that goal.
Best photos in the world, best videos in the world.
Fastest capture time, fastest open time.