Today we are announcing a new initiative called Google Lens.
Google Lens is a set of vision-based computing capabilities that can understand what you're looking at and help you take action based on that information.
We'll ship it first in Google Assistant and Photos, and then it'll come to other products.
So how does it work?
So for example, if you run into something and want to know what it is, say a flower, you can invoke Google Lens from your Assistant, point your phone at it, and we can tell you what flower it is.
It's great for someone like me with allergies.
Or if you've ever been at a friend's place and crawled under a desk just to get the username and password from a Wi-Fi router, you can point your phone at it and it can automatically do the hard work for you.
Or if you're walking down a street downtown and you see a set of restaurants across from you, you can point your phone at them; because we know where you are, and we have our Knowledge Graph, and we know what you're looking at, we can give you the right information in a meaningful way.
As you can see, we are beginning to understand images and videos.
All of Google was built because we started understanding text and web pages.
The fact that computers can understand images and videos has profound implications for our core mission.