Google CEO Sundar Pichai on Wednesday announced a new initiative called Lens that uses computer vision and image recognition to help you with everything from editing photos to identifying a flower, simply by taking a picture.
"Google Lens is a set of vision-based computing capabilities that can understand what you're looking at and help you take action based on that information," Pichai said at the Google I/O developers' conference. "We will ship first in Google Photos and Assistant and it will come to other products."
Pichai explained that using computer vision, you will be able to remove noise from low-light photos as well as remove an obstruction between you and your subject. He showed, for example, how you can edit out the fence you were forced to shoot through at your child's baseball game.
Using Google Lens in Assistant, you'll be able to point your phone's camera at a flower and have it automatically identified for you. Pichai showed how Lens could also join a wireless network automatically when you point your camera at the network name and password label on a router. In another example, he pulled up information about a restaurant simply by aiming the camera at its storefront.
Google Lens will be rolled out in Assistant and Photos later this year.