Google CEO Sundar Pichai on Wednesday announced a new initiative called Lens that uses computer vision and image recognition to help you with everything from editing photos to identifying a flower, simply by taking a picture.
"Google Lens is a set of vision-based computing capabilities that can understand what you're looking at and help you take action based on that information," Pichai said at the Google I/O developers' conference. "We will ship first in Google Photos and Assistant and it will come to other products."
Pichai explained that using computer vision, you will be able to remove noise from low-light photos as well as remove an obstruction between you and your subject. He showed, for example, how you can edit out the fence you were forced to shoot through at your child's baseball game.
Using Google Lens in Assistant, you'll be able to point your phone's camera at a flower and have it automatically identified for you. Pichai showed how Lens could also be used to automatically join a wireless network simply by pointing your camera at the network name and password label on a router. In another example, he used Lens to quickly pull up information about a restaurant, again by pointing the camera at the restaurant itself.
Google Lens will be rolled out in Assistant and Photos later this year.