There were plenty of cool announcements at Google I/O, the company's annual developer conference. But the one that got us really excited is Google Lens. Lens is not a piece of hardware, but rather behind-the-scenes software that can recognize text and objects from a picture or the camera. It analyzes and contextualizes what it sees in real time and shares that info quickly.
It sounds pretty dry on paper (Google uses phrases like "machine learning," "vision-based computing" and "artificial intelligence" to describe it), but when it was demoed at the conference, it was actually pretty neat -- even garnering some "oohs" and "ahhs" from the audience.
Here are the three ways Google Lens works, as well as a few real-world scenarios where it may come in handy, and why we're excited about it.
With Google Lens, you can point your phone at an unknown object (say, a flower), and it will help identify what it is. In the example Google used, Lens identified a flower species named Milk and Wine Lily.
We're not too sure how extensive this feature is, but since Google Assistant can already identify monuments and landmarks from photos, we wouldn't be surprised if you could point your phone at a building and Google Lens could identify it as, say, the Eiffel Tower in Paris or Brandenburg Gate in Berlin.
In addition to flowers, it'd be great if it could identify birds for the amateur ornithologist or cars for the car enthusiast.
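Google hasn't said how Lens makes these calls, but identification features like this typically boil down to running an image classifier and keeping the highest-confidence label. Here's a minimal, hypothetical sketch in Python, with the vision model's output stubbed in as a list of (label, confidence) pairs -- the labels and scores below are made up for illustration:

```python
def best_label(predictions, threshold=0.5):
    """Return the most confident label, or None if nothing clears the bar.

    `predictions` is a list of (label, confidence) pairs, standing in
    for the output of a real vision model.
    """
    if not predictions:
        return None
    label, confidence = max(predictions, key=lambda p: p[1])
    return label if confidence >= threshold else None

# Hypothetical model output for a photo of a flower
predictions = [
    ("Milk and Wine Lily", 0.91),
    ("Amaryllis", 0.62),
    ("Tulip", 0.08),
]
print(best_label(predictions))  # Milk and Wine Lily
```

The confidence threshold matters in practice: it's usually better for an app like this to say nothing than to confidently misidentify your houseplant.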
You can sign into Wi-Fi without breaking your back
The biggest audience reaction Lens got at I/O was when it read the name and password of a Wi-Fi network off a router, then automatically signed the phone in and connected it to the network. The idea that Lens can carry out a multistep task (not to mention solve that familiar first-world problem of crawling under a desk and taking a picture of someone's 17-character-long password) is exciting and makes us wonder what else it could do.
Perhaps it could autoconnect to a Bluetooth accessory after you scan its product number, carry out a purchase after you scan a barcode, or add an event to your calendar after scanning a flyer.
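Google didn't explain Lens's internals, but that Wi-Fi trick plausibly decomposes into two steps: OCR the router's label, then hand the credentials to the platform's Wi-Fi setup. A rough sketch of that pipeline in Python -- the label format and field markers are assumptions, and the output uses the standard `WIFI:` string that Android and iOS already accept from Wi-Fi QR codes:

```python
import re

def parse_router_label(ocr_text):
    """Pull a network name and password out of OCR'd label text.

    Assumes the label uses common "SSID:"/"Password:"-style markers;
    real router labels vary widely.
    """
    ssid = re.search(r"(?:SSID|Network)\s*:\s*(\S+)", ocr_text, re.I)
    password = re.search(r"(?:Password|Key)\s*:\s*(\S+)", ocr_text, re.I)
    if not (ssid and password):
        return None
    return ssid.group(1), password.group(1)

def wifi_config_payload(ssid, password, auth="WPA"):
    """Build the standard WIFI: string used in Wi-Fi QR codes."""
    return f"WIFI:T:{auth};S:{ssid};P:{password};;"

# Hypothetical OCR output from a photo of a router's sticker
ocr_text = "SSID: CoffeeShopNet\nPassword: kLm93xQdR7tWz2a8p"
ssid, password = parse_router_label(ocr_text)
print(wifi_config_payload(ssid, password))
# WIFI:T:WPA;S:CoffeeShopNet;P:kLm93xQdR7tWz2a8p;;
```

The hard part Lens is actually solving -- reading messy, real-world text reliably -- is the OCR step, which is stubbed out here as a plain string.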
You can get the lowdown on nearby places
Lens can also scan the facade of a nearby business and call up info and reviews for that particular place. This makes a lot of sense given Google's expansive database of places, photos and streets. We wouldn't be surprised if Lens could identify text and signage on other things, too, much as Google Translate can already translate signs in different languages when you point your camera at them. It could also call up more info on things like wine labels (third-party apps like Vivino and Samsung's AI Bixby Vision already do this), food packaging and medicine labels.
We haven't tried Google Lens out for ourselves, but from what we've seen so far, it's pretty nifty and easy to use, and we're looking forward to when it rolls out (whenever that will be).