Imagine if you could search for anything and then have it pop into AR in your world, like the Mars Curiosity Rover. What if reality were browsable? Well, that's kind of what Google is doing here at Google I/O, with AR and with Lens. For instance, Google is introducing AR into Google Search, so you can find 3D objects on ARCore and ARKit iPhones and Android phones, and then pop those models into the real world at scale, like NASA's Curiosity Rover, which is right next to me. On the Lens side, Google is exploring all sorts of ways to browse the world, such as translating things like menus and paintings, finding ways to analyze them and pop information up on the fly for you to look at.

AR is being integrated into Google Search, both on Android and iOS. The way it works is that when you search for something, if there is a 3D object, it will appear, provided you have an ARCore- or ARKit-supported Android phone or iPhone. When you tap on it, it will bring up the 3D object, and then, if you want, launch it into the real world in AR. It's interesting because Google Search is kind of an extension of thought, and if you could get that seamless enough, maybe future AR experiences or headsets could conjure things on the fly as needed. Google is working with a few partners right now, including NASA, which has the Curiosity Rover, and certain things will pop up in Google's Knowledge Graph, like this tiger, complete with animation and sound.

Google Lens is another spin on AR. Google's been exploring ways in which cameras can analyze the real world, kind of like what we thought Google Glass was going to do way back in 2013. Lens can capture and analyze what it sees and bring up information. The new spins to Google Lens add even faster and more AR-like experiences to that app, both for Android and iOS. [CROSSTALK] Google Lens' new features are aiming to make the real world more browsable.
So, for instance, Google's got menus that can now pop up with highlights of commonly ordered dishes. You could translate something and have it actually map to the object, so that you can move around in space and see the language stick to whatever it's on. [MUSIC] Google's also exploring ways to bring up Harry Potter-like experiences in photos or in posters, so that you can look at one and have it magically come alive. Or in museums, it's exploring ways that Lens can pop up information in location-specific instances, for instance at the de Young Museum, bringing up curated information on paintings that's different from what Google Search would provide.

One of the most interesting uses of Lens is on low-end phones. Google's putting Translate into Google Lens on Android One devices, and has been exploring this already in tests in Indonesia and India. It uses very low bandwidth to translate on the fly, based on a snapshot, so you could actually translate what you're looking at, but also have it read back: "For your safety, use a low light during the day." [SOUND] [FOREIGN] This could be useful not only for translation, but for people who need visual assistance.

[MUSIC] Google Lens is supposed to work automatically when you launch it, but Google's also launching a number of filters that will help it interpret things in specific instances, for instance shopping, dining, and translating. Shopping can be useful if, say, you're looking at a plant: auto mode could analyze what it is, but shopping mode will actually help you find out where to buy that plant.

AR has entered Google Search, and AR is increasingly in Google Lens. Will those start to dovetail and intertwine? At Google I/O, it looks like there's a lot of exploration of what that future would be. Right now, it's on the phone. Google does not have a headset yet, but at some point it might, and this looks like the beginning of that AI magic glue.