Speaker 1: Hey everyone, and thanks for joining us for Inside the Lab. We work on a lot of different technologies here at Meta, everything from virtual reality to designing our own data centers. And we are particularly focused on foundational technologies that can make entirely new things possible. And today we are going to focus on perhaps the most important foundational technology of our time: artificial intelligence. We're gonna share some breakthroughs in our AI research and some of [00:00:30] the problems that we need to solve as we build for the metaverse. The kinds of experiences that you'll have in the metaverse are beyond what is possible today. It's an immersive version of the internet. Instead of just looking at something on a screen, you're gonna actually feel like you're inside it, or right there present with another person. And that's going to require advances across a whole range of areas, from new hardware devices to software for building and exploring worlds.
Speaker 1: [00:01:00] And the key to unlocking a lot of these is advances in AI. So let's take a look at some of the challenges that we are working on. First, creating a new generation of assistants that will help us explore new worlds. Today, a lot of AI research is focused on understanding the physical world, but in the metaverse we're going to need AI that is built around helping people navigate virtual worlds, as well as our physical world [00:01:30] with augmented reality. And because these worlds will be dynamic and always changing, AI is going to need to be able to understand context and learn in the way that humans do. And when we have glasses on our faces, that will be the first time that an AI system will be able to really see the world from our perspective: see what we see, hear what we hear, and more. So the ability and expectation that we have for AI systems is going to be much higher.
Speaker 1: Now we [00:02:00] are already using simpler machine learning systems to parse information for us today. Every time you get a recommendation or search for something, or even take a photo on a phone, there is machine learning in the background. Computing is also becoming increasingly contextual. Instead of this static experience that's the same no matter where you are, the way that we use computers now adapts much more to what you're doing. And as devices have gotten better at understanding and anticipating what we want, they've also gotten more useful. [00:02:30] Now I expect that these trends will only increase in the future. The metaverse will consist of immersive worlds that you can create and interact with, with all the visual information that includes, like your position in 3D, your body language, facial gestures, and so on. And this is all from your first-person perspective, so you experience it and move through it as if you are really there.
Speaker 1: And all that adds up to a lot more input to be processed [00:03:00] and a lot more content to be generated. So we're gonna need help navigating all of this efficiently. And the work that we do to build this is gonna pave the way for assistants that can move between virtual and physical worlds too. A key part of this effort is building better models for richer and deeper communication between people and AI. So today we are announcing Project CAIRaoke, which is a fully end-to-end neural model for building on-device assistants. It combines [00:03:30] the approach behind BlenderBot with the latest in conversational AI to deliver better dialogue capabilities. And from there, to support true world creation and exploration, we need to advance well beyond the current state of the art for smart assistants. So we are working on two areas of AI research to make this possible: egocentric perception, which is about seeing worlds from a first-person perspective, and a whole new class of generative AI [00:04:00] models that help you create anything that you can imagine. Now here's an AI concept that we created called Builder Bot, which showcases this work. It enables you to describe a world, and then it will generate aspects of that world for you. So let's take a look at how this works. Hey Builder Bot, let's start with a scene. Let's go to a park.
Speaker 1: Actually, let's go to the beach. [00:04:30] Pretty good. Let's add some clouds. Huh, that's all AI generated. Actually, let's add some altocumulus clouds. All right. And let's add an island over there.
Speaker 2: That's cool. How about we add some trees out here by the sand. Let's get a picnic blanket [00:05:00] down here. Let's put up a table. Let's put out a stereo. Let's get some drinks as well. Let's get the sound of some waves and seagulls.
Speaker 1: Does that speaker work? Let's play some tropical music. And let's [00:05:30] add a hydrofoil. You gotta have a hydrofoil.
Speaker 2: You gotta teach me how to ride one in VR.