Speaker 1: Hey everyone, and thanks for joining us for Inside the Lab. We work on a lot of different technologies here at Meta, everything from virtual reality to designing our own data centers. And we are particularly focused on foundational technologies that can make entirely new things possible. And today we are going to focus on perhaps the most important foundational technology of our time: artificial intelligence. A lot of AI research is focused on understanding the physical world, but in [00:00:30] the metaverse we're going to need AI that is built around helping people navigate virtual worlds, as well as our physical world with augmented reality. And because these worlds will be dynamic and always changing, AI is going to need to be able to understand context and learn in the way that humans do. And when we have glasses on our faces, that will be the first time that an AI system will be able to really see the world from our perspective. Speaker 1: See what we see, hear what we hear, and more. [00:01:00] So the ability and expectation that we have for AI systems is going to be much higher. Now, we are already using simpler machine learning systems to parse information for us today. Every time you get a recommendation or search for something, or even take a photo on a phone, there is machine learning in the background. Computing is also becoming increasingly contextual. Instead of a static experience that's the same no matter where you are, the way that we use computers now adapts much [00:01:30] more to what you're doing. And as devices have gotten better at understanding and anticipating what we want, they've also gotten more useful. Now, I expect that these trends will only increase in the future. The metaverse will consist of immersive worlds that you can create and interact with, with all the visual information that includes, like your position in 3D space, your body language, facial gestures, and so on.
Speaker 1: And this is all from your first-person perspective, so you [00:02:00] experience it and move through it as if you are really there. And all of that adds up to a lot more input to be processed and a lot more content to be generated. So we're gonna need help navigating all of this efficiently. And the work that we do to build this is gonna pave the way for assistants that can move between virtual and physical worlds, too. A key part of this effort is building better models for richer and deeper communication between people and AI. So today we are announcing Project CAIRaoke, [00:02:30] which is a fully end-to-end neural model for building on-device assistants. It combines the approach behind BlenderBot with the latest in conversational AI to deliver better dialogue capabilities. And from there, to support true world creation and exploration, we need to advance well beyond the current state of the art for smart assistants. Speaker 1: So we are working on two areas of AI research to make this possible: egocentric perception, which [00:03:00] is about seeing worlds from a first-person perspective, and a whole new class of generative AI models that help you create anything that you can imagine. Now, here's an AI concept that we created called Builder Bot, which showcases this work. It enables you to describe a world, and then it will generate aspects of that world for you. So let's take a look at how this works. Hey, Builder Bot. First, let's start with the scene. Let's [00:03:30] go to a park. Actually, let's go to the beach. Pretty good. Let's add some clouds. Huh, that's all AI generated. Actually, let's add some altocumulus clouds. All right, and let's add an island over there. Speaker 2: [00:04:00] That's cool. How about we add some trees out here by the sand? Let's get a picnic blanket down here. Let's put up a table, let's put a stereo, let's get some drinks as well. Let's get the sound of some waves and seagulls. Speaker 1: Does that speaker work?
Let's play some tropical [00:04:30] music. And let's add a hydrofoil. You gotta have a hydrofoil. Speaker 2: You gotta teach me how to ride one in VR. Speaker 3: In the future, we aim to integrate our Project CAIRaoke model with augmented and virtual reality devices, enabling even more immersive and multimodal [00:05:00] interactions with AI assistants. Speaker 4: How's my pozole coming? Speaker 3: For example, your assistant could help you make your mom's delicious pozole, listing out ingredients as you need them and proactively guiding you through the recipe. Speaker 5: You already added salt to this recipe, and I noticed you are running low, so I've put in an order for more. Speaker 3: By combining augmented and virtual reality devices with our Project CAIRaoke model, we hope the future of conversational AI will be more personal and seamless. Speaker 6: Here [00:05:30] you go. Speaker 7: Thank you. Mom always liked this recipe spicy. What was the pepper she recommended? Speaker 5: Your mom used an Aeros pepper. Speaker 10: Hmm. Speaker 5: And don't forget to slice it really thinly, like she does. Great job, Jose. Is it smelling like the one your mom makes? Speaker 11: It smells like the real thing. Speaker 1: There are four basic pillars of our work on AI. First, there is foundational research, which is where [00:06:00] teams can do original and unbounded research to advance the state of the art. This is where we're looking to further understand and push the whole field forward. It's completely open: our researchers can work on whatever they want, and we publish a lot of our work so that anyone can access it. Second.
There's the AI for product team, which is about taking what we've learned and building it into products at scale. Third, responsible AI, which focuses on the implications [00:06:30] of technology and what it means to build responsibly, and is home to our teams who work on fairness and privacy-preserving AI. And fourth, AI infrastructure, which covers everything from our AI platform to our compute efforts to PyTorch, the leading open source machine learning framework that we developed, and which is used in tens of thousands of projects around the world. Now, in each of these areas, we have some pretty ambitious hiring goals. So Jerome, [00:07:00] do you wanna say more about the kind of opportunities that are open to folks who wanna work at Meta AI? Speaker 12: Thanks, Mark. We have opportunities everywhere in North America and in Europe for research scientists, software engineers, data scientists, designers, user researchers, and program managers. We have opportunities at all levels of the organization: inventing new algorithms to improve the experience for billions of our users, developing new best practices in responsible [00:07:30] AI, or creating completely new AI-powered experiences for users in augmented or virtual reality. Speaker 1: And speaking of opportunities, we are pretty excited about our future technology roadmap here, too. You know, if you're focused on this space, you probably saw that last month we announced we've designed and built our first supercomputer. And we think it's going to be one of the fastest supercomputers in the world, with almost five exaflops, or 5 billion billion operations per second. It's [00:08:00] a beast. Jerome, can you tell everyone here what kinds of tasks we're gonna be putting this supercomputer to work on? Speaker 12: Well, we're really excited about this one, Mark.
We wanna give AI researchers and developers the best environment possible so that they can come up with unique breakthroughs in AI and build awesome products powered by AI. So this AI supercomputer is a major step forward in this regard, with 16,000 GPUs that, by the end of this year, you'll [00:08:30] be able to use to train a single model. It will enable us to push the state of the art in scaling AI, keep making progress in self-supervised learning, and advance our efforts to create a unified world model that, as we have shown today, will unlock the metaverse. Speaker 1: All right, so thanks very much for tuning in today, everyone. I hope you enjoyed this look inside the lab. And if you're interested in pushing the state of the art in AI, whether that's building the next generation of assistants for the metaverse or [00:09:00] creating the universal language translator, I hope that you'll consider joining us on this journey and be part of building the future.
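[Editor's note: the performance figure quoted above uses two equivalent phrasings. An exaflop is 10^18 floating-point operations per second, which is what "billion billion" refers to. A minimal sketch checking that arithmetic; the 5-exaflop number itself is the talk's projection, not a measurement:]

```python
# Sanity-check the "almost five exaflops, or 5 billion billion
# operations per second" phrasing from the talk.
EXAFLOP = 10**18          # 1 exaflop = 10^18 floating-point ops per second
BILLION = 10**9

rsc_peak = 5 * EXAFLOP    # projected peak compute quoted in the talk

# "billion billion" = 10^9 * 10^9 = 10^18, so 5 exaflops is
# indeed 5 billion billion operations per second.
print(rsc_peak // (BILLION * BILLION))  # -> 5
```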