Speaker 1: The RTX 3090 Ti, a monster GPU: 40 shader teraflops, 78 RT teraflops, 320 tensor teraflops, and 24 gigabytes of 21-gigabit-per-second GDDR6X memory, the fastest ever. The RTX 3090 Ti: our next BFGPU. Tune in later this month for more details. Today, we are announcing the RTX 3050. The GeForce RTX 30 [00:00:30] 50 brings the performance and efficiency of the Ampere architecture to more gamers than ever before. It powers the latest games at over 60 frames per second. Based on our Ampere architecture, the 3050 comes equipped with second-generation RT Cores for ray tracing and third-generation Tensor Cores for DLSS and AI. For the first time, you can play ray-traced games on a 50-class GPU at over 60 frames per second. The RTX 3050 comes equipped with eight gigabytes of GDDR6 [00:01:00] memory and starts at just $249.
Speaker 1: It will be available worldwide on January 27th from all of our partners. Today, we are announcing the RTX 3080 Ti laptop GPU, bringing the flagship 80 Ti class of GPUs to laptops for the first time, featuring 16 gigabytes of the fastest GDDR6 shipped in a laptop. The RTX 3080 Ti delivers higher performance than the desktop TITAN RTX. [00:01:30] RTX 3080 Ti laptops start at $2,499. Introducing the RTX 3070 Ti: it's 70% faster than RTX 2070 SUPER laptops and delivers a hundred frames per second at 1440p. RTX 3070 Ti laptops start at $1,499, and laptops powered by both of these new GPUs will be available starting February 1st. For creators looking for the best laptop for your work, [00:02:00] we are also announcing new NVIDIA Studio laptops with the latest RTX GPUs. These laptops are seven times faster than the newest MacBook Pro 16 in 3D design, with RTX hardware-accelerated ray tracing, AI, and NVIDIA's high-performance video processor.
Speaker 1: They are the perfect tool for any creator workflow. Today, we are announcing the fourth generation of Max-Q technologies. Let me show you what we've done. On [00:02:30] laptops, power is shared between the GPU and CPU, so CPU efficiency is critical for maximizing performance. That's why we developed CPU Optimizer. We've worked with CPU vendors to create a new low-level framework, enabling the GPU to further optimize the performance, temperature, and power of next-generation CPUs. As a result, CPU efficiency is improved and power is transferred to the GPU for more [00:03:00] gaming performance. For creators and students who rely on compute-heavy apps like Adobe Premiere, Blender, or MATLAB, we've developed Rapid Core Scaling. It enables the GPU to sense the real-time demands of the application and use only the cores it needs. This frees up power that can be used to run the active cores at higher frequencies.
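The power-shifting idea behind Rapid Core Scaling can be sketched as a toy model: a fixed power budget is split among only the cores an application actually uses, so fewer active cores can each draw more power and run at higher frequencies. All names and numbers below are hypothetical, not NVIDIA's actual API or figures.

```python
# Toy model of the "rapid core scaling" idea: a fixed power budget is
# divided among only the active cores, so fewer active cores can each
# run at a higher power level (and therefore higher frequency).
# Hypothetical numbers for illustration only.

TOTAL_POWER_W = 80.0  # hypothetical total GPU power budget, in watts

def power_per_active_core(active_cores: int, total_cores: int = 16) -> float:
    """Split the fixed budget across active cores; idle cores draw ~0 W."""
    if not 0 < active_cores <= total_cores:
        raise ValueError("active_cores must be between 1 and total_cores")
    return TOTAL_POWER_W / active_cores

# With all 16 cores active, each gets 5 W; an app that only needs
# 4 cores lets each active core draw 20 W instead.
print(power_per_active_core(16))  # -> 5.0
print(power_per_active_core(4))   # -> 20.0
```

The point of the sketch is the design choice: rather than powering every core at a low level, sensing the workload lets the hardware concentrate the same budget on the cores doing useful work.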
Speaker 1: It delivers up to three times more performance for intensive creative work on the go. Battery Boost 2.0 has been totally rearchitected. [00:03:30] Now AI controls the whole platform, finding the optimal balance of GPU and CPU power usage, battery discharge, image quality, and frame rates, all in real time. The result is great playability on battery, with up to 70% more battery life. Today, we are announcing that NVIDIA Omniverse is out of beta and generally available to GeForce RTX Studio creators. Omniverse brings over 20 years of NVIDIA's groundbreaking work in graphics, [00:04:00] AI, simulation, and compute into a single platform to transform 3D workflows. Let me explain how it works. Omniverse connects independent 3D design worlds together into a shared virtual space. The connecting language of Omniverse is USD, or Universal Scene Description, which you can think of as the HTML of 3D worlds. Today, a 3D artist typically works sequentially across multiple applications, like 3ds Max for modeling, [00:04:30] then Substance Painter for texturing, and finally Unreal Engine to arrange the scene, exporting and importing large files many times along the way. With Omniverse, artists connect their apps and then compose the combined scene using Omniverse Create. Once in Omniverse, an artist can draw on NVIDIA's superpowers, like physics, which lets artists use true-to-reality simulations that obey the laws of physics, and the RTX renderer, [00:05:00] to see the scene in real time, fully ray traced or path traced. Omniverse also lets you collaborate with another artist from across the room or across the globe, connecting their favorite apps into a single shared scene.
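Because USD is a layered, text-representable scene description, the sequential export/import workflow described above can become composition instead: each application contributes its own layer to one shared stage. A minimal, hypothetical `.usda` fragment illustrating that idea (file names invented for the example) might look like:

```usda
#usda 1.0
(
    doc = "Hypothetical shared stage: each app contributes a sublayer"
    subLayers = [
        @modeling_3dsmax.usda@,
        @texturing_substance.usda@
    ]
)

def Xform "World"
{
    def Mesh "Robot"
    {
    }
}
```

Here the modeling and texturing layers are composed non-destructively, which is what allows two artists in different applications to edit the same scene at once.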
Speaker 1: Changes made by one designer are reflected back to the other artist, like working in a cloud-shared document, but in 3D. This is the future of 3D creation and how virtual worlds will be built. Today, we are announcing an even easier way for artists to [00:05:30] collaborate: Nucleus Cloud, now in early access. Nucleus Cloud simplifies Omniverse scene sharing: it's one click to collaborate, and your entire 3D scene is online. 3D marketplaces are now featuring Omniverse-ready assets; check out the collection of free assets now available in the Omniverse Launcher. Omniverse Machinima has been a big hit with creators who love to game. It lets you remix and recreate your own game cinematics with thousands [00:06:00] of game assets and environments. We are now adding MechWarrior 5 and Shadow Warrior 3 assets to the Machinima library. Audio2Face is a revolutionary AI-enabled app that easily animates a 3D face with just an audio track. We are now supporting blendshapes and direct export to Epic's MetaHuman.
Speaker 2: I'm here to talk about autonomous vehicles. It's perhaps the most intense AI challenge, but it's also one with the greatest benefits to society. We've [00:06:30] created this end-to-end automotive platform and modules so that our partners can use exactly what they need to speed time to market and build a product that can stay true to their brand. We have some partners that just buy our chips and core operating system while developing their own software applications. Other partners, like Mercedes-Benz, rely on us across the entire stack: from our self-driving software running on NVIDIA DRIVE computers in each car, to training AI models in the cloud, synthetic data generation, vehicle [00:07:00] validation, and testing of new features through simulation, which finally get pushed over the air into each Mercedes-Benz vehicle. Both the hardware and software must be comprehensively tested and validated to ensure they can handle the harsh conditions of daily driving, with the stringent safety and security needs of an automated vehicle.
Speaker 2: This is why NVIDIA has built and made open the DRIVE Hyperion platform, which specifies the high-performance computer and sensor architecture [00:07:30] that meets the safety requirements of a fully autonomous vehicle. Today we are in our eighth generation of Hyperion. It has been adopted by hundreds of automakers, truck makers, Tier 1 suppliers, and robotaxi companies. Hyperion 8 is designed with redundant DRIVE Orin computers, 12 state-of-the-art surround cameras, nine radars, 12 ultrasonics, one front-facing lidar, and three interior-sensing cameras. It's architected to be functionally safe, so that if one computer fails, [00:08:00] there's a backup available to ensure that the autonomous vehicle can drive its passenger to a safe place. NVIDIA has built DRIVE Sim Replicator, a synthetic data generator for autonomous vehicle development. Replicator helps our AI engineers build hard-to-label ground-truth data by synthetically generating it from virtual camera, lidar, and radar sensors within our Omniverse simulation platform.
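Why can a simulator label ground truth "for free"? Because it already knows every object's exact 3D pose, labels fall out of geometry rather than human annotation. A toy illustration (not the Replicator API; camera intrinsics and object sizes are invented) is projecting a known 3D bounding box through a virtual pinhole camera to produce a perfect 2D detection label:

```python
# Illustrative sketch of synthetic ground-truth labeling: the simulator
# knows the true 3D box of every object, so a 2D detection label can be
# computed exactly by projecting the box into a virtual pinhole camera.
# All intrinsics and geometry below are hypothetical toy values.
from dataclasses import dataclass

@dataclass
class Camera:
    fx: float  # focal lengths in pixels (hypothetical)
    fy: float
    cx: float  # principal point (hypothetical)
    cy: float

def project(cam: Camera, point: tuple) -> tuple:
    """Pinhole projection of a 3D point (camera frame, z > 0) to pixels."""
    x, y, z = point
    return (cam.fx * x / z + cam.cx, cam.fy * y / z + cam.cy)

def bbox_label(cam: Camera, corners_3d: list, class_name: str) -> dict:
    """Project the 8 corners of a 3D box and take their 2D extent."""
    pts = [project(cam, c) for c in corners_3d]
    us, vs = zip(*pts)
    return {"class": class_name,
            "bbox": (min(us), min(vs), max(us), max(vs))}

cam = Camera(fx=1000.0, fy=1000.0, cx=960.0, cy=540.0)
# A 2 m cube standing in for a "car", centered 10 m ahead of the camera.
corners = [(x, y, 10 + z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]
print(bbox_label(cam, corners, "car"))
```

The same trick extends to the cases mentioned next in the talk: occlusion, velocity, and cross-sensor tracking are all exactly known inside the simulation, so the generated labels are noise-free by construction.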
Speaker 2: In this way, engineers can train AI models even before any real [00:08:30] data has been collected, and Replicator can label ground truth in ways that humans cannot: tracking moving objects across sensors, velocity, distance, occlusion, and severe weather conditions. It's a really powerful tool for the AV developer. It's accurate, low cost, and it fills in gaps for data not easily found in the real world. But there's another, equally important computer needed for a next-generation software-defined vehicle, and we call it DRIVE [00:09:00] Concierge. With DRIVE Concierge, vehicle occupants have access to intelligent services that are always on, using NVIDIA Omniverse Avatar for real-time conversational AI.
Speaker 3: Welcome, Cheryl. Your calendar shows your CES hotel check-in for tonight. If we leave now, we can avoid traffic on 880. Perfect, drive me there.
Speaker 2: Omniverse Avatar connects speech AI, computer vision, natural language understanding, recommendation [00:09:30] engines, and simulation. The technology of Omniverse Avatar enables DRIVE Concierge to serve as everyone's digital assistant, helping them with recommendations, making reservations, safely using mobile devices, and providing alerts, like if a person is left behind in the vehicle. DRIVE Chauffeur and Concierge together can do really beautiful things. As an example, they can serve as a valet to let you out at the entrance of your destination and enable the car to search for a parking spot [00:10:00] on its own. And then when you're ready to leave, you simply summon your Concierge, who will ask your Chauffeur to drive your car back to you.