Speaker 1: Google Tensor is the biggest mobile hardware innovation in the history of the company. It's the culmination of years of investment in AI, and Google's deep experience in silicon. The name is a nod to tensors, a building block of machine learning computation. The Tensor name also connects it to Google technologies like TensorFlow, our open-source AI and ML software library, and our tensor processing [00:00:30] units, which Google developed to power machine learning in our data centers. The Tensor chip is specifically designed to offer Google's latest advances in AI directly on a mobile device. This is an area where we've been held back for years, but now we're able to open a new chapter in AI-driven smartphone innovation. Tensor also gives us a hardware foundation that we'll be building on for years to [00:01:00] come, so you get the personal, helpful experiences you'd expect from a Google phone. Monica's here to explain what's so different about Tensor.
Speaker 2: Every couple of years, Google comes out with something that completely changes how people use technology in their lives. We started in Search and kept going with Google Translate, Google Photos, and Assistant, and these innovations are [00:01:30] built around our machine learning research. It's in Google's DNA and drives everything we do. While Google is known for groundbreaking work in ML, there's one place we haven't always been able to bring it, and that's the smartphone. Mobile chips simply haven't been able to keep pace with Google research, and rather than wait for them to catch up, we decided to make one ourselves. We needed a chip that was engineered to fulfill our vision of what should be possible on Pixel. So a few years ago, Google's [00:02:00] team of researchers came together to collaborate across hardware, software, and ML. The result of that work is Google Tensor. We approached Tensor differently.
Speaker 2: Every aspect of Tensor was designed and optimized to run Google's ML models. This permeates our entire chip. We're fortunate to have great insights when it comes to ML, and we built our chip based on where ML models are heading, not where they are today. Starting with the integrated ML engine, the TPU: it was custom-made by [00:02:30] Google Research, for Google Research. For the image signal processor, or ISP, we brought key algorithms directly into the silicon for power efficiency. Even our choices for CPU and GPU were designed to complement our ML to deliver advanced computational photography. The CPU cluster is a two-plus-two-plus-four configuration. It includes two big Arm X1 cores. The 20-core GPU also delivers a premium gaming experience for the most popular Android games. There's a context [00:03:00] hub that brings machine learning to the ultra-low-power domain. It enables ambient experiences like Now Playing and always-on display to run all the time without draining your battery.
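The two-plus-two-plus-four cluster described above can be sketched as a simple data structure. This is purely illustrative: the core counts and the big-core name follow the transcript, but the mid/little core labels are placeholders, not confirmed part names.

```python
# Illustrative sketch of the Tensor CPU cluster layout (2 + 2 + 4).
# Core counts come from the transcript; mid/little labels are
# placeholders, not Google's actual specifications.
cpu_clusters = {
    "big":    {"cores": 2, "example_core": "Arm Cortex-X1"},
    "mid":    {"cores": 2, "example_core": "(unspecified mid core)"},
    "little": {"cores": 4, "example_core": "(unspecified little core)"},
}

# Total core count across all three clusters.
total_cores = sum(cluster["cores"] for cluster in cpu_clusters.values())
print(total_cores)  # 8
```

The big/mid/little split is the standard trade-off in mobile CPU design: big cores for bursty foreground work, smaller cores for sustained background work at lower power.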
Speaker 2: Tensor was designed for total performance and efficiency when it comes to running Google experiences. It has to be really good at heterogeneous computing. Here's what that means: as software applications on mobile phones become more complex, they run on multiple parts of [00:03:30] the chip. This is heterogeneous computing. To get good performance for these complex applications, we make system-level decisions for the SoC. We ensure different subsystems inside Tensor work really well together, rather than optimizing individual elements for peak speeds. Peak CPU and GPU speeds look great in benchmarks, but they don't always reflect real-world user experience. Pixel 5 is a good example of our approach: Google software delivered a great experience, even on a chip that didn't [00:04:00] win on benchmarks. Don't get me wrong: Tensor's CPU and GPU are much faster compared to any past Pixel.
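The heterogeneous computing idea described above can be shown with a toy dispatcher: different kinds of work go to different subsystems on the chip. The routing table, task names, and function are hypothetical, invented for illustration; a real SoC scheduler weighs power, latency, and contention, not just task type.

```python
# Toy illustration of heterogeneous computing on an SoC: routing
# different kinds of work to different subsystems. The routing table
# and task names are hypothetical, not Google's actual scheduler.
ROUTING = {
    "ml_inference":    "TPU",          # integrated ML engine
    "image_pipeline":  "ISP",          # image signal processor
    "rendering":       "GPU",
    "ambient_sensing": "context_hub",  # ultra-low-power domain
}

def dispatch(task_type: str) -> str:
    """Pick a subsystem for a task; fall back to the general-purpose CPU."""
    return ROUTING.get(task_type, "CPU")

# A complex workload, e.g. computational photography, spans several
# subsystems at once rather than running on one block at peak speed:
pipeline = ["image_pipeline", "ml_inference", "rendering"]
print([dispatch(t) for t in pipeline])  # ['ISP', 'TPU', 'GPU']
```

This is the system-level view the speaker describes: overall performance depends on how well these subsystems cooperate on one workload, not on any single block's peak benchmark number.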