Cooley On Cars
Car Tech 101: Three layers of autonomous driving

The biggest tech story in cars for the next few decades will certainly be self-driving cars, but it won't happen in one step. CNET's Brian Cooley shows you the building blocks for the future of autonomous driving.
The first layer of this cake is what you might call V2E, vehicle to environment, where vehicles use onboard sensors like these to figure out where they are and what surrounds them. They're sort of on their own right now to read the world as they navigate it. Just count the array of sensors on Audi's current self-driver, which uses V2E technology to handle its own driving up to a full 40 miles per hour. Let's count up some of the sensors that make an Audi autonomous car of the future. First of all, around the bumpers, you've seen these before: sensors here that do sonar, and those are pretty conventional. Hidden behind these body panels, there are also radar devices doing the same kind of sensing we get from the radar up front. Now, in the windshield, a camera, an actual optical camera that we've seen before. Radomes sit left and right up here; you might think those are fog lights, but they are not. More sonar sensors here. The laser here is new in this technology demonstration. It's not in production yet, but it allows high-bandwidth, detailed 3D modeling of the shapes out in front of the car.

Here's what Audi has done. They've combined adaptive cruise control, which maintains distance and speed ahead of you, with active lane departure, which keeps the car in a lane using active steering. And they've added very sophisticated rear-sensing technology to monitor what's going on behind the vehicle. Put that together and you get complete perimeter awareness. Now, in a pure V2E approach, the vehicle's processors are making sense of a lot of data. You can see on that screen behind me how the car sees me, and that is not an easy bunch of bits to make sense of. That puts a lot of processing power in the car. In a lot of the past prototypes I've seen, that computing power fills the trunk of a self-driving car. But more recently, I'm seeing it fill up maybe a space the size of a shoebox.
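The "complete perimeter awareness" idea above can be sketched in a few lines: several sensor types each report a distance in some direction, and the car keeps the closest reading per sector. This is a minimal illustration, not Audi's actual fusion logic; the sector names, sensor labels, and distances are all invented for the example.

```python
# Hypothetical sketch of perimeter awareness: fuse distance readings from
# several sensor types (sonar, radar, camera, laser) into a single
# nearest-obstacle estimate per sector around the car.
# All names and numbers here are illustrative, not a real system's values.

def fuse_perimeter(readings):
    """readings: list of (sector, sensor_type, distance_m) tuples.
    Returns a dict mapping each sector to the closest reported distance."""
    perimeter = {}
    for sector, _sensor, dist in readings:
        if sector not in perimeter or dist < perimeter[sector]:
            perimeter[sector] = dist
    return perimeter

readings = [
    ("front", "radar",  22.0),
    ("front", "camera", 25.5),  # camera estimate, less precise
    ("front", "lidar",  21.4),  # the laser gives the tightest reading
    ("rear",  "sonar",   3.2),
    ("left",  "radar",   8.7),
]

print(fuse_perimeter(readings))
# The "front" sector keeps the closest of its three readings: 21.4 m.
```

Taking the minimum per sector is the simplest possible fusion rule; real systems weight sensors by confidence and track objects over time, which is part of why the processing hardware historically filled a trunk.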
The next layer of self-driving is V2V, vehicle to vehicle, where cars report their position and trajectory to each other. Ford recently showed us a very clear demo of how this moves the ball forward. The gray car behind me is about to blow a red light. The blue car can't see it because of that truck. But because of the gray car's vehicle-to-vehicle communication, it told the blue car's driver, "Something's wrong." He was able to hit the brakes, and nobody got into a collision. Here's another sphincter-tightening scenario we all know. This car has stopped. The car two vehicles behind it can't tell because of traffic in the middle. But thanks to vehicle-to-vehicle communication, the driver in the back gets a warning to brake, even though he couldn't have seen it humanly. Current in-vehicle sensors wouldn't have been able to help here either; only vehicle to vehicle can see through other cars.

Now, here's where the rubber, or RF as it were, hits the road. The shark fin antenna you've seen before is doing a lot more now. There's a flat antenna right here facing the sky, picking up GPS satellite coordinates. There's a vertical antenna here in the sail that is broadcasting information out to other cars within about a thousand-foot radius. It broadcasts an update of this car's speed and position about 10 times a second. The speed data comes from the car's own internal computers and data bus; that's been in vehicles for decades. Now, a key part of V2V that carmakers don't entirely control is setting up the standard language and protocols that all vehicles of all brands will use to talk to each other. That needs to be done in concert with regulators, and so far, DOT and NHTSA are kind of late. They'd promised some early guidelines by late 2013. As of right now, early 2014, we still haven't seen them, which brings us to V2I, vehicle to infrastructure.
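The broadcast described above, speed and position sent many times a second to any car within roughly a thousand feet, can be sketched as a simple message plus a range check. The field names and warning rule are invented for illustration; the real standard this gestures at is the DSRC basic safety message (SAE J2735), which this sketch does not implement.

```python
# Hypothetical sketch of a V2V safety broadcast: each car announces its
# position and speed; a receiver only "hears" senders inside the radio
# radius, and warns the driver if a nearby car is hard-braking.
# Field names, units, and thresholds are illustrative assumptions.

import math
from dataclasses import dataclass

@dataclass
class SafetyMessage:
    sender_id: str
    x_ft: float        # position in feet (flat-ground approximation)
    y_ft: float
    speed_mph: float
    braking: bool      # hard-brake flag set by the sender

def in_range(msg, my_x, my_y, radius_ft=1000.0):
    """True if the broadcast originated inside the radio radius."""
    return math.hypot(msg.x_ft - my_x, msg.y_ft - my_y) <= radius_ft

def should_warn(msg, my_x, my_y):
    """Warn the driver when a car within range reports hard braking."""
    return in_range(msg, my_x, my_y) and msg.braking

# The stopped car two vehicles ahead: 500 ft away and braking.
msg = SafetyMessage("gray_car", 400.0, 300.0, 0.0, braking=True)
print(should_warn(msg, 0.0, 0.0))
```

The key property the demo relies on is visible here: the warning depends only on the radio message, not on line of sight, which is why V2V can "see through" the truck that blocks the cameras and radar.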
That's the ultimate layer, where cars, roads, traffic signals, and network centers all talk to each other to really take self-driving to a rich level. Now, this stuff is pretty green, but it rolls up the concept of the car getting signals not just from other cars, but also from, let's say, a metro traffic control center or sensors in and around roads. It can manifest itself several ways. Traffic signal sequencing can be sent live to a car, so it knows the next signal and the ones after that, what they'll be showing, and when. This helps traffic move more quickly and use less fuel thanks to less stop and go. Congestion management: this is the vision where a central road authority can direct cars' connected nav systems during a commute to spread out from the main route and use alternate paths to alleviate congestion, and yet get everyone there in less time. Intersection management: like we saw in the Ford demo a moment ago, that can be accomplished via smart intersections that tell cars which vehicles they see approaching, as well as via V2V, sort of a digital return to the old days of a traffic cop standing in the middle of an intersection with white gloves on.

Okay, some reality checks. Vehicle to environment: that's already happening today. We test lots of cars at CNET where you see things like adaptive cruise control or active lane departure correction; that is basically V2E, and it's here in active form. Then you get to V2V. That's a little different. Vehicle to vehicle is not really on the market yet. Ford says the technology we showed you earlier can be retrofitted to cars in the near future if it's passive. If it's supposed to take over brakes, acceleration, and steering, that requires factory integration, and that's a little tougher. Finally, there's vehicle to infrastructure. This is the Big Dig, if you will. We have millions and millions of relatively dumb cars and dumb roads out there right now.
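The signal-sequencing idea, the car knowing what the next lights will show and when, can be sketched as a speed adviser: given the upcoming phase schedule and the distance to the light, pick a cruising speed that arrives during a green window. The phase-list format and function names are invented for this sketch; real deployments carry this information in SAE J2735 SPaT (signal phase and timing) messages.

```python
# Hypothetical sketch of V2I signal sequencing: the intersection sends the
# car its upcoming phase schedule, and the car picks a speed that reaches
# the light on green instead of stopping. Message format is illustrative.

def green_window(phases, now_s):
    """phases: list of (start_s, end_s, color). Return the next green
    window still open after now_s, or None if there isn't one."""
    for start, end, color in phases:
        if color == "green" and end > now_s:
            return (max(start, now_s), end)
    return None

def advise_speed(distance_m, phases, now_s, max_mps=15.0):
    """Slowest speed (capped at max_mps) that reaches the light on green."""
    window = green_window(phases, now_s)
    if window is None:
        return 0.0               # no green coming up; prepare to stop
    start, _end = window
    if start == now_s:
        return max_mps           # light is already green; proceed
    needed = distance_m / (start - now_s)
    return min(needed, max_mps)

# Light turns green in 20 s; we're 300 m away, so cruise at 15 m/s.
phases = [(0, 20, "red"), (20, 50, "green"), (50, 70, "red")]
print(advise_speed(300.0, phases, now_s=0))
```

This is the mechanism behind the fuel-savings claim in the transcript: a car that paces itself to green windows spends less time idling and re-accelerating than one that discovers each light on arrival.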
They're gonna have to be refreshed, and that means the current stock of both has to age out and be replaced.