The Next Big Thing: Gesture Control in Cars

Buttons, knobs, touchscreens and voice commands are all making room for the next way we'll control our cars.
Gesturing at your car, even when it's not broken down. I'm Brian Cooley from CNET, in search of the next big thing. One of the most interesting trends I've seen lately at the Consumer Electronics Show in Las Vegas and the North American International Auto Show in Detroit is a wholesale move to at least trying gesture control in cars.

Take BMW, for example. They're looking at a technology that would mount up in the headliner or in the sunroof and look down at your hand, figuring out where it is and what coarse gesture it's making. A rotating motion, for example, could simulate or replace turning a volume knob or a fan control. Volkswagen's technology flips that idea on its head, putting the sensor down in the console, behind the shifter, looking up. It can tell which way your hand is moving and whether it's flat or bladed, and it picks up push gestures that indicate a selection. More futuristically, Mercedes' F015 concept car sees a world where the immersive infotainment system for the passengers would be gesture controlled. And Audi, even a couple of years ago, was showing us a technology that would turn the entire windshield into a three-zone head-up display and use gesture control to click and slide panels back and forth between driver and passenger.

Now, notice all these gesture control technologies are going after a fairly coarse set of movements and controls. I don't see anyone out there yet thinking you're going to write out a complicated navigation address in mid-air with gestures. That still seems best left to hopefully improved voice commands, touchpads, or the good old-fashioned touchscreen. But what I want to see from gesture control is proof that it's a better way of controlling things, not just a different one. So here's the three-part litmus test. First, gesture control has to be positive. In other words, it has to work the first time, not require two or three attempts like voice command often does.
Second, it has to be affirmative. That means I shouldn't have to glance away all the time to see whether it got what I gestured for. And third, it has to be context-sensitive: smart enough to ignore my motions when I'm just talking to a friend or reaching for my coffee, and then to monitor my motions and interpret them when it does matter and I am trying to communicate something. Know what's next at CNET.com/NextBigThing. I'm Brian Cooley.