
Cars could learn to drive themselves from human behavior

At Nvidia's GPU Technology Conference for developers, CEO Jen-Hsun Huang explains how deep neural networks enable autonomous cars.

Wayne Cunningham Managing Editor / Roadshow

Nvidia CEO Jen-Hsun Huang shows off the Drive PX autonomous vehicle computer against a backdrop of the Project Dave car. Wayne Cunningham/CNET

Object recognition makes up one part of autonomous car technology, but then comes the question of what the car does in response to what it detects. At the keynote address for Nvidia's GPU Technology Conference for developers, CEO Jen-Hsun Huang said it wasn't a matter of programming if-then statements into the car, but of teaching the car behaviors that will help it respond to a wider variety of situations.

Huang's keynote address focused on how Nvidia's GPU technology can be used to build deep neural networks, which enable deep learning. One practical application of this technology, and one that could change how most people get around and how cities function, is the autonomous car.

Earlier this year at CES, Nvidia announced Drive PX, a development computer for autonomous cars that uses a deep neural network. Drive PX becomes available in May, and Audi has already committed to using this technology to develop autonomous cars.

Processing the visual world

With Nvidia's roots in graphics processors, Huang spent a good part of his keynote detailing how computers can learn to recognize objects from visual input, whether a still image or a video feed. For this process, Nvidia feeds millions of images, each tagged with the names of the objects it depicts, into a deep neural network. The network processes the images by breaking them down into patterns and textures.
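Nvidia hasn't published the code behind these demos, but the recipe Huang described is standard supervised training. Here is a minimal sketch using PyTorch purely for illustration; the tagged_images/ directory (one subfolder per object name) is a hypothetical stand-in for Nvidia's corpus of millions of tagged photos.

```python
# Minimal sketch of the training Huang described: feed tagged images into a
# deep neural network so it learns to associate patterns and textures with
# object names. "tagged_images/" is a hypothetical stand-in corpus, with one
# subfolder per object name.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((64, 64)),
    transforms.ToTensor(),
])

# ImageFolder infers each image's tag from its subfolder name.
dataset = datasets.ImageFolder("tagged_images/", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# A small convolutional network: the early layers pick up patterns and
# textures, and the final linear layer maps them to object names.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, len(dataset.classes)),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)  # penalize wrong tags
        loss.backward()
        optimizer.step()
```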

Nvidia Drive PX computer
Nvidia's Drive PX computer uses two Tegra X1 chips and could enable autonomous cars. Wayne Cunningham/CNET

When the network encounters an image it has not seen before, it breaks that image down in the same fashion, comparing its patterns and textures to those it has stored. Where those component parts match those of tagged images, the network can identify the objects in the new image.

This research has been going on for 50 years, and Huang pointed out how Nvidia's processors helped it take a leap forward in accuracy in 2012, when a deep neural network called AlexNet won the ImageNet recognition benchmark. Deep neural networks are now more accurate than humans at recognizing objects in images. In a demonstration on stage during the keynote address, Huang showed how a network could not only recognize a cat in an image, but also identify its breed.
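Huang's on-stage demo ran on Nvidia's own software stack, but the same recognition step can be tried with the pretrained AlexNet that ships with torchvision, whose ImageNet labels include fine-grained categories such as cat breeds. This is only a rough illustration; the input file cat.jpg is hypothetical.

```python
# Classifying a new image with a pretrained network, as a rough stand-in for
# the recognition step described above. torchvision's pretrained AlexNet is
# used for illustration; "cat.jpg" is a hypothetical input file.
import torch
from PIL import Image
from torchvision import models

weights = models.AlexNet_Weights.IMAGENET1K_V1
model = models.alexnet(weights=weights)
model.eval()

preprocess = weights.transforms()
image = preprocess(Image.open("cat.jpg")).unsqueeze(0)  # add batch dimension

with torch.no_grad():
    probabilities = model(image).softmax(dim=1)

# ImageNet's 1,000 categories include specific cat breeds, which is how a
# network can name not just "cat" but, say, "Egyptian cat".
labels = weights.meta["categories"]
top = probabilities.argmax(dim=1).item()
print(labels[top], probabilities[0, top].item())
```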

That type of object recognition will be necessary for cars of the future to recognize the wide array of things that will cross their paths in real-world environments.

Human reactions

Huang went further, proposing deep neural networks as a solution for deciding what a car does once it identifies objects in its environment. Rather than the simple driver-assistance features of today, such as forward-collision prevention that hits the brakes if it detects any object in a car's path, Huang said cars could use deep learning to cope with more complex situations.

As an example, Huang brought up Project Dave, a DARPA project in which experimenters taught a remote-controlled car to drive itself. Rather than giving the car a path to follow, or programming exactly what it should do when its sensors detected an object in its path, the team used videos of a human driving around an environment. The Project Dave car learned from watching the human avoid specific objects; when set loose in the same environment on its own, it matched those behaviors.
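Project Dave's actual training setup wasn't detailed in the keynote, but the approach it describes is what researchers call behavior cloning. The sketch below shows that general recipe under stated assumptions: the camera frames and recorded human steering angles are hypothetical stand-ins for data extracted from the driving videos.

```python
# Behavior cloning in the spirit of Project Dave: instead of programming
# if-then rules, train a network to map camera frames to the steering
# commands a human demonstrator used. The (frame, steering) pairs here are
# hypothetical stand-ins for data extracted from videos of human driving.
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.LazyLinear(1)  # one output: the steering angle

    def forward(self, frame):
        return self.head(self.features(frame))

model = SteeringNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Stand-in batch: 16 camera frames and the human's steering angle for each.
frames = torch.randn(16, 3, 66, 200)
human_steering = torch.randn(16, 1)

for step in range(100):
    optimizer.zero_grad()
    # The network is penalized for steering differently than the human did,
    # so it gradually imitates the demonstrated behavior.
    loss = loss_fn(model(frames), human_steering)
    loss.backward()
    optimizer.step()
```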

That deep learning technology gave the Project Dave car -- and could give future autonomous cars -- a more flexible idea of what to do when encountering an object.

For example, some of today's forward-collision warning systems will trigger when their sensors detect a dip in the road ahead, because they cannot distinguish it from an object. A car with a deep neural network would not only recognize a dip in the road for what it is, but would also have learned how a human driver responded to one. As such, it might choose merely to slow down a little, lessening the jolt to the car's passengers.

Instead of having engineers try to program if-then statements for every situation that may arise in the real world, an automaker could feed videos of real-world driving into the network, from which the car would learn appropriate behaviors for similar situations.
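The contrast is easy to see in code. The dip-in-the-road case above needs its own hand-written rule in the if-then approach, while the learned approach simply asks the trained network. Everything here is hypothetical: the sensor values and learned_policy, a network trained as in the earlier sketch.

```python
# Two ways to decide what the car does. The sensor readings and
# `learned_policy` (a network trained as in the earlier sketch) are
# hypothetical.

def hand_coded_response(object_ahead: bool, distance_m: float) -> str:
    # If-then approach: engineers must anticipate every case explicitly.
    # A dip in the road that sensors report as an object triggers hard braking.
    if object_ahead and distance_m < 30.0:
        return "brake hard"
    return "continue"

def learned_response(camera_frame, learned_policy):
    # Deep-learning approach: the trained network outputs a driving command
    # directly. Having seen humans ease over dips, it slows slightly instead
    # of slamming on the brakes.
    return learned_policy(camera_frame)
```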

Acknowledging the limits of even sophisticated deep neural networks, Huang pointed out that engineers could program in a set of hard limits to bolster learned behaviors, such as never hitting a solid object. Here we come close to Isaac Asimov's Three Laws of Robotics: autonomous cars could be programmed to never hit a human.
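Such limits would sit outside the learned network as a hard override, not as more training data. A minimal sketch of the idea, assuming a hypothetical lidar distance reading and the learned policy from the sketches above:

```python
# A hard-coded safety limit layered over a learned policy, in the spirit of
# "never hit a solid object". All names here are hypothetical.
MIN_SAFE_DISTANCE_M = 2.0

def drive(camera_frame, lidar_distance_m, learned_policy):
    command = learned_policy(camera_frame)  # learned behavior proposes an action
    # The programmed limit overrides learning: regardless of what the network
    # suggests, never continue toward a solid object that is too close.
    if lidar_distance_m < MIN_SAFE_DISTANCE_M:
        return "emergency brake"
    return command
```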