
Google's self-driving cars still need that human (driver's) touch

Autonomous vehicles aren't yet ready to hit the mainstream, the company's stats reveal.

Katie Collins Senior European Correspondent
Google's self-driving car still needs humans.


Google's self-driving cars have been trundling around the roads of Mountain View, California, pretty safely for well over a year, but they still need a human ready to take over at a moment's notice.

The California Department of Motor Vehicles released a report this week showing that Google's cars would have hit an object on at least 13 occasions had their human drivers not stepped in.

Of the 341 times humans had to take over the cars during testing in California in the 14 months leading up to November 2015, 272 were due to a failure in the autonomous technology installed in the pod-like vehicles, the report shows.

These failures mostly consisted of software or perception errors. In all circumstances where such a failure occurs, the test driver receives a warning from the car and takes manual control. Each incident is recorded and later replayed in a simulator to work out what went wrong. In only 13 of the recorded incidents would Google's bubble cars have crashed without human intervention, the simulator showed.

Self-driving cars are a priority for Google, which eventually hopes to create a business providing software to traditional car manufacturers. One of the main incentives for developing autonomous vehicles is to make roads safer by eliminating the possibility for crashes caused by human error. With companies including Toyota, Nissan and Google hoping to get self-driving cars on the roads within the next five years, the safety records of test vehicles are under intense scrutiny. If the aim is to create cars that can be driven with minimal human intervention, Google's track record may offer some encouragement.

Two of Google's near misses involved dodging traffic cones, and in three others test drivers took over to keep the car from being hit by another vehicle. When Google engineers later ran these scenarios through their simulator, they found that had the human drivers not intervened, the self-driving cars would indeed have been struck.

"These events are rare and our engineers carefully study these simulated contacts and refine the software to ensure the self-driving car performs safely," Google wrote in the report. A software "fix" is tested against many miles of simulated driving and then tested again on the road, the company explained. Once the fix has passed review, it is rolled out to the entire fleet.

As the total number of miles the fleet drives adds up, the number of failures it has to replicate in its simulator decreases, Google said. It added, though, that the failures occurred so rarely it was tricky to spot trends. The company's cars have driven 424,331 miles on California roads in total and haven't experienced a single near miss since April 2015.

Google seems to be having more success than automaker Nissan, which has also been using California roads to test its self-driving cars. Nissan's own report shows drivers had to take control of vehicles on 106 occasions, despite the cars driving only 1,485 miles.
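To see why that comparison favors Google, it helps to normalize both companies' figures by miles driven. The short sketch below does that back-of-envelope arithmetic using only the numbers reported above; the per-1,000-mile metric is our own framing for illustration, not one the DMV report itself prescribes.

```python
# Rough comparison of disengagement rates, using the figures in this
# article: Google reported 341 takeovers over 424,331 miles; Nissan
# reported 106 takeovers over 1,485 miles.
google_disengagements, google_miles = 341, 424_331
nissan_disengagements, nissan_miles = 106, 1_485

def per_thousand_miles(events: int, miles: int) -> float:
    """Events per 1,000 miles driven."""
    return events / miles * 1000

google_rate = per_thousand_miles(google_disengagements, google_miles)
nissan_rate = per_thousand_miles(nissan_disengagements, nissan_miles)

print(f"Google: {google_rate:.2f} disengagements per 1,000 miles")
print(f"Nissan: {nissan_rate:.2f} disengagements per 1,000 miles")
```

On these numbers, Google's rate works out to well under one takeover per 1,000 miles, while Nissan's is roughly seventy per 1,000 miles, which is the gap the paragraph above is describing.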