No matter what any manufacturer may tell you, self-driving cars are not yet a thing. Your Tesla's Autopilot isn't infallible, nor is your ProPilot Assist or your Mercedes Distronic or Cadillac Super Cruise. These systems are designed to assist drivers, not replace them, but people are still tuning out on their commute, letting technology take the wheel, and they're crashing.
One of the most common kinds of advanced driver assistance system (ADAS) failures that result in collisions, sometimes fatal ones, goes like this: the system locks onto the car ahead, that car changes lanes quickly, and a stopped vehicle sits in the lane beyond it. These systems are usually programmed to ignore stationary objects, and even when one does recognize the stopped car, it may be too late to brake.
Thatcham Research, a UK-based insurance research organization, recently published a video showing exactly this scenario playing out multiple times in testing.
Part of the problem with these ADAS systems being misused is driver complacency. The systems work so well in most situations that drivers let their attention waver for longer and longer stretches. Jalopnik's Raphael Orlove wrote an excellent piece exploring this phenomenon, and it is absolutely worth your time to read it.
Marketing ties into this, and Tesla's Autopilot is a prime example. The name Autopilot implies that you can check out and let your incredible electric car handle the chore of driving for you, even though Tesla repeatedly tells customers to stay engaged behind the wheel, even with Autopilot on. It's a misleading name, something we've said many times.
Early in Waymo's self-driving car testing, the company found that human drivers were overly trusting of the vehicle's ability to govern itself and would repeatedly hesitate to intervene in situations that needed a human driver. This inspired Waymo to skip the lesser levels of autonomy entirely and work exclusively on Level 4 and Level 5 systems that require no human intervention, because it felt humans couldn't be trusted to use partial automation safely. That judgment is being borne out again and again with many of today's Level 2 ADAS systems on public roads.
The second major problem is the way emergency braking systems are designed to work. Ars Technica did a fantastic deep-dive on why adaptive cruise control (and, by extension, automatic emergency braking, since the two technologies are inextricably linked) ignores stationary objects.
Essentially, most adaptive cruise systems -- particularly early systems from the late 1990s -- use radar to measure the distance and closing speed between your vehicle and the vehicle in front of it. As Ars Technica points out, radar is excellent at determining how fast things are moving, but it is terrible at identifying what those things actually are, so the designers of these systems did the most straightforward thing and made them ignore stationary returns. That choice keeps the car from slamming on the brakes for every overhead sign, bridge, and guardrail, but it also means a stopped vehicle in your lane can look just like roadside clutter.
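The filtering described above can be sketched in a few lines of code. To be clear, this is a toy illustration, not any manufacturer's actual logic; the names, sign conventions, and thresholds here are invented for the example.

```python
# Toy sketch of why a simple radar-based adaptive cruise system
# discards stationary objects. All names and thresholds are
# hypothetical -- not any real manufacturer's implementation.

from dataclasses import dataclass

@dataclass
class RadarTarget:
    distance_m: float          # range to the reflection
    relative_speed_mps: float  # Doppler speed; negative means closing

def moving_targets(targets, ego_speed_mps, min_abs_speed_mps=2.0):
    """Keep only targets whose absolute speed suggests a moving vehicle.

    A stopped car returns relative_speed == -ego_speed, exactly like a
    bridge, sign, or guardrail would, so a simple system throws it away
    to avoid constant false braking events.
    """
    kept = []
    for t in targets:
        absolute_speed = ego_speed_mps + t.relative_speed_mps
        if abs(absolute_speed) >= min_abs_speed_mps:
            kept.append(t)
    return kept

ego = 30.0  # our speed, roughly 108 km/h
targets = [
    RadarTarget(distance_m=45.0, relative_speed_mps=-2.0),   # lead car doing ~28 m/s: kept
    RadarTarget(distance_m=80.0, relative_speed_mps=-30.0),  # stopped car: filtered out
]
print(moving_targets(targets, ego))  # only the slowly-closing lead car survives
```

Notice that the stopped car 80 meters ahead is the one the system discards, even though it is the real hazard; that is exactly the failure mode described in the testing video.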
The adaptive cruise and automatic emergency braking systems are typically not linked to other ADAS features like active lane-keeping assistance, so they lack outside reference and cannot deal with a stopped car that suddenly appears in their path. A driver paying proper attention, as they should, could likely prevent a crash with an emergency lane change or a hard manual application of the brakes, but increasingly, drivers aren't paying attention.
Are there instances where these driver assistance systems prevent collisions and save lives? Absolutely. We can't argue that the systems don't work as intended, but it does seem irresponsible to build and market these features without more effective ways to monitor a driver's attentiveness than merely sensing weight on the steering wheel.